US20080136826A1 - PC-based computing system employing a silicon chip with a routing unit to distribute geometrical data and graphics commands to multiple GPU-driven pipeline cores during a mode of parallel operation - Google Patents

PC-based computing system employing a silicon chip with a routing unit to distribute geometrical data and graphics commands to multiple GPU-driven pipeline cores during a mode of parallel operation

Info

Publication number
US20080136826A1
US20080136826A1 (Application US11/977,718)
Authority
US
United States
Prior art keywords
gpu
computing system
based computing
graphics
pipeline cores
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/977,718
Inventor
Reuven Bakalash
Offir Remez
Efi Fogel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lucid Information Technology Ltd
Google LLC
Original Assignee
Lucid Information Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lucid Information Technology Ltd filed Critical Lucid Information Technology Ltd
Priority to US11/977,718
Publication of US20080136826A1
Assigned to LUCID INFORMATION TECHNOLOGY, LTD. (assignors: BAKALASH, REUVEN; FOGEL, EFI; REMEZ, OFFIR)
Assigned to GOOGLE LLC (assignor: LUCIDLOGIX TECHNOLOGY LTD.)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1423 Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F 9/38 Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F 9/3885 Concurrent instruction execution, e.g. pipeline or look ahead using a plurality of independent parallel functional units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/505 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2210/00 Indexing scheme for image generation or computer graphics
    • G06T 2210/52 Parallel processing
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2300/00 Aspects of the constitution of display devices
    • G09G 2300/04 Structural and physical details of display devices
    • G09G 2300/0421 Structural details of the set of electrodes
    • G09G 2300/0426 Layout of electrodes and connections
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2310/00 Command of the display device
    • G09G 2310/02 Addressing, scanning or driving the display screen or processing steps related thereto
    • G09G 2310/0224 Details of interlacing
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2360/00 Aspects of the architecture of display systems
    • G09G 2360/06 Use of more than one graphics processor to process data before displaying to one or more screens
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G 5/363 Graphics controllers

Definitions

  • FIG. 1A shows a block diagram of a conventional graphic system as part of a PC architecture, comprising a CPU ( 111 ), system memory ( 112 ), an I/O chipset ( 113 ), a high-speed CPU-GPU bus ( 114 ) (e.g. PCI express 16×), a video (graphic) card ( 115 ) based on a single GPU, and a display ( 116 ).
  • the single GPU graphic pipeline decomposes into two major parts: a geometry subsystem for processing 3D graphics primitives (e.g. polygons) and a pixel subsystem for computing pixel values. These two parts are consistently designed for increased parallelism.
  • the graphics databases are regular, typically consisting of a large number of primitives that receive nearly identical processing; therefore the natural concurrency is to partition the data into separate streams and to process them independently.
  • image parallelism has long been an attractive approach for high-speed rasterization architectures, since pixels can be generated in parallel in many ways.
  • An example of a highly parallel Graphic Processing Unit chip (GPU) in the prior art is depicted in FIG. 2A (taken from the 3D Architecture White Paper by ATI).
  • the geometry subsystem consists of six (6) parallel pipes while the pixel subsystem has sixteen (16) parallel pipes.
  • the “converge stage” 221 between these two subsystems is very problematic as it must handle the full data stream bandwidth.
  • the multiple streams of transformed and clipped primitives must be directed to the processors doing rasterization. This can require sorting primitives based on spatial information while different processors are assigned to different screen regions.
  • a second difficulty in the parallel pixel stage is that the ordering of data may change as those data pass through parallel processors. For example, one processor may transform two small primitives before another processor transforms a single, large one. Certain global commands, such as commands to update one window instead of another, or to switch between double buffers, require that data be synchronized before and after the command. This converge stage between the geometry and pixel stages restricts the parallelism in a single GPU.
  • a typical technology for increasing the level of parallelism employs multiple GPU cards, or multiple GPU chips on a card, where the rendering performance is further improved, beyond the converge limitation of a single-core GPU.
  • This technique is practiced today by several academic research efforts (e.g. the Chromium parallel graphics system by Stanford University) and commercial products (e.g. SLI—a dual GPU system by Nvidia, and Crossfire—a dual GPU system by ATI).
  • FIG. 3 shows a commercial dual GPU system, Asus A8N-SLI, based on Nvidia SLI technology.
  • FIG. 2C indicates typical bottlenecks in a graphic pipeline that breaks down into segmented stages of bus transfer, geometric processing and fragment fill bound processing.
  • a given pipeline is only as strong as its weakest stage; thus the main bottleneck determines overall throughput.
  • pipeline bottlenecks stem from: ( 231 ) geometry, texture, animation and meta data transfer, ( 232 ) geometry data memory limits, ( 233 ) texture data memory limits, ( 234 ) geometry transformations, and ( 235 ) fragment rendering.
  • a primary object of the present invention is to provide a novel method of and apparatus for high-speed graphics processing and display, which avoid the shortcomings and drawbacks of prior art apparatus and methodologies.
  • Another object of the present invention is to provide a novel graphics processing and display system having multiple graphics cores with unlimited graphics parallelism, getting around the inherent converge bottleneck of a single GPU system.
  • Another object of the present invention is to provide a novel graphics processing and display system which ensures the best graphics performance, eliminating the shortcomings of a multi-chip system: the restricted bandwidth of inter-GPU communication, mechanical complexity (size, power, and heat), redundancy of components, and high cost.
  • Another object of the present invention is to provide a novel graphics processing and display system that has an amplified graphics processing and display power by parallelizing multiple graphic cores in a single silicon chip.
  • Another object of the present invention is to provide a novel graphics processing and display system that is realized on a silicon chip having a non-restricted number of multiple graphic cores.
  • Another object of the present invention is to provide a novel graphics processing and display system that is realized on a silicon chip which utilizes a cluster of multiple graphic cores.
  • Another object of the present invention is to provide a novel graphics processing and display system that is realized on a silicon chip having multiple graphic cores or pipes (i.e. a multiple-pipe system-on-chip, or MP-SOC) and providing architectural flexibility to achieve the advanced parallel graphics display performance.
  • Another object of the present invention is to provide a novel graphics processing and display system that is realized on a silicon chip having multiple graphic cores, and adaptively supporting different modes of parallelism within both its geometry and pixel processing subsystems.
  • Another object of the present invention is to provide a novel graphics processing and display system that is realized on a silicon chip having multiple GPU cores, and providing adaptivity for highly advanced graphics processing and display performance.
  • Another object of the present invention is to provide a novel graphics processing and display system and method, wherein the graphic pipeline bottlenecks of vertex (i.e. 3D polygon geometry) processing and fragment processing are transparently and intelligently resolved.
  • Another object of the present invention is to provide a method and system for an intelligent decomposition of data and graphic commands, preserving the basic features of graphic libraries as state machines and tightly adhering to the graphic standard.
  • Another object of the present invention is to provide a new PCI graphics card supporting a graphics processing and display system realized on a silicon chip having multiple graphic cores, and providing architectural flexibility to achieve the best parallel performance.
  • Another object of the present invention is to provide a computing system having improved graphics processing and display capabilities, employing a graphics card having a silicon chip with multiple graphic cores, and providing architectural flexibility to achieve the best parallel performance.
  • Another object of the present invention is to provide such a computing system having improved graphics processing and display performance required by applications including video-gaming, virtual reality, scientific visualization, and other interactive applications requiring or demanding photo-realistic graphics display capabilities.
  • FIG. 1A is a schematic representation of a prior art, standard PC architecture, in which its conventional single GPU graphic card is shown circled;
  • FIG. 1B is a simplified block diagram of a prior art conventional graphics system employing a single GPU, having geometry and pixel processing subsystems, wherein the data converge stream therebetween presents a serious system bottleneck that significantly limits performance;
  • FIG. 2A is a schematic diagram illustrating high parallelism in a typical prior art ATI X800 Graphic Processing Unit chip (GPU), wherein the geometry subsystem consists of 6 parallel pipes and the pixel subsystem consists of 16 parallel pipes;
  • FIG. 2B is a schematic diagram of the internal portion of a prior art graphic processing unit (GPU) chip (e.g. ATI X800) illustrating the bottlenecking converge stage (setup engine) between geometric and pixel parallel engines therein;
  • FIG. 2C is a schematic representation of a conventional graphics pipeline, illustrating the data bottleneck problem existing therein;
  • FIG. 3 is a photograph of a prior art dual GPU-driven video graphics card
  • FIG. 4A is a schematic system block diagram representation of a computing system employing a printed circuit graphics card employing the multiple-pipe system-on-chip (MP-SOC) device in accordance with the principles of the present invention, wherein the system block diagram shows the CPU, I/O chipset, system memory, graphic card based on MP-SOC, and display screen(s);
  • FIG. 4B is a schematic representation of the physical implementation of the MP-SOC of the present invention, mounted on a printed circuit (PC) video graphics board;
  • FIG. 4C is a photograph of a standard PCI express graphics slot on a motherboard to which MP-SOC-based PC graphics board of the present invention is interconnected;
  • FIG. 4D is a schematic representation of an exemplary MP-SOC silicon-layout including four GPU-driven pipeline cores according to the principles of the present invention
  • FIG. 4E is a schematic representation of an exemplary packaging of the MP-SOC chip of the present invention.
  • FIG. 5 is a schematic block diagram of the MP-SOC architecture, according to the illustrative embodiment of the present invention.
  • FIG. 6 is the software block diagram of the MP-SOC based computing system, according to the illustrative embodiment of the present invention.
  • FIG. 7A is a schematic block diagram further illustrating the modules that comprise the multi-pipe software drivers of the MP-SOC based system of the illustrative embodiment of the present invention.
  • FIG. 7B is a flow chart illustrating the steps carried out by the mechanism that runs the three parallelization modes (i.e. Object Division, Image Division and Time Division) within the MP-SOC-based devices and systems of the present invention.
  • FIG. 8 is a schematic representation illustrating the object-division configuration of the MP-SOC system of the present invention.
  • FIG. 9 is a schematic representation illustrating the image-division configuration of the MP-SOC system of the present invention.
  • FIG. 10 is a schematic representation illustrating the time-division configuration of the MP-SOC system of the present invention.
  • FIG. 11 is a flowchart illustrating the process for distributing polygons between multiple GPU-driven pipeline cores along the MP-SOC-based system of the present invention.
  • FIG. 12 shows an example of eight (8) GPU-driven pipeline cores arranged as a combination of parallel modes, in accordance with the principles of the present invention.
  • WO 2004/070652 A2, incorporated herein by reference, teaches the use of an image compositing mechanism based on associative decision making, to provide fast and inexpensive re-compositing of frame buffers as part of Object Division parallelism.
  • the benefits of this novel alternative approach include VLSI-based miniaturization of multi-GPU clusters, high bandwidth of inter-GPU communication, lower power and heat dissipation, no redundancy of components, and low cost. Details on practicing this alternative approach will now be described below.
  • the present invention disclosed herein teaches an improved way of and a means for parallelizing graphics functions on a semiconductor level, as a multiple graphic pipeline architecture realized on a single chip, preferably of monolithic construction.
  • This system “on a silicon chip” comprises a cluster of GPU-driven pipeline cores organized in a flexible topology, allowing different parallelization schemes. Theoretically, the number of pipeline cores is unlimited, restricted only by silicon area considerations.
  • the MP-SOC is driven by software driver modes, which are resident on the host CPU.
  • the variety of parallelization schemes enables performance optimization. These schemes are time, image and object division, and derivatives thereof.
  • the illustrative embodiment of the present invention enjoys the advantages of a multi-GPU chip, namely bypassing the converge limitation of a single GPU, while at the same time avoiding the inherent problems of a multi-GPU system, such as restricted bandwidth of inter-GPU communication, mechanical complexity (size, power, and heat), redundancy of components, and high cost.
  • the physical graphic system of the present embodiment comprises a conventional motherboard ( 418 ) and an MP-SOC based graphic card ( 415 ).
  • the motherboard carries the usual set of components, which are the CPU ( 411 ), system memory ( 412 ), I/O chipset ( 413 ), and other non-graphic components as well (see FIG. 1A for the complete set of components residing on a PC motherboard).
  • the printed circuit graphic card based on the MP-SOC chip ( 416 ) connects to the motherboard via a PCI express 16× lanes connector ( 414 ).
  • the card also has an output to at least one screen ( 416 ).
  • the MP-SOC graphic card replaces the conventional single-GPU graphic card on the motherboard.
  • FIG. 4B shows a possible physical implementation of the present invention.
  • a standard form PC card ( 421 ) on which the MP-SOC ( 422 ) is mounted connects to the motherboard ( 426 ) of the host computing system, via PCI express 16 ⁇ lanes connector ( 423 ).
  • the display screen is connected via a standard DVI connector ( 424 ). Since the multiple pipelines on the MP-SOC are anticipated to consume high power, for which the standard supply via the PCI express connector is not adequate, auxiliary power is supplied to the card via a dedicated power cable ( 425 ).
  • FIG. 4C shows the PCI express connector ( 431 ) on a motherboard to which a MP-SOC based card connects. It should be emphasized that the standard physical implementation of MP-SOC on a PC card makes it an easy and natural replacement of the prior art GPU-driven video graphics cards.
  • FIGS. 4D and 4E describe an artist's concept of the MP-SOC chip to further illustrate a physical implementation of the semiconductor device.
  • FIG. 4D shows a possible MP-SOC silicon layout.
  • FIG. 4E shows possible packaging and appearance of the MP-SOC chip.
  • this chip, along with other peripheral components (e.g. memory chips, bus chips, etc.), is intended to be mounted on a standard sized PCB (printed circuit board) and used as the sole graphic card in a PC system, replacing prior art video graphics cards. Production of MP-SOC based cards can be carried out by graphic card manufacturers (e.g. AsusTech, Gigabyte).
  • the multi-pipe-SOC architecture consists of the following components:
  • the software of the system comprises the graphic application, graphics library (e.g. graphic standards OpenGL or DirectX), and proprietary soft driver (multi-pipe driver).
  • FIG. 7 shows a functional block diagram presenting the main tasks of the multi-pipe driver, according to an embodiment of the present invention.
  • the multi-pipe driver carries out at least the following actions:
  • a major feature of the present invention is its topological flexibility, which enables the relief of performance bottlenecks. Such flexibility is gained by rearranging the cluster of graphics pipelines by means of the routing center and different merging schemes at the compositing unit. Different parallelization schemes affect different performance bottlenecks; therefore bottlenecks identified by the profiling module can be cured by utilizing the corresponding parallelization scheme.
  • the flowchart of FIG. 7B describes the mechanism that runs the three parallel modes: Object Division, Image Division and Time Division.
  • the mechanism combines the activity of soft driver modules with MP-SOC units.
  • the cycle of the flowchart is one frame.
  • the mode to begin with is Object Division (OD), since it is the preferred parallel mode, as will be explained hereinafter.
  • the profiling and analysis of the application is constantly on, under control of the soft Profile and Analysis module (S-PA). Every frame, the Parallel Policy Management (S-PPM) module checks for the optimal mode, choosing among the three parallelization modes.
  • the Distributed Graphic Functions Control (S-DGFC) module configures the entire system for OD, characterized by distribution of geometric data and the compositing algorithm in use. This configuration is shown in FIG. 8 , and described in detail later on.
  • the S-DGFC module decomposes the geometric data into partitions, each sent by the Routing unit (C-RC) to a different GPU-driven pipe core (C-PC) for rendering.
  • the rendered stream of data is monitored by the State Monitoring (S-SM) module for blocking commands, as shown in FIG. 11 , and described in great detail hereinafter.
  • the left path in the flowchart is Image Division (ID) operation.
  • the ID configuration, as set by the S-DGFC, is also shown in FIG. 9 , and described later in greater detail. It is characterized by broadcasting of the same data among all pipe cores, and by an image-based compositing algorithm. The partitioning of the image among the pipe cores is done by the S-DGFC. The data is broadcast by the Routing Center and then rendered at the pipe cores (C-PC), each of which is designated a different portion of the image. Upon completion of rendering, the C-Ctrl moves the partial FBs to the compositing unit (C-CU) for reconstruction of the complete image. Then the C-DI moves the FB to the display. Finally, the change test is performed by the S-PA and S-PPM modules. Depending on the result, the next frame either continues in ID mode or switches to another mode.
  • the Time Division mode alternates frames among the GPU-driven pipe cores. It is set for alternation by the S-DGFC module, with each core designated a frame's data by the S-DGFC and delivered by the C-RC unit. Each core (C-PC) generates a complete frame in turn. Then the C-Ctrl moves the finished FB via the compositing unit to the Display Interface, and out to the display; in this mode the compositing unit acts merely as a pass-through. Finally there is a change-mode test by the S-PA and S-PPM modules, the same as in the other modes.
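The per-frame control cycle described above (FIG. 7B) can be summarized in code. The sketch below is illustrative only: the objects standing in for the S-PA, S-PPM and S-DGFC modules and the MP-SOC hardware are hypothetical, and the decision thresholds are placeholders rather than values from the disclosure.

```python
from enum import Enum
from dataclasses import dataclass

class Mode(Enum):
    OBJECT_DIVISION = "OD"   # the preferred starting mode
    IMAGE_DIVISION = "ID"
    TIME_DIVISION = "TD"

@dataclass
class FrameProfile:
    geometry_load: float     # fraction of frame time in vertex processing
    fill_load: float         # fraction of frame time in fragment fill
    cpu_bound: bool          # frame limited by the CPU/driver side

def choose_mode(p: FrameProfile) -> Mode:
    """Toy stand-in for the S-PPM per-frame change test (thresholds are arbitrary)."""
    if p.fill_load > 0.7 and p.geometry_load < 0.3:
        return Mode.IMAGE_DIVISION
    if not p.cpu_bound and p.fill_load < 0.3 and p.geometry_load < 0.3:
        return Mode.TIME_DIVISION
    return Mode.OBJECT_DIVISION

def run(frames, mp_soc, s_dgfc, s_pa):
    """One loop iteration per frame, as in the FIG. 7B flowchart."""
    mode = Mode.OBJECT_DIVISION                 # the mode to begin with
    for frame in frames:
        s_dgfc.configure(mp_soc, mode)          # distribution + compositing scheme
        mp_soc.render_and_display(frame)        # C-RC -> C-PC -> C-CU -> C-DI
        mode = choose_mode(s_pa.profile())      # decide the mode for the next frame
```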
  • FIG. 8 describes the object-division parallelization scheme.
  • the soft driver, and specifically the Distributed Graphic Functions Control module, breaks down the polygon data of a scene into N partial streams, where N is the number of participating pipeline cores.
  • the entire data set is sent, by the GPU Drivers module, to the MP-SOC Routing Center, which distributes the data to the N pipeline cores for rendering according to the soft driver's partition, each partition containing approximately 1/N of the polygons.
  • Rendering in the pipeline cores is done under the monitoring of the State Monitoring module of the soft driver ( FIG. 11 and detailed description below).
  • the resultant full frame buffers are gathered in the Compositing Unit, where they are depth-composited pixel by pixel to find the final set of visible pixels; at each x-y coordinate, all hidden pixels are eliminated by the compositing mechanism.
  • the final frame buffer is moved out to the display.
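As a concrete illustration of the object-division path, the sketch below partitions polygon blocks into N streams and depth-composites the N resulting full frame buffers. It is a minimal NumPy model of the behavior described above, not the patent's hardware compositing unit; the round-robin designation and the buffer layouts are assumptions.

```python
import numpy as np

def partition_blocks(blocks, n_cores):
    """Split a scene's polygon blocks into N roughly equal partial streams,
    keeping each block intact."""
    streams = [[] for _ in range(n_cores)]
    for i, block in enumerate(blocks):
        streams[i % n_cores].append(block)
    return streams

def depth_composite(color_buffers, depth_buffers):
    """Merge N full frame buffers: at every x-y coordinate keep the pixel with
    the smallest depth value, eliminating all hidden pixels."""
    colors = np.stack(color_buffers)        # (N, H, W, 3)
    depths = np.stack(depth_buffers)        # (N, H, W)
    nearest = np.argmin(depths, axis=0)     # index of the visible fragment per pixel
    h, w = nearest.shape
    yy, xx = np.mgrid[0:h, 0:w]
    return colors[nearest, yy, xx]          # (H, W, 3) final image
```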
  • FIG. 9 describes the image-division parallelization scheme, which is chosen by the Parallelism Policy Management module as a result of profiling, analysis, and decision making in the Profiling and Analysis module of the soft driver.
  • Each pipeline core is designated a unique 1/N part of the screen.
  • the complete polygon data is delivered to each of the pipeline cores via the GPU Driver module and the Routing Center.
  • the parallel rendering in the pipeline cores results in a partial frame buffer at each core.
  • the image segments are moved to the Compositing Unit for 2D merging into a single image and moved out to the display.
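A minimal sketch of the image-division path follows: each core owns a horizontal band of roughly 1/N of the screen, and the 2D merge simply copies each band into place. Band-based partitioning and full-size partial buffers are simplifying assumptions, not details taken from the patent.

```python
import numpy as np

def split_screen(height, n_cores):
    """Designate each pipeline core a unique horizontal band (about 1/N of the screen)."""
    edges = np.linspace(0, height, n_cores + 1, dtype=int)
    return list(zip(edges[:-1], edges[1:]))          # (y_start, y_end) per core

def merge_image(partial_buffers, bands, width, height):
    """2D merge of the partial frame buffers into a single image."""
    frame = np.zeros((height, width, 3), dtype=np.uint8)
    for (y0, y1), part in zip(bands, partial_buffers):
        frame[y0:y1] = part[y0:y1]                   # each core rendered only its band
    return frame
```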
  • FIG. 10 describes the time-division parallelization scheme, which is chosen by the Parallelism Policy Management module as a result of profiling, analysis, and decision making in the Profiling and Analysis module of the soft driver.
  • the Compositing unit functions here as a simple switch, alternating the access to the Display among all the pipeline cores.
  • the profiler identifies problem areas within the graphics system which cause bottlenecks. It is implemented in the Application Profiling and Analysis module of the driver.
  • the profiler module requires such inputs as the usage of graphic API commands (e.g. OpenGL, DirectX, or other), memory speed, memory usage in bytes, total pixels rendered, geometric data entering rendering, frame rate, workload of each GPU, load balance among GPUs, volumes of transferred data, texture count, and depth complexity.
  • the performance data is retrieved on a per-frame basis; however, the periodicity can also be a configuration attribute of the profiler, or can be set based on a configuration event which the profiler is designed to detect before retrieving performance data.
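The profiler inputs listed above map naturally onto a per-frame record plus a simple analysis step. The sketch below is a hypothetical data structure and toy bottleneck test with invented thresholds, not the patent's profiling unit.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PipeStats:
    workload: float          # busy fraction of the frame time
    pixels_rendered: int
    polygons_in: int
    texture_bytes: int

@dataclass
class FrameStats:
    api: str                 # "OpenGL", "DirectX", ...
    frame_time_ms: float
    transferred_bytes: int   # geometry, texture, animation and meta data transfer
    depth_complexity: float
    pipes: List[PipeStats] = field(default_factory=list)

def find_bottlenecks(s: FrameStats) -> Dict[str, bool]:
    """Flag the problem areas the profiler is said to identify (toy thresholds)."""
    loads = [p.workload for p in s.pipes] or [0.0]
    return {
        "bus_transfer_bound": s.transferred_bytes > 256 * 2**20,
        "fill_bound": s.depth_complexity > 4.0,
        "load_imbalance": (max(loads) - min(loads)) > 0.25,
    }
```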
  • the object-division method surpasses the other division modes in that it relieves more bottlenecks.
  • object division relaxes virtually all bottlenecks across the pipeline: (i) geometry (i.e. polygons, lines, dots, etc.) transform processing is offloaded, with each pipeline handling only 1/N of the polygons (N being the number of participating pipeline cores); (ii) fill-bound processing is reduced since fewer polygons feed the rasterizer; (iii) less geometry memory is needed; and (iv) less texture memory is needed.
  • the time-division method relieves bottlenecks by allowing each pipeline core more time per frame generation; however, it suffers from severe problems such as CPU bottlenecks, frame buffers generated by the pipeline cores that are not available to each other, and frequent cases of pipeline latency. Therefore this method is not suitable for all applications. Consequently, owing to its superiority at opening bottlenecks, object division becomes the primary parallel mode.
  • the following object division algorithm distributes polygons among the multiple graphic pipeline cores.
  • A typical application generates a stream of graphic calls that includes blocks of graphic data; each block consists of a list of geometric operations, such as single-vertex operations or buffer-based operations (vertex arrays).
  • the decomposition algorithm splits the data between pipeline cores preserving the blocks as basic data units.
  • Geometric operations are attached to the block(s) of data, instructing the way the data is handled.
  • a block is directed to a designated GPU.
  • there are operations belonging to the group of Blocking Operations, such as Flush, Swap and Alpha blending, which affect the entire graphic system, setting the system to blocking mode.
  • Blocking operations are exceptional in that they require composed, valid FB data; thus, in the parallel setting of the present invention, they have an effect on all pipeline cores. Therefore, whenever one of the Blocking operations is issued, all the pipeline cores must be synchronized.
  • Each frame has at least 2 blocking operations: Flush and Swap, which terminate the frame.
  • FIG. 11 presents a flowchart describing an algorithm for distributing polygons among multiple GPU-driven pipeline cores, according to an illustrative embodiment of the present invention.
  • the frame activity starts with distributing blocks of data among GPUs.
  • Each graphic operation is tested for blocking mode at step 1112 .
  • In the regular (non-blocking) path, data is redirected to the designated pipeline core at step 1113 . This loop is repeated until a blocking operation is detected.
  • the Swap operation activates the double buffering mechanism, swapping the back and front color buffers. If Swap is detected at step 1115 , it means that the composited frame must be terminated at all pipeline cores, except pipeline 0 . All pipeline cores have the final composed contents of a FB designated to store said contents, but only the one connected to the screen (pipeline 0 ) displays the image at step 1116 .
  • Another case is operations that are applied globally to the scene and need to be broadcast to all the pipeline cores. If one of the other blocking operations is identified, such as Alpha blending for transparency, then all pipeline cores are flushed as before at step 1114 , and merged into a common FB. This time the Swap operation is not detected (step 1115 ), therefore all pipeline cores have the same data, and as long as the blocking mode is on (step 1117 ), all of them keep processing the same data (step 1118 ). If the end of the block mode is detected at step 1117 , the pipeline cores return to working on their designated data (step 1113 ).
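Read as code, the FIG. 11 loop might look like the sketch below. The operation names, the round-robin designation rule, and the `still_blocking` flag are assumptions introduced for illustration; the step numbers in the comments refer to the flowchart as described in the text.

```python
BLOCKING_OPS = {"flush", "swap", "alpha_blend"}    # the group named in the text

def distribute_frame(blocks, pipes, compositor, display):
    """Distribute a frame's blocks among GPU-driven pipes, honoring blocking ops."""
    blocking_mode = False
    for i, block in enumerate(blocks):
        if block.op not in BLOCKING_OPS:               # step 1112: not a blocking op
            if blocking_mode:
                for p in pipes:                        # step 1118: all pipes, same data
                    p.submit(block)
                blocking_mode = block.still_blocking   # step 1117: end of block mode?
            else:
                pipes[i % len(pipes)].submit(block)    # step 1113: designated pipe
        else:
            fbs = [p.flush() for p in pipes]           # step 1114: flush all pipes
            composed = compositor.compose(fbs)         # merge into a common FB
            for p in pipes:
                p.load_framebuffer(composed)
            if block.op == "swap":                     # steps 1115-1116
                display.present(composed)              # only pipe 0 drives the screen
            else:
                blocking_mode = True                   # e.g. alpha blending
```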
  • Depth complexity is the number of fragment replacements as a result of depth tests (the number of polygons drawn on every pixel). In the ideal case of no fragment replacement (e.g. all polygons of the scene are located at the same depth level), the fill load is reduced in proportion to the reduced number of polygons (e.g. halved for 2 pipeline cores).
  • As depth complexity gets high, the advantage of object division drops, and in some cases image division may even perform better, e.g. in applications with a small number of polygons and a high volume of textures.
  • the present invention introduces a dynamic load-balancing technique that combines the object division method with the image division and time division methods in the image and time domains, based on the load exhibited by previous processing stages. Combining all three parallel methods into a unified framework dramatically increases the frame rate stability of the graphic system.
  • FIG. 12 discloses a sample configuration of the system, employing 8 pipeline cores, according to an embodiment of the present invention.
  • the pipeline cores are divided into two groups for time division parallelism. Pipeline cores indexed with 1 , 2 , 3 , and 4 are configured to process even frames and pipeline cores indexed with 5 , 6 , 7 , and 8 are configured to process odd frames.
  • two pipeline core subgroups are set for image division: the pipeline cores with the lower indexes ( 1 , 2 and 5 , 6 respectively) are configured to process half of the screen, and the high-indexed pipeline cores ( 3 , 4 and 7 , 8 respectively) are configured to process the other half.
  • pipeline cores indexed with 1 , 3 , 5 and 7 are fed with half of the objects
  • pipeline cores indexed with 2 , 4 , 6 and 8 are fed with the other half of the objects.
  • pipeline cores are reconfigured, so that each pipeline core will render a quarter of the screen within the respective frame.
  • the original partition for time division, between pipeline cores 1 , 2 , 3 , 4 and between 5 , 6 , 7 , 8 , still holds, but pipeline core 2 and pipeline core 5 are configured to render the first quarter of the screen in even and odd frames respectively.
  • Pipeline cores 1 and 6 render the second quarter
  • pipeline cores 4 and 7 the third quarter
  • pipeline cores 3 and 8 the fourth quarter. No object division is implied.
  • pipeline cores are reconfigured, so that each pipeline core will process a quarter of the geometrical data within the respective frame. That is, pipeline cores 3 and 5 are configured to process the first quarter of the polygons in even and odd frames respectively.
  • Pipeline cores 1 and 7 render the second quarter
  • pipeline cores 4 and 6 the third quarter
  • pipeline cores 2 and 8 the fourth quarter. No image division is implied.
  • any combination of the parallel modes can be scheduled to evenly balance the graphic load.
  • the parallelization process between all pipeline cores may be based on an object division mode or image division mode or time division mode or any combination thereof in order to optimize the processing performance of each frame.
  • the decision on parallel mode is done on a per-frame basis, based on the above profiling and analysis. It is then carried out by reconfiguration of the parallelization scheme, as described above and shown in FIGS. 8 , 9 , 10 and 12 .
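For the eight-core example of FIG. 12, such a combined schedule can be expressed as a small table that assigns each pipeline core a (frame parity, screen half, object half) role. The pipe numbering produced below is illustrative and does not reproduce the exact assignment in the figure.

```python
from itertools import product

def combined_schedule(n_pipes=8, time_groups=2, image_tiles=2, object_parts=2):
    """Combine time, image and object division into one assignment per pipe."""
    roles = list(product(range(time_groups), range(image_tiles), range(object_parts)))
    assert len(roles) == n_pipes, "groups x tiles x parts must equal the pipe count"
    return {pipe + 1: {"frame_parity": t, "screen_half": s, "object_half": o}
            for pipe, (t, s, o) in enumerate(roles)}

if __name__ == "__main__":
    for pipe, role in combined_schedule().items():
        print(pipe, role)   # e.g. 1 {'frame_parity': 0, 'screen_half': 0, 'object_half': 0}
```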
  • the MP-SOC architecture described in great detail hereinabove can be readily adapted for use in diverse kinds of graphics processing and display systems. While the illustrative embodiments of the present invention have been described in connection with PC-type computing systems, it is understood that the present invention can be used to improve graphical performance in diverse kinds of systems, including mobile computing devices, embedded systems, as well as scientific and industrial computing systems supporting graphic visualization of photo-realistic quality.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • Image Generation (AREA)
  • Image Processing (AREA)
  • Advance Control (AREA)

Abstract

A PC-based computing system employing a silicon chip with a routing unit to distribute geometrical data and graphics commands to multiple GPU-driven pipeline cores during a mode of parallel operation. The PC-based computing system includes a system memory for storing software graphics applications, software drivers and graphics libraries, an operating system (OS) stored in the system memory, and a central processing unit (CPU) for executing the OS, graphics applications, drivers and graphics libraries. The system also includes a CPU/memory interface module and a CPU bus. The silicon chip interfaces with the CPU/memory interface module. The routing unit (i) routes the stream of geometrical data and graphic commands from the graphics application to one or more of the GPU-driven pipeline cores, and (ii) routes pixel data output from one or more of the GPU-driven pipeline cores during the composition of frames of pixel data corresponding to final images for display on the display surface.

Description

    RELATED CASES
  • This application is a Continuation of U.S. application Ser. No. 11/340,402 filed Jan. 25, 2006; which is a Continuation-in-Part of provisional Application No. 60/647,146 filed Jan. 25, 2005; International Application No. PCT/IL2004/000079 filed Jan. 28, 2004, published as WIPO Publication No. WO 2004/070652 A2 on Aug. 19, 2004; and International Application No. PCT/IL2004/001069 filed Nov. 19, 2004, published as WIPO Publication No. WO 2005/050557 A2 on Jun. 2, 2005, and entered in the U.S. National Stage on May 17, 2006 as U.S. application Ser. No. 10/579,682, and based on U.S. provisional Application Nos. 60/523,084 and 60/523,102, both filed Nov. 19, 2003; each Application being commonly owned by Lucid Information Technology Ltd, of Israel, and incorporated fully herein.
  • BACKGROUND OF INVENTION
  • 1. Field of the Invention
  • Over the past few decades, much of the research and development in the graphics architecture field has been concerned with ways to improve the performance of three-dimensional (3D) computer graphics rendering. Graphics architecture is driven by the same advances in semiconductor technology that have driven general-purpose computer architecture. Many of the same acceleration techniques have been used in this field, including pipelining and parallelism. The graphics rendering application, however, imposes special demands and makes available new opportunities. For example, since image display generally involves a large number of repetitive calculations, it can more easily exploit massive parallelism than can general-purpose computations.
  • In high-performance graphics systems, the number of computations greatly exceeds the capabilities of a single processing unit, so parallel systems have become the rule in graphics architecture. A very high level of parallelism is applied today in silicon-based graphics processing units (GPUs) to perform graphics computations.
  • Typically these computations are performed by a graphics pipeline, supported by video memory, which together form part of a graphic system. FIG. 1A shows a block diagram of a conventional graphic system as part of a PC architecture, comprising a CPU (111), system memory (112), an I/O chipset (113), a high-speed CPU-GPU bus (114) (e.g. PCI express 16×), a video (graphic) card (115) based on a single GPU, and a display (116). The single GPU graphic pipeline, as shown in FIG. 1B, decomposes into two major parts: a geometry subsystem for processing 3D graphics primitives (e.g. polygons) and a pixel subsystem for computing pixel values. These two parts are consistently designed for increased parallelism.
  • In the geometry subsystem, the graphics databases are regular, typically consisting of a large number of primitives that receive nearly identical processing; therefore the natural concurrency is to partition the data into separate streams and to process them independently. In the pixel subsystem, image parallelism has long been an attractive approach for high-speed rasterization architectures, since pixels can be generated in parallel in many ways. An example of a highly parallel Graphic Processing Unit chip (GPU) in prior art is depicted in FIG. 2A (taken from 3D Architecture White Paper, by ATI). The geometry subsystem consists of six (6) parallel pipes while the pixel subsystem has sixteen (16) parallel pipes.
  • However, as shown in FIG. 2B, the “converge stage” 221 between these two subsystems is very problematic as it must handle the full data stream bandwidth. In the pixel subsystem, the multiple streams of transformed and clipped primitives must be directed to the processors doing rasterization. This can require sorting primitives based on spatial information while different processors are assigned to different screen regions. A second difficulty in the parallel pixel stage is that the ordering of data may change as those data pass through parallel processors. For example, one processor may transform two small primitives before another processor transforms a single, large one. Certain global commands, such as commands to update one window instead of another, or to switch between double buffers, require that data be synchronized before and after the command. This converge stage between the geometry and pixel stages restricts the parallelism in a single GPU.
  • A typical technology for increasing the level of parallelism employs multiple GPU cards, or multiple GPU chips on a card, where the rendering performance is further improved, beyond the converge limitation of a single-core GPU. This technique is practiced today by several academic research efforts (e.g. the Chromium parallel graphics system by Stanford University) and commercial products (e.g. SLI—a dual GPU system by Nvidia, and Crossfire—a dual GPU system by ATI). FIG. 3 shows a commercial dual GPU system, Asus A8N-SLI, based on Nvidia SLI technology.
  • Parallelization is capable of increasing performance by releasing bottlenecks in graphic systems. FIG. 2C indicates typical bottlenecks in a graphic pipeline that breaks down into segmented stages of bus transfer, geometric processing and fragment fill bound processing. A given pipeline is only as strong as its weakest stage; thus the main bottleneck determines overall throughput. As indicated in FIG. 2C, pipeline bottlenecks stem from: (231) geometry, texture, animation and meta data transfer, (232) geometry data memory limits, (233) texture data memory limits, (234) geometry transformations, and (235) fragment rendering.
  • There are different ways to parallelize the GPUs, such as: time-division (each GPU renders the next successive frame); image-division (each GPU renders a subset of the pixels of each frame); and object-division (each GPU renders a subset of the whole data, including geometry and textures), as well as derivatives and combinations thereof. Although promising, this approach of parallelizing a cluster of GPU chips suffers from some inherent problems, such as: restricted bandwidth of inter-GPU communication; mechanical complexity (e.g. size, power, and heat); redundancy of components; and high cost.
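As a quick illustration of these three decomposition strategies (not taken from the patent text), each can be written as a simple assignment function:

```python
def time_division(frames, n_gpus):
    """Each GPU renders every Nth frame in turn."""
    return {g: frames[g::n_gpus] for g in range(n_gpus)}

def image_division(width, height, n_gpus):
    """Each GPU renders a horizontal band (a subset of the pixels) of every frame."""
    band = height // n_gpus
    return {g: (0, g * band, width, height if g == n_gpus - 1 else (g + 1) * band)
            for g in range(n_gpus)}            # (x0, y0, x1, y1) region per GPU

def object_division(draw_calls, n_gpus):
    """Each GPU renders a subset of the scene data; partial frames are later composited."""
    return {g: draw_calls[g::n_gpus] for g in range(n_gpus)}
```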
  • Thus, there is a great need in the art for an improved method of and apparatus for high-speed graphics processing and display, which avoids the shortcomings and drawbacks of such prior art apparatus and methodologies.
  • OBJECTS AND SUMMARY OF THE PRESENT INVENTION
  • Accordingly, a primary object of the present invention is to provide a novel method of and apparatus for high-speed graphics processing and display, which avoid the shortcomings and drawbacks of prior art apparatus and methodologies.
  • Another object of the present invention is to provide a novel graphics processing and display system having multiple graphics cores with unlimited graphics parallelism, getting around the inherent converge bottleneck of a single GPU system.
  • Another object of the present invention is to provide a novel graphics processing and display system which ensures the best graphics performance, eliminating the shortcomings of a multi-chip system: the restricted bandwidth of inter-GPU communication, mechanical complexity (size, power, and heat), redundancy of components, and high cost.
  • Another object of the present invention is to provide a novel graphics processing and display system that has an amplified graphics processing and display power by parallelizing multiple graphic cores in a single silicon chip.
  • Another object of the present invention is to provide a novel graphics processing and display system that is realized on a silicon chip having a non-restricted number of multiple graphic cores.
  • Another object of the present invention is to provide a novel graphics processing and display system that is realized on a silicon chip which utilizes a cluster of multiple graphic cores.
  • Another object of the present invention is to provide a novel graphics processing and display system that is realized on a silicon chip having multiple graphic cores or pipes (i.e. a multiple-pipe system-on-chip, or MP-SOC) and providing architectural flexibility to achieve the advanced parallel graphics display performance.
  • Another object of the present invention is to provide a novel graphics processing and display system that is realized on a silicon chip having multiple graphic cores, and adaptively supporting different modes of parallelism within both its geometry and pixel processing subsystems.
  • Another object of the present invention is to provide a novel graphics processing and display system that is realized on a silicon chip having multiple GPU cores, and providing adaptivity for highly advanced graphics processing and display performance.
  • Another object of the present invention is to provide a novel graphics processing and display system and method, wherein the graphic pipeline bottlenecks of vertex (i.e. 3D polygon geometry) processing and fragment processing are transparently and intelligently resolved.
  • Another object of the present invention is to provide a method and system for an intelligent decomposition of data and graphic commands, preserving the basic features of graphic libraries as state machines and tightly adhering to the graphic standard.
  • Another object of the present invention is to provide a new PCI graphics card supporting a graphics processing and display system realized on a silicon chip having multiple graphic cores, and providing architectural flexibility to achieve the best parallel performance.
  • Another object of the present invention is to provide a computing system having improved graphics processing and display capabilities, employing a graphics card having a silicon chip with multiple graphic cores, and providing architectural flexibility to achieve the best parallel performance.
  • Another object of the present invention is to provide such a computing system having improved graphics processing and display performance required by applications including video-gaming, virtual reality, scientific visualization, and other interactive applications requiring or demanding photo-realistic graphics display capabilities.
  • These and other objects and advantages of the present invention will become apparent hereinafter.
  • BRIEF DESCRIPTION OF DRAWINGS OF THE PRESENT INVENTION
  • For a more complete understanding of how to practice the Objects of the Present Invention, the following Detailed Description of the Illustrative Embodiments can be read in conjunction with the accompanying Drawings, briefly described below, wherein:
  • FIG. 1A is a schematic representation of a prior art, standard PC architecture, in which its conventional single GPU graphic card is shown circled;
  • FIG. 1B is a simplified block diagram of a prior art conventional graphics system employing a single GPU, having geometry and pixel processing subsystems, wherein the data converge stream therebetween presents a serious system bottleneck that significantly limits performance;
  • FIG. 2A is a schematic diagram illustrating high parallelism in a typical prior art ATI X800 Graphic Processing Unit chip (GPU), wherein the geometry subsystem consists of 6 parallel pipes and the pixel subsystem consists of 16 parallel pipes;
  • FIG. 2B is a schematic diagram of the internal portion of a prior art graphic processing unit (GPU) chip (e.g. ATI X800) illustrating the bottlenecking converge stage (setup engine) between geometric and pixel parallel engines therein;
  • FIG. 2C is a schematic representation of a conventional graphics pipeline, illustrating the data bottleneck problem existing therein;
  • FIG. 3 is a photograph of a prior art dual GPU-driven video graphics card;
  • FIG. 4A is a schematic system block diagram representation of a computing system employing a printed circuit graphics card employing the multiple-pipe system-on-chip (MP-SOC) device in accordance with the principles of the present invention, wherein the system block diagram shows the CPU, I/O chipset, system memory, graphic card based on MP-SOC, and display screen(s);
  • FIG. 4B is a schematic representation of the physical implementation of the MP-SOC of the present invention, mounted on a printed circuit (PC) video graphics board;
  • FIG. 4C is a photograph of a standard PCI express graphics slot on a motherboard to which MP-SOC-based PC graphics board of the present invention is interconnected;
  • FIG. 4D is a schematic representation of an exemplary MP-SOC silicon-layout including four GPU-driven pipeline cores according to the principles of the present invention;
  • FIG. 4E is a schematic representation of an exemplary packaging of the MP-SOC chip of the present invention;
  • FIG. 5 is a schematic block diagram of the MP-SOC architecture, according to the illustrative embodiment of the present invention;
  • FIG. 6 is the software block diagram of the MP-SOC based computing system, according to the illustrative embodiment of the present invention;
  • FIG. 7A is a schematic block diagram further illustrating the modules that comprise the multi-pipe software drivers of the MP-SOC based system of the illustrative embodiment of the present invention;
  • FIG. 7B is a flow chart illustrating the steps carried out by the mechanism that runs the three parallelization modes (i.e. Object Division, Image Division and Time Division) within the MP-SOC-based devices and systems of the present invention;
  • FIG. 8 is a schematic representation illustrating the object-division configuration of the MP-SOC system of the present invention;
  • FIG. 9 is a schematic representation illustrating the image-division configuration of the MP-SOC system of the present invention;
  • FIG. 10 is a schematic representation illustrating the time-division configuration of the MP-SOC system of the present invention;
  • FIG. 11 is a flowchart illustrating the process for distributing polygons between multiple GPU-driven pipeline cores along the MP-SOC-based system of the present invention; and
  • FIG. 12 shows an example of eight (8) GPU-driven pipeline cores arranged as a combination of parallel modes, in accordance with the principles of the present invention.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • The techniques taught in Applicant's prior PCT application No. PCT/IL04/001069, published as WIPO Publication No. WO 2005/050557 A2, incorporated herein by reference, teach the use of a scalable graphics Hub architecture, comprised of a Hardware Hub and a Software Hub Driver, which serves to glue together (i.e. operate in parallel) off-the-shelf GPU chips for the purpose of providing a high-performance and scalable visualization solution, an object division decomposition algorithm employing multiple parallel modes and combinations thereof, and adaptive parallel mode management. Also, PCT Application No. PCT/IL2004/000079, published as WIPO Publication No. WO 2004/070652 A2, incorporated herein by reference, teaches the use of an image compositing mechanism based on associative decision making, to provide fast and inexpensive re-compositing of frame buffers as part of Object Division parallelism.
  • The approaches taught in Applicant's PCT Applications identified above have numerous advantages and benefits, namely the ability to construct powerful parallel systems by use of off-the-shelf GPUs, transparently to existing applications. However, in many applications, it will be desirable to provide such benefits in conventional graphics systems using an alternative approach, namely: by providing PCs with a graphics processing and display architecture employing a powerful graphics processing and display system realized on monolithic silicon chips, for the purpose of delivering high performance and high frame-rate stability of graphic solutions at relatively low cost, with transparency to existing graphics applications.
  • The benefits of this novel alternative approach include VLSI-based miniaturization of multi-GPU clusters, high bandwidth of inter-GPU communication, lower power and heat dissipation, no redundancy of components, and low cost. Details on practicing this alternative approach will now be described below.
  • In general, the present invention disclosed herein teaches an improved way of and a means for parallelizing graphics functions on a semiconductor level, as a multiple graphic pipeline architecture realized on a single chip, preferably of monolithic construction. For convenience of expression, such a device is termed herein a “multi-pipe system on chip” or “MP-SOC”. This system “on a silicon chip” comprises a cluster of GPU-driven pipeline cores organized in a flexible topology, allowing different parallelization schemes. Theoretically, the number of pipeline cores is unlimited, restricted only by silicon area considerations. The MP-SOC is driven by software driver modes, which are resident on the host CPU. The variety of parallelization schemes enables performance optimization. These schemes are time, image and object division, and derivatives thereof.
  • The illustrative embodiment of the present invention enjoys the advantages of a multi-GPU chip, namely bypassing the converge limitation of a single GPU, while at the same time avoiding the inherent problems of a multi-GPU system, such as restricted bandwidth of inter-GPU communication, mechanical complexity (size, power, and heat), redundancy of components, and high cost.
  • As shown in FIG. 4A, the physical graphic system of the present embodiment comprises a conventional motherboard (418) and an MP-SOC based graphic card (415). The motherboard carries the usual set of components, which are the CPU (411), system memory (412), I/O chipset (413), and other non-graphic components as well (see FIG. 1A for the complete set of components residing on a PC motherboard). The printed circuit graphic card based on the MP-SOC chip (416) connects to the motherboard via a PCI express 16× lanes connector (414). The card also has an output to at least one screen (416). The MP-SOC graphic card replaces the conventional single-GPU graphic card on the motherboard. The way the MP-SOC graphic card integrates into a conventional PC system becomes apparent from comparing FIG. 4A with FIG. 1A. By simply replacing the single-GPU graphic card (circled in FIG. 1A) with the MP-SOC based card of the present invention, and replacing its drivers with the multi-pipe soft drivers on the host CPU (419), the system of the invention is realized with all of the advantages and benefits described herein. This modification is completely transparent to the user and application, apart from the improved performance.
  • FIG. 4B shows a possible physical implementation of the present invention. A standard-form PC card (421), on which the MP-SOC (422) is mounted, connects to the motherboard (426) of the host computing system via a PCI Express x16 connector (423). The display screen is connected via a standard DVI connector (424). Since the multiple pipelines on the MP-SOC are anticipated to consume high power, for which the standard supply via the PCI Express connector is not adequate, auxiliary power is supplied to the card via a dedicated power cable (425).
  • FIG. 4C shows the PCI Express connector (431) on a motherboard to which an MP-SOC-based card connects. It should be emphasized that the standard physical implementation of the MP-SOC on a PC card makes it an easy and natural replacement for prior art GPU-driven video graphics cards.
  • FIGS. 4D and 4E present an artist's concept of the MP-SOC chip, to further illustrate a physical implementation of the semiconductor device. FIG. 4D shows a possible MP-SOC silicon layout. In this example there are four off-the-shelf graphics pipeline cores. The number of cores can be scaled to any number, subject to silicon area restrictions. A detailed discussion of the MP-SOC functional units is given below. FIG. 4E shows a possible packaging and appearance of the MP-SOC chip. As mentioned before, this chip, along with other peripheral components (e.g. memory chips, bus chips, etc.), is intended to be mounted on a standard-sized PCB (printed circuit board) and used as the sole graphics card in a PC system, replacing prior art video graphics cards. Production of MP-SOC-based cards can be carried out by graphics card manufacturers (e.g. AsusTech, Gigabyte).
  • As presented in FIG. 5, the multi-pipe-SOC architecture consists of the following components:
      • Routing center, located on the CPU bus (e.g. 16-lane PCI Express). It distributes the graphics data stream coming from the CPU among the graphics pipeline cores, and then collects the rendered results (frame buffers) from the cores and passes them to the compositing unit. The way data is distributed is dictated by the control unit, depending on the current parallelization mode (a behavioral sketch of this dispatch follows this list).
      • Compositing unit re-composes the partial frame buffers according to the ongoing parallelization mode.
      • Control unit is under control of the CPU-resident soft multi-pipe driver. It is responsible for configuration and functioning of the entire MP-SOC system according to the parallelization mode.
      • Processing element (PE) unit with internal or external memory, and optional cache memory. The PE can be any kind of processor-on-chip according to architectural needs. Besides serving the PE, the cache and memory can be used to cache graphics data common to all pipeline cores, such as textures, vertex objects, etc.
      • Multiple GPU-driven pipeline cores. These cores may, but need not, be of proprietary design; they can be cores originally designed as regular single-core GPUs.
      • Profiling functions unit. This unit delivers benchmarking data to the multi-pipe driver, such as memory speed, memory usage in bytes, total pixels rendered, geometric data entering rendering, frame rate, workload of each pipeline core, load balance among pipeline cores, volumes of transferred data, textures count, and depth complexity.
      • Display interface, capable of running single or multiple screens.
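As a rough illustration of the routing behavior described above, the following C++ sketch shows how a dispatch decision could differ by parallelization mode: object division partitions blocks of data among the cores, image division broadcasts every block to all cores, and time division sends whole frames to cores in round-robin order. This is a behavioral sketch only; the Block type, the route() function and the modulo-based partitioning are illustrative assumptions, not the routing center's actual interface.

```cpp
// Behavioral sketch (not RTL) of how the routing center might dispatch a
// frame's data according to the active parallelization mode. The Block type,
// route() function and modulo-based partitioning are illustrative assumptions.
#include <cstdio>
#include <vector>

enum class Mode { ObjectDivision, ImageDivision, TimeDivision };
struct Block { int id; };                           // a block of geometric data

// Decide which pipeline cores receive a given data block.
std::vector<int> route(Mode mode, const Block& b, int numCores, int frameIndex) {
    std::vector<int> targets;
    switch (mode) {
        case Mode::ObjectDivision:                  // partition blocks among cores
            targets.push_back(b.id % numCores);
            break;
        case Mode::ImageDivision:                   // broadcast every block to all cores
            for (int c = 0; c < numCores; ++c) targets.push_back(c);
            break;
        case Mode::TimeDivision:                    // whole frame to one core, round-robin
            targets.push_back(frameIndex % numCores);
            break;
    }
    return targets;
}

int main() {
    const int numCores = 4;
    for (int id = 0; id < 3; ++id) {
        for (int core : route(Mode::ObjectDivision, Block{id}, numCores, /*frameIndex=*/0))
            std::printf("OD: block %d -> core %d\n", id, core);
    }
    return 0;
}
```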
  • As shown in FIG. 6, the software of the system comprises the graphics application, a graphics library (e.g. the graphics standards OpenGL or DirectX), and the proprietary soft driver (multi-pipe driver). A generic graphics application needs no modifications or special porting efforts to run on the MP-SOC.
  • FIG. 7 shows a functional block diagram presenting the main tasks of the multi-pipe driver, according to an embodiment of the present invention. The multi-pipe driver carries out at least the following functions:
      • Generic GPU drivers. These perform all the functions of a generic GPU driver associated with interaction with the Operating System and the graphics library (e.g. OpenGL or DirectX), and with controlling the GPUs.
      • Distributed graphics functions control. This module performs all functions associated with carrying out the different parallelization modes according to the parallelization policy management. In each mode, the data is distributed and re-composed differently among the pipelines, as will be described in greater detail hereinafter.
      • State monitoring. The graphics libraries (e.g. OpenGL and DirectX) are state machines. Parallelization must preserve a cohesive state across the graphics system. This is done by continuous analysis of all incoming commands, while the state commands and some of the data must be replicated to all pipelines in order to preserve a valid state across the graphics pipelines. A specific problem is posed by the class of Blocking operations, such as Flush, Swap and Alpha blending, which affect the entire graphics system, setting the system to blocking mode. Blocking operations are exceptional in that they require composed valid FB data; thus, in the parallel setting of the present invention, they have an effect on all pipeline cores. A more detailed description of handling Blocking operations is given hereinafter.
      • Application profiling and analysis module. This module performs real-time profiling and analysis of the running application. It continuously monitors application parameters in the system, such as memory speed, memory usage in bytes, total pixels rendered, geometric data entering rendering, frame rate, workload of each pipeline core, load balance among graphics pipelines, volumes of transferred data, textures count, and depth complexity. The profiler module identifies problem areas within the graphics system which cause bottlenecks. The profiler module requires inputs from the registers of the multi-pipe cores, the registers of the MP-SOC control unit, and the graphics API commands (e.g. OpenGL, DirectX).
      • Parallelism policy management. This module decides on the parallel mode to be performed, on a per-frame basis, based on the above profiling and analysis. The decision is then carried out by means of the control unit in the MP-SOC (an illustrative per-frame decision heuristic is sketched after this list).
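The precise per-frame decision rule belongs to the Profiling/Analysis and Parallelism Policy Management modules and is not spelled out here; the following C++ fragment is only an illustrative heuristic consistent with the discussion below (object division as the preferred default, image division for deeply fill-bound scenes, time division for light loads). The FrameProfile fields and every threshold are assumptions made for this sketch.

```cpp
// Illustrative heuristic only: the exact per-frame decision rule belongs to the
// Profiling/Analysis and Parallelism Policy Management modules. The FrameProfile
// fields and every threshold below are assumptions made for this sketch.
#include <cstdio>

enum class Mode { ObjectDivision, ImageDivision, TimeDivision };

struct FrameProfile {
    double geometryLoad;     // share of frame time in geometry processing
    double fillLoad;         // share of frame time in rasterization/fill
    double depthComplexity;  // average fragment replacements per pixel
};

Mode choosePolicy(const FrameProfile& p) {
    // Object division is the preferred default, since it relieves both
    // geometry and fill bottlenecks (see the discussion of FIG. 7B below).
    if (p.depthComplexity > 4.0 && p.fillLoad > p.geometryLoad)
        return Mode::ImageDivision;   // deep, fill-bound scenes
    if (p.geometryLoad < 0.2 && p.fillLoad < 0.2)
        return Mode::TimeDivision;    // lightly loaded frames
    return Mode::ObjectDivision;
}

int main() {
    const FrameProfile p{0.6, 0.3, 1.5};
    std::printf("chosen mode = %d (0=OD, 1=ID, 2=TD)\n",
                static_cast<int>(choosePolicy(p)));
    return 0;
}
```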
  • A major feature of the present invention is its topological flexibility, which enables relief of performance bottlenecks. Such flexibility is gained by rearranging the cluster of graphics pipelines by means of the routing center and different merging schemes at the compositing unit. Different parallelization schemes address different performance bottlenecks; therefore, bottlenecks identified by the profiling module can be cured by utilizing the corresponding parallelization scheme.
  • The flowchart of FIG. 7B describes the mechanism that runs the three parallel modes: Object Division, Image Division and Time Division. The mechanism combines the activity of the soft driver modules with the MP-SOC units. One cycle of the flowchart corresponds to one frame. The mode to begin with is Object Division (OD), since it is the preferred parallel mode, as will be explained hereinafter. Profiling and analysis of the application is constantly on, under control of the soft Profile and Analysis module (S-PA). Every frame, the Parallel Policy Management (S-PPM) module checks for the optimal mode, choosing from the three parallelization modes.
  • Let us assume that the Object Division (OD) path is taken. The Distributed Graphic Functions Control (S-DGFC) module configures the entire system for OD, which is characterized by the distribution of geometric data and by the compositing algorithm in use. This configuration is shown in FIG. 8 and described in detail later on. The S-DGFC module decomposes the geometric data into partitions, each sent by the Routing unit (C-RC) to a different GPU-driven pipe core (C-PC) for rendering. The rendered stream of data is monitored by the State Monitoring (S-SM) module for blocking commands, as shown in FIG. 11 and described in greater detail hereinafter. When rendering is completed, all the Frame Buffers are moved by the Control Unit (C-Ctrl) to the Compositing Unit (C-CU), which composites all buffers into a single one based on a depth test (as explained in detail below). The final FB is moved to the Display by the Display Interface Unit (C-DI). At the end of the frame, the S-PA and S-PPM modules test for the option of changing the parallel mode. If the decision is to stay with the same mode, a new OD frame starts with another data partition; otherwise, a new test for the optimal mode is performed by the S-PA and S-PPM modules.
  • The left path in the flowchart is the Image Division (ID) operation. The ID configuration, as set by the S-DGFC, is also shown in FIG. 9 and described later in greater detail. It is characterized by broadcasting of the same data among all pipe cores and by an image-based compositing algorithm. The partitioning of the image among the pipe cores is done by the S-DGFC. The data is broadcast by the Routing Center and then rendered at the pipe cores (C-PC), each of which is designated a different portion of the image. Upon completion of rendering, the C-Ctrl moves the partial FBs to the compositing unit (C-CU) for reconstruction of the complete image. Then the C-DI moves the FB to the Display. Finally, the change test is performed by the S-PA and S-PPM modules. Depending on the result, a new frame either continues in ID mode or switches to another mode.
  • The Time Division mode alternates frames among the GPU-driven pipe cores. It is set for alternation by the S-DGFC module, which designates a frame's data to each core, the data being delivered by the C-RC unit. Each core (C-PC) generates a frame in turn. The C-Ctrl then moves the completed FB via the compositing unit to the Display Interface, and out to the display; the compositing unit in this mode acts merely as a pass-through. Finally, there is a change-mode test by the S-PA and S-PPM modules, the same as in the other modes.
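Pulling the pieces of FIG. 7B together, the C++ skeleton below mirrors the per-frame cycle just described (configure, distribute, render, composite, display, then re-test the mode). Every function here is a stub standing in for a driver module (S-*) or MP-SOC unit (C-*); none of it is the actual driver or hardware interface, and the stub bodies only indicate which unit would act at each step.

```cpp
// Skeleton of one pass through the FIG. 7B cycle; all functions are placeholders.
#include <cstdio>

enum class Mode { ObjectDivision, ImageDivision, TimeDivision };

Mode profileAndChoose(Mode current) { return current; }  // S-PA + S-PPM (stubbed)
void configure(Mode)  { /* S-DGFC sets up routing and compositing for the mode */ }
void distribute(Mode) { /* C-RC partitions, broadcasts or alternates the frame data */ }
void render()         { /* C-PC pipeline cores render their designated share */ }
void composite(Mode)  { /* C-CU depth-merges, 2D-merges, or simply passes through */ }
void display()        { /* C-DI drives the screen */ }

int main() {
    Mode mode = Mode::ObjectDivision;      // OD is the preferred starting mode
    for (int frame = 0; frame < 3; ++frame) {
        configure(mode);
        distribute(mode);
        render();
        composite(mode);
        display();
        mode = profileAndChoose(mode);     // per-frame change-mode test
        std::printf("frame %d rendered in mode %d\n", frame, static_cast<int>(mode));
    }
    return 0;
}
```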
  • FIG. 8 describes the object-division parallelization scheme. The soft driver, and specifically the Distributed Graphic Functions Control module, breaks down the polygon data of a scene into N partial streams (N being the number of participating pipeline cores). The entire data is sent, by the GPU Drivers module, to the MP-SOC Routing Center, which distributes the data to the N pipeline cores for rendering according to the soft driver's partition, each core receiving approximately 1/N of the polygons. Rendering in the pipeline cores is done under the monitoring of the State Monitoring module of the soft driver (FIG. 11 and detailed description below). The resultant full frame buffers are gathered in the Compositing Unit. They are depth-composited, pixel by pixel, to find the final set of visible pixels: at each x-y coordinate all hidden pixels are eliminated by the compositing mechanism. The final frame buffer is moved out to the display.
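A minimal sketch of this depth-based compositing step follows, assuming a simple array-of-pixels frame-buffer layout: for every x-y position the fragment with the nearest depth survives, which eliminates the hidden pixels exactly as described above. The Pixel and FrameBuffer types are illustrative assumptions, not the Compositing Unit's real data formats.

```cpp
// Minimal sketch of depth-based compositing of N full frame buffers: for each
// pixel, the fragment with the nearest depth survives. Types are assumptions.
#include <cstdint>
#include <cstdio>
#include <vector>

struct Pixel { std::uint32_t color; float depth; };
using FrameBuffer = std::vector<Pixel>;

FrameBuffer depthComposite(const std::vector<FrameBuffer>& inputs) {
    FrameBuffer merged = inputs.at(0);
    for (std::size_t fb = 1; fb < inputs.size(); ++fb)
        for (std::size_t i = 0; i < merged.size(); ++i)
            if (inputs[fb][i].depth < merged[i].depth)   // nearer fragment wins
                merged[i] = inputs[fb][i];
    return merged;
}

int main() {
    FrameBuffer a = {{0xFF0000u, 0.5f}, {0x00FF00u, 0.9f}};  // core 0's full FB
    FrameBuffer b = {{0x0000FFu, 0.7f}, {0xFFFFFFu, 0.2f}};  // core 1's full FB
    for (const Pixel& p : depthComposite({a, b}))
        std::printf("color=%06X depth=%.2f\n", static_cast<unsigned>(p.color), p.depth);
    return 0;
}
```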
  • FIG. 9 describes the image-division parallelization scheme, which is chosen by the Parallelism Policy Management module as a result of the profiling, analysis, and decision making in the Profiling and Analysis module of the soft driver. Each pipeline core is designated a unique 1/N part of the screen. The complete polygon data is delivered to each of the pipeline cores via the GPU Driver module and the Routing Center. The parallel rendering in the pipeline cores results in a partial frame buffer at each core. The image segments are moved to the Compositing Unit for 2D merging into a single image, which is moved out to the display.
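For the image-division case, compositing reduces to a 2D merge of disjoint screen regions. The sketch below assumes, purely for illustration, that each core is assigned a horizontal band of the screen; the merge then just copies each band into place in the final frame buffer.

```cpp
// Sketch of the 2D merge used in image division, assuming (for illustration)
// that each core renders one horizontal band of the screen.
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    const int width = 4, height = 4, numCores = 2;

    // Each core fills only its own band; the rest of its buffer is unused (0).
    std::vector<std::vector<std::uint32_t>> partial(
        numCores, std::vector<std::uint32_t>(width * height, 0));
    for (int core = 0; core < numCores; ++core) {
        const int y0 = core * height / numCores, y1 = (core + 1) * height / numCores;
        for (int y = y0; y < y1; ++y)
            for (int x = 0; x < width; ++x)
                partial[core][y * width + x] = 0x111111u * (core + 1);  // stand-in "rendering"
    }

    // 2D merge: copy each core's band into place in the final frame buffer.
    std::vector<std::uint32_t> frame(width * height, 0);
    for (int core = 0; core < numCores; ++core) {
        const int y0 = core * height / numCores, y1 = (core + 1) * height / numCores;
        for (int y = y0; y < y1; ++y)
            for (int x = 0; x < width; ++x)
                frame[y * width + x] = partial[core][y * width + x];
    }

    std::printf("top-left=%06X bottom-left=%06X\n",
                static_cast<unsigned>(frame[0]),
                static_cast<unsigned>(frame[(height - 1) * width]));
    return 0;
}
```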
  • FIG. 10 describes the time-division parallelization scheme, which is chosen by the Parallelism Policy Management module as a result of the profiling, analysis, and decision making in the Profiling and Analysis module of the soft driver. The Distributed Graphic Functions Control module, through the GPU Drivers module, divides the sequence of frames into cycles of N (N = number of cores), allowing each core a time slot of N frames in which to render the entire polygon data of its frame. The scene polygon data is therefore distributed, via the Router, to a different pipeline core at a time. Each core performs rendering over N frame cycles and outputs its full frame buffer to the display, for a single frame. The Compositing unit functions here as a simple switch, alternating access to the Display among all the pipeline cores.
  • Different parallelization schemes resolve different performance bottlenecks. Therefore bottlenecks must be identified and then eliminated (or reduced) by applying the right scheme at the right time.
  • As shown in FIG. 7B, the profiler identifies problem areas within the graphics system which cause bottlenecks. It is implemented in the Application Profiling and Analysis module of the driver. The profiler module requires such inputs as usage of graphics API commands (e.g. OpenGL, DirectX, or others), memory speed, memory usage in bytes, total pixels rendered, geometric data entering rendering, frame rate, workload of each GPU, load balance among GPUs, volumes of transferred data, textures count, and depth complexity. These data types are collected from the following sources within the MP-SOC based graphics system:
  • 1. The profiling functions unit in MP-SOC
  • 2. The driver
  • 3. The pipeline cores
  • 4. Chipset Architecture Performance (CHAP) Counters
  • Typically, the performance data is retrieved on a frame-time basis; however, the periodicity can also be a configuration attribute of the profiler, or can be set based on a detected configuration event which the profiler is designed to detect before retrieving performance data.
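The sort of per-frame record these sources might feed into the Application Profiling and Analysis module can be pictured as follows. The field names mirror the metrics listed above, while the ProfileSample struct, the retrieveSample() hook and the values it returns are purely hypothetical.

```cpp
// Sketch of a per-frame profiling record; struct, hook and values are hypothetical.
#include <cstddef>
#include <cstdio>
#include <vector>

struct ProfileSample {
    double memorySpeedGBs;             // memory speed
    std::size_t memoryUsageBytes;      // memory usage in bytes
    long totalPixelsRendered;
    long polygonsIn;                   // geometric data entering rendering
    double frameRate;
    std::vector<double> coreWorkload;  // workload of each pipeline core
    double depthComplexity;
    int texturesCount;
};

// In the text, the data originates in the MP-SOC profiling unit, the driver,
// the pipeline cores and the chipset CHAP counters; here it is simply made up.
ProfileSample retrieveSample(int /*frame*/) {
    return ProfileSample{12.0, 268435456u, 2000000, 150000,
                         60.0, {0.30, 0.25, 0.25, 0.20}, 2.1, 420};
}

int main() {
    // Typically retrieved once per frame; the period could also be configurable.
    for (int frame = 0; frame < 2; ++frame) {
        ProfileSample s = retrieveSample(frame);
        std::printf("frame %d: fps=%.1f, depth complexity=%.1f, cores=%zu\n",
                    frame, s.frameRate, s.depthComplexity, s.coreWorkload.size());
    }
    return 0;
}
```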
  • The analysis resulting in the selection of a preferred parallel method is based on the assumption that, in a well-defined case (described below), the object-division method supersedes the other division modes in that it relieves more bottlenecks. In contrast to image division, which reduces only the fragment/fill-bound processing at each pipeline core, object division relaxes virtually all bottlenecks across the pipeline: (i) the geometry (i.e. polygons, lines, dots, etc.) transform processing is offloaded at each pipeline, each handling only 1/N of the polygons (N being the number of participating pipeline cores); (ii) fill-bound processing is reduced, since fewer polygons feed the rasterizer; (iii) less geometry memory is needed; and (iv) less texture memory is needed.
  • Although the time-division method relieves bottlenecks by allowing each pipeline core more time per frame generation, it suffers from severe problems such as CPU bottlenecks, frame buffers generated by the pipeline cores that are not available to each other, and frequent cases of pipeline latency. Therefore this method is not suitable for all applications. Consequently, due to its superiority as a bottleneck reliever, object division becomes the primary parallel mode.
  • The following object-division algorithm distributes polygons among the multiple graphics pipeline cores. A typical application generates a stream of graphics calls that includes blocks of graphics data; each block consists of a list of geometric operations, such as single-vertex operations or buffer-based operations (vertex arrays). Typically, the decomposition algorithm splits the data between pipeline cores, preserving the blocks as basic data units. Geometric operations are attached to the block(s) of data, instructing how the data is handled. A block is directed to a designated GPU. However, there are operations belonging to the group of Blocking Operations, such as Flush, Swap and Alpha blending, which affect the entire graphics system, setting the system to blocking mode. Blocking operations are exceptional in that they require composed valid FB data; thus, in the parallel setting of the present invention, they have an effect on all pipeline cores. Therefore, whenever one of the Blocking operations is issued, all the pipeline cores must be synchronized. Each frame has at least two blocking operations: Flush and Swap, which terminate the frame.
  • FIG. 11 presents a flowchart describing an algorithm for distributing polygons among multiple GPU-driven pipeline cores, according to an illustrative embodiment of the present invention. The frame activity starts with distributing blocks of data among GPUs. Each graphic operation is tested for blocking mode at step 1112. In a regular path (non-blocking path), data is redirected to the designated pipeline core at step 1113. This loop is repeated until a blocking operation is detected.
  • When the blocking operation is detected, all pipeline cores must be synchronized at step 1114 by at least the following sequence:
      • performing a flush operation in order to terminate rendering and clean up the internal pipeline (flushing) in each pipeline core;
      • performing a composition in order to merge the contents of all FBs into a single FB; and
      • transmitting the contents of said single FB back to all pipeline cores, in order to create a common ground for continuation.
  • The Swap operation activates the double-buffering mechanism, swapping the back and front color buffers. If Swap is detected at step 1115, it means that the composited frame must be terminated at all pipeline cores except pipeline0. All pipeline cores have the final composed contents in an FB designated to store said contents, but only the one connected to the screen (pipeline0) displays the image at step 1116.
  • Another case is that of operations which are applied globally to the scene and need to be broadcast to all the pipeline cores. If one of the other blocking operations is identified, such as Alpha blending for transparency, then all pipeline cores are flushed as before at step 1114 and merged into a common FB. This time the Swap operation is not detected (step 1115); therefore all pipeline cores have the same data, and as long as the blocking mode is on (step 1117), all of them keep processing the same data (step 1118). If the end of the blocking mode is detected at step 1117, the pipeline cores return to working on their designated data (step 1113).
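The handling of blocking operations in FIG. 11 can be pictured as a simple command loop, sketched below in C++. The Command type and the flush/composite/broadcast/display hooks are placeholders rather than the driver's real interfaces; the step numbers in the comments refer to the flowchart steps discussed above.

```cpp
// Sketch of the FIG. 11 blocking-operation handling as a command loop; all
// types and hooks are placeholders, not the driver's real interfaces.
#include <cstdio>
#include <vector>

enum class Op { Draw, Flush, Swap, AlphaBlend };
struct Command { Op op; int block; };   // block index used for designated routing

bool isBlocking(Op op) { return op == Op::Flush || op == Op::Swap || op == Op::AlphaBlend; }

void routeToDesignatedCore(const Command& c, int numCores) {        // step 1113
    std::printf("block %d -> core %d\n", c.block, c.block % numCores);
}
void flushAllCores()         { std::printf("flush all pipeline cores\n"); }
void compositeAndBroadcast() { std::printf("composite FBs, broadcast result to all cores\n"); }
void displayFromPipeline0()  { std::printf("pipeline0 displays the composed frame\n"); }

int main() {
    const int numCores = 4;
    const std::vector<Command> frame = {
        {Op::Draw, 0}, {Op::Draw, 1}, {Op::AlphaBlend, -1},
        {Op::Draw, 2}, {Op::Flush, -1}, {Op::Swap, -1},
    };
    for (const Command& c : frame) {
        if (!isBlocking(c.op)) {          // regular, non-blocking path (step 1113)
            routeToDesignatedCore(c, numCores);
            continue;
        }
        flushAllCores();                  // step 1114: synchronize all cores
        compositeAndBroadcast();          // merge FBs and share the common ground
        if (c.op == Op::Swap)             // steps 1115/1116: only pipeline0 displays
            displayFromPipeline0();
        // For other blocking ops (e.g. alpha blending) all cores keep processing
        // the same data until blocking mode ends (steps 1117/1118).
    }
    return 0;
}
```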
  • The relative advantage of object division depends very much on the depth complexity of the scene. Depth complexity is the number of fragment replacements as a result of depth tests (the number of polygons drawn on every pixel). In the ideal case of no fragment replacement (e.g. all polygons of the scene are located at the same depth level), the fill is reduced in proportion to the reduced number of polygons (e.g. by half for two pipeline cores). However, when depth complexity becomes high, the advantage of object division drops, and in some cases image division may even perform better, e.g. in applications with a small number of polygons and a high volume of textures.
  • In addition, the present invention introduces a dynamic load-balancing technique that combines the object-division method with the image-division and time-division methods in the image and time domains, based on the load exhibited by previous processing stages. Combining all three parallel methods into a unified framework dramatically increases the frame-rate stability of the graphics system.
  • FIG. 12 discloses a sample configuration of the system, employing 8 pipeline cores, according to an embodiment of the present invention. According to the above sample configuration, a balanced graphic application is assumed. The pipeline cores are divided into two groups for time division parallelism. Pipeline cores indexed with 1, 2, 3, and 4 are configured to process even frames and pipeline cores indexed with 5, 6, 7, and 8 are configured to process odd frames. Within each group, two pipeline core subgroups are set for image division: the pipeline cores with the lower indexes (1,2 and 5,6 respectively) are configured to process half of the screen, and the high-indexed pipeline cores (3,4 and 7,8 respectively) are configured to process the other half. Finally, for the object division, pipeline cores indexed with 1, 3, 5 and 7 are fed with half of the objects, and pipeline cores indexed with 2, 4, 6 and 8 are fed with the other half of the objects.
  • If at some point the system detects that the bottlenecks exhibited in previous frames occur at the raster stage of the pipeline, it means that fragment processing dominates the time it takes to render the frames and that the configuration is imbalanced. At that point the pipeline cores are reconfigured, so that each pipeline core renders a quarter of the screen within the respective frame. The original partition for time division, between pipeline cores 1, 2, 3, 4 and 5, 6, 7, 8, still holds, but pipeline cores 2 and 5 are configured to render the first quarter of the screen in even and odd frames respectively. Pipeline cores 1 and 6 render the second quarter, pipeline cores 4 and 7 the third quarter, and pipeline cores 3 and 8 the fourth quarter. No object division is implied.
  • In addition, if at some point the system detects that the bottleneck exhibited in previous frames occurs at the geometry stage of the pipe, the pipeline cores are reconfigured, so that each pipeline core processes a quarter of the geometrical data within the respective frame. That is, pipeline cores 3 and 5 are configured to process the first quarter of the polygons in even and odd frames respectively. Pipeline cores 1 and 7 process the second quarter, pipeline cores 4 and 6 the third quarter, and pipeline cores 2 and 8 the fourth quarter. No image division is implied.
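The base assignment of FIG. 12 can be written out mechanically, as in the short C++ program below. Reconfiguring for a raster-stage or geometry-stage bottleneck, as described above, amounts to replacing this static mapping with one of the quarter-screen or quarter-geometry mappings; the table itself comes from the text, and the code is only a convenience for enumerating it.

```cpp
// Enumerates the sample 8-core assignment of FIG. 12: cores 1-4 take even frames
// and 5-8 odd frames (time division); within each group the lower-indexed pair
// takes one half of the screen (image division); odd-indexed cores take one half
// of the objects and even-indexed cores the other half (object division).
#include <cstdio>

int main() {
    for (int core = 1; core <= 8; ++core) {
        const char* frames  = (core <= 4) ? "even frames" : "odd frames";
        const bool lowPair  = ((core - 1) % 4) < 2;   // cores 1,2 and 5,6
        const char* screen  = lowPair ? "first half of screen" : "second half of screen";
        const char* objects = (core % 2 == 1) ? "first half of objects"
                                              : "second half of objects";
        std::printf("core %d: %s, %s, %s\n", core, frames, screen, objects);
    }
    return 0;
}
```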
  • It should be noted that eight pipeline cores are sufficient to combine all three parallel modes (time, image and object division) per frame. A number of pipeline cores larger than eight also enables all three modes to be combined, but in a non-symmetric fashion. Flexibility also exists in the frame count of a time-division cycle. In the above example, the cluster of eight pipeline cores was broken down into two groups, each group handling a frame. However, it is possible to extend the number of frames in a time-division sequence beyond two frames, for example to three or four frames.
  • A smaller number of pipeline cores still allows the parallel modes to be combined, but only two modes at a time. For example, taking only four pipeline cores enables the image and object division modes to be combined, without the time-division mode. This is readily seen in FIG. 12 by taking the group of pipeline cores 1-4, which is the left cluster. Similarly, the group of pipeline cores 1, 2, 5 and 6, which constitutes the upper cluster, employs both object and time division modes. Finally, the configuration of the group of pipeline cores 2, 4, 5 and 6, which is the middle cluster, employs image and time division modes.
  • It should be noted that, similarly to the above embodiments, any combination of the parallel modes can be scheduled to evenly balance the graphics load.
  • It should also be noted that, according to the present invention, the parallelization process among all pipeline cores may be based on the object division mode, the image division mode, the time division mode, or any combination thereof, in order to optimize the processing performance of each frame.
  • The decision on parallel mode is done on a per-frame basis, based on the above profiling and analysis. It is then carried out by reconfiguration of the parallelization scheme, as described above and shown in FIGS. 8, 9, 10 and 12.
  • The MP-SOC architecture described in great detail hereinabove can be readily adapted for use in diverse kinds of graphics processing and display systems. While the illustrative embodiments of the present invention have been described in connection with PC-type computing systems, it is understood that the present invention can be used to improve graphics performance in diverse kinds of systems, including mobile computing devices and embedded systems, as well as scientific and industrial computing systems supporting graphics visualization of photo-realistic quality.
  • It is understood that the graphics processing and display technology described in the illustrative embodiments of the present invention may be modified in a variety of ways which will become readily apparent to those skilled in the art having the benefit of the novel teachings disclosed herein. All such modifications and variations of the illustrative embodiments thereof shall be deemed to be within the scope and spirit of the present invention as defined by the Claims to Invention appended hereto.

Claims (16)

1. A PC-based computing system comprising:
a system memory for storing software graphics applications, software drivers and graphics libraries;
an operating system (OS), stored in said system memory;
one or more graphics applications, stored in said system memory, for generating a stream of geometrical data and graphics commands supporting (i) the representation of one or more 3D objects in a scene having 3D geometrical characteristics and (ii) the viewing of images of said one or more 3D objects in said scene during an interactive process carried out between said PC-based computing system and a user of said PC-based computing system;
one or more graphic libraries, stored in said system memory, for storing data used to implement said stream of geometrical data and graphics commands;
a central processing unit (CPU), for executing said OS, said graphics applications, said drivers and said graphics libraries;
a CPU bus;
a CPU/memory interface module for interfacing with said CPU by way of CPU bus;
a display surface for displaying said images by graphically displaying frames of pixel data;
a plurality of GPU-driven pipeline cores arranged in a parallel architecture and operating according to a parallelization mode of operation so that said GPU-driven pipeline cores process data in a parallel manner;
a silicon chip having a routing unit interfacing with said CPU/memory interface module and said GPU-driven pipeline cores; and
software multi-pipe drivers, stored in said system memory, and including a GPU driver module allowing said GPU-driven pipeline cores to interact with said OS and said graphic libraries;
wherein said CPU/memory interface module provides an interface between said software multi-pipe drivers and said silicon chip;
wherein said routing unit (i) routes the stream of geometrical data and graphic commands from said graphics application to one or more of said GPU-driven pipeline cores, and (ii) routes pixel data output from one or more of said GPU-driven pipeline cores during the composition of each frame of pixel data corresponding to a final image, for display on said display surface;
wherein said software multi-pipe drivers perform the following functions:
(i) controlling the operation of said silicon chip,
(ii) interacting with said OS and said graphic libraries, and
(iii) forwarding said stream of geometrical data and graphic commands, or a portion thereof, over said CPU bus to each said GPU-driven pipeline core; and
wherein, for each image of said 3D object to be generated and displayed on said display surface, the following operations are performed:
(i) said silicon chip uses said routing unit to distribute said stream of geometrical data and graphic commands, or a portion thereof, to said GPU-driven pipeline cores,
(ii) one or more of said GPU-driven pipeline cores process said stream of geometrical data and graphic commands, or a portion thereof, during the generation of each said frame, while operating in said parallelization mode, so as to generate pixel data corresponding to at least a portion of said image, and
(iii) said silicon chip uses said routing unit to route said pixel data output from one or more of said GPU-driven pipeline cores and compose a frame of pixel data, representative of the image of said 3D object, for display on said display surface.
2. The PC-based computing system of claim 1, wherein said silicon chip further comprises a control unit for accepting commands from said software multi-pipe drivers, and controlling components within said silicon chip, including said routing unit.
3. The PC-based computing system of claim 1, wherein said silicon chip further comprises a memory unit for storing intermediate processing results from one or more of said multiple GPU-driven pipeline cores, and data required for composition and transferring frames of pixel data for display.
4. The PC-based computing system of claim 1, wherein said CPU/memory interface module is an I/O chip or chipset.
5. The PC-based computing system of claim 1, wherein each said GPU-driven pipeline core has a frame buffer (FB) for storing a fragment of pixel data.
6. The PC-based computing system of claim 1, wherein said geometrical data comprises a set of scene polygons, textures and vertex objects.
7. The PC-based computing system of claim 1, wherein said graphics commands includes commands selected from the group consisting of display lists and display vertex arrays.
8. The PC-based computing system of claim 1, wherein said graphic libraries are selected from the group consisting of OpenGL and DirectX.
9. The PC-based computing system of claim 1, wherein said software multi-pipe drivers coordinate the operation of said GPU-driven pipeline cores so as to generate a continuous sequence of frames of pixel data for displaying a sequence of images of said 3D object on said display surface.
10. The PC-based computing system of claim 1, wherein each pixel associated with a frame of pixel data includes attributes selected from the group consisting of color, alpha, position, depth, and stencil.
11. The PC-based computing system of claim 1, wherein said parallelization mode of operation is a time division mode of parallel operation.
12. The PC-based computing system of claim 1, wherein said parallelization mode of operation is an image division mode of parallel operation.
13. The PC-based computing system of claim 1, wherein said parallelization mode of operation is an object division mode of parallel operation.
14. The PC-based computing system of claim 16, wherein each said 3D object is decomposable into a plurality of polygons, and wherein said geometrical data comprises the vertices of said polygons.
15. The PC-based computing system of claim 1, wherein at least one of said GPU-driven pipeline cores is realized on said silicon chip, along with said routing unit.
16. The PC-based computing system of claim 1, which further comprises a graphics card, and wherein said silicon chip is mounted on said graphics card.
US11/977,718 2005-01-25 2007-10-25 PC-based computing system employing a silicon chip with a routing unit to distribute geometrical data and graphics commands to multiple GPU-driven pipeline cores during a mode of parallel operation Abandoned US20080136826A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/977,718 US20080136826A1 (en) 2005-01-25 2007-10-25 PC-based computing system employing a silicon chip with a routing unit to distribute geometrical data and graphics commands to multiple GPU-driven pipeline cores during a mode of parallel operation

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US64714605P 2005-01-25 2005-01-25
US11/340,402 US7812844B2 (en) 2004-01-28 2006-01-25 PC-based computing system employing a silicon chip having a routing unit and a control unit for parallelizing multiple GPU-driven pipeline cores according to the object division mode of parallel operation during the running of a graphics application
US11/977,718 US20080136826A1 (en) 2005-01-25 2007-10-25 PC-based computing system employing a silicon chip with a routing unit to distribute geometrical data and graphics commands to multiple GPU-driven pipeline cores during a mode of parallel operation

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/340,402 Continuation US7812844B2 (en) 2003-11-19 2006-01-25 PC-based computing system employing a silicon chip having a routing unit and a control unit for parallelizing multiple GPU-driven pipeline cores according to the object division mode of parallel operation during the running of a graphics application

Publications (1)

Publication Number Publication Date
US20080136826A1 true US20080136826A1 (en) 2008-06-12

Family

ID=37108069

Family Applications (19)

Application Number Title Priority Date Filing Date
US11/340,402 Active 2028-05-11 US7812844B2 (en) 2003-11-19 2006-01-25 PC-based computing system employing a silicon chip having a routing unit and a control unit for parallelizing multiple GPU-driven pipeline cores according to the object division mode of parallel operation during the running of a graphics application
US11/386,454 Active 2026-03-03 US7834880B2 (en) 2003-11-19 2006-03-22 Graphics processing and display system employing multiple graphics cores on a silicon chip of monolithic construction
US11/977,719 Abandoned US20080117218A1 (en) 2004-01-28 2007-10-25 PC-based computing system employing parallelized GPU-driven pipeline cores integrated with a routing unit and control unit on a silicon chip of monolithic construction
US11/977,734 Abandoned US20080136827A1 (en) 2005-01-25 2007-10-25 PC-based computing system employing a silicon chip having a routing unit and a control unit for parallelizing multiple GPU-driven pipeline cores during a graphics application
US11/977,718 Abandoned US20080136826A1 (en) 2005-01-25 2007-10-25 PC-based computing system employing a silicon chip with a routing unit to distribute geometrical data and graphics commands to multiple GPU-driven pipeline cores during a mode of parallel operation
US11/978,239 Expired - Lifetime US7812846B2 (en) 2003-11-19 2007-10-26 PC-based computing system employing a silicon chip of monolithic construction having a routing unit, a control unit and a profiling unit for parallelizing the operation of multiple GPU-driven pipeline cores according to the object division mode of parallel operation
US11/978,220 Expired - Lifetime US7843457B2 (en) 2003-11-19 2007-10-26 PC-based computing systems employing a bridge chip having a routing unit for distributing geometrical data and graphics commands to parallelized GPU-driven pipeline cores supported on a plurality of graphics cards and said bridge chip during the running of a graphics application
US11/978,148 Abandoned US20080129742A1 (en) 2004-01-28 2007-10-26 PC-based computing system employing a bridge chip having a routing unit and a control unit for parallelizing multiple GPU-driven pipeline cores during the running of a graphics application
US11/978,228 Active 2024-11-04 US7808504B2 (en) 2004-01-28 2007-10-26 PC-based computing system having an integrated graphics subsystem supporting parallel graphics processing operations across a plurality of different graphics processing units (GPUS) from the same or different vendors, in a manner transparent to graphics applications
US11/978,146 Abandoned US20080129741A1 (en) 2004-01-28 2007-10-26 PC-based computing system employing a bridge chip having a routing unit, a control unit and a profiling unit for parallelizing the operation of multiple GPU-driven pipeline cores according to the object division mode of parallel operation
US11/978,149 Abandoned US20080129743A1 (en) 2004-01-28 2007-10-26 Silicon chip of monolithic construction for integration in a PC-based computing system and having multiple GPU-driven pipeline cores supporting multiple modes of parallelization dynamically controlled while running a graphics application
US11/978,226 Active US7812845B2 (en) 2004-01-28 2007-10-26 PC-based computing system employing a silicon chip implementing parallelized GPU-driven pipelines cores supporting multiple modes of parallelization dynamically controlled while running a graphics application
US11/978,229 Abandoned US20080122850A1 (en) 2004-01-28 2007-10-26 PC-based computing system employing a bridge chip implementing parallelized GPU-driven pipelines cores supporting multiple modes of parallelization dynamically controlled while running a graphics application
US12/946,032 Active US8754897B2 (en) 2004-01-28 2010-11-15 Silicon chip of a monolithic construction for use in implementing multiple graphic cores in a graphics processing and display subsystem
US14/281,195 Active US10147157B2 (en) 2005-01-25 2014-05-19 System on chip having processing and graphics units
US14/304,991 Active 2026-04-01 US9659340B2 (en) 2004-01-28 2014-06-16 Silicon chip of a monolithic construction for use in implementing multiple graphic cores in a graphics processing and display subsystem
US16/208,000 Active US10614545B2 (en) 2005-01-25 2018-12-03 System on chip having processing and graphics units
US16/791,770 Active US10867364B2 (en) 2005-01-25 2020-02-14 System on chip having processing and graphics units
US17/121,468 Active US11341602B2 (en) 2005-01-25 2020-12-14 System on chip having processing and graphics units

Country Status (6)

Country Link
US (19) US7812844B2 (en)
EP (1) EP1846834A2 (en)
JP (1) JP2008538620A (en)
CN (1) CN101849227A (en)
CA (1) CA2595085A1 (en)
WO (1) WO2006117683A2 (en)

US6181352B1 (en) 1999-03-22 2001-01-30 Nvidia Corporation Graphics pipeline selectively providing multiple pixels or multiple textures
DE19917092A1 (en) 1999-04-15 2000-10-26 Sp3D Chip Design Gmbh Accelerated method for rasterizing a graphics primitive, producing pixel data for the primitive starting from primitive instruction data
US6442656B1 (en) 1999-08-18 2002-08-27 Ati Technologies Srl Method and apparatus for interfacing memory with a bus
US6352479B1 (en) 1999-08-31 2002-03-05 Nvidia U.S. Investment Company Interactive gaming server and online community forum
US6578068B1 (en) 1999-08-31 2003-06-10 Accenture Llp Load balancer in environment services patterns
US6657635B1 (en) 1999-09-03 2003-12-02 Nvidia Corporation Binning flush in graphics data processing
US6417851B1 (en) 1999-12-06 2002-07-09 Nvidia Corporation Method and apparatus for lighting module in a graphics processor
US6353439B1 (en) 1999-12-06 2002-03-05 Nvidia Corporation System, method and computer program product for a blending operation in a transform module of a computer graphics pipeline
US6844880B1 (en) 1999-12-06 2005-01-18 Nvidia Corporation System, method and computer program product for an improved programmable vertex processing model with instruction set
US6870540B1 (en) 1999-12-06 2005-03-22 Nvidia Corporation System, method and computer program product for a programmable pixel processing model with instruction set
US6573900B1 (en) 1999-12-06 2003-06-03 Nvidia Corporation Method, apparatus and article of manufacture for a sequencer in a transform/lighting module capable of processing multiple independent execution threads
US7002577B2 (en) 1999-12-06 2006-02-21 Nvidia Corporation Clipping system and method for a single graphics semiconductor platform
US6198488B1 (en) 1999-12-06 2001-03-06 Nvidia Transform, lighting and rasterization system embodied on a single semiconductor platform
US6452595B1 (en) 1999-12-06 2002-09-17 Nvidia Corporation Integrated graphics processing unit with antialiasing
US6473086B1 (en) 1999-12-09 2002-10-29 Ati International Srl Method and apparatus for graphics processing using parallel graphics processors
US6557065B1 (en) 1999-12-20 2003-04-29 Intel Corporation CPU expandability bus
US6760031B1 (en) 1999-12-31 2004-07-06 Intel Corporation Upgrading an integrated graphics subsystem
WO2001069207A1 (en) 2000-03-16 2001-09-20 Fuji Photo Film Co., Ltd. Measuring method and instrument utilizing total reflection attenuation
US6831652B1 (en) 2000-03-24 2004-12-14 Ati International, Srl Method and system for storing graphics data
US6975319B1 (en) 2000-03-24 2005-12-13 Nvidia Corporation System, method and article of manufacture for calculating a level of detail (LOD) during computer graphics processing
US20030132291A1 (en) 2002-01-11 2003-07-17 Metrologic Instruments, Inc. Point of sale (POS) station having bar code reading system with integrated internet-enabled customer-kiosk terminal
US6741243B2 (en) 2000-05-01 2004-05-25 Broadcom Corporation Method and system for reducing overflows in a computer graphics system
US6725457B1 (en) 2000-05-17 2004-04-20 Nvidia Corporation Semaphore enhancement to improve system performance
US6633296B1 (en) 2000-05-26 2003-10-14 Ati International Srl Apparatus for providing data to a plurality of graphics processors and method thereof
US6670958B1 (en) 2000-05-26 2003-12-30 Ati International, Srl Method and apparatus for routing data to multiple graphics devices
US6728820B1 (en) 2000-05-26 2004-04-27 Ati International Srl Method of configuring, controlling, and accessing a bridge and apparatus therefor
US6789154B1 (en) 2000-05-26 2004-09-07 Ati International, Srl Apparatus and method for transmitting data
US6662257B1 (en) 2000-05-26 2003-12-09 Ati International Srl Multiple device bridge apparatus and method thereof
US6664963B1 (en) 2000-05-31 2003-12-16 Nvidia Corporation System, method and computer program product for programmable shading using pixel shaders
US6724394B1 (en) 2000-05-31 2004-04-20 Nvidia Corporation Programmable pixel shading architecture
US6532013B1 (en) 2000-05-31 2003-03-11 Nvidia Corporation System, method and article of manufacture for pixel shaders for programmable shading
US6593923B1 (en) 2000-05-31 2003-07-15 Nvidia Corporation System, method and article of manufacture for shadow mapping
US6690372B2 (en) 2000-05-31 2004-02-10 Nvidia Corporation System, method and article of manufacture for shadow mapping
JP2002008060A (en) 2000-06-23 2002-01-11 Hitachi Ltd Data processing method, recording medium and data processing device
US6801202B2 (en) 2000-06-29 2004-10-05 Sun Microsystems, Inc. Graphics system configured to parallel-process graphics data using multiple pipelines
US7405734B2 (en) 2000-07-18 2008-07-29 Silicon Graphics, Inc. Method and system for presenting three-dimensional computer graphics images using multiple graphics processing units
US6959110B1 (en) 2000-08-17 2005-10-25 Nvidia Corporation Multi-mode texture compression algorithm
US7116331B1 (en) 2000-08-23 2006-10-03 Intel Corporation Memory controller hub interface
US6842180B1 (en) 2000-09-20 2005-01-11 Intel Corporation Opportunistic sharing of graphics resources to enhance CPU performance in an integrated microprocessor
US6885378B1 (en) * 2000-09-28 2005-04-26 Intel Corporation Method and apparatus for the implementation of full-scene anti-aliasing supersampling
US6532525B1 (en) 2000-09-29 2003-03-11 Ati Technologies, Inc. Method and apparatus for accessing memory
US6502173B1 (en) 2000-09-29 2002-12-31 Ati Technologies, Inc. System for accessing memory and method therefore
US6731298B1 (en) 2000-10-02 2004-05-04 Nvidia Corporation System, method and article of manufacture for z-texture mapping
US6828980B1 (en) 2000-10-02 2004-12-07 Nvidia Corporation System, method and computer program product for z-texture mapping
JP3580789B2 (en) 2000-10-10 2004-10-27 株式会社ソニー・コンピュータエンタテインメント Data communication system and method, computer program, recording medium
US6961057B1 (en) 2000-10-12 2005-11-01 Nvidia Corporation Method and apparatus for managing and accessing depth data in a computer graphics system
US6362997B1 (en) 2000-10-16 2002-03-26 Nvidia Memory system for use on a circuit board in which the number of loads are minimized
US6636212B1 (en) 2000-11-14 2003-10-21 Nvidia Corporation Method and apparatus for determining visibility of groups of pixels
US6778181B1 (en) 2000-12-07 2004-08-17 Nvidia Corporation Graphics processing system having a virtual texturing array
US7027972B1 (en) 2001-01-24 2006-04-11 Ati Technologies, Inc. System for collecting and analyzing graphics data and method thereof
US7358974B2 (en) 2001-01-29 2008-04-15 Silicon Graphics, Inc. Method and system for minimizing an amount of data needed to test data against subarea boundaries in spatially composited digital video
US6888580B2 (en) 2001-02-27 2005-05-03 Ati Technologies Inc. Integrated single and dual television tuner having improved fine tuning
US7130316B2 (en) 2001-04-11 2006-10-31 Ati Technologies, Inc. System for frame based audio synchronization and method thereof
US6542971B1 (en) 2001-04-23 2003-04-01 Nvidia Corporation Memory access system and method employing an auxiliary buffer
US6664960B2 (en) 2001-05-10 2003-12-16 Ati Technologies Inc. Apparatus for processing non-planar video graphics primitives and associated method of operation
US6700583B2 (en) 2001-05-14 2004-03-02 Ati Technologies, Inc. Configurable buffer for multipass applications
US6894687B1 (en) 2001-06-08 2005-05-17 Nvidia Corporation System, method and computer program product for vertex attribute aliasing in a graphics pipeline
US6697064B1 (en) 2001-06-08 2004-02-24 Nvidia Corporation System, method and computer program product for matrix tracking during vertex processing in a graphics pipeline
WO2002101497A2 (en) 2001-06-08 2002-12-19 Nvidia Corporation System, method and computer program product for programmable fragment processing in a graphics pipeline
JP2003030641A (en) 2001-07-19 2003-01-31 Nec System Technologies Ltd Plotting device, parallel plotting method therefor and parallel plotting program
US6828987B2 (en) 2001-08-07 2004-12-07 Ati Technologies, Inc. Method and apparatus for processing video and graphics data
US6778189B1 (en) 2001-08-24 2004-08-17 Nvidia Corporation Two-sided stencil testing system and method
US6744433B1 (en) 2001-08-31 2004-06-01 Nvidia Corporation System and method for using and collecting information from a plurality of depth layers
US6989840B1 (en) 2001-08-31 2006-01-24 Nvidia Corporation Order-independent transparency rendering system and method
US6704025B1 (en) 2001-08-31 2004-03-09 Nvidia Corporation System and method for dual-depth shadow-mapping
US6947047B1 (en) 2001-09-20 2005-09-20 Nvidia Corporation Method and system for programmable pipelined graphics processing with branching instructions
US6938176B1 (en) 2001-10-05 2005-08-30 Nvidia Corporation Method and apparatus for power management of graphics processors and subsystems that allow the subsystems to respond to accesses when subsystems are idle
US7091971B2 (en) 2001-10-29 2006-08-15 Ati Technologies, Inc. System, method, and apparatus for multi-level hierarchical Z buffering
US6999076B2 (en) 2001-10-29 2006-02-14 Ati Technologies, Inc. System, method, and apparatus for early culling
US6677953B1 (en) 2001-11-08 2004-01-13 Nvidia Corporation Hardware viewport system and method for use in a graphics pipeline
US20030117971A1 (en) 2001-12-21 2003-06-26 Celoxica Ltd. System, method, and article of manufacture for profiling an executable hardware model using calls to profiling functions
US6683614B2 (en) 2001-12-21 2004-01-27 Hewlett-Packard Development Company, L.P. System and method for automatically configuring graphics pipelines by tracking a region of interest in a computer graphical display system
US7012610B2 (en) 2002-01-04 2006-03-14 Ati Technologies, Inc. Portable device for providing dual display and method thereof
US6774895B1 (en) 2002-02-01 2004-08-10 Nvidia Corporation System and method for depth clamping in a hardware graphics pipeline
US6829689B1 (en) 2002-02-12 2004-12-07 Nvidia Corporation Method and system for memory access arbitration for minimizing read/write turnaround penalties
JP4079410B2 (en) 2002-02-15 2008-04-23 株式会社バンダイナムコゲームス Image generation system, program, and information storage medium
US6947865B1 (en) 2002-02-15 2005-09-20 Nvidia Corporation Method and system for dynamic power supply voltage adjustment for a semiconductor integrated circuit device
US6933943B2 (en) * 2002-02-27 2005-08-23 Hewlett-Packard Development Company, L.P. Distributed resource architecture and system
US6700580B2 (en) 2002-03-01 2004-03-02 Hewlett-Packard Development Company, L.P. System and method utilizing multiple pipelines to render graphical data
US6853380B2 (en) 2002-03-04 2005-02-08 Hewlett-Packard Development Company, L.P. Graphical display system and method
US20030171907A1 (en) 2002-03-06 2003-09-11 Shay Gal-On Methods and Apparatus for Optimizing Applications on Configurable Processors
US6919896B2 (en) 2002-03-11 2005-07-19 Sony Computer Entertainment Inc. System and method of optimizing graphics processing
US7009605B2 (en) 2002-03-20 2006-03-07 Nvidia Corporation System, method and computer program product for generating a shader program
CN1656465B (en) * 2002-03-22 2010-05-26 迈克尔·F·迪林 Method and system for rendering graphics by executing rendering computations on multiple interconnected nodes
US20030212735A1 (en) 2002-05-13 2003-11-13 Nvidia Corporation Method and apparatus for providing an integrated network of processors
US20040153778A1 (en) 2002-06-12 2004-08-05 Ati Technologies, Inc. Method, system and software for configuring a graphics processing communication mode
US6980209B1 (en) 2002-06-14 2005-12-27 Nvidia Corporation Method and system for scalable, dataflow-based, programmable processing of graphics data
US6812927B1 (en) 2002-06-18 2004-11-02 Nvidia Corporation System and method for avoiding depth clears using a stencil buffer
US6876362B1 (en) 2002-07-10 2005-04-05 Nvidia Corporation Omnidirectional shadow texture mapping
US6797998B2 (en) 2002-07-16 2004-09-28 Nvidia Corporation Multi-configuration GPU interface device
US6954204B2 (en) 2002-07-18 2005-10-11 Nvidia Corporation Programmable graphics system and method using flexible, high-precision data formats
US6825843B2 (en) 2002-07-18 2004-11-30 Nvidia Corporation Method and apparatus for loop and branch instructions in a programmable graphics pipeline
US6864893B2 (en) 2002-07-19 2005-03-08 Nvidia Corporation Method and apparatus for modifying depth values using pixel programs
US6952206B1 (en) 2002-08-12 2005-10-04 Nvidia Corporation Graphics application program interface system and method for accelerating graphics processing
US7112884B2 (en) 2002-08-23 2006-09-26 Ati Technologies, Inc. Integrated circuit having memory disposed thereon and method of making thereof
US6779069B1 (en) 2002-09-04 2004-08-17 Nvidia Corporation Computer system with source-synchronous digital link
JP4467267B2 (en) 2002-09-06 2010-05-26 株式会社ソニー・コンピュータエンタテインメント Image processing method, image processing apparatus, and image processing system
US7061495B1 (en) * 2002-11-18 2006-06-13 Ati Technologies, Inc. Method and apparatus for rasterizer interpolation
US7633506B1 (en) * 2002-11-27 2009-12-15 Ati Technologies Ulc Parallel pipeline graphics system
US7324547B1 (en) 2002-12-13 2008-01-29 Nvidia Corporation Internet protocol (IP) router residing in a processor chipset
US6885376B2 (en) 2002-12-30 2005-04-26 Silicon Graphics, Inc. System, method, and computer program product for near-real time load balancing across multiple rendering pipelines
US7233964B2 (en) 2003-01-28 2007-06-19 Lucid Information Technology Ltd. Method and system for compositing three-dimensional graphics images using associative decision mechanism
US7145565B2 (en) 2003-02-27 2006-12-05 Nvidia Corporation Depth bounds testing
US6911983B2 (en) 2003-03-12 2005-06-28 Nvidia Corporation Double-buffering of pixel data using copy-on-write semantics
US7129909B1 (en) 2003-04-09 2006-10-31 Nvidia Corporation Method and system using compressed display mode list
US6900810B1 (en) 2003-04-10 2005-05-31 Nvidia Corporation User programmable geometry engine
US6940515B1 (en) 2003-04-10 2005-09-06 Nvidia Corporation User programmable primitive engine
US7120816B2 (en) 2003-04-17 2006-10-10 Nvidia Corporation Method for testing synchronization and connection status of a graphics processing unit module
US7068278B1 (en) 2003-04-17 2006-06-27 Nvidia Corporation Synchronized graphics processing units
US7483031B2 (en) 2003-04-17 2009-01-27 Nvidia Corporation Method for synchronizing graphics processing units
US7038678B2 (en) 2003-05-21 2006-05-02 Nvidia Corporation Dependent texture shadow antialiasing
US7415708B2 (en) 2003-06-26 2008-08-19 Intel Corporation Virtual machine management using processor state information
US7038685B1 (en) 2003-06-30 2006-05-02 Nvidia Corporation Programmable graphics processor for multithreaded execution of programs
US7119808B2 (en) 2003-07-15 2006-10-10 Alienware Labs Corp. Multiple parallel processor computer graphics system
US6995767B1 (en) 2003-07-31 2006-02-07 Nvidia Corporation Trilinear optimization for texture filtering
WO2005015504A1 (en) 2003-08-07 2005-02-17 Renesas Technology Corp. Image processing semiconductor processor
US7525547B1 (en) 2003-08-12 2009-04-28 Nvidia Corporation Programming multiple chips from a command buffer to process multiple images
US7015915B1 (en) 2003-08-12 2006-03-21 Nvidia Corporation Programming multiple chips from a command buffer
US6956579B1 (en) 2003-08-18 2005-10-18 Nvidia Corporation Private addressing in a multi-processor graphics processing system
US7075541B2 (en) 2003-08-18 2006-07-11 Nvidia Corporation Adaptive load balancing in a multi-processor graphics processing system
US7388581B1 (en) 2003-08-28 2008-06-17 Nvidia Corporation Asynchronous conditional graphics rendering
US8250412B2 (en) 2003-09-26 2012-08-21 Ati Technologies Ulc Method and apparatus for monitoring and resetting a co-processor
US20050086040A1 (en) * 2003-10-02 2005-04-21 Curtis Davis System incorporating physics processing unit
US7782325B2 (en) 2003-10-22 2010-08-24 Alienware Labs Corporation Motherboard for supporting multiple graphics cards
US8035646B2 (en) 2003-11-14 2011-10-11 Microsoft Corporation Systems and methods for downloading algorithmic elements to a coprocessor and corresponding techniques
US7221896B2 (en) * 2003-12-09 2007-05-22 Sharp Kabushiki Kaisha Fixing device for fixing an unfixed developing agent on a recording medium and image forming apparatus including the same
US7015914B1 (en) 2003-12-10 2006-03-21 Nvidia Corporation Multiple data buffers for processing graphics data
US7053901B2 (en) 2003-12-11 2006-05-30 Nvidia Corporation System and method for accelerating a special purpose processor
US7248261B1 (en) 2003-12-15 2007-07-24 Nvidia Corporation Method and apparatus to accelerate rendering of shadow effects for computer-generated images
JP3879002B2 (en) 2003-12-26 2007-02-07 国立大学法人宇都宮大学 Self-optimizing arithmetic unit
US6975325B2 (en) 2004-01-23 2005-12-13 Ati Technologies Inc. Method and apparatus for graphics processing using state and shader management
US7259606B2 (en) 2004-01-27 2007-08-21 Nvidia Corporation Data sampling clock edge placement training for high speed GPU-memory interface
US7483034B2 (en) 2004-02-25 2009-01-27 Siemens Medical Solutions Usa, Inc. System and method for GPU-based 3D nonrigid registration
US7289125B2 (en) 2004-02-27 2007-10-30 Nvidia Corporation Graphics device clustering with PCI-express
US7027062B2 (en) 2004-02-27 2006-04-11 Nvidia Corporation Register based queuing for texture requests
US20050275760A1 (en) 2004-03-02 2005-12-15 Nvidia Corporation Modifying a rasterized surface, such as by trimming
US7978194B2 (en) 2004-03-02 2011-07-12 Ati Technologies Ulc Method and apparatus for hierarchical Z buffering and stenciling
US20050195186A1 (en) 2004-03-02 2005-09-08 Ati Technologies Inc. Method and apparatus for object based visibility culling
US7315912B2 (en) 2004-04-01 2008-01-01 Nvidia Corporation Deadlock avoidance in a bus fabric
US7336284B2 (en) 2004-04-08 2008-02-26 Ati Technologies Inc. Two level cache memory architecture
US7265759B2 (en) 2004-04-09 2007-09-04 Nvidia Corporation Field changeable rendering system for a computing device
US6985152B2 (en) 2004-04-23 2006-01-10 Nvidia Corporation Point-to-point bus bridging without a bridge controller
US20050237329A1 (en) 2004-04-27 2005-10-27 Nvidia Corporation GPU rendering to system memory
US7738045B2 (en) 2004-05-03 2010-06-15 Broadcom Corporation Film-mode (3:2/2:2 Pulldown) detector, method and video device
US7079156B1 (en) 2004-05-14 2006-07-18 Nvidia Corporation Method and system for implementing multiple high precision and low precision interpolators for a graphics pipeline
US7426724B2 (en) 2004-07-02 2008-09-16 Nvidia Corporation Optimized chaining of vertex and fragment programs
US7218291B2 (en) 2004-09-13 2007-05-15 Nvidia Corporation Increased scalability in the fragment shading pipeline
US7868891B2 (en) 2004-09-16 2011-01-11 Nvidia Corporation Load balancing
US7571296B2 (en) 2004-11-11 2009-08-04 Nvidia Corporation Memory controller-adaptive 1T/2T timing control
US7633505B1 (en) 2004-11-17 2009-12-15 Nvidia Corporation Apparatus, system, and method for joint processing in graphics processing units
US7477256B1 (en) 2004-11-17 2009-01-13 Nvidia Corporation Connecting graphics adapters for scalable performance
US8066515B2 (en) 2004-11-17 2011-11-29 Nvidia Corporation Multiple graphics adapter connection systems
US7598958B1 (en) 2004-11-17 2009-10-06 Nvidia Corporation Multi-chip graphics processing unit apparatus, system, and method
US7275123B2 (en) 2004-12-06 2007-09-25 Nvidia Corporation Method and apparatus for providing peer-to-peer data transfer within a computing environment
US7451259B2 (en) 2004-12-06 2008-11-11 Nvidia Corporation Method and apparatus for providing peer-to-peer data transfer within a computing environment
US7372465B1 (en) 2004-12-17 2008-05-13 Nvidia Corporation Scalable graphics processing for remote display
US20060156399A1 (en) 2004-12-30 2006-07-13 Parmar Pankaj N System and method for implementing network security using a sequestered partition
US7924281B2 (en) 2005-03-09 2011-04-12 Ati Technologies Ulc System and method for determining illumination of a pixel by shadow planes
US7796095B2 (en) 2005-03-18 2010-09-14 Ati Technologies Ulc Display specific image processing in an integrated circuit
US7568056B2 (en) 2005-03-28 2009-07-28 Nvidia Corporation Host bus adapter that interfaces with host computer bus to multiple types of storage devices
US7681187B2 (en) 2005-03-31 2010-03-16 Nvidia Corporation Method and apparatus for register allocation in presence of hardware constraints
US20080143731A1 (en) 2005-05-24 2008-06-19 Jeffrey Cheng Video rendering across a high speed peripheral interconnect bus
US7817155B2 (en) 2005-05-24 2010-10-19 Ati Technologies Inc. Master/slave graphics adapter arrangement
US7539801B2 (en) 2005-05-27 2009-05-26 Ati Technologies Ulc Computing device with flexibly configurable expansion slots, and method of operation
US20060282604A1 (en) 2005-05-27 2006-12-14 Ati Technologies, Inc. Methods and apparatus for processing graphics data using multiple processing circuits
US7516301B1 (en) * 2005-12-16 2009-04-07 Nvidia Corporation Multiprocessor computing systems with heterogeneous processors
US7325086B2 (en) 2005-12-15 2008-01-29 Via Technologies, Inc. Method and system for multiple GPU support
US7728841B1 (en) 2005-12-19 2010-06-01 Nvidia Corporation Coherent shader output for multiple targets
US7768517B2 (en) 2006-02-21 2010-08-03 Nvidia Corporation Asymmetric multi-GPU processing
US8284204B2 (en) 2006-06-30 2012-10-09 Nokia Corporation Apparatus, method and a computer program product for providing a unified graphics pipeline for stereoscopic rendering
US7941791B2 (en) 2007-04-13 2011-05-10 Perry Wang Programming environment for heterogeneous processor resource integration

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6259460B1 (en) * 1998-03-26 2001-07-10 Silicon Graphics, Inc. Method for efficient handling of texture cache misses by recirculation
US6753878B1 (en) * 1999-03-08 2004-06-22 Hewlett-Packard Development Company, L.P. Parallel pipelined merge engines

Also Published As

Publication number Publication date
US20080129741A1 (en) 2008-06-05
US20200184594A1 (en) 2020-06-11
US7812844B2 (en) 2010-10-12
US10147157B2 (en) 2018-12-04
US20080117218A1 (en) 2008-05-22
US10867364B2 (en) 2020-12-15
US20080122850A1 (en) 2008-05-29
US9659340B2 (en) 2017-05-23
EP1846834A2 (en) 2007-10-24
US20110169841A1 (en) 2011-07-14
US7834880B2 (en) 2010-11-16
US20080136827A1 (en) 2008-06-12
JP2008538620A (en) 2008-10-30
US20080117219A1 (en) 2008-05-22
US7843457B2 (en) 2010-11-30
CN101849227A (en) 2010-09-29
US20140253565A1 (en) 2014-09-11
US7812846B2 (en) 2010-10-12
US7808504B2 (en) 2010-10-05
US10614545B2 (en) 2020-04-07
WO2006117683A3 (en) 2009-05-22
US8754897B2 (en) 2014-06-17
US20210104010A1 (en) 2021-04-08
US7812845B2 (en) 2010-10-12
WO2006117683A2 (en) 2006-11-09
US11341602B2 (en) 2022-05-24
US20060232590A1 (en) 2006-10-19
US20080129745A1 (en) 2008-06-05
US20080122851A1 (en) 2008-05-29
US20060279577A1 (en) 2006-12-14
US20190180408A1 (en) 2019-06-13
US20080129744A1 (en) 2008-06-05
US20080129743A1 (en) 2008-06-05
US20080129742A1 (en) 2008-06-05
CA2595085A1 (en) 2006-11-09
US20140292775A1 (en) 2014-10-02

Similar Documents

Publication Publication Date Title
US11341602B2 (en) System on chip having processing and graphics units
US9405586B2 (en) Method of dynamic load-balancing within a PC-based computing system employing a multiple GPU-based graphics pipeline architecture supporting multiple modes of GPU parallelization
US20090096798A1 (en) Graphics Processing and Display System Employing Multiple Graphics Cores on a Silicon Chip of Monolithic Construction

Legal Events

Date Code Title Description
AS Assignment
Owner name: LUCID INFORMATION TECHNOLOGY, LTD., ISRAEL
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BAKALASH, REUVEN;REMEZ, OFFIR;FOGEL, EFI;REEL/FRAME:021666/0395
Effective date: 20060403

STCB Information on status: application discontinuation
Free format text: EXPRESSLY ABANDONED -- DURING EXAMINATION

AS Assignment
Owner name: GOOGLE LLC, CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LUCIDLOGIX TECHNOLOGY LTD.;REEL/FRAME:046361/0169
Effective date: 20180131