WO2008048940A2 - Graphics processing unit with shared arithmetic logic unit - Google Patents

Graphics processing unit with shared arithmetic logic unit

Info

Publication number
WO2008048940A2
WO2008048940A2 (PCT/US2007/081428)
Authority
WO
WIPO (PCT)
Prior art keywords
stage
attribute
gpu
image data
setup
Prior art date
Application number
PCT/US2007/081428
Other languages
French (fr)
Other versions
WO2008048940A3 (en)
Inventor
Guofang Jiao
Brian Ruttenberg
Chun Yu
Yun Du
Original Assignee
Qualcomm Incorporated
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qualcomm Incorporated filed Critical Qualcomm Incorporated
Priority to JP2009533470A priority Critical patent/JP2010507175A/en
Priority to EP07854073A priority patent/EP2084670A2/en
Priority to CA002666064A priority patent/CA2666064A1/en
Publication of WO2008048940A2 publication Critical patent/WO2008048940A2/en
Publication of WO2008048940A3 publication Critical patent/WO2008048940A3/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G06T1/20: Processor architectures; Processor configuration, e.g. pipelining
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing

Definitions

  • This disclosure relates to graphics processing units and, more particularly, graphics processing units that have a multi-stage pipelined configuration for processing images.
  • a graphics processing unit is a dedicated graphics rendering device utilized to manipulate and display computerized graphics on a display. GPUs are built with a highly parallel structure that provides more efficient processing than typical, general purpose central processing units (CPUs) for a range of complex graphics-related algorithms. For example, the complex algorithms may correspond to representations of three-dimensional computerized graphics.
  • a GPU may implement a number of so-called "primitive" graphics operations, such as forming points, lines, and triangles, to create complex, three-dimensional images on a display more quickly than drawing the images directly to the display with a CPU.
  • Vertex shading and pixel shading are often utilized in the video gaming industry to determine final surface properties of a computerized image, such as light absorption and diffusion, texture mapping, light reflection and refraction, shadowing, surface displacement, and post-processing effects.
  • GPUs typically include a number of pipeline stages such as one or more shader stages, setup stages, rasterizer stages and interpolation stages.
  • a vertex shader for example, is typically applied to image data, such as the geometry for an image, and the vertex shader generates vertex coordinates and attributes of vertices within the image data.
  • Vertex attributes may include color, normal, and texture coordinates associated with a vertex.
  • One or more primitive setup and rejection modules may form primitive shapes such as points, lines, or triangles, and may reject hidden or invisible primitive shapes based on the vertices within the image data.
  • An attribute setup module computes gradients of attributes within the primitive shapes for the image data. Once the attribute gradient values are computed, primitive shapes for the image data may be converted into pixels, and pixel rejection may be performed with respect to hidden primitive shapes.
  • An attribute interpolator then interpolates the attributes over pixels within the primitive shapes for the image data based on the attribute gradient values, and sends the interpolated attribute values to the fragment shader for pixel rendering. Results of the fragment shader are output to a post-processing block and a frame buffer for presentation of the processed image on the display. This process is performed along successive stages of the GPU pipeline.
  • this disclosure describes a graphics processing unit (GPU) pipeline that uses one or more shared arithmetic logic units (ALUs).
  • the stages of the disclosed GPU pipeline may be rearranged relative to conventional GPU pipelines.
  • efficiencies may be achieved in the image processing.
  • an extended vertex cache is also described for the GPU pipeline, which can significantly reduce the amount of data needed to be transferred through the successive stages of the GPU pipeline.
  • the disclosure provides a method comprising receiving image data for an image within a GPU pipeline, and processing the image data within the GPU pipeline using a shared arithmetic logic unit for an attribute gradient setup stage and an attribute interpolator stage.
  • this disclosure provides a device comprising a GPU pipeline that receives image data for an image and processes the image data within multiple stages, wherein the multiple stages include an attribute gradient setup stage and an attribute interpolator stage, and a shared arithmetic logic unit that performs attribute gradient setups and attribute interpolations associated with both the attribute gradient setup stage and the attribute interpolator stage.
  • this disclosure provides a device comprising means for receiving image data for an image, means for processing the image data in an attribute gradient setup stage using a shared arithmetic logic unit, and means for processing the image data in an attribute interpolator stage using the shared arithmetic logic unit.
  • the techniques described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the techniques may be realized in whole or in part by a computer readable medium comprising instructions that, when executed by a machine, such as a processor, perform one or more of the methods described herein.
  • this disclosure also contemplates a computer-readable medium comprising instructions that upon execution cause a machine to receive image data for an image within a GPU pipeline, and process the image data within the GPU pipeline using a shared arithmetic logic unit for an attribute gradient setup stage and an attribute interpolator stage.
  • FIG. 1 is a block diagram illustrating an exemplary device including a graphics processing unit (GPU) that uses one or more shared arithmetic logic units (ALUs) and an extended vertex cache.
  • FIG. 2 is a block diagram illustrating a conventional GPU pipeline.
  • FIG. 3 is a block diagram illustrating an exemplary GPU according to an embodiment of this disclosure.
  • FIG. 4 is a block diagram illustrating an exemplary GPU according to another embodiment of this disclosure.
  • FIGS. 5 and 6 are flowcharts illustrating techniques that may be performed in a GPU pipeline according to embodiments of this disclosure.
  • FIG. 1 is a block diagram illustrating an exemplary device 10 including a graphics processing unit (GPU) 14 that includes a GPU pipeline 18 for processing computerized images.
  • GPU pipeline 18 utilizes one or more shared arithmetic logic units (ALUs) 15 to reduce complexity of GPU 14 and create efficiency in the image processing.
  • GPU pipeline 18 may implement an extended vertex cache 16 in order to reduce the amount of data propagated through GPU pipeline 18.
  • the stages of GPU pipeline 18 may be rearranged relative to conventional GPU pipelines, which may improve image processing and facilitate the use of shared ALUs 15. Some stages, however, may still use dedicated (unshared) ALUs like those used in stages of conventional GPU pipelines.
  • device 10 includes a controller 12, GPU 14 and a display 20.
  • Device 10 may also include many other components (not shown).
  • device 10 may comprise a wireless communication device and display 20 may comprise a display within the wireless communication device.
  • device 10 may comprise a desktop or notebook computer, and display 20 may comprise a dedicated monitor or display of the computer.
  • Device 10 may also comprise a wired communication device or a device not principally directed to communication.
  • device 10 may comprise a personal digital assistant (PDA), handheld video game device, game console or television device that includes display 20.
  • Computerized video imagery may be obtained from a remote device or from a local device, such as a video server that generates video or video objects, or a video archive that retrieves stored video or video objects.
  • Controller 12 controls operation of GPU 14. Controller 12 may be a specific controller for GPU 14 or a more general controller that controls the overall operation of device 10.
  • GPU 14 includes a GPU pipeline 18 that implements and accesses shared ALUs 15.
  • GPU 14 may include an extended vertex cache 16 coupled to GPU pipeline 18. Again, shared ALUs may create efficiency in the image processing and the incorporation of extended vertex cache 16 may reduce an amount of data passing through GPU pipeline 18 within GPU 14.
  • GPU pipeline 18 may be arranged in a non-conventional manner in order to facilitate the use of shared ALUs 15 and extended vertex cache 16.
  • GPU 14 receives image data, such as geometrical data and rendering commands for an image from controller 12 within device 10.
  • the image data may correspond to representations of complex, two-dimensional or three-dimensional computerized graphics.
  • GPU 14 processes the image data to present image effects, background images, or video gaming images, for example, to a user of device 10 via a display 20.
  • the images may be formed as video frames in a sequence of video frames.
  • Display 20 may comprise a liquid crystal display (LCD), a cathode ray tube (CRT) display, a plasma display, or another type of display integrated with or coupled to device 10.
  • controller 12 may receive the image data from applications operating within device 10.
  • device 10 may comprise a computing device operating a video gaming application based on image data received from an internal hard drive or a removable data storage device.
  • controller 12 may receive the image data from applications operating external to device 10.
  • device 10 may comprise a computing device operating a video gaming application based on image data received from an external server via a wired or wireless network, such as the Internet.
  • the image data may be received via streaming media or broadcast media, which may be wired, wireless or a combination of both.
  • controller 12 receives the corresponding image data from an application and sends the image data to GPU 14 for image processing.
  • GPU 14 processes the image data to prepare the corresponding image for presentation on display 20.
  • GPU 14 may implement a number of primitive graphics operations, such as forming points, lines, and triangles, to create a three-dimensional image represented by the received image data on display 20.
  • GPU pipeline 18 receives the image data for the image and stores attributes for vertices within the image data in extended vertex cache 16.
  • GPU pipeline 18 only passes vertex coordinates that identify the vertices, and vertex cache index values that indicate storage locations of the attributes for each of the vertices in extended vertex cache 16 to other processing stages along GPU pipeline 18.
  • GPU pipeline 18 temporarily stores the vertex coordinates in extended vertex cache 16. In this manner, GPU pipeline 18 is not clogged with the transfer of the vertex attributes between stages, and can support increased throughput, and storage buffers between stages may also be eliminated or possibly reduced in size.
  • the vertex coordinates identify the vertices within the image data based on, for example, a four-dimensional coordinate system with X, Y, and Z (width, height, and depth) coordinates that identify a location of a vertex within the image data, and a W coordinate that comprises a perspective parameter for the image data.
  • the vertex attributes may include color, normal, and texture coordinates associated with a vertex.
  • one or more shared ALUs 15 are used for different stages.
  • a shared ALU may be used for both a triangle setup stage and a Z-Gradient setup stage.
  • a shared lookup table for reciprocal operation may also be used in these triangle setup and Z-Gradient setup stages.
  • a shared ALU may be used for both an attribute gradient setup stage and an attribute interpolator stage.
  • the attribute gradient setup stage can be located much later in the pipeline, and the attribute interpolator stage may immediately follow the attribute gradient setup stage. This allows sharing of an ALU, and may have added benefits in that attribute gradient setups can be avoided for hidden primitives that are rejected.
  • GPU pipeline 18 within GPU 14 includes several stages, including a vertex shader stage, several primitive setup stages, such as triangle setup and Z-Gradient setup, a rasterizer stage, a primitive rejection stage, an attribute gradient setup stage, an attribute interpolation stage, and a fragment shader stage. More or fewer stages may be included in other embodiments. Various ones of the different stages of GPU pipelines may also be referred to as "modules" of the pipeline in this disclosure.
  • the various primitive setup stages and primitive rejection stages only utilize vertex coordinates to form primitives and may discard a subset of the primitives that are unnecessary for the image.
  • Primitives are the simplest types of geometric figures, including points, lines, triangles, and other polygons, and may be formed with one or more vertices within the image data.
  • Primitives or portions of primitives may be rejected from consideration during processing of a specific frame of the image when the primitives or the portions of primitives are invisible (e.g., located on a backside of an object) within the image frame, or are hidden (e.g., located behind another object or transparent) within the image frame. This is the purpose of the hidden primitive and pixel rejection stages.
  • Attribute gradient setup and attribute interpolation stages may utilize the vertex attributes to compute attribute gradient values and interpolate the attributes based on the attribute gradient values.
  • Techniques described in this disclosure defer the computationally intensive setup of attribute gradients to just before attribute interpolation in GPU pipeline 18. This allows a shared ALU to be used by both the attribute gradient setup and attribute interpolation stages.
  • the vertex attributes may be retrieved from extended vertex cache 16 for attribute gradient setup as one of the last steps before attribute interpolation in GPU pipeline 18. In this way, the vertex attributes are not introduced to GPU pipeline 18 until after primitive setup and primitive rejection, which creates efficiencies insofar as attribute gradient setup can be avoided for rejected primitives.
  • GPU pipeline 18 can be made more efficient.
  • the extended vertex cache 16 can eliminate the need to pass large amounts of attribute data through GPU pipeline 18, and may substantially eliminate bottlenecks in GPU pipeline 18 for primitives that include large numbers of attributes.
  • deferring the attribute gradient setup to just before attribute interpolation in GPU pipeline 18 may improve image processing speed within GPU pipeline 18. More specifically, deferring the attribute gradient setup within GPU pipeline 18 until after rejection of the subset of the primitives that are unnecessary for the image may substantially reduce computations and power consumption as the attribute gradient setup will only be performed on a subset of the primitives that are necessary for the image.
  • Display 20 may be coupled to device 10 either wirelessly or with a wired connection.
  • device 10 may comprise a server or other computing device of a wireless communication service provider, and display 20 may be included within a wireless communication device.
  • display 20 may comprise a display within a mobile radiotelephone, a satellite radiotelephone, a portable computer with a wireless communication card, a personal digital assistant (PDA) equipped with wireless communication capabilities, or any of a variety of devices capable of wireless communication.
  • device 10 may comprise a server or other computing device connected to display 20 via a wired network, and display 20 may be included within a wired communication device or a device not principally directed to communication. In other embodiments, display 20 may be integrated within device 10.
  • FIG. 2 is a block diagram illustrating a conventional GPU pipeline 22.
  • GPU pipeline 22 of FIG. 2 includes, in the following order, a command engine 24, a vertex shader 26, a triangle setup module 28, a Z-Gradient setup module 29, an attribute gradient setup module 30, a rasterizer 31, a hidden primitive and pixel rejection module 32, an attribute interpolator 34, a fragment shader 36, and a post processor 38.
  • Each of the vertex shader 26, triangle setup module 28, Z-Gradient setup module 29, attribute gradient setup module 30, rasterizer 31, hidden primitive and pixel rejection module 32, attribute interpolator 34, and fragment shader 36 includes a dedicated arithmetic logic unit (ALU); these ALUs are labeled as elements 25A-25H, respectively.
  • Command engine 24 receives image data for an image from a controller of the device in which conventional GPU pipeline 22 resides.
  • the image data may correspond to representations of complex, two-dimensional or three-dimensional computerized graphics.
  • Command engine 24 passes the image data along GPU pipeline 22 to the other processing stages. In particular, all of the attributes and coordinates of the image data are passed from stage to stage along GPU pipeline 22. Each respective stage uses its respective ALU, and if any bottlenecks occur, the image processing may be stalled at that respective stage.
  • FIG. 3 is a block diagram illustrating a GPU 14A, an exemplary embodiment of GPU 14 from FIG. 1, including a GPU pipeline 18A.
  • a set of ALUs 45A, 55A, 45B, 45C, 55B and 45D, and an extended vertex cache 16A are coupled to GPU pipeline 18A.
  • Extended vertex cache 16A within GPU 14A may reduce an amount of data passing through GPU pipeline 18A within GPU 14A.
  • ALUs 55A and 55B are shared ALUs, each of which is used by two different successive stages in GPU pipeline 18A.
  • the stages of GPU pipeline 18A are rearranged relative to conventional GPU pipeline 22 of FIG. 2, which may facilitate the sharing of ALU 55B by attribute gradient setup module 52 and attribute interpolator 54.
  • Because attribute gradient setup module 52 is executed after hidden primitive and pixel rejection module 50, efficiencies are gained. Namely, attribute gradient setup may be avoided for any hidden or rejected primitives.
  • GPU pipeline 18A includes a command engine 42, a vertex shader 44, triangle and Z-Gradient setup modules 46 and 47, a rasterizer 48, a hidden primitive and pixel rejection module 50, an attribute gradient setup module 52, an attribute interpolator 54, a fragment shader 56, and a post processor 58.
  • attribute gradient setup module 52 follows hidden primitive and pixel rejection module 50.
  • Attribute interpolator 54 immediately follows attribute gradient setup module 52.
  • Triangle and Z-Gradient setup modules 46 and 47 may be collectively referred to as primitive setup modules, and in some cases, other types of primitive setups may also be used.
  • Command engine 42 receives image data, which may include rendering commands, for an image from controller 12 of device 10.
  • the image data may correspond to representations of complex, two-dimensional or three-dimensional computerized graphics.
  • Command engine 42 passes a subset of this data, i.e., information for vertices within the image data that are not included in extended vertex cache 16A ("missed vertices") to vertex shader 44.
  • Command engine 42 will pass vertex cache index information for missed vertices to primitive setup and rejection module 46.
  • Command engine 42 passes vertex cache index information for vertices within the image data that are already included in extended vertex cache 16A ("hit vertices") directly to primitive setup and rejection module 46.
  • Vertex data for hit vertices are not typically sent to vertex shader 44. Initial processing of hit and missed vertices within the image data is described in more detail below.
  • GPU pipeline 18A includes several stages, although the techniques of this disclosure may operate in pipelines with more or fewer stages than those illustrated.
  • Vertex shader 44 is applied to the missed vertices within the image data and determines surface properties of the image at the missed vertices within the image data. In this way, vertex shader 44 generates vertex coordinates and attributes of each of the missed vertices within the image data. Vertex shader 44 then stores the attributes for the missed vertices in extended vertex cache 16A. In this manner, the attributes need not be passed along the GPU pipeline 18A, but can be accessed from extended vertex cache 16A, as needed, by respective stages of the GPU pipeline 18A. Vertex shader 44 is not applied to each of the hit vertices within the image data as vertex coordinates and attributes of each of the hit vertices may have been previously generated and stored in extended vertex cache 16A. A hedged code sketch of this hit-or-miss handling is provided after the end of this list.
  • the vertex coordinates identify the vertices within the image data (such as geometry within the image) based on, for example, a four-dimensional coordinate system with X, Y, and Z (width, height, and depth) coordinates that identify a location of a vertex within the image data, and a W coordinate that comprises a perspective parameter for the image data.
  • the vertex attributes may include color, normal, and texture coordinates associated with a vertex.
  • Extended vertex cache 16A may be easily configured for different numbers of attributes and primitive types.
  • Vertex cache index values that indicate storage locations within extended vertex cache 16A of the vertex coordinates and attributes for both the hit and missed vertices in the image data are then placed in a buffer (not shown) positioned between command engine 42 and primitive setup and rejection module 46.
  • Triangle setup 46 and Z-Gradient setup 47 are exemplary primitive setup stages, although additional primitive setup stages may also be included.
  • a shared ALU 55A is used by both triangle setup 46 and Z-Gradient setup 47. The different stages use either vertex coordinates or vertex attributes to process a respective image. For example, triangle setup 46, Z-Gradient setup 47, rasterizer 48, and hidden primitive and pixel rejection module 50 only utilize the vertex coordinates.
  • attribute gradient setup module 52 and attribute interpolator 54 utilize the vertex attributes. Therefore, according to this disclosure, attribute gradient setup module 52 is deferred to just before attribute interpolator 54 in GPU pipeline 18A.
  • the vertex attributes may be retrieved from extended vertex cache 16A for attribute gradient setup module 52 as one of the last steps in GPU pipeline 18A before interpolating the attributes with attribute interpolator 54. In this way, the vertex attributes are not introduced to GPU pipeline 18A until after hidden primitive and pixel rejection module 50, and just before attribute interpolator 54, providing significant gains in efficiency.
  • Because attribute interpolator 54 immediately follows attribute gradient setup module 52, these respective stages may share ALU 55B. For large primitives, ALU 55B will be utilized mostly for interpolation.
  • When primitives are small, ALU 55B will be used mostly for attribute gradient setup. A relatively large ALU 55B can promote processing speed, particularly for gradient setup, while a relatively small ALU 55B can reduce power consumption at a cost of gradient setup speed.
  • device 10 can eliminate a large amount of data from passing through GPU pipeline 18A, which reduces the width of the internal data bus included in GPU pipeline 18A. By reducing the amount of data movement, these techniques can also reduce power consumption within GPU 14A.
  • buffers positioned between each of the processing stages may be removed from GPU pipeline 18A to reduce the area of GPU 14A within device 10.
  • Primitive setup modules 46 and 47 receive the vertex cache index values for the attributes of each of the vertices in the image data. Primitive setup modules 46 and 47 then retrieve vertex coordinates for each of the vertices within the image data using the vertex cache index values. Primitive setup modules 46 and 47 form the respective primitives with one or more vertices within the image data. Primitives are the simplest types of geometric figures and may include points, lines, triangles, and other polygons. According to this disclosure, triangle setup 46 and Z-Gradient setup 47 can share ALU 55A in order to promote efficiency. Triangle setup 46 and Z-Gradient setup 47 may also share a lookup table for reciprocal operation for additional efficiency.
  • a Z-Gradient refers to the difference between the Z coordinates of two neighboring pixels over a triangle in either the X direction or the Y direction.
  • Z-Gradient setup computes this difference using the Z values and XY coordinates of the three original vertices of the triangle.
  • primitive setup modules 46 and 47 may also reject some primitives by performing scissoring and backface culling using the XY coordinates of the vertices within the image data. Scissoring and backface culling reject primitives and portions of primitives from consideration during processing of a specific frame of the image when the primitives and the portions of primitives are invisible within the image frame.
  • the primitives and the portions of primitives may be located on a backside of an object within the image frame.
  • Primitive setup modules 46 and 47 may request extended vertex cache 16A to release storage space for the attributes associated with the rejected primitives.
  • device 10 may substantially eliminate bottlenecks in GPU pipeline 18A for primitives that include large numbers of attributes.
  • Rasterizer 48 converts the primitives for the image data into pixels based on the XY coordinates of vertices within the primitives and the number of pixels included in the primitives.
  • Hidden primitive and pixel rejection module 50 rejects additional hidden primitives and hidden pixels within the primitives using the early depth and stencil test based on the Z coordinates of the vertices within the primitives. If hidden primitive and pixel rejection module 50 rejects all pixels within a primitive, the primitive is automatically rejected. Primitives or pixels within primitives may be considered hidden, and be rejected from consideration during processing of a specific frame of the image, when the primitives or the pixels within primitives are located behind another object within the image frame or are transparent within the image frame. Hidden primitive and pixel rejection module 50 may request extended vertex cache 16A to release storage space for the attributes associated with the rejected primitives.
  • Attribute gradient setup module 52 retrieves the vertex attributes from extended vertex cache 16A using the vertex cache index values for each of the vertices within the primitives. Attribute gradient setup module 52 computes gradients of attributes associated with the primitives for the image data.
  • An attribute gradient comprises a difference between the attribute value at a first pixel and the attribute value at a second pixel within a primitive moving in either a horizontal (X) direction or a vertical (Y) direction.
  • attribute gradient setup module 52 may request extended vertex cache 16A to release storage space for the attributes of the vertices within the primitive.
  • attribute interpolator 54 interpolates the attributes over pixels within the primitives based on the attribute gradient values. Again, the same ALU 55B is used in the attribute gradient setup stage 52 and the attribute interpolator stage 54.
  • the interpolated attribute values are input to fragment shader 56 to perform pixel rendering of the primitives. Fragment shader 56 determines surface properties of the image at pixels within the primitives for the image data. Results of fragment shader 56 are then output to post-processor 58 for presentation of the processed image on display 20.
  • vertex shader 44 may not be applied to missed vertices within the image data.
  • FIG. 4 is a block diagram illustrating GPU 14B, another exemplary embodiment of GPU 14 from FIG. 1, including a GPU pipeline 18B and an extended vertex cache 16B coupled to GPU pipeline 18B.
  • GPU pipeline 18B includes a command engine 62, a vertex shader 64, a triangle setup module 66, a Z-Gradient setup module 67 (modules 66 and 67 are collectively referred to as primitive setup modules), a rasterizer 68, a hidden primitive and pixel rejection module 70, an attribute gradient setup module 72, an attribute interpolator 74, a fragment shader 76, and a post-processor 78.
  • GPU 14B illustrated in FIG. 4 may operate substantially similar to GPU 14A illustrated in FIG. 3, except for the initial processing of vertices in the image data.
  • the different stages utilize ALUs 65A, 75A, 65B, 65C, 75B and 65D respectively.
  • ALUs 75A and 75B are shared for two different stages of GPU pipeline 18B.
  • Command engine 62 receives image data, including geometry and rendering commands, for an image from controller 12 of device 10.
  • Command engine 62 passes the image data along GPU pipeline 18B to the other processing stages. In other words, command engine 62 passes information for all the vertices within the image data to vertex shader 64.
  • vertex shader 64 is applied to all vertices within the image data. Vertex shader 64 is applied to the image data and determines surface properties of the image at the vertices within the image data. In this way, vertex shader 64 generates vertex coordinates and attributes of each of the vertices within the image data. Vertex shader 64 then stores only the attributes in extended vertex cache 16B. Vertex shader 64 passes the vertex coordinates and vertex cache index values that indicate storage locations of the attributes within extended vertex cache 16B for each of the vertices in the image data along GPU pipeline 18B.
  • Because vertex shader 64 passes the vertex coordinates and vertex cache index values for the vertices in the image data directly to primitive setup and rejection module 66, all the buffers positioned between each of the processing stages may be removed from GPU pipeline 18B.
  • Primitive setup modules 66 and 67 form primitives with one or more vertices within the image data. These primitive setup modules 66 and 67 may share one or more ALUs. Primitive setup and rejection module 66 may request extended vertex cache 16B to release storage space for the attributes associated with the rejected primitives.
  • Rasterizer 68 converts the primitives for the image data into pixels based on the XY coordinates of vertices within the primitives and the number of pixels included in the primitives.
  • Hidden primitive and pixel rejection module 70 rejects hidden primitives and hidden pixels within the primitives using the early depth and stencil test based on the Z coordinates of the vertices within the primitives. Hidden primitive and pixel rejection module 70 may request extended vertex cache 16B to release storage space for the attributes associated with the rejected primitives.
  • Attribute gradient setup module 72 retrieves the vertex attributes from extended vertex cache 16B using the vertex cache index values for each of the vertices within the primitives. Attribute gradient setup module 72 computes gradients of attributes associated with the primitives for the image data. After attribute gradient setup module 72 computes gradients of attributes of all vertices within a primitive for the image data, attribute gradient setup module 72 may request extended vertex cache 16B to release storage space for the attributes of the vertices within the primitive.
  • attribute interpolator 74 interpolates the attributes over pixels within the primitives based on the attribute gradient values by sharing one or more ALUs with the attribute gradient setup module 72.
  • the interpolated attribute values are then input to fragment shader 76 to perform pixel rendering of the primitives.
  • Fragment shader 76 determines surface properties of the image at pixels within the primitives for the image data. Results of fragment shader 76 will be output to postprocessor 78 for presentation of the processed image on display 20.
  • FIG. 5 is a flowchart illustrating an exemplary operation of processing an image within a GPU using an extended vertex cache. The operations of FIG. 5 will be described with reference to GPU 14 from FIG. 1 although similar techniques could be used with other GPUs.
  • Extended vertex cache 16 may be created within GPU 14 during manufacture of device 10 and coupled to GPU pipeline 18 (80). Extended vertex cache 16 may be easily configured for different numbers of attributes and primitive types.
  • GPU 14 receives image data, which may include rendering commands and geometry, for an image from controller 12 of device 10 (82).
  • the image data may correspond to representations of complex, two-dimensional or three-dimensional computerized graphics.
  • GPU 14 sends the image data to GPU pipeline 18 to process the image for display on display 20 connected to device 10.
  • GPU pipeline 18 stores attributes for vertices within the image data in extended vertex cache 16 (84). In some embodiments, GPU pipeline 18 temporarily stores vertex coordinates for the vertices within the image data in extended vertex cache 16.
  • GPU pipeline 18 then sends vertex coordinates that identify the vertices, and vertex cache index values that indicate storage locations of the attributes for each of the vertices in extended vertex cache 16 to other processing stages along GPU pipeline 18 (86).
  • GPU pipeline 18 processes the image based on the vertex coordinates and the vertex cache index values for each of the vertices in the image data (88). During such processing, GPU pipeline 18 reuses one or more shared ALUs 15 along the GPU pipeline 18 (89). Specifically, according to this disclosure, a shared ALU can be used for an attribute gradient setup stage and an attribute interpolation stage. The non-conventional ordering of the GPU pipeline may facilitate sharing of an ALU by the attribute gradient setup stage and the attribute interpolation stage.
  • FIG. 6 is a flowchart illustrating another exemplary operation of processing an image with a GPU pipeline using shared ALUs. For purposes of explanation, the operation shown in FIG. 6 will be described with reference to GPU 14A from FIG. 3 although similar techniques could be used with other GPUs.
  • Command engine 42 receives image data, including geometry and rendering commands, for an image and passes the image data along GPU pipeline 18A.
  • vertex shader 44 performs vertex shading using a first ALU 45A (91).
  • Triangle setup module 46 performs triangle setup for any triangle primitives using a second ALU 55A (92). This second ALU 55A is reused by another stage insofar as Z-Gradient setup module 47 performs Z-Gradient setup using second ALU 55A (93).
  • Rasterizer 48 then performs rasterization using a third ALU 45B (94).
  • Hidden primitive and pixel rejection module 50 performs an early depth/stencil test using a fourth ALU 45C in order to remove primitives that will not be viewable in the final image (95). Such non-viewable primitives, for example, may be covered by other objects or shapes and can be removed from the image without sacrificing any image quality.
  • Attribute gradient setup module 52 uses a fifth ALU 55B for attribute gradient setup (96), which, notably, does not occur for rejected primitives.
  • Attribute interpolator 54 then uses the fifth ALU 55B (97), which was also used for attribute gradient setup, in order to perform any interpolations.
  • Fragment shader 56 performs fragment shading (98), and post processor 58 performs any final post processing prior to image display (99).
  • an extended vertex cache 16A may be implemented along GPU pipeline 18A in order to reduce complexity and eliminate the need to propagate large amounts of data through the respective stages. Instead, each respective stage that needs portions of the image data can access such data stored in extended vertex cache 16A.
  • the techniques described in this disclosure may be implemented within a general purpose microprocessor, digital signal processor (DSP), application specific integrated circuit (ASIC), field programmable gate array (FPGA), or other equivalent logic devices. If implemented in software, the techniques may be embodied as instructions on a computer-readable medium such as random access memory (RAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, or the like. The instructions cause a machine, such as a programmable processor, to perform the techniques described in this disclosure.
  • an embodiment may be implemented in part or in whole in a hard-wired circuit, in a circuit configuration fabricated into an application-specific integrated circuit, or as a firmware program loaded into non-volatile storage or a software program loaded from or into a data storage medium as machine-readable code, such code being instructions executable by an array of logic elements such as a microprocessor or other digital signal processing unit.
  • the data storage medium may be an array of storage elements such as semiconductor memory (which may include without limitation dynamic or static RAM, ROM, and/or flash RAM) or ferroelectric, ovonic, polymeric, or phase-change memory, or a disk medium such as a magnetic or optical disk.
  • the techniques may substantially eliminate bottlenecks in the GPU pipeline for primitives that include large numbers of attributes, and can promote efficient processing that substantially reduces idle time of ALUs.
  • the techniques improve image processing speed within the GPU pipeline by deferring the attribute gradient setup to just before attribute interpolation in the GPU pipeline. More specifically, deferring the attribute gradient setup within the GPU pipeline until after rejection of a subset of the primitives that are unnecessary for the image may substantially reduce computations and power consumption as the attribute gradient setup will only be performed on a subset of the primitives that are necessary for the image.
  • This arrangement of the stages also facilitates ALU sharing by the attribute gradient setup and attribute interpolation stages.
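The hit-or-miss handling around extended vertex cache 16A that is described above (command engine 42, vertex shader 44, and the vertex cache index values) can be pictured with the short C++ sketch below. It is an illustration only, assuming a simple key-to-index map; the cache organization, the key used for lookups, and all of the names here (ExtendedVertexCache, processVertex, and so on) are hypothetical and are not taken from this disclosure.

    #include <cstdint>
    #include <optional>
    #include <unordered_map>
    #include <vector>

    // Hypothetical attribute record; the real layout is not specified in this document.
    struct VertexAttributes { float color[4]; float normal[3]; float texCoord[2]; };

    // Raw, unshaded input for one vertex, identified here by an assumed key.
    struct VertexInput { std::uint64_t key; /* geometry ... */ };

    class ExtendedVertexCache {
    public:
        // Returns the cache index for a "hit" vertex, or nothing for a "miss".
        std::optional<std::uint32_t> lookup(std::uint64_t key) const {
            auto it = index_.find(key);
            if (it == index_.end()) return std::nullopt;
            return it->second;
        }
        // Stores shaded attributes and returns the index later carried down the pipeline.
        std::uint32_t store(std::uint64_t key, const VertexAttributes& a) {
            const auto idx = static_cast<std::uint32_t>(attributes_.size());
            attributes_.push_back(a);
            index_.emplace(key, idx);
            return idx;
        }
    private:
        std::vector<VertexAttributes> attributes_;
        std::unordered_map<std::uint64_t, std::uint32_t> index_;
    };

    // Assumed stand-in for the vertex shader stage: produces attributes for a missed vertex.
    VertexAttributes runVertexShader(const VertexInput&) { return {}; }

    // Hit vertices bypass the vertex shader and only their cache index moves on;
    // missed vertices are shaded, their attributes cached, and the new index moves on.
    std::uint32_t processVertex(ExtendedVertexCache& cache, const VertexInput& in) {
        if (auto idx = cache.lookup(in.key)) return *idx;   // hit
        return cache.store(in.key, runVertexShader(in));    // miss
    }

In this sketch, a hit vertex contributes only its existing cache index to the pipeline, while a miss triggers vertex shading and a store, mirroring the behavior described for command engine 42 and vertex shader 44.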

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Image Generation (AREA)
  • Image Processing (AREA)

Abstract

This disclosure describes a graphics processing unit (GPU) pipeline that uses one or more shared arithmetic logic units (ALUs). In order to facilitate such sharing of ALUs, the stages of the disclosed GPU pipeline may be rearranged relative to conventional GPU pipelines. In addition, by rearranging the stages of the GPU pipeline, efficiencies may be achieved in the image processing. Unlike conventional GPU pipelines, for example, an attribute gradient setup stage can be located much later in the pipeline, and the attribute interpolator stage may immediately follow the attribute gradient setup stage. This allows sharing of an ALU by the attribute gradient setup and attribute interpolator stages. Several other techniques and features for the GPU pipeline are also described, which may improve performance and possibly achieve additional processing efficiencies.

Description

GRAPHICS PROCESSING UNIT WITH SHARED ARITHMETIC LOGIC UNIT
TECHNICAL FIELD
[0001] This disclosure relates to graphics processing units and, more particularly, graphics processing units that have a multi-stage pipelined configuration for processing images.
BACKGROUND
[0002] A graphics processing unit (GPU) is a dedicated graphics rendering device utilized to manipulate and display computerized graphics on a display. GPUs are built with a highly parallel structure that provides more efficient processing than typical, general purpose central processing units (CPUs) for a range of complex graphics-related algorithms. For example, the complex algorithms may correspond to representations of three-dimensional computerized graphics. A GPU may implement a number of so-called "primitive" graphics operations, such as forming points, lines, and triangles, to create complex, three-dimensional images on a display more quickly than drawing the images directly to the display with a CPU.
[0003] Vertex shading and pixel shading are often utilized in the video gaming industry to determine final surface properties of a computerized image, such as light absorption and diffusion, texture mapping, light reflection and refraction, shadowing, surface displacement, and post-processing effects. GPUs typically include a number of pipeline stages such as one or more shader stages, setup stages, rasterizer stages and interpolation stages. [0004] A vertex shader, for example, is typically applied to image data, such as the geometry for an image, and the vertex shader generates vertex coordinates and attributes of vertices within the image data. Vertex attributes may include color, normal, and texture coordinates associated with a vertex. One or more primitive setup and rejection modules may form primitive shapes such as points, lines, or triangles, and may reject hidden or invisible primitive shapes based on the vertices within the image data. An attribute setup module computes gradients of attributes within the primitive shapes for the image data. Once the attribute gradient values are computed, primitive shapes for the image data may be converted into pixels, and pixel rejection may be performed with respect to hidden primitive shapes. [0005] An attribute interpolator then interpolates the attributes over pixels within the primitive shapes for the image data based on the attribute gradient values, and sends the interpolated attribute values to the fragment shader for pixel rendering. Results of the fragment shader are output to a post-processing block and a frame buffer for presentation of the processed image on the display. This process is performed along successive stages of the GPU pipeline.
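The conventional ordering summarized in the preceding paragraphs can be expressed as a short sketch: attribute gradient setup runs before primitives are converted into pixels and before hidden primitives or pixels are rejected, so gradient work is spent even on geometry that never reaches the display. The C++ below is a hedged illustration of that ordering only; the types and function names are assumptions and the stage bodies are placeholders, not an actual implementation.

    #include <vector>

    struct Primitive { bool hidden = false; /* vertices, attributes ... */ };

    // Placeholder stage functions; no real graphics work is performed here.
    void attributeGradientSetup(Primitive&) { /* per-attribute gradient math   */ }
    void rasterizeAndReject(Primitive& p)   { /* may mark p.hidden = true      */ }
    void interpolateAndShade(Primitive&)    { /* interpolation + pixel shading */ }

    // Conventional ordering: gradients first, rejection afterwards.
    void conventionalPipeline(std::vector<Primitive>& primitives) {
        for (Primitive& p : primitives) {
            attributeGradientSetup(p);   // computed for every primitive ...
            rasterizeAndReject(p);       // ... including those rejected here
            if (p.hidden) continue;      // the gradient work above was wasted
            interpolateAndShade(p);
        }
    }

The reordering described in the remainder of this disclosure moves the gradient setup after the rejection test, which is what allows the wasted work in this loop to be avoided.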
SUMMARY
[0006] In general, this disclosure describes a graphics processing unit (GPU) pipeline that uses one or more shared arithmetic logic units (ALUs). In order to facilitate such sharing of ALUs, the stages of the disclosed GPU pipeline may be rearranged relative to conventional GPU pipelines. In addition, by rearranging the stages of the GPU pipeline, efficiencies may be achieved in the image processing. Several other techniques and features for the GPU pipeline are also described, which may improve performance and possibly achieve additional processing efficiencies. For example, an extended vertex cache is also described for the GPU pipeline, which can significantly reduce the amount of data needed to be transferred through the successive stages of the GPU pipeline.
[0007] In one embodiment, the disclosure provides a method comprising receiving image data for an image within a GPU pipeline, and processing the image data within the GPU pipeline using a shared arithmetic logic unit for an attribute gradient setup stage and an attribute interpolator stage.
[0008] In another embodiment, this disclosure provides a device comprising a GPU pipeline that receives image data for an image and processes the image data within multiple stages, wherein the multiple stages include an attribute gradient setup stage and an attribute interpolator stage, and a shared arithmetic logic unit that performs attribute gradient setups and attribute interpolations associated with both the attribute gradient setup stage and the attribute interpolator stage.
[0009] In another embodiment, this disclosure provides a device comprising means for receiving image data for an image, means for processing the image data in an attribute gradient setup stage using a shared arithmetic logic unit, and means for processing the image data in an attribute interpolator stage using the shared arithmetic logic unit. [0010] The techniques described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the techniques may be realized in whole or in part by a computer readable medium comprising instructions that, when executed by a machine, such as a processor, perform one or more of the methods described herein.
[0011] Accordingly, this disclosure also contemplates a computer-readable medium comprising instructions that upon execution cause a machine to receive image data for an image within a GPU pipeline, and process the image data within the GPU pipeline using a shared arithmetic logic unit for an attribute gradient setup stage and an attribute interpolator stage.
[0012] The details of one or more embodiments are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention will be apparent from the description and drawings, and from the claims.
BRIEF DESCRIPTION OF DRAWINGS
[0013] FIG. 1 is a block diagram illustrating an exemplary device including a graphics processing unit (GPU) that uses one or more shared arithmetic logic units (ALUs) and an extended vertex cache.
[0014] FIG. 2 is a block diagram illustrating a conventional GPU pipeline.
[0015] FIG. 3 is a block diagram illustrating an exemplary GPU according to an embodiment of this disclosure.
[0016] FIG. 4 is a block diagram illustrating an exemplary GPU according to another embodiment of this disclosure.
[0017] FIGS. 5 and 6 are flowcharts illustrating techniques that may be performed in a GPU pipeline according to embodiments of this disclosure.
DETAILED DESCRIPTION
[0018] FIG. 1 is a block diagram illustrating an exemplary device 10 including a graphics processing unit (GPU) 14 that includes a GPU pipeline 18 for processing computerized images. According to this disclosure, GPU pipeline 18 utilizes one or more shared arithmetic logic units (ALUs) 15 to reduce complexity of GPU 14 and create efficiency in the image processing. In addition, GPU pipeline 18 may implement an extended vertex cache 16 in order to reduce the amount of data propagated through GPU pipeline 18. As discussed in greater detail below, the stages of GPU pipeline 18 may be rearranged relative to conventional GPU pipelines, which may improve image processing and facilitate the use of shared ALUs 15. Some stages, however, may still use dedicated (unshared) ALUs like those used in stages of conventional GPU pipelines.
[0019] In the example of FIG. 1, device 10 includes a controller 12, GPU 14 and a display 20. Device 10 may also include many other components (not shown). For example, device 10 may comprise a wireless communication device and display 20 may comprise a display within the wireless communication device. As another example, device 10 may comprise a desktop or notebook computer, and display 20 may comprise a dedicated monitor or display of the computer. Device 10 may also comprise a wired communication device or a device not principally directed to communication. As other examples, device 10 may comprise a personal digital assistant (PDA), handheld video game device, game console or television device that includes display 20. In various embodiments, computerized video imagery may be obtained from a remote device or from a local device, such as a video server that generates video or video objects, or a video archive that retrieves stored video or video objects. [0020] Controller 12 controls operation of GPU 14. Controller 12 may be a specific controller for GPU 14 or a more general controller that controls the overall operation of device 10. In accordance with the techniques described herein, GPU 14 includes a GPU pipeline 18 that implements and accesses shared ALUs 15. In addition, GPU 14 may include an extended vertex cache 16 coupled to GPU pipeline 18. Again, shared ALUs may create efficiency in the image processing and the incorporation of extended vertex cache 16 may reduce an amount of data passing through GPU pipeline 18 within GPU 14. GPU pipeline 18 may be arranged in a non-conventional manner in order to facilitate the use of shared ALUs 15 and extended vertex cache 16.
[0021] GPU 14 receives image data, such as geometrical data and rendering commands for an image from controller 12 within device 10. The image data may correspond to representations of complex, two-dimensional or three-dimensional computerized graphics. GPU 14 processes the image data to present image effects, background images, or video gaming images, for example, to a user of device 10 via a display 20. The images may be formed as video frames in a sequence of video frames. Display 20 may comprise a liquid crystal display (LCD), a cathode ray tube (CRT) display, a plasma display, or another type of display integrated with or coupled to device 10.
[0022] In some cases, controller 12 may receive the image data from applications operating within device 10. For example, device 10 may comprise a computing device operating a video gaming application based on image data received from an internal hard drive or a removable data storage device. In other cases, controller 12 may receive the image data from applications operating external to device 10. For example, device 10 may comprise a computing device operating a video gaming application based on image data received from an external server via a wired or wireless network, such as the Internet. The image data may be received via streaming media or broadcast media, which may be wired, wireless or a combination of both.
[0023] When a user of device 10 triggers an image effect, selects a background image, or initiates a video game, controller 12 receives the corresponding image data from an application and sends the image data to GPU 14 for image processing. GPU 14 processes the image data to prepare the corresponding image for presentation on display 20. For example, GPU 14 may implement a number of primitive graphics operations, such as forming points, lines, and triangles, to create a three-dimensional image represented by the received image data on display 20.
[0024] According to the techniques described herein, GPU pipeline 18 receives the image data for the image and stores attributes for vertices within the image data in extended vertex cache 16. GPU pipeline 18 only passes vertex coordinates that identify the vertices, and vertex cache index values that indicate storage locations of the attributes for each of the vertices in extended vertex cache 16 to other processing stages along GPU pipeline 18. In some embodiments, GPU pipeline 18 temporarily stores the vertex coordinates in extended vertex cache 16. In this manner, GPU pipeline 18 is not clogged with the transfer of the vertex attributes between stages, and can support increased throughput, and storage buffers between stages may also be eliminated or possibly reduced in size. The vertex coordinates identify the vertices within the image data based on, for example, a four-dimensional coordinate system with X, Y, and Z (width, height, and depth) coordinates that identify a location of a vertex within the image data, and a W coordinate that comprises a perspective parameter for the image data. The vertex attributes, for example, may include color, normal, and texture coordinates associated with a vertex.
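One way to picture the split described in paragraph [0024] is a small per-vertex record that travels down the pipeline carrying only the XYZW coordinates and a vertex cache index, while the bulkier attributes remain in extended vertex cache 16. The sketch below is a minimal illustration under that assumption; the field names and the map-based cache are illustrative and are not the data structures of this disclosure.

    #include <array>
    #include <cstdint>
    #include <unordered_map>

    // Bulky per-vertex attributes stay in the extended vertex cache (illustrative layout).
    struct VertexAttributes {
        std::array<float, 4> color;      // e.g. RGBA
        std::array<float, 3> normal;
        std::array<float, 2> texCoord;
    };

    // Only this small record is passed from stage to stage along the pipeline.
    struct PipelineVertex {
        float x, y, z, w;                // XYZ location plus the W perspective parameter
        std::uint32_t cacheIndex;        // storage location of the attributes in the cache
    };

    using ExtendedVertexCache = std::unordered_map<std::uint32_t, VertexAttributes>;

    // A late stage (such as attribute gradient setup) fetches attributes only when needed.
    const VertexAttributes& fetchAttributes(const ExtendedVertexCache& cache,
                                            const PipelineVertex& v) {
        return cache.at(v.cacheIndex);
    }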
[0025] Furthermore, in accordance with this disclosure, during the processing of image data in GPU pipeline 18, one or more shared ALUs 15 are used for different stages. As one example, a shared ALU may be used for both a triangle setup stage and a Z-Gradient setup stage. A shared lookup table for reciprocal operation may also be used in these triangle setup and Z-Gradient setup stages. As another example, a shared ALU may be used for both an attribute gradient setup stage and an attribute interpolator stage. Unlike conventional GPU pipelines, the attribute gradient setup stage can be located much later in the pipeline, and the attribute interpolator stage may immediately follow the attribute gradient setup stage. This allows sharing of an ALU, and may have added benefits in that attribute gradient setups can be avoided for hidden primitives that are rejected. Conventional GPU pipelines, in contrast, typically perform attribute gradient setup prior to hidden primitive rejection, which creates inefficiency that can be avoided by the techniques of this disclosure. [0026] GPU pipeline 18 within GPU 14 includes several stages, including a vertex shader stage, several primitive setup stages, such as triangle setup and Z-Gradient setup, a rasterizer stage, a primitive rejection stage, an attribute gradient setup stage, an attribute interpolation stage, and a fragment shader stage. More or fewer stages may be included in other embodiments. Various ones of the different stages of GPU pipelines may also be referred to as "modules" of the pipeline in this disclosure.
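As a concrete illustration of the Z-Gradient setup mentioned in paragraph [0025], the sketch below computes the change in Z per one-pixel step in X and in Y from a triangle's three vertices using the standard plane-equation formulation. This disclosure does not spell out the arithmetic, so the formulas are an assumption; the single reciprocal of the signed area term is the kind of operation that the shared reciprocal lookup table noted above could serve for both triangle setup and Z-Gradient setup.

    #include <array>

    struct Vtx { float x, y, z; };            // screen-space position of one vertex

    // Difference in Z between two neighbouring pixels stepping one pixel in X or in Y.
    struct ZGradient { float dzdx, dzdy; };

    ZGradient zGradientSetup(const std::array<Vtx, 3>& v) {
        // Signed area term shared by both gradients (also useful in triangle setup);
        // assumes a non-degenerate triangle (area != 0).
        const float area = (v[1].x - v[0].x) * (v[2].y - v[0].y)
                         - (v[2].x - v[0].x) * (v[1].y - v[0].y);
        const float invArea = 1.0f / area;    // hardware might use a reciprocal LUT here
        const float dzdx = ((v[1].z - v[0].z) * (v[2].y - v[0].y)
                          - (v[2].z - v[0].z) * (v[1].y - v[0].y)) * invArea;
        const float dzdy = ((v[1].x - v[0].x) * (v[2].z - v[0].z)
                          - (v[2].x - v[0].x) * (v[1].z - v[0].z)) * invArea;
        return {dzdx, dzdy};
    }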
[0027] In any case, the various primitive setup stages and primitive rejection stages only utilize vertex coordinates to form primitives and may discard a subset of the primitives that are unnecessary for the image. Primitives are the simplest types of geometric figures, including points, lines, triangles, and other polygons, and may be formed with one or more vertices within the image data. Primitives or portions of primitives may be rejected from consideration during processing of a specific frame of the image when the primitives or the portions of primitives are invisible (e.g., located on a backside of an object) within the image frame, or are hidden (e.g., located behind another object or transparent) within the image frame. This is the purpose of the hidden primitive and pixel rejection stages. [0028] Attribute gradient setup and attribute interpolation stages may utilize the vertex attributes to compute attribute gradient values and interpolate the attributes based on the attribute gradient values. Techniques described in this disclosure defer the computationally intensive setup of attribute gradients to just before attribute interpolation in GPU pipeline 18. This allows a shared ALU to be used by both the attribute gradient setup and attribute interpolation stages. The vertex attributes may be retrieved from extended vertex cache 16 for attribute gradient setup as one of the last steps before attribute interpolation in GPU pipeline 18. In this way, the vertex attributes are not introduced to GPU pipeline 18 until after primitive setup and primitive rejection, which creates efficiencies insofar as attribute gradient setup can be avoided for rejected primitives.
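The deferral described in paragraphs [0027] and [0028] can be sketched as follows: rejection runs first, attribute gradient setup is skipped for rejected primitives, and the interpolation that immediately follows funnels its arithmetic through the same multiply-accumulate unit. The SharedALU type and the function names are assumptions made for illustration; real hardware sharing would be a time-multiplexed datapath rather than a C++ object.

    #include <vector>

    // Deliberately simplified stand-in for a shared arithmetic unit.
    struct SharedALU {
        float madd(float a, float b, float c) const { return a * b + c; }
    };

    struct Gradients { float value0 = 0, dadx = 0, dady = 0; };  // per-attribute plane

    struct Prim {
        bool rejected = false;   // set earlier by hidden primitive and pixel rejection
        Gradients grad;          // filled in by attribute gradient setup
    };

    // Placeholder: the gradient math (analogous to the Z-gradient sketch above)
    // would run here, issuing its multiplies and adds through the shared ALU.
    void attributeGradientSetup(Prim& p, const SharedALU& alu) {
        (void)p; (void)alu;
    }

    // Interpolate an attribute at pixel offset (dx, dy) from the reference vertex,
    // reusing the same ALU: a = value0 + dadx*dx + dady*dy.
    float interpolate(const Prim& p, float dx, float dy, const SharedALU& alu) {
        return alu.madd(p.grad.dady, dy, alu.madd(p.grad.dadx, dx, p.grad.value0));
    }

    void deferredAttributeStages(std::vector<Prim>& prims, const SharedALU& alu) {
        for (Prim& p : prims) {
            if (p.rejected) continue;         // no gradient work for rejected primitives
            attributeGradientSetup(p, alu);   // deferred until after rejection
            // ... interpolation over the primitive's covered pixels follows,
            //     time-sharing the same ALU, e.g. interpolate(p, dx, dy, alu).
        }
    }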
[0029] Moreover, by storing the attributes for vertices within the image data in extended vertex cache 16, GPU pipeline 18 can be made more efficient. In particular, the extended vertex cache 16 can eliminate the need to pass large amounts of attribute data through GPU pipeline 18, and may substantially eliminate bottlenecks in GPU pipeline 18 for primitives that include large numbers of attributes. In addition, deferring the attribute gradient setup to just before attribute interpolation in GPU pipeline 18 may improve image processing speed within GPU pipeline 18. More specifically, deferring the attribute gradient setup within GPU pipeline 18 until after rejection of the subset of the primitives that are unnecessary for the image may substantially reduce computations and power consumption as the attribute gradient setup will only be performed on a subset of the primitives that are necessary for the image.
[0030] Display 20 may be coupled to device 10 either wirelessly or with a wired connection. For example, device 10 may comprise a server or other computing device of a wireless communication service provider, and display 20 may be included within a wireless communication device. In this case, as examples, display 20 may comprise a display within a mobile radiotelephone, a satellite radiotelephone, a portable computer with a wireless communication card, a personal digital assistant (PDA) equipped with wireless communication capabilities, or any of a variety of devices capable of wireless communication. As another example, device 10 may comprise a server or other computing device connected to display 20 via a wired network, and display 20 may be included within a wired communication device or a device not principally directed to communication. In other embodiments, display 20 may be integrated within device 10.
[0031] FIG. 2 is a block diagram illustrating a conventional GPU pipeline 22. GPU pipeline 22 of FIG. 2 includes, in the following order, a command engine 24, a vertex shader 26, a triangle setup module 28, a Z-Gradient setup module 29, an attribute gradient setup module 30, a rasterizer 31, a hidden primitive and pixel rejection module 32, an attribute interpolator 34, a fragment shader 36, and a post processor 38. Each of vertex shader 26, triangle setup module 28, Z-Gradient setup module 29, attribute gradient setup module 30, rasterizer 31, hidden primitive and pixel rejection module 32, attribute interpolator 34, and fragment shader 36 includes a dedicated arithmetic logic unit (ALU), labeled as elements 25A-25H, respectively.
[0032] Command engine 24 receives image data for an image from a controller of the device in which conventional GPU pipeline 22 resides. The image data may correspond to representations of complex, two-dimensional or three-dimensional computerized graphics. Command engine 24 passes the image data along GPU pipeline 22 to the other processing stages. In particular, all of the attributes and coordinates of the image data are passed from stage to stage along GPU pipeline 22. Each respective stage uses its respective ALU, and if any bottlenecks occur, the image processing may be stalled at that respective stage.
[0033] FIG. 3 is a block diagram illustrating a GPU 14A, an exemplary embodiment of GPU 14 from FIG. 1, including a GPU pipeline 18A. A set of ALUs 45A, 55A, 45B, 45C, 55B and 45D, and an extended vertex cache 16A are coupled to GPU pipeline 18A. Extended vertex cache 16A within GPU 14A may reduce an amount of data passing through GPU pipeline 18A within GPU 14A. Moreover, ALUs 55A and 55B are shared ALUs, each of which is used by two different successive stages in GPU pipeline 18A. Notably, the stages of GPU pipeline 18A are rearranged relative to conventional GPU pipeline 22 of FIG. 2, which may facilitate the sharing of ALU 55B by attribute gradient setup module 52 and attribute interpolator 54. Moreover, because attribute gradient setup module 52 is executed after hidden primitive and pixel rejection module 50, efficiencies are gained. Namely, attribute gradient setup may be avoided for any hidden or rejected primitives.
[0034] In the illustrated embodiment of FIG. 3, GPU pipeline 18A includes a command engine 42, a vertex shader 44, triangle and Z-Gradient setup modules 46 and 47, a rasterizer 48, a hidden primitive and pixel rejection module 50, an attribute gradient setup module 52, an attribute interpolator 54, a fragment shader 56, and a post processor 58. Again, the order of these stages is non-conventional insofar as attribute gradient setup module 52 follows hidden primitive and pixel rejection module 50. Attribute interpolator 54 immediately follows attribute gradient setup module 52. Triangle and Z-Gradient setup modules 46 and 47 may be collectively referred to as primitive setup modules, and in some cases, other types of primitive setup may also be used.
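The stage ordering and ALU assignments of FIG. 3 can be summarized as a small table, written here as a C++ array. Pairing shared ALUs 55A and 55B with their adjacent stages follows the description above; assigning ALU 45D to fragment shader 56 is inferred from the remaining reference numerals rather than stated explicitly, so it should be read as an assumption.

    #include <cstdio>

    // Stage order of GPU pipeline 18A with the ALU used at each stage.
    struct StageInfo { const char* stage; const char* alu; };

    int main() {
        const StageInfo pipeline[] = {
            {"vertex shader 44",                    "45A"},
            {"triangle setup 46",                   "55A (shared)"},
            {"Z-Gradient setup 47",                 "55A (shared)"},
            {"rasterizer 48",                       "45B"},
            {"hidden primitive/pixel rejection 50", "45C"},
            {"attribute gradient setup 52",         "55B (shared)"},
            {"attribute interpolator 54",           "55B (shared)"},
            {"fragment shader 56",                  "45D (inferred)"},
        };
        for (const StageInfo& s : pipeline)
            std::printf("%-38s uses ALU %s\n", s.stage, s.alu);
        return 0;
    }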
[0035] Command engine 42 receives image data, which may include rendering commands, for an image from controller 12 of device 10. The image data may correspond to representations of complex, two-dimensional or three-dimensional computerized graphics. Command engine 42 passes a subset of this data, i.e., information for vertices within the image data that are not included in extended vertex cache 16A ("missed vertices"), to vertex shader 44. Command engine 42 also passes vertex cache index information for missed vertices to primitive setup and rejection module 46. Command engine 42 passes vertex cache index information for vertices within the image data that are already included in extended vertex cache 16A ("hit vertices") directly to primitive setup and rejection module 46. Vertex data for hit vertices are not typically sent to vertex shader 44. Initial processing of hit and missed vertices within the image data is described in more detail below.
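A toy model of the hit/miss decision is sketched below: the command engine checks a directory of vertices already resident in the extended vertex cache, sends missed vertices to the vertex shader, and routes hit vertices directly onward with their cache index values. The class and function names, and the 64-bit key used to identify a vertex, are illustrative assumptions.

    #include <cstdint>
    #include <optional>
    #include <unordered_map>

    // Toy directory of vertices whose attributes are already resident in the
    // extended vertex cache.
    class VertexCacheDirectory {
    public:
        // Returns the cache index value for a previously shaded vertex, if any.
        std::optional<std::uint32_t> lookup(std::uint64_t vertexKey) const {
            auto it = table_.find(vertexKey);
            if (it == table_.end()) return std::nullopt;
            return it->second;
        }
        std::uint32_t insert(std::uint64_t vertexKey) {
            std::uint32_t slot = nextSlot_++;
            table_[vertexKey] = slot;
            return slot;
        }
    private:
        std::unordered_map<std::uint64_t, std::uint32_t> table_;
        std::uint32_t nextSlot_ = 0;
    };

    // Returns the cache index to pass down the pipeline and reports whether the
    // vertex shader still has to be run for this vertex.
    std::uint32_t routeVertex(VertexCacheDirectory& dir, std::uint64_t key, bool& runShader) {
        if (auto hit = dir.lookup(key)) { runShader = false; return *hit; }  // hit vertex
        runShader = true;                                                    // missed vertex
        return dir.insert(key);
    }

    int main() {
        VertexCacheDirectory dir;
        bool shade = false;
        std::uint32_t first  = routeVertex(dir, 1001, shade);  // miss: shade, then cache
        std::uint32_t second = routeVertex(dir, 1001, shade);  // hit: reuse cached attributes
        return (first == second && !shade) ? 0 : 1;
    }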
[0036] GPU pipeline 18A includes several stages, although the techniques of this disclosure may operate in pipelines with more or fewer stages than those illustrated. Vertex shader 44 is applied to the missed vertices within the image data and determines surface properties of the image at the missed vertices within the image data. In this way, vertex shader 44 generates vertex coordinates and attributes of each of the missed vertices within the image data. Vertex shader 44 then stores the attributes for the missed vertices in extended vertex cache 16A. In this manner, the attributes need not be passed along GPU pipeline 18A, but can be accessed from extended vertex cache 16A, as needed, by respective stages of GPU pipeline 18A. Vertex shader 44 is not applied to each of the hit vertices within the image data as vertex coordinates and attributes of each of the hit vertices may have been previously generated and stored in extended vertex cache 16A.
[0037] The vertex coordinates identify the vertices within the image data (such as geometry within the image) based on, for example, a four-dimensional coordinate system with X, Y, and Z (width, height, and depth) coordinates that identify a location of a vertex within the image data, and a W coordinate that comprises a perspective parameter for the image data. The vertex attributes, for example, may include color, normal, and texture coordinates associated with a vertex. Extended vertex cache 16A may be easily configured for different numbers of attributes and primitive types. Vertex cache index values that indicate storage locations within extended vertex cache 16A of the vertex coordinates and attributes for both the hit and missed vertices in the image data are then placed in a buffer (not shown) positioned between command engine 42 and primitive setup and rejection module 46.
[0038] Triangle setup 46 and Z-Gradient setup 47 are exemplary primitive setup stages, although additional primitive setup stages may also be included. A shared ALU 55A is used by both triangle setup 46 and Z-Gradient setup 47. The different stages use either vertex coordinates or vertex attributes to process a respective image. For example, triangle setup 46, Z-Gradient setup 47, rasterizer 48, and hidden primitive and pixel rejection module 50 only utilize the vertex coordinates. However, attribute gradient setup module 52 and attribute interpolator 54 utilize the vertex attributes. Therefore, according to this disclosure, attribute gradient setup module 52 is deferred to just before attribute interpolator 54 in GPU pipeline 18A. The vertex attributes may be retrieved from extended vertex cache 16A for attribute gradient setup module 52 as one of the last steps in GPU pipeline 18A before interpolating the attributes with attribute interpolator 54. In this way, the vertex attributes are not introduced to GPU pipeline 18A until after hidden primitive and pixel rejection module 50, and just before attribute interpolator 54, providing significant gains in efficiency.
[0039] Moreover, because attribute interpolator 54 immediately follows attribute gradient setup module 52, these respective stages may share ALU 55B. For large primitives, ALU 55B will be utilized mostly for interpolation. Conversely, when primitives are small, ALU 55B will be used mostly for attribute gradient setup. A relatively large ALU 55B can promote processing speed, particularly for gradient setup, although a relatively small ALU 55B can reduce power consumption at a cost of performance in the gradient setup.
[0040] Again, by storing the vertex attributes for the vertices of image data in extended vertex cache 16A, device 10 can eliminate a large amount of data from passing through GPU pipeline 18A, which reduces the width of the internal data bus included in GPU pipeline 18A. By reducing the amount of data movement, these techniques can also reduce power consumption within GPU 14A. In addition, with the exception of a buffer that may be positioned between command engine 42 and primitive setup and rejection module 46, buffers positioned between each of the processing stages may be removed from GPU pipeline 18A to reduce the area of GPU 14A within device 10.
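For illustration, the single retained buffer can be modeled as a small FIFO of vertex cache index values sitting between the command engine and primitive setup. The depth and element layout below are assumptions, and the coordinates that accompany each index are omitted for brevity.

    #include <cstddef>
    #include <cstdint>
    #include <queue>

    // Model of the single retained buffer: a FIFO of vertex cache index values
    // between the command engine and primitive setup (depth is an assumption).
    class IndexBuffer {
    public:
        bool push(std::uint32_t cacheIndex) {
            if (fifo_.size() >= kMaxDepth) return false;  // apply back-pressure upstream
            fifo_.push(cacheIndex);
            return true;
        }
        bool pop(std::uint32_t& cacheIndex) {
            if (fifo_.empty()) return false;
            cacheIndex = fifo_.front();
            fifo_.pop();
            return true;
        }
    private:
        static constexpr std::size_t kMaxDepth = 32;  // assumed depth
        std::queue<std::uint32_t> fifo_;
    };

    int main() {
        IndexBuffer buffer;
        buffer.push(42u);                  // command engine enqueues an index value
        std::uint32_t index = 0;
        return buffer.pop(index) ? 0 : 1;  // primitive setup dequeues it
    }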
[0041] Primitive setup modules 46 and 47 (and possibly other types of primitive setups) receive the vertex cache index values for the attributes of each of the vertices in the image data. Primitive setup modules 46 and 47 then retrieve vertex coordinates for each of the vertices within the image data using the vertex cache index values. Primitive setup modules 46 and 47 form the respective primitives with one or more vertices within the image data. Primitives are the simplest types of geometric figures and may include points, lines, triangles, and other polygons. According to this disclosure, triangle setup 46 and Z-Gradient setup 47 can share ALU 55A in order to promote efficiency. Triangle setup 46 and Z-Gradient setup 47 may also share a lookup table for reciprocal operation for additional efficiency. A Z-Gradient is the difference between the Z coordinates of two neighboring pixels of a triangle in either the X direction or the Y direction. Z-Gradient setup computes this difference using the Z values and XY coordinates of the three original vertices of the triangle.
[0042] In some cases, primitive setup modules 46 and 47 may also reject some primitives by performing scissoring and backface culling using the XY coordinates of the vertices within the image data. Scissoring and backface culling reject primitives and portions of primitives from consideration during processing of a specific frame of the image when the primitives and the portions of primitives are invisible within the image frame. For example, the primitives and the portions of primitives may be located on a backside of an object within the image frame. Primitive setup modules 46 and 47 may request extended vertex cache 16A to release storage space for the attributes associated with the rejected primitives. By only moving the primitives for the image data, the vertex coordinates associated with the primitives, and the vertex cache index values for each of the vertices within the primitives through GPU pipeline 18A, device 10 may substantially eliminate bottlenecks in GPU pipeline 18A for primitives that include large numbers of attributes.
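A minimal sketch of Z-Gradient setup follows, using the standard plane-gradient formulation rather than any circuit recited here: the Z differences per pixel step in X and Y follow from the Z values and XY coordinates of the triangle's three vertices, and the single reciprocal of the area term is the kind of value that the shared lookup table for reciprocal operation could supply.

    #include <cstdio>

    struct Vertex { float x, y, z; };

    // Plane-gradient form of Z-Gradient setup: the per-pixel Z difference in X
    // and in Y follows from the three vertices. The reciprocal of the area term
    // is the kind of value a shared reciprocal lookup table could provide.
    void zGradients(const Vertex& v0, const Vertex& v1, const Vertex& v2,
                    float& dzdx, float& dzdy) {
        float ax = v1.x - v0.x, ay = v1.y - v0.y, az = v1.z - v0.z;
        float bx = v2.x - v0.x, by = v2.y - v0.y, bz = v2.z - v0.z;
        float area2 = ax * by - bx * ay;        // twice the signed triangle area
        float invArea2 = 1.0f / area2;          // shared reciprocal result
        dzdx = (az * by - bz * ay) * invArea2;  // Z change per pixel step in X
        dzdy = (ax * bz - bx * az) * invArea2;  // Z change per pixel step in Y
    }

    int main() {
        Vertex v0{0.0f, 0.0f, 0.2f}, v1{4.0f, 0.0f, 0.6f}, v2{0.0f, 3.0f, 0.5f};
        float dzdx = 0.0f, dzdy = 0.0f;
        zGradients(v0, v1, v2, dzdx, dzdy);
        std::printf("dZ/dX = %f, dZ/dY = %f\n", dzdx, dzdy);  // 0.1 and 0.1 here
        return 0;
    }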
[0043] Rasterizer 48 converts the primitives for the image data into pixels based on the XY coordinates of vertices within the primitives and the number of pixels included in the primitives. Hidden primitive and pixel rejection module 50 rejects additional hidden primitives and hidden pixels within the primitives using the early depth and stencil test based on the Z coordinates of the vertices within the primitives. If hidden primitive and pixel rejection module 50 rejects all pixels within a primitive, the primitive is automatically rejected. Primitives or pixels within primitives may be considered hidden, and be rejected from consideration during processing of a specific frame of the image, when the primitives or the pixels within primitives are located behind another object within the image frame or are transparent within the image frame. Hidden primitive and pixel rejection module 50 may request extended vertex cache 16A to release storage space for the attributes associated with the rejected primitives.
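A simplified early depth test is sketched below: a pixel whose interpolated Z is not closer than the stored depth is rejected before any attribute retrieval or shading work. The 0-to-1 depth range, the comparison direction, and the omission of the stencil portion are simplifying assumptions.

    #include <cstddef>
    #include <vector>

    // Simplified early depth test against a per-pixel depth buffer.
    class DepthBuffer {
    public:
        DepthBuffer(int width, int height)
            : width_(width), depth_(static_cast<std::size_t>(width) * height, 1.0f) {}

        // Returns true if the pixel survives (and records its depth).
        bool earlyDepthTest(int x, int y, float z) {
            float& stored = depth_[static_cast<std::size_t>(y) * width_ + x];
            if (z >= stored) return false;  // hidden behind what is already recorded
            stored = z;
            return true;
        }
    private:
        int width_;
        std::vector<float> depth_;
    };

    int main() {
        DepthBuffer db(64, 64);
        bool nearPixel = db.earlyDepthTest(10, 10, 0.4f);  // survives
        bool farPixel  = db.earlyDepthTest(10, 10, 0.7f);  // rejected as hidden
        return (nearPixel && !farPixel) ? 0 : 1;
    }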
[0044] Typically, a large percentage of primitives are rejected by scissoring and backface culling performed by primitive setup and rejection modules 46, 47, and the early depth and stencil test performed by hidden primitive and pixel rejection module 50. Therefore, by deferring the attribute gradient setup stage 52 until after hidden primitive and pixel rejection 50, computations can be eliminated for attributes associated with a subset of the primitives that are rejected as being hidden and unnecessary for the image.
[0045] Attribute gradient setup module 52 retrieves the vertex attributes from extended vertex cache 16A using the vertex cache index values for each of the vertices within the primitives. Attribute gradient setup module 52 computes gradients of attributes associated with the primitives for the image data. An attribute gradient comprises a difference between the attribute value at a first pixel and the attribute value at a second pixel within a primitive moving in either a horizontal (X) direction or a vertical (Y) direction. After attribute gradient setup module 52 computes gradients of attributes of all vertices within a primitive for the image data, attribute gradient setup module 52 may request extended vertex cache 16A to release storage space for the attributes of the vertices within the primitive.
[0046] Once the attribute gradient values are computed, attribute interpolator 54 interpolates the attributes over pixels within the primitives based on the attribute gradient values. Again, the same ALU 55B is used in the attribute gradient setup stage 52 and the attribute interpolator stage 54. The interpolated attribute values are input to fragment shader 56 to perform pixel rendering of the primitives. Fragment shader 56 determines surface properties of the image at pixels within the primitives for the image data. Results of fragment shader 56 are then output to post-processor 58 for presentation of the processed image on display 20.
[0047] In some cases, vertex shader 44 may not be applied to missed vertices within the image data. It may be assumed that vertex coordinates and attributes of all vertices within the image data are determined external to GPU pipeline 18A. Therefore, primitives formed with the missed vertices do not need vertex shader 44 to calculate attributes of the missed vertices. In this case, extended vertex cache 16A may operate as an extended vertex buffer. Command engine 42 may assign vertex index values that identify storage locations for the attributes within the extended vertex buffer and send the predetermined vertex coordinates and attributes of each of the vertices within the image data to the extended vertex buffer.
[0048] FIG. 4 is a block diagram illustrating GPU 14B, another exemplary embodiment of GPU 14 from FIG. 1, including a GPU pipeline 18B and an extended vertex cache 16B coupled to GPU pipeline 18B. In the illustrated embodiment, GPU pipeline 18B includes a command engine 62, a vertex shader 64, a triangle setup module 66 and a Z-Gradient setup module 67 (modules 66 and 67 are collectively referred to as primitive setup modules), a rasterizer 68, a hidden primitive and pixel rejection module 70, an attribute gradient setup module 72, an attribute interpolator 74, a fragment shader 76, and a post-processor 78. GPU 14B illustrated in FIG. 4 may operate substantially similarly to GPU 14A illustrated in FIG. 3, except for the initial processing of vertices in the image data. The different stages utilize ALUs 65A, 75A, 65B, 65C, 75B and 65D, respectively. Notably, ALUs 75A and 75B are each shared by two different stages of GPU pipeline 18B.
[0049] Command engine 62 receives image data, including geometry and rendering commands, for an image from controller 12 of device 10. Command engine 62 passes the image data along GPU pipeline 18B to the other processing stages. In other words, command engine 62 passes information for all the vertices within the image data to vertex shader 64.
[0050] In the embodiment of FIG. 4, vertex shader 64 is applied to all vertices within the image data and determines surface properties of the image at the vertices within the image data. In this way, vertex shader 64 generates vertex coordinates and attributes of each of the vertices within the image data. Vertex shader 64 then stores only the attributes in extended vertex cache 16B. Vertex shader 64 passes the vertex coordinates and vertex cache index values that indicate storage locations of the attributes within extended vertex cache 16B for each of the vertices in the image data along GPU pipeline 18B.
[0051] Since vertex shader 64 passes the vertex coordinates and vertex cache index values for the vertices in the image data directly to primitive setup and rejection module 66, all the buffers positioned between each of the processing stages may be removed from GPU pipeline 18B. Primitive setup modules 66 and 67 form primitives with one or more vertices within the image data. These primitive setup modules 66 and 67 may share one or more ALUs. Primitive setup and rejection module 66 may request extended vertex cache 16B to release storage space for the attributes associated with the rejected primitives.
[0052] Rasterizer 68 converts the primitives for the image data into pixels based on the XY coordinates of vertices within the primitives and the number of pixels included in the primitives. Hidden primitive and pixel rejection module 70 rejects hidden primitives and hidden pixels within the primitives using the early depth and stencil test based on the Z coordinates of the vertices within the primitives. Hidden primitive and pixel rejection module 70 may request extended vertex cache 16B to release storage space for the attributes associated with the rejected primitives.
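One way to model the release of storage space, purely as an assumption and not a detail recited here, is a per-slot reference count that rejection or completion decrements so that slots whose primitives were all rejected or fully processed can be recycled.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Assumed bookkeeping for releasing storage space: each cache slot keeps a
    // count of the primitives still referencing it.
    class ExtendedVertexCacheSlots {
    public:
        explicit ExtendedVertexCacheSlots(std::size_t slots) : refCount_(slots, 0) {}
        void addRef(std::uint32_t slot) { ++refCount_[slot]; }
        // Called when a primitive that uses this vertex is rejected or finished.
        void release(std::uint32_t slot) {
            if (refCount_[slot] > 0) --refCount_[slot];
        }
        bool isFree(std::uint32_t slot) const { return refCount_[slot] == 0; }
    private:
        std::vector<std::uint32_t> refCount_;
    };

    int main() {
        ExtendedVertexCacheSlots cache(256);
        cache.addRef(7);   // vertex in slot 7 is used by one primitive
        cache.release(7);  // that primitive was rejected: slot 7 may be recycled
        return cache.isFree(7) ? 0 : 1;
    }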
[0053] Attribute gradient setup module 72 retrieves the vertex attributes from extended vertex cache 16B using the vertex cache index values for each of the vertices within the primitives. Attribute gradient setup module 72 computes gradients of attributes associated with the primitives for the image data. After attribute gradient setup module 72 computes gradients of attributes of all vertices within a primitive for the image data, attribute gradient setup module 72 may request extended vertex cache 16B to release storage space for the attributes of the vertices within the primitive.
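The gradient computation just described and the interpolation described next reduce to the same multiply-add pattern, which is what makes sharing one ALU between the two stages practical. The following sketch uses standard plane-gradient math for a single scalar attribute; it is an illustration, not the arithmetic unit itself.

    #include <cstdio>

    struct Vtx { float x, y, attr; };  // position plus one scalar attribute

    // Attribute gradients from the same plane math used for Z gradients.
    void attributeGradients(const Vtx& v0, const Vtx& v1, const Vtx& v2,
                            float& dadx, float& dady) {
        float ax = v1.x - v0.x, ay = v1.y - v0.y, aa = v1.attr - v0.attr;
        float bx = v2.x - v0.x, by = v2.y - v0.y, ba = v2.attr - v0.attr;
        float invArea2 = 1.0f / (ax * by - bx * ay);
        dadx = (aa * by - ba * ay) * invArea2;
        dady = (ax * ba - bx * aa) * invArea2;
    }

    // Interpolation at a pixel: two multiply-adds per attribute, the same
    // operation mix the gradient setup uses, hence one ALU can serve both.
    float interpolate(const Vtx& v0, float dadx, float dady, float px, float py) {
        return v0.attr + dadx * (px - v0.x) + dady * (py - v0.y);
    }

    int main() {
        Vtx v0{0.0f, 0.0f, 0.0f}, v1{8.0f, 0.0f, 1.0f}, v2{0.0f, 8.0f, 0.0f};
        float dadx = 0.0f, dady = 0.0f;
        attributeGradients(v0, v1, v2, dadx, dady);
        std::printf("attribute at (4, 2) = %f\n", interpolate(v0, dadx, dady, 4.0f, 2.0f));
        return 0;
    }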
[0054] Once the attribute gradient values are computed, attribute interpolator 74 interpolates the attributes over pixels within the primitives based on the attribute gradient values by sharing one or more ALUs with attribute gradient setup module 72. The interpolated attribute values are then input to fragment shader 76 to perform pixel rendering of the primitives. Fragment shader 76 determines surface properties of the image at pixels within the primitives for the image data. Results of fragment shader 76 will be output to post-processor 78 for presentation of the processed image on display 20.
[0055] FIG. 5 is a flowchart illustrating an exemplary operation of processing an image within a GPU using an extended vertex cache. The operations of FIG. 5 will be described with reference to GPU 14 from FIG. 1, although similar techniques could be used with other GPUs. Extended vertex cache 16 may be created within GPU 14 during manufacture of device 10 and coupled to GPU pipeline 18 (80). Extended vertex cache 16 may be easily configured for different numbers of attributes and primitive types.
[0056] GPU 14 receives image data, which may include rendering commands and geometry, for an image from controller 12 of device 10 (82). The image data may correspond to representations of complex, two-dimensional or three-dimensional computerized graphics. GPU 14 sends the image data to GPU pipeline 18 to process the image for display on display 20 connected to device 10. GPU pipeline 18 stores attributes for vertices within the image data in extended vertex cache 16 (84). In some embodiments, GPU pipeline 18 temporarily stores vertex coordinates for the vertices within the image data in extended vertex cache 16.
[0057] GPU pipeline 18 then sends vertex coordinates that identify the vertices, and vertex cache index values that indicate storage locations of the attributes for each of the vertices in extended vertex cache 16, to other processing stages along GPU pipeline 18 (86). GPU pipeline 18 processes the image based on the vertex coordinates and the vertex cache index values for each of the vertices in the image data (88). During such processing, GPU pipeline 18 reuses one or more shared ALUs 15 along GPU pipeline 18 (89). Specifically, according to this disclosure, a shared ALU can be used for an attribute gradient setup stage and an attribute interpolation stage. The non-conventional ordering of the GPU pipeline may facilitate the sharing of an ALU by the attribute gradient setup stage and the attribute interpolation stage.
[0058] FIG. 6 is a flowchart illustrating another exemplary operation of processing an image with a GPU pipeline using shared ALUs. For purposes of explanation, the operation shown in FIG. 6 will be described with reference to GPU 14A from FIG. 3, although similar techniques could be used with other GPUs. Command engine 42 receives image data, including geometry and rendering commands, for an image and passes the image data along GPU pipeline 18A. As shown in FIG. 6, vertex shader 44 performs vertex shading using a first ALU 45A (91). Triangle setup module 46 performs triangle setup for any triangle primitives using a second ALU 55A (92). This second ALU 55A is reused by another stage insofar as Z-Gradient setup module 47 performs Z-Gradient setup using second ALU 55A (93). Rasterizer 48 then performs rasterizing using a third ALU 45B (94).
[0059] Hidden primitive and pixel rejection module 50 performs an early depth/stencil test using a fourth ALU 45C in order to remove primitives that will not be viewable in the final image (95). Such non-viewable primitives, for example, may be covered by other objects or shapes and can be removed from the image without sacrificing any image quality. Attribute gradient setup module 52 uses a fifth ALU 55B for attribute gradient setup (96), which, notably, does not occur with respect to rejected primitives. Attribute interpolator 54 then uses the fifth ALU 55B (97), which was also used for attribute gradient setup, in order to perform any interpolations. Fragment shader 56 performs fragment shading (98), and post processor 58 performs any final post processing prior to image display (99). As noted above, an extended vertex cache 16A may be implemented along GPU pipeline 18A in order to reduce complexity and eliminate the need to propagate large amounts of data through the respective stages. Instead, each respective stage that needs portions of the image data can access such data stored in extended vertex cache 16A.
[0060] A number of embodiments have been described. However, various modifications to these embodiments are possible, and the principles presented herein may be applied to other embodiments as well. The techniques and methods described herein may be implemented in hardware, software, and/or firmware. The various tasks of such methods may be implemented as sets of instructions executable by one or more arrays of logic elements, microprocessors, embedded controllers, or integrated processor cores. In one example, one or more such tasks are arranged for execution within a chipset that is configured to control operations of various devices of a personal communications device, such as a so-called cellular telephone.
[0061] In various examples, the techniques described in this disclosure may be implemented within a general purpose microprocessor, digital signal processor (DSP), application specific integrated circuit (ASIC), field programmable gate array (FPGA), or other equivalent logic devices. If implemented in software, the techniques may be embodied as instructions on a computer-readable medium such as random access memory (RAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, or the like. The instructions cause a machine, such as a programmable processor, to perform the techniques described in this disclosure.
[0062] As further examples, an embodiment may be implemented in part or in whole in a hard-wired circuit, in a circuit configuration fabricated into an application-specific integrated circuit, or as a firmware program loaded into non-volatile storage or a software program loaded from or into a data storage medium as machine-readable code, such code being instructions executable by an array of logic elements such as a microprocessor or other digital signal processing unit. The data storage medium may be an array of storage elements such as semiconductor memory (which may include without limitation dynamic or static RAM, ROM, and/or flash RAM) or ferroelectric, ovonic, polymeric, or phase-change memory, or a disk medium such as a magnetic or optical disk.
[0063] In this disclosure, various techniques have been described for processing images with a GPU using an extended vertex cache and one or more shared ALUs. The techniques may substantially eliminate bottlenecks in the GPU pipeline for primitives that include large numbers of attributes, and can promote efficient processing that substantially reduces idle time of ALUs. In addition, the techniques improve image processing speed within the GPU pipeline by deferring the attribute gradient setup to just before attribute interpolation in the GPU pipeline. More specifically, deferring the attribute gradient setup within the GPU pipeline until after rejection of a subset of the primitives that are unnecessary for the image may substantially reduce computations and power consumption as the attribute gradient setup will only be performed on a subset of the primitives that are necessary for the image. This arrangement of the stages also facilitates ALU sharing by the attribute gradient setup and attribute interpolation stages. These and other embodiments are within the scope of the following claims.

Claims

1. A method comprising: receiving image data for an image within a graphics processing unit (GPU) pipeline; and processing the image data within the GPU pipeline using a shared arithmetic logic unit for an attribute gradient setup stage and an attribute interpolator stage.
2. The method of claim 1, wherein the attribute interpolator stage immediately follows the attribute gradient setup stage in the GPU pipeline.
3. The method of claim 1, wherein the shared arithmetic logic unit comprises a first shared arithmetic logic unit, the method further comprising: using a second shared arithmetic logic unit for a triangle setup stage; and using the second shared arithmetic logic unit for a Z-Gradient setup stage.
4. The method of claim 3, further comprising using a shared lookup table for reciprocal operation for the triangle setup stage and the Z-Gradient setup stage.
5. The method of claim 4, wherein: the Z-Gradient setup stage immediately follows the triangle setup stage in the GPU pipeline; and the attribute interpolator stage immediately follows the attribute gradient setup stage in the GPU pipeline.
6. The method of claim 5, wherein the attribute gradient setup and attribute interpolator stages follow a hidden primitive and pixel rejection stage in the GPU pipeline.
7. The method of claim 6, wherein the hidden primitive and pixel rejection stage follows the Z-Gradient setup and triangle setup stages in the GPU pipeline.
8. The method of claim 1, further comprising: storing attributes for vertices within the image data in an extended vertex cache coupled to the GPU pipeline; and processing the image data within the GPU pipeline based on vertex coordinates that identify the vertices and vertex cache index values that indicate storage locations of the attributes within the extended vertex cache for each of the vertices within the image data.
9. A computer-readable medium comprising instructions that upon execution cause a machine to: receive image data for an image within a graphics processing unit (GPU) pipeline; and process the image data within the GPU pipeline using a shared arithmetic logic unit for an attribute gradient setup stage and an attribute interpolator stage.
10. The computer-readable medium of claim 9, wherein the machine comprises a programmable processor.
11. The computer readable medium of claim 9, wherein the attribute interpolator stage immediately follows the attribute gradient setup stage in the GPU pipeline.
12. The computer readable medium of claim 10, wherein the shared arithmetic logic unit comprises a first shared arithmetic logic unit, and wherein the instructions upon execution cause the machine to: use a second shared arithmetic logic unit for a triangle setup stage; and use the second shared arithmetic logic unit for a Z-Gradient setup stage.
13. The computer readable medium of claim 12, wherein the instructions upon execution cause the machine to use a shared lookup table for reciprocal operation for the triangle setup stage and the Z-Gradient setup stage.
14. The computer readable medium of claim 13, wherein: the Z-Gradient setup stage immediately follows the triangle setup stage in the GPU pipeline; and the attribute interpolator stage immediately follows the attribute gradient setup stage in the GPU pipeline.
15. The computer readable medium of claim 14, wherein the attribute gradient setup and attribute interpolator stages follow a hidden primitive and pixel rejection stage in the GPU pipeline.
16. The computer readable medium of claim 15, wherein the hidden primitive and pixel rejection stage follows the Z-Gradient setup and triangle setup stages in the GPU pipeline.
17. The computer readable medium of claim 9, wherein the instructions upon execution cause the machine to: store attributes for vertices within the image data in an extended vertex cache coupled to the GPU pipeline; and process the image data within the GPU pipeline based on vertex coordinates that identify the vertices and vertex cache index values that indicate storage locations of the attributes within the extended vertex cache for each of the vertices within the image data.
18. A device comprising: a graphics processing unit (GPU) pipeline that receives image data for an image and processes the image data within multiple stages, wherein the multiple stages include an attribute gradient setup stage and an attribute interpolator stage; and a shared arithmetic logic unit that performs attribute gradient setups and attribute interpolations associated with both the attribute gradient setup stage and the attribute interpolator stage.
19. The device of claim 18, wherein the attribute interpolator stage immediately follows the attribute gradient setup stage in the GPU pipeline.
20. The device of claim 18, wherein the shared arithmetic logic unit comprises a first shared arithmetic logic unit, the device further comprising a second shared arithmetic logic used for both a triangle setup stage and a Z-Gradient setup stage in the GPU pipeline.
21. The device of claim 20, further comprising a shared lookup table for reciprocal operation used in both the triangle setup stage and the Z-Gradient setup stage.
22. The device of claim 21, wherein: the Z-Gradient setup stage immediately follows the triangle setup stage in the GPU pipeline; and the attribute interpolator stage immediately follows the attribute gradient setup stage in the GPU pipeline.
23. The device of claim 22, wherein the attribute gradient setup and attribute interpolator stages follow a hidden primitive and pixel rejection stage in the GPU pipeline.
24. The device of claim 23, wherein the hidden primitive and pixel rejection stage follows the Z-Gradient setup and triangle setup stages in the GPU pipeline.
25. The device of claim 18, further comprising an extended vertex cache coupled to the GPU pipeline, wherein attributes for vertices within the image data are stored in the extended vertex cache, and the image is processed within the GPU pipeline based on vertex coordinates that identify the vertices and vertex cache index values that indicate storage locations of the attributes within the extended vertex cache for each of the vertices within the image data.
26. A device comprising: means for receiving image data for an image; means for processing the image data in an attribute gradient setup stage using a shared arithmetic logic unit; and means for processing the image data in an attribute interpolator stage using the shared arithmetic logic unit.
27. The device of claim 26, further comprising: means for using another shared arithmetic logic unit for a triangle setup stage; and means for using the another shared arithmetic logic unit for a Z-Gradient setup stage.
28. The device of claim 27, further comprising means for using a shared lookup table for reciprocal operation for the triangle setup stage and the Z-Gradient setup stage.
29. The device of claim 28, wherein the means for processing comprises a graphics processing unit (GPU) pipeline and wherein: the Z-Gradient setup stage immediately follows the triangle setup stage in the GPU pipeline; and the attribute interpolator stage immediately follows the attribute gradient setup stage in the GPU pipeline.
30. The device of claim 29, wherein the attribute gradient setup and attribute interpolator stages follow a hidden primitive and pixel rejection stage in the GPU pipeline, and the hidden primitive and pixel rejection stage follows the Z-Gradient setup and triangle setup stages in the GPU pipeline.
PCT/US2007/081428 2006-10-17 2007-10-15 Graphics processing unit with shared arithmetic logic unit WO2008048940A2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2009533470A JP2010507175A (en) 2006-10-17 2007-10-15 Graphics processing unit that uses a shared arithmetic processing unit
EP07854073A EP2084670A2 (en) 2006-10-17 2007-10-15 Graphics processing unit with shared arithmetic logic unit
CA002666064A CA2666064A1 (en) 2006-10-17 2007-10-15 Graphics processing unit with shared arithmetic logic unit

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/550,344 US8009172B2 (en) 2006-08-03 2006-10-17 Graphics processing unit with shared arithmetic logic unit
US11/550,344 2006-10-17

Publications (2)

Publication Number Publication Date
WO2008048940A2 true WO2008048940A2 (en) 2008-04-24
WO2008048940A3 WO2008048940A3 (en) 2009-04-30

Family

ID=39314778

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/081428 WO2008048940A2 (en) 2006-10-17 2007-10-15 Graphics processing unit with shared arithmetic logic unit

Country Status (8)

Country Link
US (1) US8009172B2 (en)
EP (1) EP2084670A2 (en)
JP (1) JP2010507175A (en)
KR (1) KR20090079241A (en)
CN (1) CN101523442A (en)
CA (1) CA2666064A1 (en)
TW (1) TW200830220A (en)
WO (1) WO2008048940A2 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8968087B1 (en) * 2009-06-01 2015-03-03 Sony Computer Entertainment America Llc Video game overlay
US20100277488A1 (en) * 2009-04-30 2010-11-04 Kevin Myers Deferred Material Rasterization
US20110242115A1 (en) * 2010-03-30 2011-10-06 You-Ming Tsao Method for performing image signal processing with aid of a graphics processing unit, and associated apparatus
US8228406B2 (en) * 2010-06-04 2012-07-24 Apple Inc. Adaptive lens shading correction
US9524572B2 (en) * 2010-11-23 2016-12-20 Microsoft Technology Licensing, Llc Parallel processing of pixel data
GB201103698D0 (en) * 2011-03-03 2011-04-20 Advanced Risc Mach Ltd Graphics processing
GB201103699D0 (en) 2011-03-03 2011-04-20 Advanced Risc Mach Ltd Graphic processing
WO2013100935A1 (en) * 2011-12-28 2013-07-04 Intel Corporation A method and device to augment volatile memory in a graphics subsystem with non-volatile memory
US20130271465A1 (en) * 2011-12-30 2013-10-17 Franz P. Clarberg Sort-Based Tiled Deferred Shading Architecture for Decoupled Sampling
WO2017007044A1 (en) * 2015-07-07 2017-01-12 삼성전자 주식회사 Signal processing device and method
US10699366B1 (en) 2018-08-07 2020-06-30 Apple Inc. Techniques for ALU sharing between threads

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1096427A2 (en) * 1999-10-28 2001-05-02 Nintendo Co., Limited Vertex cache for 3D computer graphics
US6549209B1 (en) * 1997-05-22 2003-04-15 Kabushiki Kaisha Sega Enterprises Image processing device and image processing method

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4965750A (en) * 1987-03-31 1990-10-23 Hitachi, Ltd. Graphic processor suitable for graphic data transfer and conversion processes
US4951232A (en) * 1988-09-12 1990-08-21 Silicon Graphics, Inc. Method for updating pipelined, single port Z-buffer by segments on a scan line
US5870509A (en) 1995-12-12 1999-02-09 Hewlett-Packard Company Texture coordinate alignment system and method
US5886711A (en) * 1997-04-29 1999-03-23 Hewlett-Packard Companu Method and apparatus for processing primitives in a computer graphics display system
JP3514945B2 (en) 1997-05-26 2004-04-05 株式会社ソニー・コンピュータエンタテインメント Image creation method and image creation device
US5914726A (en) 1997-06-27 1999-06-22 Hewlett-Packard Co. Apparatus and method for managing graphic attributes in a memory cache of a programmable hierarchical interactive graphics system
US7038692B1 (en) * 1998-04-07 2006-05-02 Nvidia Corporation Method and apparatus for providing a vertex cache
WO2000004482A2 (en) * 1998-07-17 2000-01-27 Intergraph Corporation Multi-processor graphics accelerator
US6157393A (en) * 1998-07-17 2000-12-05 Intergraph Corporation Apparatus and method of directing graphical data to a display device
US6552723B1 (en) * 1998-08-20 2003-04-22 Apple Computer, Inc. System, apparatus and method for spatially sorting image data in a three-dimensional graphics pipeline
US6690380B1 (en) 1999-12-27 2004-02-10 Microsoft Corporation Graphics geometry cache
US6885378B1 (en) * 2000-09-28 2005-04-26 Intel Corporation Method and apparatus for the implementation of full-scene anti-aliasing supersampling
US7098924B2 (en) * 2002-10-19 2006-08-29 Via Technologies, Inc. Method and programmable device for triangle interpolation in homogeneous space
US7036692B2 (en) 2003-02-19 2006-05-02 Graham Packaging Company, L.P. Dispenser with an integrally molded neck finish
US7259765B2 (en) * 2003-04-04 2007-08-21 S3 Graphics Co., Ltd. Head/data scheduling in 3D graphics
US7418606B2 (en) * 2003-09-18 2008-08-26 Nvidia Corporation High quality and high performance three-dimensional graphics architecture for portable handheld devices
US20050206648A1 (en) 2004-03-16 2005-09-22 Perry Ronald N Pipeline and cache for processing data progressively
US7570267B2 (en) 2004-05-03 2009-08-04 Microsoft Corporation Systems and methods for providing an enhanced graphics pipeline
US7710427B1 (en) * 2004-05-14 2010-05-04 Nvidia Corporation Arithmetic logic unit and method for processing data in a graphics pipeline
US7142214B2 (en) 2004-05-14 2006-11-28 Nvidia Corporation Data format for low power programmable processor
US7505036B1 (en) 2004-07-30 2009-03-17 3Dlabs Inc. Ltd. Order-independent 3D graphics binning architecture
US7639252B2 (en) * 2004-08-11 2009-12-29 Ati Technologies Ulc Unified tessellation circuit and method therefor
US6972769B1 (en) 2004-09-02 2005-12-06 Nvidia Corporation Vertex texture cache returning hits out of order
US7233334B1 (en) 2004-09-29 2007-06-19 Nvidia Corporation Storage buffers with reference counters to improve utilization
EP1883045A4 (en) * 2005-05-20 2016-10-05 Sony Corp Signal processor
US7492373B2 (en) * 2005-08-22 2009-02-17 Intel Corporation Reducing memory bandwidth to texture samplers via re-interpolation of texture coordinates

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6549209B1 (en) * 1997-05-22 2003-04-15 Kabushiki Kaisha Sega Enterprises Image processing device and image processing method
EP1096427A2 (en) * 1999-10-28 2001-05-02 Nintendo Co., Limited Vertex cache for 3D computer graphics

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DEERING, MICHAEL F. AND NELSON, SCOTT R.: "Leo: a system for cost effective 3D shaded graphics" SIGGRAPH '93: PROCEEDINGS OF THE 20TH ANNUAL CONFERENCE ON COMPUTER GRAPHICS AND INTERACTIVE TECHNIQUES, 1993, pages 101-108, XP002516786 New York, NY, USA *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101118594B1 (en) 2008-12-02 2012-02-27 한국과학기술원 Apparatus and method for sharing look-up table
JP2011044143A (en) * 2009-08-21 2011-03-03 Intel Corp Technique to store and retrieve image data
CN101976432A (en) * 2010-11-22 2011-02-16 长沙景嘉微电子有限公司 Implementation of hierarchical cutting strategy in graphic chip design
CN101976432B (en) * 2010-11-22 2012-02-08 长沙景嘉微电子有限公司 Implementation of hierarchical cutting strategy in graphic chip design

Also Published As

Publication number Publication date
US8009172B2 (en) 2011-08-30
US20080030512A1 (en) 2008-02-07
EP2084670A2 (en) 2009-08-05
CA2666064A1 (en) 2008-04-24
CN101523442A (en) 2009-09-02
TW200830220A (en) 2008-07-16
WO2008048940A3 (en) 2009-04-30
JP2010507175A (en) 2010-03-04
KR20090079241A (en) 2009-07-21


Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase (Ref document number: 200780038381.0; Country of ref document: CN)
ENP Entry into the national phase (Ref document number: 2666064; Country of ref document: CA)
WWE Wipo information: entry into national phase (Ref document number: 712/MUMNP/2009; Country of ref document: IN)
ENP Entry into the national phase (Ref document number: 2009533470; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
WWE Wipo information: entry into national phase (Ref document number: 1020097010016; Country of ref document: KR; Ref document number: 2007854073; Country of ref document: EP)
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 07854073; Country of ref document: EP; Kind code of ref document: A2)
DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)