GB2425030A - Managed network render targets for routing graphical information - Google Patents

Managed network render targets for routing graphical information

Info

Publication number
GB2425030A
Authority
GB
United Kingdom
Prior art keywords
render target
render
shader
texture
textures
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0507252A
Other versions
GB0507252D0 (en)
Inventor
Richard Steven Faria
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TENOMICHI Ltd
Original Assignee
TENOMICHI Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TENOMICHI Ltd filed Critical TENOMICHI Ltd
Priority to GB0507252A priority Critical patent/GB2425030A/en
Publication of GB0507252D0 publication Critical patent/GB0507252D0/en
Priority to PCT/GB2006/001003 priority patent/WO2006109011A2/en
Publication of GB2425030A publication Critical patent/GB2425030A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/60Memory management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/50Lighting effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Image Generation (AREA)
  • Image Processing (AREA)

Abstract

A matrix switch and render target controller 1 provide a convenient and organised way of handling render targets 5, 4, 3. The arrangement comprises a software or hardware switch that can be used to connect shaders 8, 7, 6 together, enabling more complex shader effects to be run without exceeding the graphics card's maximum number of pixel shader instructions. The switch can also connect render targets to programs and to image processors that can use a render target's contents as a controller for other programs. The render target switch and its controller create a unified video memory architecture, which can be used as a connection point between programs and shaders running on the GPU and CPU, and may also incorporate a high-speed bus.

Description

MEMORY MANAGEMENT AND VIDEO PROCESSING
Technical Field
The present invention relates to the use of memory in electronic systems and how it is used by hardware components such as a Graphical Processing Unit (GPU), Central Processing Unit (CPU) and Personal Computer (PC), and by software systems.
Background Art
Render targets (p-buffers) have often been used as a method to store and send textures between programs or from one part of a program to another. There is no set way to use render targets and, as a result, each program uses them in a different way. If we look at, for example, computer games and how they are written, the way the graphics memory is used can vary from one game or application to the next. The memory usage is often written into the application software and, as a result, it often causes inefficiencies in the graphics card's performance. These inefficiencies are especially noticeable in the use of render targets. Render targets are used as and when a program needs them and, as a result, often interfere with the graphics card's performance during use.
Modern graphics cards are capable of running small programs on them called shaders.
These shaders run on the GPU and perform functions on the video texture. At present, the size and complexity of these shaders is limited by the size of the instruction set the graphics card supports.
Summary Of the Invention
The invention, called Managed Networked Render Targets (MNRT), uses a program called a Render Target Controller (RTC) to manage the memory of a video card and the memory of a CPU by controlling the networking and switching of render targets (RTs), where a render target is a reserved portion of the graphics memory used to hold a texture. The MNRT, the RTC and the switch may be implemented as software or as hardware in the form of an integrated circuit, which may be directly connected to any part of the graphics card including the GPU. The invention may also include a high-speed bus specifically to connect render targets in both video card memory and main PC memory.
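As an illustration of this architecture, the following minimal sketch models the controller and switch on the CPU in Python: render targets are plain pixel buffers, shaders are per-pixel functions, and a routing table stands in for the matrix switch. The class and method names (RenderTargetController, connect, run) are hypothetical and are not taken from the patent.

```python
# Hypothetical CPU-side model of the RTC and matrix switch described above.

class RenderTargetController:
    def __init__(self):
        self.render_targets = {}   # name -> list of pixels (a "reserved portion of memory")
        self.shaders = {}          # name -> per-pixel function standing in for a shader
        self.routes = []           # (source RT, shader, destination RT) crosspoints

    def create_render_target(self, name, size):
        self.render_targets[name] = [(0.0, 0.0, 0.0)] * size

    def register_shader(self, name, func):
        self.shaders[name] = func

    def connect(self, src_rt, shader, dst_rt):
        # Throwing one crosspoint of the matrix switch: src -> shader -> dst.
        self.routes.append((src_rt, shader, dst_rt))

    def run(self):
        # Resolve every routed connection in order.
        for src, shader, dst in self.routes:
            func = self.shaders[shader]
            self.render_targets[dst] = [func(p) for p in self.render_targets[src]]


rtc = RenderTargetController()
rtc.create_render_target("rt_source", 4)
rtc.create_render_target("rt_result", 4)
rtc.register_shader("invert", lambda p: tuple(1.0 - c for c in p))
rtc.connect("rt_source", "invert", "rt_result")
rtc.run()
print(rtc.render_targets["rt_result"])
```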
The output of a shader can be written to a render target (RT). Once a texture has been written to an RT it can then be used as an input to another shader. This process can be programmed for each inter-connect and thus shaders can be daisy-chained together. The MNRT and the RTC enable these shaders to be connected in an organised manner and can create complex connections and routings that are not otherwise possible unless very complex bespoke software solutions are written. The controlled switching and routing of render targets is especially important in programs that require textures to be imported into and exported from different 3D scenes, where each scene may be using multiple textures and RTs, such as, but not restricted to, video editing programs.
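A minimal sketch of the daisy-chaining idea, again with hypothetical names (run_pass, chain) and CPU-side stand-ins for shaders: each pass writes into an intermediate buffer that becomes the next pass's input, so no single shader has to hold the whole effect.

```python
def run_pass(shader, texture):
    # One shader pass over a texture held in a render target (a list of pixels).
    return [shader(p) for p in texture]

def chain(shaders, texture):
    # Route the output render target of each pass into the next pass's input.
    for shader in shaders:
        texture = run_pass(shader, texture)
    return texture

desaturate = lambda p: (sum(p) / 3.0,) * 3           # first "shader"
darken     = lambda p: tuple(c * 0.5 for c in p)     # second "shader"

source = [(1.0, 0.2, 0.2), (0.2, 1.0, 0.2)]
print(chain([desaturate, darken], source))
```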
The invention creates an N number of RTs and places them in a matrix array so that any RT's texture can be accessed, copied, transferred or deleted. The RTC manages each RT's input and output and thus allows textures to be transferred between shaders no matter where those shaders are in the shader pipeline. It can also send the texture of an RT to the CPU so it can be processed outside the GPU pipeline, then copy the processed texture back into an RT and use a shader to insert it back into the GPU pipeline for eventual display on the monitor.
The RTC can combine two RT textures, feed the result into a third RT and connect this RT to a shader; thus the output of N shaders can be merged. The RTC can also copy an RT to another RT, so a single shader output can be fed to N shader inputs.
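The merging and fan-out routings described above could be sketched as follows; merge, fan_out and the example combine function are hypothetical stand-ins rather than any real GPU API.

```python
def merge(rt_a, rt_b, combine):
    # Combine two render targets pixel by pixel into a third.
    return [combine(a, b) for a, b in zip(rt_a, rt_b)]

def fan_out(rt, shaders):
    # Copy one render target's texture into the input of N shaders.
    return [[shader(p) for p in rt] for shader in shaders]

rt1 = [(0.8, 0.1, 0.1)]
rt2 = [(0.1, 0.1, 0.8)]
average = lambda a, b: tuple((x + y) / 2 for x, y in zip(a, b))

rt3 = merge(rt1, rt2, average)                     # outputs of two shaders merged
branches = fan_out(rt3, [lambda p: p, lambda p: tuple(1 - c for c in p)])
print(rt3, branches)
```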
The RTC can create shader loops and feedback loops to produce recursive shader effects.
The RTC can be controlled by external programs and also by the texture in an RT. The RTC can copy RTs from other programs and from shaders run on the CPU. The RTC can be programmed so that the information contained in one RT controls how the RTC affects the information in another RT.
The texture used can be either static, such as a bitmap image, or dynamic, such as a video. If the texture is dynamic then the RTC can synchronise itself with the video pipeline and can thus create different connections with each pipeline clock cycle or part cycle, or send each pixel down a different sequence of shaders. Shaders and render targets can be run either on the graphics card or on the main PC, and thus MNRT can form a Unified Video Memory Architecture (UVMA) that uses all the available memory as a virtual video memory, whether it is on the graphics card or on the main PC motherboard.
MNRT can also be used to provide links between separate 3D scenes. If we look at a typical 3D scene, it consists of a world, a camera, a light and an object. The camera in that scene creates a texture of the object, at the right angle and focal length, lit by the scene light with ambient and other types of light. The camera texture can be connected to the RTC and held in an RT. From here the texture can be fed into a shader or be used as a texture for display in the same scene or another scene. The RTC becomes a matrix switch controlling views from different scenes or part views of scenes.
Brief Description of Drawings
Figure 1 shows a render target controller 1 connected to a render target matrix switch 2.
The texture 9 is connected to a shader 8 input and the output of shader 8 is connected to a render target 5. The render target 5 is connected to the input of a shader 7 and the output of the shader 7 is connected to the frame buffer 10 of the video card 11.
Figure 2 shows a render target controller 1 connected to a render target matrix switch 2.
A texture 9 is connected to a shader 8 input and the output of shader 8 is connected to a render target 5. Render target 5 is connected to the input of shader 7 and the output of shader 7 is connected to the frame buffer 10 of the video card GPU 11. Render target 5 is also connected to the texture 9.
Figure 3 shows a render target controller 1 connected to a render target matrix switch 2.
A texture 9 is connected to a shader 8 input and the output of shader 8 is connected to a render target 5. The render target 5 is connected to the input of shader 7 and the output of shader 7 is connected to a program 12.
Figure 4 shows a render target controller 1 connected to a render target matrix switch 2.
A texture 9 is connected to a render target 5, which is then connected to both another render target 4 and a shader 8. The shader 8 is connected to a certain part of a program 13. The render target 4 is connected to a shader 7 and the shader 7 is connected to a certain part of a program 12.
Figure 5 shows a render target controller 1 connected to a render target matrix switch 2.
A texture 9 is connected to a render target 5. Render target 5 is connected to both a render target 4 and a shader 8. Render target 4 is connected to a shader 7 and is then connected to render target 3. Shader 8 is connected to render target 14. Both render target 3 and render target 14 are connected to shader 6, which is then connected to a program 12.
Figure 6 shows a render target controller 1 connected to a render target matrix switch 2.
A texture 9 is switched between render target 5 in position A and shader 8 in position B. The switch may be controlled by the render target controller 1 via an external program 12. When the texture 9 is connected to render target 5, it is then connected to shader 7, then to render target 3, then to shader 6 and then to a program 12 or to the frame buffer. When texture 9 is connected to shader 8 it is then connected to render target 14, which is connected to shader 6 and then to a program 12.
Figure 7 shows a render target controller 1 connected to a render target matrix switch 2.
A texture 9 is switched between render target 5 in position A and shader 8 in position B. The switch may be controlled by the content of render target 14 and this content may be analysed by a separate image processing system 15 that may control the switching.
When the texture 9 is connected to render target 5, it is then connected to shader 7 then to render target 3 and then to shader 6 and then to a program 12 or to the frame buffer.
When texture 9 is connected to shader 8 it is then connected to render target 14 which is connected to shader 6 and then to a program 12 or to a frame buffer.
Figure 8 shows a render target controller 1 connected to a render target matrix switch 2 where the render targets are both on the graphics card GPU and in the main memory of the computer PC. Shaders both on the graphics card and running on the PC can be accessed. A texture 9 is connected to a render target 5 and then connected to a shader 7. Shader 7 connects to render target 3 in the PC's main memory. Render target 3 is then analysed by an image processing system 15 and then sent to render target 14, which is also in the PC's main memory. Render target 14 is then connected to shader 6, which is a shader running on the CPU and is connected to a program 12.
Figure 9 shows a render target controller 1 connected to a render target matrix switch 2 where render targets 5 and 4 are used to link two cameras 17 and 18 from two different scenes W1 and W2 to two different objects 19 and 20, which have textures 9 and 16 on their surfaces. The two scenes have lights in them, 21 and 22.
Figure 10 shows a render target controller 1 connected to a render target matrix switch 2. A program 12 controls the position of object 19 and, when activated by the throwing of switch A B, it moves object 19 to position 19A. A scene W3 contains a camera 17, a light 21 and an object 19. The texture from the camera is sent to shader 8 via render target 5. Shader 8 controls the render target controller 1 via a circuit or software connection as indicated by the dotted line joining them.
Figure 11 shows a render target controller 1 connected to a render target matrix switch 2.
A clock 25 controls the render target controller 1 and enables the texture 9 to be split into four parts with each part going to a render target. These render targets, 5, 4, 3 and 14, are then connected to four shaders 8, 7, 6 and 23. The render target controller 1 is also connected to a database 24.
Detailed Description of the Preferred Embodiment
As shown in Figure 1, a render target controller 1 uses a matrix software or hardware switch 2 to form a switching mechanism between a number of render targets 5, 4 and 3 and a number of shaders 8, 7 and 6. The source texture 9 can be either static or dynamic.
Texture 9 is connected to the input of shader 8 and the output of shader 8 is connected to render target 5.
The texture is then sent to shader 7 and then to the frame buffer 10 and on to the video card 11. Shader 8 and shader 7 affect the texture 9 in different ways. If shader 8 and shader 7 were combined into one shader it would be too large to run on the graphics card, because the combined shader would require more instructions than the graphics card permits. This method enables complex shaders to be run on the graphics card by daisy-chaining them together.
Figure 2 shows a feedback arrangement where a texture 9 is connected to shader 8.
From here it is connected to render target 5 and then connected back to texture 9.
Render target 5 is also connected to shader 7 and then to the frame buffer 10 and to the graphics card 11. As texture 9 is passed through shader 8 it is affected and then passed to shader 7. The affected texture is then passed through shader 8 again, where it is affected again. As a result of passing texture 9 continuously through shader 8, the resulting texture reaching shader 7 is progressively changed at a per-pixel, part-texture or complete-texture level. This method of looping shaders and feeding the results to other shaders can create complex effects, especially when a number of loops are connected together.
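A small sketch of this feedback arrangement, with hypothetical names and the shader reduced to a per-pixel function: the output is routed back through the same shader each iteration, so the texture that would reach shader 7 changes progressively.

```python
def feedback_loop(shader, texture, iterations):
    results = []
    for _ in range(iterations):
        texture = [shader(p) for p in texture]   # pass through the same shader again
        results.append(texture)                  # what the downstream shader would see
    return results

fade = lambda p: tuple(c * 0.8 for c in p)        # each pass dims the texture a little
for frame in feedback_loop(fade, [(1.0, 1.0, 1.0)], 3):
    print(frame)
```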
Figure 3 shows a render target 5 forming an inter-connect between two shaders 8 and 7.
In this example the output of shader 7 is linked to a program 12. This method enables shaders to be used in programs whose output does not necessarily end up in the graphics card frame buffer; it enables shaders to be used outside the graphics pipeline. A similar method can be seen in Figure 4, with the matrix 2 connecting the same texture 9, via two shaders 8 and 7, to a single program or to two separate programs 12 and 13. Figure 4 also shows how one texture, or even one shader output, can be split into two textures, which can then be processed differently by two separate shaders.
Figure 5 shows in greater depth a method of branching and combining textures with render targets and shaders. The texture 9 is first split into two textures by render target 5 and these are connected directly to shader 8 and, via render target 4, to shader 7. Shader 8 then connects to render target 14 and shader 7 connects to render target 3. Render target 3 and render target 14 are connected to shader 6 and a single texture is then sent to the program 12. Each of the shaders might carry out a completely different type of operation such as hue, saturation and intensity (HSI) adjustment, contrast adjustment, Sobel edge detection or any other pixel shader operation. They could also be vertex shaders, in which case 3D meshes are adjusted. Shader 6 could combine the two input textures or even use one texture as a controller of the other.
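The branch-and-combine routing of Figure 5 might be sketched like this, with hypothetical names and deliberately simple stand-in shaders; the final combiner uses one branch as a per-pixel controller of the other, as suggested above.

```python
def branch_and_combine(texture, shader_a, shader_b, combiner):
    branch_a = [shader_a(p) for p in texture]     # e.g. an HSI-style adjustment
    branch_b = [shader_b(p) for p in texture]     # e.g. a contrast or edge pass
    return [combiner(a, b) for a, b in zip(branch_a, branch_b)]

boost_red = lambda p: (min(1.0, p[0] * 1.5), p[1], p[2])
to_gray   = lambda p: (sum(p) / 3.0,) * 3
# Use the grey branch as a per-pixel mask controlling the boosted branch.
mask_mix  = lambda a, b: tuple(c * b[0] for c in a)

print(branch_and_combine([(0.5, 0.5, 0.5), (0.9, 0.1, 0.1)],
                         boost_red, to_gray, mask_mix))
```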
An external program can also control the render target controller 1 as shown in Figure 6.
This shows a method of how an external program 12 can control a switch A B so that a texture 9 is switched between shader 8 and shader 7. The render target controller might throw the switch A B on a per-pixel basis. In this scenario the first pixel of texture 9 is sent to shader 7 via render target 5 and the second pixel is sent to shader 8. The complete image is combined in shader 6. The graphics card might be configured to have, for example, n shaders, where every single pixel in an image is sent to a different shader.
The combined image is then sent to a program. This also demonstrates that the render target controller could connect, for example, a different shader to a different render target every n seconds. If we wanted to process the top half of an image with shader 8 and the bottom half with shader 7, the switch A B would start in position B until all the pixels in the top half had been processed and then switch to position A. As a further example, if shader 8 removed the blue and red from a dynamic texture and shader 7 carried out Sobel edge detection, then shader 6 would be able to key out anything with low luma, resulting in a green outline of the original image. This shows how a single texture can be processed by two shaders in parallel and then have both results feed into a single shader for processing and display, or as a feed into another program.
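A rough sketch of this region-based switching, assuming the switch is thrown once per row rather than per pixel for brevity; all names are hypothetical.

```python
def switched_process(rows, shader_top, shader_bottom):
    half = len(rows) // 2
    out = []
    for i, row in enumerate(rows):
        # Switch stays in one position for the top half, then flips for the bottom.
        shader = shader_top if i < half else shader_bottom
        out.append([shader(p) for p in row])
    return out

drop_blue_red = lambda p: (0.0, p[1], 0.0)   # stand-in for shader 8's effect
identity      = lambda p: p                  # stand-in for shader 7

image = [[(0.2, 0.8, 0.3)] * 2, [(0.9, 0.1, 0.7)] * 2]
print(switched_process(image, drop_blue_red, identity))
```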
Figure 7 shows a similar configuration to Figure 6 but here the output of render target 14 is also fed to the render target controller 1. The render target controller sends this texture to an external image processor 15, which analyses the image and uses information in it to control the speed of the switch A B, and thus which shader array the next texture or part texture is fed through. The output is fed to a program 12. Figure 7 could also be used to demonstrate how textures, shaders, render targets, the render target controller 1 and the matrix switch 2 can be used as a logic gate array as used in electronics, where the colour of the texture or part texture would denote the charge of the gate. The shaders could be used as transistors to switch the textures, and the render targets as capacitors to store the textures.
Figure 8 shows a method of how the render target controller 1 could be used to control render targets and shaders running on both the graphics card GPU and the main PC CPU, forming a unified video memory architecture. Shaders can be run in software on a CPU. Figure 8 shows shaders running both on the GPU and on the CPU, connected via a matrix 2 to render targets in the video memory of the graphics card and on the main PC board. Render targets 5 and 4 and shaders 8 and 7 are all on the graphics card, while render targets 3 and 14, shader 6 and the image processor 15 are all running on the main PC.
The render target controller and the render targets provide a bridge between the main board memory and the video card memory, enabling the swapping and routing of textures so that the program views the total system memory as one single unified memory. This method has great advantages in the creation and control of effects that require both the CPU and the GPU to execute.
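One way to picture the unified memory idea is a pool in which each render target is tagged with its location and the controller moves textures between pools on demand; this is a hypothetical CPU-side model, not the actual bus or driver mechanism.

```python
class UnifiedRenderTargetPool:
    def __init__(self):
        self.targets = {}   # name -> {"location": "gpu" or "cpu", "texture": [...]}

    def create(self, name, location, texture):
        self.targets[name] = {"location": location, "texture": list(texture)}

    def move(self, name, location):
        # In a real system this would be a transfer over the bus; here it is just
        # a relabel plus copy, which is all the routing layer needs to model.
        entry = self.targets[name]
        self.targets[name] = {"location": location, "texture": list(entry["texture"])}

    def texture(self, name):
        return self.targets[name]["texture"]   # location-independent access by name

pool = UnifiedRenderTargetPool()
pool.create("rt3", "gpu", [(0.1, 0.2, 0.3)])
pool.move("rt3", "cpu")                          # hand the texture to a CPU-side shader
print(pool.targets["rt3"]["location"], pool.texture("rt3"))
```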
Figure 9 shows two 3D scenes W1 and W2. Scene W1 in this example contains an object 19 which is being lit by a light 21 and has a camera 17 pointing towards it. Scene W2 contains an object 20 which is being lit by a light 22 and has a camera 18 pointing towards it. The render target controller 1 has connected camera 17 to a render target 5 and then connected this to a shader 8. The output of shader 8 is a texture 9 which is mapped onto object 20. The camera 18 in 3D scene W2 creates a texture and connects this to render target 4; the render target is then copied to texture T2 to map onto object 19. This shows how the render target controller and managed networked render targets can be used to link 3D scenes. The linking of 3D scenes enables static and dynamic textures to be created in separate 3D scenes and then merged into other scenes. If object 19 were to rotate and move then the texture created by camera 17 would cause the surface of object 20 to move. By using the render target controller and render targets to switch the connection of camera and object textures and shaders, extremely complex effects and results can be created. If, for example, object 19 were to rotate then the mapped texture surface of object 20 would rotate. If the light 21 were set to black then the colour of object 20 would be black. If there were 360 scenes in which object 19 were progressively rotated by 1 degree in each, then object 19 would look as if it were rotating simply by switching from one scene to the next. Object 19 would look as if it were rotating in the opposite direction if the scenes were switched in reverse order. In addition to simply switching between scenes, managed networked render targets and the render target controller can detect and respond to what happens in a scene. If, for example, there were two objects in scene W1 moving randomly, they would at some point simultaneously cross the camera view. The texture produced would show two objects joining to become one and then separating to become two objects again. Shader 8 could be programmed to detect this and to send a signal to the render target controller so that the matrix 2 activates switch A B and switch X Y. The result would be to re-route the textures, as indicated by the dotted lines, changing the textures on the objects in each scene. Thus the render target controller and managed networked render targets can be used to control the camera textures and object textures within each scene and between scenes. They can also be used to switch shaders between scenes and render targets between scenes.
In Figure 10 a camera 17 is connected to a shader 8 via a render target 5. When, for example, the shader 8 receives twenty-five identical textures from the scene W1, it sends a message to the render target controller 1 and the render target controller 1 connects render target 4 to the program 12. The program 12 then moves the object 19 to position 19A. If the frame rate is set to 25 frames per second then the resulting dynamic texture sent to the frame buffer would show object 19 moving once every second. This method demonstrates how a render target controller and managed networked render targets can be used to instruct programs to move objects in 3D scenes.
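The trigger in Figure 10 amounts to counting identical frames and notifying a program once a threshold is reached, which could be sketched as follows (hypothetical names, with frames reduced to directly comparable values).

```python
def watch_for_still_frames(frames, threshold, on_trigger):
    previous, run = None, 0
    for frame in frames:
        run = run + 1 if frame == previous else 1
        previous = frame
        if run == threshold:
            on_trigger()        # e.g. ask program 12 to move object 19 to position 19A
            run = 0

frames = [("red",)] * 25 + [("blue",)]
watch_for_still_frames(frames, 25, lambda: print("move object to new position"))
```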
Figure 11 demonstrates how a clock from an external program or hardware can be used to control the render target controller 1 and matrix switch 2. The texture 9 is split into four sections, with each section being sent to render targets 5, 4, 3 and 14. The clock 25 controls the size of these segments. The render target controller 1 is also connected in this example to a database 24, which might monitor the render target controller's behaviour and store the results in an external database table so that other programs could refer to them, or so they could be used to improve the render target controller's switching patterns.
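The clock-driven split of Figure 11 could be modelled by letting a clock value choose the number of segments on each cycle; the sketch below uses hypothetical names and a one-dimensional texture for brevity.

```python
def split_by_clock(texture, segments):
    # Divide the texture into roughly equal segments; the clock could change
    # `segments` (and hence the segment size) on every cycle.
    size = max(1, len(texture) // segments)
    return [texture[i:i + size] for i in range(0, len(texture), size)]

texture = list(range(8))             # stand-in for a row of pixels
for rt_index, segment in enumerate(split_by_clock(texture, 4)):
    print(f"render target {rt_index}: {segment}")
```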

Claims (16)

  1. 1. A method of controlling and routing camera textures, render targets, shaders, programs and other software objects and 3D scenes by using a switch to control render targets which have been stored in video memory on a graphics card or in the memory of a personal computer.
  2. 2. A method according to Claim 1 wherein the input and the output of shaders are connected together via a render target so as to enable more shaders to be run than the graphics card permits.
  3. 3. A method according to Claim 1 or Claim 2 wherein textures are duplicated to enable them to be fed to different parts of a program or graphics hardware.
  4. 4. A method according to any preceding claim, wherein render targets are used to connect the inputs and outputs of shaders in conjunction with a render target switch to create complex shader effects by creating feedback loops.
  5. 5. A method according to any preceding claim, wherein the render target switch enables textures to be combined by a shader into new textures.
  6. 6. A method according to any preceding claim, wherein the control and routing of textures is determined by the input of an external program to a switch.
  7. 7. A method according to any preceding claim, wherein the control and routing of textures is controlled by the texture's format, appearance or characteristics, or the format, appearance or characteristics of another texture.
  8. 8. A method according to any preceding claim, wherein the control and routing of textures is performed between the video RAM in the graphics system and the main board RAM of a personal computer to form a unified video memory irrespective of the memory location.
  9. 9. A method according to any preceding claim, wherein a program running in a personal computer's main memory can be controlled by the graphics card software by the use of render target switching.
  10. 10. A method according to any preceding claim, wherein a program running in the graphics card can be controlled by a program running in a personal computer's main memory by the use of render target switching.
  11. 11. A method according to any preceding claim, wherein the switching of render targets can control a program.
  12. 12. A method according to any preceding claim, wherein the switching of dynamic textures can control a program.
  13. 13. A method according to any preceding claim, wherein a switch is controlled by a texture's content.
  14. 14. A method according to any preceding claim, wherein the content of a render target is used to control the textures in a 3D scene.
  15. 15. A method according to any preceding claim, wherein the content of a render target is used to control the lights in a 3D scene.
  16. 16. A method according to any preceding claim, wherein the content of a render target is used to control objects in a 3D scene.
  17. 17. A method according to any preceding claim, wherein the content of a render target is used to control the vertices and polygons or texture mapping characteristics of objects in a 3D scene.
  18. 18. A method according to any preceding claim, wherein the behaviour and characteristics of different switching patterns of a render target switch are stored in a database or table to enable the monitoring, improving or creating of switching patterns.
GB0507252A 2005-04-09 2005-04-09 Managed network render targets for routing graphical information Withdrawn GB2425030A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB0507252A GB2425030A (en) 2005-04-09 2005-04-09 Managed network render targets for routing graphical information
PCT/GB2006/001003 WO2006109011A2 (en) 2005-04-09 2006-03-17 Memory management and video processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0507252A GB2425030A (en) 2005-04-09 2005-04-09 Managed network render targets for routing graphical information

Publications (2)

Publication Number Publication Date
GB0507252D0 GB0507252D0 (en) 2005-05-18
GB2425030A true GB2425030A (en) 2006-10-11

Family

ID=34610900

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0507252A Withdrawn GB2425030A (en) 2005-04-09 2005-04-09 Managed network render targets for routing graphical information

Country Status (2)

Country Link
GB (1) GB2425030A (en)
WO (1) WO2006109011A2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7710417B2 (en) 2007-01-15 2010-05-04 Microsoft Corporation Spatial binning of particles on a GPU
US8432405B2 (en) 2008-06-26 2013-04-30 Microsoft Corporation Dynamically transitioning between hardware-accelerated and software rendering

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0496439A2 (en) * 1991-01-15 1992-07-29 Koninklijke Philips Electronics N.V. Computer system with multi-buffer data cache
US5798770A (en) * 1995-03-24 1998-08-25 3Dlabs Inc. Ltd. Graphics rendering system with reconfigurable pipeline sequence
US6067090A (en) * 1998-02-04 2000-05-23 Intel Corporation Data skew management of multiple 3-D graphic operand requests
US20030189574A1 (en) * 2002-04-05 2003-10-09 Ramsey Paul R. Acceleration of graphics for remote display using redirection of rendering and compression

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7564460B2 (en) * 2001-07-16 2009-07-21 Microsoft Corporation Systems and methods for providing intermediate targets in a graphics system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0496439A2 (en) * 1991-01-15 1992-07-29 Koninklijke Philips Electronics N.V. Computer system with multi-buffer data cache
US5822757A (en) * 1991-01-15 1998-10-13 Philips Electronics North America Corporation Computer system with multi-buffer data cache for prefetching data having different temporal and spatial localities
US5798770A (en) * 1995-03-24 1998-08-25 3Dlabs Inc. Ltd. Graphics rendering system with reconfigurable pipeline sequence
US6067090A (en) * 1998-02-04 2000-05-23 Intel Corporation Data skew management of multiple 3-D graphic operand requests
US20030189574A1 (en) * 2002-04-05 2003-10-09 Ramsey Paul R. Acceleration of graphics for remote display using redirection of rendering and compression

Also Published As

Publication number Publication date
WO2006109011A3 (en) 2007-02-15
WO2006109011A2 (en) 2006-10-19
GB0507252D0 (en) 2005-05-18
WO2006109011B1 (en) 2007-04-12

Similar Documents

Publication Publication Date Title
US7663621B1 (en) Cylindrical wrapping using shader hardware
US8284207B2 (en) Method of generating digital images of objects in 3D scenes while eliminating object overdrawing within the multiple graphics processing pipeline (GPPLS) of a parallel graphics processing system generating partial color-based complementary-type images along the viewing direction using black pixel rendering and subsequent recompositing operations
Eyles et al. Pixelflow: the realization
US6636214B1 (en) Method and apparatus for dynamically reconfiguring the order of hidden surface processing based on rendering mode
US8319784B2 (en) Fast reconfiguration of graphics pipeline state
US5969726A (en) Caching and coherency control of multiple geometry accelerators in a computer graphics system
EP1745434B1 (en) A kill bit graphics processing system and method
JP4731028B2 (en) Recirculating shade tree blender for graphics systems
EP1738330B1 (en) Scalable shader architecture
US6664958B1 (en) Z-texturing
US7525547B1 (en) Programming multiple chips from a command buffer to process multiple images
US20040223003A1 (en) Parallel pipelined merge engines
JP2010539602A (en) Fragment shader bypass in graphics processing unit, apparatus and method thereof
US8775777B2 (en) Techniques for sourcing immediate values from a VLIW
US20040227772A1 (en) Bounding box in 3D graphics
US5949421A (en) Method and system for efficient register sorting for three dimensional graphics
TW200929062A (en) Scalar float register overlay on vector register file for efficient register allocation and scalar float and vector register sharing
US20020135587A1 (en) System and method for implementing accumulation buffer operations in texture mapping hardware
US7286129B1 (en) Two-sided stencil testing system and method
US9064336B2 (en) Multiple texture compositing
US20030122820A1 (en) Object culling in zone rendering
CN110738593A (en) Use of textures in a graphics processing system
GB2425030A (en) Managed network render targets for routing graphical information
US10062140B2 (en) Graphics processing systems
US6885375B2 (en) Stalling pipelines in large designs

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)