GB2336983A - Improvements relating to computer 3D rendering system - Google Patents

Improvements relating to computer 3D rendering system

Info

Publication number
GB2336983A
GB2336983A (application GB9918187A)
Authority
GB
United Kingdom
Prior art keywords
processing elements
image
sub
determination
group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB9918187A
Other versions
GB9918187D0 (en)
GB2336983B (en)
Inventor
Martin Ashton
Simon James Fenny
Hossein Yassaie
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Imagination Technologies Ltd
Original Assignee
Videologic Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Videologic Ltd filed Critical Videologic Ltd
Priority to GB9918187A priority Critical patent/GB2336983B/en
Publication of GB9918187D0 publication Critical patent/GB9918187D0/en
Publication of GB2336983A publication Critical patent/GB2336983A/en
Application granted granted Critical
Publication of GB2336983B publication Critical patent/GB2336983B/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/06 Ray-tracing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/60 Memory management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Image Generation (AREA)

Abstract

An apparatus for shading a three dimensional image for display on a screen operates by first supplying data defining a group of infinite surfaces representing each object in the image. A depth value is generated for each surface for each elementary area of the display in dependence on the distance of that surface from an image plane. The apparatus then determines whether or not any surface is visible at that elementary area. One or more objects may be designated as a light volume. A depth value is determined for each surface of the light volume for each elementary area of the display in dependence on the distance of that surface from an image plane. The elementary area of the display is then shaded in dependence on the object surface visible at that elementary area and its location in relation to the light volume. To facilitate easier processing of the shading, the display may be sub-divided into a plurality of sub-regions. The size of the sub-regions is dependent on the complexity of the image in each area of the screen. Preferably the determination of the depth values for surfaces for groups of pixels in one sub-region is interleaved with the determination of depth values for surfaces for groups of pixels in other sub-regions. The apparatus performing the processing comprises a plurality of groups of processing elements for determining depth values for surfaces in the image for each elementary area of the display. Each group of processing elements has an associated cache memory means for storing data defining the various surfaces in the image. The groups of processing elements are coupled together by a bus. Data read from the associated memory means of a first group of processing elements is transmitted via the bus to other groups of processing elements for the determination of depth values until all data from the first associated memory means has been read. Reading of data then takes place from the associated memory means of a second group of processing elements, and so on.

Description

IMPROVEMENTS RELATING TO COMPUTER 3D RENDERING SYSTEMS

This invention relates to computer 3D rendering systems of the type described in our British patent application No. 9414834.3.
British patent application No. 9414834.3 describes a 3D rendering system in which each object in a scene to be viewed is defined as a set of infinite surfaces. Each elementary area of a screen on which an image is to be displayed has a ray projected through it from a viewpoint into the three-dimensional scene. The location of the intersection of the projected ray with each surface is then determined. From these intersections it is then possible to determine whether any intersected surface is visible at that elementary area. The elementary area is then shaded for display in dependence on the result of the determination.
The system is implemented in a pipeline type processor comprising a number of cells each of which can perform an intersection calculation with a surface. Thus a large number of surface intersections can be computed simultaneously. Each cell is loaded with a set of coefficients defining the surface for which it is to perform the intersection test.
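As a rough illustration of the per-cell work described above, the sketch below evaluates each surface's depth at one pixel and keeps the nearest forward facing one. The plane representation (depth at pixel (x, y) given by A*x + B*y + C) and all names are assumptions for illustration, not the patent's actual data format.

```python
def visible_surface(surfaces, x, y):
    """Return the nearest forward-facing surface at pixel (x, y) and its depth.

    `surfaces` is a list of (A, B, C, forward) tuples, one per processing
    cell; a smaller depth value means nearer to the viewpoint.
    """
    best, best_depth = None, float("inf")
    for A, B, C, forward in surfaces:
        depth = A * x + B * y + C      # one cell's intersection result
        if forward and depth < best_depth:
            best, best_depth = (A, B, C, forward), depth
    return best, best_depth
```

In the hardware all cells evaluate their surfaces simultaneously for the same pixel; the serial loop here only mimics that comparison tree.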
The system is capable of generating shadows by defining volumes in shadow in the same way as solid objects, that is to say they are defined as a set of infinite surfaces. These shadow objects are then processed in the same way as solid objects. Once all the shadow and non-shadow objects have been processed, it is a straightforward matter to determine whether the surface visible at a particular pixel is in shadow by comparing the position of the visible surface with the furthest forward facing surface and the nearest reverse facing surface of the shadow object and checking to see whether the visible surface is within that range. If it is, then a flag associated with that surface is set to indicate that a modification to its shade or colour needs to be made to simulate the effect of shadowing.
Preferred embodiments of the present invention seek to improve the performance of the pipeline processor described in British patent application No. 9414834.3 by sub-dividing a scene into a variety of sub-regions for processing. These sub-regions are preferably rectangular tiles.
Another embodiment of this invention seeks to provide a system in which spotlight-type effects can be generated within a scene.
The invention is defined with more precision in the appended claims to which reference should now be made.
Preferred embodiments of the invention will now be described in detail, by way of example, with reference to the drawings, in which:

Figure 1 schematically shows a spotlight type effect to be modelled;

Figures 2A, 2B and 2C show the way in which concave objects are dealt with by the system;

Figure 3 shows schematically a rectangular tile containing four objects to be processed by the system;

Figure 4 shows schematically the same tile divided into four sub-tiles for processing;

Figure 5 shows further sub-divisions of tiles;

Figure 6 shows a variable tile size arrangement;

Figure 7 shows schematically the arrangement of processing elements of the prior art system;

Figure 8 shows an alternative arrangement for the prior art system;

Figure 9 shows an arrangement of tiles having a different number of surfaces per tile for mixed processing in accordance with an embodiment of the invention;

Figure 10 shows a block diagram of a pipeline processor embodying the invention;

Figure 11 shows the manner in which surface data is retrieved from memory for use in an embodiment of the invention; and

Figure 12 shows a plurality of pipeline processors embodying another aspect of the invention.
In order to make computer-generated three-dimensional images more lifelike, it is important to be able to simulate the effect of a spotlight in a night time scene.
For example, car headlights, search lights, and landing lights are all things which it may be desired to simulate.
The type of effect is illustrated in Figure 1, in which a light source 2 illuminates a cone-shaped volume 4 of light which is projected onto a surface 6. An object 8 is positioned on the surface and is illuminated by the cone of light 4.
The cone-shaped volume 4 is referred to as a light volume. This is, to all intents and purposes, the opposite of the shadow volumes described in British patent application No. 9414834.3. A light volume is a region of space whose shape is determined by the characteristics of the light source. All objects or parts of objects which are outside this region are in shadow with respect to the light source.
In order to render the scene, the object 8 is defined as a set of surfaces, as is the light volume 4. In cases where three-dimensional shapes are concave it is preferable, as in British patent application No. 9414834.3, to model the object as a collection of convex objects.
This is illustrated in Figures 2A, 2B and 2C. Figures 2A and 2B show two convex objects arranged to intersect to produce an object with concavities. Figure 2C shows how an object with concavities can be split into two convex objects.
As described in British patent application No. 9414834.3, after all the non-shadow convex objects have been processed we have, at any particular pixel, the distance to the visible surface at that point. Thus, to find whether the surface at that pixel is in a light volume we then need to process the surfaces defining the light volume 4 to find the furthest forward facing surface and the nearest reverse facing surface of the light object. If the furthest forward facing surface is nearer to the viewpoint than the nearest reverse facing surface and the visible surface lies between them, then that visible surface is within the light volume and a flag is associated with it to indicate that its colour will need to be modified to simulate the effect of light. If the furthest forward facing surface is further from the viewpoint than the nearest reverse facing surface then the visible surface is outside the light volume and a flag has to be set to indicate that its colour will need to be modified to simulate the effect of shadowing.
Light volumes are processed before the shadow volumes of GB 9414834.3. If the surface at a pixel is outside the light volume then the shadow flag associated with the surface is asserted. In the case of surfaces falling within the light volume, shadow volumes are generated within the light volume to simulate shadows cast by the light source, and hence objects within the light volume cast shadows.
Light volumes do not require any additions to the architecture described in British patent application No. 9414834.3. The only change required is that the operation code associated with each plane has to be extended so that planes can be classed as light volume objects, just as there is currently an operation code to indicate shadow volume objects. The light volume object uses the same logic as the shadow volume object except that the output to the shadow flag is inverted if the surface is a light volume surface.
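The in-range test and the inverted-flag behaviour can be sketched as follows. This is a minimal illustration with assumed names; depths are per-pixel distances from the image plane, as in the description above.

```python
def inside_volume(visible_depth, forward_depths, reverse_depths):
    """Convex-volume test: the furthest forward-facing surface must be
    nearer than the nearest reverse-facing surface, with the visible
    surface lying between them."""
    furthest_forward = max(forward_depths)
    nearest_reverse = min(reverse_depths)
    return furthest_forward <= visible_depth <= nearest_reverse

def shadow_flag(visible_depth, forward_depths, reverse_depths, is_light_volume):
    inside = inside_volume(visible_depth, forward_depths, reverse_depths)
    # A light volume reuses the shadow-volume logic with the output
    # inverted: a surface *outside* the light volume is in shadow.
    return (not inside) if is_light_volume else inside
```

For example, a visible surface at depth 5 between volume surfaces at depths 3 and 8 is shadowed if the volume is a shadow volume, but lit (flag clear) if it is a light volume.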
Turning now to ways in which the performance of the rendering system may be improved, it has been proposed that the screen be divided into a number of sub-regions or tiles. Then, for each tile, only those objects within the tile will be processed. This decreases the average number of surfaces to be processed when determining which surface is visible at any particular pixel.
A gross elimination process, such as surrounding complex objects with bounding volumes, can first be used to test which objects apply to the current region being rendered. The bounding volume is defined by finding the areas to which particular objects apply. The sub-regions can, in general, be any shape. However, the simplest technique is to divide the screen into rectangular tiles.
All the bounding volumes of objects are projected onto screen space (the array of pixels defining the image plane) and are tested against the corners of the tiles. Those bounding volumes which are completely outside a tile are discarded for that tile. Thus the number of surfaces which need to be processed per pixel within a tile becomes smaller, and hence the total time to render an image is reduced, since the total processing time for all the tiles will be reduced.
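The per-tile rejection test can be sketched with screen-space rectangles. The (x0, y0, x1, y1) representation and the names here are assumptions for illustration; the patent tests projected bounding volumes against tile corners, which for axis-aligned boxes reduces to this overlap test.

```python
def objects_for_tile(tile, objects):
    """Discard objects whose projected bounding box lies wholly outside the tile.

    `tile` and each bounding box are (x0, y0, x1, y1) pixel rectangles;
    `objects` is a list of (bbox, object) pairs.
    """
    tx0, ty0, tx1, ty1 = tile
    kept = []
    for bbox, obj in objects:
        bx0, by0, bx1, by1 = bbox
        # Disjoint on either axis => the object cannot appear in this tile.
        if bx1 < tx0 or bx0 > tx1 or by1 < ty0 or by0 > ty1:
            continue
        kept.append(obj)
    return kept
```

Only the surviving objects' surfaces are loaded for the tile, which is what reduces the per-pixel surface count.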
Figure 3 shows the bounding volumes of four objects projected onto an array of twenty pixels by twenty pixels.
If each object is defined by ten surfaces, then the total time to render Figure 3 will be 400 pixels x 40 surfaces, which equals 16,000 cycles. Figure 4, however, shows the same area of 20 x 20 pixels sub-divided into four sub-arrays of 10 x 10 pixels. Thus the time to render the objects in Figure 4 will be defined by the time to render the objects in each 10 x 10 pixel tile. This will be 100 pixels x 10 surfaces, which equals 1,000 cycles. Thus, to render the whole 20 x 20 array, the time taken will be 4,000 cycles.
Thus there is a 75% saving in processing time using this arrangement.
However, the objects of the image are not normally evenly distributed over the entire screen. Therefore, using tiles with variable sizes allows the same performance enhancement to be gained with fewer tiles, or better performance with the same number of tiles. This is shown in Figures 5 and 6.
In Figure 5 there are a total of seven objects and 16 tiles each 5 pixels by 5 pixels. Therefore, the time taken to render a scene will be 16 x 25 x 10 which gives 4,000 cycles.
In Figure 6 the number of tiles is reduced by having three 10 pixel x 10 pixel tiles encompassing the larger objects and four 5 pixel x 5 pixel tiles encompassing the smaller objects. The time to process the 10 x 10 pixel tiles will be 100 pixels x 10 surfaces x 3 tiles, which equals 3,000 cycles. The time to process the 5 pixel x 5 pixel tiles is 25 pixels x 10 surfaces x 4 tiles, which equals 1,000 cycles. Thus the total time to render Figure 6 will be 4,000 cycles, the same as it would take to render Figure 5. However, Figure 6 only has 7 tiles as opposed to the 16 of Figure 5. Thus, a saving is made on the sub-division of the scene into tiles.
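The cycle counts in these examples follow one rule: cycles per tile equal pixels in the tile multiplied by surfaces associated with the tile. A short sketch (tile tuples are an assumed representation) reproduces the figures' arithmetic:

```python
def render_cycles(tiles):
    """Total cycles = sum over tiles of (pixels in tile) x (surfaces in tile).

    Each tile is a (width, height, surface_count) tuple.
    """
    return sum(w * h * surfaces for w, h, surfaces in tiles)

whole = render_cycles([(20, 20, 40)])                         # Figure 3
quads = render_cycles([(10, 10, 10)] * 4)                     # Figure 4
mixed = render_cycles([(10, 10, 10)] * 3 + [(5, 5, 10)] * 4)  # Figure 6
```

This gives 16,000 cycles for the single 20 x 20 tile, 4,000 for the four 10 x 10 tiles (the 75% saving), and 4,000 again for the seven variable-size tiles of Figure 6.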
In a preferred embodiment of the invention projection techniques are used which project bounding boxes around complex objects. Firstly the distribution of objects around the visible screen is determined, and this then allows suitable tile sizes to be defined in dependence on that distribution.
The surfaces defining the various objects are stored in one contiguous list, as described in British patent application No. 9414834.3. However, for each tile (a sub-divided area of the screen) there is another list of object pointers which point to the individual objects which reside in that tile. This avoids storing identical surfaces for each tile, as one object made of many surfaces could be in a number of tiles. This is illustrated in Figure 11. At the start of the object pointer list there is an x,y start position and an x,y size. This gives the top left hand position and the width and height in pixels of the tile, thus allowing for variable tile sizes.
At the start of rendering the pointer list is traversed and the x,y position and size are read. The x,y position is stored in register 44 of Figure 10. This gives the starting position for the rendering. As each pointer is read, the appropriate surfaces pointed to are also read and stored in the internal memory. After all the pointers have been read, the internal memory contains all the objects to be processed. The objects are processed as normal. After the tile has been rendered the next object pointer list is traversed as described earlier, and the internal memory then contains all the objects for the next tile. This continues until all the tiles have been completed.
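The pointer-list traversal can be sketched as below. The data layout (a header followed by pointers into the shared contiguous surface list) follows the description above; the names and the dictionary-as-store shorthand are assumptions.

```python
def load_tile(tile_entry, surface_store):
    """Traverse one tile's object pointer list into 'internal memory'.

    `tile_entry` is (x, y, width, height, pointers); each pointer indexes
    an object's surfaces in the single contiguous surface store, so an
    object spanning several tiles is stored only once globally.
    """
    x, y, w, h, pointers = tile_entry
    internal_memory = []
    for p in pointers:
        internal_memory.extend(surface_store[p])   # fetch that object's surfaces
    return (x, y, w, h), internal_memory
```

After the traversal the returned position seeds the rendering start point and the gathered surfaces are processed as normal.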
A further improvement to sub-dividing the screen into sub-regions (tiles) is achieved by having the ability to process the tiles non-sequentially.
The system described in British patent application No. 9414834.3 uses multiple processing elements that operate on a group of adjacent pixels at any one time.
For example, if a tile consists of N x M pixels there may be an N element parallel processing engine which will process one row of the tile at a time for each input plane. The processing time, per row, in terms of clock cycles will be equal to the number of planes in or associated with that tile. Such an arrangement is shown in Figure 7. If there are tiles which have a very small number of planes, then the parallel processing elements may have to stay idle while the last processed row of pixels is post-processed (e.g. has texture and shading added) by a post processing system 20. This receives pixel data from the N processing elements 22, via an N stage buffer 24. The speed will be limited by the speed of the post-processing unit 20.
One way to overcome this speed inefficiency is to ensure that the post-processing unit 20 can match the peak output rate of the N processing elements 22 for a small number of surfaces. This would make the post-processing unit more complex and expensive. Another solution would be to interpose a frame buffer 26 between the N stage buffer 24 and the post-processing unit 20 as shown in Figure 8. Preferably this frame buffer would accommodate the same number of pixels as the display.
This, however, would again lead to additional expense.
A preferred embodiment of the present invention solves this problem in an efficient and inexpensive manner by intermixing the processing of rows from tiles so that rows from tiles with a smaller number of planes contained within them are interleaved with rows from tiles with a large number of planes.
For example, Figure 9 shows tiles A, B, C and D all having different numbers of surfaces defining objects within them. If tiles are processed one at a time in the A, B, C, D order then during the processing of the first two tiles the N element parallel processing array 22 will spend idle time since only a small number of planes need to be processed. However, by processing a row from tile A followed by a row from tile C which has a large number of planes associated with it, then the average output rate of data from the N processing elements 22 will be reduced. Thus, the only buffering needed will be the N stage buffer shown in Figure 7.
The surfaces are stored exactly as detailed above. However, rather than reading the list of objects for just one tile into the internal memory, two adjacent object lists are read and stored in the internal memory. The first set of surfaces is processed for one line of the first tile, which is then followed by processing the second set of surfaces for the second tile. The x,y positions of the tiles are stored in two registers such as that numbered 42 in Figure 10. These are swapped between as lines from the two different tiles are processed. The tiles which are to be processed together have to have their object lists adjacent in the external memory. In the example above with reference to Figure 11, tile 1 would be tile A and tile 2 would be tile C.
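The row-interleaving schedule can be sketched as follows. This is an illustration with assumed names: each tile is reduced to a name and a row count, and the output is the order in which rows would be fed to the N-element array, so that cheap rows from a sparse tile alternate with expensive rows from a dense one.

```python
def interleaved_schedule(tile_x, tile_y):
    """Alternate rows of two tiles, e.g. sparse tile A with dense tile C.

    Each argument is (name, row_count). If one tile has more rows, its
    remaining rows are processed on their own at the end.
    """
    (nx, rx), (ny, ry) = tile_x, tile_y
    schedule = []
    for row in range(max(rx, ry)):
        if row < rx:
            schedule.append((nx, row))
        if row < ry:
            schedule.append((ny, row))
    return schedule
```

Because consecutive entries come from tiles with different plane counts, the average arrival rate at the post-processor is smoothed and the N stage buffer of Figure 7 suffices.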
Figure 10 shows an implementation of a system of the type described in British patent application No. 9414834.3, based on an array of pipeline processing cells 40, a precalculation unit 40, 46, 50, 48 and a memory 42. The memory stores a list of instruction words comprising surface parameters (ABC), control bits and surface attributes. A practical and cost effective VLSI implementation of such a system requires that instruction words be stored in an on-chip cache memory. The reasons for this are two-fold. First, the instruction words are in a very long format (typically 96 bits), and therefore sharing external memory with, for example, a 32 bit processor would involve multiple bus cycles per instruction word fetch. Secondly, the rate of dispatch of instruction words is directly proportional to the surface processing performance of the system.
Taken together these points indicate that any external memory solution must use high speed, wide and dedicated memory.
Using a cache implementation of this system exploits the fact that during the processing of a given scene many references are made to the same instruction word for processing a particular surface. In the variable tile size enhancement presented above the tile size can be optimised such that all tiles contain a roughly constant number of surfaces and thus a roughly constant number of instruction words. If the cache size is larger than this value then all instruction words for a given tile will fit in the cache. Under these conditions the advantage in time is given by the following:
Time to process a tile from external memory = PC x I x MEXT

Time to process a tile from cache memory = I x (MEXT + (PC - 1) x MINT)

Percentage improvement = 1 - (MEXT + (PC - 1) x MINT) / (PC x MEXT)

where T = tile size, PE = the number of pipeline elements, PC = the number of pipeline cycles = T/PE, I = the number of instruction words per tile, MEXT = external memory access time, and MINT = cache memory access time.
As scene complexity increases practical limits to cache size dictate that not all instruction words in the given tile can be cached.
The problem of scalability of memory capacity in a multi-chip implementation of the system described in British patent application No. 9414834.3 can be solved by a further embodiment of the invention.
An implementation of a 3D rendering system comprising N 3D processing units (henceforth referred to as ISPs), a system processor and system memory is shown in Figure 12. Each ISP 52 contains an M instruction word memory which functions as described above. In such a configuration all ISPs are working on a different output line of the same tile, and are thus all operating on the same instruction word at any given time. For the purpose of this description the ISP working on tile line n is referred to as the master, the ISP working on tile line (n+1) as slave-1, that working on tile line (n+m) as slave-m, etc. The memory controllers of all ISPs are connected by a memory interconnect bus 50 consisting of the following elements:
1) Data Bus 50: the width of an instruction word; bidirectional.

2) Load In / Load Out 54: synchronous daisy chain signals which indicate transfer of cache mastership from chip to chip (an input and an output respectively).

3) Terminal Word 56: indicates that the current instruction word is the last in the list; driven by the chip which is currently transmitting data.

4) Data Valid 58: driven by the current transmitter to qualify data on the data bus.
The master ISP 52 is designated by having its load-in input tied to '1' at reset. The master is responsible for following a hierarchical list of pointers which ultimately reference instruction words. At the start of a given tile a memory load phase begins, in which the master ISP will begin fetching instruction words from system memory 60, assembling the words internally and transmitting them in a synchronised fashion to three destinations: the master internal memory 68, the master processing pipeline and the memory data bus 50. At this point only the master is storing instructions in its memory. The data valid signal is necessary in systems where the master may be denied access to system memory for periods of time.
It is also used to indicate a data bus turnaround cycle, which occurs when memory data bus ownership changes.
On the transfer cycle when the master memory is full, the master drives the load-in of slave-1 to '1', indicating that slave-1 must begin storing instruction words in its memory. This process continues down the chain of slave ISPs until the final instruction word is passed out by the master. At this point terminal-word is driven to '1', and the slave which is currently loading its memory latches the value of the terminal memory address. This completes the memory load phase.
There follows a series of memory read phases which continue until the current tile processing is complete.
A memory read phase commences with the master ISP reading out its instruction word list and passing the values to its processing pipeline and to the memory data bus. On the final access, the master drives the load-in of slave-1 to '1', indicating that slave-1 must begin reading instruction words from its memory. This process continues down the chain of slave ISPs until the final instruction word is passed out by the final slave. At this point terminal-word is driven to '1', and the process repeats from the master. Note that there will be a lost cycle between each change of data bus mastership to allow for the turnaround time of the buffers.
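A memory read phase can be sketched at a very high level as below. This abstracts away the bus signals entirely (no load-in/load-out or turnaround cycles are modelled; names are assumed): it only shows that the chips transmit in daisy-chain order and that every ISP ends up seeing the complete instruction word list.

```python
def memory_read_phase(isp_memories):
    """One memory read phase over the daisy chain of ISP caches.

    `isp_memories` is an ordered list (master first) of each chip's stored
    instruction words. Each chip in turn transmits its words on the shared
    data bus, then hands mastership to the next; the last word transmitted
    is the terminal word that ends the phase.
    """
    bus_trace = []
    for words in isp_memories:        # current bus master transmits its slice
        bus_trace.extend(words)
    terminal_word = bus_trace[-1] if bus_trace else None
    return bus_trace, terminal_word
```

Since every ISP snoops the bus, the concatenated trace is exactly the instruction stream each processing pipeline consumes, regardless of which chip happens to hold which slice.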
Reference should also be made to British Patent Application number 9501834.7 from which the present application is divided.

Claims (10)

1. A method for shading a three dimensional image for display on a screen comprising the steps of representing each object in the image as a group of surfaces, determining a bounding volume for each object projected onto the display screen, subdividing the display into a plurality of subregions, the size of the sub-regions being dependent on the complexity of the image in each area of the screen, and for each pixel of each subregion projecting a ray into the image, determining the location of the intersection of the ray with each surface of each object contained within that sub-region, determining whether any intersected surface is visible at that pixel and shading the pixel in dependence on the result of the determination.
2. A method for shading according to claim 1 including the step of interleaving the determination of ray/surface intersections for rows of pixels in a sub-region with the determination of ray/surface intersections for rows of pixels in other sub-regions.
3. A method for shading according to claim 1 in which the step of projecting a ray into the image is performed by one of a plurality of groups of processing elements, each group of which has an associated cache memory means for storing data defining the various surfaces in the image, and the groups of processing elements are coupled together by a bus means, and wherein data is read from the associated memory means of a first group of processing elements and transmitted via the bus means to other groups of processing elements for determination of ray/surface intersections until all data from the first associated memory means has been read and wherein reading of data then takes place from an associated memory means of a second group of processing elements.
4. Apparatus for shading a three dimensional image for display on a screen comprising means for representing each object in the image as a group of surfaces, means for determining a bounding volume for each object projected on to the display screen, means for sub-dividing the display screen into a plurality of sub-regions, the size of the sub-regions being dependent on the complexity of the image in each area of the screen, means for generating a depth value for each surface for each pixel of each sub-region in dependence on the distance of that surface from an image plane, means for determining whether any surface is visible at that elementary area, and means for shading the elementary area in dependence on the result of the determination.
5. Apparatus according to claim 4 including means for interleaving the determination of depth values for surfaces for rows of pixels in one subregion with the determination of depth values for surfaces for rows of pixels in other sub-regions.
6. Apparatus according to claim 4 or 5 in which the means for generating a depth value for each surface for each pixel comprises a plurality of groups of processing elements for determining depth values for surfaces in the image, wherein each group of processing elements has an associated cache memory means for storing data defining the various surfaces in the image, and the groups of processing elements are coupled together by a bus means, and wherein data is read from the associated memory means of a first group of processing elements and transmitted via the bus means to other groups of processing elements for determination of depth values until all data from said first associated memory means has been read and wherein reading of data then takes place from the associated memory means of a second group of processing elements.
7. A method for shading a three dimensional image for display on a screen comprising the steps of representing each object in the image as a group of surfaces, determining a bounding volume for each object to be projected on to the display screen, sub-dividing the display screen into a plurality of sub-regions, the size of the sub-regions being dependent on the complexity of the image, generating a depth value for each surface for each pixel of each subregion in dependence on the distance of that surface from an image plane, determining whether any surface is visible at that elementary area, and shading the elementary area in dependence on the result of the determination.
8. A method according to claim 7 including the step of interleaving the determination of depth values for surfaces for rows of pixels in one subregion with the determination of depth values for surfaces for rows of pixels in other sub-regions.
9. A method according to claim 7 or 8 in which the step of generating the depth value for each surface for each pixel is performed by a plurality of groups of processing elements, wherein each group of processing elements has an associated cache memory means for storing data defining the various surfaces in the image, and the groups of processing elements are coupled together by a bus means, and wherein data is read from the associated memory means of the first group of processing elements and transmitted by the bus means to other groups of processing elements for determination of depth values until all data from said first associated memory means has been read and wherein reading of data then takes place from the associated memory means of a second group of processing elements.
10. A method for shading a three dimensional image for display substantially as herein described.
11. Apparatus for shading a three dimensional image for display substantially as herein described.
GB9918187A 1995-01-31 1995-01-31 Improvements relating to computer 3D rendering systems Expired - Lifetime GB2336983B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB9918187A GB2336983B (en) 1995-01-31 1995-01-31 Improvements relating to computer 3D rendering systems

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB9501834A GB2298111B (en) 1995-01-31 1995-01-31 Improvements relating to computer 3d rendering systems
GB9918187A GB2336983B (en) 1995-01-31 1995-01-31 Improvements relating to computer 3D rendering systems

Publications (3)

Publication Number Publication Date
GB9918187D0 GB9918187D0 (en) 1999-10-06
GB2336983A true GB2336983A (en) 1999-11-03
GB2336983B GB2336983B (en) 2000-01-19

Family

ID=10768840

Family Applications (3)

Application Number Title Priority Date Filing Date
GB9918181A Expired - Lifetime GB2336982B (en) 1995-01-31 1995-01-31 Improvements relating to computer 3D rendering systems
GB9501834A Expired - Lifetime GB2298111B (en) 1993-07-30 1995-01-31 Improvements relating to computer 3d rendering systems
GB9918187A Expired - Lifetime GB2336983B (en) 1995-01-31 1995-01-31 Improvements relating to computer 3D rendering systems

Family Applications Before (2)

Application Number Title Priority Date Filing Date
GB9918181A Expired - Lifetime GB2336982B (en) 1995-01-31 1995-01-31 Improvements relating to computer 3D rendering systems
GB9501834A Expired - Lifetime GB2298111B (en) 1993-07-30 1995-01-31 Improvements relating to computer 3d rendering systems

Country Status (5)

Country Link
EP (1) EP0725367B1 (en)
JP (1) JPH08255262A (en)
DE (1) DE69609534T2 (en)
ES (1) ES2150071T3 (en)
GB (3) GB2336982B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1410337A2 (en) * 2000-05-29 2004-04-21 Natalia Zviaguina Method, apparatus and article of manufacture for determining visible parts of surfaces of three-dimensional objects and their parameters of shading while accounting for light and shadow volumes

Families Citing this family (24)

Publication number Priority date Publication date Assignee Title
GB2343603B (en) 1998-11-06 2003-04-02 Videologic Ltd Shading 3-dimensional computer generated images
JP2000163601A (en) * 1998-11-24 2000-06-16 Sega Enterp Ltd Image processor, image processing method and recording medium
GB9904901D0 (en) 1999-03-03 1999-04-28 Canon Kk Computer graphics apparatus
JP3599268B2 (en) 1999-03-08 2004-12-08 株式会社ソニー・コンピュータエンタテインメント Image processing method, image processing apparatus, and recording medium
US7009626B2 (en) * 2000-04-14 2006-03-07 Picsel Technologies Limited Systems and methods for generating visual representations of graphical data and digital document processing
US20020039100A1 (en) 2000-06-08 2002-04-04 Stephen Morphet Memory management for systems for generating 3-dimensional computer images
GB2378108B (en) 2001-07-24 2005-08-17 Imagination Tech Ltd Three dimensional graphics system
GB0307095D0 (en) 2003-03-27 2003-04-30 Imagination Tech Ltd Improvements to a tiling system for 3d rendered graphics
US7477256B1 (en) * 2004-11-17 2009-01-13 Nvidia Corporation Connecting graphics adapters for scalable performance
JP4699036B2 (en) * 2005-01-31 2011-06-08 三菱電機株式会社 Graphics hardware
GB0519597D0 (en) * 2005-09-26 2005-11-02 Imagination Tech Ltd Scalable multi-threaded media processing architecture
GB0524804D0 (en) 2005-12-05 2006-01-11 Falanx Microsystems As Method of and apparatus for processing graphics
GB2449399B (en) 2006-09-29 2009-05-06 Imagination Tech Ltd Improvements in memory management for systems for generating 3-dimensional computer images
US9965886B2 (en) 2006-12-04 2018-05-08 Arm Norway As Method of and apparatus for processing graphics
GB2452731B (en) 2007-09-12 2010-01-13 Imagination Tech Ltd Methods and systems for generating 3-dimensional computer images
GB0823254D0 (en) 2008-12-19 2009-01-28 Imagination Tech Ltd Multi level display control list in tile based 3D computer graphics system
GB0900700D0 (en) 2009-01-15 2009-03-04 Advanced Risc Mach Ltd Methods of and apparatus for processing graphics
GB201004673D0 (en) 2010-03-19 2010-05-05 Imagination Tech Ltd Processing of 3D computer graphics data on multiple shading engines
GB201004675D0 (en) 2010-03-19 2010-05-05 Imagination Tech Ltd Memory management system
US9317948B2 (en) 2012-11-16 2016-04-19 Arm Limited Method of and apparatus for processing graphics
US10204391B2 (en) 2013-06-04 2019-02-12 Arm Limited Method of and apparatus for processing graphics
KR102197067B1 (en) * 2014-04-02 2020-12-30 삼성전자 주식회사 Method and Apparatus for rendering same region of multi frames
CN106813568B (en) * 2015-11-27 2019-10-29 菜鸟智能物流控股有限公司 Object measuring method and device
GB2553744B (en) 2016-04-29 2018-09-05 Advanced Risc Mach Ltd Graphics processing systems

Citations (2)

Publication number Priority date Publication date Assignee Title
GB2246057A (en) * 1990-07-03 1992-01-15 Rank Cintel Ltd Shading a 3-dimensional computer generated image
US5305430A (en) * 1990-12-26 1994-04-19 Xerox Corporation Object-local sampling histories for efficient path tracing

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
JPH06223201A (en) * 1993-01-22 1994-08-12 Matsushita Electric Ind Co Ltd Parallel image generating device
GB9301661D0 (en) * 1993-01-28 1993-03-17 Philips Electronics Uk Ltd Rendering an image

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
GB2246057A (en) * 1990-07-03 1992-01-15 Rank Cintel Ltd Shading a 3-dimensional computer generated image
US5305430A (en) * 1990-12-26 1994-04-19 Xerox Corporation Object-local sampling histories for efficient path tracing

Cited By (2)

Publication number Priority date Publication date Assignee Title
EP1410337A2 (en) * 2000-05-29 2004-04-21 Natalia Zviaguina Method, apparatus and article of manufacture for determining visible parts of surfaces of three-dimensional objects and their parameters of shading while accounting for light and shadow volumes
EP1410337A4 (en) * 2000-05-29 2006-06-14 Natalia Zviaguina Method, apparatus and article of manufacture for determining visible parts of surfaces of three-dimensional objects and their parameters of shading while accounting for light and shadow volumes

Also Published As

Publication number Publication date
GB2298111A (en) 1996-08-21
EP0725367B1 (en) 2000-08-02
GB9501834D0 (en) 1995-03-22
DE69609534T2 (en) 2000-12-07
GB9918181D0 (en) 1999-10-06
GB2336982A (en) 1999-11-03
JPH08255262A (en) 1996-10-01
DE69609534D1 (en) 2000-09-07
GB9918187D0 (en) 1999-10-06
GB2298111B (en) 2000-01-19
EP0725367A1 (en) 1996-08-07
GB2336983B (en) 2000-01-19
ES2150071T3 (en) 2000-11-16
GB2336982B (en) 2000-01-19

Similar Documents

Publication Publication Date Title
EP0725367B1 (en) Computer 3D rendering method and apparatus
US5729672A (en) Ray tracing method and apparatus for projecting rays through an object represented by a set of infinite surfaces
CN109603155B (en) Method and device for acquiring merged map, storage medium, processor and terminal
Potmesil et al. The pixel machine: a parallel image computer
US5544292A (en) Display apparatus having a display processor for storing and filtering two dimensional arrays forming a pyramidal array, and method of operating such an apparatus
US6847370B2 (en) Planar byte memory organization with linear access
EP0438195A2 (en) Display apparatus
EP0430501A2 (en) System and method for drawing antialiased polygons
JPH0935075A (en) Computer graphics system with high-performance primitive clipping preprocessing
CN110675480B (en) Method and apparatus for acquiring sampling position of texture operation
US4241341A (en) Apparatus for scan conversion
GB2240016A (en) Texture memories store data at alternating levels of resolution
CN102915563A (en) Method and system for transparently drawing three-dimensional grid model
EP1410337A2 (en) Method, apparatus and article of manufacture for determining visible parts of surfaces of three-dimensional objects and their parameters of shading while accounting for light and shadow volumes
US7737994B1 (en) Large-kernel convolution using multiple industry-standard graphics accelerators
Read Developing the next generation cockpit display system
CN1430769B (en) Tiled graphics architecture
JP3090605B2 (en) Multiprocessor device
US6819337B2 (en) Initializing a series of video routers that employ source-synchronous signaling
US5732248A (en) Multistep vector generation for multiple frame buffer controllers
CN101013500B (en) Multi-thread executable peak coloring device, image processor and control method thereof
EP0827082B1 (en) Semiconductor memory having arithmetic function
US6813706B2 (en) Data processing system and multiprocessor system
CN114830082A (en) SIMD operand arrangement selected from multiple registers
US20030052886A1 (en) Transferring a digital video stream through a series of hardware modules

Legal Events

Date Code Title Description
PE20 Patent expired after termination of 20 years

Expiry date: 20150130