GB2359230A - Computer graphics rendering of partially transparent object - Google Patents

Computer graphics rendering of partially transparent object

Info

Publication number
GB2359230A
GB2359230A (application GB9926761A)
Authority
GB
United Kingdom
Prior art keywords
subdivision
depth
discontinuity
scene
accordance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB9926761A
Other versions
GB2359230B (en
GB9926761D0 (en
Inventor
Iestyn David Bleasdale-Shepherd
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to GB9926761A priority Critical patent/GB2359230B/en
Publication of GB9926761D0 publication Critical patent/GB9926761D0/en
Publication of GB2359230A publication Critical patent/GB2359230A/en
Application granted granted Critical
Publication of GB2359230B publication Critical patent/GB2359230B/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/50 - Lighting effects
    • G06T15/503 - Blending, e.g. for anti-aliasing

Abstract

Computer graphics apparatus comprises a fog processing unit (42) capable of processing data representing a non-opaque object (116, Fig 15) in a three dimensional scene, where processing is carried out mainly on a block with resolution lower than pixel resolution, except in regions where the contribution of the non-opaque object to the appearance of the scene is substantially non-uniform, such as at an edge or depth discontinuity. In that case, the fog processing unit (42) subdivides the block containing the discontinuity and samples pixels within the block to determine the contribution of the object to the region containing the discontinuity.

Description

COMPUTER GRAPHICS APPARATUS

This invention relates to
computer graphics apparatus for generating graphical images of three dimensional objects. It has particular application in the representation of at least partially transparent objects.
When a scene, defined in three dimensions, is to be displayed graphically on computer apparatus, its representation is formed as a two dimensional rasterised image. The final colour and intensity values applied to each pixel of the image are generally calculated on a pixel by pixel basis. These calculations can be computationally expensive, especially if objects in the scene have complex interactions with light in the scene.
The present invention is particularly applicable to the representation of transparency in computer graphics; transparency can also be interpreted as a lack of opacity. In numerical terms, opacity can be represented as a number between 0 and 1, an opacity of 0 being applicable to a purely transparent material which allows the transmission of all light incident thereon and an opacity of 1 being applicable to a purely opaque material which allows no transmission of light therethrough.
When calculating final pixel values, the opacity value of a partially opaque material can be used as a "blending" value, representing the contribution of the colour of that material to the final pixel colour, relative to the colour of objects lying behind the material under consideration. In the simplest case, the blending value can be used as a multiplication factor, such as in the formula below:
C = αF + (1 − α)B     (1)

where C is the final pixel colour, α is the opacity value of a partially opaque object, F is the colour of the partially opaque object, and B is the background colour in front of which the partially opaque object is placed.
Increasing the opacity value assigned to an object causes the contribution of the object colour to the final pixel colour to increase, and the contribution of the background colour consequently to decrease, and vice versa.
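As an illustration, equation (1) applied per colour channel might look like the following sketch; the Colour type and the function name are assumptions for the example, not part of the described apparatus:

```cpp
#include <array>

using Colour = std::array<float, 3>;  // RGB components, each in the range [0, 1]

// Equation (1): C = alpha*F + (1 - alpha)*B, applied per colour channel.
// 'alpha' is the opacity of the partially opaque object, F its colour,
// and B the colour of whatever lies behind it.
Colour blend(float alpha, const Colour& F, const Colour& B) {
    Colour C;
    for (int i = 0; i < 3; ++i) {
        C[i] = alpha * F[i] + (1.0f - alpha) * B[i];
    }
    return C;
}
```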
In proprietary computer apparatus, it may be inconvenient to calculate the effect of fog accurately for each pixel while maintaining acceptable frame display update rates. Therefore, it is an object of the invention to take advantage of the uniformity of many parts of a scene so that the non-uniform parts can be rendered to a sufficient resolution.
A first aspect of the present invention provides computer apparatus for graphically representing a three dimensional scene, including means for storing data representative of the visual appearance of said scene in a predetermined viewing direction, means for storing data representative of depth of objects of said scene in said viewing direction, and means for rendering into said scene an object whose visual appearance depends at least partially upon the visual appearance of objects of the scene obscured thereby, comprising means for receiving a fragment of said object, means for identifying whether said fragment overlaps a significant feature of said scene, and means, responsive to said identifying means, operative to generate further fragments of said object fragment to a resolution sufficient to render the object having regard to the significant feature.
In that way, a fragment of an object can be rendered at relatively low resolution where there is no significant feature which might affect the appearance of the object within that fragment, and at a substantially higher resolution where such a significant feature exists.
A second aspect of the invention provides computer apparatus for graphically representing a three-dimensional scene, including means for receiving data representative of a fragment of a scene to be rendered, means for detecting if there is a depth discontinuity across said fragment, and means, responsive to said detecting means, for sampling at least two pixel values across a fragment for which a depth discontinuity is detected, and assigning pixel values for other pixels in said fragment on the basis thereof.
A third aspect of the invention provides a method of generating a graphical image of a three dimensional scene, comprising the steps of receiving data representing a three dimensional graphical object, testing the region of the scene over which the object is to be placed for the presence of a feature affecting significantly the appearance of said object, and, in the event of presence of said feature, analysing said object in greater detail to generate a more accurate graphical representation thereof.
A fourth aspect of the invention provides a method of processing graphical data, comprising the steps of storing data representative of the appearance of a scene and data representative of depth of objects within said scene, receiving data representative of an object to be included in said scene, said object being at least partially transparent, and blending said received data with said stored data, said step of blending including testing the stored data representing the portion of the scene over which the object is to be placed and, if a significant discontinuity in depth data is identified, then modifying said blending step to take into account said discontinuity.
Said step of modifying said blending step may include performing said blending step to a higher resolution, thereby accounting for said discontinuity. Alternatively, said blending step could be modified by the identification of a blending value on the basis of analysis of the stored data defining the scene each side of the discontinuity.
A fifth aspect of the invention provides a computer program product for use in the development of a content software product, said computer program product including means for configuring a computer apparatus to receive data defining a three dimensional scene, means for configuring said computer apparatus to identify an at least partially transparent object to be represented graphically in said scene, means for configuring said computer apparatus to identify a blending factor representing relative contributions of said at least partially transparent object and an object lying behind said at least partially transparent object to the final graphical representation of said scene, and means for configuring said computer apparatus to identify a discontinuity in said scene, and to be operative, on identification of a discontinuity, to modify said blending value to take account of said discontinuity.
A sixth aspect of the invention provides a method of manufacturing a computer program product, the method including the steps of developing a content software product referring to the product of the fifth aspect of the invention, compiling said product linking with the product of the fifth aspect of the invention, and presenting said compiled product for distribution.
Further aspects and advantages of the present invention will become apparent from the following description of
specific embodiments of the invention with reference to the accompanying drawings, in which:
Figure 1 illustrates computer graphics apparatus in accordance with a specific embodiment of the invention;
Figure 2 illustrates a graphics controller of the apparatus illustrated in Figure 1;
Figure 3 illustrates an edge identifier of the graphics controller illustrated in Figure 2;
Figure 4 illustrates a minimum and maximum identifier unit of the edge identifier illustrated in Figure 3;
Figure 5 illustrates a graph showing the relationship between input and output values for a look-up table of the minimum and maximum identifier unit illustrated in Figure 4;
Figure 6 illustrates a fog processor of the graphics controller illustrated in Figure 2;
Figure 7 illustrates an index creation unit of the fog processor illustrated in Figure 6;
Figure 8 illustrates a graph showing the relationship between input and output values for a look-up table of the index creation unit illustrated in Figure 7;
Figure 9 illustrates a blend value generator of the fog processor illustrated in Figure 6;
Figure 10 illustrates a graph of opacity against depth for a selection of different opacity density values;
Figure 11 illustrates a graph showing the relationship between input and output values for a log function look-up table in the blend value generator illustrated in Figure 9;
Figure 12 illustrates a graph showing the relationship between input and output values for an exponential function look-up table in the blend value generator illustrated in Figure 9;
Figure 13 illustrates a blending unit of the graphics controller illustrated in Figure 2;
Figure 14 is a plan view of a scene to be represented graphically by computer graphics apparatus as illustrated in Figure 1;
Figure 15 is a view, in the intended viewing direction, of the scene illustrated in Figure 14;
Figure 16 is a schematic view of a visual display unit screen over which a scene is to be displayed in use, illustrating a fog sampling resolution relative to pixel resolution; and
Figure 17 is an enlarged view of a fragment of the screen illustrated in Figure 16.
As illustrated in Figure 1, a computer apparatus 10 comprises a console 12 in communication with a television set 14 for display of graphical images thereon, and has connected thereto one or more input devices 16 such as a hand held controller, joystick or the like. Data storage media 18 such as an optical disc or a memory card can be inserted into the console 12 for storage and retrieval of information for operation of the console 12.
The console 12 comprises a game controller 20, capable of receiving input instructions from the input devices 16, and of storing and retrieving data from the storage media 18. The game controller 20 implements a game defined in computer implementable instructions from the data storage media 18, and generates data relating to the state of the game to be passed to a graphics data generator 22.
A modem could be provided, which would allow the console 12 to receive a signal bearing configuration commands for storage on the storage media 18.
The graphics data generator 22 acts on information supplied by the game controller 20 to define a scene to be represented graphically and a viewing direction in which the scene is to be viewed in a graphical representation. A graphics controller 24 is operable to receive configuration commands relating to the scene and the viewing direction from the graphics data generator 22, and generates therefrom rasterised image data to be supplied to the television set 14 for display.
The graphics data generator 22 makes use of an application programmers' interface (API) provided in the graphics controller 24, to supply data thereto in a required form. A scene to be represented graphically can be defined to include a list of pointers to data structures defining bodies to be included in the scene. Each object may include a list of pointers to polygons defining the shape of that object. Each polygon may include a pointer to a material from which the object is to be constructed. Each material includes attributes defining colour, opacity and the like. A material can point to a texture bit map which maps over the surface of an object pointing to that material in use.
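As a sketch only, the pointer hierarchy just described might be expressed with structures like these; all type and field names are illustrative assumptions rather than the API actually exposed by the graphics controller:

```cpp
#include <cstdint>
#include <vector>

struct Texture { /* bit map data mapped over an object's surface */ };

struct Material {
    float    colour[3];       // RGB colour attributes
    uint8_t  opacityDensity;  // 0 = fully transparent, 255 = fully opaque
    Texture* texture;         // optional texture bit map, may be nullptr
};

struct Polygon {
    std::vector<int> vertexIndices;  // shape of the polygon
    Material*        material;       // material the polygon is made of
};

struct Object {
    std::vector<Polygon*> polygons;  // polygons defining the object's shape
};

struct Scene {
    std::vector<Object*> objects;    // bodies to be included in the scene
};
```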
Data ordered in this way is delivered by the graphics data generator 22 to the graphics controller 24 for reproduction of a rasterised image.
Figure 2 illustrates the graphics controller 24 in more detail. As shown, the graphics controller 24 has a pipeline structure, being capable of receiving graphics data in the form previously described. As graphics data is delivered, it is stored in a polygon queue 30, which releases polygons to later parts of the pipeline one at a time.
A polygon is released by the polygon queue 30 and is passed to a scan conversion unit 32. The scan conversion unit 32 separates out the polygon into fragments taking account of the level of resolution allowed by the memory capacity of a frame buffer 44 containing a pixel buffer storing pixel appearance data and a Z-buffer storing corresponding depth information. Fragments are delivered by the scan conversion unit 32 to an interpolation unit 34 which interpolates attributes to the fragment in question, from available data which might only be defined formally for vertices of a polygon. This is particularly important if the fragment is to be mapped to a texture map.
The size of fragment created by the scan conversion unit 32 will depend upon the computing power available for processing thereof. If computing power is available, for instance through significant dedicated hardware provision, then pixel sized fragments would be handled with ease. However, the described embodiment reduces the need for detailed fog rendering by dividing fog polygons into fragments 9 x 9 pixels in size. One such fragment 200 is illustrated in Figure 17, made up of 9 x 9 pixels 202.
Following interpolation, the fragment is passed to a depth test unit 35 which compares depth data for the fragment with depth data stored in the Z-buffer.
Depth information for both the fragment and the existing data in the Z-buffer is held in reciprocal form, i.e. in terms of 1/Z, where Z is the depth of the point in question relative to the viewing position. This provides a better range and resolution of available values of Z. Also, using the reciprocal of Z enables easier interpolation with respect to depth, because reciprocal depth values can be interpolated linearly, whereas true depth values cannot.
Without performing an inverting operation on the depth information, the depth test unit 35 establishes whether the fog fragment is in front of the existing object in the particular location in the frame buffer 44. This is achieved by a simple comparison test. If the depth data (the reciprocal of the actual depth) of the fragment is greater than the contents of the Z-buffer, then the fragment is closer to the viewing position than is the background. This takes into account the fact that the values being compared are the reciprocals of actual depths, and so the reciprocal of the actual depth of a foreground object will be greater than the reciprocal of the actual depth of a background object. If the fog fragment is behind the existing contents of the frame buffer 44, then the fog fragment is rejected, and the
next fog fragment is received from the interpolation unit 34.
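Because both the fragment depth and the stored depth are held as 1/Z, the "in front" test reduces to a single greater-than comparison, roughly as in the following sketch (names are assumptions):

```cpp
// Depth values are stored as 1/Z, so a larger stored value means the surface
// is closer to the viewer. A fog fragment survives the depth test only if it
// lies in front of what is already recorded in the Z-buffer.
bool fragmentIsInFront(float fragmentRecipDepth, float zBufferRecipDepth) {
    return fragmentRecipDepth > zBufferRecipDepth;
}
```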
However, if the fog fragment is in front of the existing object, then the fog fragment is considered further. The pipeline further comprises a fog mode selector 36 which receives a fragment from the depth test unit 35, and which is operable under the control of a pipeline configuration unit 45. The pipeline configuration unit 45 receives configuration flags from an application program to control aspects of the graphics controller 24, which can have several possible modes of operation dependent on the needs of the application or the availability of hardware functionality.
For instance, an application program might assign specific opacity values to specific polygons before delivery to a graphics pipeline. The pipeline configuration unit 45 is responsive to this to configure the fog mode selector 36 so as to pass the fragment and the associated opacity values directly to a blending unit 38.
Alternatively, taking advantage of capabilities of the described embodiment, an alternative blending mode can be set by an application program generating a corresponding configuration flag. This mode causes the pipeline configuration unit 45 to configure the fog mode selector 36 to pass fragments to an edge identifier 40 and a fog processor 42 with an associated opacity density, so that appropriate depth-dependent opacity values can be calculated therein and passed to the blending unit 38. Additionally, the fog mode selector 36 is arranged to check the opacity density of a fragment. If the opacity density is 0 (representing complete transparency), or 255 (representing complete opacity), then processing is trivial and the fragment can be written directly into the frame buffer 44.
The blending unit 38 is also configured by the pipeline configuration unit 45 as to which of a variety of available blending functions should be applied. A different graphical effect can be achieved by using a blending function different from the blending function previously described in equation (1).
Furthermore, in Figure 2 an arrow indicates that the pipeline configuration unit 45 is operative to configure the interpolation unit 34. Several different interpolation algorithms are available for selection by means of configuration flags generated by the graphics data generator 22.
The blending unit 38 blends the characteristics of the fragment, be it a fog fragment or otherwise, into the contents of the frame buffer 44, having regard to existing pixel information and depth information held therein.
If a fog fragment is identified by the fog test unit 36, the edge identifier 40 establishes whether a significant discontinuity exists in the values stored in the Z-buffer for the fragment. If not, then the edge identifier 40 configures a fog processor 42 to calculate an opacity value (also known as a blend value) for an exemplary one of the depth values for the fragment for use in the blending unit.
If there is a significant discontinuity, the edge identifier 40 configures the fog processor 42 to analyse the fragment at higher resolution, for instance pixel resolution, to reflect the existence of the discontinuity. Thereafter, an appropriate opacity value is calculated in the fog processor 42, for the fragment containing the discontinuity.
The operation of the edge identifier 40 is illustrated in further detail with reference to Figures 3, 4 and 5. As illustrated in Figure 3, the edge identifier 40 comprises a depth input register 50, a minimum and maximum identifier unit 52 and a depth discontinuity test unit 54. The depth input register 50 samples the fragment received from the fog test unit in nine different places, which corresponds with a 3 x 3 grid pattern over the fragment and, for those nine selected samples, reads the entries in the Z-buffer for the pixels corresponding with those samples. Then, the minimum and maximum identifier unit 52 identifies the largest and smallest of those nine Z-buffer entries.
As noted above, in the embodiment, depth values are stored in the Z-buffer in reciprocal form. That is, depth values are inverted before they are stored. Therefore, the largest value retrieved from the Z-buffer will in fact represent the smallest depth of the nine samples from the fog fragment, and the smallest will represent the largest depth. Those reciprocal minimum and maximum depths 1/Z_MIN, 1/Z_MAX are then passed to the depth discontinuity test unit 54. This unit identifies whether the values 1/Z_MIN, 1/Z_MAX passed to it are sufficiently different that a discontinuity in depth is to be identified. The identification of a depth discontinuity would correspond with a change in opacity of a region of fog whose thickness includes that discontinuity.
Figure 4 shows the depth discontinuity test unit 54 in further detail. The unit 54 has a threshold generation unit 56, which receives the reciprocal minimum depth value 1/Z_MIN, and looks up in a look-up table 58 a reciprocal threshold value 1/Z_THRESH corresponding therewith. The entries in the look-up table 58 producing the reciprocal threshold value 1/Z_THRESH are derived as follows. Firstly, in order for a discontinuity to be deemed significant, the following relation must hold:
Z_MAX − Z_MIN > Δ     (2)

where Δ is a scalar quantity, whose value represents a difference in depths deemed significant. Δ is set by the user of the graphics controller in the graphics data generator, and is dependent on the relative priorities assigned to speed and quality. If Δ is large, then few fragments will contain a depth difference sufficiently large as to be deemed significant. A scene defined in that way will be quick to process, but may be subject to undesirable visible artefacts. On the other hand, setting Δ to a relatively low value will result in the identification of a large number of fragments in a scene as containing a significant discontinuity, which will slow down the processing of the scene but may produce more aesthetically pleasing results.
This inequality can be re-expressed in terms of 1/Z_MAX and 1/Z_MIN (the available data), as follows:
1/Z_MAX < (1/Z_MIN) / (Δ × (1/Z_MIN) + 1)     (3)

Therefore, for every value of 1/Z_MIN input from the minimum and maximum identifier unit 52, a value can be calculated, representative of the right hand side of the inequality (3), hereinafter referred to as 1/Z_THRESH. If this value 1/Z_THRESH is greater than 1/Z_MAX input from the minimum and maximum identifier unit 52, then the difference between the minimum and maximum depth values is significant. A graph of the relationship between 1/Z_MIN and 1/Z_THRESH is shown in Figure 5, from which the values stored in the look-up table 58 can be derived.
Once a value of 1/Z_THRESH has been calculated for an input 1/Z_MIN, that value is compared with the input reciprocal maximum depth value 1/Z_MAX in a threshold comparator 60. As shown by inequality (3), if 1/Z_MAX is less than 1/Z_THRESH, there is deemed to be a significant discontinuity in the received fog fragment, which causes the threshold comparator 60 to issue a control message to the fog processor 42. This control message accompanies the depth values 1/Z_MIN and 1/Z_MAX, both of which will be used in the calculation of a more refined opacity value in the case of a significant discontinuity in depth.
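Putting the edge identifier's test together, a minimal software sketch follows; all names and types are assumptions, and the division is written out directly where the hardware would use look-up table 58:

```cpp
#include <algorithm>
#include <array>

struct EdgeTestResult {
    bool  significant;      // true if the fragment straddles a significant depth step
    float recipOfMinDepth;  // largest of the nine stored 1/Z values (1/Z_MIN)
    float recipOfMaxDepth;  // smallest of the nine stored 1/Z values (1/Z_MAX)
};

// 'samples' holds the nine Z-buffer entries (stored as 1/Z) read at the
// 3 x 3 grid of sample points; 'delta' is the depth difference deemed
// significant, the quantity called delta in inequality (2).
EdgeTestResult detectDepthDiscontinuity(const std::array<float, 9>& samples, float delta) {
    const auto [minIt, maxIt] = std::minmax_element(samples.begin(), samples.end());
    const float recipOfMaxDepth = *minIt;  // smallest 1/Z: the deepest sample
    const float recipOfMinDepth = *maxIt;  // largest 1/Z: the nearest sample
    // Threshold from inequality (3): 1/Z_THRESH = (1/Z_MIN) / (delta * (1/Z_MIN) + 1).
    const float recipThresh = recipOfMinDepth / (delta * recipOfMinDepth + 1.0f);
    // A significant discontinuity exists when 1/Z_MAX < 1/Z_THRESH.
    return {recipOfMaxDepth < recipThresh, recipOfMinDepth, recipOfMaxDepth};
}
```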
Operation of the fog processor 42 will now be described in further detail with reference to Figure 6. The fog processor 42 receives depth values from the edge identifier 40, which have already been retrieved from the Z-buffer. Depth values are received singly, in the case that a fragment has no significant depth discontinuity, or in pairs (1/Z_MIN, 1/Z_MAX) when a significant depth discontinuity is present.
Further, a fragment input register 60 comprises a fragment depth location 62 for loading depth information relating to a received fog fragment and a fragment opacity density location 64 for receiving the opacity density for that fog fragment.
A background depth buffer 66 stores depth information relating to the scene already defined by data stored in the frame buffer 44.
In the event that the edge identifier does not identify an edge, the control signal to the fog processor 42 configures the fog processor to treat the 9 x 9 pixel fragment as a single entity.
Otherwise, two processes have to be performed, one for each region of the fragment, where regions are bounded by the depth discontinuity. Thereafter, in order to provide a smooth transition, the blending unit 38 is configured to blend across the whole fragment by a weighting dependent on the number of pixels on each side of the discontinuity. For instance, if eight of the nine samples taken are from one side of the discontinuity, and one from the other, then the final blending value will more strongly reflect the value associated with the former than with the latter. However, should there be five samples on one side against four on the other, then the final blending value will be only slightly more dependent on the blending value for the five than that for the four.
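A sketch of that weighting, assuming the blend values for the two regions have already been calculated (function and parameter names are illustrative):

```cpp
// Combine the blend values computed for the two regions of a fragment that is
// split by a depth discontinuity. 'samplesNear' is how many of the nine samples
// fall in the region that produced 'alphaNear'; the remaining samples belong to
// the region that produced 'alphaFar'.
float weightedBlendValue(float alphaNear, int samplesNear, float alphaFar) {
    const int totalSamples = 9;
    const float w = static_cast<float>(samplesNear) / totalSamples;
    return w * alphaNear + (1.0f - w) * alphaFar;
}
```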
An index creation unit 68 generates a thickness value, representative of the distance between the fog fragment in question and the existing object in the frame buffer. That thickness is passed to a blend value generator 70. The blend value generator 70 receives the opacity density value held in the opacity density location 64 in the fragment input register 60, and calculates a blend value for use in the blending unit 38.
As shown in Figure 7, the index creation unit 68 receives depth data 1/Z_FRAME, retrieved from the frame buffer 44, from the edge identifier unit 40, and depth data 1/Z_FOG from the fragment input register 60. These data are passed to respective inverters 72, 74, which both refer to the same look-up table 76 to perform an inversion operation. As a result, values Z_FRAME and Z_FOG are generated.
The look-up table may not offer a range of possible inputs, and corresponding outputs, over the full range of possible values of 1/Z. However, by expressing 1/Z as a fixed point number, the inverse thereof can be calculated using the mantissa thereof as input value to the look-up table, and re-expressing the exponential part of the number as a "shift left" rather than a "shift right", or vice versa as the case may be, in accordance with the inversion operation. The graph of the inversion function, on which the contents of the look-up table 76 are based, is illustrated in Figure 8. The inputs of the look-up table 76 are read from the horizontal axis of that graph, and outputs of the look-up table 76 are taken from the vertical axis of the graph.
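A software sketch of that inversion, assuming the reciprocal depth is split into a normalised mantissa and a binary exponent, and that the table holds 256 reciprocal entries; the table size, formats and names are all assumptions:

```cpp
#include <array>
#include <cmath>

// Table of reciprocals for mantissas in [1, 2), indexed by the top bits of the
// mantissa. 256 entries is an assumed size.
static const std::array<float, 256> kRecipTable = [] {
    std::array<float, 256> t{};
    for (int i = 0; i < 256; ++i) {
        t[i] = 1.0f / (1.0f + i / 256.0f);  // reciprocal of the mantissa bucket
    }
    return t;
}();

// Invert a reciprocal-depth value: split it into mantissa and exponent, look up
// the mantissa's reciprocal, and negate the exponent (the "shift left" instead
// of "shift right" of the description).
float invert(float recipZ) {
    if (recipZ <= 0.0f) return 0.0f;                 // guard; behaviour here is an assumption
    int exponent;
    float mantissa = std::frexp(recipZ, &exponent);  // recipZ = mantissa * 2^exponent, mantissa in [0.5, 1)
    mantissa *= 2.0f; --exponent;                    // renormalise so that mantissa is in [1, 2)
    const int index = static_cast<int>((mantissa - 1.0f) * 256.0f);
    return std::ldexp(kRecipTable[index], -exponent);  // (1/mantissa) * 2^(-exponent)
}
```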
Z_FRAME and Z_FOG are then passed to a subtraction unit 78 which calculates Z_DIFF, the difference between the depth of the existing frame contents and the depth of the fog fragment under consideration. Z_DIFF, which is a thickness value, is then delivered to the blend value generator 70.
The blend value generator 70, as illustrated in Figure 9, receives the thickness value Z_DIFF from the index creation unit 68, and the opacity density value A from the fog fragment. The blend value generator 70 simulates as far as possible the physical impact of fog on the propagation of light therethrough.
Attenuation of light through fog can be represented mathematically in many different ways. However, the preferred mathematical representation for the present embodiment is the following formula:
α = 1 − (1 − A)^T     (4)

In that formula, A is the opacity density inherent to the fog under consideration, and T is the thickness of the fog. α is the opacity resultant from a fog of opacity density A and thickness T. The opacity density A is constrained in a range between 0 and 1. Figure 10 shows a graph of opacity α against thickness T, for nominal units U, for five different values of opacity density A. For lower opacity densities A, the graph is approximately linear in the illustrated range, while, as opacity density approaches 1, the function by which opacity is governed in respect to thickness approaches a step function. This function has been observed to produce a reasonable representation of the effect which fog has on light propagation.
The unit illustrated in Figure 9 can generate an opacity value based on the general form of equation (4). Certain allowances are made for the efficient running of a computer program, in that opacity density A is now defined over a range between 0 and 255. As noted above, this allows opacity density A to be defined as an unsigned integer. Furthermore, for the same reasons, α is scaled so as to range between 0 and 255. In that way, sufficient resolution is obtained without the need to represent decimal fractions in binary form. Finally, the thickness Z_DIFF is scaled by a scaling factor U, which takes account of any problems associated with the magnitude of depth values in the scene concerned. Therefore, the formula given in equation (4) can be re-expressed, taking account of scaling for convenient implementation on computer apparatus, as the following:
α = 255 × [1 − (1 − A/255)^(Z_DIFF/U)]     (5)

This expression could be implemented on a computer by means of a look-up table with two arguments, namely A and Z_DIFF. However, a look-up table dependent on two arguments can be expensive of memory. Therefore, the described embodiment provides an arrangement which allows use of considerably less memory to implement equation (5), with no loss in resolution.
Initially, the thickness value Z_DIFF is received, and is scaled in a scaling unit 80 by multiplication by a scaling factor, namely −1/(U log 2). Also, the opacity density value A from the fog fragment data is received by a look-up unit 82 which refers to a look-up table 84, which delivers a value representative of the expression log(1 − A/255). That number, and −Z_DIFF/(U log 2), are passed to a multiplication unit 86, outputting the product X. X is then passed to a look-up unit 88, making reference to a look-up table 90, which delivers a value of Y, which equals 0.5^X. Y is then passed to a final calculation unit 92 which calculates α in accordance with the following equation:
α = 255 × [1 − Y]     (6)

Since Y = 0.5^X and X = −(Z_DIFF/(U log 2)) × log(1 − A/255), the following expression for α can be obtained by substitution into equation (6):
α = 255 × [1 − 0.5^(−(Z_DIFF/(U log 2)) × log(1 − A/255))]     (7)

Since

0.5^(−log(1 − A/255)/log 2) = 1 − A/255     (8)

is true for all A in the range of 0 to 254, equation (7) is equivalent to equation (5).
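As an illustration, the two routes can be written side by side; the look-ups of tables 84 and 90 are replaced here by direct log and power calls, natural logarithms are assumed (the base cancels provided both logarithms use the same base), and all names are illustrative:

```cpp
#include <cmath>

// Direct evaluation of equation (5): alpha = 255 * (1 - (1 - A/255)^(Zdiff/U)).
float blendValueDirect(float zDiff, int A, float U) {
    return 255.0f * (1.0f - std::pow(1.0f - A / 255.0f, zDiff / U));
}

// Evaluation by the route of Figure 9: scale Zdiff by -1/(U log 2), look up
// log(1 - A/255), multiply to form X, look up Y = 0.5^X, then apply
// equation (6): alpha = 255 * (1 - Y).
float blendValueViaTables(float zDiff, int A, float U) {
    if (A >= 255) return 255.0f;                          // fully opaque: trivial case
    const float scaled  = -zDiff / (U * std::log(2.0f));  // scaling unit 80
    const float logTerm = std::log(1.0f - A / 255.0f);    // look-up table 84
    const float X = scaled * logTerm;                     // multiplication unit 86
    const float Y = std::pow(0.5f, X);                    // look-up table 90
    return 255.0f * (1.0f - Y);                           // final calculation unit 92
}
```

Both functions return the same value (up to rounding), which is the point of the identity in equation (8).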
The foregoing demonstrates that the blend value generator 70 is operative to produce a blend value α for a given fragment, from an opacity density value A and a thickness value Z_DIFF (or thickness values in the case of a discontinuity), in accordance with equation (5). Graphs showing how the contents of the look-up tables 84, 90 are derived in accordance with the functions illustrated in blocks 82 and 88 in Figure 9 are illustrated for further clarification in Figures 11 and 12 respectively.
Once a blend value α has been obtained, the blending unit 38 operates in accordance with a fog blending mode to generate new colour information to be read into the frame buffer 44. As illustrated in Figure 13, the blending unit 38 includes a blend mode register 100 which receives information relating to the control of a blending function unit 102.
In a fog fragment blending mode, the blending function unit 102 is responsive to receive opacity values α from the fog processor 42, to blend a fog colour received into a fog colour register 104 and the frame colour received into a frame colour register 106 from the frame buffer 44. Alternatively, in a non-fog blending mode, where a fragment is identified by the fog test unit 36 not to be a fog fragment, but instead to be a completely opaque object (i.e. A = 255), the blending function unit 102 operates to combine the object data with the existing data in the frame buffer in accordance with that identification.
The blend mode received in the blend mode register 100 is expressed as a pair of values. If a polygon to be processed is a fog polygon but bounds the far side of a fog object, it must be rendered so that it does not itself contribute opacity to the scene, but only alters the contents of the depth values held in the Z-buffer part of the frame buffer 44. This is achieved by setting the blend mode to (1, 0). This signifies that the entirety of the background is to be held in the frame buffer (the background being the existing contents of the frame buffer) and no contribution is made of the colour of the fog to the final colour stored in the frame
buffer. Further, a completely opaque polygon to be processed will have opacity density A = 255. In that case, the blend mode sent to the blend mode register 100 will be (0, 1).
Generally, where the pair of values for the blend mode are expressed as (p, q) the following expression holds:
C_OUT = p × C_FRAME + q × C_FOG     (9)

For a fog fragment, the blend mode is set such that q = α and p = 1 − α. It will be appreciated that more complicated ways of blending fog colour with existing frame colour could be provided. However, it is an advantage of the blending expression of equation (9) that new colour values can be obtained with little further calculation.
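Equation (9) and the blend modes mentioned above might be sketched as follows, with α taken here in the range 0 to 1; the scaling down from the 0 to 255 range, and all names, are assumptions for the example:

```cpp
struct BlendMode { float p, q; };  // the pair of values held in blend mode register 100

// Equation (9): C_OUT = p * C_FRAME + q * C_FOG, applied per colour channel.
float blendChannel(const BlendMode& mode, float frameColour, float fogColour) {
    return mode.p * frameColour + mode.q * fogColour;
}

// For a fog fragment the mode is (1 - alpha, alpha); a polygon bounding the far
// side of a fog object uses (1, 0); a completely opaque polygon uses (0, 1).
BlendMode fogBlendMode(float alpha) { return {1.0f - alpha, alpha}; }
BlendMode farSideFogBlendMode()     { return {1.0f, 0.0f}; }
BlendMode opaqueBlendMode()         { return {0.0f, 1.0f}; }
```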
Figures 14 and 15 illustrate an example of a fog object included in a scene, and the effect on the appearance of a fog object which can be caused by differences in opacity density and thickness. Two objects 110, 112 which are represented in a fully opaque material are inserted into a scene, of which a viewable volume 114 is indicated. A fog object 116 is inserted between those two objects 110, 112, so as to slightly overlap the front object 112.
The overlap of the fog object 116 with the front object 112 causes a discontinuity in the thickness of the fog object 116 in the viewing direction. This discontinuity is at the rear visible edge of the front object 112, labelled 118 in Figure 15. Where this edge 118 intersects with the front face of the fog object 116, the thickness of the fog object changes from its full thickness to zero thickness suddenly.
Figure 16 shows a template over the scene in the view direction, including the fragments 200, 9 x 9 pixels square in size, which are to be used in identifying opacity (blend values) for the fog polygons. One of the fragments 200 is illustrated in further detail to identify it as the fragment illustrated in Figure 17 whose processing is described above.
In the example shown in Figure 17, the nine samples 204 are indicated by bold outlines relative to other pixel sized regions 202 of the fragment 200. Of those samples, two (204') are found, by the edge identifier 40, to have markedly lower depth than the others, and so the weighted average will reflect that fact.
In that case, if the highlighted fragment 200 were not treated differently, in view of the depth discontinuity at the edge of the front object 112, then an anomaly would arise in the generated blending value for that fragment. In particular, if the blending value were calculated for the centre of the fragment, this would be on the basis of the full thickness of the fog, whereas for a non-negligible part of the fragment, the fog is of reduced thickness. The blending value which will provide the most realistic representation of the fragment will be somewhat lower, in accordance with the illustrated embodiment, than would otherwise be the case.
The embodiment can be used for the representation of all kinds of partially opaque objects in a scene. These can include static objects, such as glass, water or fog, which may be uniformly or non-uniformly partially opaque, or dynamic objects such as clouds, smoke or the like. In the case of a fog, the fog can be applied over the whole or just a part of a visible scene.
Instead of the weighted average method described above, it would be possible to provide a system which, in response to a finding that a fragment has a significant depth discontinuity, would further analyse the fragment to establish where that discontinuity lies. For example, the fragment could be further fragmented, either to intermediate sized fragments, or to pixel level. Then, each sub-fragment or each pixel could be analysed to generate a final pixel value for that sub-fragment or that pixel, as the case may be.
For example, in the fragment illustrated in Figure 17, if the edge identification unit identifies a significant discontinuity, this could configure the fog processor 42 to calculate blending values for each of the nine highlighted sample pixels 204, the calculated blending value for each sample pixel 204 then being applied to the block of nine pixels of which that sample pixel 204 is the central pixel.
Alternatively, depth values could be found for each of the nine pixels comprising a block surrounding and including each sample pixel 204, and a further depth discontinuity test be applied to that block of nine. Then, if there is no depth discontinuity, the blending value pertaining to the central pixel of that block of nine could be applied over that block of nine, whereas if a depth discontinuity was found, then either a weighted average of blending values applicable to the pixels of minimum and maximum depth in that block of nine could be applied across the block of nine, or a blending value for each individual pixel could be calculated separately. This iterative process might increase the length of time required for processing, but would, in compensation, improve the picture quality.
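A sketch of that per-block alternative for a 9 x 9 fragment follows; every function passed in is a placeholder for machinery described earlier (the 3 x 3 block discontinuity test, the per-pixel blend value calculation, and the write into the blending stage), and the decomposition into nine 3 x 3 blocks is as described above:

```cpp
// Two-level refinement of a 9 x 9 fragment: each of the nine 3 x 3 blocks (one
// around each sample pixel) is tested on its own. A uniform block takes the
// blend value of its central pixel; a block that still contains a discontinuity
// has a blend value computed for every pixel individually.
void refineFragment(int fragX, int fragY,
                    bool (*blockHasDiscontinuity)(int, int),     // test on a 3x3 block
                    float (*blendValueAt)(int, int),             // blend value for one pixel
                    void (*writeBlendValue)(int, int, float)) {  // store the result
    for (int by = 0; by < 3; ++by) {
        for (int bx = 0; bx < 3; ++bx) {
            const int x0 = fragX + 3 * bx;   // top-left pixel of this 3x3 block
            const int y0 = fragY + 3 * by;
            if (!blockHasDiscontinuity(x0, y0)) {
                const float v = blendValueAt(x0 + 1, y0 + 1);  // central (sample) pixel
                for (int dy = 0; dy < 3; ++dy)
                    for (int dx = 0; dx < 3; ++dx)
                        writeBlendValue(x0 + dx, y0 + dy, v);
            } else {
                for (int dy = 0; dy < 3; ++dy)
                    for (int dx = 0; dx < 3; ++dx)
                        writeBlendValue(x0 + dx, y0 + dy, blendValueAt(x0 + dx, y0 + dy));
            }
        }
    }
}
```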

Claims (21)

CLAIMS:
1. A computer apparatus operable to graphically represent a three dimensional scene including an at least partially transparent object, the apparatus comprising: means for identifying an at least partially transparent object of said scene; means for determining a graphical representation, and depth data for said graphical representation, of the scene without the identified partially transparent object; means for generating a plurality of subdivisions of said at least partially transparent object; and means for testing said depth data, for each subdivision, for the existence of a significant depth discontinuity in said scene coincident with said subdivision, wherein said apparatus further comprises means for determining the contribution of each subdivision to the graphical representation of the scene on the basis of samples taken within said subdivision, including sampling means, responsive to said testing means, for determining one or more sample contributions in said subdivision in the case of detection of no significant depth discontinuity and two or more sample contributions in said subdivision in the case of detection of a significant depth discontinuity, one of said two or more sample contributions being determined in respect of a point in said subdivision separated from points at which other contributions are determined by said significant depth discontinuity.
2. Apparatus in accordance with claim 1, wherein said contribution determining means is operable, in the case of detection of a significant depth discontinuity, to determine a contribution on the basis of a weighted average of said sample contributions, said weighting taking account of the relative areas of said subdivision separated by said discontinuity.
3. Apparatus in accordance with claim 1 or claim 2, wherein said depth discontinuity detection means is operable to collect, for a selected subdivision, a plurality of depth data and is further operable to identify and compare minimum and maximum depth values of said collected depth data to identify whether or not a significant discontinuity is present in said subdivision.
4. Apparatus in accordance with claim 3, wherein said depth discontinuity detection means is operable to compare said minimum and maximum values to determine whether said values differ by more than or less than a predetermined amount.
5. Apparatus in accordance with claim 4, wherein said depth discontinuity detection means is operable to generate a threshold value based upon one of said minimum and maximum values and said predetermined amount, and wherein said detection means is operable to compare the other of said minimum and maximum values with said threshold value to establish whether said minimum and maximum values differ by more than the predetermined amount.
6. Apparatus in accordance with any preceding claim wherein, in the case of no detection of a significant depth discontinuity in a portion of a scene corresponding to a subdivision, said sampling means is operable to determine a sample contribution at a single sample point in said subdivision.
7. Apparatus in accordance with claim 6, wherein said subdivision is of substantially regular shape, and said sample point is located substantially centrally of said subdivision.
8. Apparatus in accordance with any preceding claim, wherein said apparatus includes first storage means for storing pixel data defining a graphical representation, and second storage means for storing depth data corresponding with said pixel data, said determining means being operable to refer to and modify data stored in said first and second storage means in use.
9. Apparatus in accordance with claim 8, wherein said detecting means is operable to detect, for each subdivision, if said subdivision is for an object having depth greater than the stored depth data at that location in the scene, in use, and, if that is the case, then being operable to discard said subdivision from further consideration.
10. A method of graphically representing a three dimensional scene including an at least partially transparent object, comprising the steps of: identifying an at least partially transparent object of said scene; determining a graphical representation with respect to a viewpoint, and depth data corresponding to said graphical representation relative to said viewpoint, of the three dimensional scene to the exclusion of said identified partially transparent object; generating a plurality of subdivisions of said at least partially transparent object; detecting, for each subdivision, a significant depth discontinuity coincident with said subdivision; determining the contribution of each subdivision to the graphical representation of the scene, including the step of sampling for determining a sample contribution at at least one sampling point in said subdivision in the case that a significant depth discontinuity is not detected and at at least two sampling points in said subdivision in the case that a significant depth discontinuity is detected, at least one of said sampling points being separated from the other point or points by said significant depth discontinuity.
11. A method in accordance with claim 10, wherein said step of determining said contribution includes determining on the basis of a weighted average of said sample contributions, in the case of detection of a significant depth discontinuity, said weighting taking account of the relative areas of said subdivision separated by said discontinuity.
12. A method in accordance with claim 10 or claim 11, wherein the step of detecting a depth discontinuity includes the steps of collecting, for a selected subdivision, a plurality of depth data and identifying and comparing minimum and maximum depth values of said collected depth data to identify whether or not a significant discontinuity is present in said subdivision.
13. A method in accordance with claim 12, wherein said step of detecting a depth discontinuity includes comparing said minimum and maximum values to determine whether said values differ by more than or less than a predetermined amount.
14. A method in accordance with claim 13, wherein said step of detecting a depth discontinuity includes the step of generating a threshold value based upon one of said minimum and maximum values and said predetermined amount, and comparing the other of said minimum and maximum values with said threshold value to establish whether said minimum and maximum values differ by more than the predetermined amount.
15. A method in accordance with any one of claims 10 to 14, wherein said step of sampling, in the case of no detection of a significant depth discontinuity in a portion of a scene corresponding to a subdivision, includes the step of determining a sampling contribution at a single sample point in said subdivision.
16. A method in accordance with any one of claims 10 to 15, including the steps of storing in a first location data defining a graphical representation, and storing in a second location data, corresponding with said data defining a graphical representation, representing depth of objects in a represented scene, and wherein said step of determining a graphical representation and depth data includes the steps of referring to and modifying data stored in said first and second locations.
17. A method in accordance with claim 16, and further comprising the step of detecting, for a subdivision, if said subdivision corresponds with an object to be graphically represented having depth greater than the depth data stored in said second location at that location in the scene, and if that is the case, then discarding said subdivision from further consideration.
18. A storage medium storing computer executable instructions for configuring a computer apparatus to operate as apparatus in accordance with any one of claims 1 to 9.
19. A signal carrying computer executable instructions for receipt by a computer apparatus and for configuring said computer apparatus to operate as apparatus in accordance with any one of claims 1 to 9.
20. A storage medium storing computer executable instructions for configuring a computer to operate in accordance with the method of any one of claims 10 to 17.
21. A signal carrying computer executable instructions for receipt by a computer apparatus and for configuring said computer apparatus to operate in accordance with the method of any one of claims 10 to 17.
GB9926761A 1999-11-11 1999-11-11 Computer graphics apparatus Expired - Fee Related GB2359230B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB9926761A GB2359230B (en) 1999-11-11 1999-11-11 Computer graphics apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB9926761A GB2359230B (en) 1999-11-11 1999-11-11 Computer graphics apparatus

Publications (3)

Publication Number Publication Date
GB9926761D0 GB9926761D0 (en) 2000-01-12
GB2359230A true GB2359230A (en) 2001-08-15
GB2359230B GB2359230B (en) 2004-02-11

Family

ID=10864385

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9926761A Expired - Fee Related GB2359230B (en) 1999-11-11 1999-11-11 Computer graphics apparatus

Country Status (1)

Country Link
GB (1) GB2359230B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013168998A1 (en) 2012-05-10 2013-11-14 Samsung Electronics Co., Ltd. Apparatus and method for processing 3d information
CN104835191A (en) * 2014-02-06 2015-08-12 想象技术有限公司 Opacity Testing For Processing Primitives In 3D Graphics Processing System

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110120142B (en) * 2018-02-07 2021-12-31 中国石油化工股份有限公司 Fire smoke video intelligent monitoring early warning system and early warning method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2226937A (en) * 1988-12-05 1990-07-11 Rediffusion Simulation Ltd Image display
US5831627A (en) * 1996-06-27 1998-11-03 R/Greenberg Associates System and method for providing improved graphics generation performance using memory lookup
GB2331217A (en) * 1997-11-07 1999-05-12 Sega Enterprises Kk Image processor

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2226937A (en) * 1988-12-05 1990-07-11 Rediffusion Simulation Ltd Image display
US5831627A (en) * 1996-06-27 1998-11-03 R/Greenberg Associates System and method for providing improved graphics generation performance using memory lookup
GB2331217A (en) * 1997-11-07 1999-05-12 Sega Enterprises Kk Image processor

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013168998A1 (en) 2012-05-10 2013-11-14 Samsung Electronics Co., Ltd. Apparatus and method for processing 3d information
CN104272731A (en) * 2012-05-10 2015-01-07 三星电子株式会社 Apparatus and method for processing 3d information
JP2015524915A (en) * 2012-05-10 2015-08-27 サムスン エレクトロニクス カンパニー リミテッド 3D information processing apparatus and method
EP2848002A4 (en) * 2012-05-10 2016-01-20 Samsung Electronics Co Ltd Apparatus and method for processing 3d information
US9323977B2 (en) 2012-05-10 2016-04-26 Samsung Electronics Co., Ltd. Apparatus and method for processing 3D information
CN104835191A (en) * 2014-02-06 2015-08-12 想象技术有限公司 Opacity Testing For Processing Primitives In 3D Graphics Processing System
GB2522868A (en) * 2014-02-06 2015-08-12 Imagination Tech Ltd Opacity testing for processing primitives in a 3D graphics processing system
US9299187B2 (en) 2014-02-06 2016-03-29 Imagination Technologies Limited Opacity testing for processing primitives in a 3D graphics processing system
GB2522868B (en) 2014-02-06 2016-11-02 Imagination Tech Ltd Opacity testing for processing primitives in a 3D graphics processing system
CN104835191B (en) * 2014-02-06 2019-03-01 想象技术有限公司 For handling the opacity test of pel in 3D graphic system

Also Published As

Publication number Publication date
GB2359230B (en) 2004-02-11
GB9926761D0 (en) 2000-01-12

Similar Documents

Publication Publication Date Title
CN112200900B (en) Volume cloud rendering method and device, electronic equipment and storage medium
CN111508052B (en) Rendering method and device of three-dimensional grid body
US7570266B1 (en) Multiple data buffers for processing graphics data
US5043922A (en) Graphics system shadow generation using a depth buffer
JP3390463B2 (en) Shadow test method for 3D graphics
US5083287A (en) Method and apparatus for applying a shadowing operation to figures to be drawn for displaying on crt-display
Brabec et al. Shadow volumes on programmable graphics hardware
US6285348B1 (en) Method and system for providing implicit edge antialiasing
Bartz et al. Extending graphics hardware for occlusion queries in OpenGL
JP2004038926A (en) Texture map editing
JPH0896161A (en) Shadow drawing method and three-dimensional graphic computersystem
US7400325B1 (en) Culling before setup in viewport and culling unit
JP2006120158A (en) Method for hardware accelerated anti-aliasing in three-dimension
JPH09223244A (en) Method and device for rendering three-dimensional object at a high speed
KR101100650B1 (en) A system for indirect lighting and multi-layered displacement mapping using map data and its method
EP1125252B1 (en) Shading and texturing 3-dimensional computer generated images
GB2406252A (en) Generation of texture maps for use in 3D computer graphics
KR100277803B1 (en) 3D graphic display
US7292239B1 (en) Cull before attribute read
KR0147439B1 (en) A hardware based graphics workstation for refraction phenomena
GB2359230A (en) Computer graphics rendering of partially transparent object
US6333742B1 (en) Spotlight characteristic forming method and image processor using the same
JP3005981B2 (en) Shadow processing method and apparatus
US5900882A (en) Determining texture coordinates in computer graphics
GB2359229A (en) Computer graphics rendering of partially transparent object

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20181111