3D-graphics
The invention relates to a method of generating 3D-graphics that comprises edge anti-aliasing, a graphics system for generating edge anti-aliased 3D-graphics, a computer comprising the graphics system, and a display apparatus comprising the graphics system.
Known methods of generating 3D-graphics receive geometric data which comprises geometric primitives of the 3D-objects from a 3D-application. Usually, the vertices of the primitives are provided as the geometric data. Further, texture data is available which indicates the textures of the 3D-objects. Usually, the texture data is stored in a texture memory and represents texture intensities at texel positions on a texel grid. The 3D-graphics method processes the geometric data and the texture data to obtain pixel intensities at pixel positions on a pixel grid in the screen space. Usually, these pixel intensities are stored in a frame buffer. The pixel intensities are displayed on a display by reading out the frame buffer. In the prior art fragment buffer approach, a plurality of pixel fragments is stored per pixel position in a plurality of fragment (frame) buffers. Each of the pixel fragments is related to an amount of area of the pixel cell covered by the associated primitive. Thus, for each primitive that at least partially covers the pixel cell area, a weight factor is determined representative of the amount of area covered. The weighted intensities that are obtained by multiplying, for each of these primitives, the weight factor with the pixel intensities are called partial colors. This approach requires a substantial amount of memory bandwidth because for each pixel all contributing primitives have to be processed. When all primitives of a scene have been processed, the pixel fragments that are stored in the pixel fragment buffers are, for each pixel location, merged to obtain the final pixel intensities (including color). Often, only a finite number of pixel fragments can be stored per pixel location because only a finite number of pixel fragment buffers is available.
This limitation means that the pixel fragment values have to be merged when all the pixel fragment buffers for a pixel are full, even if more primitives contribute to the pixel value. This causes artifacts.
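The overflow problem described above can be illustrated with a small sketch. The following Python fragment is purely illustrative: the names, the buffer limit and the merge heuristic are invented for this example and are not taken from any actual prior art system. Each pixel keeps a finite list of (weight, color) fragments, and an overflow forces an early, lossy merge.

```python
MAX_FRAGMENTS = 2  # a finite number of fragment buffers per pixel (assumed)

def insert_fragment(fragments, weight, color):
    """Store a (weight, color) pixel fragment; merge when the buffers are full."""
    if len(fragments) < MAX_FRAGMENTS:
        fragments.append((weight, color))
    else:
        # Buffers full: merge the two existing fragments into one weighted
        # average. Information is lost here, which causes the artifacts
        # mentioned in the text.
        (w1, c1), (w2, c2) = fragments[-2], fragments[-1]
        merged = (w1 + w2, (w1 * c1 + w2 * c2) / (w1 + w2))
        fragments[-2:] = [merged]
        fragments.append((weight, color))

def resolve(fragments):
    """Final pixel intensity: the sum of the partial colors weight * color."""
    return sum(w * c for w, c in fragments)
```

With two buffers, a third contributing primitive already triggers a premature merge, even though its contribution is still needed for the exact pixel value.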
It is an object of the invention to provide a method of generating edge anti-aliased 3D-graphics that requires fewer frame buffers at an improved performance. A first aspect of the invention provides a method of generating edge anti-aliased 3D-graphics. A second aspect of the invention provides a graphics system for generating edge anti-aliased 3D-graphics as claimed in claim 4. A third aspect of the invention provides a computer as claimed in claim 5. A fourth aspect of the invention provides a display apparatus as claimed in claim 6. Advantageous embodiments are defined in the dependent claims.
In the method of generating edge anti-aliased 3D-graphics, vertices of the geometric primitives of 3D-objects are transformed to screen space. As in the prior art, the primitives may be any polygon. The vertices may have positions in the screen space that are in-between grid positions, and the image to be displayed is determined by the intensities (including brightness and color) on the grid positions. The visible parts of the geometric primitives are determined by using the geometric data to obtain non-overlapping primitives. Thus, for overlapping primitives it is determined which primitive is on top, as seen from the viewpoint. The parts of primitives that are occluded are not used anymore. For example, if a first primitive is a large polygon completely covering a small polygon, and the large polygon is on top, the small polygon is not processed anymore; only the large polygon is processed further. If the small polygon is on top, two adjacent areas are obtained: the small polygon and an area of the large polygon minus the small polygon. This latter area is further also referred to as the delta-area. The delta-area, which covers the large polygon except where the small polygon covers it, may be a polygon itself, or may be built up out of several adjacent or separate, non-overlapping polygons covering the delta-area. Hidden surface removal (further referred to as HSR) algorithms as such are known from the publication "Hidden Surface Removal using polygon area sorting" by Kevin Weiler and Peter Atherton, Computer Graphics 11 (SIGGRAPH 77 Proceedings), pp. 214-222, July 1977.
The non-overlapping geometric primitives are stored in a memory for later use. The stored non-overlapping geometric primitives are rasterized one by one to determine the intensities of the pixels in the screen space based on texture data retrieved from bitmaps and/or from bitmaps generated procedurally on-the-fly (such as Gouraud shading) representing textures of the 3D-objects. The rasterizer may use the inverse texture mapping or the forward texture mapping approach, which as such are both known.
The intensities of the pixels determined for each one of the non-overlapping geometric primitives are accumulated in a frame memory to obtain the final intensities of the pixels after all the non-overlapping geometric primitives have been processed by the rasterizer. Because the rasterizer in the present invention only has to process non-overlapping geometric primitives, it is not required anymore to store the intermediate results of (partly) overlapping geometric primitives in the fragment frame buffers. In accordance with the present invention, the pixel intensity per pixel per primitive can simply be added to the value already stored in the frame buffer for an adjacent non-overlapping primitive that has been processed earlier. Thus, less memory is required, and no artifacts will be caused by there being more contributing primitives than available fragment frame buffers.
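The accumulation step just described can be sketched as follows. This is a minimal illustration under assumed names and data layout (a dictionary stands in for the frame buffer; the partial intensities are assumed to be already computed per primitive), not the actual hardware implementation:

```python
def accumulate(frame_buffer, partial_intensities):
    """Read-add-write accumulation: add each primitive's partial pixel
    intensities IPi to the values already stored in the frame buffer,
    building up the final intensities PIi primitive by primitive."""
    for (x, y), ip in partial_intensities.items():
        frame_buffer[(x, y)] = frame_buffer.get((x, y), 0.0) + ip
    return frame_buffer
```

Because the primitives are non-overlapping, two primitives sharing an edge each contribute a fraction of a boundary pixel, and the simple sum yields the correct final intensity without any intermediate fragment storage.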
In an embodiment as claimed in claim 2, the rasterizer is based on the inverse texture mapping approach, which as such is well known. In the PhD thesis "A layered object-space based architecture for interactive raster graphics" of A.A.M. Kuijk, University of Amsterdam, September 1996, a hidden surface removal unit is disclosed which is present at the input side of the graphics processor. However, the disclosed scan-line rasterizer has the disadvantage that it requires the non-overlapping primitives that contribute to a pixel to be available at the time of calculating the final pixel color (summation of all the primitive intensity contributions), and therefore requires the primitives in pre-sorted order per scan-line (the order may differ per scan-line). These non-overlapping primitives have a specific data structure (not the normal polygonal structure where the vertices of the corners are stored), called a pattern, such that they can be processed by the specific scan-line rasterizer used. In the system in accordance with the present invention, the non-overlapping primitives do not have to be in this special data structure (they are in the same polygonal vertex format as the (possibly overlapping) input polygons); the summation of the primitive contributions to a pixel is accumulated via a read-add-write access to the frame buffer on the associated pixel position.
Further, the prior art approach has the disadvantage that a per-pixel normalization is required, as can be seen in equation (8.2) and equation (8.3) of the prior art. Another disadvantage is that the system is only suitable for box pre-filtering, see equation (8.2) and the text just above it in the prior art. The system is not suitable for higher order pre-filtering.
In an embodiment as claimed in claim 3, the rasterizer is based on the forward texture mapping approach, which as such is well known, for example from the publication "Resample hardware for 3D graphics" by Koen Meinds and Bart Barenbrug, in T. Ertl, W.
Heidrich, and M. Doggett, editors, Proceedings of Graphics Hardware 2002, pages 17-26. In this document, no hidden surface removal unit is present at the input side of the graphics processor, and a pixel fragment processing circuit that comprises pixel fragment buffers is required.
These and other aspects of the invention are apparent from and will be elucidated with reference to the embodiments described hereinafter.
In the drawings: Fig. 1 elucidates a display of a real world 3D-object on a display screen,
Fig. 2 elucidates the known inverse texture-mapping algorithm, Fig. 3 shows a block diagram of a circuit for performing the known inverse texture-mapping algorithm,
Fig. 4 elucidates the known forward texture-mapping algorithm, Fig. 5 shows a block diagram of a circuit for performing the known forward texture-mapping algorithm,
Fig. 6 shows a block diagram of a 3D-graphics system in accordance with the invention,
Figs. 7A and 7B show examples of possible configurations of overlapping primitives and the resulting non-overlapping primitives,
Fig. 8 shows a block diagram of a computer which comprises the 3D-graphics system, and
Fig. 9 shows a block diagram of a display apparatus that comprises the 3D- graphics system.
Fig. 1 elucidates a display of a real world three-dimensional (further also referred to as 3D) object on a display screen. A real world object WO, which may be a 3D-object such as the cube shown, is projected on a two-dimensional display screen DS. The appearance of the 3D-object WO is determined by a surface structure usually referred to as texture. In Fig. 1 the polygon A has a texture TA and the polygon B has a texture TB. The polygons A and B are, with a more general term, also referred to as real world graphics primitives.
The projection of the real world object WO on the screen DS is obtained by defining an eye or camera position ECP with respect to the screen DS. In Fig. 1 is shown how the polygon SGP corresponding to the polygon A is projected on the screen DS. The polygon SGP in the screen space SSP, defined by the coordinates x and y, is also referred to as a graphics primitive instead of the graphics primitive in the screen space. Thus, the term graphics primitive may indicate the polygon A in the eye space, the polygon SGP in the screen space, or the polygon TGP in the texture space; it is clear from the context which graphics primitive is meant. It is only the geometry of the polygon A that is used to determine the geometry of the polygon SGP. Usually, it suffices to know the vertices of the polygon A to determine the vertices of the polygon SGP.
The texture TA of the polygon A is not directly projected from the real world into the screen space SSP. The different textures of the real world object WO are stored in a texture map or texture space TSP defined by the coordinates u and v. For example, Fig. 1 shows that the polygon A has a texture TA which is available in the texture space TSP in the area indicated by TA, while the polygon B has another texture TB which is available in the texture space TSP in the area indicated by TB. The polygon A is projected on the texture space TSP such that a polygon TGP is obtained; when the texture present within the polygon TGP is projected on the polygon A, the texture of the real world object WO is obtained or at least resembled as closely as possible. A perspective transformation PPT between the texture space TSP and the screen space SSP projects the texture of the polygon TGP on the corresponding polygon SGP. This process is also referred to as texture mapping. Usually, the textures are not all present in a global texture space, but every texture defines its own texture space.
Fig. 2 elucidates the known inverse texture-mapping algorithm. Fig. 2 shows the polygon SGP in the screen space SSP and the polygon TGP in the texture space TSP. To facilitate the elucidation, it is assumed that both the polygon SGP and the polygon TGP correspond to the polygon A of the real world object WO of Fig. 1.
The intensities PIi of the pixels Pi present in the screen space SSP define the image displayed. Usually, the pixels Pi are actually positioned (in a matrix display) or thought to be positioned (in a CRT) in an orthogonal matrix of positions. In Fig. 2 only a limited number of the pixels Pi is indicated by the dots. The polygon SGP is shown in the screen space SSP to indicate which pixels Pi are positioned within the polygon SGP.
The texels or texel intensities Ti in the texture space TSP are indicated by the intersections of the horizontal and vertical lines. These texels Ti that usually are stored in a
memory called texture map define the texture. It is assumed that the part of the texel map or texture space TSP shown corresponds to the texture TA shown in Fig. 1. The polygon TGP is shown in the texture space TSP to indicate which texels Ti are positioned within the polygon TGP. The well-known inverse texture mapping comprises the steps elucidated in the following. A blurring-filter that has a footprint FP is shown in the screen space SSP and has to operate on the pixels Pi to perform the weighted averaging operation required to obtain the blurring. This footprint FP in the screen space SSP is mapped to the texture space TSP and called the mapped footprint MFP. The polygon TGP that may be obtained by mapping the polygon SGP from the screen space SSP to the texture space TSP is also called the mapped polygon. The texture space TSP comprises the textures TA, TB (see Fig. 1) which should be displayed on the surface of the polygon SGP. As described above, these textures TA, TB are defined by texel intensities Ti stored in a texel memory. Thus, the textures TA, TB are appearance information which defines an appearance of the graphics primitive SGP by defining texel intensities Ti in a texture space TSP.
The texels Ti both falling within the mapped footprint MFP and within the mapped polygon TGP are determined. These texels Ti are indicated by the crosses. The mapped blurring-filter MFP is used to weight the texel intensities Ti of these texels Ti to obtain the intensities of the pixels Pi. The weighting of the texel intensities Ti is performed in accordance with a filter characteristic of the mapped blurring-filter MFP, which characteristic is a transformed filter characteristic of the blurring-filter in the screen space SSP.
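The weighting step just described can be sketched in a simplified form. The following is a hypothetical illustration only: the geometry tests and the filter profile are passed in as stand-in functions, whereas a real inverse texture mapper evaluates the transformed filter characteristic of the mapped blurring-filter MFP.

```python
def pixel_intensity(texels, in_mapped_footprint, in_polygon, weight):
    """Weighted average of the texel intensities over the mapped footprint.

    texels: dict mapping (u, v) texel position -> intensity Ti
    in_mapped_footprint, in_polygon: predicates on (u, v) selecting the
        texels inside the mapped footprint MFP and the mapped polygon TGP
    weight: the transformed filter profile as a function of (u, v)
    """
    total, norm = 0.0, 0.0
    for (u, v), ti in texels.items():
        # Only texels inside BOTH the mapped footprint and the polygon count.
        if in_mapped_footprint(u, v) and in_polygon(u, v):
            w = weight(u, v)
            total += w * ti
            norm += w
    return total / norm if norm else 0.0
```

Note that this sketch normalizes by the summed weights; as mentioned further below, such a per-pixel normalization is one of the disadvantages of the prior art inverse mapping approach.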
Fig. 3 shows a block diagram of a circuit for performing the known inverse texture mapping. The circuit comprises a screen space rasterizer RSS which operates in the screen space SSP, a resampler RTS in the texture space TSP, a texture memory TM and a pixel fragment processing circuit PFO. Ut, Vt is the texture coordinate of a texel Ti with index t, Xp, Yp is the screen coordinate of a pixel with index p, It is the color of the texel Ti with index t, and IPi is the intermediate intensity (brightness and color) of pixel Pi with index i.
The screen space rasterizer RSS rasterizes the polygon SGP in the screen space SSP. For every pixel Pi traversed, its anti-aliasing filter footprint FP is mapped to the texture space TSP. The anti-aliasing filter is also commonly referred to as the pre-filter. The texels Ti within the mapped footprint MFP and within the mapped polygon TGP are determined and weighted according to a mapped profile of the anti-aliasing filter. The color of the pixels Pi is computed using the mapped anti-aliasing filter in the texture space TSP.
Thus, the rasterizer RSS receives the polygons SGP in the screen space SSP to supply the mapped anti-aliasing filter footprint MFP and the coordinates of the pixels Pi. A resampler in the texture space RTS receives the mapped anti-aliasing filter footprint MFP and information on the position of the polygon TGP to determine which texels Ti are within the mapped footprint MFP and within the polygon TGP. The intensities of the texels Ti determined in this manner are retrieved from the texture memory TM. The anti-aliasing filter filters the relevant intensities of the texels Ti determined in this manner to supply the intermediate color IPi of the pixel Pi.
The pixel fragment processing circuit PFO blends the intermediate pixel intensities IPi of overlapping polygons due to the blurring. The pixel fragment processing circuit PFO may comprise a pixel fragment composition unit, also commonly referred to as A-buffer, which contains a fragment buffer. Commonly, a fragment buffer is used to minimize edge aliasing based on geometric information on the overlap of an area (often a square) associated to a pixel with the polygon. Often a mask is used on a super-sample grid which enables a quantized approximation of the geometric information. This geometric information is an embodiment of what is called the "contribution factor" of a pixel. For the motion blur application, the contribution value of the pixels of a moving object is dependent on the motion speed and is filtered blurry in the same manner as the color channels. The pixel fragment composition unit PFO will blend these pixel fragments according to their contribution factor until the sum of the contribution factors reaches 100%, or no pixel fragments are available anymore, thereby generating the effect of translucent pixels of moving objects.
To be able to implement the above process, pixel fragments are required in depth (Z-value) sorted order. Because polygons can be delivered in random depth order, the pixel fragments per pixel location are stored in depth-sorted order in a pixel fragment buffer. However, the contribution factor stored in the fragment buffer is now not based on the geometric coverage per pixel. Instead, the contribution factor, which depends on the motion speed and which is filtered blurry in the same manner as the color channels, is stored. The pixel fragment composition algorithm comprises two stages: insertion of pixel fragments in the fragment buffer and composition of pixel fragments from the fragment buffer. To prevent overflow during the insertion phase, fragments that are closest in their depth values may be merged. After all the polygons of the scene are rendered, the composition phase composes fragments per pixel position in a front-to-back order. The final pixel color is obtained when the sum of the contribution factors of all added fragments is one or more, or when all pixel fragments have been processed.
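The two stages described above can be sketched for a single pixel location. This is an illustrative simplification with invented names and data layout (tuples in a Python list stand in for the fragment buffer; the overflow merging is omitted):

```python
import bisect

def insert_sorted(fragments, depth, contribution, color):
    """Insertion stage: keep the fragments in depth-sorted, i.e.
    front-to-back, order regardless of the delivery order."""
    bisect.insort(fragments, (depth, contribution, color))

def compose(fragments):
    """Composition stage: blend the fragments front to back and stop as
    soon as the accumulated contribution factors reach one (100%)."""
    color, acc = 0.0, 0.0
    for _, contribution, frag_color in fragments:
        take = min(contribution, 1.0 - acc)  # clip the last contribution
        color += take * frag_color
        acc += take
        if acc >= 1.0:
            break
    return color
```

A fragment inserted with a smaller depth value automatically ends up in front and dominates the composed color, which is exactly the front-to-back behavior described in the text.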
Fig. 4 elucidates the known forward texture mapping. Fig. 4 shows the polygon SGP in the screen space SSP and the polygon TGP in the texture space TSP. To facilitate the elucidation, it is assumed that both the polygon SGP and the polygon TGP correspond to the polygon A of the real world object WO of Fig. 1.
The intensities PIi of the pixels Pi present in the screen space SSP define the image displayed. The pixels Pi are indicated by the dots. The polygon SGP is shown in the screen space SSP to indicate which pixels Pi are positioned within the polygon SGP. The pixel actually indicated by Pi is positioned outside the polygon SGP. With each pixel Pi a footprint FP of a blur filter is associated.
The texels or texel intensities Ti in the texture space TSP are indicated by the intersections of the horizontal and vertical lines. Again, these texels Ti that usually are stored in a memory called texture map define the texture. It is assumed that the part of the texel map or texture space TSP shown corresponds to the texture TA shown in Fig. 1. The polygon TGP is shown in the texture space TSP to indicate which texels Ti are positioned within the polygon TGP.
The coordinates of the texels Ti within the polygon TGP are mapped (resampled) to the screen space SSP. In Fig. 4, this mapping (indicated by the arrow AR from the texture space TSP to the screen space SSP) of a texel Ti (indicated by a cross in the texture space) to the screen space SSP provides mapped texels MTi (indicated by the cross in the screen space SSP, which cross may be positioned in-between pixel positions indicated by the dots) in the screen space SSP. A contribution of the mapped texel MTi to all the pixels Pi which have a footprint FP of the blur filter which encompasses the mapped texel MTi is determined in accordance with the filter characteristic of the blur filter. All the contributions of the mapped texels MTi to the pixels Pi are summed to obtain the intensities PIi of the pixels Pi.
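The splatting of a mapped texel MTi over the surrounding pixels can be sketched as follows. This is a purely illustrative example: the dictionary buffer, the footprint radius and the separable tent (linear) profile are assumptions standing in for the actual pre-filter of the blur filter.

```python
def splat(pixel_buffer, mapped_texel, intensity, radius=1.0):
    """Add one mapped texel's contribution to every pixel whose blur-filter
    footprint encompasses the mapped texel position."""
    mx, my = mapped_texel
    # Visit the integer pixel positions whose footprint can cover (mx, my);
    # the range is generous and non-contributing pixels get weight zero.
    for px in range(int(mx - radius), int(mx + radius) + 2):
        for py in range(int(my - radius), int(my + radius) + 2):
            dx, dy = mx - px, my - py
            # Separable tent profile: the weight falls off linearly with
            # the distance between pixel and mapped texel.
            w = max(0.0, 1.0 - abs(dx) / radius) * max(0.0, 1.0 - abs(dy) / radius)
            if w > 0.0:
                pixel_buffer[(px, py)] = pixel_buffer.get((px, py), 0.0) + w * intensity
    return pixel_buffer
```

A mapped texel halfway between two pixel positions contributes half of its intensity to each, illustrating how the contributions become smaller the further the pixel is from the mapped texel position.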
In the forward texture mapping, the resampling from the colors of the texel Ti to the colors of the pixels Pi occurs in the screen space SSP, and thus is input sample driven. Compared to the inverse texture mapping, it is easier to determine which texels Ti contribute to a particular pixel Pi. Only the mapped texels MTi that are within a footprint FP of the anti-aliasing filter for a particular pixel Pi will contribute to the intensity or color of this particular pixel Pi. Further, there is no need to transform the anti-aliasing filter from the screen space SSP to the texel space TSP.
Fig. 5 shows a block diagram of a circuit for performing the forward texture mapping. The circuit comprises a rasterizer RTS which operates in the texture space TSP, a resampler RSS in the screen space SSP, a texture memory TM and a pixel fragment processing circuit PFO. Ut, Vt is the texture coordinate of a texel Ti with index t, Xp, Yp is the screen coordinate of a pixel with index p, It is the color of the texel Ti with index t, and Ip is the filtered color of pixel Pi with index p.
The rasterizer RTS rasterizes the polygon TGP in the texture space TSP. For every texel Ti which is within the polygon TGP, the resampler in the screen space RSS maps the texel Ti to a mapped texel MTi in the screen space SSP. Further, the resampler RSS determines the contribution of a mapped texel MTi to all the pixels Pi of which the associated footprint FP of the anti-aliasing filter encompasses this mapped texel MTi. Finally, the resampler RSS sums the intensity contributions of all mapped texels MTi to the pixels Pi to obtain the intensities PIi of the pixels Pi.
The pixel fragment processing circuit PFO shown in Fig. 5 has been elucidated in detail with respect to Fig. 3.
Fig. 6 shows a block diagram of a 3D-graphics system in accordance with the invention. The 3D-application 1 provides geometric data GD to the vertex transform and lighting unit 2, further also referred to as the T&L unit 2. The geometric data GD defines geometric primitives in the texture space TGP and/or the screen space SGP. Usually, the geometric data GD comprises vertices of polygons. These vertices, which are submitted by the 3D-application, are transformed by the T&L unit 2 from a coordinate system used by the 3D-application (for example, "real" world coordinates) to the screen space SSP. The 3D-application might use a 3D-API such as, for example, OpenGL or Direct3D. The coordinates in the screen space SSP have fractional (sub-pixel) precision; for example, the coordinates are represented by floating point or fixed point numbers. Also, an intensity may be calculated for each vertex (vertex shading). Usually, the intensity of the vertices comprises a brightness and a color.
The hidden surface removal unit 3 (further also referred to as HSR 3) determines the visible part(s) of the geometric primitives TGP; SGP using their geometric data. The parts of the geometric primitives TGP; SGP which are occluded by other geometric primitives TGP; SGP seen from the viewpoint or camera ECP are determined and cut-off to obtain only non-overlapping geometric primitives TGP'; SGP'. This is further elucidated with respect to Figs. 7A and 7B. The non-overlapping geometric primitives TGP'; SGP' are stored by the HSR 3 in a primitives memory 4 for later use.
In a forward texture mapping system (further also referred to as FTM system), preferably the HSR 3 operates on geometric primitives SGP in the screen space SSP. The geometric primitives TGP in the texture space TSP are obtained by a transformation of the geometric primitives SGP in the screen space SSP. In an inverse texture mapping system (further also referred to as ITM system), the HSR preferably operates in the texture space on the geometric primitives TGP.
The 3D-graphics system further comprises a rasterizer 5 which receives the output data of the HSR 3 and texture information from the texture memory 6 to determine the partial intensities IPi of the pixels Pi for all non-overlapping primitives TGP'; SGP' which contribute to the final intensity PIi of the pixel Pi. The partial intensities IPi are accumulated by the rasterizer 5 in the frame buffer 7 to obtain the final intensities PIi of all the pixels Pi. The operation of the rasterizer 5 depends on whether the 3D-graphics system is an FTM or ITM system.
In an ITM system, the rasterizer 5 comprises a screen space rasterizer RSS (see Fig. 3) which rasterizes the non-overlapping geometric primitives SGP' in the screen space SSP one by one to obtain the grid positions of the pixels Pi per non-overlapping geometric primitive SGP'. To each pixel Pi in the screen space, a pre-filter is associated which has a predetermined filter profile and a pre-filter footprint FP centered on its associated pixel Pi. Such pre-filters are well known from the prior art ITM systems. The pre-filter footprint FP is mapped by the resampler RTS, for each pixel Pi of each one of the non-overlapping geometric primitives SGP', to the texture space TSP comprising the textures to obtain a mapped filter footprint MFP and a transformed filter profile being a transformed version of the filter profile of the pre-filter.
The resampler RTS determines, exactly or by approximation, for each mapped filter footprint MFP which texels Ti in the texture space TSP are positioned within both the mapped filter footprint MFP and the non-overlapping geometric primitive TGP' in the texture space TSP. These texels Ti are filtered with the transformed filter profile to obtain the partial intensity IPi of the associated pixel Pi. Because the partial intensities IPi are determined for the non-overlapping primitives SGP' sequentially, one by one, all partial intensities IPi are obtained for each non-overlapping primitive SGP' which contributes to the final intensity PIi of a particular one of the pixels Pi. The rasterizer 5 accumulates the partial intensities IPi for each of the pixels Pi in the frame buffer 7. Thus, after processing of all the non-overlapping primitives SGP' of a scene, the image to be displayed is present in the frame
buffer 7. It is not required to process all primitives SGP and to store all the fragments in a plurality of frame buffers.
Per non-overlapping primitive SGP', considering a particular one of the non-overlapping primitives SGP', a pre-filter footprint FP is attributed and mapped to the texture space TSP not only for the pixels Pi within this particular one of the primitives SGP' but also for pixels Pi outside the particular one of the primitives SGP' within a band around this particular primitive SGP'. The band is determined by the size of the pre-filter footprint FP. Only those pixels Pi belong to the band whose pre-filter footprint covers pixels within the particular primitive SGP'. In an FTM system, the rasterizer 5 comprises a texture space rasterizer RTS
(see Fig. 5) which rasterizes the non-overlapping geometric primitives TGP' in the texture space TSP one by one to obtain the grid positions of the texels Ti per non-overlapping geometric primitive TGP'.
The resampler in screen space RSS maps the texels Ti within a non-overlapping geometric primitive TGP', per non-overlapping geometric primitive TGP', to the screen space SSP to obtain mapped texel positions MTi. The resampler RSS further splats, in the screen space SSP, for each mapped texel position MTi, the associated intensity It of the texel Ti over a group of adjacent pixels Pi of which the associated pre-filters have footprints FP overlapping the mapped texel position MTi. The splatting determines the contributions of the texel intensity to the pixels Pi which surround the mapped texel position MTi. Usually, the contributions become smaller the further the pixel Pi is away from the mapped texel position MTi. Thus, per non-overlapping geometric primitive TGP', all the splatted intensities It are accumulated for all pixels Pi in the frame buffer 7 to obtain partial intensity contributions IPi for the pixels Pi of the group according to the pre-filter profiles of the associated pre-filters. The accumulation of the contributions of the mapped texels may also be performed by the resampler RSS. In that case, only the partial contributions IPi thus obtained are accumulated in the frame buffer 7.
Thus, after processing of all the non-overlapping primitives TGP' of a scene, the image to be displayed is present in the frame buffer 7. Figs. 7A and 7B show examples of possible configurations of overlapping primitives and the resulting non-overlapping primitives.
Fig. 7A shows two overlapping primitives TGP(1); SGP(1) and TGP(2); SGP(2). By way of example only, the primitives are shown to be triangles. The depth information of the primitives is supplied by the 3D-application. It is assumed that the primitive TGP(1); SGP(1) has a depth value Z1 and the primitive TGP(2); SGP(2) has a depth value Z2 such that the primitive TGP(2); SGP(2) is nearer to the viewpoint or camera ECP than the primitive TGP(1); SGP(1).
The HSR 3 uses the coordinates of the vertices and the depth values Z1 and Z2 of the two overlapping primitives TGP(1); SGP(1) and TGP(2); SGP(2) to determine which part of the primitive TGP(1); SGP(1) is occluded by the primitive TGP(2); SGP(2). This overlapped part is cut out, resulting in the 3 primitives P1, P2 and P3 shown in Fig. 7B.
Fig. 7B shows the 3 primitives P1, P2 and P3 determined by the HSR 3. The primitive P2 is identical to the primitive TGP(2); SGP(2) because this primitive is on top. From the primitive TGP(1); SGP(1), the part occluded by the primitive TGP(2); SGP(2) is in fact cut out such that only the non-occluded parts P1 and P3, which are visible, are available. Thus, instead of processing both overlapping primitives TGP(1); SGP(1) and TGP(2); SGP(2), now only the primitive TGP(2); SGP(2) and the visible parts P1 and P3 of the primitive TGP(1); SGP(1) are processed. Thus, in fact, now 3 adjacent, non-overlapping primitives P1, P2, and P3 are processed. If, for example, the system is only able to process triangles, the HSR 3 has to split the primitive P1 into several primitives which each are a triangle.
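The cut-out performed by the HSR 3 can be illustrated with a one-dimensional analogue. This sketch is purely illustrative (intervals stand in for polygons; the real unit operates on 2D polygon geometry, e.g. with a Weiler-Atherton style algorithm): a near "primitive" hides the overlapped part of a far one, leaving non-overlapping visible pieces analogous to P1 and P3.

```python
def cut_out(far, near):
    """Return the visible pieces of `far` after removing the part occluded
    by `near`. Intervals are (start, end) pairs; `near` is assumed to be
    closer to the viewpoint, like the primitive with depth Z2 above."""
    (f0, f1), (n0, n1) = far, near
    pieces = []
    if n0 > f0:                        # visible piece left of the overlap
        pieces.append((f0, min(n0, f1)))
    if n1 < f1:                        # visible piece right of the overlap
        pieces.append((max(n1, f0), f1))
    return pieces
```

A far primitive that is completely covered yields no visible pieces at all, matching the case in the text where an occluded polygon is not processed further.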
Fig. 8 shows a block diagram of a computer that comprises the 3D-graphics system. The computer PC that comprises the 3D-graphics system in accordance with the invention has an output O1 to supply the final intensities PIi. Of the 3D-graphics system in accordance with the invention only the frame buffer 7 is shown. As shown in Fig. 6, the frame buffer 7 receives the partial intensities IPi and supplies the final intensities PIi.
Usually, the 3D-graphics processing in a computer PC requires dedicated hardware that is present on a graphics board in a slot of the computer PC. The processor of the computer PC that is running a 3D-application supplies the geometric data GD to the graphics board where it is used as input data for the 3D-graphics processing. The output O1 may be a standard plug suitable to transfer the image to a display device. The image transferred to the display device may be in the form of digital data, for example if a DVI interface is used. The image may also be transferred as analog signal(s). Usually, the image is transferred as three RGB (Red, Green, and Blue) signals and synchronization signals.
The final intensities PIi of the 3D-graphics processing in accordance with the invention may therefore be considered to comprise a digital data stream or analog signals representing the intensities for red, green, and blue in combination or separately.
The display device and the computer PC may be integrated into a single cabinet.
Fig. 9 shows a block diagram of a display apparatus that comprises the 3D- graphics system. The display apparatus MON comprises the 3D-graphics system in accordance with the invention, a processing circuit PRO, and a display device LCD. Again, of the 3D-graphics system in accordance with the invention only the frame buffer 7 is shown. The frame buffer 7 receives the partial intensities IPi and supplies the final intensities PIi to the processing circuit PRO. The processing circuit PRO processes the pixel intensities PIi stored in the frame buffer 7 to obtain drive signals DS which are supplied to the display device LCD. The display device LCD has a display screen to display the images determined by the final intensities PIi.
The display apparatus may be a computer monitor receiving the geometric data GD from a computer or microprocessor that may be present in the same cabinet as the monitor. The 3D-graphics system may be used to display 3D-graphics that are locally generated in the display apparatus, for example to facilitate easy operation of the display apparatus by generating animated menus. The display apparatus may also be a television receiver.
The display device LCD may be of any kind, for example a liquid crystal display, a plasma display, or any other matrix display or a cathode ray tube. It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims.
In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. Use of the verb "comprise" and its conjugations does not exclude the presence of elements or steps other than those stated in a claim. The article "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.