GB2362552A - Processing graphical data - Google Patents

Processing graphical data

Info

Publication number
GB2362552A
GB2362552A
Authority
GB
United Kingdom
Prior art keywords
fragment
opaque
pixel
data
fragment information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB0120840A
Other versions
GB0120840D0 (en)
GB2362552B (en)
Inventor
Ken Cameron
Eamon O'dea
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ClearSpeed Technology Ltd
Original Assignee
ClearSpeed Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GB9915060A (published as GB2355633A)
Application filed by ClearSpeed Technology Ltd filed Critical ClearSpeed Technology Ltd
Publication of GB0120840D0
Publication of GB2362552A
Application granted
Publication of GB2362552B
Anticipated expiration
Current status: Expired - Fee Related


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/40 Hidden part removal
    • G06T15/405 Hidden part removal using Z-buffer

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Executing Machine-Instructions (AREA)
  • Image Processing (AREA)
  • Image Generation (AREA)

Abstract

An apparatus (13) for processing graphical data includes a processing core having processor elements (15) and a controller (17). The apparatus operates to process transparent blended graphical data without the need for expensive storage and sorting routines.

Description

PROCESSING GRAPHICAL DATA

The present invention relates to processing graphical data, and in particular to methods and apparatus for calculating pixel data for display on a display device having a plurality of pixels.
BACKGROUND OF THE INVENTION
Graphical images are conventionally displayed on display devices which include a plurality of picture elements, or pixels. One such device is illustrated in Figure 1 of the accompanying drawings. The display device 1 is made up of a plurality (for example 640x480, 800x600, up to 1600x1200) of pixels (picture elements) 3 which are used to make up the image displayed on the screen, as is well known. In order to display an image on the screen, the colour value of each pixel must be calculated for each frame of the image to be displayed. The pixel information is typically stored in a "frame buffer" of the display device. Calculation of pixel colour values is known as "shading" and is advantageously performed by a dedicated graphics processing system. The use of such a dedicated graphics processing system in combination with a host system enables the processing power of the host system to be more effectively utilised for processing application software. The application software typically determines the geometry of the images to be shown on the display device, and the graphics system takes that geometrical information and calculates the actual pixel values for display on the device 1.
Commonly, the graphics processing system receives information from the host application in the form of data regarding graphical primitives to be displayed. A graphical primitive is a basic graphical element which can be used to make up complex images. For example, a common graphical primitive is a triangle, and a number of differently shaped and sized triangles can be used to make up a larger, more complex shape. The primitive data includes information regarding the extent, colour, texture and other attributes of the primitive. Any amount of primitive information can be used. For example, colour and extent information alone may be sufficient for the application or primitive concerned. Visual depth information, i.e. the relative position of the primitive, can also be included. In the following examples, primitives having a high visual depth value are considered to be closer to the viewer, i.e. more visible, than those primitives having a lower visual depth value. Such a convention is arbitrary, and could be replaced by any other suitable convention.
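By way of illustration only, the per-pixel fragment attributes described above might be collected into a structure such as the following Python sketch; the field names and types are assumptions made for this example and are not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Fragment:
    """Hypothetical per-pixel fragment of primitive data (field names are illustrative)."""
    colour: tuple     # (r, g, b) colour of the primitive at this pixel
    alpha: float      # blending value: 1.0 for fully opaque, less for transparent
    depth: float      # visual depth; higher values are closer to the viewer
    is_opaque: bool   # True for opaque primitives, False for blended ones
```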
Figure 2 illustrates in side view, and Figure 3 illustrates in front view, the display of primitives P1, P2 and P3 on the pixels 3 of the display device 1. Primitive P1 is the rearmost primitive, having a visual depth P1d which is lower than the visual depths of the other primitives. Primitive P3 is the frontmost primitive. As can be seen, the primitives overlap one another, and so the graphics processing system must calculate, for each pixel, which of the primitives is displayed at that pixel.
In the following examples, three pixels 3a, 3b and 3c will be used to illustrate the graphical processing of the primitive data.
In a graphical processing system having a single processor, a primitive is analysed so that a pixel covered by the primitive can be identified. A "fragment" of the primitive data is determined for that pixel, and is then processed to determine the colour to be displayed for the pixel. When one fragment has been processed, a further fragment can be identified and processed.
The graphics processor receives fragment information which contains data indicating the colour, texture and blending characteristics of the primitive concerned at a particular pixel.
A "shadingm process is then used to process the fragment information in order to determine the actual pixel data which is to be written to the frame buffer of the display device for display thereon. The shading process results in the determination of the colour of the pixel from the fragment information. This may involve a texture look-up operation to determine the texture characteristics to be displayed at the pixel. A texture look-up involves a memory access step to retrieve the texel, or texture element, for the pixel. For opaque primitives, the colour information is supplied to the frame buffer where it overwrites the current value to give a new value for display.
The frame buffer contents can be displayed immediately, or could be displayed at an arbitrary time in the future (for example using multiple frame buffers for the device), and any suitable scheme can be used for the display device concerned. Figure 4 shows the final frame buffer values for the primitive arrangement shown in Figures 2 and 3. Pixel 3a will display the properties of primitive P1, pixel 3b will display the properties of primitive P2, and pixel 3c will display the properties of primitive P3.
A development of such a system uses a region-based processing scheme including a plurality of processors. As illustrated in Figure 1, the pixels 3 of the display device 1 are grouped into a number of regions, for example region 5. The region size is usually defined by the number of processors in the multiple processor system. One particular processing architecture could be a single instruction multiple data (SIMD) processing architecture. In a region-based architecture, the primitives are sorted to determine which regions of the display include which primitives, and are then subjected to "rasterisation" to break the primitives into fragments. The fragment information is stored for each primitive until all of the primitives have been rasterised. Usually, only the most recently determined fragment information is retained for each pixel. A shading process is then used to determine the pixel data to be stored in the frame buffer for display. Such a scheme has the advantage that the shading process can be used a minimized number of times, by shading multiple pixels at the same time (using one processor per pixel) and by waiting until a high proportion of pixels are ready to be shaded. Such a scheme is known as "deferred shading" because the shading process is carried out after the rasterisation process has been completed.
Such a scheme works well when all of the primitives are opaque, since deferring the shading operation enables large memory accesses (i.e. texture look-ups) to be deferred and performed in parallel. The result for opaque primitives will be as shown in Figure 4.
A technique which can be used to provide transparent or partly transparent primitives is known as "blending". In a blending process, the current pixel data stored in the frame buffer is combined with newly calculated pixel data relating to a new primitive. The combination is performed in a manner defined by the blending algorithm, in accordance with a so-called α-value which indicates the amount of blending that is to be achieved; for example, an α-value of 0.5 indicates that the result of the blend is to be half existing colour and half new colour. Blending occurs after the shading process. In the single processor case, blending is performed immediately following the shading process for each pixel. The pixel data is blended in the order in which the associated primitives are output from the host system.
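A minimal sketch of such a blend, assuming conventional alpha blending (the exact blend algorithm is defined by the application, so this is only one possibility):

```python
def blend(existing, new, alpha):
    """Combine the current frame buffer colour with newly shaded pixel data."""
    return tuple((1.0 - alpha) * e + alpha * n for e, n in zip(existing, new))

# With an alpha value of 0.5 the result is half existing colour and half new colour:
print(blend((0.0, 0.0, 1.0), (1.0, 0.0, 0.0), 0.5))   # -> (0.5, 0.0, 0.5)
```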
Figure 5 illustrates the calculated frame buffer values for the primitives of Figures 2 and 3, where primitive P1 is transparent, P2 is opaque and P3 is transparent. Pixel 3a displays the background viewed through primitive P1, pixel 3b displays P2, and pixel 3c displays P2 viewed through P3.
In the region-based architecture, it has not been practical to defer blending with the deferred shading process because of the requirement to store large amounts of data relating to all of the primitives occurring at a pixel, regardless of whether those primitives are visible or not. This is necessary because a blended primitive can have an effect on the final values of the pixel. In such a case, the shading and blending processes must be carried out for a pixel as soon as a blended primitive is encountered. This results in low utilisation of a multi-processor design, since, on average, a single blended primitive is likely to cover only a small number of pixels, and so the shading and blending processes must be carried out even though only a small number of the available processors have the required data. In addition, if shading and blending were to be performed for each primitive, many operations would be unnecessary due to overlapping primitives at a pixel.
Because of these problems, deferred shading for images including blended primitives has not been implemented for region-based, multiple-processor graphics processing architectures.
It is therefore desirable to provide a graphics processing system which can defer blending and shading operations in order to provide higher performance and faster computation time.
SUMMARY OF THE PRESENT INVENTION
According to the present invention there is provided a method of processing data representing images to be displayed on a device having a plurality of pixels, the method comprising, for at least one pixel of the device:
a) defining a data queue having a predetermined number of locations therein, and assigning one of the locations for storing only data relating to opaque images;
b) defining an opaque depth value indicating the depth value of the most visible opaque primitive to be displayed at the pixel;
c) receiving fragment information belonging to an image to be displayed by the pixel, the fragment information including fragment depth information;
d) determining whether the fragment information relates to a visible image with respect to the opaque depth value, and discarding the fragment information if it does not relate to such a visible image;
e) storing the fragment information in the queue at a location corresponding to the fragment depth information;
f) determining whether the fragment information belongs to an opaque image or to a transparent image;
g) if the fragment information relates to an opaque image:
i) clearing the queue locations relating to fragment depths behind the fragment depth; and
ii) updating the opaque depth value to equal the fragment depth;
h) repeating steps c) to g) until all of the locations of the data queue contain fragment information, or until no further fragment information is available; and
i) processing in turn the fragment information stored in the locations of the queue to produce respective pixel display values.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a schematic diagram illustrating a display device;
Figures 2 and 3 are schematic diagrams illustrating graphical primitives and the device of Figure 1;
Figures 4 and 5 illustrate calculated frame buffers for non-blended and blended graphics primitives respectively;
Figure 6 illustrates a graphics processing system;
Figure 7 illustrates part of the system of Figure 6;
Figure 8 illustrates a method of processing data;
Figures 9 to 12 illustrate one example of processing using the method of Figure 8;
Figures 13 to 16 illustrate another example of processing using the method of Figure 8;
Figure 17 illustrates a second method;
Figures 18 to 21 illustrate an example of processing using the method of Figure 17; and
Figure 22 illustrates a third method embodying the present invention.
DESCRIPTION OF THE PREFERRED EMBODIMENT
Figure 6 of the accompanying drawings illustrates a graphics processing system including a host system 11 connected to communicate with a graphics processing core 13. The graphics processing core 13 includes processor elements 15 and a controller 17. The processor elements 15 receive graphical primitive information from the host system 11 and control signals from the controller 17. The processor elements 15 operate to process the graphical primitive data in accordance with instructions from the controller 17 and to output information relating to properties of a respective pixel. Figure 7 illustrates one of the processor elements 15 of Figure 6 in more detail. The processor element 15 includes a processor unit 151 and a memory unit 152. The processor unit 151 may, for example, include an arithmetic logic unit, and operates to process the data supplied by the host system; the memory unit 152 is used as a local data storage area by the processor unit 151.
A method of processing graphical data will now be described with reference to Figures 6 to 12. As illustrated in Figure 9, a respective data queue 19a, 19b, 19c is defined for each pixel 3a, 3b, 3c (Figure 2). Each queue has a plurality of locations and acts as a first-in-first-out queue. Each queue is preferably defined within the processor memory unit 152, although it could be implemented as a dedicated FIFO buffer or other appropriate device.
Graphical primitive information is received from the host system 11 a single primitive at a time. A rasterisation and sorting process identifies the pixels that are included in the primitive and divides the primitive into fragments, one fragment per pixel. When all of the primitives have been rasterised and sorted, the pixel attribute calculation can begin. Such rasterisation and sorting processes are well known in the art.
Figure 8 shows a flow chart of a first method. At step A the queue for each pixel is defined, and two depth values are defined. A first depth value, the "opaque depth value" (ODV), indicates the current depth of the most visible opaque fragment at the pixel, and a second value, the "transparent depth value" (TDV), relates to the most visible transparent fragment at the pixel.
Fragment data is received at step B, and its depth value is compared with the current ODV. If the fragment is found not to be visible (i.e. it is behind the current opaque depth value), then the data is discarded. If the fragment is visible (i.e. not behind the opaque depth value), then the fragment is tested to see whether it is opaque (step D). If the fragment is opaque, then the opaque depth value (ODV) is updated to equal the fragment depth value. The fragment is then tested to determine whether it is behind the current transparent depth value (step F). If it is not (i.e. it is in front of the TDV), then the queue for that pixel is cleared (step G) and the TDV is updated to equal the fragment depth value (step H). Processing then moves to step K, in which the fragment data is stored in the next available queue location for the pixel. If the opaque fragment is determined to be behind the transparent depth value, then the queue is not cleared, but the fragment data is stored in the next available queue location.
For non-opaque (transparent) fragments determined at step D, the fragment depth value is again tested to determine whether the new fragment is behind the current transparent depth value (step I). If the new depth value is behind the current transparent depth value, then the fragment data is stored in the next available queue location. However, if the new depth value is in front of the transparent depth value, the transparent depth value is updated (step J) before the fragment data is stored in the next available queue location (step K).
The process continues whilst there is more fragment data available (step L). When no further fragment data is available, the queue contents are sorted into depth order (step M), with any fragment data behind the frontmost opaque value being discarded. When the queue has been sorted into depth order, the shading and blending of the queue contents (step N) can be undertaken in order of queue location. The first location contents are shaded and blended with the frame buffer contents to produce updated frame buffer contents. The second queue location contents can then be shaded and blended in turn. This shading and blending continues until the queue contents are all shaded.
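The flow of Figure 8 can be summarised in the following sketch, which assumes the same convention as the examples (higher depth values are closer to the viewer) and redefines the illustrative fragment fields so the snippet stands alone; the class name and the shade/blend callbacks are assumptions for illustration, not the patent's implementation.

```python
from collections import namedtuple

# Same illustrative fields as the earlier Fragment sketch.
Fragment = namedtuple("Fragment", "colour alpha depth is_opaque")

class PixelQueue:
    """One first-in-first-out fragment queue per pixel, with an ODV and a TDV."""

    def __init__(self, size):
        self.size = size       # predetermined number of queue locations
        self.slots = []        # FIFO queue of stored fragments
        self.odv = 0.0         # opaque depth value: most visible opaque fragment so far
        self.tdv = 0.0         # transparent depth value: most visible transparent fragment so far

    def receive(self, frag):
        """Steps B to K: accept one rasterised fragment for this pixel."""
        if frag.depth < self.odv:             # behind the opaque depth value -> discard
            return
        if frag.is_opaque:                    # step D: opaque fragment
            self.odv = frag.depth             # update the opaque depth value
            if frag.depth >= self.tdv:        # step F: not behind the transparent depth value
                self.slots.clear()            # step G: clear the queue for this pixel
                self.tdv = frag.depth         # step H: update the transparent depth value
        elif frag.depth >= self.tdv:          # steps I/J: new frontmost transparent fragment
            self.tdv = frag.depth
        if len(self.slots) < self.size:       # step K: store in the next available location
            self.slots.append(frag)           # (a full queue would trigger an early shading pass)

    def resolve(self, background, shade, blend):
        """Steps M and N: sort into depth order, drop hidden fragments, shade and blend."""
        visible = [f for f in self.slots if f.depth >= self.odv]
        visible.sort(key=lambda f: f.depth)                  # rearmost fragment first
        colour = background
        for frag in visible:
            colour = blend(colour, shade(frag), frag.alpha)  # deferred shading and blending
        return colour

# Hypothetical use for pixel 3c of the first example below (P1 transparent, P2 opaque,
# P3 transparent, arriving in depth order): P1 is stored, the queue is cleared when the
# opaque P2 arrives, and P3 is then stored, leaving (P2, P3) with no sorting needed.
q = PixelQueue(size=4)
for f in (Fragment((1.0, 0.0, 0.0), 0.5, 1.0, False),    # P1: transparent, rearmost
          Fragment((0.0, 1.0, 0.0), 1.0, 2.0, True),     # P2: opaque
          Fragment((0.0, 0.0, 1.0), 0.5, 3.0, False)):   # P3: transparent, frontmost
    q.receive(f)
print(q.resolve(background=(0.0, 0.0, 0.0),
                shade=lambda frag: frag.colour,
                blend=lambda old, new, a: tuple((1 - a) * o + a * n
                                                for o, n in zip(old, new))))
# -> (0.0, 0.5, 0.5): P2 viewed through the half-transparent P3, as in Figure 5.
```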
The method of Figure 8 will now be illustrated using the primitives shown in Figure 2. In Figure 2, the first primitive P1 is a transparent primitive, the second primitive P2 is an opaque primitive, and the third primitive P3 is a transparent primitive. It is assumed in this first example that the primitive data arrives in the order P1, P2 and P3.
The queues and depth values are initialised as shown in Figure 9, and the data relating to P1 is received. Since P1 is in front of the current opaque depth value (0), is not opaque and is in front of the current transparent depth value, the P1 fragment data is entered in the first location of each queue and the transparent depth value for each pixel is updated (Figure 10).
Data concerning primitive P2 is then received. Since P2 is opaque, is in front of the current opaque depth, and is in front of the current transparent depth, queues 19b and 19c relating to pixels 3b and 3c are cleared and the P2 fragment data is entered in the first queue location. The opaque depth value and the transparent depth value for each queue are updated in line with P2. The result is shown in Figure 11.
Data concerning primitive P3 is then received. Since P3 is a transparent primitive which is in front of the current opaque and transparent depth values, its data is entered into the next available location for queue 19c (the second location) and the transparent depth value for that queue is updated appropriately. No change is made to the opaque depth value, since the third primitive P3 is a transparent primitive. The final queue contents and depth values are shown in Figure 12. It will be noted that in this first example, since the primitive information is supplied in the correct depth order, no sorting of the queue is required.
A second example of the results of the method of Figure 8 will now be explained using the primitives shown in Figure 2. However, in this second example the primitives arrive in the order P2, P1, P3. In this example, P1 is an opaque primitive, and P2 and P3 are transparent primitives. The queues and depth values are once again initialised, and the primitive information relating to P2 is received. Since P2 is the first primitive to be received, its data is loaded into the first locations of queues 19b and 19c. The transparent depth values for both queues are updated to the second primitive value, but the opaque depth values are not, since the primitive P2 is transparent.
Data relating to primitive P1 is then received and, for the first queue (19a), is entered in the first location, and the opaque and transparent depth values are updated appropriately. For the remaining queues 19b and 19c, since P1 is an opaque primitive which has a depth value below the current transparent depth value for those pixels, the data relating to P1 is loaded into the next available (second) location in each queue. The opaque depth value for each queue is updated to be in line with the first primitive, but the transparent depth value remains at the P2 value, since P2 is in front of P1.
Data relating to primitive P3 is then received, and for queue 19c the primitive is visible (i.e. in front of the current opaque depth value), and so, since it is a transparent primitive, its data is loaded into the next available queue location (location 3). The transparent depth value is updated to the P3 value, but the opaque depth value remains at the P1 value. Since there are no further fragments available to be processed, the queues are sorted into depth order, with the lowest depth value primitive occupying the first location in the queue. The results of the sorting process can be seen in Figure 16.
In this way, shading and blending of fragment information can be deferred so that the large texture look-ups and other calculations which are required for shading and blending do not impact on the processing speed of the rasterisation process. In addition, the shading and blending is deferred until a large number of the processor units 15 are able to take part in the shading operation. This is particularly important in a single instruction multiple data (SIMD) processor architecture in which all processors process the same instruction at the same time.
Another advantage of this method is that when the queues are not filled during rasterisation, the number of shade steps is minimized. Even where a queue is filled, the number of shade steps will probably be less than the number of rasterisation steps.
An alternative method is illustrated in Figure 17. The method of Figure 17 is identical to the method described with reference to Figure 8, with the exception that in the method of Figure 17 a specific location in each queue for each pixel is reserved for storing the most visible opaque primitive information. Thus, step K of Figure 8 is replaced by two steps, K1 and K2, in Figure 17. For an opaque fragment, the contents of the opaque location are replaced when that opaque fragment is visible, i.e. when it is in front of the current opaque depth value. If the new opaque fragment is in front of the current transparent depth value, then the queue is cleared, as before, and the transparent depth value is updated. For transparent fragments, the fragment data is stored in the next available queue location, but not in the opaque location.
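A sketch of this variant is given below, reusing the illustrative fragment fields from the earlier examples; the class name and the choice to model the reserved location as a separate attribute are assumptions made for illustration only.

```python
class ReservedOpaqueQueue:
    """Per-pixel queue in which one location is reserved for the most visible opaque fragment."""

    def __init__(self, size):
        self.opaque_slot = None    # reserved location for the most visible opaque fragment
        self.slots = []            # remaining locations, used for transparent fragments
        self.size = size - 1       # one of the predetermined locations is the opaque slot
        self.odv = 0.0             # opaque depth value
        self.tdv = 0.0             # transparent depth value

    def receive(self, frag):
        if frag.depth < self.odv:              # hidden behind the opaque depth value -> discard
            return
        if frag.is_opaque:
            self.opaque_slot = frag            # step K1: replace the reserved opaque location
            self.odv = frag.depth
            if frag.depth >= self.tdv:         # frontmost so far: clear the transparent locations
                self.slots.clear()
                self.tdv = frag.depth
        else:
            if frag.depth >= self.tdv:
                self.tdv = frag.depth
            if len(self.slots) < self.size:    # step K2: next available non-opaque location
                self.slots.append(frag)
```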
The method of Figure 17 will be illustrated with reference to the primitives of Figure 2, assuming that primitive P1 is an opaque primitive, primitive P2 is transparent and primitive P3 is opaque. It will also be assumed that the primitive data is received by the system in the correct order, i.e. P1 followed by P2 followed by P3. Figure 18 shows the initialised queues and depth values, with the first position in each queue reserved for the most visible opaque data. Data relating to primitive P1 is received and, since the primitive is opaque, its data is loaded into the opaque location of each queue. The opaque depth value and transparent depth values are updated to relate to this first primitive (Figure 19). Data relating to primitive P2 is received, and since this primitive is transparent the opaque location of each queue remains unchanged, and the P2 primitive data is loaded into the second location of queues 19b and 19c. The opaque depth values are unchanged, but the transparent depth values are updated to relate to the second primitive (Figure 20).
Data for primitive P3, which is an opaque primitive, causes the queue 19c to be cleared and the data for primitive P3 to be stored in the opaque location of that queue. The opaque depth value and transparent depth value are updated to relate to primitive P3, since primitive P3 is the most visible primitive (Figure 21).

The methods described with reference to Figures 8 and 17 can also be used for alpha-tested primitives, i.e. where the fragment depth value is uncertain, simply by treating all fragments (including opaque ones) as being transparent. The sort process can then be used to discard those fragments which are not visible.
Figure 22 illustrates a third method embodying the present invention. In step A of Figure 22, the queues are defined for each pixel, together with a single opaque depth value. Fragment data is received at step B, and if that fragment data is behind the current opaque depth value then it is discarded (step C), as before. In this third embodiment, visible data is then entered into the queue for a pixel at a location appropriate to its fragment depth value. If the fragment is opaque (step E), then the queue entries behind the new fragment data (i.e. with a lower depth value) are cleared (step F). The reception of fragment data continues if there are more fragments available (step G). As soon as all of the primitives have been processed, the queue contents are shaded and blended as before.
It will be appreciated that the method in accordance with Figure 22 avoids the need for post-rasterisation sorting, since the incoming fragments are effectively sorted as they are loaded into the queue. The queue location is chosen on the basis of the fragment depth value, rather than simply being the next available location, as in the other methods. The final queue for each pixel will contain an opaque primitive data location and a sorted list of transparent primitive data locations.
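A sketch of this third method, assuming the queue can be modelled as a depth-ordered list and using Python's bisect module to pick the location from the fragment depth (both are illustrative choices, not taken from the patent):

```python
import bisect

class SortedPixelQueue:
    """Per-pixel queue in which fragments are stored at a location given by their depth."""

    def __init__(self):
        self.depths = []    # fragment depths in ascending order (rearmost first)
        self.frags = []     # fragment data at the corresponding locations
        self.odv = 0.0      # single opaque depth value

    def receive(self, frag):
        if frag.depth < self.odv:                             # step C: hidden -> discard
            return
        pos = bisect.bisect_left(self.depths, frag.depth)     # location chosen from the depth value
        self.depths.insert(pos, frag.depth)
        self.frags.insert(pos, frag)
        if frag.is_opaque:                                    # steps E and F: clear entries behind it
            del self.depths[:pos]
            del self.frags[:pos]
            self.odv = frag.depth

# After all fragments are received, self.frags already holds the opaque fragment (if any)
# followed by the transparent fragments in depth order, ready for shading and blending
# with no further sort.
```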

Claims (1)

  1. A method of processing data representing images to be displayed on a device having a plurality of pixels, the method comprising, for at least one pixel of the device:
    a) defining a data queue having a predetermined number of locations therein, and assigning one of the locations for storing only data relating to opaque images;
    b) defining an opaque depth value indicating the depth value of the most visible opaque primitive to be displayed at the pixel;
    c) receiving fragment information belonging to an image to be displayed by the pixel, the fragment information including fragment depth information;
    d) determining whether the fragment information relates to a visible image with respect to the opaque depth value, and discarding the fragment information if it does not relate to such a visible image;
    e) storing the fragment information in the queue at a location corresponding to the fragment depth information;
    f) determining whether the fragment information belongs to an opaque image or to a transparent image;
    g) if the fragment information relates to an opaque image:
      i) clearing the queue locations relating to fragment depths behind the fragment depth; and
      ii) updating the opaque depth value to equal the fragment depth;
    h) repeating steps c) to g) until all of the locations of the data queue contain fragment information, or until no further fragment information is available; and
    i) processing in turn the fragment information stored in the locations of the queue to produce respective pixel display values.
    2. A method as claimed in claim 1, wherein the processing of the fragment information includes a shading step in which pixel data is derived from the fragment information.
    3. A method as claimed in claim 2, wherein the display includes a frame buffer for storing pixel display data, and wherein the processing of the fragment information includes blending derived pixel data with the frame buffer for fragments belonging to blended images.
    4. A method as claimed in claim 3, wherein the display includes a frame buffer for storing pixel display data, and wherein the processing of the fragment information includes replacing existing pixel display data stored in the frame buffer.
    5. An apparatus for processing data representing images to be displayed on a device having a plurality of pixels, the apparatus comprising processing means operable, for at least one pixel of the device, to:
    a) define a data queue having a predetermined number of locations therein, and assign one of the locations for storing only data relating to opaque images;
    b) define an opaque depth value indicating the depth value of the most visible opaque primitive to be displayed at the pixel;
    c) receive fragment information belonging to an image to be displayed by the pixel, the fragment information including fragment depth information;
    d) determine whether the fragment information relates to a visible image with respect to the opaque depth value, and discard the fragment information if it does not relate to such a visible image;
    e) store the fragment information in the queue at a location corresponding to the fragment depth information;
    f) determine whether the fragment information belongs to an opaque image or to a transparent image; and
    g) if the fragment information relates to an opaque image:
      i) clear the queue locations relating to fragment depths behind the fragment depth; and
      ii) update the opaque depth value to equal the fragment depth;
    h) the processing means being operable to repeat steps c) to g) until all of the locations of the data queue contain fragment information, or until no further fragment information is available, and to process in turn the fragment information stored in the locations of the queue to produce respective pixel display values.
    6. An apparatus as claimed in claim 5, wherein the processing of the fragment information includes a shading step in which pixel data is derived from the fragment information.
    7. An apparatus as claimed in claim 6, wherein the display includes a frame buffer for storing pixel display data, and wherein the processing of the fragment information includes blending derived pixel data with the frame buffer for fragments belonging to blended images.
    8. An apparatus as claimed in claim 6, wherein the display includes a frame buffer for storing pixel display data, and wherein the processing of the fragment information includes replacing existing pixel display data stored in the frame buffer.
GB0120840A 1999-06-28 2000-03-22 Processing graphical data Expired - Fee Related GB2362552B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB9915060A GB2355633A (en) 1999-06-28 1999-06-28 Processing graphical data
GB0006986A GB2352381B (en) 1999-06-28 2000-03-22 Processing graphical data

Publications (3)

Publication Number Publication Date
GB0120840D0 GB0120840D0 (en) 2001-10-17
GB2362552A true GB2362552A (en) 2001-11-21
GB2362552B GB2362552B (en) 2003-12-10

Family

ID=26243941

Family Applications (3)

Application Number Title Priority Date Filing Date
GB0120840A Expired - Fee Related GB2362552B (en) 1999-06-28 2000-03-22 Processing graphical data
GB0015678A Expired - Fee Related GB2356717B (en) 1999-06-28 2000-06-27 Data processing
GB0015766A Expired - Fee Related GB2356718B (en) 1999-06-28 2000-06-27 Data processing

Family Applications After (2)

Application Number Title Priority Date Filing Date
GB0015678A Expired - Fee Related GB2356717B (en) 1999-06-28 2000-06-27 Data processing
GB0015766A Expired - Fee Related GB2356718B (en) 1999-06-28 2000-06-27 Data processing

Country Status (1)

Country Link
GB (3) GB2362552B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11776672B1 (en) 2020-12-16 2023-10-03 Express Scripts Strategic Development, Inc. System and method for dynamically scoring data objects
US11862315B2 (en) 2020-12-16 2024-01-02 Express Scripts Strategic Development, Inc. System and method for natural language processing
US11423067B1 (en) 2020-12-16 2022-08-23 Express Scripts Strategic Development, Inc. System and method for identifying data object combinations

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5923333A (en) * 1997-01-06 1999-07-13 Hewlett Packard Company Fast alpha transparency rendering method
EP0984392A1 (en) * 1997-05-22 2000-03-08 Sega Enterprises, Ltd. Image processor and image processing method
GB2344039A (en) * 1998-09-10 2000-05-24 Sega Enterprises Kk Blending processing for overlapping translucent polygons

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0424618A3 (en) * 1989-10-24 1992-11-19 International Business Machines Corporation Input/output system
US5790879A (en) * 1994-06-15 1998-08-04 Wu; Chen-Mie Pipelined-systolic single-instruction stream multiple-data stream (SIMD) array processing with broadcasting control, and method of operating same

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5923333A (en) * 1997-01-06 1999-07-13 Hewlett Packard Company Fast alpha transparency rendering method
EP0984392A1 (en) * 1997-05-22 2000-03-08 Sega Enterprises, Ltd. Image processor and image processing method
GB2344039A (en) * 1998-09-10 2000-05-24 Sega Enterprises Kk Blending processing for overlapping translucent polygons

Also Published As

Publication number Publication date
GB2356717B (en) 2001-12-12
GB2356718A (en) 2001-05-30
GB2356717A (en) 2001-05-30
GB0015766D0 (en) 2000-08-16
GB0120840D0 (en) 2001-10-17
GB2362552B (en) 2003-12-10
GB2356718B (en) 2001-11-21
GB0015678D0 (en) 2000-08-16

Similar Documents

Publication Publication Date Title
US6664959B2 (en) Method and apparatus for culling in a graphics processor with deferred shading
US6614444B1 (en) Apparatus and method for fragment operations in a 3D-graphics pipeline
US10957082B2 (en) Method of and apparatus for processing graphics
US7042462B2 (en) Pixel cache, 3D graphics accelerator using the same, and method therefor
US6898692B1 (en) Method and apparatus for SIMD processing using multiple queues
US20200226828A1 (en) Method and System for Multisample Antialiasing
US20150221127A1 (en) Opacity Testing For Processing Primitives In A 3D Graphics Processing System
US20070139421A1 (en) Methods and systems for performance monitoring in a graphics processing unit
US20080100627A1 (en) Processing of 3-Dimensional Graphics
US6424345B1 (en) Binsorter triangle insertion optimization
EP3333805B1 (en) Removing or identifying overlapping fragments after z-culling
US6670955B1 (en) Method and system for sort independent alpha blending of graphic fragments
GB2526598A (en) Allocation of primitives to primitive blocks
US20150097831A1 (en) Early depth testing in graphics processing
US7616202B1 (en) Compaction of z-only samples
JPH09500462A (en) Computer graphics system with high performance multi-layer z-buffer
US20040196296A1 (en) Method and system for efficiently using fewer blending units for antialiasing
US20030002729A1 (en) System for processing overlapping data
US8068120B2 (en) Guard band clipping systems and methods
JPH06236176A (en) Method and apparatus for giving of tranaparency to raster image
EP3504684B1 (en) Hybrid render with preferred primitive batch binning and sorting
GB2362552A (en) Processing graphical data
US6268874B1 (en) State parser for a multi-stage graphics pipeline
GB2352381A (en) Processing graphical data
US11790479B2 (en) Primitive assembly and vertex shading of vertex attributes in graphics processing systems

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20101111 AND 20101117

PCNP Patent ceased through non-payment of renewal fee

Effective date: 20180322