EP0753181A1 - Rendering 3-d scenes in computer graphics - Google Patents
Rendering 3-d scenes in computer graphics
- Publication number
- EP0753181A1 (application EP95913278A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- faces
- scanline
- active
- list
- edges
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/40—Hidden part removal
- G06T15/405—Hidden part removal using Z-buffer
Abstract
A method of rendering a 2-D image includes the steps of analysing surfaces facing a viewing direction into scanline sequences which represent continuous surfaces, checking depth values of those surfaces relative to a viewing position, and discarding without rendering those objects or surfaces lying behind a foremost surface. This has the effect of extending the scanline algorithm to reduce the amount of work spent managing and processing lists of faces by exploiting the fact that most 3-D scenes are constructed from continuous surfaces made up of adjoining faces.
Description
Rendering 3-D scenes in Computer Graphics

This invention relates to 3-D computer graphics. Converting the information relating to a 3-D image into a 2-D projection for a computer requires an assessment of which objects are visible and which are hidden by others. In doing this, it is conventional to analyse all surfaces of objects into smaller polygonal flat faces, often triangles, defined by the coordinates of those faces. The images for an animation (e.g. for computer games) have to be produced in real time, i.e. at a rate that gives the impression of fairly smooth movement. The current techniques are as follows:

Z-Buffer - A depth value is kept for each image pixel. Each face in the scene is rendered, and at each pixel the new depth is compared with that already in the image. If the new depth is nearer to the observer than the old, the image pixel and depth are updated.

Painter's algorithm - Each face in the scene is rendered into the image; the faces are visited in order from 'furthest' to 'nearest'. This order may be generated by sorting the faces at runtime, or by using a binary space partition that was calculated offline.

Scanline Z-Buffer - The image is traversed in scanline order. As each scanline is processed, the faces that intersect it are maintained in an active list. A set of depth values is maintained, corresponding to the pixels in a scanline. For each of the current active faces, the section that intersects the current scanline is rendered. At each pixel a depth test is made, and the image pixel is only updated if the new pixel is nearer.

Scanline - The image is traversed in scanline order. As each scanline is processed, the faces that intersect it are maintained in an active list, sorted along the horizontal axis by the first scanline pixel which they intersect. This sorted list is then processed to generate the sections of faces that are frontmost.
Each of these visible sections is then rendered into the output image.

These techniques suffer from various disadvantages:

Z-Buffer - A large amount of memory is consumed by maintaining a depth value per image pixel, and a test is performed per pixel to find out whether it is obscured.

Painter's Algorithm - Fully correct sorting is time consuming. An approximate sort can be used, but this leads to visual artifacts. Binary space partitions can be used to accelerate the sorting, at the cost of making some or all of the 3-D scene unchangeable.

Scanline Z-Buffer - Extra work is required to maintain the active lists.

Scanline - Extra work is required to maintain and sort lists of active faces.

All the above techniques suffer from the problem that the value for an image pixel may be generated several times, once for every face that covers that pixel; only the 'nearest' value will survive into the output image. If complex calculations are needed to generate a pixel's colour, the extra work can amount to a significant portion of the overall processing time.

The invention proposes a new technique designed to speed up processing while saving processing power: a method of rendering a 2-D image which includes the steps of analysing surfaces facing the camera into scanline sequences which represent continuous surfaces, checking depth values of those surfaces, and discarding without rendering those objects or surfaces lying behind a foremost surface. This has the effect of extending the scanline algorithm to reduce the amount of work spent managing and processing lists of faces by exploiting the fact that most 3-D scenes are constructed from continuous surfaces made up of adjoining faces.

The invention also extends to image generating apparatus for rendering images by the method herein disclosed. The apparatus includes the means necessary to carry out the described method steps and may be in hardware or software form or in any combination thereof.
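By way of illustration, the per-pixel cost of the conventional Z-buffer technique described above can be sketched as follows. This is a minimal sketch, not part of the disclosure; the data layout (faces as lists of covered pixels) and all names are illustrative assumptions.

```python
# Minimal Z-buffer sketch: one depth value per pixel, every face rasterised,
# a per-pixel depth comparison decides visibility. Obscured faces still cost
# one shading step and one test per covered pixel.
WIDTH, HEIGHT = 8, 4

def render_z_buffer(faces):
    """faces: list of (pixels, colour), where pixels is a list of (x, y, depth)."""
    image = [["bg"] * WIDTH for _ in range(HEIGHT)]
    zbuf = [[float("inf")] * WIDTH for _ in range(HEIGHT)]  # inf = far away
    tests = 0
    for pixels, colour in faces:
        for x, y, depth in pixels:
            tests += 1                      # one comparison per covered pixel
            if depth < zbuf[y][x]:          # nearer than what is stored?
                zbuf[y][x] = depth
                image[y][x] = colour
    return image, tests

# Two faces covering the same pixel: only the nearer colour survives,
# but both were still processed and tested.
far_face = ([(2, 1, 9.0)], "red")
near_face = ([(2, 1, 3.0)], "blue")
image, tests = render_z_buffer([far_face, near_face])
```

The redundant work grows with depth complexity: every extra face over a pixel adds another shading step that the foremost surface will overwrite.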
These means will be apparent to the skilled reader from the teachings herein. In order that the invention shall be fully understood, a more detailed example of the technique will now be described.

A 3-D scene is fully described by defining for each object a series of faces (which together make up a surface), employing coordinates which define the vertices, edges connecting each two vertices, and faces formed within a set of connected edges.

As a first step, the faces are examined to see whether the camera is in front of or behind each face. If behind, then the face is turned away from the camera on the back of the object and can be ignored.

The next step is to look in turn at all the faces which are facing the camera. One imagines going around the edges of each face once with a pen, keeping a count of how many times one passes over each edge in doing so. Starting from zero, any edge at the silhouette of an object will accumulate a count of 1; an edge between two faces will count 2. Moreover, each edge is marked to say which face is to the right or left of it.

Using this information, two lists are built up for each scanline. A first list identifies those visible silhouette edges which become active on that scanline; the second lists all other edges that become active.

Now the scene is considered scanline by scanline. A third list is prepared of active surfaces (i.e. sequences of faces). As each new lefthand silhouette edge becomes active on the scanline, an active surface is logged. As each other active edge is noted from the second list, it is added to one of the existing active surfaces in the third list by updating the adjoining faces to indicate that this edge is now their neighbour. Thus, each active surface (without regard to depth) is enumerated by starting at the lefthand silhouette edge and following the neighbour references between edges and faces until the righthand edge is reached. This active list of surfaces is now processed to find the visible segments.
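The edge-counting step described above can be sketched as follows. This is a minimal illustration under the assumption that faces are given as tuples of vertex indices; the function and variable names are not taken from the disclosure.

```python
# Walk the boundary of each front-facing face once, counting how often each
# undirected edge is visited. Edges seen once lie on the silhouette of the
# surface; edges seen twice are interior edges shared by two adjoining faces.
from collections import Counter

def classify_edges(front_faces):
    counts = Counter()
    for face in front_faces:
        n = len(face)
        for i in range(n):
            a, b = face[i], face[(i + 1) % n]      # consecutive vertices
            counts[(min(a, b), max(a, b))] += 1    # undirected edge key
    silhouette = {e for e, c in counts.items() if c == 1}
    interior = {e for e, c in counts.items() if c == 2}
    return silhouette, interior

# Two triangles sharing edge (1, 2): that edge is interior; the rest
# form the silhouette of the combined surface.
faces = [(0, 1, 2), (1, 3, 2)]
sil, interior = classify_edges(faces)
```

The silhouette set feeds the first per-scanline list, and the interior set the second, so whole surfaces can later be followed from a lefthand silhouette edge across neighbour references to the righthand edge.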
The scanline is broken up into runs (groups of pixels separated by left or right hand edges of surfaces). These runs are enumerated in order. During this process, an active list of the surfaces that span the run is maintained, sorted by nearest depth. If there are no surfaces in the list for a run, then that section of the scanline is the background colour. If the furthest depth of the first surface is less than the nearest depth of every further surface on the list, then that surface is rendered and the next run processed.

This technique, although it requires some more memory, makes it possible to perform bulk rejection of obscured parts of a scene, based on accepting whole surfaces formed by linked sequences of faces. Thus, considerably less processing is required than with simple scanline techniques.

There are complex areas of a scene which do not lend themselves to this simplified treatment, for example where a run has two or more surfaces whose depths overlap. Such areas may have to be treated in more detail by conventional techniques. These areas will require more processing than usual and be slower, but they are usually a minority of the scene and the savings on the majority are greater.

The disclosures in British patent application no. 9406509.1, from which this application claims priority, and in the abstract accompanying this application are incorporated herein by reference.
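The per-run decision can be sketched as follows. The sketch implements only the non-overlapping case in which bulk rejection applies, consistent with the overlap discussion above; surfaces are assumed to be given as (name, nearest depth, furthest depth) triples, and all names are illustrative.

```python
# Resolve one run of a scanline. Surfaces spanning the run are sorted by
# nearest depth; if the frontmost surface's depth range lies entirely in
# front of every other surface, the others are rejected wholesale without
# rendering a single pixel of them.
def resolve_run(surfaces):
    """surfaces: list of (name, nearest_depth, furthest_depth) for one run.
    Returns the name of the surface to render, "background" for an empty
    run, or "overlap" when the run needs conventional per-pixel treatment."""
    if not surfaces:
        return "background"
    ordered = sorted(surfaces, key=lambda s: s[1])   # sort by nearest depth
    front = ordered[0]
    # Bulk rejection: frontmost furthest depth is still nearer than every
    # other surface's nearest depth, so all further surfaces are obscured.
    if all(front[2] < other[1] for other in ordered[1:]):
        return front[0]
    return "overlap"

assert resolve_run([("floor", 5.0, 9.0), ("wall", 1.0, 2.0)]) == "wall"
```

Only runs that return "overlap" fall back to per-pixel techniques, which is why the savings dominate in typical scenes built from a few large continuous surfaces.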
Claims
1. A method of rendering a 2-D image including the steps of analysing surfaces facing a viewing direction into scanline sequences which represent continuous surfaces, checking depth values of those surfaces relative to a viewing position and discarding without rendering those objects or surfaces lying behind a foremost surface.
2. A method according to claim 1, including the steps of obtaining coordinates of vertices of an or each object of the image and coordinates of edges connecting each two vertices, and defining faces within a set of connected edges, the faces forming one or more of said surfaces.
3. A method according to claim 2, comprising the steps of examining each face to determine whether the face is in front of or behind the viewing position relative to the viewing direction, and ignoring each face behind the viewing position.
4. A method according to claim 2 or 3, comprising the steps of scanning the image data by line and generating first and second lists for each scanline, the first list identifying those visible silhouette edges at the edge of an object which become active on that scanline; the second list identifying all other edges that become active.
5. A method according to claim 4, comprising the step of identifying as first silhouette edges the silhouette edges of an object first reached during scanning along a scanline.
6. A method according to claim 5, comprising the steps of generating a third list of active faces by logging an active face as each new first silhouette edge becomes active on the scanline and, as other active edges are noted from the second list, associating each of said other active edges with one of the existing active faces in the third list by updating the adjoining faces to indicate that this edge is now their neighbour.
7. A method according to claim 6, comprising the step of processing the active list of faces to find the visible segments by dividing the scanline into runs, and enumerating the runs in order so as to sort an active list of the faces that span each run by nearest depth relative to the viewing position.
8. A method according to claim 7, comprising the step of, when there are no faces in a list for a run, generating a background image or colour for the associated section of the scanline.
9. A method according to claim 7 or 8, wherein when the furthest depth of the first face is greater than the nearest depth of any further faces of the list, any such further face is rendered, and the next run processed.
10. Image generating apparatus operative to render a 2-D image by a method according to any preceding claim.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB9406509 | 1994-03-31 | ||
GB9406509A GB9406509D0 (en) | 1994-03-31 | 1994-03-31 | Rendering 3-d scenes in computer graphics |
PCT/GB1995/000746 WO1995027263A1 (en) | 1994-03-31 | 1995-03-31 | Rendering 3-d scenes in computer graphics |
Publications (1)
Publication Number | Publication Date |
---|---|
EP0753181A1 true EP0753181A1 (en) | 1997-01-15 |
Family
ID=10752896
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP95913278A Withdrawn EP0753181A1 (en) | 1994-03-31 | 1995-03-31 | Rendering 3-d scenes in computer graphics |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP0753181A1 (en) |
JP (1) | JPH09511083A (en) |
CA (1) | CA2185906A1 (en) |
GB (1) | GB9406509D0 (en) |
WO (1) | WO1995027263A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5596686A (en) | 1994-04-21 | 1997-01-21 | Silicon Engines, Inc. | Method and apparatus for simultaneous parallel query graphics rendering Z-coordinate buffer |
EP0840915A4 (en) * | 1995-07-26 | 1998-11-04 | Raycer Inc | Method and apparatus for span sorting rendering system |
US6410643B1 (en) | 2000-03-09 | 2002-06-25 | Surmodics, Inc. | Solid phase synthesis method and reagent |
US7583263B2 (en) * | 2003-12-09 | 2009-09-01 | Siemens Product Lifecycle Management Software Inc. | System and method for transparency rendering |
US9298311B2 (en) | 2005-06-23 | 2016-03-29 | Apple Inc. | Trackpad sensitivity compensation |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
SU834692A1 (en) * | 1977-10-19 | 1981-05-30 | Институт Автоматики И Электрометриисо Ah Cccp | Device for output of halftone images of three-dimensional objects onto television receiver screen |
US4825391A (en) * | 1987-07-20 | 1989-04-25 | General Electric Company | Depth buffer priority processing for real time computer image generating systems |
JPH07122908B2 (en) * | 1991-03-12 | 1995-12-25 | インターナショナル・ビジネス・マシーンズ・コーポレイション | Apparatus and method for generating displayable information representing a three-dimensional solid object |
GB2259432A (en) * | 1991-09-06 | 1993-03-10 | Canon Res Ct Europe Ltd | Three dimensional graphics processing |
-
1994
- 1994-03-31 GB GB9406509A patent/GB9406509D0/en active Pending
-
1995
- 1995-03-31 JP JP7525511A patent/JPH09511083A/en active Pending
- 1995-03-31 EP EP95913278A patent/EP0753181A1/en not_active Withdrawn
- 1995-03-31 WO PCT/GB1995/000746 patent/WO1995027263A1/en not_active Application Discontinuation
- 1995-03-31 CA CA002185906A patent/CA2185906A1/en not_active Abandoned
Non-Patent Citations (1)
Title |
---|
See references of WO9527263A1 * |
Also Published As
Publication number | Publication date |
---|---|
CA2185906A1 (en) | 1995-10-12 |
WO1995027263A1 (en) | 1995-10-12 |
GB9406509D0 (en) | 1994-05-25 |
JPH09511083A (en) | 1997-11-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US5596685A (en) | Ray tracing method and apparatus for projecting rays through an object represented by a set of infinite surfaces | |
US6529207B1 (en) | Identifying silhouette edges of objects to apply anti-aliasing | |
US5577175A (en) | 3-dimensional animation generating apparatus and a method for generating a 3-dimensional animation | |
Raskar et al. | Image precision silhouette edges | |
EP1640915B1 (en) | Method and system for providing a volumetric representation of a 3-dimensional object | |
US6204856B1 (en) | Attribute interpolation in 3D graphics | |
US6774910B2 (en) | Method and system for providing implicit edge antialiasing | |
US7126600B1 (en) | Method and apparatus for high speed block mode triangle rendering | |
JPH08251480A (en) | Method and apparatus for processing video signal | |
EP0568358B1 (en) | Method and apparatus for filling an image | |
EP0910044A3 (en) | Method and apparatus for compositing colors of images with memory constraints | |
DE602004012341T2 (en) | Method and system for providing a volume rendering of a three-dimensional object | |
US6906715B1 (en) | Shading and texturing 3-dimensional computer generated images | |
US6501481B1 (en) | Attribute interpolation in 3D graphics | |
EP0753181A1 (en) | Rendering 3-d scenes in computer graphics | |
Schollmeyer et al. | Efficient and anti-aliased trimming for rendering large NURBS models | |
US6839058B1 (en) | Depth sorting for use in 3-dimensional computer shading and texturing systems | |
EP2249312A1 (en) | Layered-depth generation of images for 3D multiview display devices | |
EP0725365B1 (en) | Method and apparatus for shading three-dimensional images | |
EP0753182B1 (en) | Texture mapping in 3-d computer graphics | |
JP2532055B2 (en) | How to create a time series frame | |
Chow et al. | Fast Display of Articulated Characters using Impostors. | |
Kurka | Image-BASED Occluder Selection | |
KIM et al. | Fast Image Generation Method for Animation | |
Strasser et al. | PROOF II: A Scalable Architecture for Future Highperformance Graphics Systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PUAI | Public reference made under article 153(3) EPC to a published international application that has entered the European phase | Free format text: ORIGINAL CODE: 0009012 |
| 17P | Request for examination filed | Effective date: 19960923 |
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): BE DE ES FR GB IT NL SE |
| 17Q | First examination report despatched | Effective date: 19981103 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
| 18D | Application deemed to be withdrawn | Effective date: 19990714 |