GB2448717A - Three-dimensional rendering engine - Google Patents
- Publication number
- GB2448717A (application GB0707952A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- textures
- frame
- texture
- rendering
- animating
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
Abstract
A 3D rendering engine (in this case based on the Blinn and Newell algorithm) is made capable of rendering animated image textures by causing Frame Events that notify listeners when some or all of an image changes. Standard graphical components can also be added invisibly to the 3D container so that they, too, are drawn to off-screen images. Thus software components can be used as textures for 3D surfaces, hence creating a 3D windowing system. The image texture draws to a memory image object and, when a frame is completed, a frame event is propagated to the 3D container and a redraw scheduled. The Blinn and Newell algorithm is a fast integer approximation for determining the image-texture pixel from the screen position. The screen pixel is then set to the colour retrieved from the corresponding image-texture pixel. One application of the invention is displaying 3D menus.
Description
Method and apparatus for the efficient animation of textures based on images and graphical components

This invention relates to an efficient method and apparatus for animating textures based on images and graphical components, especially for 3D-windowing systems, that may be used either by applications or by operating-system window managers.
Graphical containers and window managers are well known computer concepts.
Graphical containers are software components that may contain other graphical components. Window managers are programs that manage the top-level windows of running applications.
Herein, graphical containers will be synonymous with both component containers and window managers, and, graphical components will be synonymous with both components and windows.
Normally, all graphical components are "laid-out" and displayed in 2D. That is, they are usually "flat" and rectangular, emulating a piece of paper on a desktop. In general, components and applications can overlap, and hence are normally overlaid on each other in "depth" order.
If the whole or part of a component or window requires a repaint, the application can be asked to do so via an external event (e.g. if partially exposed) or an internal event (e.g. the user inputting data). With 2D windowing systems, the size and shape of windows does not change according to depth and hence the X-Y co-ordinates in the application are basically the X-Y co-ordinates on the screen.
However, new windowing systems have been hypothesised whereby not only the size of a component or window may change but also its shape. The most interesting of these are 3D-windowing systems that emulate the three-dimensional effect whereby distant objects appear smaller than close ones.
Whenever the mapping of a component (or image) content to screen content is non-trivial, components and images are painted to "textures", which in turn are rendered to the screen. Textures are used as a convenient intermediate step to make screen rendering faster since some of the processing can be very complex and time consuming.
In conventional 3D rendering systems, the textures are basically static images and hence cannot be animated. However, with, for example, 3D-windowing systems, the textures will change with time.
With many of the hypothesised systems, changes in textures due to image or component changes are animated to the screen merely by re-rendering all of the textures to their locations on the screen at a fixed refresh or frame rate.
However, re-rendering the entire scene at say 10-30 frames per second is very inefficient, and even with the fastest algorithms and hardware, the number of polygons that can be rendered may be reduced considerably.
In order to understand the rendering process, it is necessary to have an understanding of the prior art since practicality is important. Normally, a re- projection involves depth ordering all of the surfaces and a re-rendering involves re-drawing each polygon, from the most distant surface to the nearest, to the screen.
There are several possible reasons for screen changes to occur, not all of which require the complete screen to be re-drawn. These are:
1) movement of the observer (3D)
2) a change in the "layout" of screen objects
3) a change in the content of screen objects
In general, the first two types of event will require a full re-projection and re-render; however, the third type of event can be handled much more efficiently than by merely re-rendering the entire screen at a fixed refresh rate.
In the third case, it is extremely inefficient to animate the entire scene. Most animated textures will be foreground surfaces and no re-projection is required.
Therefore only those parts of the affected surfaces need be re-rendered, when required.
According to the present invention there is provided an efficient method and apparatus for animating textures based on images and graphical components such that the amount of processing involved in rendering to the screen due to texture content change is minimised.
A specific embodiment shall now be described by way of example with reference to the accompanying figures and code, in which:
Figure 1 shows a typical scene with a running application in 3D.
Figure 2 shows the board layout for hardware acceleration.
Listing 1 is the top level 3D window.
Listing 2 is the Java 3D container.
Listing 3 is a basic (single colour) texture.
Listing 4 is an (animated) image texture.
Listing 5 is a component texture.
The example is (part of) a fully functioning 3D-windowing system which uses the very efficient method for animating textures. Changes only occur when required, and only those parts of the rendering required are performed.
3D objects are added to the "world" and the "observer" may move through that world in a manner similar to a 3D video game. The observer defines the position and line of sight of the user.
"Projection" comprises a transformation according to the observer. Then the world is reduced to a set of 3D polygons. They are then "clipped", since only parts of polygons may be visible. Finally they are depth-ordered (overlain back to front) ready for rendering.
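The depth-ordering step above can be sketched as a simple back-to-front sort, as in the painter's algorithm. This is an illustrative sketch, not code from the patent listings; the class and field names are hypothetical, and each polygon is reduced to a single mean depth for ordering.

```java
import java.util.*;

// Sketch (hypothetical names): depth-order projected polygons back-to-front
// so that nearer surfaces are painted over more distant ones.
public class DepthOrder {
    // A projected polygon, reduced to its mean depth for ordering purposes.
    static final class Poly {
        final String name;
        final double meanZ; // average distance from the observer
        Poly(String name, double meanZ) { this.name = name; this.meanZ = meanZ; }
    }

    // Sort most-distant first, as required by back-to-front rendering.
    static List<Poly> backToFront(List<Poly> polys) {
        List<Poly> sorted = new ArrayList<>(polys);
        sorted.sort(Comparator.comparingDouble((Poly p) -> p.meanZ).reversed());
        return sorted;
    }

    public static void main(String[] args) {
        List<Poly> scene = Arrays.asList(
            new Poly("menu", 2.0), new Poly("wall", 10.0), new Poly("window", 5.0));
        for (Poly p : backToFront(scene)) System.out.println(p.name);
        // prints: wall, window, menu (most distant first)
    }
}
```

A mean-depth sort is sufficient for the simple rectangular surfaces described here; intersecting polygons would need splitting first.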
"Rendering" comprises slicing each visible 3D polygon remaining (after the clipping and ordering) into horizontal trapeziums. Then each screen pixel is set to the colour defined either by the texture colour or, with images, by the equivalent texture pixel.
For this example, the Blinn and Newell algorithm is used, whereby a first-order approximation of the angle subtended at the eye by an object (most are rectangles) is taken. That is, arctan(x/z) ≈ x/z and arctan(y/z) ≈ y/z, so that sx ∝ x/z and sy ∝ y/z, and a linear interpolation is used to map 3D positions to 2D positions.
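The first-order projection described above can be sketched as follows. The focal scale and screen centre are illustrative assumptions, not values from the patent listings; only the use of x/z and y/z as the screen position follows the text.

```java
// Sketch of a first-order screen projection: for small angles
// arctan(x/z) ≈ x/z, so the screen position is proportional to x/z and y/z.
public class Project {
    static final double SCALE = 256.0;   // assumed focal-length/zoom factor
    static final int CX = 320, CY = 240; // assumed screen centre

    // Map a 3D point to integer screen coordinates using x/z and y/z.
    static int[] toScreen(double x, double y, double z) {
        return new int[] { CX + (int) (SCALE * x / z),
                           CY - (int) (SCALE * y / z) };
    }

    public static void main(String[] args) {
        int[] p = toScreen(1.0, 1.0, 4.0);
        System.out.println(p[0] + "," + p[1]); // prints: 384,176
    }
}
```

The integer truncation is deliberate: the patent describes a fast integer approximation rather than exact trigonometry.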
So far, this is basically how 3D games work, and the rendering is sufficiently fast and efficient for real applications. Re-projections and re-rendering are required if the observer moves or if the world changes.
However, with 3D games, the textures are static. Thus the concept of a frame event for animated images was introduced. On completion of each image frame, a frame event is fired which propagates up through the 3D object model to the Java 3D container. Every time an image frame is complete, the 3D container is notified.
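The frame-event mechanism can be sketched as an ordinary listener pattern. The interface and class names here are hypothetical, not taken from the patent's listings; they only illustrate the idea of a texture notifying the container when a frame completes.

```java
import java.util.*;

// Hypothetical sketch of the frame-event idea: an animated texture fires an
// event when a frame completes, and the listener (e.g. the 3D container)
// can then schedule a partial redraw.
public class FrameEvents {
    interface FrameListener { void frameCompleted(String textureId); }

    static class AnimatedTexture {
        private final List<FrameListener> listeners = new ArrayList<>();
        private final String id;
        AnimatedTexture(String id) { this.id = id; }
        void addFrameListener(FrameListener l) { listeners.add(l); }
        // Called when a new frame has been drawn into the off-screen buffer.
        void frameDone() { for (FrameListener l : listeners) l.frameCompleted(id); }
    }

    public static void main(String[] args) {
        AnimatedTexture tex = new AnimatedTexture("clock");
        tex.addFrameListener(id -> System.out.println("redraw scheduled for " + id));
        tex.frameDone(); // prints: redraw scheduled for clock
    }
}
```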
With a frame event, the minimum amount of rendering required to redraw the world is performed because a full re-projection and re-rendering is not required.
The most distant affected polygon is identified, and only that polygon and those nearer to the observer are re-rendered.
This method of merely re-rendering the most distant affected polygon and those "in front" is further enhanced by maintaining the minimum bounding 2D shape, so that the amount of subsequent 3D rendering (not yet implemented) and the amount of traditional 2D rendering (for example, double buffering) is minimised.
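The partial re-render rule above can be sketched as follows: given polygons already depth-ordered back-to-front, find the most distant polygon whose texture changed and re-render from there forward. Names are illustrative, not from the patent listings.

```java
import java.util.*;

// Sketch: select only the most distant affected polygon and those "in front"
// of it (i.e. later in back-to-front order) for re-rendering.
public class PartialRender {
    // Returns the back-to-front suffix that must be redrawn, or an empty
    // list when nothing changed.
    static <T> List<T> toRedraw(List<T> backToFront, Set<T> changed) {
        for (int i = 0; i < backToFront.size(); i++) {
            if (changed.contains(backToFront.get(i))) {
                return backToFront.subList(i, backToFront.size());
            }
        }
        return Collections.emptyList();
    }

    public static void main(String[] args) {
        List<String> order = Arrays.asList("wall", "window", "menu"); // far to near
        System.out.println(toRedraw(order, Collections.singleton("window")));
        // prints: [window, menu] — the distant wall is untouched
    }
}
```

Clipping the redraw further to the changed polygon's 2D bounding shape, as the text describes, would restrict not just which polygons are redrawn but also which pixels.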
Hence animated images can be used as textures. Images are painted to an off-screen texture buffer in a form that may be used by the standard rendering engine.
Also normal graphical components can be rendered to off-screen buffers so that a frame event can be caused by a change in component content. Both images and components may change at variable rates in which (some or all of) their content may change.
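Rendering an ordinary component to an off-screen buffer, as described above, can be sketched with standard AWT/Swing calls; the sizes and class name are illustrative, not from the patent listings.

```java
import java.awt.*;
import java.awt.image.BufferedImage;
import javax.swing.*;

// Sketch: paint an ordinary Swing component into an off-screen image,
// which can then serve as a texture for a 3D surface.
public class ComponentTexture {
    static BufferedImage paintToTexture(JComponent c, int w, int h) {
        c.setSize(w, h);
        c.doLayout();
        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = img.createGraphics();
        c.paint(g); // the component draws itself into the off-screen buffer
        g.dispose();
        return img;
    }

    public static void main(String[] args) {
        JLabel label = new JLabel("Hello 3D");
        BufferedImage tex = paintToTexture(label, 128, 32);
        System.out.println(tex.getWidth() + "x" + tex.getHeight()); // prints: 128x32
    }
}
```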
In summary, surfaces can be defined with animated image or component textures, and frame events occur only when there is a change in content (e.g. during text editing). This event driven rendering is much more efficient than redrawing the entire scene at a fixed refresh rate because only those parts of the scene are re-rendered when required.
The example is a Java simulation whereby any lightweight 2D component can be displayed in a 3D "world" without modification. The equivalent window manager would actually be slightly simpler, since the constraints of the Java runtime environment require that components be added and rendered invisibly to the 3D container. With a window manager, the top-level windows of applications can merely be rendered to off-screen images, and then projected and rendered in the same way.
Of course, the understanding of prior art is important since this method is designed for 3D hardware acceleration whereby most of the 3D processing is delegated from the computer's CPU to a video card.
With 3D video cards the polygon rendering code is stored in the EEPROM on the video card. They also have special circuitry and instruction sets designed specifically for 3D rendering.
Video cards could be adapted to handle animated images as textures whereby all of the processing could be on card. But since component textures are one of the best features of this method, it makes sense to specify common apparatus for both images and component textures.
In general, the generation of the texture content is non-trivial and hence should be performed by the computer's (generic) CPU. However, the rendering of textures should be delegated to the video card's (custom) CPU.
Frame events may be passed to the video card using a suitable signal or interrupt.
However the video card also requires "to see" the texture itself. This may be achieved by copying the (part of) the texture that changed across the bus.
However, the best apparatus should use shared memory between the motherboard and video card such that texture buffers do not need copying. Access to the shared memory should then be controlled by some form of mutex, in the normal way.
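The mutex-guarded shared buffer described above can be sketched in software as follows. All names are hypothetical illustrations; real hardware would use a driver-level mechanism rather than a Java lock, but the access discipline is the same.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: the producer (generic CPU) writes texture frames while the
// consumer (renderer) reads them, with a mutex guarding the shared buffer
// so that neither side sees a half-written frame.
public class SharedTexture {
    private final int[] pixels;
    private final ReentrantLock lock = new ReentrantLock();
    private long frameCount = 0;

    SharedTexture(int size) { this.pixels = new int[size]; }

    // Producer side: publish a completed frame under the lock.
    void writeFrame(int[] frame) {
        lock.lock();
        try {
            System.arraycopy(frame, 0, pixels, 0, pixels.length);
            frameCount++; // stands in for the "frame event" signal
        } finally { lock.unlock(); }
    }

    // Consumer side: take a consistent snapshot for rendering.
    int[] snapshot() {
        lock.lock();
        try { return pixels.clone(); } finally { lock.unlock(); }
    }

    long frames() {
        lock.lock();
        try { return frameCount; } finally { lock.unlock(); }
    }

    public static void main(String[] args) {
        SharedTexture st = new SharedTexture(4);
        st.writeFrame(new int[] { 9, 9, 9, 9 });
        System.out.println("frames published: " + st.frames()); // prints: frames published: 1
    }
}
```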
Many current 3D video cards have some form of shared memory already, however that is used for static images. In order to handle frame events of animated textures a firmware (EEPROM) upgrade and driver update will probably be required.
Much of the static texture rendering functionality (mid-level and low-level) should not require much modification, except that re-rendering should incorporate the "most distant" polygon and bounding shape functionality for maximum efficiency.
Claims (5)
1. A method and apparatus for animating textures based on images and graphical components such that the amount of processing involved in rendering to the screen due to texture content change is minimised.
2. A method and apparatus for animating textures as claimed in Claim 1 wherein frame events are generated whenever there is a change in content such that all texture viewers are informed.
3. A method and apparatus for animating textures as claimed in Claims 1 & 2 wherein only the most distant affected surface, and those "in front" or overlaying are re-rendered.
4. A method and apparatus for animating textures as claimed in Claims 1 & 2 wherein the bounding 2D shape is maintained such that only that part of the window need be redrawn.
5. A method and apparatus for the efficient animation of textures as claimed in Claims 1 & 4 wherein texture frames are drawn to an area of memory shared between the generic CPU and the custom processor device, and the frame event then takes the form of a signal or hardware interrupt between the processors.
5. A method and apparatus for animating textures as claimed in Claims 1 & 2 whereby an animated texture viewer may be a video card.
6. A method and apparatus for animating textures as claimed in Claims 1, 2 & 5 whereby animated textures may be painted into memory shared between the motherboard and the video card.
Amendments to the claims have been filed as follows
1. A method and apparatus for the efficient animation of textures based on images and graphical components in transform-based windowing systems, comprising "frame event generators" that fire an application frame event whenever a full or partial texture frame is ready; attached "renderers" determine the difference between event types so that with frame events a full transform is not performed and the scene is merely rendered.
2. A method and apparatus for the efficient animation of textures as claimed in Claim 1 wherein the "renderers" use the frame event to identify the changed texture and its bounds of change and render only (parts of) the "most distant" affected surface(s), and those "in front" or overlaying.
3. A method and apparatus for the efficient animation of textures as claimed in Claims 1 & 2 wherein the 2D bounding shape comprising the union of the affected (parts of the) surface(s) is calculated and used to define the bounds of rendering and/or 2D post-processing.
4. A method and apparatus for the efficient animation of textures as claimed in Claim 1 wherein the generation of animated image or graphical component output and frame events is performed on a generic CPU, but the transform and rendering are delegated to a custom processor device such as a video card.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB0707952A GB2448717B (en) | 2007-04-25 | 2007-04-25 | Method and apparatus for the efficient animation of textures based on images and graphical components |
Publications (3)
Publication Number | Publication Date |
---|---|
GB0707952D0 GB0707952D0 (en) | 2007-05-30 |
GB2448717A true GB2448717A (en) | 2008-10-29 |
GB2448717B GB2448717B (en) | 2012-09-19 |
Family
ID=38135382
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB0707952A Expired - Fee Related GB2448717B (en) | 2007-04-25 | 2007-04-25 | Method and apparatus for the efficient animation of textures based on images and graphical components |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2448717B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9621869B2 (en) | 2012-05-24 | 2017-04-11 | Sony Corporation | System and method for rendering affected pixels |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5903270A (en) * | 1997-04-15 | 1999-05-11 | Modacad, Inc. | Method and apparatus for mapping a two-dimensional texture onto a three-dimensional surface |
WO2000060444A1 (en) * | 1999-04-06 | 2000-10-12 | Microsoft Corporation | Method and apparatus for supporting two-dimensional windows in a three-dimensional environment |
US6229542B1 (en) * | 1998-07-10 | 2001-05-08 | Intel Corporation | Method and apparatus for managing windows in three dimensions in a two dimensional windowing system |
US6480200B1 (en) * | 2000-06-09 | 2002-11-12 | Hewlett-Packard Company | Method and apparatus for deferred texture validation on a multi-tasking computer |
US6538654B1 (en) * | 1998-12-24 | 2003-03-25 | B3D Inc. | System and method for optimizing 3D animation and textures |
US6597363B1 (en) * | 1998-08-20 | 2003-07-22 | Apple Computer, Inc. | Graphics processor with deferred shading |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10198822A (en) * | 1997-01-10 | 1998-07-31 | Sharp Corp | Image compositing device |
US20030151604A1 (en) * | 2001-11-21 | 2003-08-14 | Research Foundation Of State University Of New York | Volume rendering with contouring texture hulls |
US7450124B2 (en) * | 2005-03-18 | 2008-11-11 | Microsoft Corporation | Generating 2D transitions using a 3D model |
2007-04-25: GB application GB0707952A (patent GB2448717B), not active: Expired - Fee Related
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PCNP | Patent ceased through non-payment of renewal fee |
Effective date: 20210425 |