EP1579391A4 - Unified surface model for image based and geometric scene composition - Google Patents

Unified surface model for image based and geometric scene composition

Info

Publication number
EP1579391A4
Authority
EP
European Patent Office
Prior art keywords
image
scene
rendering
computer
computer system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP02808109A
Other languages
German (de)
English (en)
Other versions
EP1579391A1 (fr)
Inventor
Christopher F Marrin
Robert K Myers
James R Kent
Peter G Broadwell
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Electronics Inc
Original Assignee
Sony Electronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Electronics Inc filed Critical Sony Electronics Inc
Publication of EP1579391A1
Publication of EP1579391A4
Legal status: Withdrawn

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/04 - Texture mapping
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 - Indexing scheme for image generation or computer graphics
    • G06T2210/61 - Scene description

Definitions

  • This invention relates generally to a modeling language for 3D graphics and, more particularly, to embedding images in a scene.
  • VRML (Virtual Reality Modeling Language) is a conventional modeling language that defines most of the commonly used semantics found in conventional 3D applications, such as hierarchical transformations, light sources, viewpoints, geometry, animation, fog, material properties, and texture mapping.
  • Texture mapping processes are commonly used to apply externally supplied image data to a given geometry within the scene.
  • VRML allows one to apply externally supplied image data, externally supplied video data or externally supplied pixel data to a surface.
  • VRML does not allow the use of a rendered scene as an image to be texture mapped declaratively into another scene.
  • In a declarative description, the semantics required to attain the desired outcome are implicit, and therefore a description of the outcome is sufficient to obtain it.
  • It is desirable to declaratively combine any two surfaces to which image data has been applied to produce a third surface. It is also desirable to declaratively re-render the image data applied to a surface to reflect the current state of the image.
  • 3D scenes are rendered monolithically, producing a final frame rate to the viewer that is governed by the worst-case performance determined by scene complexity or texture swapping.
  • It is desirable to decouple rendering rates so that scene complexity or texture swapping does not determine the overall frame rate; with decoupled rendering rates, the viewing experience would be more television-like and not a web-page-like viewing experience.
  • The present invention provides a system and method for the real-time composition and presentation of a complex, dynamic, and interactive experience by means of an efficient declarative markup language. Using a construct called a Surface, authors can embed images or full-motion video data anywhere they would use a traditional texture map within their 3D scene.
  • Authors can also use the results of rendering one scene description as an image to be texture mapped into another scene.
  • The Surface allows the results of any rendering application to be used as a texture within the author's scene. This allows declarative rendering of nested scenes and rendering of scenes having component Surfaces with decoupled rendering rates.
  • Fig. 1A shows the basic architecture of Blendo.
  • Fig. 1B is a flow diagram illustrating flow of content through the Blendo engine.
  • Fig. 2A illustrates how two surfaces in a scene are rendered at different rendering rates.
  • Fig. 2B is a flow chart illustrating acts involved in rendering the two surfaces shown in Fig. 2A at different rendering rates.
  • Fig. 3A illustrates a nested scene.
  • Fig. 3B is a flow chart showing acts performed to render the nested scene of Fig. 3A.
  • Blendo is an exemplary embodiment of the present invention that allows temporal manipulation of media assets including control of animation and visible imagery, and cueing of audio media, video media, animation and event data to a media asset that is being played.
  • Fig. 1A shows basic Blendo architecture.
  • At the core of the Blendo architecture is a Core Runtime module 10 (Core hereafter) which presents various Application Programmer Interface (API hereafter) elements and the object model to a set of objects present in system 11.
  • API Application Programmer Interface
  • a file is parsed by parser 14 into a raw scene graph 16 and passed on to Core 10, where its objects are instantiated and a runtime scene graph is built.
  • the objects can be built-in objects 18, author defined objects 20, native objects 24, or the like.
  • the objects use a set of available managers 26 to obtain platform services 32. These platform services 32 include event handling, loading of assets, playing of media, and the like.
  • the objects use rendering layer 28 to compose intermediate or final images for display.
  • a page integration component 30 is used to interface Blendo to an external environment, such as an HTML or XML page.
  • Blendo contains a system object with references to the set of managers 26. Each manager 26 provides the set of APIs to control some aspect of system 11.
  • An event manager 26D provides access to incoming system events originated by user input or environmental events.
  • a load manager 26C facilitates the loading of Blendo files and native node implementations.
  • a media manager 26E provides the ability to load, control and play audio, image and video media assets.
  • a render manager 26G allows the creation and management of objects used to render scenes.
  • a scene manager 26A controls the scene graph.
  • a surface manager 26F allows the creation and management of surfaces onto which scene elements and other assets may be composited.
  • a thread manager 26B gives authors the ability to spawn and control threads and to communicate between them.
  • Fig. 1B illustrates, in a flow diagram, a conceptual description of the flow of content through a Blendo engine.
  • a presentation begins with a source which includes a file or stream 34 (Fig. 1A) of content being brought into parser 14 (Fig. 1A).
  • the source could be in a native VRML-like textual format, a native binary format, an XML based format, or the like.
  • the source is converted into raw scene graph 16 (Fig. 1A).
  • the raw scene graph 16 can represent the nodes, fields and other objects in the content, as well as field initialization values. It also can contain a description of object prototypes, external prototype references in the stream 34, and route statements.
  • the top level of raw scene graph 16 includes nodes, top level fields and functions, prototypes and routes contained in the file. Blendo allows fields and functions at the top level in addition to traditional elements. These are used to provide an interface to an external environment, such as an HTML page. They also provide the object interface when a stream 34 is used as the contents of an external prototype.
  • Each raw node includes a list of the fields initialized within its context.
  • Each raw field entry includes the name, type (if given) and data value(s) for that field.
  • Each data value includes a number, a string, a raw node, and/or a raw field that can represent an explicitly typed field value.
  • the prototypes are extracted from the top level of raw scene graph 16 (Fig. 1A) and used to populate the database of object prototypes accessible by this scene.
  • the raw scene graph 16 is then sent through a build traversal. During this traversal, each object is built (block 65), using the database of object prototypes.
  • each field in the scene is initialized. This is done by sending initial events to non-default fields of objects. Since the scene graph structure is achieved through the use of node fields, block 75 also constructs the scene hierarchy. Events are fired using an in-order traversal. The first node encountered enumerates the fields in the node. If a field is a node, that node is traversed first.
  • the author is allowed to add initialization logic (block 80) to prototyped objects to ensure that the node is fully initialized at call time.
  • the blocks described above produce a root scene.
  • the scene is delivered to the scene manager 26A (Fig. 1A) created for the scene.
  • the scene manager 26A is used to render and perform behavioral processing either implicitly or under author control.
  • a scene rendered by the scene manager 26A can be constructed using objects from the Blendo object hierarchy.
  • Objects may derive some of their functionality from their parent objects, and subsequently extend or modify their functionality.
  • At the base of the hierarchy is the Object.
  • the two main classes of objects derived from the Object are a Node and a Field. Nodes contain, among other things, a render method, which gets called as part of the render traversal.
  • the data properties of nodes are called fields.
  • Also included in the Blendo object hierarchy is a class of objects called Timing Objects, which are described in detail below.
  • the following code portions are for exemplary purposes. It should be noted that the line numbers in each code portion merely represent the line numbers for that particular code portion and do not represent the line numbers in the original source code.
  • a Surface Object is a node of type SurfaceNode.
  • a SurfaceNode class is the base class for all objects that describe a 2D image as an array of color, depth and opacity (alpha) values. SurfaceNodes are used primarily to provide an image to be used as a texture map. Derived from the SurfaceNode class are MovieSurface, ImageSurface, MatteSurface, PixelSurface and SceneSurface.
  • the following code portion illustrates the MovieSurface node. A description of each field in the node follows thereafter.
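  • The listing below is a minimal, illustrative sketch of the MovieSurface node interface in VRML-like syntax; the field access keywords, data types and default values shown are assumptions inferred from the field descriptions that follow.

        MovieSurface {
          field MFString url        []       # ordered list of candidate locations of the movie data
          field SFNode   timeBase   NULL     # node that provides timing information for the movie
          field SFTime   duration   0        # set by the node to the movie length in seconds once fetched
          field SFTime   loadTime   0        # time at which the movie data became available
          field SFString loadStatus "NONE"   # "NONE", "REQUESTED", "FAILED", "ABORTED" or "LOADED"
        }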
  • a MovieSurface node renders a movie on a surface by providing access to the sequence of images defining the movie.
  • the MovieSurface's TimedNode parent class determines which frame is rendered onto the surface at any one time. Movies can also be used as sources of audio.
  • the URL field provides a list of potential locations of the movie data for the surface. The list is ordered such that element 0 describes the preferred source of the data. If for any reason element 0 is unavailable, or in an unsupported format, the next element may be used.
  • the timeBase field specifies the node that is to provide the timing information for the movie.
  • the timeBase will provide the movie with the information needed to determine which frame of the movie to display on the surface at any given instant. If no timeBase is specified, the surface will display the first frame of the movie.
  • the duration field is set by the MovieSurface node to the length of the movie in seconds once the movie data has been fetched.
  • the loadTime and the loadStatus fields provide information from the MovieSurface node concerning the availability of the movie data.
  • LoadStatus has five possible values, "NONE”, “REQUESTED”, “FAILED”, “ABORTED”, and "LOADED”.
  • NONE is the initial state.
  • a "NONE" event is also sent if the node's url is cleared by either setting the number of values to 0 or setting the first URL string to the empty string. When this occurs, the pixels of the surface are set to black and opaque (i.e. color is 0,0,0 and transparency is 0).
  • a "REQUESTED” event is sent whenever a non-empty url value is set.
  • the pixels of the surface remain unchanged after a "REQUESTED” event.
  • “FAILED” is sent after a "REQUESTED” event if the movie loading did not succeed. This can happen, for example, if the URL refers to a non-existent file or if the file does not contain valid data. The pixels of the surface remain unchanged after a "FAILED” event.
  • a "LOADED" event is sent when the movie is ready to be displayed. It is followed by a loadTime event whose value matches the current time.
  • the frame of the movie indicated by the timeBase field is rendered onto the surface. If timeBase is NULL, the first frame of the movie is rendered onto the surface.
  • An ImageSurface node renders an image file onto a surface.
  • the URL field provides a list of potential locations of the image data for the surface. The list is ordered such that element 0 describes the most preferred source of the data. If for any reason element 0 is unavailable, or in an unsupported format, the next element may be used.
  • the loadTime and the loadStatus fields provide information from the ImageSurface node concerning the availability of the image data.
  • LoadStatus has five possible values, "NONE”, “REQUESTED”, “FAILED”, “ABORTED”, and "LOADED”.
  • NONE is the initial state. A "NONE" event is also sent if the node's URL is cleared by either setting the number of values to 0 or setting the first URL string to the empty string. When this occurs, the pixels of the surface are set to black and opaque (i.e. color is 0,0,0 and transparency is 0).
  • a "REQUESTED” event is sent whenever a non-empty URL value is set.
  • the pixels of the surface remain unchanged after a "REQUESTED” event.
  • "FAILED" is sent after a "REQUESTED" event if the image loading did not succeed. This can happen, for example, if the URL refers to a non-existent file or if the file does not contain valid data. The pixels of the surface remain unchanged after a "FAILED" event.
  • a "LOADED" event is sent when the image has been rendered onto the surface. It is followed by a loadTime event whose value matches the current time.
  • the MatteSurface node uses image compositing operations to combine the image data from surface 1 and surface2 onto a third surface.
  • the result of the compositing operation is computed at the resolution of surface2. If the size of surface1 differs from that of surface2, the image data on surface1 is zoomed up or down before performing the operation to make the size of surface1 equal to the size of surface2.
  • the surface1 and surface2 fields specify the two surfaces that provide the input image data for the compositing operation.
  • the operation field specifies the compositing function to perform on the two input surfaces. Possible operations are described below.
  • REPLACE_ALPHA overwrites the alpha channel A of surface2 with data from surface1. If surface1 has 1 component (grayscale intensity only), that component is used as the alpha (opacity) values. If surface1 has 2 or 4 components (grayscale intensity+alpha or RGBA), the alpha channel A is used to provide the alpha values. If surface1 has 3 components (RGB), the operation is undefined. This operation can be used to provide static or dynamic alpha masks for static or dynamic images. For example, a SceneSurface could render an animated James Bond character against a transparent background. The alpha component of this image could then be used as a mask shape for a video clip.
  • MULTIPLY_ALPHA is similar to REPLACE_ALPHA, except that the alpha values from surface1 are multiplied with the alpha values from surface2.
  • CROSS_FADE fades between two surfaces using a parameter value to control the percentage of each surface that is visible. This operation can dynamically fade between two static or dynamic images. By animating the parameter value (line 5) from 0 to 1, the image on surface1 fades into that of surface2.
  • BLEND combines the image data from surface1 and surface2 using the alpha channel from surface2 to control the blending percentage. This operation allows the alpha channel of surface2 to control the blending of the two images. By animating the alpha channel of surface2 by rendering a SceneSurface or playing a MovieSurface, you can produce a complex travelling matte effect. If R1, G1, B1, and A1 represent the red, green, blue, and alpha values of a pixel of surface1 and R2, G2, B2, and A2 represent the red, green, blue, and alpha values of the corresponding pixel of surface2, then the resulting values of the red, green, blue, and alpha components of that pixel are:
  • red = R1 * (1 - A2) + R2 * A2 (1)
  • green = G1 * (1 - A2) + G2 * A2 (2)
  • blue = B1 * (1 - A2) + B2 * A2 (3)
  • alpha = 1 (4)
  • the parameter field provides one or more floating point parameters that can alter the effect of the compositing function.
  • the specific interpretation of the parameter values depends upon which operation is specified.
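  • The following is a minimal sketch of the MatteSurface node interface in VRML-like syntax, followed by a hypothetical usage example; the types, defaults and asset URLs are illustrative assumptions.

        MatteSurface {
          field SFNode   surface1   NULL    # first input SurfaceNode
          field SFNode   surface2   NULL    # second input SurfaceNode; the result is computed at its resolution
          field SFString operation  ""      # "REPLACE_ALPHA", "MULTIPLY_ALPHA", "CROSS_FADE" or "BLEND"
          field MFFloat  parameter  [ 0 ]   # operation-dependent values, e.g. the CROSS_FADE amount
        }

        # Hypothetical usage: hold a cross-fade halfway between a movie and a still image.
        MatteSurface {
          surface1  MovieSurface { url [ "clip.mpg" ] }     # hypothetical URL
          surface2  ImageSurface { url [ "poster.png" ] }   # hypothetical URL
          operation "CROSS_FADE"
          parameter [ 0.5 ]                                 # 0 shows only surface1, 1 shows only surface2
        }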
  • a PixelSurface node renders an array of user-specified pixels onto a surface.
  • the image field describes the pixel data that is rendered onto the surface.
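  • A minimal sketch of the PixelSurface node interface in VRML-like syntax; the SFImage type and its empty default are assumptions.

        PixelSurface {
          field SFImage image 0 0 0   # width, height, number of components, then the pixel values
        }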
  • a SceneSurface node renders the specified children on a surface of the specified size.
  • the SceneSurface automatically re-renders itself to reflect the current state of its children.
  • the children field describes the ChildNodes to be rendered.
  • the children field describes an entire scene graph that is rendered independently of the scene graph that contains the SceneSurface node.
  • the width and height fields specify the size of the surface in pixels. For example, if width is 256 and height is 512, the surface contains a 256 x 512 array of pixel values.
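  • A minimal sketch of the SceneSurface node interface in VRML-like syntax; the types are assumptions, and the default size simply echoes the 256 x 512 example above.

        SceneSurface {
          field MFNode  children []    # an independent scene graph rendered onto this surface
          field SFInt32 width    256   # surface width in pixels
          field SFInt32 height   512   # surface height in pixels
        }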
  • the MovieSurface, ImageSurface, MatteSurface, PixelSurface & SceneSurface nodes are utilized in rendering a scene.
  • the output is mapped onto the display, the "top level Surface.”
  • the 3D rendered scene can generate its output onto a Surface using one of the above mentioned SurfaceNodes, where the output is available to be incorporated into a richer scene composition as desired by the author.
  • the contents of the Surface, generated by rendering the Surface's embedded scene description, can include color information, transparency (alpha channel) and depth, as part of the Surface's structured image organization.
  • An image in this context is defined to include a video image, a still image, an animation or a scene.
  • A Surface is defined to support the specialized requirements of various texture-mapping systems internally, behind a common image management interface. As a result, any Surface producer in the system can be consumed as a texture by the 3D rendering process. Examples of such Surface producers include an ImageSurface, a MovieSurface, a MatteSurface, a SceneSurface, and an ApplicationSurface.
  • An ApplicationSurface maintains image data as rendered by its embedded application process, such as a spreadsheet or word processor, in a manner analogous to the application window in a traditional windowing system.
  • the Surface abstraction provides a mechanism for decoupling rendering rates for different elements on the same screen. For example, it may be acceptable to portray a web browser that renders slowly, at perhaps 1 frame per second, but only as long as the video frame rate produced by another application and displayed alongside the output of the browser can be sustained at a full 30 frames per second. If the web browsing application draws into its own Surface, then the screen compositor can render unimpeded at full motion video frame rates, consuming the last fully drawn image from the web browser's Surface as part of its fast screen updates.
  • Fig. 2A illustrates a scheme for rendering a complex portion 202 of screen display 200 at full motion video frame rate.
  • Fig. 2B is a flow diagram illustrating various acts included in rendering screen display 200, including complex portion 202, at full motion video rate. It may be desirable for screen display 200 to be displayed at 30 frames per second, but a portion 202 of screen display 200 may be too complex to display at 30 frames per second. In this case, portion 202 is rendered on a first surface and stored in buffer 204.
  • screen display 200 including portion 202 is displayed at 30 frames per second by using the first surface stored in buffer 204.
  • the next frame of portion 202 is rendered on a second surface and stored in buffer 206, as shown in block 220.
  • the next update of screen display 200 uses the second surface (block 225) and continues to do so until a further updated version of portion 202 is available in buffer 204.
  • Meanwhile, the next frame of portion 202 is being rendered on the first surface, as shown in block 230.
  • the updated first surface will be used to display screen display 200 including complex portion 202 at 30 frames per second.
  • Fig. 3A depicts a nested scene including an animated sub-scene.
  • Fig. 3B is a flow diagram showing acts performed to render the nested scene of Fig. 3A.
  • Block 310 renders a background image displayed on screen display 200, and block 315 places a cube 302 within the background image displayed on screen display 200. The area outside of cube 302 is part of a surface that forms the background for cube 302 on display 200.
  • a face 304 of cube 302 is defined as a third surface.
  • Block 320 renders a movie on the third surface using a MovieSurface node. Thus, face 304 of the cube displays a movie that is rendered on the third surface.
  • Face 306 of cube 302 is defined as a fourth surface.
  • Block 325 renders an image on the fourth surface using an ImageSurface node. Thus, face 306 of the cube displays an image that is rendered on the fourth surface.
  • the entire cube 302 is defined as a fifth surface, and in block 335 this fifth surface is translated and/or rotated, thereby creating a moving cube 302 with a movie playing on face 304 and a static image displayed on face 306.
  • a different rendering can be displayed on each face of cube 302 by following the procedure described above. It should be noted that blocks 310 to 335 can be done in any sequence including starting all the blocks 310 to 335 at the same time.
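  • As an illustration of the nested scene just described, the following is a rough, hypothetical sketch in VRML-like syntax. It assumes that a SurfaceNode may be supplied wherever a texture is expected, as the Surface description above suggests; the Shape, Appearance and Transform node names, the quad geometry standing in for two cube faces, and the asset URLs are illustrative assumptions rather than the original Blendo syntax.

        # Hypothetical sketch: a rotated cube-like object with a movie on one face and a still image on another.
        Transform {                      # the fifth surface: translate and/or rotate the whole object (block 335)
          rotation 0 1 0 0.785
          children [
            Shape {                      # face 304, the third surface: movie texture (block 320)
              appearance Appearance {
                texture MovieSurface { url [ "clip.mpg" ] }    # hypothetical URL
              }
              geometry IndexedFaceSet {  # front (+z) quad
                coord Coordinate { point [ -1 -1 1, 1 -1 1, 1 1 1, -1 1 1 ] }
                coordIndex [ 0 1 2 3 -1 ]
              }
            }
            Shape {                      # face 306, the fourth surface: still image texture (block 325)
              appearance Appearance {
                texture ImageSurface { url [ "poster.png" ] }  # hypothetical URL
              }
              geometry IndexedFaceSet {  # right (+x) quad
                coord Coordinate { point [ 1 -1 1, 1 -1 -1, 1 1 -1, 1 1 1 ] }
                coordIndex [ 0 1 2 3 -1 ]
              }
            }
          ]
        }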
  • It is to be understood that the Surface model described above is independent of Blendo, and it can be part of an embodiment separate from Blendo. It is also to be understood that while the description of the invention describes 3D scene rendering, the invention is equally applicable to 2D scene rendering.
  • the surface model enables authors to freely intermix image and video effects with 2D and 3D geometric mapping and animation.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)
  • Studio Circuits (AREA)

Abstract

The invention relates to a system and method (Fig. 1A, item 11) for the real-time composition and presentation of a complex, dynamic, and interactive experience by means of an efficient declarative markup language (Fig. 1A, item 12). By using a Surface construct, authors can embed images or full-motion video data (Fig. 1A, item 20) anywhere a traditional texture map would be used within their 3D scene. Authors can also use the results of rendering one scene description as an image to be texture mapped into another scene (Fig. 1A, item 28). More particularly, the Surface allows the results of any rendering application to be used as a texture within the author's scene (Fig. 1A, item 28). This allows declarative rendering of nested scenes and rendering of scenes having component Surfaces with decoupled rendering rates (Fig. 1A, item 26F).
EP02808109A 2002-11-01 2002-11-01 Unified surface model for image based and geometric scene composition Withdrawn EP1579391A4 (fr)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2002/035212 WO2004042659A1 (fr) 2002-11-01 2002-11-01 Unified surface model for image based and geometric scene composition

Publications (2)

Publication Number Publication Date
EP1579391A1 EP1579391A1 (fr) 2005-09-28
EP1579391A4 true EP1579391A4 (fr) 2009-01-21

Family

ID=32311631

Family Applications (1)

Application Number Title Priority Date Filing Date
EP02808109A EP1579391A4 (fr) 2002-11-01 2002-11-01 Unified surface model for image based and geometric scene composition

Country Status (5)

Country Link
EP (1) EP1579391A4 (fr)
JP (1) JP4260747B2 (fr)
CN (1) CN1695169A (fr)
AU (1) AU2002368317A1 (fr)
WO (1) WO2004042659A1 (fr)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100463004C (zh) * 2006-02-24 2009-02-18 Tencent Technology (Shenzhen) Co., Ltd. Method for rendering a model afterimage (ghosting) effect
JP2007336281A (ja) 2006-06-15 2007-12-27 Sony Corp Image recording apparatus, image reproducing apparatus, image recording method and image reproducing method
US20080158254A1 (en) * 2006-12-29 2008-07-03 Hong Jiang Using supplementary information of bounding boxes in multi-layer video composition
CA2682935C (fr) 2007-04-11 2020-01-28 Thomson Licensing Procede et appareil pour ameliorer des effets video numeriques (dve)
EP2506263A1 (fr) 2011-03-31 2012-10-03 Thomson Licensing Graphique de scène stéréoscopique pour définir des objets graphiques compatibles 3D et 2D
CN102930536B (zh) * 2012-10-16 2016-08-03 Shenzhen Institutes of Advanced Technology Hierarchical-structure-based indoor scene movability analysis and detection method
CN109462771B (zh) * 2018-11-26 2021-08-06 广东精鹰传媒股份有限公司 Method for realizing a two-dimensional effect of a three-dimensional caption bar

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5982350A (en) * 1991-10-07 1999-11-09 Eastman Kodak Company Compositer interface for arranging the components of special effects for a motion picture production
US20020052891A1 (en) * 1998-04-10 2002-05-02 Jeffrey H. Michaud Assigning a hot spot in an electronic artwork

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3229042B2 (ja) * 1992-11-24 2001-11-12 Sony Computer Entertainment Inc. Image processing apparatus and image processing method
JP3208116B2 (ja) * 1998-02-03 2001-09-10 株式会社次世代情報放送システム研究所 Recording medium on which video index information is recorded, video information management method using video index information, recording medium on which audio index information is recorded, and audio information management method using audio index information
JP2002208036A (ja) * 2001-01-10 2002-07-26 Toshimitsu Nakanishi Content providing system and content providing method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5982350A (en) * 1991-10-07 1999-11-09 Eastman Kodak Company Compositer interface for arranging the components of special effects for a motion picture production
US20020052891A1 (en) * 1998-04-10 2002-05-02 Jeffrey H. Michaud Assigning a hot spot in an electronic artwork

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of WO2004042659A1 *

Also Published As

Publication number Publication date
JP2006505050A (ja) 2006-02-09
JP4260747B2 (ja) 2009-04-30
AU2002368317A1 (en) 2004-06-07
WO2004042659A1 (fr) 2004-05-21
CN1695169A (zh) 2005-11-09
EP1579391A1 (fr) 2005-09-28

Similar Documents

Publication Publication Date Title
US6631240B1 (en) Multiresolution video
Klein et al. Non-photorealistic virtual environments
JP4796499B2 (ja) Video and scene graph interface
KR100962920B1 (ko) Visual and scene graph interface
AU2010227110B2 (en) Integration of three dimensional scene hierarchy into two dimensional compositing system
JP3177221B2 (ja) Method and apparatus for displaying images of scenes of interest
US8566736B1 (en) Visualization of value resolution for multidimensional parameterized data
US8723875B2 (en) Web-based graphics rendering system
EP1462998A2 (fr) Langage de balisage et modèle objet pour graphiques à vecteurs
US7113183B1 (en) Methods and systems for real-time, interactive image composition
US6856322B1 (en) Unified surface model for image based and geometric scene composition
US20050128220A1 (en) Methods and apparatuses for adjusting a frame rate when displaying continuous time-based content
EP1579391A1 (fr) Modele de surface unifie pour composition de scene geometrique et fondee sur l'image
US20050021552A1 (en) Video playback image processing
US20050088458A1 (en) Unified surface model for image based and geometric scene composition
US6683613B1 (en) Multi-level simulation
Papaioannou et al. Enhancing Virtual Reality Walkthroughs of Archaeological Sites.
CN111460770A Method, apparatus, device and storage medium for synchronizing element attributes within a document
Qi et al. Quasi-3D cell-based Animation.
Jeffery et al. Programming language support for collaborative virtual environments
CN114241101A Three-dimensional scene rendering method, system, apparatus and storage medium
Trapp Analysis and exploration of virtual 3D city models using 3D information lenses
US20040046781A1 (en) Movie description language
JP2006523337A Method for managing the depiction of graphics animations for display, and a receiver and system for implementing the method
Christopoulos et al. Image-based techniques for enhancing virtual reality environments

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20050421

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LI LU MC NL PT SE SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

RIN1 Information on inventor provided before grant (corrected)

Inventor name: BROADWELL, PETER, G.

Inventor name: KENT, JAMES, R.

Inventor name: MYERS, ROBERT, K., C/O SONY ELECTRONICS, INC.

Inventor name: MARRIN, CHRISTOPHER, F., C/O SONY ELECTRONICS, INC.

DAX Request for extension of the european patent (deleted)
RBV Designated contracting states (corrected)

Designated state(s): DE FR GB

A4 Supplementary search report drawn up and despatched

Effective date: 20081218

17Q First examination report despatched

Effective date: 20090723

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20091203