WO2002037857A2 - Method and device for video scene composition including graphic elements - Google Patents

Method and device for video scene composition including graphic elements Download PDF

Info

Publication number
WO2002037857A2
WO2002037857A2 (PCT/EP2001/012286)
Authority
WO
WIPO (PCT)
Prior art keywords
graphic elements
video
frames
rendered
format
Prior art date
Application number
PCT/EP2001/012286
Other languages
French (fr)
Other versions
WO2002037857A3 (en)
Inventor
Guillaume Brouard
Thierry Durandy
Thierry Planterose
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Priority to JP2002540463A priority Critical patent/JP2004513578A/en
Priority to EP01993126A priority patent/EP1334620A2/en
Publication of WO2002037857A2 publication Critical patent/WO2002037857A2/en
Publication of WO2002037857A3 publication Critical patent/WO2002037857A3/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23412 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs


Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Circuits (AREA)
  • Image Processing (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Television Systems (AREA)

Abstract

This invention relates to a method and device for obtaining upscaled video frames including 2D graphic elements from primary video objects of smaller format, while ensuring that no degradation of said 2D graphic elements will take place. To this end, separate and parallel processes are performed on video frames and on 2D graphic elements. The first process consists of generating rendered frames from said primary video objects, said rendered frames being afterwards upscaled to the desired output video format. The second process consists of directly rendering the 2D graphic elements in said output format by using a drawing algorithm ensuring that no degradation of said 2D graphic elements will take place. In a final step, rendered 2D graphic elements are mapped on the upscaled video frames. Compared with an upscaling performed on video frames already including 2D graphic elements, which leads to upscaled video frames including degraded 2D graphic elements, this method yields upscaled video frames including 2D graphic elements of good resolution.

Application: Video scene composition

Description

Method and device for video scene composition including graphic elements
The present invention relates to a method of video scene composition from a set of graphic elements and primary video objects.
This invention may be used in any video coding system for improving, for a viewer, the visual reading comfort of graphic elements.
With the emergence of multimedia applications such as interactive television or the Electronic Program Guide (EPG), image quality has become an important aspect. Indeed, having simultaneously a good resolution and a large display format for pictures, videos and graphic elements is now required by end users so that they can communicate and interact with such applications with maximum visual comfort. More and more, the content of such multimedia applications is composed of a primary video content into which additional information is inserted. Such additional information may correspond to answers to end-user requests or to end-user graphic personalization, resulting in the mapping of two-dimensional (2D) graphic elements, such as text or geometric patterns, onto video frames of said primary video content.
US patent 5,877,771 describes a method and apparatus for 2D texture mapping for providing a richer surface detail in a displayed frame. To this end, multi-resolutional texture data for a destination pixel in the frames of the primary video is super-sampled at horizontal and vertical screen space sampling rates based on the local rate of change in texture. If graphic elements are considered as texture, such a method can be used for the mapping of 2D graphic elements on video frames.
It is an object of the invention to propose an improved and cost-effective method of video scene composition from 2D graphic elements and video objects which allows obtaining a good quality of said 2D graphic elements in the video scene. Indeed, the prior art method has strong limitations. First, this method does not take into account that the mapping of 2D graphic elements on the primary video frames must be combined with an upscaling of said primary video frames if a larger format is required for the resulting output video frames. Thus, in this context, the quality of 2D graphic elements may be degraded because of aliasing, which results in coarse and jagged symbols not legible to viewers. Secondly, this method remains expensive since it requires a large amount of processing power for the mapping operation.
The method according to the invention provides a solution to the problems posed by the limitations of the prior art method. This method makes it possible to obtain upscaled video frames including 2D graphic elements from primary video objects of smaller format while ensuring that no degradation of said 2D graphic elements will take place. To this end, the method according to the invention is characterized in that it comprises:
- a rendering step of said primary video objects for providing rendered video frames in a first format,
- an upscaling step of said rendered video frames for providing upscaled video frames in a second format,
- a rendering step of graphic elements for providing rendered graphic elements in said second format,
- a mapping step of said rendered graphic elements on said upscaled video frames, for composing frames defining the video scene.
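Purely as an illustration of how these four steps fit together, the pipeline can be sketched as follows. This is a sketch, not the patent's implementation: all function and variable names are invented for the example, frames are assumed to be numpy arrays in (height, width, channels) layout, and the upscaler uses the pixel duplication that the description later gives as its example technique.

```python
import numpy as np

def render_video_frame(background, inset, pos):
    """Step 1: assemble decoded video objects into one first-format frame."""
    frame = background.copy()
    y, x = pos
    h, w = inset.shape[:2]
    frame[y:y + h, x:x + w] = inset   # e.g. an SQCIF video overlaid in a CIF frame
    return frame

def upscale(frame, factor=2):
    """Step 2: enlarge the rendered frame by duplicating pixels on both axes."""
    return frame.repeat(factor, axis=0).repeat(factor, axis=1)

def map_graphics(frame, glyph, pos):
    """Step 4: overwrite pixels of the upscaled frame with a rendered graphic."""
    y, x = pos
    h, w = glyph.shape[:2]
    frame[y:y + h, x:x + w] = glyph
    return frame

cif = np.zeros((288, 352, 3), np.uint8)        # background video object, CIF
sqcif = np.full((96, 128, 3), 128, np.uint8)   # second video object, SQCIF
rendered = render_video_frame(cif, sqcif, (16, 16))   # first format
upscaled = upscale(rendered, 2)                # second format, 576 x 704
# Step 3 is independent of the video path: the graphic is drawn directly
# at the output size, so it is never upscaled and never aliased.
glyph = np.full((32, 160, 3), 255, np.uint8)   # stand-in for a rendered text label
scene = map_graphics(upscaled, glyph, (500, 40))
```

Because the graphic is drawn at the output size from the start, the jagged edges that pixel duplication would introduce in text never appear.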
This method profits from the fact that the output video is composed of two distinct sets of data relating to video objects and 2D graphic elements. Thus, separate and parallel processes are performed on video objects and on 2D graphic elements. The first process consists of composing frames from video object frames, said frames being upscaled to the desired output video format afterwards. The second process consists of directly rendering the 2D graphic elements in said output format, using a drawing algorithm which ensures that no degradation of said 2D graphic elements will take place. In a final step, rendered 2D graphic elements are mapped on the upscaled video frames. This method generates upscaled video frames including 2D graphic elements of good resolution, as compared with an upscaling performed on video frames already including 2D graphic elements, which leads to upscaled video frames including degraded 2D graphic elements. The invention will now be explained in more detail with reference to the embodiments described below and considered in connection with the accompanying drawings, in which identical parts or sub-steps have the same reference numbers:
Fig.1 depicts the sequence of steps according to the invention,
Fig.2 depicts the hardware implementation of the invention, and
Fig.3 depicts an embodiment of the invention.
The present invention relates to an improved video scene composition method from a set of video data and 2D graphic elements.
The invention is described for a video scene composed from 2D graphic elements and video data coded in accordance with the MPEG-4 video standard, but it will be apparent to those skilled in the art that the invention is not limited to this specific case: it can also be applied to video data coded in accordance with other object-oriented video standards, with the MPEG-2 or H.263 video standards, or to non-coded video data.
Fig.1 depicts the sequence of steps of the method according to the invention in the context of a video scene composition from two videos and 2D graphic elements. It includes:
- a decoding step 101 for decoding input video objects 102 coded in accordance with the MPEG-4 video standard and for providing decoded video objects 103. The first video object corresponds to a background video having a first format, for example the CIF format (Common Intermediate Format). The second video object corresponds to a video having a smaller format, for example the SQCIF format (Sub Quarter Common Intermediate Format). These input video objects are decoded by separate MPEG-4 decoders.
- a video rendering step 104 for obtaining rendered video frames 105 from the decoded videos 103. This step consists in assembling said videos 103 with respect to assembling parameters. For example, it may result in SQCIF video frames overlaid in CIF video frames. Such parameters describe, for example, the spatial position of each video object in the scene or the transparency coefficient between SQCIF and CIF video frames. They are directly extracted from each video object or from a stream 106 encoded in accordance with the BIFS syntax (Binary Format for Scenes) and dedicated to describing the scene composition. This step may also take into account the ability of the MPEG-4 layer to modify assembling parameters in response to user interaction, e.g. by means of a mouse or keyboard signal 107 or using BIFS updates inside the BIFS stream 106, such as changing the spatial position of selected video objects in the scene being rendered.
- an upscaling step 108 for providing enlarged rendered frames 109 along the horizontal and/or vertical axis. To this end, luminance and chrominance pixels of frames 105 are duplicated horizontally and/or vertically according to a scaling factor. Of course, alternative upscaling techniques may be used, such as techniques based on pixel interpolation. For example, if the scaling factor is set to two, the upscaling of frames 105 in the CIF format will result in frames 109 having the CCIR format.
- a graphic rendering step 110 for obtaining rendered 2D graphic elements 112 from 2D graphic elements 111. To this end, a drawing algorithm is used to render said graphic elements 111 in a format allowing a direct mapping on frames 109, without upscaling. In this way no degradation of the 2D graphic elements can take place. The 2D graphic elements may be composed of text and/or graphic patterns. Each element 111 is rendered as a separate unit in the graphic rendering step 110.
- a mapping step 113 of rendered 2D graphic elements 112 on rendered frames 109, resulting in frames 115. This step takes into account the position, defined by a signal 114 or by the scene description inside the BIFS stream 106, of each 2D graphic element 112 in the frames 109, said position corresponding to horizontal and vertical coordinates in a Cartesian reference system defined in frames 109. The signal 114 is pre-set or issued from a mouse or a keyboard, allowing a user to interact with 2D graphic elements by choosing their spatial position in said reference system. Once the position of a given graphic element is defined, the mapping operation 113 replaces pixels of frames 109 with the pixels defining said graphic element. In an improved embodiment, transparency between graphic elements and frames 109 can be obtained by averaging the pixels of frames 109 and the pixels defining said graphic elements.
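The transparency variant at the end of the mapping step amounts to a pixel-wise average between the upscaled frame and the rendered graphic. A minimal sketch, again with a hypothetical helper name and uint8 numpy frames as in the earlier example:

```python
import numpy as np

def map_with_transparency(frame, glyph, pos):
    """Blend a rendered 2D graphic into the upscaled frame by pixel averaging."""
    y, x = pos
    h, w = glyph.shape[:2]
    region = frame[y:y + h, x:x + w].astype(np.uint16)   # widen to avoid overflow
    frame[y:y + h, x:x + w] = ((region + glyph) // 2).astype(np.uint8)
    return frame

frame = np.zeros((576, 704, 3), np.uint8)        # upscaled video frame
label = np.full((32, 160, 3), 255, np.uint8)     # rendered graphic element
frame = map_with_transparency(frame, label, (500, 40))   # 50/50 blend
```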
Fig.2 depicts the hardware architecture 200 for implementing the various steps according to the invention. This architecture is structured around a data bus 201 which ensures data exchange between the various processing hardware units. First, it includes an input peripheral 202 for receiving both input video objects and 2D graphic elements, which are both stored in the mass storage 203. Said video objects are decoded by the signal processor 204 (referred to as CPU in the figure), which executes instructions belonging to a decoding algorithm stored in the fast access memory 205. Once decoded, video objects are stored in a first video buffer 206.

The video rendering step is also performed by the signal processor 204, executing instructions belonging to a rendering algorithm stored in the memory 205, but also taking into account data originating from the action of a mouse 207, a keyboard 208, a BIFS file stored in the mass storage 203, or a BIFS stream from the input peripheral 202 for positioning each video object in the video scene being built. Each frame rendered from a set of decoded video objects is thus stored in said first buffer 206 and is upscaled by means of a signal co-processor 209 (referred to as ICP in the figure). The use of a signal co-processor for such a task allows fast processing and a minimal CPU load, because upscaling hardware functions can be included in such a device. The resulting upscaled frame is stored in said buffer 206.

In parallel, the 2D graphic elements are rendered by the signal processor 204, which executes instructions belonging to a drawing algorithm stored in the memory 205, each graphic element being successively rendered and mapped into the rendered frame contained in buffer 206. If transparency between rendered frames and 2D graphic elements is desired, rendered graphic elements are stored in a temporary buffer 210 so that an averaging operation between pixels belonging to the rendered frame stored in buffer 206 and pixels belonging to said rendered 2D graphic elements can be performed by the processor 204, the resulting frame being stored in buffer 206.

When the final rendered frame including 2D graphic elements is available, the content of buffer 206 is sent to a second buffer 211 so that the final rendered frame is presented to an output video peripheral 212 for display on a display 213. This switching mechanism allows the rendering of the next frame in buffer 206 to start while the current frame in buffer 211 is being displayed. This process is repeated for the rendering of each frame including 2D graphic elements.
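The two-buffer hand-off can be sketched as follows; the buffer names mirror the roles of buffers 206 and 211, while the compose and scan-out stubs stand in for the processing steps and the output peripheral. Everything here is illustrative, not the patent's code:

```python
import numpy as np

FRAME_SHAPE = (576, 704, 3)
render_buffer = np.zeros(FRAME_SHAPE, np.uint8)    # plays the role of buffer 206
display_buffer = np.zeros(FRAME_SHAPE, np.uint8)   # plays the role of buffer 211

def compose_frame(buf):
    buf[:] = 0   # stand-in for the decode, render, upscale and map steps

def scan_out(buf):
    pass         # stand-in for the output video peripheral 212

for _ in range(3):   # one iteration per output frame
    compose_frame(render_buffer)
    # Hand the finished frame over and immediately start composing the next
    # one: this overlap of rendering and display is the point of the scheme.
    render_buffer, display_buffer = display_buffer, render_buffer
    scan_out(display_buffer)
```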
Fig.3 depicts an embodiment according to the invention. This embodiment corresponds to an electronic program guide (EPG) application allowing a user to receive a variety of information on TV channel programs, such as video previews or textual data. To this end, the consecutive steps according to the invention as described with reference to Figs. 1 and 2 are implemented in a set-top box unit 301, which receives primary data from the outside world 302, e.g. from a broadcaster, via a link 303. Said primary data are processed in accordance with the different steps of the invention, resulting in video frames which have a larger format than the primary video objects, include 2D graphic elements, and are displayed on the display 304. This application allows a user to navigate the screen and to see previews depending on the position of a browsing window 308, with its associated bar targets 310, in a channel space 306 and a time space 307. The browsing window 308 is overlaid and blended on top of the full-screen TV program 309. The user can then browse through time 307 and channels 306 while keeping the current TV program in the background. The interaction function is provided by a mouse-like pointer device 305, such as a multifunctional remote control. In this application, the invention ensures a good legibility of the text and graphic elements 306, 307 and 310 in the displayed frames.
Of course, alternative graphic designs may be proposed for more informational features, such as the presentation of the actors of a movie or detailed information on programs, without deviating from the scope of the invention.

Claims

CLAIMS:
1. A method of video scene composition from a set of graphic elements and primary video objects, said method being characterized in that it comprises:
- a rendering step of said primary video objects for providing rendered video frames in a first format,
- an upscaling step of said rendered video frames for providing upscaled video frames in a second format,
- a rendering step of graphic elements for providing rendered graphic elements in said second format,
- a mapping step of said rendered graphic elements on said upscaled video frames, for composing frames defining the video scene.
2. A method as claimed in claim 1, characterized in that the primary video objects are decoded MPEG-4 video objects.
3. A method as claimed in claim 1, characterized in that the graphic elements are characters and geometric patterns.
4. A method as claimed in claim 1, characterized in that the rendering step of graphic elements is done by a method using a drawing algorithm.
5. A method as claimed in claim 1, characterized in that the upscaling step involves a duplication of pixels which define rendered frames having the first format.
6. A set-top box product for video scene composition from a set of graphic elements and primary video objects, said set-top box being characterized in that it comprises:
- rendering means applied to said primary video objects for providing rendered video frames in a first format,
- upscaling means applied to said rendered video frames for providing upscaled video frames in a second format,
- rendering means applied to said graphic elements for providing rendered graphic elements in said second format,
- mapping means for mapping said rendered graphic elements on said upscaled video frames, resulting in frames defining the video scene.
7. A set-top box product as claimed in claim 6, characterized in that the rendering and mapping means involve the execution of dedicated program instructions by a signal processor, said program instructions being loaded in said signal processor or in a memory, while the upscaling means involve the execution of hardware functions of a signal co-processor.
8. A set-top box product as claimed in claim 6, characterized in that it comprises user-interaction means for modifying the relative spatial positions of said primary video objects during their rendering.
9. A set-top box product as claimed in claim 6, characterized in that it comprises decoding means for decoding an input MPEG-4 stream, resulting in MPEG-4 video objects defining said primary video objects.
10. A set-top box product as claimed in claim 6, characterized in that said graphic elements mapped on said upscaled video frames are characters and geometric patterns.
PCT/EP2001/012286 2000-10-31 2001-10-19 Method and device for video scene composition including graphic elements WO2002037857A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2002540463A JP2004513578A (en) 2000-10-31 2001-10-19 Method and apparatus for creating a video scene containing graphic elements
EP01993126A EP1334620A2 (en) 2000-10-31 2001-10-19 Method and device for video scene composition including graphic elements

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP00403027 2000-10-31
EP00403027.6 2000-10-31

Publications (2)

Publication Number Publication Date
WO2002037857A2 (en) 2002-05-10
WO2002037857A3 WO2002037857A3 (en) 2002-07-18

Family

ID=8173928

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2001/012286 WO2002037857A2 (en) 2000-10-31 2001-10-19 Method and device for video scene composition including graphic elements

Country Status (6)

Country Link
US (1) US6828979B2 (en)
EP (1) EP1334620A2 (en)
JP (1) JP2004513578A (en)
KR (1) KR100800275B1 (en)
CN (1) CN1253004C (en)
WO (1) WO2002037857A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003101107A2 (en) * 2002-05-28 2003-12-04 Koninklijke Philips Electronics N.V. Remote control system for a multimedia scene

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4464599B2 (en) * 2002-05-13 2010-05-19 株式会社マイクロネット Three-dimensional computer image broadcasting telop apparatus and method thereof
FR2851716A1 (en) * 2003-02-21 2004-08-27 France Telecom Graphical animations description managing method, involves independently storing data describing content of spatiotemporal arrangement and data describing primitive of graphical objects
CN1875627A (en) * 2003-10-29 2006-12-06 皇家飞利浦电子股份有限公司 Method and apparatus for rendering smooth teletext graphics
KR101445074B1 (en) * 2007-10-24 2014-09-29 삼성전자주식회사 Method and apparatus for manipulating media object in media player
KR100980449B1 (en) * 2007-12-17 2010-09-07 한국전자통신연구원 Method and system for rendering of parallel global illumination
CN101599168B (en) * 2008-06-04 2011-09-28 鸿富锦精密工业(深圳)有限公司 Graphic layer transition method of aggregation substance and system thereof
KR102194635B1 (en) 2014-01-29 2020-12-23 삼성전자주식회사 Display controller and display system including the same

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0407614A1 (en) * 1989-02-02 1991-01-16 Dai Nippon Insatsu Kabushiki Kaisha Image processing apparatus
US5659793A (en) * 1994-12-22 1997-08-19 Bell Atlantic Video Services, Inc. Authoring tools for multimedia application development and network delivery
WO1998006098A1 (en) * 1996-08-06 1998-02-12 Applied Magic, Inc. Non-linear editing system for home entertainment environments
WO2000079799A2 (en) * 1999-06-23 2000-12-28 Sarnoff Corporation Method and apparatus for composing image sequences

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1993002524A1 (en) * 1991-07-19 1993-02-04 Princeton Electronic Billboard Television displays having selected inserted indicia
US6208350B1 (en) * 1997-11-04 2001-03-27 Philips Electronics North America Corporation Methods and apparatus for processing DVD video
US6275239B1 (en) * 1998-08-20 2001-08-14 Silicon Graphics, Inc. Media coprocessor with graphics video and audio tasks partitioned by time division multiplexing
EP1145218B1 (en) * 1998-11-09 2004-05-19 Broadcom Corporation Display system for blending graphics and video data
US20010017671A1 (en) * 1998-12-18 2001-08-30 Pierre Pleven "Midlink" virtual insertion system and methods
US6526583B1 (en) * 1999-03-05 2003-02-25 Teralogic, Inc. Interactive set-top box having a unified memory architecture
US6525746B1 (en) * 1999-08-16 2003-02-25 University Of Washington Interactive video object processing environment having zoom window

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0407614A1 (en) * 1989-02-02 1991-01-16 Dai Nippon Insatsu Kabushiki Kaisha Image processing apparatus
US5659793A (en) * 1994-12-22 1997-08-19 Bell Atlantic Video Services, Inc. Authoring tools for multimedia application development and network delivery
WO1998006098A1 (en) * 1996-08-06 1998-02-12 Applied Magic, Inc. Non-linear editing system for home entertainment environments
WO2000079799A2 (en) * 1999-06-23 2000-12-28 Sarnoff Corporation Method and apparatus for composing image sequences

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"MPEG-4 Authoring Tools Let Pros, Consumers Create Multimedia for Web Pages, TV, HDTV" XP002155140 page 1, line 1 -page 1, line 40 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003101107A2 (en) * 2002-05-28 2003-12-04 Koninklijke Philips Electronics N.V. Remote control system for a multimedia scene
FR2840494A1 (en) * 2002-05-28 2003-12-05 Koninkl Philips Electronics Nv REMOTE CONTROL SYSTEM OF A MULTIMEDIA SCENE
WO2003101107A3 (en) * 2002-05-28 2004-03-04 Koninkl Philips Electronics Nv Remote control system for a multimedia scene

Also Published As

Publication number Publication date
CN1253004C (en) 2006-04-19
JP2004513578A (en) 2004-04-30
KR20020086878A (en) 2002-11-20
CN1398487A (en) 2003-02-19
WO2002037857A3 (en) 2002-07-18
KR100800275B1 (en) 2008-02-05
EP1334620A2 (en) 2003-08-13
US6828979B2 (en) 2004-12-07
US20020163501A1 (en) 2002-11-07

Similar Documents

Publication Publication Date Title
US8698840B2 (en) Method and apparatus for processing video and graphics data to create a composite output image having independent and separate layers of video and graphics display planes
US6317164B1 (en) System for creating multiple scaled videos from encoded video sources
US7836193B2 (en) Method and apparatus for providing graphical overlays in a multimedia system
US6269484B1 (en) Method and apparatus for de-interlacing interlaced content using motion vectors in compressed video streams
US6633676B1 (en) Encoding a video signal
EP1357745A2 (en) Method and apparatus for processing of interlaced video images for progressive video displays
CN109640167B (en) Video processing method and device, electronic equipment and storage medium
US5638130A (en) Display system with switchable aspect ratio
US20100156916A1 (en) Display device
WO2001043431A2 (en) Enhanced display of world wide web pages on television
JP2000217110A (en) Method and device for providing on-screen display data to coded video signal having format
KR100561214B1 (en) Block based video processor and method for processing a data stream of coded image representative data
US6828979B2 (en) Method and device for video scene composition including mapping graphic elements on upscaled video frames
US9053752B1 (en) Architecture for multiple graphics planes
US6480238B1 (en) Apparatus and method for generating on-screen-display messages using field doubling
US20080260290A1 (en) Changing the Aspect Ratio of Images to be Displayed on a Screen
US7630018B2 (en) On-screen display apparatus and on-screen display generation method
EP1338149B1 (en) Method and device for video scene composition from varied data
EP0932977B1 (en) Apparatus and method for generating on-screen-display messages using field doubling
EP1848203A1 (en) Method and system for video image aspect ratio conversion
US20020113814A1 (en) Method and device for video scene composition
Bugwadia Motion-compensated image interpolation-rate conversion algorithms for HDTV

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref country code: JP

Ref document number: 2002 540463

Kind code of ref document: A

Format of ref document f/p: F

WWE Wipo information: entry into national phase

Ref document number: 1020027008457

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: 018033792

Country of ref document: CN

AK Designated states

Kind code of ref document: A3

Designated state(s): CN JP KR

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR

WWE Wipo information: entry into national phase

Ref document number: 2001993126

Country of ref document: EP

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWP Wipo information: published in national office

Ref document number: 1020027008457

Country of ref document: KR

WWP Wipo information: published in national office

Ref document number: 2001993126

Country of ref document: EP