GB2488746A - Transmission of 3D subtitles in a three dimensional video system - Google Patents

Transmission of 3D subtitles in a three dimensional video system

Info

Publication number
GB2488746A
Authority
GB
United Kingdom
Prior art keywords
eye view
section
map
view component
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1021919.4A
Other versions
GB2488746B (en)
GB201021919D0 (en)
Inventor
Arthur Simon Waller
Leslie Arthur Durn
Graham John Mudd
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority to GB1021919.4A priority Critical patent/GB2488746B/en
Publication of GB201021919D0 publication Critical patent/GB201021919D0/en
Priority to PCT/KR2011/009829 priority patent/WO2012086990A2/en
Priority to EP11850047.9A priority patent/EP2656614A4/en
Priority to CN201180062524.8A priority patent/CN103270758B/en
Priority to KR1020137013819A priority patent/KR101846857B1/en
Publication of GB2488746A publication Critical patent/GB2488746A/en
Application granted granted Critical
Publication of GB2488746B publication Critical patent/GB2488746B/en
Expired - Fee Related
Anticipated expiration


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/24 - Systems for the transmission of television signals using pulse code modulation
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 - End-user applications
    • H04N 21/488 - Data services, e.g. news ticker
    • H04N 21/4884 - Data services, e.g. news ticker for displaying subtitles
    • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 - Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 - Processing image signals
    • H04N 13/161 - Encoding, multiplexing or demultiplexing different image signal components
    • H04N 13/172 - Processing image signals comprising non-image signal components, e.g. headers or format information
    • H04N 13/183 - On-screen display [OSD] information, e.g. subtitles or menus
    • H04N 21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81 - Monomedia components thereof
    • H04N 21/816 - Monomedia components thereof involving special video data, e.g. 3D video
    • H04N 19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • H04N 2013/0074 - Stereoscopic image analysis
    • H04N 2013/0081 - Depth or disparity estimation from stereoscopic image signals

Abstract

A stream of stereoscopic images is transmitted as an overlay to a stereoscopic broadcast, the overlay being transmitted for optional display at a receiver in combination with the stereoscopic broadcast. The stream of stereoscopic images is divided into a plurality of sections, each section having a first eye view component and a second eye view component. A map is determined that represents the first eye view component 34a, 34b of each said section and a comparative map 42 is derived for each section representing a comparison between the first and second eye view components for the respective section. The respective map for each section representing the first eye view components 34a, 34b is encoded using run length encoding to produce an encoded first eye view symbol 38 and the respective comparative map for each said section is encoded using run length encoding to produce an encoded comparison symbol 44. A composite symbol is assembled for each section comprising the respective encoded first eye view symbol 38 followed by a first ending symbol, followed by the respective encoded comparison symbol 44 and a second ending symbol, and the composite symbols are transmitted. Also disclosed is the use of a display offset descriptor.

Description

Improvements to Subtitles for Three Dimensional Video Transmission
Field of the Invention
The present invention relates generally to three dimensional video transmission, and in particular, but not exclusively, to methods and apparatus relating to subtitles for stereoscopic video broadcast.
Background of the Invention
Video transmission systems may transmit three dimensional video images in the form of a stereoscopic broadcast, typically by broadcasting left eye view components and right eye view components of a video stream. The left eye and right eye view components typically differ due to the viewpoint being in a slightly different position, usually offset horizontally by the typical separation of human eyes. The respective view components may be conveyed to the appropriate eye of a viewer by a variety of means. For example, the left and right eye view components may be displayed on a screen alternately, with a polarisation or colour composition that is dependent on which eye is intended to view the component, so that the appropriate view for each eye may be seen while the view intended for the other eye is blocked by special spectacles which block certain colours or certain polarisation components.
Typically, the stereoscopic broadcast may be transmitted in a format that is backward compatible with conventional two dimensional video transmission receivers; typically a single eye view would be received and displayed, and data relating to the second eye view would be disregarded.
An overlay may be transmitted for optional display at a receiver in combination with the stereoscopic broadcast, typically for display in certain parts of the field of view. For example, the overlay may comprise subtitles that may be optionally selected for display, and a choice of languages may be provided. Conventionally, the overlay, such as subtitles, is transmitted in a two dimensional form that is back-compatible with two dimensional receivers, together with an indication of a display offset, typically a horizontal offset, that should be applied between the positions that components of the overlay occupy within the display area in one eye view and in the other eye view. The choice of horizontal offset typically determines the position within a three dimensional view where the overlay will appear to be located in terms of depth, that is to say in the z-direction, i.e. front-to-back. However, an overlay transmitted and displayed in this way will appear to be "razor-thin", as it has no apparent bulk or thickness, which may give an unsatisfactory viewing experience.
It is an object of the invention to mitigate the problems with the prior art systems.
Summary of the Invention
In accordance with a first aspect of the present invention, there is provided a method of transmitting a stream of stereoscopic images as an overlay to a stereoscopic broadcast, the overlay being transmitted for optional display at a receiver in combination with the stereoscopic broadcast, the method comprising: dividing the stream of stereoscopic images into a plurality of sections, each section having a first eye view component and a second eye view component; determining a map to represent the first eye view component of each said section; deriving a comparative map for each section representing a comparison between the first and second eye view components for the respective section; encoding the respective map for each section representing the first eye view components using run length encoding to produce an encoded first eye view symbol; encoding the respective comparative map for each said section using run length encoding to produce an encoded comparison symbol; assembling a composite symbol for each section comprising the respective encoded first eye view symbol followed by a first ending symbol, followed by the respective encoded comparison symbol and a second ending symbol; and transmitting the composite symbols.
An advantage of encoding the comparative map, typically a bit map, for each section using run length encoding is that this is particularly efficient in terms of data compression. For a typical overlay, such as for example subtitles, the differences between the first eye and the second eye view components, that is to say for example the left eye and right eye view components, may typically correspond to the edges of characters within the overlay, with large parts of the left eye and right eye view components being similar. The comparative map, typically a bit map, may therefore be likely to contain large parts containing an indication that there is no difference between the views. Run-length encoding is particularly efficient for encoding such content.
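By way of illustration only, the short sketch below (in Python, with illustrative names that are not taken from the method itself) shows why run length encoding suits such content: a comparative-map row that is almost entirely the "no difference" entry collapses into a handful of (value, run length) pairs.

```python
from itertools import groupby
from typing import List, Tuple

def run_length_encode(row: List[int]) -> List[Tuple[int, int]]:
    """Collapse a row of map values into (value, run_length) pairs."""
    return [(value, sum(1 for _ in run)) for value, run in groupby(row)]

# A comparative-map row is mostly the "no difference" entry (here 0),
# with a short run of differing pixels around a character edge.
comparative_row = [0] * 40 + [3, 3, 3] + [0] * 57
print(run_length_encode(comparative_row))
# [(0, 40), (3, 3), (0, 57)] -- three pairs instead of 100 pixel values
```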
The composite symbol has an advantage of being receivable by a conventional receiver for receiving two dimensional images, which receiver may recognise the encoded first eye view symbol and the first ending symbol, but disregard the encoded comparison symbol and the second ending symbol.
In an embodiment of the invention, the method comprises transmitting a display offset descriptor indicating a difference between the position of the first eye component in a display field and the position of the second eye view component in the display field.
The display offset descriptor may be determined so as to reduce differences between a first eye view component and an offset second eye view component. This has an advantage that the amount of data in each encoded comparison symbol may be reduced. The display offset descriptor may relate to a position of the overlay in depth, that is to say z-direction, and data in the comparison symbol may be used to represent data relating to a three dimensional representation of objects within the overlay, for example giving apparent thickness to subtitle characters. Transmitting a combination of the display offset descriptor and the composite symbol comprising the encoded comparison symbol gives an efficient way of transmitting the overlay so that a receiver may display a three dimensional representation of objects within the overlay.
In an embodiment of the invention, the display offset descriptor for each section is the same. In this case, each of the sections of the overlay may appear at the same depth.
In an embodiment of the invention, a first value of display offset descriptor is transmitted relating to first sections selected from the plurality of sections for display in a first region of a display and a second value of display offset descriptor is transmitted relating to second sections selected from the plurality of sections for display in a second region of a display. This has an advantage that sections of the overlay for display in different regions may appear at different depths. For example, speech subtitles, or "bubbles", may be attached to different individuals at an appropriate depth for the position of the individual in a field of view in the stereoscopic broadcast.
In an embodiment of the invention, the method comprises deriving the comparative map for each section on the basis of a difference between maps for the first and second eye view components for the respective section.
In an embodiment of the invention, the method comprises deriving the comparative map for each section on the basis of a subtraction of element values in a map for the second eye view component for the respective section from corresponding element values in the map for the first eye view component for the respective section.
In an embodiment of the invention, the method comprises deriving the comparative map for each section on a basis comprising a comparison of an element value in a map for the second eye view component for the respective section with a corresponding element value in the map for the first eye view component for the respective section; representing an element value in the comparative map by the respective element value in the map for the second eye view component if the respective element values for the first and second eye view components are different; and representing an element value in the comparative map by a reserved entry if the respective element values for the first and second eye view components are the same. This has an advantage that the comparative map may be easily decoded at the receiver.
In an embodiment of the invention, the composite symbol comprises a plurality of rows, each row representing a line of an image, and each row of the composite symbol comprises a run length encoded row of the first eye view symbol followed by a first ending symbol, the first ending symbol being followed by corresponding run length encoded row of the comparison symbol and a second ending symbol. This has an advantage that the composite symbol may be easily decoded at the receiver, and that the composite symbol may be back-compatible with existing receivers for two-dimensional video, since these may recognise the first ending symbol and decode the first eye view symbol but may disregard the comparison symbol.
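A minimal sketch of that row structure follows, assuming run length encoded rows are represented as sequences of (value, run length) pairs as in the earlier sketch; the ending-symbol values used here are placeholders, not values defined by the method.

```python
FIRST_ENDING_SYMBOL = ("END_FIRST_EYE_VIEW",)   # placeholder markers, not
SECOND_ENDING_SYMBOL = ("END_COMPARISON",)      # taken from the specification

def assemble_composite_row(first_eye_rle, comparison_rle):
    """One row of the composite symbol: the run length encoded first eye view
    row, the first ending symbol, the run length encoded comparative map row,
    and the second ending symbol."""
    return (tuple(first_eye_rle) + FIRST_ENDING_SYMBOL
            + tuple(comparison_rle) + SECOND_ENDING_SYMBOL)
```

A two dimensional receiver can then parse each row up to the first ending symbol and ignore the remainder.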
In an embodiment of the invention, the method comprises: compiling a colour look up table to represent colours of the stream of stereoscopic images, wherein the respective maps comprise values from the colour look up table. This has an advantage of providing an efficient data compression in encoding the symbols, since it is likely that an overlay, such as subtitles, may comprise a limited number of colours.
In an embodiment of the invention, the respective maps comprising values from the colour look up table are bit maps having elements each representing a pixel.
In an embodiment of the invention, the colour look up table represents a given number of colours chosen to represent the stream of stereoscopic images.
In an embodiment of the invention, the given number of colours is 256 or fewer.
In an embodiment of the invention, the stream of stereoscopic images is a stream of subtitle images.
In an embodiment of the invention, each said section represents a character.
In an embodiment of the invention, each said section represents a word.
In an embodiment of the invention, each said section represents a line of text.
In an embodiment of the invention, the stereoscopic broadcast comprises a video data stream comprising a first part representing a first eye view and a second part representing a second eye view. The first part of said video data stream and the second part of said video data stream may comprise substantially the same amount of data, in that the first eye view and second eye views are transmitted independently, that is to say the video data stream may be transmitted without information representing a comparison of the first part representing a first eye view and the second part representing a second eye view, and may be transmitted without information enabling the second eye view to be decoded on the basis of the first eye view. This has an advantage that existing video compression techniques may be used to transmit and receive the video data stream.
In accordance with a second aspect of the invention there is provided apparatus for transmitting a stream of stereoscopic images as an overlay to a stereoscopic broadcast, the overlay being transmitted for optional display at a receiver in combination with the stereoscopic broadcast, the apparatus being arranged to: divide the stream of stereoscopic images into a plurality of sections, each section having a first eye view component and a second eye view component; determine a map to represent the first eye view component of each said section; derive a comparative map for each section representing a comparison between the first and second eye view components for the respective section; encode the respective map for each section representing the first eye view components using run length encoding to produce an encoded first eye view symbol; encode the respective comparative map for each said section using run length encoding to produce an encoded comparison symbol; assemble a composite symbol for each section comprising the respective encoded first eye view symbol followed by a first ending symbol, followed by the respective encoded comparison symbol and a second ending symbol; and transmit the composite symbols.
In accordance with a third aspect of the invention there is provided a method of transmitting a stream of stereoscopic images as an overlay to a stereoscopic broadcast, the overlay being transmitted for optional display at a receiver in combination with the stereoscopic broadcast, the stream of stereoscopic images comprising a plurality of sections, each section having a first eye view component and a second eye view component, the method comprising: determining respective maps to represent the first and second eye view components of each section; transmitting data generated from the map representing the first eye view component and data generated from the map representing the second eye view for each respective section; and transmitting a display offset descriptor indicating a difference between the position of the first eye component in a display field and the position of the second eye view component in the display field.
In accordance with a fourth aspect of the invention there is provided apparatus for transmitting a stream of stereoscopic images as an overlay to a stereoscopic broadcast, the overlay being transmitted for optional display at a receiver in combination with the stereoscopic broadcast, the stream of stereoscopic images comprising a plurality of sections, each section having a first eye view component and a second eye view component, the apparatus being arranged to: determine respective maps to represent the first and second eye view components of each section; transmit data generated from the map representing the first eye view component and data generated from the map representing the second eye view for each respective section; and transmit a display offset descriptor indicating a difference between the position of the first eye component in a display field and the position of the second eye view component in the display field.
In accordance with a fifth aspect of the invention, there is provided a method of receiving a stream of stereoscopic images as an overlay to a stereoscopic broadcast, the overlay being received for optional display at a receiver in combination with the stereoscopic broadcast, wherein the stream of stereoscopic images comprises a plurality of sections, each section having a first eye view component and a second eye view component, the method comprising: receiving a display offset descriptor indicating a difference between the position of the first eye component in a display field and the position of the second eye view component in the display field; receiving data representing a map of the first eye view component of each section and data representing a comparative map for each section, each comparative map representing a comparison of the respective first eye component and a respective second eye view component; generating a map of the second eye view component of each section based on the received data representing the respective map of the first eye view component and the received data representing the respective comparative map; and generating a display on the basis of the map of the first eye view component, the map of the second eye view component and the display offset descriptor.
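A minimal sketch of the receiving side, assuming the comparative map uses a reserved "no difference" entry (the reserved value and the function name are illustrative only):

```python
RESERVED = 0x00  # assumed map entry meaning "no difference"; the actual value
                 # would be agreed between encoder and decoder

def rebuild_second_eye_row(first_eye_row, comparative_row):
    """Regenerate one row of the second eye view map: copy the first eye pixel
    where the comparative map holds the reserved entry, otherwise take the
    pixel carried in the comparative map."""
    return [first if delta == RESERVED else delta
            for first, delta in zip(first_eye_row, comparative_row)]
```

The display may then be generated by drawing the first eye view component and the regenerated second eye view component into their respective display fields, separated horizontally by the received display offset descriptor.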
In accordance with a sixth aspect of the invention there is provided apparatus for receiving a stream of stereoscopic images as an overlay to a stereoscopic broadcast, the overlay being received for optional display at a receiver in combination with the stereoscopic broadcast, wherein the stream of stereoscopic images comprises a plurality of sections, each section having a first eye view component and a second eye view component, the apparatus being arranged to: receive a display offset descriptor indicating a difference between the position of the first eye component in a display field and the position of the second eye view component in the display field; receive data representing a map of the first eye view component of each section and data representing a comparative map for each section, each comparative map representing a comparison of the respective first eye component and a respective second eye view component; generate a map of the second eye view component of each section based on the received data representing the respective map of the first eye view component and the received data representing the respective comparative map; and generate a display on the basis of the map of the first eye view component, the map of the second eye view component and the display offset descriptor.
Further features and advantages of the invention will be apparent from the following description of preferred embodiments of the invention, which are given by way of example only.
Brief Description of the Drawings
Figure 1 is a schematic diagram illustrating display of subtitles according to the prior art;
Figures 2a and 2b are schematic diagrams illustrating stereoscopic projection according to the prior art;
Figure 3 is a schematic diagram showing 3D subtitles in an embodiment of the invention;
Figure 4 is a schematic diagram illustrating a composite symbol in an embodiment of the invention;
Figure 5 is a schematic diagram illustrating encoding of bit maps of left and offset right eye view components in an embodiment of the invention;
Figure 6 is a schematic diagram illustrating encoding of a bit map of a left eye view component and a comparative map in an embodiment of the invention;
Figure 7 is a flow chart illustrating encoding of a symbol using a colour look-up table according to the prior art;
Figure 8 is a flow chart illustrating encoding of a symbol using a colour look-up table in an embodiment of the invention;
Figure 9 is a flow chart illustrating encoding of a symbol using a colour look-up table in an embodiment of the invention;
Figure 10 is a schematic diagram showing display of 3D subtitles in an embodiment of the invention; and
Figure 11 is a schematic diagram illustrating derivation of a comparative map for a section of a stream of stereoscopic subtitle images in an embodiment of the invention.
Detailed Description of the Invention
By way of example, embodiments of the invention will now be described in the context of a digital video broadcast system. However, it will be understood that this is by way of example only and that other embodiments may involve other systems for transmitting or displaying stereoscopic images such as video games systems.
Figure 1 shows how a stream of images, such as a stream of subtitle images, may be displayed in a conventional two dimensional video display. The overlay may be displayed in one or more regions 2a, 2b that overlay a broadcast video image. Display of the overlay images may be optional, and alternative overlay images, such as subtitles in alternative languages, may be selected.
Characters, or groups of characters, in the overlay image may be represented by bit maps 4a, 4b, 4c, 4d, 4e, which may be delivered as part of the broadcast stream. The bit maps may be copied by the decoder from the broadcast stream into a graphics buffer, which is then blended with the separately delivered broadcast video.
Figures 2a and 2b illustrate the principle of stereoscopic projection as used in the generation and display of stereoscopic images, that is to say three dimensional (3D) images. The view of an object 6 will be different from the point of view of a left eye and a right eye as illustrated by Figure 2a. Figure 2b shows the left eye view components and right eye view components superimposed, as for example they may be if they are displayed alternately on a display screen. A filtering device may be provided to direct each image to the appropriate eye of a viewer. For example, the viewer may be equipped with a pair of spectacles that filter out certain colours or polarisation states. It should be noted that corresponding parts of the left and right eye view components are typically offset from one another in the display, that is to say the display field, and also the perspective views between the two components may differ.
Stereoscopic, that is to say 3D, video may be delivered over a broadcast stream in what may be referred to as Frame Compatible mode. In this case, the video is delivered in the same way as current 2D video but with the video image split in two, either horizontally or vertically: one half intended for the left eye and one half for the right eye. The decoder takes each video image and separates the two halves into Left and Right images. Each image is scaled to fill the complete video display area. Alternatively, the stereoscopic video may be delivered in Service Compatible mode. The same video is delivered as for a 2D image, but with additional information that enables the decoder to modify the broadcast image to build up the Left image and the Right image. Frame Compatible mode is typically employed in current systems, but embodiments of the invention may be applied to either mode.
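For illustration only, the sketch below separates a side by side Frame Compatible picture into left and right eye pictures and restores the full width by naive pixel doubling; a real decoder would interpolate, and the representation of a frame as a list of pixel rows is an assumption of this example.

```python
def split_side_by_side(frame):
    """Split a side-by-side frame-compatible picture (a list of pixel rows)
    into left- and right-eye pictures, then restore full width by doubling
    each pixel (a stand-in for real scaling)."""
    half = len(frame[0]) // 2
    left = [[p for p in row[:half] for _ in (0, 1)] for row in frame]
    right = [[p for p in row[half:] for _ in (0, 1)] for row in frame]
    return left, right
```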
Conventionally, the overlay, such as subtitles, is transmitted in a two dimensional form that is back-compatible with two dimensional receivers, together with an indication of a horizontal offset, that is to say a display offset descriptor, that should be applied between the positions that components of the overlay occupy within the display area in one eye view and in the other eye view. The choice of horizontal offset typically determines the position within a three dimensional view where the overlay will appear to be located in terms of depth, that is to say in the z-direction, i.e. front-to-back. However, an overlay transmitted and displayed in this way will appear to be "razor-thin", as it has no apparent bulk or thickness, which may give an unsatisfactory viewing experience.
An embodiment of the present invention relates to a method of transmitting a stream of stereoscopic images as an overlay to a stereoscopic broadcast, the overlay being transmitted for optional display at a receiver in combination with the stereoscopic broadcast.
In this embodiment the overlay is transmitted as two different components, a left eye view 12a and a right eye view 12b, namely a first eye view component and a second eye view component, respectively, as shown in Figure 3. The two components will typically differ, due to the differing viewpoints of three dimensional components of the overlay. As shown in Figure 3, the left eye view component 12a and the right eye view component 12b may be displayed at different positions within the display field for the left eye 16 and the display field for the right eye 18. The left eye component may be offset by a first distance 14a from the left side of the display area, whereas the right eye component may be offset by a second distance 14b from the left side of the display area. Figure 10 shows a block of text illustrating a left eye view component 46 and a right eye view component 28.
In this embodiment, the method comprises: dividing the stream of stereoscopic images into a plurality of sections, each section having a first eye view component and a second eye view component as for example described above; and determining a map to represent the first eye view component of each said section.
Further, in accordance with this embodiment, a comparative map for each section representing a comparison between the first and second eye view components for the respective section is derived, and the respective map for each section representing the first eye view components is encoded using run length encoding to produce an encoded first eye view symbol, and the respective comparative map for each said section is encoded using run length encoding to produce an encoded comparison symbol. Figure 11 illustrates a comparative map 52 representing the differences between the left eye view 50 and right eye view 54.
Then, as illustrated by Figure 4, and referring to bitmaps illustrated therein using reference numerals 20, 22, 24, 26, 28, 30, a composite symbol for each section may be assembled comprising the respective encoded first eye view symbol followed by a first ending symbol, followed by the respective encoded comparison symbol and a second ending symbol. The encoded composite symbols may then be transmitted. Encoding the comparative map, typically a bit map, for each section using run length encoding may be particularly efficient in terms of data compression. For a typical overlay, such as for example subtitles, the differences between the left eye and right eye view components may typically correspond to the edges of characters within the overlay, with large parts of the left eye and right eye view components being similar. The comparative map, typically a bit map, may therefore be likely to contain large parts containing an indication that there is no difference between the views.
Run-length encoding is particularly efficient for encoding such content.
Figure 5 illustrates the case in which both the left eye view 34 and right eye view 36 are run length encoded to produce a composite symbol comprising an encoded left eye view 38 and right eye view 40. The left and right eye views 34, 36 are based on original text 32. Figure 6 illustrates the saving in data capacity that may be achieved by generating a comparative map 42, which may then be run length encoded to produce an encoded comparative map 44.
Features labelled in Figure 6 are similar to features of Figure 5, and are labelled with corresponding reference numerals.
The composite symbol may be receivable by a conventional receiver for receiving two dimensional images, which may recognise the encoded first eye view symbol and the first ending symbol, but disregard the encoded comparison symbol and the second ending symbol.
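A sketch of that backward-compatible behaviour follows, using the same (value, run length) representation as the earlier sketches; the parsing helper is illustrative, not part of any particular receiver implementation.

```python
def decode_row_as_2d(composite_row, first_ending_symbol):
    """A 2D-only decoder keeps just the first eye view: it reads run length
    pairs until it meets the first ending symbol and discards the rest of the
    row (the encoded comparison symbol and the second ending symbol)."""
    first_eye_rle = []
    for item in composite_row:
        if item == first_ending_symbol:
            break
        first_eye_rle.append(item)
    return first_eye_rle
```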
A descriptor may be transmitted representing a display offset, which for example is a horizontal offset, indicating a difference between the position of the first eye component in a display field and the position of the second eye view component in the display field. The display offset, for example the horizontal offset, may be determined so as to reduce differences between a first eye view component and an offset second eye view component, so that the amount of data in each encoded comparison symbol may be reduced. The display offset descriptor may relate to a position of the overlay in depth, that is to say z-direction, and data in the comparison symbol may be used to represent data relating to a three dimensional representation of objects within the overlay, for example giving apparent thickness to subtitle characters. Transmitting the combination of the descriptor of the display offset, for example a horizontal offset, and the composite symbol comprising the encoded comparison symbol gives an efficient way of transmitting the overlay so that a receiver may display a three dimensional representation of objects within the overlay.
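One possible way of choosing such an offset is sketched below: an exhaustive search over a small range of candidate horizontal offsets, keeping the one that leaves the fewest differing pixels between the first eye view and the shifted second eye view. The search range and the treatment of pixels shifted outside the section are assumptions made for the example.

```python
def count_differences(first_rows, second_rows, offset):
    """Count pixels that differ when the second eye view is shifted
    horizontally by `offset` relative to the first eye view."""
    diffs = 0
    for first, second in zip(first_rows, second_rows):
        width = len(first)
        for x in range(width):
            sx = x - offset
            second_pixel = second[sx] if 0 <= sx < width else 0
            if first[x] != second_pixel:
                diffs += 1
    return diffs

def choose_display_offset(first_rows, second_rows, search_range=16):
    """Pick the offset whose shifted comparison produces the fewest
    differences, keeping the encoded comparison symbol small."""
    return min(range(-search_range, search_range + 1),
               key=lambda off: count_differences(first_rows, second_rows, off))
```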
The display offset, in this example a horizontal offset, for each section may be the same. In this case, each of the sections of the overlay may appear at the same depth. A first value of horizontal offset may be applied for first sections selected from the plurality of sections for display in a first region of a display and a second value of horizontal offset may be applied for second sections selected from the plurality of sections for display in a second region of a display, so that sections of the overlay for display in different regions may appear at different depths. For example, speech bubbles may be attached to different individuals at an appropriate depth for the position of the individual in a field of view in the stereoscopic broadcast.
As illustrated by Figure 8, the comparative map for each section may be derived on the basis of a difference between maps for the first and second eye view components for the respective section, which may be for example on the basis of a subtraction of element values in a map for the second eye view component for the respective section from corresponding element values in the map for the first eye view component for the respective section. Alternatively, as illustrated by Figure 9, the comparative map for each section may be derived on a basis comprising a comparison of an element value in a map for the second eye view component for the respective section with a corresponding element value in the map for the first eye view component for the respective section. An element value in the comparative map may be represented by the respective element value in the map for the second eye view component if the respective element values for the first and offset second eye view components are different and an element value in the comparative map may be represented by a reserved entry if the respective element values for the first and offset second eye view components are the same, so that the comparative map may be easily decoded at the receiver.
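The two derivations can be sketched as follows; the modulo arithmetic and the reserved value are assumptions made for illustration, and the maps are taken to be rows of colour look-up table indices.

```python
RESERVED = 0x00  # assumed map entry reserved to mean "no difference"

def comparative_by_subtraction(first_row, second_row, clut_size=256):
    """Figure 8 style derivation: element-wise subtraction of the second eye
    view map from the first eye view map (shown modulo the CLUT size)."""
    return [(f - s) % clut_size for f, s in zip(first_row, second_row)]

def comparative_by_reserved_entry(first_row, second_row):
    """Figure 9 style derivation: keep the second eye pixel where the two
    views differ, otherwise store the reserved "no difference" entry."""
    return [s if f != s else RESERVED for f, s in zip(first_row, second_row)]
```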
The composite symbol may comprise a plurality of rows, each row representing a line of an image, and each row of the composite symbol may comprise a run length encoded row of the first eye view symbol followed by a first ending symbol, the first ending symbol being followed by the corresponding run length encoded row of the comparison symbol and a second ending symbol.
This has an advantage that the composite symbol may be easily decoded at the receiver, and that the composite symbol may be back-compatible with existing receivers for two-dimensional video, since these may recognise the first ending symbol and decode the first eye view symbol but may disregard the comparison symbol.
As illustrated by Figure 7, in particular method steps labelled S7.1, S7.2, S7.3, S7.4 and S7.5, a colour look up table may be compiled to represent colours of the stream of stereoscopic images, the respective maps comprising values from the colour look up table. This may provide efficient data compression when encoding the symbols, since it is likely that an overlay, such as subtitles, may comprise a limited number of colours. The respective maps may be bit maps having elements each representing a pixel.
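A minimal sketch of this step, with illustrative names and a naive palette-building strategy (a real encoder would quantise colours rather than fail when the limit is exceeded):

```python
def compile_clut(rows, max_colours=256):
    """Collect the distinct colours used by the overlay into a colour look up
    table of at most max_colours entries."""
    clut = []
    for row in rows:
        for colour in row:
            if colour not in clut:
                clut.append(colour)
    if len(clut) > max_colours:
        raise ValueError("overlay would need colour quantisation to fit the CLUT")
    return clut

def to_index_map(rows, clut):
    """Replace each pixel colour by its CLUT index, producing the bit map
    whose rows are then run length encoded."""
    lookup = {colour: index for index, colour in enumerate(clut)}
    return [[lookup[pixel] for pixel in row] for row in rows]
```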
The colour look up table represents a given number of colours chosen to represent the stream of stereoscopic images, typically 256 colours or fewer.
The stream of images is typically a stream of subtitle images, and each section may represent for example a character, a word, or a line of text.
Embodiments of the invention will now be described in more detail.
An example of syntax that may be used in an embodiment of the invention is shown in Table 1 and Table 2 as follows.
Syntax                                                          Size    Type
3d_object_data_segment() {
    sync_byte                                                    8      bslbf
    segment_type                                                 8      bslbf
    page_id                                                     16      bslbf
    segment_length                                              16      uimsbf
    object_id                                                   16      bslbf
    object_version_number                                        4      uimsbf
    object_coding_method                                         2      bslbf
    non_modifying_colour_flag                                    1      bslbf
    reserved                                                     1      bslbf
    3d_offset                                                    8      uimsbf
    if (object_coding_method == '00') {
        top_field_data_block_length                             16      uimsbf
        bottom_field_data_block_length                          16      uimsbf
        while (processed_length < top_field_data_block_length)
            3d_pixel-data_sub-block()
        while (processed_length < bottom_field_data_block_length)
            3d_pixel-data_sub-block()
        if (!wordaligned())
            8_stuff_bits                                         8      bslbf
    }
    if (object_coding_method == '01') {
        number_of_codes                                          8      uimsbf
        for (i = 1; i <= number_of_codes; i++)
            character_code                                      16      bslbf
    }
}

Table 1

Syntax                                                          Size    Type
3d_pixel-data_sub-block() {
    data_type                                                    8      bslbf
    if (data_type == '0x10') {
        repeat {
            2-bit/pixel_code_string()   (* left eye *)
        } until (end of 2-bit/pixel_code_string)
        repeat {
            2-bit/pixel_code_string()   (* right eye *)
        } until (end of 2-bit/pixel_code_string)
        while (!bytealigned())
            2_stuff_bits                                         2      bslbf
    }
    if (data_type == '0x11') {
        repeat {
            4-bit/pixel_code_string()   (* left eye *)
        } until (end of 4-bit/pixel_code_string)
        repeat {
            4-bit/pixel_code_string()   (* right eye *)
        } until (end of 4-bit/pixel_code_string)
        while (!bytealigned())
            4_stuff_bits                                         4      bslbf
    }
    if (data_type == '0x12') {
        repeat {
            8-bit/pixel_code_string()   (* left eye *)
        } until (end of 8-bit/pixel_code_string)
        repeat {
            8-bit/pixel_code_string()   (* right eye *)
        } until (end of 8-bit/pixel_code_string)
    }
    if (data_type == '0x20')
        2_to_4-bit_map-table                                    16      bslbf
    if (data_type == '0x21')
        2_to_8-bit_map-table                                    32      bslbf
    if (data_type == '0x22')
        4_to_8-bit_map-table                                   128      bslbf
}

Table 2
A new segment type may be added to carry 3D objects, that is to say a stream of stereoscopic images as an overlay to a stereoscopic broadcast, such as bitmaps and characters, and to define their 3D disparity between left and right eye views. This field could take the next free value of 0x15. A new 3d_object_data_segment may be defined largely following the existing object_data_segment, but there may be key differences. A "3d_offset" field may be added, that is to say a display offset descriptor, which defines for example a horizontal offset between the left and right eye view components, that is to say images, and which sets the "depth" of the object that will be "seen" by the viewer. A new sub-element 3d_pixel-data_sub-block() may be defined, which may be based on the existing pixel-data_sub-block() but with two sets of data provided for each line of the image, one for the left eye and one for the right eye.
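To make the field layout concrete, the sketch below parses the fixed header of such a segment following the widths listed in Table 1; the helper itself, its lack of error handling, and the choice of 0x15 as the segment type value are illustrative only, and the pixel data sub-blocks are not parsed.

```python
import struct

def parse_3d_object_data_segment(data: bytes):
    """Parse the fixed header of the 3d_object_data_segment sketched in
    Table 1 (sync_byte, segment_type, page_id, segment_length, object_id,
    version/coding/flag byte, then the 3d_offset display offset descriptor)."""
    sync_byte, segment_type = data[0], data[1]
    page_id, segment_length, object_id = struct.unpack_from(">HHH", data, 2)
    version_and_flags = data[8]                  # 4 + 2 + 1 + 1 bits
    object_version_number = version_and_flags >> 4
    object_coding_method = (version_and_flags >> 2) & 0x3
    three_d_offset = data[9]                     # the "3d_offset" field
    return {
        "segment_type": segment_type,
        "page_id": page_id,
        "segment_length": segment_length,
        "object_id": object_id,
        "object_version_number": object_version_number,
        "object_coding_method": object_coding_method,
        "3d_offset": three_d_offset,
    }
```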
The pixel data for the right eye, that is to say the map of the right eye view component, could be coded as a complete independent image or as the difference from the left eye.
Current standards describe how subtitle data is delivered as bitmaps, typically defined by a series of descriptors called "segments". Currently there are: Display Definition Segment (DDS), Page Composition Segment (PCS), Region Composition Segment (RCS), CLUT Definition Segment (CDS), Object Data Segment (ODS), and End of Display set Segment (EDS).
The standards have evolved gradually as TV requirements have changed.
For example, the Display Definition Segment was introduced to provide compatibility with modern high definition (HD) displays and overcome the limitations of the fixed 720 x 576 resolution provided originally. In an embodiment of the invention, a new descriptor is introduced to support 3-D displays. This may be called, for example, the "3-D Definition Segment" (3DS).
Introducing a new descriptor may be preferable to modifying the format of existing descriptors, for reasons of backward compatibility as described earlier.
The existing data model of a conventional subtitle screen may comprise: one page (pages display in sequence); multiple regions in a page; multiple CLUTs (Colour Lookup Tables) in a page with each region potentially using a specific CLUT; and/or multiple objects in a page. An object may be decoded into any number of regions.
An embodiment of the invention may relate to bitmaps rather than character-coded objects, and the subtitle display may normally be used to render simple textual graphics with a small number of colours. There may be no need to define any more regions or CLUTs for the left and right images, and the differences between left- and right-eye subtitle images may be relatively minor, since subtitles often contain large blocks of solid colour. The 3-D subtitle image may be constructed by taking a base 2-D subtitle image as authored and applying "delta" image overlays to make images for separate left-eye and right-eye views.
A legacy 2-D capable receiver may process just the base image, as it may not understand or decode the new 3-D descriptor. In an embodiment of the invention, a new 3-D Definition Segment (3DS) may contain an object ID, which may be a reference to an existing object in the display set; a region ID, which may be a reference to an existing region in the display set; a sequence of pixel codes, in the same format as the existing Object Data Segment (ODS); and/or a flag indicating whether this overlay applies to the left-eye or right-eye image.
The subtitle decoder may process these 3DS segments by decoding them and overlaying the resulting image onto each object with a matching ID and in a matching region; an object in a 3-D display may look different depending on the region into which it is decoded.
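A sketch of that overlay step, assuming both images are held as rows of CLUT indices and that a marker value means "leave the base pixel unchanged" (both assumptions of this example, not of the segment format):

```python
TRANSPARENT = None  # assumed marker for "leave the base pixel unchanged"

def apply_3ds_overlay(base_object, overlay):
    """Copy the decoded base 2-D object and draw the decoded 3DS delta over
    it, producing the view for whichever eye the 3DS flag indicates."""
    result = [row[:] for row in base_object]     # start from the base image
    for y, row in enumerate(overlay):
        for x, pixel in enumerate(row):
            if pixel is not TRANSPARENT:
                result[y][x] = pixel             # only differing pixels are drawn
    return result
```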
Embodiments of the invention may meet the dual requirements of backwards compatibility and minimum bandwidth by recognising that DVB subtitle images are generally simple bitmaps, and building a solution that re-uses existing DVB subtitle compression technology to apply image overlays using very small sequences of pixel strings.
In an embodiment of the invention, a page can have a variable number of rectangular regions defined within it. Regions may be containers for objects, and each region may have a CLUT associated with it. A region may effectively be a placeholder for objects. Objects may be compressed data structures, which when decompressed will form the bitmaps on screen. The compression scheme may be a simple run-length encoding system, which is ideal for the kind of graphics that are typically used in subtitles (plain text on a uniform background colour). In an embodiment of the invention, objects can be re-used. For example, as illustrated by Figure 1, a character, for example an upper case 'H', may be rendered only once, at the left hand side of a region. Another character, e.g. a lower case 'l', by comparison can be rendered several times in different places, which may be a useful way to reduce transmission bandwidth, although it should be noted that this may rarely happen in practice, as subtitle encoders tend to transmit whole words or sentences as single large objects.
In an embodiment of the invention, colours defined by the objects are not true colours, but indexes into a CLUT. If a different CLUT is assigned to regions 1 and 2, as illustrated by Figure 1, the objects may appear in a different colour in each region, even when the same object is re-rendered in each.
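The following sketch illustrates that indirection: the same object, decoded into two regions with different CLUTs, renders in different colours. The CLUT contents shown here are arbitrary examples.

```python
def render_object_in_region(object_indices, region_clut):
    """Resolve an object's pixel indices through the CLUT attached to the
    region it is decoded into, so the same object can appear in different
    colours in different regions."""
    return [[region_clut[index] for index in row] for row in object_indices]

# Illustrative CLUTs for two regions (RGB triples are arbitrary examples)
clut_region_1 = {0: (0, 0, 0), 1: (255, 255, 255)}
clut_region_2 = {0: (0, 0, 0), 1: (255, 255, 0)}
glyph = [[0, 1, 0], [1, 1, 1], [0, 1, 0]]
white_glyph = render_object_in_region(glyph, clut_region_1)
yellow_glyph = render_object_in_region(glyph, clut_region_2)
```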
In an embodiment of the invention, for any given textual subtitle string with a 3D appearance (also known as volume), the left and right eye views, that is to say the first eye view component and second eye view component, respectively, may differ only by the highlighting that is necessary to give an impression of volume, so that a receiver may construct a right eye subtitle view by simply repeating the left-eye view and then drawing over this view with the differences.
There are several potential advantages to this technique. Decoding run-length encoded objects may be relatively expensive in CPU time. Therefore processing resources may be saved by re-using the left-eye image which has already been decoded into a frame buffer. Typically, the objects representing the difference between one eye view and another are smaller than the objects needed to describe the entire view, saving transmission bandwidth. An optimising encoder may find instances of repetition, and reduce the transmission bandwidth further. For example, there may be three instances of the "l" character and two instances of another repeated character; these difference objects only need to be transmitted once each, and can be referenced multiple times in the right eye regions.
As has been shown, embodiments of the invention may provide backward compatibility by preserving the existing subtitle format such that a 2D-only receiver will show just the left-eye view, and may re-use the subtitle region / object model to draw over one view and create another.
The above embodiments are to be understood as illustrative examples of the invention. It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, or any combination of any other of the embodiments. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the invention, which is defined in the accompanying claims.

Claims (26)

  1. A method of transmitting a stream of stereoscopic images as an overlay to a stereoscopic broadcast, the overlay being transmitted for optional display at a receiver in combination with the stereoscopic broadcast, the method comprising: dividing the stream of stereoscopic images into a plurality of sections, each section having a first eye view component and a second eye view component; determining a map to represent the first eye view component of each said section; deriving a comparative map for each section representing a comparison between the first and second eye view components for the respective section; encoding the respective map for each section representing the first eye view components using run length encoding to produce an encoded first eye view symbol; encoding the respective comparative map for each said section using run length encoding to produce an encoded comparison symbol; assembling a composite symbol for each section comprising the respective encoded first eye view symbol followed by a first ending symbol, followed by the respective encoded comparison symbol and a second ending symbol; and transmitting the composite symbols.
  2. A method according to claim 1, the method comprising: transmitting a display offset descriptor indicating a difference between the position of the first eye component in a display field and the position of the second eye view component in the display field.
  3. 3. A method according to claim 2, the method comprising: determining the display offset descriptor so as to reduce differences between a first eye view component and an offset second eye view component.
  4. 4. A method according to claim 2 or claim 3, wherein the display offset descriptor for each section is the same.
  5. 5. A method according to claim 2 or claim 3, wherein a first value of display offset descriptor is transmitted relating to first sections selected from the plurality of sections for display in a first region of a display and a second value of display offset descriptor is transmitted relating to second sections selected from the plurality of sections for display in a second region of a display.
  6. 6. A method according to any preceding claim, the method comprising: deriving the comparative map for each section on the basis of a difference between maps for the first and second eye view components for the respective section.
  7. 7. A method according to any preceding claim, the method comprising: deriving the comparative map for each section on the basis of a subtraction of element values in a map for the second eye view component for the respective section from corresponding element values in the map for the first eye view component for the respective section.
  8. A method according to any of claims 1 to 6, the method comprising: deriving the comparative map for each section on a basis comprising a comparison of an element value in a map for the second eye view component for the respective section with a corresponding element value in the map for the first eye view component for the respective section; representing an element value in the comparative map by the respective element value in the map for the second eye view component if the respective element values for the first and second eye view components are different; and representing an element value in the comparative map by a reserved entry if the respective element values for the first and second eye view components are the same.
  9. 9. A method according to any preceding claim, wherein the composite symbol comprises a plurality of rows, each row representing a line of an image, and each row of the composite symbol comprises a run length encoded row of the first eye view symbol followed by a first ending symbol, the first ending symbol being followed by corresponding run length encoded row of the comparison symbol and a second ending symbol.
  10. 10. A method according to any preceding claim, the method comprising: compiling a colour look up table to represent colours of the stream of stereoscopic images, wherein the respective maps comprise values from the colour look up table.
  11. 11. A method according to claim 10, wherein the respective maps comprising values from the colour look up table are bit maps having elements each representing a pixel.
  12. 12. A method according to claim 10 or claim 11, wherein the colour look up table represents a given number of colours chosen to represent the stream of stereoscopic images.
  13. 13. A method according to claim 12, wherein the given number of colours is 256 or fewer.
  14. 14. A method according to any preceding claim, wherein the stream of images is a stream of subtitle images.
  15. 15. A method according to any preceding claim, wherein each said section represents a character.
  16. 16. A method according to any of claims 1 to 14, wherein each said section represents a word.
  17. 17. A method according to any of claims 1 to 14, wherein each said section represents a line of text.
  18. 18. A method according to any preceding claim, wherein the stereoscopic broadcast comprises a video data stream comprising a first part representing a first eye view and a second part representing a second eye view.
  19. 19. A method according to claim 18, wherein the first part of said video data stream and the second part of said video data stream comprise substantially the same amount of data.
  20. 20. A method according to claim 18 or claim 19, wherein said video data stream is transmitted without information representing a comparison of the first part representing a first eye view and the second part representing a second eye view.
  21. 21. A method according to claims 18 to 20, wherein said video data stream is transmitted without information enabling the second eye view to be decoded on the basis of the first eye view.
  22. 22. Apparatus for transmitting a stream of stereoscopic images as an overlay to a stereoscopic broadcast, the overlay being transmitted for optional display at a receiver in combination with the stereoscopic broadcast, the apparatus being arranged to: divide the stream of stereoscopic images into a plurality of sections, each section having a first eye view component and a second eye view component; determine a map to represent the first eye view component of each said section; derive a comparative map for each section representing a comparison between the first and second eye view components for the respective section; encode the respective map for each section representing the first eye view components using run length encoding to produce an encoded first eye view symbol; encode the respective comparative map for each said section using run length encoding to produce an encoded comparison symbol; assemble a composite symbol for each section comprising the respective encoded first eye view symbol followed by a first ending symbol, followed by the respective encoded comparison symbol and a second ending symbol; and transmit the composite symbols.
  23. A method of transmitting a stream of stereoscopic images as an overlay to a stereoscopic broadcast, the overlay being transmitted for optional display at a receiver in combination with the stereoscopic broadcast, the stream of stereoscopic images comprising a plurality of sections, each section having a first eye view component and a second eye view component, the method comprising: determining respective maps to represent the first and second eye view components of each section; transmitting data generated from the map representing the first eye view component and data generated from the map representing the second eye view for each respective section; and transmitting a display offset descriptor indicating a difference between the position of the first eye component in a display field and the position of the second eye view component in the display field.
  24. Apparatus for transmitting a stream of stereoscopic images as an overlay to a stereoscopic broadcast, the overlay being transmitted for optional display at a receiver in combination with the stereoscopic broadcast, the stream of stereoscopic images comprising a plurality of sections, each section having a first eye view component and a second eye view component, the apparatus being arranged to: determine respective maps to represent the first and second eye view components of each section; transmit data generated from the map representing the first eye view component and data generated from the map representing the second eye view for each respective section; and transmit a display offset descriptor indicating a difference between the position of the first eye component in a display field and the position of the second eye view component in the display field.
  25. A method of receiving a stream of stereoscopic images as an overlay to a stereoscopic broadcast, the overlay being received for optional display at a receiver in combination with the stereoscopic broadcast, wherein the stream of stereoscopic images comprises a plurality of sections, each section having a first eye view component and a second eye view component, the method comprising: receiving a display offset descriptor indicating a difference between the position of the first eye component in a display field and the position of the second eye view component in the display field; receiving data representing a map of the first eye view component of each section and data representing a comparative map for each section, each comparative map representing a comparison of the respective first eye component and a respective second eye view component; generating a map of the second eye view component of each section based on the received data representing the respective map of the first eye view component and the received data representing the respective comparative map; and generating a display on the basis of the map of the first eye view component, the map of the second eye view component and the display offset descriptor.
  26. Apparatus for receiving a stream of stereoscopic images as an overlay to a stereoscopic broadcast, the overlay being received for optional display at a receiver in combination with the stereoscopic broadcast, wherein the stream of stereoscopic images comprises a plurality of sections, each section having a first eye view component and a second eye view component, the apparatus being arranged to: receive a display offset descriptor indicating a difference between the position of the first eye component in a display field and the position of the second eye view component in the display field; receive data representing a map of the first eye view component of each section and data representing a comparative map for each section, each comparative map representing a comparison of the respective first eye component and a respective second eye view component; generate a map of the second eye view component of each section based on the received data representing the respective map of the first eye view component and the received data representing the respective comparative map; and generate a display on the basis of the map of the first eye view component, the map of the second eye view component and the display offset descriptor.
GB1021919.4A 2010-12-23 2010-12-23 Improvements to subtitles for three dimensional video transmission Expired - Fee Related GB2488746B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
GB1021919.4A GB2488746B (en) 2010-12-23 2010-12-23 Improvements to subtitles for three dimensional video transmission
PCT/KR2011/009829 WO2012086990A2 (en) 2010-12-23 2011-12-20 Improvements to subtitles for three dimensional video transmission
EP11850047.9A EP2656614A4 (en) 2010-12-23 2011-12-20 Improvements to subtitles for three dimensional video transmission
CN201180062524.8A CN103270758B (en) 2010-12-23 2011-12-20 Improvements to subtitles for three dimensional video transmission
KR1020137013819A KR101846857B1 (en) 2010-12-23 2011-12-20 Improvements to Subtitles for Three Dimensional Video Transmission

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1021919.4A GB2488746B (en) 2010-12-23 2010-12-23 Improvements to subtitles for three dimensional video transmission

Publications (3)

Publication Number Publication Date
GB201021919D0 GB201021919D0 (en) 2011-02-02
GB2488746A true GB2488746A (en) 2012-09-12
GB2488746B GB2488746B (en) 2016-10-26

Family

ID=43598953

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1021919.4A Expired - Fee Related GB2488746B (en) 2010-12-23 2010-12-23 Improvements to subtitles for three dimensional video transmission

Country Status (5)

Country Link
EP (1) EP2656614A4 (en)
KR (1) KR101846857B1 (en)
CN (1) CN103270758B (en)
GB (1) GB2488746B (en)
WO (1) WO2012086990A2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017180141A1 (en) 2016-04-14 2017-10-19 Lockheed Martin Corporation Selective interfacial mitigation of graphene defects

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100118119A1 (en) * 2006-10-11 2010-05-13 Koninklijke Philips Electronics N.V. Creating three dimensional graphics data
US20100142924A1 (en) * 2008-11-18 2010-06-10 Panasonic Corporation Playback apparatus, playback method, and program for performing stereoscopic playback
WO2010064853A2 (en) * 2008-12-02 2010-06-10 Lg Electronics Inc. 3d caption display method and 3d display apparatus for implementing the same
WO2010064118A1 (en) * 2008-12-01 2010-06-10 Imax Corporation Methods and systems for presenting three-dimensional motion pictures with content adaptive information
US20100215347A1 (en) * 2009-02-20 2010-08-26 Wataru Ikeda Recording medium, playback device, integrated circuit
WO2010095074A1 (en) * 2009-02-17 2010-08-26 Koninklijke Philips Electronics N.V. Combining 3d image and graphical data
WO2010099495A2 (en) * 2009-02-27 2010-09-02 Laurence James Claydon Systems, apparatus and methods for subtitling for stereoscopic content
US20100238267A1 (en) * 2007-03-16 2010-09-23 Thomson Licensing System and method for combining text with three dimensional content
US20100302234A1 (en) * 2009-05-27 2010-12-02 Chunghwa Picture Tubes, Ltd. Method of establishing dof data of 3d image and system thereof

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101917616A (en) * 2004-02-27 2010-12-15 TDVision Co., Ltd. Method and system for digital coding of three-dimensional video images
KR20070011340A (en) * 2006-09-27 2007-01-24 TDVision Corporation S.A. de C.V. Method and system for digital coding 3d stereoscopic video images
KR101396619B1 (en) * 2007-12-20 2014-05-16 삼성전자주식회사 System and method for generating and playing three dimensional image file including additional information on three dimensional image
KR101430474B1 (en) * 2008-02-19 2014-08-18 엘지전자 주식회사 Method and Apparatus for display of 3-D images
US8265453B2 (en) * 2008-06-26 2012-09-11 Panasonic Corporation Recording medium, playback apparatus, recording apparatus, playback method, recording method, and program
KR101305789B1 (en) * 2009-01-22 2013-09-06 서울시립대학교 산학협력단 Method for processing non-real time stereoscopic services in terrestrial digital multimedia broadcasting and apparatus for receiving terrestrial digital multimedia broadcasting
EP2211556A1 (en) * 2009-01-22 2010-07-28 Electronics and Telecommunications Research Institute Method for processing non-real time stereoscopic services in terrestrial digital multimedia broadcasting and apparatus for receiving terrestrial digital multimedia broadcasting

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100118119A1 (en) * 2006-10-11 2010-05-13 Koninklijke Philips Electronics N.V. Creating three dimensional graphics data
US20100238267A1 (en) * 2007-03-16 2010-09-23 Thomson Licensing System and method for combining text with three dimensional content
US20100142924A1 (en) * 2008-11-18 2010-06-10 Panasonic Corporation Playback apparatus, playback method, and program for performing stereoscopic playback
WO2010064118A1 (en) * 2008-12-01 2010-06-10 Imax Corporation Methods and systems for presenting three-dimensional motion pictures with content adaptive information
WO2010064853A2 (en) * 2008-12-02 2010-06-10 Lg Electronics Inc. 3d caption display method and 3d display apparatus for implementing the same
WO2010095074A1 (en) * 2009-02-17 2010-08-26 Koninklijke Philips Electronics N.V. Combining 3d image and graphical data
US20100215347A1 (en) * 2009-02-20 2010-08-26 Wataru Ikeda Recording medium, playback device, integrated circuit
WO2010099495A2 (en) * 2009-02-27 2010-09-02 Laurence James Claydon Systems, apparatus and methods for subtitling for stereoscopic content
US20100302234A1 (en) * 2009-05-27 2010-12-02 Chunghwa Picture Tubes, Ltd. Method of establishing dof data of 3d image and system thereof

Also Published As

Publication number Publication date
WO2012086990A2 (en) 2012-06-28
GB2488746B (en) 2016-10-26
KR101846857B1 (en) 2018-04-09
KR20140000257A (en) 2014-01-02
CN103270758A (en) 2013-08-28
WO2012086990A3 (en) 2012-11-29
EP2656614A2 (en) 2013-10-30
GB201021919D0 (en) 2011-02-02
EP2656614A4 (en) 2017-05-03
CN103270758B (en) 2016-01-20

Similar Documents

Publication Publication Date Title
US9961325B2 (en) 3D caption display method and 3D display apparatus for implementing the same
US9253469B2 (en) Method for displaying 3D caption and 3D display apparatus for implementing the same
US9699439B2 (en) 3D caption signal transmission method and 3D caption display method
CN102415101B (en) Broadcast transmitter, broadcast receiver and 3d video data processing method thereof
CN104333746B (en) Broadcast receiver and 3d subtitle data processing method thereof
US20110119709A1 (en) Method and apparatus for generating multimedia stream for 3-dimensional reproduction of additional video reproduction information, and method and apparatus for receiving multimedia stream for 3-dimensional reproduction of additional video reproduction information
KR20110018262A (en) Method and apparatus for processing signal for 3-dimensional display of additional data
JP2012518367A (en) 3D video format
US20140078248A1 (en) Transmitting apparatus, transmitting method, receiving apparatus, and receiving method
CN101218827A (en) Method and device for coding a video content comprising a sequence of pictures and a logo
CN102292993A (en) Three-dimensional subtitle display method and three-dimensional display device for implementing the same
CN103597823A (en) Transmission device, transmission method and receiver device
EP2621177A1 (en) Transmission device, transmission method and reception device
CN102870419A (en) Three-dimensional image data encoding method and device and decoding method and device
KR101846857B1 (en) Improvements to Subtitles for Three Dimensional Video Transmission
EP2600620A1 (en) Transmission device, transmission method, and receiving device

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20181223