EP2524513A1 - Service linkage to caption disparity data transport - Google Patents

Service linkage to caption disparity data transport

Info

Publication number
EP2524513A1
Authority
EP
European Patent Office
Prior art keywords
data
service
mapped
disparity
extended service
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP11753806A
Other languages
German (de)
English (en)
Other versions
EP2524513A4 (fr)
Inventor
Mark Kenneth Eyer
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US13/022,828 (US8730301B2)
Application filed by Sony Corp
Publication of EP2524513A1
Publication of EP2524513A4

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/172Processing image signals image signals comprising non-image signal components, e.g. headers or format information
    • H04N13/183On-screen display [OSD] information, e.g. subtitles or menus

Definitions

  • FIGURE 1 is an example caption_disparity_data() command arrangement consistent with certain embodiments of the present invention.
  • FIGURE 2 is an example piecewise linear approximation of authored disparity data consistent with certain embodiments of the present invention.
  • FIGURE 3 is an example encoder consistent with certain embodiments of the present invention.
  • FIGURE 4 is an example decoder consistent with certain embodiments of the present invention.
  • FIGURE 5 is an example television receiver device consistent with certain embodiments of the present invention.
  • FIGURE 6 is an example block diagram depicting the various operations of a processor consistent with certain embodiments of the present invention.
  • FIGURE 7 is an example flow chart of a process consistent with certain embodiments of the present invention.
  • the terms “a” or “an”, as used herein, are defined as one or more than one.
  • the term “plurality”, as used herein, is defined as two or more than two.
  • the term “another”, as used herein, is defined as at least a second or more.
  • the terms “including” and/or “having”, as used herein, are defined as comprising (i.e., open language).
  • the term “coupled”, as used herein, is defined as connected, although not necessarily directly, and not necessarily mechanically.
  • the term "program" or "computer program" or similar terms, as used herein, is defined as a sequence of instructions designed for execution on a computer system.
  • a "program", or "computer program", may include a subroutine, a program module, a script, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, source code, object code, a shared library / dynamic load library and/or other sequence of instructions designed for execution on a computer system.
  • program may also be used in a second context (the above definition being for the first context).
  • the term is used in the sense of a "television program”.
  • the term is used to mean any coherent sequence of audio video content such as those which would be interpreted as and reported in an electronic program guide (EPG) as a single television program, without regard for whether the content is a movie, sporting event, segment of a multi-part series, news broadcast, etc.
  • EPG: electronic program guide
  • the term may also be interpreted to encompass commercial spots and other program-like content which may not be reported as a program in an electronic program guide.
  • the CC window and associated text is likely to be rendered in the plane of the screen unless steps are taken to render the window and text such that they appear at a different, more appropriate, perceived depth.
  • An object in a scene within 3D content may be presented such that it appears to the viewer to be some distance in front of the plane of the display screen. If a captioning window positioned in depth at the plane of the display were to be placed in front of that object, a "depth violation" would occur. In such a case, the viewer is presented with conflicting depth cues, a situation that causes eye fatigue and discomfort.
  • captioning may intersect in the z-axis with content in the scene if it is simply positioned at the screen plane
  • the presentation of captioning is preferably individually authored to the subject matter of the video presentation.
  • extra information can be sent along with the captions to define the perceived placement on the z-axis (a designated distance in front or behind the plane of the screen) of a window containing the caption text for effective presentation and to avoid interference with objects in the scene.
  • a number of techniques can be devised to provide this information, but many have disadvantages.
  • Stereoscopic 3D television involves delivery to the display screen of separate views for the left and right eyes, coupled with a method to allow each of the viewer's eyes to see only the image intended for that eye.
  • the illusion of depth is achieved when, for a given object, the left- and right-eye views differ in the horizontal position of that object's placement.
  • "display disparity" (i.e., disparity as measured on the screen)
  • retina disparity is defined as the difference of the physical x coordinates of corresponding points in the right and left images in a pair of aligned stereo images displayed on a screen.
  • when disparity is negative (e.g., when the left-eye image is rendered on the screen to the right of the right-eye image), the object is perceived as being in front of the plane of the screen.
  • when disparity is positive (e.g., when the left-eye image is rendered on-screen to the left of the right-eye image), the object is perceived as being behind the screen plane.
  • the term "disparity data" can refer to any data indicating the disparity value to be used in rendering a given graphical object, such as a caption window and its associated text.
  • the term can also be used more generally to refer to data reflecting the z-axis positions of objects in the current scene.
  • the scene area can be mapped out into regions, with the z-axis position of the object in each region that is closest to the viewer recorded as a coordinate in the map.
  • Such a map may be called a "disparity map” or a "depth map.”
  • Disparity maps can change on a frame-by-frame basis and can be represented in any number of ways. It is noted that disparity is a measure of the horizontal offset of left eye and right eye images, but the offset need not be in an integer number of pixels as fractional pixel offsets are perfectly acceptable.
  • disparity is generally represented as a percentage of the width of the accompanying video. As such, it is a dimensionless number.
  • the signaling scheme may specify that one unit of disparity is equal to 1/1920 of the width of the video content (which is generally rendered to match the width of the display screen). But, a disparity of 1/1920 is not the minimum increment in actual disparity even with a screen width of 1920. With this definition, a disparity of 7 refers to a distance equal to 7/1920 of the width of the video content.
  • disparity should most properly be viewed as the difference in the physical location on the screen along the x axis (horizontal) of corresponding points in left eye and right eye images in a pair of aligned stereoscopic images.
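The unit convention above (one disparity unit equals 1/1920 of the video width, with fractional pixel offsets allowed) can be sketched in a few lines; the function name `disparity_to_pixels` is illustrative and not part of any standard:

```python
def disparity_to_pixels(disparity_units: int, screen_width_px: int) -> float:
    """Convert signaled disparity units to a physical horizontal offset in
    pixels on a display of the given width. One unit equals 1/1920 of the
    video width, so the result may be fractional on other display widths."""
    return disparity_units * screen_width_px / 1920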
  • the CC window will generally be a two dimensional window which is positioned along the z-axis and which is perceived to be in a plane parallel to the plane of the display screen.
  • the subject matter herein addresses a problem involving the transport of data to support 3D caption services.
  • a method is needed to deliver data in the DTV Caption Channel of CEA-708 compliant devices that can as nearly as possible be assured to be backwards-compatible with legacy (existing, fielded) caption decoders.
  • One possibility is to use the Extended Channel as described in U.S. Patent Application 13/022,810 filed of even date herewith entitled "EXTENDED COMMAND STREAM FOR CLOSED CAPTION DISPARITY", to Eyer, which is hereby incorporated in its entirety by reference.
  • Closed captioning data for 3D audio/video content includes both the definition of caption window attributes and text as well as the disparity data specifying the z-axis position (depth) that each caption window is to be rendered on top of 3D video.
  • a 2D version of the same content is distributed to receivers through a different path (for example, to be broadcast on a different channel on cable).
  • the same closed captioning data, including disparity data, may accompany the 2D version of the content. Since the 2D version of the content is processed by non-3D-capable receivers (which may be called "legacy" receivers), the disparity data should be properly disregarded, or skipped over, when the captioning data is processed.
  • The CEA standard for advanced captioning, CEA-708, included a number of provisions intended to allow future extensions to the standard to be made. Using one of these extensions for the addition of disparity data would seem a logical choice; however, implementations of CEA-708 caption decoders have been found to be deficient with respect to the way they handle some of these extensions. The CEA-708 standard is unclear or confusing in some areas, a fact that contributes to implementation errors or omissions.
  • a method described herein involves delivering disparity data within a separate caption service that is known to be associated with one of the standard caption services.
  • CEA-708 advanced captioning standard supports multiple simultaneous caption services so that, for example, captioning in different languages can be offered for the same program.
  • CEA-708 defines a "minimum decoder" in Section 9. A minimum decoder is required to process the "standard” service numbers 1 through 6. Processing "Extended” services 7 through 63 is optional. Quoting from the standard, "Decoders shall be capable of decoding all Caption Channel Block Headers consisting of Standard Service Headers, Extended Service Block Headers, and Null Block headers.”
  • the disparity data transport method described herein involves placing the 3D data in services identified with Service Numbers in the Extended range (7-63).
  • a standard method for carrying disparity data could be envisioned in which one Extended service, for example Service Number 63, would carry the 3D-related data for Standard service number 1. This method would be insufficient, however, to handle the case of multiple simultaneous caption services (such as English and Spanish captioning being offered simultaneously).
  • Extended service 63 (or some other service number in the 7-62 range) could carry 3D data for one or more standard services.
  • signaling could be present to associate a certain block of 3D data with a particular one of the Standard services (1-6).
  • the timing of the transmission of 3D data should be closely matched to the caption data establishing the caption window definitions and text. If the 3D data for multiple Standard services is transported within a single Extended service, decoders would encounter data blocks for Standard services they are not decoding, resulting in wasted effort.
  • the present subject matter overcomes the above limitations by defining a mapping between Service Numbers 1-6 and six Extended service numbers.
  • a 3D-capable receiver when decoding captions for Standard service #1, would also process service blocks for the Extended service that is mapped to Standard service #1.
  • the mapped Extended service would contain only 3D data associated with Standard service #1 and no other service. For simplicity and efficiency, a standard mapping could be used.
  • An additional aspect of the present subject matter involves the method where the data structure used to transport the 3D disparity data uses an extensibility feature of CEA-708 called the "variable-length" command as defined in CEA-708-D Sec. 7.1.11.2.
  • Such a method would not be suitable for use within the context of Standard services, because a significant population of legacy receivers is believed to exist that cannot handle this feature. Such receivers would likely display garbled caption output on-screen if they encountered one of these Variable Length Commands. However, if the disparity data is delivered in a service block with an Extended Service Number, this is not a problem. All existing receivers are believed to be able to skip service blocks (including Extended services) corresponding to service numbers they are not currently decoding. Should a legacy receiver attempt to decode disparity data (which should not normally occur, as caption services containing disparity data are not announced in the Caption Service Descriptor), a receiver built according to CEA-708-D will simply disregard the contents of the command.
  • if 3D disparity data were simply sent in a caption service identified with an Extended Service Number, either the full 2D data would need to be replicated (which is wasteful of bandwidth), or the Service Number of the 2D service this 3D service is linked to would need to be transmitted (also wasteful of bandwidth).
  • 3D disparity data in the service identified with an Extended Service Number could possibly be decoded by a legacy decoder (if such a decoder allowed the user to select services by number).
  • the legacy device if built compliant to CEA-708-D, would decode correctly— it should simply discard the 3D data as an unsupported command.
  • instead, a mapping scheme is used.
  • the following example mapping table is one example of a mapping that could be used:
  • the number 49 is chosen in this example because, in binary, the Service Number it corresponds with (e.g. provides additional commands for) is indicated in the least-significant 3 bits.
  • the extended service number conveniently "self-maps" to the service number.
  • the (Extended Service Number) bitwise-ANDed with 7 (111 in binary) yields the associated Main Caption Service Number.
  • this method should not be considered limiting since other mappings could be chosen as well.
  • 1 could be associated with 51, 2 with 52, etc.
  • the mapped extended service numbers can be mapped as 1 to 57, 2 to 58, 3 to 59, 4 to 60, 5 to 61 and 6 to 62.
  • the mapped extended service number need not have any of the limitations of being sequential, being selected from the examples above or having any particular arrangement of bits when represented in binary.
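The self-mapping property of the 49-54 example mapping can be expressed in a short sketch; the helper names and the `EXTENDED_BASE` constant are assumptions for illustration:

```python
# Assumed base constant: 48 makes Standard services 1-6 land on 49-54,
# whose three least-significant bits equal the Standard service number.
EXTENDED_BASE = 48

def standard_to_extended(std_service: int) -> int:
    """Map a Standard caption service number (1-6) to its Extended
    service number under the 49-54 example mapping."""
    if not 1 <= std_service <= 6:
        raise ValueError("Standard service numbers are 1-6")
    return EXTENDED_BASE + std_service

def extended_to_standard(ext_service: int) -> int:
    """Recover the Standard service number: bitwise-AND with 7 (0b111)
    keeps only the three least-significant bits."""
    return ext_service & 0b111
```

As the text notes, other mappings (such as 1 to 57 through 6 to 62) work the same way as long as encoder and decoder agree on the table.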
  • the Variable Length Command as defined in CEA-708-D Sec. 7.1.11.2 can be used. Such commands use the "C3" command ("C3 Code Set - Extended Control Code Set 2"). If properly implemented, legacy decoders should skip variable length commands further assuring that they will not take an unpredictable action.
  • Extended Service numbers are used for the disparity data.
  • Multiple caption services can be accommodated by use of different Extended Service numbers (7-63) that are mapped to Standard service numbers 1-6. If done according to the teachings herein, there is no need to explicitly identify the service number in the data structure, since each Standard service is automatically mapped to an Extended Service number.
  • a variable-length command can be used to define the disparity data in any suitable manner.
  • CSD: Caption Service Descriptor
  • PSIP: Program and System Information Protocol
  • the variable-length disparity command is depicted as 100 in FIGURE 1.
  • variable-length commands are indicated by the EXT1 character followed by a number in the range 0x90 to 0x9F, where the "0x" notation denotes a number represented in hexadecimal format.
  • the EXT1 character (0x10) is followed by 0x90.
  • 0x90 is the command identifier for the SetDisparity command.
  • the next byte contains a two-bit Type field, a zero bit, followed by a 5-bit length field.
  • the caption_disparity_data() data structure follows the byte containing the length field.
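Assembling the command prefix described above might look like the following sketch; the function name and the byte-oriented framing are assumptions, and a real CEA-708 stream would carry the caption_disparity_data() payload immediately after these bytes:

```python
EXT1 = 0x10            # extended code set introducer
SET_DISPARITY = 0x90   # command identifier in the variable-length range 0x90-0x9F

def build_set_disparity_header(type_field: int, length: int) -> bytes:
    """Assemble the bytes that precede the caption_disparity_data() payload:
    EXT1, the SetDisparity identifier, then a byte holding a 2-bit Type
    field, a zero bit, and a 5-bit length field."""
    if not (0 <= type_field <= 3 and 0 <= length <= 31):
        raise ValueError("Type is 2 bits, length is 5 bits")
    third_byte = (type_field << 6) | length  # bit 5 is left as zero
    return bytes([EXT1, SET_DISPARITY, third_byte])
```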
  • the syntax of one example of the caption disparity data is depicted in pseudocode in TABLE 1 below:
  • This example TABLE 1 can for example utilize a mechanism as described in U.S. Patent Application Serial Number filed of even date herewith entitled "Disparity Data Transport and Signaling" to Eyer et al., which is incorporated herein in its entirety by reference, for calling out disparity data as a number of frames and a piecewise linear modeling of authored disparity.
  • the slope of each line segment and the number of frames for which that slope applies are used to define the disparity.
  • this mechanism can be utilized to carry any other suitable representation of disparity as desired.
  • caption_window_count - a 3-bit unsigned integer that indicates the number of caption windows included in this instance of the caption_disparity_data().
  • caption_window_id - a 3-bit unsigned integer that identifies the Window ID in the corresponding service for which disparity data is being given in this iteration of the "for" loop.
  • temporal_extension_flag - a 1-bit flag that, when set to "1", indicates data is included that identifies a time-varying disparity path.
  • disparity [i] - a 9-bit signed integer that indicates the disparity value of the associated caption window, relative to 1920 horizontal pixels. Value zero indicates the screen plane (no disparity). Negative values correspond with perceived depths in front of the screen; positive values behind.
  • segment_count - a 5-bit unsigned integer in the range 1 to 31 that indicates the number of segments to follow.
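A minimal parsing sketch for the static fields defined above follows; the exact bit ordering and packing are assumptions rather than quotations from the standard, and the time-varying path data signaled by temporal_extension_flag and segment_count is not handled here:

```python
class BitReader:
    """Minimal MSB-first bit reader over a byte string."""
    def __init__(self, data: bytes):
        self.bits = ''.join(f'{b:08b}' for b in data)
        self.pos = 0

    def read(self, n: int) -> int:
        value = int(self.bits[self.pos:self.pos + n], 2)
        self.pos += n
        return value

    def read_signed(self, n: int) -> int:
        value = self.read(n)  # two's-complement interpretation
        return value - (1 << n) if value >= (1 << (n - 1)) else value

def parse_caption_disparity_data(data: bytes) -> list:
    """Extract (window_id, temporal_extension_flag, disparity) for each
    caption window; temporal extension segments are left unparsed."""
    reader = BitReader(data)
    windows = []
    for _ in range(reader.read(3)):          # caption_window_count
        window_id = reader.read(3)           # caption_window_id
        temporal = reader.read(1)            # temporal_extension_flag
        disparity = reader.read_signed(9)    # 9-bit signed disparity
        windows.append((window_id, temporal, disparity))
    return windows
```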
  • in FIGURE 3, a basic diagram of a service provider such as a broadcaster is depicted.
  • a single service provider may provide multiple programs over one or more transport streams.
  • the audio, video and caption data are provided to an encoder which encodes the data into packets suitable for distribution, including caption data packets as described above.
  • Program A and Program B are encoded by encoders 402 and 404, whose outputs are provided to a transport stream multiplexer 410, which in turn provides an output that can be distributed via a physical channel medium such as cable or satellite broadcast.
  • This encoded data from the physical channel is received at a television receiver device (e.g., a television or a set top box) as depicted in FIGURE 4.
  • the transport stream is demultiplexed at transport stream demultiplexer 504 to produce one or more program streams including audio, video and caption data (as well as possibly other data not shown).
  • Video packets from Program A are passed from demultiplexer 504 to video parser 508.
  • Audio packets from Program A are passed from demultiplexer 504 to audio decoder 512 which in turn produces the audio output.
  • Video parser 508 extracts video packets from the video stream and passes them to video decoder 514.
  • Video parser 508 extracts user data from the video stream and passes it to user data parser 510.
  • User data parser 510 extracts closed captioning data from within user data packets and passes it to caption processor 516.
  • caption service blocks containing data for caption services other than the one of interest are filtered out and discarded.
  • caption processor 516 processes caption service blocks corresponding to the Main service of interest, while at the same time processing caption service blocks corresponding to the mapped Extended service.
  • the output of caption processor 516 is the graphical representation of the closed captions, typically text enclosed in caption windows. For 3D content, the output of caption processor 516 is a separate captioning output for each of the left-eye and right-eye views, with appropriate disparity applied to establish the perceived depth (z-plane position) of each caption window.
  • the caption graphics are composited at compositor 520 with the video data so as to produce 3D video with captions placed according to the data in the caption data packets in the x, y and z plane.
  • Such data may place the captions in the z-axis in a static manner or dynamically in accordance with the authoring of the caption data.
  • a receiver device is depicted in greater detail in FIGURE 5, wherein content is received via any suitable source, such as terrestrial broadcast, cable or satellite, at the tuner/demodulator 602 of a receiver 600.
  • the transport stream from the tuner/demodulator 602 is demultiplexed at demultiplexer 606 into audio and video streams.
  • the audio is decoded at an audio decoder 610 while the video is decoded at a video decoder 614.
  • Uncompressed A/V data may also be received via an uncompressed A/V interface 618 that can be selectively utilized.
  • A/V content may also be received via the Internet 622 via a network interface 626 for IP television content decoding.
  • storage 630 can be provided for non-real time (NRT) stored content.
  • NRT content can be played by demultiplexing at 606 in a manner similar to that of other sources of content.
  • the receiver generally operates under control of a processor such as CPU 638 which is interconnected to working memory 640 and program memory 642 as well as a graphics subsystem 644 via one or more buses such as 650.
  • the CPU 638 receives closed caption data from the demultiplexer 606 as well as the disparity data via the mechanism described herein and determines by parsing the data in the extended service what z-position as well as x and y position to locate the caption data. This information is passed to the graphics subsystem 644 and the images are composited at the compositor 660 to produce an output suitable for processing and display on a video display.
  • FIGURE 6 depicts one implementation of the program modules used to process the caption data supplied in the manner described herein.
  • the CPU 638, operating under program control from program memory 642 and using working memory 640, receives the demultiplexed transport stream from demultiplexer 606. A parser module 704 produces the caption data (that is, the caption text) at 708 and determines, via the extended service processing module 712, that the service number presented corresponds to an extended service mapped in the extended service map table 714, from which the disparity data are extracted from the extended service's service blocks for use by the graphics subsystem 644.
  • Other data may be parsed by parser 704 to other data processing modules as indicated by 706.
  • a television receiver device that processes disparity data for closed captions has a receiver that receives closed caption data including closed caption text within a service block having a service number in the range of 1-6.
  • One or more processors such as 638 map the service number to a corresponding mapped extended service that is an unannounced service.
  • a parser process such as 704 parses the disparity data from the closed caption data appearing in the mapped extended service. The parser further receives closed caption text data from the transport stream.
  • a compositor such as 660 receives and processes the disparity data and the caption text to produce an output suitable for defining a rendering of a three dimensional image on a display of the caption text at a z-axis position defined by the disparity data.
  • the extended service corresponds to a service number in the range of 7 through 63.
  • the disparity data are carried in a CEA-708 compliant variable length command.
  • the mapped extended service numbers are mapped as 1 to 49, 2 to 50, 3 to 51, 4 to 52, 5 to 53 and 6 to 54 while in others, the mapped extended service numbers are mapped as 1 to 57, 2 to 58, 3 to 59, 4 to 60, 5 to 61 and 6 to 62.
  • the mapped extended service numbers when represented in binary uniquely identify the associated service number by the extended service number's three least significant bits.
  • An overall process consistent with example implementations of the current invention is depicted in the flow chart 800 of FIGURE 7, starting at 802. If captions are enabled at 804, then at 806, data is received indicating that caption data is present in any of service numbers 1-6. The selected service number is mapped to an extended service number within the range of 7-63 at 810. At 814, closed caption text is received, and at 818 closed caption disparity data are received. It will be understood that these events are a continuous flow, with text and caption data being received on an ongoing basis, so the present representation is not to be construed as accurately depicting time. The disparity data are processed at 822 to determine the z-axis position of the closed caption text and the closed caption window containing the text, and this data can then be output to produce composited display data.
  • a method of processing disparity data for closed captions for three dimensional video involves receiving closed caption data including closed caption text within a service block having a service number in the range of 1-6; mapping the service number to a corresponding mapped extended service having a service number in the range of 7 through 63 that is an unannounced service as in 810.
  • the disparity data is parsed from the closed caption data appearing in the mapped extended service.
  • the process then proceeds in receiving closed caption text data and processing the caption text and disparity data to produce an output suitable for defining a rendering of a three dimensional image on a display of the caption text at a z-axis position defined by the disparity data, where the disparity data are carried in a CEA-708 compliant variable length command.
  • a method of processing disparity data for closed captions for three dimensional video involves receiving closed caption data including closed caption text within a service block having a service number in the range of 1-6; mapping the service number to a corresponding mapped extended service having a service number in the range of 7 through 63; parsing the disparity data from the closed caption data appearing in the mapped extended service; receiving closed caption text data; and processing the caption text and disparity data to produce an output suitable for defining a rendering of a three dimensional image on a display of the caption text at a z-axis position defined by the disparity data.
  • the disparity data are carried in a CEA-708 compliant variable length command in order to further prevent legacy receivers from taking action on the higher numbered extended services.
  • the mapped extended service numbers are mapped as 1 to 49, 2 to 50, 3 to 51, 4 to 52, 5 to 53 and 6 to 54 while in other implementations, the mapped extended service numbers are mapped as 1 to 57, 2 to 58, 3 to 59, 4 to 60, 5 to 61 and 6 to 62.
  • the mapped extended service numbers when represented in binary uniquely identify the associated service number by the extended service number's three least significant bits.
  • Another example method of processing disparity data for closed captions for three dimensional video involves receiving closed caption data including closed caption text within a service block having a service number in the range of 1-6; mapping the service number to a corresponding mapped extended service that is an unannounced service; parsing the disparity data from the closed caption data appearing in the mapped extended service; receiving closed caption text data; and processing the caption text and disparity data to produce an output suitable for defining a rendering of a three dimensional image on a display of the caption text at a z-axis position defined by the disparity data.
  • the extended service corresponds to a service number in the range of 7 through 63.
  • the disparity data are carried in a CEA-708 compliant variable length command.
  • the mapped extended service numbers are mapped as 1 to 49, 2 to 50, 3 to 51, 4 to 52, 5 to 53 and 6 to 54 while in others other mappings can be used such as 1 to 57, 2 to 58, 3 to 59, 4 to 60, 5 to 61 and 6 to 62.
  • the mapped extended service numbers when represented in binary uniquely identify the associated service number by the extended service number's three least significant bits.
  • the disparity data can be delivered as a continuous stream or can be pre-delivered in advance.
  • non-transitory storage devices include, for example, Read Only Memory (ROM) devices, Random Access Memory (RAM) devices, network memory devices, optical storage elements, magnetic storage elements, magneto-optical storage elements, flash memory, core memory and/or other equivalent volatile and non-volatile storage technologies, without departing from certain embodiments of the present invention.
  • the term "non-transitory" does not suggest that information cannot be lost by virtue of removal of power or other actions. Such alternative storage devices should be considered equivalents.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

A method of processing disparity data for closed captions for three-dimensional video involves receiving caption data including caption text within a service block having a service number in the range of 1-6; mapping the service number to a corresponding mapped extended service having a service number in the range of 7 to 63; parsing the disparity data from the caption data appearing in the mapped extended service; receiving the caption text data; and processing the caption text and disparity data to produce an output suitable for defining a rendering of a three-dimensional image on a display of the caption text at a z-axis position defined by the disparity data. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract.
EP11753806.6A 2010-03-12 2011-03-01 Service linkage to caption disparity data transport Withdrawn EP2524513A4 (fr)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US31361210P 2010-03-12 2010-03-12
US3167310P 2010-03-23 2010-03-23
US34665210P 2010-05-20 2010-05-20
US37879210P 2010-08-31 2010-08-31
US41545710P 2010-11-19 2010-11-19
US41592410P 2010-11-22 2010-11-22
US13/022,828 US8730301B2 (en) 2010-03-12 2011-02-08 Service linkage to caption disparity data transport
PCT/US2011/026698 WO2011112392A1 (fr) 2010-03-12 2011-03-01 Service linkage to caption disparity data transport

Publications (2)

Publication Number Publication Date
EP2524513A1 (fr)
EP2524513A4 EP2524513A4 (fr) 2014-06-25

Family

ID=47018669

Family Applications (1)

Application Number Title Priority Date Filing Date
EP11753806.6A 2010-03-12 2011-03-01 Service linkage to caption disparity data transport Withdrawn EP2524513A4 (fr)

Country Status (1)

Country Link
EP (1) EP2524513A4 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5519443A (en) * 1991-12-24 1996-05-21 National Captioning Institute, Inc. Method and apparatus for providing dual language captioning of a television program
US20030035063A1 (en) * 2001-08-20 2003-02-20 Orr Stephen J. System and method for conversion of text embedded in a video stream
US20060184994A1 (en) * 2005-02-15 2006-08-17 Eyer Mark K Digital closed caption transport in standalone stream
WO2008115222A1 (fr) * 2007-03-16 2008-09-25 Thomson Licensing Système et procédé permettant la combinaison de texte avec un contenu en trois dimensions
US20090060044A1 (en) * 2007-07-04 2009-03-05 Lg Electronics Inc. Digital broadcasting system and data processing method
WO2010010499A1 (fr) * 2008-07-25 2010-01-28 Koninklijke Philips Electronics N.V. Gestion d'affichage 3d de sous-titres

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"PROPOSED SMPTE STANDARD FOR TELEVISION - DTV CLOSED-CAPTION SERVER TO ENCODER INTERFACE", SMPTE - MOTION IMAGING JOURNAL, SOCIETY OF MOTION PICTURE AND TELEVISION ENGINEERS, WHITE PLAINS, NY, US, vol. 108, no. 11, 1 November 1999 (1999-11-01), pages 830-833, XP000877922, ISSN: 0036-1682 *
See also references of WO2011112392A1 *

Also Published As

Publication number Publication date
EP2524513A4 (fr) 2014-06-25

Similar Documents

Publication Publication Date Title
US8730301B2 (en) Service linkage to caption disparity data transport
US9912932B2 (en) Data transport in caption service
KR101672283B1 (ko) 3d 비디오 신호 처리 방법 및 이와 같은 기능을 수행하는 디지털 방송 수신기
US9986220B2 (en) Auxiliary data in 3D video broadcast
KR101623020B1 (ko) 방송 수신기 및 3d 비디오 데이터 처리 방법
KR20140138630A (ko) 표준 캡션 서비스에서의 비클로즈드 캡션 데이터 전송
KR101653319B1 (ko) 3d 영상을 위한 영상 컴포넌트 송수신 처리 방법 및 장치
KR20110125201A (ko) 방송 수신기 및 3d 자막 데이터 처리 방법
EP2524513A1 (fr) Service linkage to caption disparity data transport
KR20150133620A (ko) 비-실시간 서비스 기반 지상파 3차원 방송 제공 방법 및 장치

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20120814

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20140523

RIC1 Information provided on ipc code assigned before grant

Ipc: H04N 13/00 20060101ALI20140519BHEP

Ipc: H04N 13/02 20060101AFI20140519BHEP

Ipc: H04N 21/488 20110101ALI20140519BHEP

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20161103

RIN1 Information on inventor provided before grant (corrected)

Inventor name: EYER, MARK KENNETH

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20170314