WO2016047985A1 - Method and apparatus for processing 3D broadcast signals - Google Patents

Method and apparatus for processing 3D broadcast signals

Info

Publication number
WO2016047985A1
Authority
WO
WIPO (PCT)
Prior art keywords
service
video
field
information
image
Prior art date
Application number
PCT/KR2015/009905
Other languages
English (en)
Korean (ko)
Inventor
황수진
서종열
Original Assignee
엘지전자 주식회사
Priority date
Filing date
Publication date
Application filed by 엘지전자 주식회사 filed Critical 엘지전자 주식회사
Publication of WO2016047985A1 publication Critical patent/WO2016047985A1/fr

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/44 Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/236 Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • H04N 21/2362 Generation or processing of Service Information [SI]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/236 Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • H04N 21/2365 Multiplexing of several video streams
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/482 End-user interface for program selection
    • H04N 21/4821 End-user interface for program selection using a grid, e.g. sorted out by channel and broadcast time
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81 Monomedia components thereof
    • H04N 21/816 Monomedia components thereof involving special video data, e.g. 3D video

Definitions

  • the present invention relates to a method and apparatus for processing a broadcast signal. More specifically, the present invention relates to a method and apparatus for providing a broadcast service while maintaining compatibility between 2D and 3D services in a 3D broadcast system.
  • the technical problem to be solved by the present invention is to enable a receiver that cannot perform 3D playback, or that consumes broadcast content in 2D instead of 3D, to effectively play 2D broadcast content in a 3D broadcast system.
  • Another object of the present invention is to provide a signaling scheme capable of rendering broadcast content in consideration of the aspect ratio of the provided content and the aspect ratio of the receiver, while maintaining compatibility between 3D broadcasting and 2D broadcasting.
  • a broadcast reception device includes: a tuner for receiving a broadcast signal including component data and signaling information for a service, wherein the component data includes first component data for a 2D image and second component data for a 3D image; a decoder for decoding at least one of the first component data and the second component data based on the signaling information; and an output unit configured to display the 3D image based only on the decoded first component data, out of the first component data and the second component data.
  • the apparatus may further include a 2D / 3D converter that converts the 2D image into the 3D image based on the decoded first component data.
  • each of the first component data and the second component data is associated with one of a left image and a right image, and the device further includes a cropper for extracting the decoded first component data based on extraction information.
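The claimed device structure (tuner, decoder, output unit, optional 2D/3D converter) can be sketched as follows. This is a minimal illustrative model, not the patented implementation; all class and field names (`BroadcastSignal`, `use_2d_component`, and so on) are assumptions made for the sketch.

```python
from dataclasses import dataclass

@dataclass
class BroadcastSignal:
    signaling: dict      # stand-in for service/component descriptors
    component_2d: bytes  # first component data (2D image)
    component_3d: bytes  # second component data (3D image)

class Receiver:
    """Minimal sketch of the claimed receiver: a tuner delivers the
    signal, a decoder picks components according to the signaling, and
    the output stage can build a 3D presentation from the 2D component
    alone via 2D-to-3D conversion."""

    def decode(self, signal):
        # decode only the 2D component when the signaling tells the
        # receiver to fall back to it
        if signal.signaling.get("use_2d_component", True):
            return signal.component_2d
        return signal.component_3d

    def present_3d(self, decoded_2d):
        # stand-in for the 2D/3D converter: derive a left/right pair
        # from the single decoded 2D component
        return {"left": decoded_2d, "right": decoded_2d}

rx = Receiver()
sig = BroadcastSignal({"use_2d_component": True}, b"2D", b"3D")
print(rx.present_3d(rx.decode(sig)))
```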
  • said signaling information includes service type information specifying a type of said service, and the service type information may specify a High Efficiency Video Coding (HEVC) digital television service.
  • said signaling information comprises a component descriptor specifying a type of a stream for said component data. The component descriptor may include first stream content information specifying the type of the stream, second stream content information that specifies the type of the stream in combination with the first stream content information, and component type information indicating the type of the component data.
  • the signaling information may include a plurality of the component descriptors.
  • the plurality of component descriptors may include a first component descriptor.
  • the first component descriptor may include codec information of the stream.
  • the codec information may indicate HEVC.
  • the plurality of component descriptors includes at least one second component descriptor, and the at least one second component descriptor includes at least one of codec information, profile information, resolution information, aspect ratio information, frame rate information, image format information, and bit-depth information.
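The stream content / second stream content / component type triple described above matches the layout of the DVB component_descriptor (tag 0x50) of ETSI EN 300 468, where a 4-bit stream_content_ext and a 4-bit stream_content share one byte and component_type refines them. A minimal parser sketch, assuming a well-formed descriptor; the sample byte values marking an HEVC video component are illustrative, not code points taken from this specification:

```python
def parse_component_descriptor(data: bytes) -> dict:
    """Parse a DVB component_descriptor (tag 0x50, ETSI EN 300 468).

    stream_content_ext (high nibble) and stream_content (low nibble)
    together identify the stream type; component_type refines it.
    """
    if data[0] != 0x50:
        raise ValueError("not a component_descriptor")
    length = data[1]
    body = data[2:2 + length]
    return {
        "stream_content_ext": body[0] >> 4,
        "stream_content": body[0] & 0x0F,
        "component_type": body[1],
        "component_tag": body[2],
        "language": body[3:6].decode("ascii"),
        "text": body[6:].decode("ascii", errors="replace"),
    }

# hypothetical descriptor announcing an HEVC video component
# (stream_content 0x9; ext/type values chosen for illustration)
desc = bytes([0x50, 0x06, 0xF9, 0x00, 0x01]) + b"eng"
info = parse_component_descriptor(desc)
print(info["stream_content"], info["component_type"])
```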
  • the plurality of component descriptors may indicate one of a frame-compatible 3D service, which provides a 3D service based on one stream that spatially includes a left image and a right image in one frame, and a 2D-plus-depth 3D service, which provides a 3D service based on two streams including the 2D image and depth information.
  • the format of the component data may be a segmented rectangular format including a first area for the left image and a plurality of second areas for the right image in one frame.
  • the extraction information may include Conformance Cropping Window information that specifies a rectangular area for output.
  • the extraction information may include default display window information that specifies a rectangular area for display.
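Both the Conformance Cropping Window and the default display window describe a rectangular sub-region of the decoded picture through left/right/top/bottom offsets. A sketch of the HEVC output-size arithmetic for the conformance cropping window, where offsets are counted in chroma units and SubWidthC = SubHeightC = 2 for 4:2:0 video:

```python
def cropped_output_size(pic_width, pic_height,
                        left, right, top, bottom,
                        sub_width_c=2, sub_height_c=2):
    """Output picture size after applying the HEVC conformance
    cropping window. Offsets are given in chroma sample units, so
    they are scaled by SubWidthC/SubHeightC (2 for 4:2:0)."""
    width = pic_width - sub_width_c * (left + right)
    height = pic_height - sub_height_c * (top + bottom)
    return width, height

# a 1920x1088 coded frame cropped to 1920x1080
# (bottom offset of 4 chroma units in 4:2:0)
print(cropped_output_size(1920, 1088, 0, 0, 0, 4))  # (1920, 1080)
```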
  • the extraction information may include a Segmented Rectangular-Frame Packing Arrangement Supplemental Enhancement Information (SR-FPA SEI) message for rearranging samples of the left image and the right image in the segmented rectangular format.
  • the SR-FPA SEI message may include segmented_rect_content_interpretation_type indicating that the first component data and the second component data form a left image and a right image.
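The rearrangement signaled by the SR-FPA SEI message can be illustrated with a toy unpacking routine. The layout below (left view filling the left part of the frame, right view carried as two half-height segments stored in swapped order) is purely an assumption made for the sketch; in a real receiver the SR-FPA SEI message itself tells the decoder how each segment maps back onto the right view.

```python
def unpack_sr_frame(frame, view_width):
    """Recover left and right views from a hypothetically packed frame.

    Assumed layout (illustrative only, not the normative SR-FPA
    arrangement): the left view occupies columns [0, view_width);
    the right view is carried in the remaining columns as two
    half-height segments stored bottom-segment-first, so unpacking
    must restore the original segment order."""
    height = len(frame)
    half = height // 2
    left = [row[:view_width] for row in frame]
    right_cols = [row[view_width:] for row in frame]
    # restore the right view: top half from the second stored
    # segment, bottom half from the first
    right = right_cols[half:] + right_cols[:half]
    return left, right

# 4x4 toy frame: left view pixels 'L'; right view rows tagged r0-r3,
# packed with the bottom segment (r2, r3) stored first
frame = [
    ["L", "L", "r2", "r2"],
    ["L", "L", "r3", "r3"],
    ["L", "L", "r0", "r0"],
    ["L", "L", "r1", "r1"],
]
left, right = unpack_sr_frame(frame, 2)
print(right[0])  # ['r0', 'r0']
```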
  • each of the first component data and the second component data is associated with one of a left image and a right image
  • the signaling information includes stereoscopic service type information that specifies a type of the service.
  • the stereoscopic service type information indicates a service-compatible stereoscopic 3D service, and the service-compatible stereoscopic 3D service may be based on simultaneous transmission of a stream including the left image and a stream including the right image.
  • According to the present invention, 2D broadcast content can be effectively reproduced in a receiver that cannot perform 3D playback or that consumes broadcast content in 2D instead of 3D.
  • According to the present invention, while maintaining compatibility between the 3D broadcast and the 2D broadcast, it is possible to render the broadcast content in consideration of the aspect ratio of the provided content and the aspect ratio of the receiver.
  • FIG. 1 is a diagram illustrating an example of an image display apparatus according to an exemplary embodiment of the present invention.
  • FIG. 2 is a diagram illustrating another example of a 3D image display apparatus according to an exemplary embodiment of the present invention.
  • FIG. 3 is a diagram illustrating bitstream syntax for an example of an NIT section including a service list descriptor according to an embodiment of the present invention.
  • FIG. 4 illustrates bitstream syntax of an example of a service list descriptor according to an embodiment of the present invention.
  • FIG. 5 is a diagram illustrating bitstream syntax of an example of an SDT section including a service descriptor according to an embodiment of the present invention.
  • FIG. 6 is a diagram illustrating bitstream syntax of an example of a service descriptor according to an embodiment of the present invention.
  • FIGS. 7 and 8 illustrate bitstream syntax of an example of a PMT section and an EIT section including a service descriptor according to an embodiment of the present invention.
  • FIG. 9 illustrates bitstream syntax for an example of a 3D service descriptor according to an embodiment of the present invention.
  • FIGS. 10 to 12 are diagrams for further explaining Table 3 according to an embodiment of the present invention.
  • FIG. 13 is a diagram illustrating an example of bitstream syntax of a component descriptor according to an embodiment of the present invention.
  • FIG. 14 is a diagram illustrating an example of a bitstream syntax of a linkage descriptor according to an embodiment of the present invention.
  • FIG. 15 is a diagram illustrating an example of a method of signaling a corresponding 3D service using a linkage descriptor according to an embodiment of the present invention.
  • FIG. 16 is a flowchart illustrating an example of outputting a stereoscopic video signal using 3D signaling information according to an embodiment of the present invention.
  • FIG. 17 is a diagram illustrating an example of a UI according to an embodiment of the present invention.
  • FIG. 18 is a diagram illustrating an example of an EPG screen according to an embodiment of the present invention.
  • FIG. 19 illustrates another example of an EPG screen according to an embodiment of the present invention.
  • FIG. 20 is a diagram illustrating another example of an EPG screen according to an embodiment of the present invention.
  • FIGS. 21 and 22 illustrate still another example of an EPG screen according to an embodiment of the present invention.
  • FIG. 23 is a diagram illustrating an example of a UI indicating whether 3D version is present according to an embodiment of the present invention.
  • FIG. 24 is a diagram illustrating another example of an EPG according to an embodiment of the present invention.
  • FIG. 25 is a diagram illustrating an example of the detailed UI of FIG. 24 according to an embodiment of the present invention.
  • FIG. 26 illustrates a stereoscopic image multiplexing format of various image formats according to an embodiment of the present invention.
  • FIG. 27 illustrates a conceptual diagram of a 3D broadcast service according to an embodiment of the present invention.
  • FIG. 28 illustrates a conceptual block diagram illustrating a method for providing a full-resolution 3D broadcast service according to an embodiment of the present invention.
  • FIG. 29 illustrates a method for providing a 3D broadcast service according to an embodiment of the present invention.
  • FIG. 30 illustrates a method for providing a 3D broadcast service according to another embodiment of the present invention.
  • FIG. 31 illustrates a method for providing a 3D broadcast service according to another embodiment of the present invention.
  • FIG. 32 is a diagram illustrating a method for providing a 3D broadcast service according to another embodiment of the present invention.
  • FIG. 33 illustrates full forward and backward interoperability for providing a 3D broadcast service according to an embodiment of the present invention.
  • FIG. 34 illustrates a service model of a 3D broadcast service compatible with first generation 3DTV and second generation 3DTV according to an embodiment of the present invention.
  • FIG. 35 illustrates a syntax structure of TVCT including 3D complementary video information according to an embodiment of the present invention.
  • FIG. 36 illustrates a syntax structure of a 3D complementary video descriptor included in TVCT according to an embodiment of the present invention.
  • FIG. 37 illustrates a method of constructing an image according to a field value of a complementary_type field included in 3D complementary video information according to an embodiment of the present invention.
  • FIG. 39 illustrates a syntax structure of a PMT including 3D complementary video information according to another embodiment of the present invention.
  • FIG. 40 illustrates a syntax structure of picture extension and user data of a video ES included in 3D complementary video information according to an embodiment of the present invention.
  • FIG. 41 illustrates a Supplemental Enhancement Information (SEI) syntax structure of a user identifier and a structure for decoding 3D complementary video information according to an embodiment of the present invention.
  • FIG. 42 illustrates a method for providing a full-resolution image using base video data, complementary video data, and 3D complementary video information received from a 3D video service Spec-B according to an embodiment of the present invention.
  • FIG. 43 illustrates a method for providing a full-resolution image using base video data, complementary video data, and 3D complementary video information received from a 3D video service Spec-B according to another embodiment of the present invention.
  • FIG. 44 illustrates a method for providing a full-resolution image using base video data, complementary video data, and 3D complementary video information received from a 3D video service Spec-B according to another embodiment of the present invention.
  • FIG. 45 illustrates a method for providing a full-resolution image using base video data, complementary video data, and 3D complementary video information received from a 3D video service Spec-B according to another embodiment of the present invention.
  • FIG. 46 illustrates a method of signaling a 3DTV service using SDT according to an embodiment of the present invention.
  • FIG. 49 illustrates a component descriptor representing each elementary stream for DVB broadcast service according to an embodiment of the present invention.
  • FIG. 50 illustrates stream content, component type, and description representing full-resolution 3D stereoscopic service in a DVB broadcasting system according to an embodiment of the present invention.
  • FIG. 51 illustrates a syntax structure of a 3D complementary video descriptor for an SDT according to an embodiment of the present invention.
  • FIG. 52 illustrates a method of signaling Spec-A and Spec-B 3D services using a linkage descriptor according to an embodiment of the present invention.
  • FIG. 53 illustrates a flowchart illustrating a process for outputting a stereoscopic video signal by parsing a 3D signal according to an embodiment of the present invention.
  • FIG. 54 illustrates a method of signaling Spec-A and Spec-B 3D services using a linkage descriptor according to another embodiment of the present invention.
  • FIG. 55 illustrates information about a full-resolution 3DTV service located in an ATSC PSIP EIT section according to an embodiment of the present invention.
  • FIG. 57 is a flowchart illustrating a process for outputting a stereoscopic video signal by parsing and rendering a 3D complementary video descriptor using an ATSC PSIP EIT according to an embodiment of the present invention.
  • FIG. 58 illustrates a flowchart illustrating a process for outputting a stereoscopic video signal by parsing and rendering a 3D complementary video descriptor using DVB SI EIT according to an embodiment of the present invention.
  • FIG. 59 illustrates a block diagram of a broadcast receiver having a 3D video decoder according to an embodiment of the present invention.
  • FIG. 60 illustrates a conceptual diagram of a 3D service 2.0 (Spec-B) according to an embodiment of the present invention.
  • FIG. 61 is a diagram illustrating a method for signaling 3D service 2.0 (Spec-B) according to an embodiment of the present invention.
  • FIG. 62 illustrates a stream_content field and / or a component_type field of a component descriptor according to an embodiment of the present invention.
  • FIG. 63 illustrates a linkage_type field and/or a link_type field of a linkage descriptor according to an embodiment of the present invention.
  • FIG. 64 is a block diagram of a receiver according to an embodiment of the present invention.
  • FIG. 65 is a conceptual diagram of 3D service 3.0 according to an embodiment of the present invention.
  • FIG. 66 is a diagram illustrating a method for signaling an SFC-3DTV service according to an embodiment of the present invention.
  • FIG. 67 is a view illustrating a service_type field of a service descriptor according to an embodiment of the present invention.
  • FIG. 68 is a view illustrating a service_type field of a service descriptor according to an embodiment of the present invention.
  • FIG. 69 illustrates a stream_content field and/or a component_type field of a component descriptor according to an embodiment of the present invention.
  • FIG. 70 illustrates a stream_content field and/or a component_type field of a component descriptor according to an embodiment of the present invention.
  • FIG. 71 illustrates a stream_content field and/or a component_type field of a component descriptor according to an embodiment of the present invention.
  • FIG. 72 illustrates a linkage_type field and / or a link_type field of a linkage descriptor according to an embodiment of the present invention.
  • FIG. 73 is a block diagram of a receiver according to an embodiment of the present invention.
  • AFD Active Format Description
  • FIG. 75 is a diagram illustrating an example of a display of a receiver according to an active_format and an aspect ratio of a transmitted frame, according to an embodiment of the present invention.
  • FIG. 76 illustrates an example of a display of a receiver according to an active_format and an aspect ratio of a transmitted frame according to another embodiment of the present invention.
  • FIG. 77 is a diagram illustrating bar data according to an embodiment of the present invention.
  • FIG. 78 is a diagram illustrating a receiver according to an embodiment of the present invention.
  • FIG. 79 is a diagram illustrating a method of separating a left image and a right image of an SR FPA according to an embodiment of the present invention.
  • FIG. 80 is a diagram illustrating syntax of a Frame packing arrangement SEI message according to an embodiment of the present invention.
  • FIG. 81 is a view showing a frame_packing_arrangement_type field according to an embodiment of the present invention.
  • FIG. 83 is a diagram illustrating Default Display Window information according to an embodiment of the present invention.
  • FIG. 84 is a diagram illustrating syntax of an SR FPA SEI message according to an embodiment of the present invention.
  • FIG. 85 is a diagram illustrating an operation of a broadcast reception device according to an embodiment of the present invention.
  • FIG. 86 is a view showing a service descriptor according to an embodiment of the present invention.
  • FIG. 87 illustrates a component descriptor according to an embodiment of the present invention.
  • FIG. 88 is a diagram showing the configuration of a broadcast receiving apparatus according to the first to third embodiments of the present invention.
  • FIG. 89 is a flowchart of a broadcast receiving device according to the first to third embodiments of the present invention.
  • FIG. 90 is a diagram illustrating a configuration of a broadcast reception device according to a fourth embodiment of the present invention.
  • FIG. 91 is a flowchart of a broadcast reception device according to a fourth embodiment of the present invention.
  • FIG. 92 is a diagram illustrating a configuration of a broadcast reception device according to a fifth embodiment of the present invention.
  • FIG. 93 is a flowchart of a broadcast receiving device according to a fifth embodiment of the present invention.
  • the present specification provides an image processing method and apparatus that define various signaling information regarding the identification and processing of a 3-Dimensional (3D) service so that a transmitting/receiving terminal can process it.
  • The present invention may be applied to a digital receiver including a configuration capable of processing a 3D service. Such digital receivers include a digital television receiver, a receiver set composed of a set-top box (STB) capable of processing 3D services and a display unit for outputting the processed 3D images, a personal digital assistant (PDA), a mobile phone, a smart phone, and any other device capable of receiving, processing, and/or providing 3D image data.
  • the digital receiver may be any one of a 3D dedicated receiver and a 2D / 3D combined receiver.
  • a method of representing a 3D image includes a stereoscopic image display method considering two viewpoints and a multi-view image display method considering three or more viewpoints.
  • the stereoscopic image display method uses a pair of left and right images obtained by photographing the same subject by two cameras spaced at a certain distance, that is, a left camera and a right camera.
  • the multi-view image display method uses three or more images obtained by photographing from three or more cameras having a constant distance or angle.
  • transmission formats of stereoscopic images include a single video stream format and a multiple video stream format (or a multi video stream format).
  • Single video stream formats include side-by-side, top/down, interlaced, frame sequential, checker board, and anaglyph formats.
  • the multiple video stream formats include full left / right, full left / half right, and 2D video / depth.
  • the stereo image or the multi-view image may be compressed and transmitted according to various image compression encoding schemes, including Moving Picture Experts Group (MPEG) schemes.
  • stereo images such as the side-by-side format, the top / down format, the interlaced format, the checker board, and the like may be compressed and transmitted by H.264 / AVC (Advanced Video Coding).
  • a 3D image may be obtained by decoding the stereo image in the inverse of the H.264 / AVC coding scheme in the reception system.
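The frame-compatible formats above pack both views into one coded frame, so after decoding a receiver recovers the two views by slicing the frame. A toy sketch with frames as nested lists; real receivers operate on decoded pixel buffers and typically upscale each recovered half back to full resolution:

```python
def split_side_by_side(frame):
    """Split a side-by-side packed frame (left view in the left half,
    right view in the right half) into the two constituent views."""
    mid = len(frame[0]) // 2
    left = [row[:mid] for row in frame]
    right = [row[mid:] for row in frame]
    return left, right

def split_top_and_bottom(frame):
    """Split a top-and-bottom packed frame into the top (left view)
    and bottom (right view) halves."""
    mid = len(frame) // 2
    return frame[:mid], frame[mid:]

frame = [[1, 2, 9, 8],
         [3, 4, 7, 6]]
left, right = split_side_by_side(frame)
print(left)   # [[1, 2], [3, 4]]
print(right)  # [[9, 8], [7, 6]]
```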
  • In the full left/half right format, one of the left image and the multiview image is assigned to the base layer, and the remaining images are assigned to the enhancement layer. For the enhancement layer image, only correlation information between the base layer image and the enhancement layer image may be encoded and transmitted.
  • A JPEG, MPEG-1, MPEG-2, MPEG-4, or H.264/AVC scheme may be used.
  • An H.264/MVC (Multi-view Video Coding) scheme may also be used.
  • A stereo image is assigned to one base layer image and one higher layer image, and a multi-view image is assigned to one base layer image and a plurality of higher layer images.
  • a criterion for dividing the multi-view image into an image of a base layer and images of one or more higher layers may be determined according to the position of the cameras or according to the arrangement of the cameras, or may be arbitrarily determined without following any special criterion.
  • the 3D image display is largely divided into a stereoscopic method, a volumetric method, and a holographic method.
  • a 3D image display apparatus using the stereoscopic technique adds depth information to a 2D image and uses the depth information to give an observer a sense of three-dimensionality and realism.
  • the spectacles method is classified into a passive method and an active method.
  • In the passive method, a left image and a right image are separated and displayed using a polarization filter. The passive method also includes viewing through glasses with blue and red lenses, one for each eye.
  • the active method separates the left and right views using liquid crystal shutters, sequentially opening and closing the left-eye and right-eye lenses to distinguish the left image from the right image.
  • This active method involves wearing glasses equipped with an electronic shutter that opens and closes in synchronization with a periodically time-divided screen, and is also referred to as a time-split type or shuttered-glass method.
  • Typical examples of the autostereoscopic method include a lenticular method, in which a lenticular lens plate with vertically arranged cylindrical lens arrays is placed in front of the image panel, and a parallax barrier method, in which a barrier layer having periodic slits is placed on the image panel.
  • In the following, the glasses method will be described as an example.
  • The present specification describes a method of signaling a stereoscopic service with SI in order to transmit and receive a 3D signal, particularly a stereoscopic video signal, through a terrestrial DTV broadcasting channel.
  • the term “signaling” refers to transmitting / receiving service information (SI) provided by a broadcasting system, an internet broadcasting system, and / or a broadcasting / internet convergence system.
  • the service information includes broadcast service information (e.g., ATSC-SI and/or DVB-SI) provided in each broadcast system that currently exists.
  • the term 'broadcast signal' is defined as a concept including signals and/or data provided not only in terrestrial broadcasting, cable broadcasting, satellite broadcasting, and/or mobile broadcasting, but also in bidirectional broadcasting such as internet broadcasting, broadband broadcasting, communication broadcasting, data broadcasting, and/or video on demand.
  • 'PLP' refers to a certain unit for transmitting data belonging to a physical layer. Therefore, the content named "PLP” in this specification may be renamed to "data unit” or "data pipe.”
  • the hybrid broadcasting service transmits, in real time through an internet network, enhancement data related to broadcast A/V (Audio/Video) content transmitted through a terrestrial broadcasting network, or a portion of the broadcast A/V content, allowing the user to experience a variety of content.
  • An object of the present invention is to provide a method for encapsulation so that IP packets, MPEG-2 TS packets, and packets usable in other broadcasting systems can be delivered to a physical layer in a next generation digital broadcasting system.
  • a method for transmitting layer 2 signaling in the same header format is also proposed.
  • the 3D image includes a left image viewed through the left eye and a right image viewed through the right eye.
  • a person perceives a three-dimensional effect due to the horizontal disparity between objects in the left image seen through the left eye and the right image seen through the right eye.
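The depth sensation produced by this horizontal disparity can be quantified for an idealized parallel stereo camera rig with the standard relation Z = f * B / d. The formula and the sample numbers below are illustrative background, not values from the specification:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point from its horizontal disparity between the
    left and right images, assuming an idealized parallel camera rig:
    Z = f * B / d, with focal length f in pixels, baseline B in
    metres, and disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# f = 1000 px, baseline 6.5 cm (roughly eye spacing), disparity 10 px
print(depth_from_disparity(1000, 0.065, 10))  # 6.5 metres
```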
  • Methods of providing 3D video or a 3D service through broadcasting include a frame-compatible method, in which the left image and the right image are transmitted together in one frame, and a service-compatible method, in which the left image and the right image are each transmitted in a separate transport stream, elementary stream, or layer stream.
  • the left image and the right image may be positioned up / down (top-and-bottom) or left / right (side-by-side) within a frame.
  • Here, service compatible means that the 3D service and the 2D service are compatible: within a system that provides the left and right images for the 3D service, the left and/or right image can also be used for the 2D service while maintaining the quality of the 2D service.
  • FIG. 1 is a block diagram illustrating an example of an image display apparatus according to the present invention.
  • an example of an image display apparatus includes a processing unit 130 (processing part) that processes a source input from a content source 110, and an output unit 140 (outputting part) that outputs the processed AV (Audio/Video) data.
  • the source comprises a 3D image, for example.
  • the image display apparatus may further include an interfacing part 135 for processing a source input from an external device 120 in addition to the content source 110.
  • the image display apparatus may further include an emitter 145 that outputs a synchronization signal for viewing 3D images (for example, sync information) to the 3D glasses 150, which are necessary for viewing the source provided from the output unit 140.
  • the processing unit 130 and the display unit 140 may together form a single digital receiver, or the processing unit 130 may take the form of a set-top box while the display unit 140 functions as a display device that merely outputs the signal processed by the set-top box.
  • the interface unit 135 described above may be used to exchange data between the processing unit 130 and the output unit 140.
  • the interface unit 135 may be, for example, an interface (I / F) that supports a high-definition multimedia interface (HDMI) standard capable of supporting 3D service.
  • The 3D image may be included in a signal or source transmitted from a content source 110 such as terrestrial broadcasting, cable broadcasting, satellite broadcasting, an optical disc, or internet protocol television broadcasting, or may be directly input from an external device 120 such as a USB (universal serial bus) device or a game console. In the latter case in particular, the signaling information for output should be defined and provided in the interface unit 135 based on the information provided from the external device 120.
  • A 3D image may also be input in various formats such as DivX, component, AV, and SCART (Syndicat des Constructeurs d'Appareils Radiorécepteurs et Téléviseurs; Radio and Television Receiver Manufacturers' Association), and the image display apparatus may include a configuration for processing these formats.
  • the 3D glasses 150 may properly view the 3D image provided from the output unit 140 by using a receiving part (not shown) that receives the synchronization signal output from the 3D emitter 145.
  • The 3D glasses 150 may further include means for switching between 2D and 3D viewing modes, and may further include a generating part (not shown) that individually generates sync information according to that viewing mode switching means.
  • The sync information generated in the 3D glasses 150 may be obtained by sending a viewing-mode switch request from the viewing mode switching means to the image display apparatus and receiving the sync information from it, or may be generated by the glasses themselves by referring to sync information previously received from the image display apparatus.
  • the 3D glasses 150 may further include a storage unit that stores sync information received from the image display device.
  • FIG. 2 is a diagram illustrating another example of a 3D image display apparatus according to the present invention. FIG. 2 may be, for example, a block diagram of a detailed configuration of the processor 130 of FIG. 1.
  • An image display apparatus includes a receiving part 210, a demodulating part 220, a demultiplexing part 230, an SI processing part 240 for signaling information, a video decoder 250, a 3D image formatter 260, and a controlling part 270.
  • the receiver 210 receives a digital broadcast signal including 3D image data from the content source 110 through a radio frequency (RF) channel.
  • the demodulator 220 demodulates the digital broadcast signal received by the receiver 210 in a demodulation scheme corresponding to a modulation scheme.
  • the demultiplexer 230 demultiplexes audio data, video data, and signaling information from the demodulated digital broadcast signal.
  • the demultiplexer 230 may demultiplex the digital broadcast signal by filtering using a packet identifier (PID).
  • The demultiplexer 230 outputs the demultiplexed video signal to the video decoder 250 and outputs the signaling information to the signaling information processor 240.
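The PID-based filtering performed by the demultiplexer 230 can be sketched as below. The 13-bit PID location in the 188-byte MPEG-2 TS packet header follows the MPEG-2 Systems layout; the function names are illustrative.

```python
def packet_pid(ts_packet: bytes) -> int:
    """Extract the 13-bit PID from a 188-byte MPEG-2 TS packet.

    The PID occupies the low 5 bits of byte 1 and all of byte 2.
    """
    if ts_packet[0] != 0x47:
        raise ValueError("lost sync: first byte is not 0x47")
    return ((ts_packet[1] & 0x1F) << 8) | ts_packet[2]

def filter_pid(packets, wanted_pid):
    """Keep only the packets carrying the wanted PID (demux filtering)."""
    return [p for p in packets if packet_pid(p) == wanted_pid]
```

A real demultiplexer would additionally reassemble PES packets and sections from the filtered packets; this sketch shows only the PID selection step.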
  • the signaling information may include SI (System Information) information such as Program Specific Information (PSI), Program and System Information Protocol (PSIP), and Digital Video Broadcasting-Service Information (DVB-SI).
  • the signaling information processor 240 processes the signaling information input from the demultiplexer 230 and outputs the signal to the controller 270.
  • the signaling information processor 240 may include a database (DB) that temporarily stores the signaling information to be processed, internally or externally. Such signaling information will be described in more detail in the following embodiments.
  • the signaling information processor 240 determines whether there is signaling information indicating whether the content is a 2D image or a 3D image. If the signaling information processor 240 determines that the signaling information exists, the signaling information processor 240 reads the signaling information and transmits the signaling information to the controller 270.
  • the video decoder 250 receives and decodes the demultiplexed video data.
  • the decoding may be performed based on the signaling information processed by the signaling information processor 240, for example.
  • the 3D image formatter 260 formats the 3D image data decoded by the video decoder 250 according to an output format and outputs the 3D image formatter to the output unit 140.
  • the 3D image formatter 260 may be activated only when the decoded image data is 3D image data, for example.
  • When deactivated, the 3D image formatter 260 may, for example, bypass the input image data without processing.
  • The 3D image formatter 260 performs the required conversion from the input (decoded) video format to the native 3D display format. Video processing such as artifact reduction, sharpness, contrast enhancement, de-interlacing, frame rate conversion, and other types of quality enhancement blocks may exist between the video decoder 250 and the 3D image formatter 260.
  • The present invention implements a function for processing a 3D video broadcast signal transmitted as a DTV broadcast signal and outputting the 3D video data on the screen in a DTV receiver supporting a 3D video processing function.
  • Descriptors for 3D services/events are defined to support the reception of stereoscopic 3D broadcast signals, and a method of receiving stereoscopic broadcast signals and supporting stereoscopic display output using them is described.
  • The existing terrestrial DTV reception standard is based on 2D video content; in particular, when 3DTV is serviced, a descriptor for the 3D codec must be defined.
  • the receiver should properly process such a changed signal to support 3D broadcast service reception and output.
  • SI standards related to existing DVB transmissions are limited to 2D video services. Therefore, in order to receive 3DTV signals, and in particular stereoscopic video signals, through terrestrial DTV broadcast channels, it must be possible to signal stereoscopic services within the existing SI standard, and to handle them effectively the DTV receiver must also apply a new design and implementation method supporting 3D broadcast reception.
  • a service type for indicating a 3D service in a service descriptor of the SDT is defined.
  • A 3D service descriptor is defined for providing detailed information about 3D services and events (programs).
  • In order to signal a 3D service through the EIT, a method for representing 3D through stream_content and component_type is defined, along with a method by which the receiver smoothly processes 2D/3D services by handling the newly defined 3D signaling.
  • Here, level means, for example, the service level of a service unit, or the content level of content within a service or of an event unit.
  • a descriptor format is mainly used.
  • this is merely an example and may be signaled by extending a concept of a conventional field of a table section or adding a new field.
  • FIG. 3 is a diagram illustrating bitstream syntax for an example of a network information table (NIT) section including a service list descriptor according to the present invention, and FIG. 4 is a diagram illustrating bitstream syntax for an example of a service list descriptor according to the present invention.
  • the NIT transmits information about the physical organization of multiplexes / TSs transmitted over a given network and the characteristics of the network itself.
  • The combination of original_network_id and transport_stream_id allows each TS to be uniquely identified throughout the application area covered herein.
  • Networks assign individual network_id values provided as unique identification codes for networks.
  • The network_id and original_network_id may take the same value or different values, subject to the restrictions assigned for network_id and original_network_id.
  • The receiver may store NIT information in non-volatile memory so that access time is minimized when switching between channels (channel hopping). The NIT may also be transmitted through networks other than the actual network.
  • the differentiation between the NIT for the actual network and the NIT for the other network may consist of different table_id values.
  • Certain sections forming part of the NIT may be sent in TS packets with a PID value of 0x0010.
  • In sections of the NIT that describe the actual network (i.e., the network of the TS of which the NIT is a part), the network_id field takes the value assigned for the actual network.
  • the table_id field may indicate that this table section is a NIT section by a predefined value.
  • the section_syntax_indicator field may be set to "1".
  • section_length is a 12-bit field, the first two bits of which are '00'. It describes the number of bytes of the section, starting immediately after the section_length field and including the CRC.
  • the section_length field does not exceed 1021, and the entire section has a maximum length of 1024 bytes.
  • The network_id field gives a label identifying the delivery system about which the NIT informs, distinguishing it from any other delivery system.
  • The version_number is incremented by one when a change in the information carried within the sub_table occurs. When its value reaches 31, it wraps around to 0. If current_next_indicator is set to 1, the version_number applies to the current sub_table; if current_next_indicator is 0, it applies to the next sub_table defined by table_id and network_id.
  • the section_number field may give a number of a section.
  • the section_number of the first section in the sub_table can be 0x00.
  • section_number will be incremented by 1 with each additional section with the same table_id and original_network_id.
  • the last_section_number field describes the number of the last section of the sub_table (ie, the section with the highest section_number) as part of this section.
  • the network_descriptors_length field may give the total length in bytes of the following network descriptors.
  • The transport_stream_loop_length field describes the total length in bytes of the TS loops that follow, ending immediately before the first byte of the CRC-32.
  • The transport_stream_id field serves as a label for identifying a TS about which the NIT informs, distinguishing it from any other multiplex in the delivery system.
  • the original_network_id field may give a label that identifies the network_id of the original delivery system.
  • the transport_descriptors_length field may describe the total length in bytes of the following TS descriptors.
  • the CRC field may include a CRC value that gives zero output of registers at the decoder after processing the entire section.
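The CRC property described above ("zero output of registers at the decoder after processing the entire section") can be illustrated with the CRC-32 used for MPEG-2/DVB SI sections (polynomial 0x04C11DB7, initial value 0xFFFFFFFF, no bit reflection, no final XOR). This is a sketch of a section integrity check, not receiver production code.

```python
def crc32_mpeg2(data: bytes) -> int:
    """CRC-32 as used for MPEG-2 PSI / DVB SI sections:
    poly 0x04C11DB7, init 0xFFFFFFFF, MSB-first, no final XOR."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte << 24
        for _ in range(8):
            if crc & 0x80000000:
                crc = ((crc << 1) ^ 0x04C11DB7) & 0xFFFFFFFF
            else:
                crc = (crc << 1) & 0xFFFFFFFF
    return crc

def section_crc_ok(section: bytes) -> bool:
    """A section whose trailing 4 bytes hold the correct CRC-32 leaves
    the CRC register at zero when the whole section is processed."""
    return crc32_mpeg2(section) == 0
```

Because the CRC uses no final XOR, appending the computed CRC to the data always drives the register to zero, which is exactly the decoder-side check the text describes.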
  • The service list descriptor is a descriptor of the NIT; by using it, the overall 3D service list can be obtained.
  • the service list descriptor provides means for listing services by service_id and service_type.
  • the descriptor_tag field may identify a corresponding descriptor by predefined values of descriptor_tag.
  • the descriptor_length field provides information on the total length of this descriptor.
  • the service_id field uniquely identifies a service in the TS.
  • The service_id is the same as the program_number in the corresponding program_map_section, except when the value of the service_type field is 0x04, 0x18 or 0x1B (NVOD reference services), in which case the service_id does not have a corresponding program_number.
  • the service_type field may describe the type of service.
  • the service_type value for the service is described in more detail in Table 1.
  • The image display apparatus filters the NIT sections of FIG. 3, parses the service_list_descriptor of FIG. 4 according to the present invention included in the filtered NIT sections, and can then separately output a list of 3D services (programs) by collecting the service_id entries whose service_type field indicates a "frame-compatible 3DTV" service.
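The receiver step described above can be sketched as a parse of the service_list_descriptor body, which is a loop of 16-bit service_id plus 8-bit service_type entries. The constant FRAME_COMPATIBLE_3DTV = 0x1C is an assumption standing in for the actual code from Table 1, which is not reproduced in this text.

```python
FRAME_COMPATIBLE_3DTV = 0x1C  # assumed service_type value; the real code comes from Table 1

def parse_service_list_descriptor(payload: bytes):
    """Parse the body of a service_list_descriptor (after descriptor_tag
    and descriptor_length): a loop of service_id(16) + service_type(8)."""
    services = []
    for i in range(0, len(payload), 3):
        service_id = (payload[i] << 8) | payload[i + 1]
        service_type = payload[i + 2]
        services.append((service_id, service_type))
    return services

def list_3d_services(payload: bytes):
    """Return the service_ids whose service_type marks a frame-compatible 3DTV service."""
    return [sid for sid, stype in parse_service_list_descriptor(payload)
            if stype == FRAME_COMPATIBLE_3DTV]
```

Given a descriptor payload carrying one ordinary service and one 3D service, list_3d_services would return only the latter's service_id.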
  • FIG. 5 is a diagram illustrating bitstream syntax for an example of a service description table (SDT) section including a service descriptor (service_descriptor) according to the present invention, and FIG. 6 is a diagram illustrating bitstream syntax for an example of a service descriptor according to the present invention.
  • Each sub_table of the SDT describes services included in a specific TS.
  • the services may be part of an actual TS or other TSs. These can be identified by table_id.
  • Certain sections forming part of the SDT may be sent by TS packets with a PID value of 0x0011.
  • Sections of the SDT that describe the actual TS have a table_id value of 0x42, with the same table_id_extension (transport_stream_id) and the same original_network_id.
  • Other sections of the SDT referencing a different TS from the actual TS may have 0x46 as the table_id value.
  • the table_id field may indicate that this table section is an SDT section by a predefined value.
  • the section_syntax_indicator field may be set to "1".
  • section_length is a 12-bit field, the first two bits of which are '00'. It describes the number of bytes of the section, starting immediately after the section_length field and including the CRC.
  • the section_length field does not exceed 1021, and the entire section has a maximum length of 1024 bytes.
  • The transport_stream_id field serves as a label for identifying the TS about which the SDT informs, distinguishing it from any other multiplex in the delivery system.
  • The version_number is incremented by one when a change in the information carried within the sub_table occurs. When its value reaches 31, it wraps around to 0. If current_next_indicator is set to 1, the version_number applies to the current sub_table; if current_next_indicator is 0, it applies to the next sub_table defined by table_id and network_id.
  • the section_number field may give a number of a section.
  • the section_number of the first section in the sub_table can be 0x00.
  • section_number will be incremented by 1 with each additional section with the same table_id, transport_stream_id and original_network_id.
  • the last_section_number field describes the number of the last section of the sub_table (ie, the section with the highest section_number) as part of this section.
  • the original_network_id field may give a label that identifies the network_id of the originating delivery system.
  • Service_id is a 16-bit field provided as a label to distinguish this service from any other service in the TS.
  • The service_id may be the same as the program_number in the corresponding program_map_section.
  • EIT_schedule_flag is a 1-bit field. If set to 1, it indicates that EIT schedule information for the service exists in the current TS; if the flag is 0, EIT schedule information for the service does not exist in the current TS. EIT_present_following_flag is also a 1-bit field. If set to 1, it indicates that EIT_present_following information for the service exists in the current TS (see the information on the maximum time interval between occurrences of an EIT present/next sub_table); if the flag is 0, there is no EIT present/next sub_table for the service in the TS.
  • running_status may indicate the status of a service as a 3-bit field. For the NVOD reference event, the value of running_status may be set to zero.
  • free_CA_mode is a 1-bit field. If set to 0, all component streams of the event are not scrambled. If set to 1, one or more streams are controlled by the CA system.
  • the descriptors_loop_length field may give the total length in bytes of the following descriptors.
  • the CRC field may include a CRC value that gives zero output of registers at the decoder after processing the entire section.
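The SDT service loop fields described above (service_id, EIT flags, running_status, free_CA_mode, descriptors_loop_length) can be pulled out of a section with plain bit arithmetic. This is a sketch of one loop entry following the standard DVB layout; the dictionary-based return shape is an illustrative choice.

```python
def parse_sdt_service_entry(data: bytes, off: int):
    """Parse one service entry of an SDT section at byte offset `off`.

    Layout: service_id(16), reserved(6), EIT_schedule_flag(1),
    EIT_present_following_flag(1), running_status(3), free_CA_mode(1),
    descriptors_loop_length(12), then descriptors_loop_length bytes of descriptors.
    Returns (entry_dict, offset_of_next_entry).
    """
    service_id = (data[off] << 8) | data[off + 1]
    eit_schedule_flag = (data[off + 2] >> 1) & 0x01
    eit_present_following_flag = data[off + 2] & 0x01
    running_status = (data[off + 3] >> 5) & 0x07
    free_ca_mode = (data[off + 3] >> 4) & 0x01
    descriptors_loop_length = ((data[off + 3] & 0x0F) << 8) | data[off + 4]
    descriptors = data[off + 5: off + 5 + descriptors_loop_length]
    return {
        "service_id": service_id,
        "EIT_schedule_flag": eit_schedule_flag,
        "EIT_present_following_flag": eit_present_following_flag,
        "running_status": running_status,
        "free_CA_mode": free_ca_mode,
        "descriptors": descriptors,
    }, off + 5 + descriptors_loop_length
```

A full SDT parser would call this in a loop until the CRC-32 at the end of the section is reached.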
  • The service descriptor is a descriptor of the SDT; using the service_type field included in the service descriptor, whether a specific service_id is 3D can be determined. In addition, by using the service descriptor, whether the 3D service can be decoded and output can be determined.
  • The service descriptor, along with service_type, provides the names of the service and the service provider in text form.
  • the descriptor_tag field may identify a corresponding descriptor by predefined values of descriptor_tag.
  • the descriptor_length field provides information on the total length of this descriptor.
  • the service_type field may describe the type of service.
  • the assignment of service_type for a service is described in Table 1.
  • The assignment of service_type from Table 1 is in some cases self-explanatory, for example the MPEG-2 HD digital television service. However, the decision is not always straightforward.
  • Consider the service_type for a digital television service. This service_type does not provide an explicit indication to the receiver of how the components of the service are encoded. For a particular platform, a particular encoding may be implicitly linked to this service_type and referenced by the receiver.
  • This service_type may be used for an MPEG-2 SD digital television service. However, it may also be used for services using other encodings, including encodings that have specific entries of their own, such as the MPEG-2 HD digital television service.
  • DVB has not modified the definition of this service_type from "digital television service" to "MPEG-2 SD digital television service" because it was already present and in use in the context of other (non-MPEG-2 SD) encodings.
  • All receivers can decode and output MPEG-2 SD encoded material.
  • A receiver may therefore present any service assigned this service_type to the user for selection, on the assumption that it is MPEG-2 SD coded material.
  • However, this may not be supported by the receiver, depending on the encoding actually used. The service provider therefore needs to allocate the service_type with care, based on the user experience it wishes to achieve and on whether receivers can decode and output the assigned service.
  • If the desired user experience is that the service is also available to users of MPEG-2 SD-only receivers, the service is assigned a service_type of 0x01 (digital television service). However, if the desired user experience is that the service is visible only to receivers that can actually present it, the service is assigned a service_type of 0x11 (MPEG-2 HD digital television service).
  • This service_type may be assigned to a service that includes both an MPEG-2 SD encoding and an alternative encoding, such as MPEG-4 HD, of the same material. The assumption is reasonable that all receivers can decode and present MPEG-2 SD encoded material; the service can therefore be presented to the user at least in its MPEG-2 SD coded form.
  • Components used for the different encodings can be distinguished for decoding purposes by the value assigned for stream_type in the PSI and/or by the use of component descriptors in the SI.
  • The value of service_type may indicate an advanced codec.
  • The advanced codec service_types indicate that the service is encoded using something other than MPEG-2. More specifically, the assignment of one of these service_types means that the receiver must support a codec other than MPEG-2 to be able to decode and present the service. On this basis, MPEG-2 SD-only receivers may not present a service assigned one of these service_types for user selection. The assignment of one of these service_types provides a generic indication of the use of some advanced codec; it does not by itself allow the receiver to determine whether it can decode and output the service. Of course, for a particular platform, a particular encoding may be implicitly linked with one of these service_types and referenced by the receiver.
  • When a service is assigned one of the advanced codec service_types, the component descriptor is used in the SI to indicate the specific advanced codec used. This allows the receiver to handle the service properly and to avoid confusion in deciding whether it can decode and output the service.
  • The value of service_type may indicate an advanced codec frame compatible stereoscopic HD service.
  • The frame compatible stereoscopic HD values allow the broadcaster to signal that the service operates (primarily) as a stereoscopic service. The use of these values requires careful consideration of the consequences for the legacy receiver population, which as a result ignores these services. A broadcaster may therefore instead choose to signal a frame compatible stereoscopic service as a normal HD service and use alternate signaling to indicate that the service (or event) is in a frame compatible stereoscopic format.
  • The service_provider_name_length field describes the number of bytes in the following loop of Char fields that carry the characters of the name of the service provider. Char is an 8-bit field; a string of Char fields describes the name of the service provider or service. Text information is coded using the specified character sets and methods.
  • The service_name_length field describes the number of bytes in the following loop of Char fields that carry the characters of the name of the service.
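Putting the service_descriptor fields above together, parsing the descriptor body is a matter of reading service_type and the two length-prefixed name strings. This sketch assumes plain Latin text for simplicity; a real receiver must honour the DVB character-set coding rules referenced above.

```python
def parse_service_descriptor(payload: bytes):
    """Parse the body of a service_descriptor (after descriptor_tag and
    descriptor_length): service_type(8), service_provider_name_length(8),
    provider name chars, service_name_length(8), service name chars."""
    service_type = payload[0]
    plen = payload[1]
    provider = payload[2:2 + plen].decode("latin-1")  # simplification; see DVB char-set rules
    nlen = payload[2 + plen]
    name = payload[3 + plen:3 + plen + nlen].decode("latin-1")
    return service_type, provider, name
```

The returned service_type is then the value compared against Table 1 to decide whether the service is, for example, a frame-compatible 3DTV service.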
  • FIGS. 7 and 8 illustrate bitstream syntax for examples of a Program Map Table (PMT) section and an Event Information Table (EIT) section including a service descriptor according to the present invention, and FIG. 9 illustrates bitstream syntax for an example of a 3D service descriptor according to the present invention.
  • the PMT may provide a mapping between program numbers and program elements containing them. A single instance of such a mapping may be referred to as a program definition.
  • The PMT is the complete collection of all program definitions for the TS. This table is transmitted in packets whose PID values are selected by the encoder. Sections are identified by the program_number field.
  • The table_id field may indicate by a predefined value that this table section is a PMT section.
  • the section_syntax_indicator field may be set to "1".
  • section_length is a 12-bit field whose first two bits are '00'; the remaining 10 bits describe the number of bytes of the section, starting immediately after the section_length field and including the CRC.
  • the section_length field does not exceed 1021 (0x3FD).
  • the program_number field describes a program of applicable program_map_PID.
  • One program definition is transmitted in only one TS_program_map_section. This implies that a program definition is never longer than 1016 (0x3F8) bytes.
  • Program_number may be used, for example, as designation for a broadcast channel.
  • The version_number is the version number of the TS_program_map_section. This version number is incremented by 1 modulo 32 when a change in the information carried within the section occurs.
  • the version number refers to a single program's definition, and therefore is for a single section.
  • If current_next_indicator is set to 1, the version_number applies to the currently applicable TS_program_map_section; if current_next_indicator is 0, it applies to the next applicable TS_program_map_section.
  • the section_number field may be 0x00.
  • the last_section_number field is 0x00.
  • the PCR_PID field indicates the PID of TS packets containing valid PCR fields for the program described by program_number. If there is no PCR associated with the program definition for private streams, this field has a value of 0x1FFF.
  • program_info_length is a 12-bit field. The first two bits are 00 and the remaining 10 bits describe the byte numbers of the descriptors of the immediately following program_info_length field.
  • stream_type is an 8-bit field that describes the type of program element that is transmitted in packets with the PID of the value described by elementary_PID.
  • the auxiliary stream is available for data types defined by this specification such as program stream directory and program stream map, rather than audio, video and DSM-CC.
  • Elementary_PID is a 13-bit field and describes the PID of TS packets for transmitting the associated program element.
  • ES_info_length is a 12-bit field, where the first two bits are 00 and the remaining 10 bits describe the number of bytes of descriptors of the program element associated with the immediately following ES_info_length field.
  • the CRC field may include a CRC value that gives zero output of registers at the decoder after processing the entire TS program map section.
  • The EIT provides information in chronological order about the events included in each service. All EIT sub-tables for the actual TS have the same transport_stream_id and original_network_id values.
  • The current/next table contains information pertaining to the current event and the chronologically next event carried by a given service on the actual TS or on another TS, except in the case of an NVOD reference service, which may have more than two event descriptions.
  • Event schedule tables for the actual TS or for other TSs contain a list of events in schedule form; that is, the schedule includes events taking place at some time beyond the next event. Event information is ordered chronologically. Sections forming part of the EIT are transmitted in TS packets with a PID value of 0x0012.
  • the table_id field may indicate that this table section is an EIT section by a predefined value.
  • the section_syntax_indicator field may be set to "1".
  • The section_length field describes the number of bytes of the section, starting immediately after the section_length field and including the CRC.
  • the section_length field does not exceed 4093, and the entire section has a maximum length of 4096 bytes.
  • the service_id field may be provided as a label to identify the service from another service in the TS.
  • service_id may be the same as program_number in the corresponding program_map_section.
  • the version_number field is a version number of sub_table.
  • the version_number may increase by one when a change of information transmitted in the sub_table occurs. When its value reaches 31, it wraps around to zero. If current_next_indicator is set to 1, then version_number indicates that it is applicable to the current sub_table, and if current_next_indicator is 0, it is applicable to the next sub_table.
  • the section_number field may give a number of a section.
  • the section_number of the first section in the sub_table can be 0x00.
  • section_number will be incremented by 1 with each additional section having the same table_id, service_id, transport_stream_id, and original_network_id.
  • The sub_table may be structured as a number of segments.
  • the section_number in each segment is incremented by 1 with each additional section. In numbering, a gap is allowed between the last section of the segment and the first section of the adjacent segment.
  • the last_section_number field describes the number of the last section of the sub_table (ie, the section with the highest section_number) as part of this section.
  • The transport_stream_id field serves as a label for identifying the TS about which the EIT informs, distinguishing it from any other multiplex in the delivery system.
  • the original_network_id field may give a label that identifies the network_id of the original delivery system.
  • the segment_last_section_number field may describe the number of the last section of this segment of the sub_table. For sub_tables that are not segmented, this field may be set to the same value as the last_section_number field.
  • the last_table_id field may identify the last table_id used.
  • the event_id field may include an identification number of the described event (uniquely assigned in the service definition).
  • The start_time field may include the start time of an event in UTC (Universal Time, Co-ordinated) and MJD (Modified Julian Date). This field is coded with 16 bits giving the 16 LSBs of the MJD, followed by 24 bits coded as 6 digits of 4-bit Binary Coded Decimal (BCD). If start_time is not defined (e.g., for an event in an NVOD reference service), all bits of the field may be set to 1. For example, 93/10/13 12:45:00 is coded "0xC079124500".
  • the Duration field contains the duration of the event in hours, minutes, and seconds. The format is 6 digits, 4-bit BCD, i.e. 24 bits. For example, 01:45:30 is coded as "0x014530".
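The MJD/BCD coding of start_time and Duration can be decoded as sketched below, using the MJD-to-date conversion given in Annex C of ETSI EN 300 468; the worked example "0xC079124500" from the text decodes back to 93/10/13 12:45:00.

```python
def decode_mjd(mjd: int):
    """Convert a 16-bit Modified Julian Date to (year, month, day),
    per the conversion in ETSI EN 300 468 Annex C."""
    y1 = int((mjd - 15078.2) / 365.25)
    m1 = int((mjd - 14956.1 - int(y1 * 365.25)) / 30.6001)
    day = mjd - 14956 - int(y1 * 365.25) - int(m1 * 30.6001)
    k = 1 if m1 in (14, 15) else 0
    return 1900 + y1 + k, m1 - 1 - 12 * k, day

def decode_bcd(byte: int) -> int:
    """Decode one byte holding two 4-bit BCD digits (e.g. 0x45 -> 45)."""
    return 10 * (byte >> 4) + (byte & 0x0F)

def decode_start_time(field: bytes):
    """Decode the 40-bit start_time field: 16-bit MJD, then hh/mm/ss in BCD."""
    mjd = (field[0] << 8) | field[1]
    year, month, day = decode_mjd(mjd)
    return year, month, day, decode_bcd(field[2]), decode_bcd(field[3]), decode_bcd(field[4])
```

Applying decode_bcd to the three Duration bytes likewise recovers hours, minutes, and seconds (0x014530 decodes to 01:45:30).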
  • the running_status field may indicate the status of an event. For the NVOD reference event, the value of running_status may be set to zero.
  • free_CA_mode is a 1-bit field. If set to 0, all component streams of the event are not scrambled. If set to 1, one or more streams are controlled by the CA system.
  • the descriptors_loop_length field may give the total length in bytes of the following descriptors.
  • the CRC field may include a CRC value that gives zero output of registers at the decoder after processing the entire private section.
  • the 3D service descriptor according to the present invention may be included in the above-described SDT of FIG. 5 or PMT of FIG. 7.
  • It indicates that the corresponding service or program is 3D.
  • information contained in the 3D service descriptor may be used to determine 3D video format information.
  • The 3D service descriptor included in the EIT can be used to know in advance whether a specific event is 3D.
  • the 3D service descriptor contains detailed information about the 3D service / program and is located in the PMT or SDT. (It can be located in the EIT, in which case it represents 3D information about the program / event being announced.)
  • The 3D service descriptor exists when the service_type is "frame-compatible 3DTV" or when the stream_content and component_type for an event are "frame-compatible 3D", and includes the following fields.
  • the descriptor_tag field may identify a corresponding descriptor by predefined values of descriptor_tag.
  • the descriptor_length field provides information on the total length of this descriptor.
  • the 3D_structure field indicates a video format of a 3D program and may be defined as shown in Table 2, for example.
  • If the 3D_structure field value is 0000, it means full resolution Left & Right; 0001 means field alternative; 0010 means line alternative; 0100 means left image plus depth (L + depth); 0110 means top-and-bottom (TaB); 0111 means frame sequential; and 1000 means side-by-side (SbS). However, the field values and meanings shown in Table 2 are examples and are not limited to the above values and meanings.
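The 3D_structure values listed above from Table 2 amount to a simple lookup, which a receiver might hold as follows (the table reflects only the example values given in the text; other values are treated as reserved here):

```python
# 3D_structure field values -> video formats, per the Table 2 examples above.
THREE_D_STRUCTURE = {
    0b0000: "full resolution Left & Right",
    0b0001: "field alternative",
    0b0010: "line alternative",
    0b0100: "L + depth",
    0b0110: "top-and-bottom (TaB)",
    0b0111: "frame sequential",
    0b1000: "side-by-side (SbS)",
}

def decode_3d_structure(value: int) -> str:
    """Map a 4-bit 3D_structure value to its video format name."""
    return THREE_D_STRUCTURE.get(value, "reserved")
```

The decoded format then tells the 3D image formatter which unpacking (e.g. SbS vs. TaB) to apply.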
  • When the value of the 3D_metadata_location_flag field is '01', the 3D_metadata_type, 3D_metadata_length, and 3D_metadata fields are additionally present in the 3D service descriptor. When it is '00', the corresponding data is not transmitted. When it is '10', the 3D_metadata_type, 3D_metadata_length, and 3D_metadata fields are transmitted in the video area.
  • the 3D_sampling field informs information about the frame-compatible format of the 3D program and may be defined as shown in Table 3, for example.
  • in FIGS. 10 to 12, FIGS. 10(a), 11(a), and 12(a) represent odd positions, and FIGS. 10(b), 11(b), and 12(b) represent even positions.
  • when the 3D_sampling field value is 0000 to 0011, it means sub-sampling.
  • in detail, a 3D_sampling field value of 0000 relates to sub-sampling with odd left (L) and odd right (R); 0001 to sub-sampling with odd left (L) and even right (R); 0010 to sub-sampling with even left (L) and odd right (R); and 0011 to sub-sampling with even left (L) and even right (R).
  • a value of 0100-0111 of 3D_sampling field means a quincunx matrix.
  • in detail, a 3D_sampling field value of 0100 relates to the quincunx matrix with odd left (L) and odd right (R); 0101 to the quincunx matrix with odd left (L) and even right (R); 0110 to the quincunx matrix with even left (L) and odd right (R); and 0111 to the quincunx matrix with even left (L) and even right (R).
  • in the above description, the case where the 3D video format is SbS is described as an example; however, the 3D video format may be defined as TaB in the same manner or may be additionally defined in the above example.
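The 3D_sampling decoding described above can be sketched as a small helper; the function name is illustrative and the mapping follows the value descriptions given in the text.

```python
def decode_3d_sampling(value):
    """Decode the 4-bit 3D_sampling field of the 3D service descriptor.

    Per the description above, values 0000-0011 indicate sub-sampling and
    0100-0111 a quincunx matrix; within each group the two low bits give
    the parity (odd/even) of the left and right view samples.
    """
    if not 0b0000 <= value <= 0b0111:
        return None  # remaining values are not covered by the text above
    method = "sub-sampling" if value <= 0b0011 else "quincunx matrix"
    left = "even" if (value >> 1) & 1 else "odd"
    right = "even" if value & 1 else "odd"
    return method, f"{left} left (L)", f"{right} right (R)"
```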
  • the 3D_orientation field indicates the pixel arrangement form of the left and right view image data in the 3D program and may be defined as shown in Table 4, for example.
  • when the 3D_orientation field value is 00, the 3D orientation of the video means that neither the left picture nor the right picture is inverted (normal); 01 means that only the right picture is inverted; 10 means that only the left picture is inverted; and 11 means that both the left and right pictures are inverted.
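The 3D_orientation interpretation above amounts to two independent inversion flags; a minimal sketch (function name illustrative):

```python
def decode_3d_orientation(value):
    """Decode the 2-bit 3D_orientation field (Table 4 as described above).

    Returns (left_inverted, right_inverted): 00 = neither inverted,
    01 = right only, 10 = left only, 11 = both inverted.
    """
    return value in (0b10, 0b11), value in (0b01, 0b11)
```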
  • the 3D_metadata_type field is valid when 3D_metadata_exist_flag is '1', and 3D_metadata_length and 3D_metadata may be defined using this field as shown in Table 5, for example.
  • Table 5:
    3D_metadata_type  3D_metadata_length  3D_metadata      Meaning
    000               4                   3D_metadata[0]   parallax_zero
                                          3D_metadata[1]   parallax_scale
                                          3D_metadata[2]   Dref
                                          3D_metadata[3]   Wref
    001               4                   3D_metadata[0]   xB
                                          3D_metadata[1]   Zref
                                          3D_metadata[2]   Dref
                                          3D_metadata[3]   Wref
  • when the value of the 3D_metadata_type field is 000, 3D_metadata_length is 4, and 3D_metadata may have some or all of the following four values.
  • 3D_metadata[0] means parallax_zero,
  • 3D_metadata[1] means parallax_scale,
  • 3D_metadata[2] means Dref, and
  • 3D_metadata[3] means Wref.
  • when the value of the 3D_metadata_type field is 001, 3D_metadata_length is also 4, and 3D_metadata may have some or all of the following four values.
  • 3D_metadata[0] means xB,
  • 3D_metadata[1] means Zref,
  • 3D_metadata[2] means Dref, and
  • 3D_metadata[3] means Wref.
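The 3D_metadata labeling per Table 5 can be sketched as follows; the dict representation is illustrative, and the 65 mm default for xB follows the note given below with Equation 2.

```python
# Meaning of the four 3D_metadata entries for each 3D_metadata_type
# value, per Table 5.
METADATA_FIELDS = {
    0b000: ("parallax_zero", "parallax_scale", "Dref", "Wref"),
    0b001: ("xB", "Zref", "Dref", "Wref"),
}

def interpret_3d_metadata(metadata_type, values):
    """Label the transmitted 3D_metadata values (3D_metadata_length is 4
    for both defined types). For type 000 the xB parameter is not
    transmitted, so the default inter-eye distance of 65 mm is assumed."""
    params = dict(zip(METADATA_FIELDS[metadata_type], values))
    params.setdefault("xB", 65)  # default in mm when not transmitted
    return params
```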
  • the parameters according to Table 5 are environmental values intended in the process of producing 3D content; the receiver uses them to reproduce the stereoscopic sense intended by the producer.
  • each parameter is data for accurately interpreting each parallax value when a parallax map, like a depth map, is transmitted. In other words, when the parallax map is received, the parallax values are converted in consideration of the reference values and the current viewing environment to generate an image of a new viewpoint.
  • the Dref parameter is the distance between the viewer and the screen defined as a reference during the 3D content creation process (in cm).
  • the Wref parameter is the horizontal size of the screen defined as a reference during 3D content creation (in cm).
  • the Zref parameter is a depth value (unit: cm) defined as a reference during 3D content creation.
  • the xB parameter is the distance between the viewer's eyes defined by the reference (reference value is 65 mm).
  • the reference parallax Pref is calculated using Equation 1 (assuming that each value of the parallax map is expressed in N-bits).
  • the parallax on the actual screen is calculated as in Equation 2 (see ISO 23002-3 for the derivation of the equation).
  • in Equation 2, D and W represent the viewer distance of the receiver and the horizontal size of the screen, respectively. If 3D_metadata_type is 000, the xB parameter is not transmitted; in this case, the value is assumed to be 65 mm.
  • FIG. 13 is a diagram illustrating an example of bitstream syntax of a component descriptor according to the present invention.
  • the component descriptor of FIG. 13 may be defined as the descriptor of the SDT of FIG. 5, for example, to determine whether the corresponding service is 3D.
  • the component descriptor of FIG. 13 may also be defined as a descriptor of the EIT of FIG. 8 to determine whether the corresponding event is 3D.
  • the component descriptor may be used to identify the type of component stream and provide a text description of the elementary stream.
  • the descriptor_tag field may identify a corresponding descriptor by predefined values of descriptor_tag.
  • the descriptor_length field provides information on the total length of this descriptor.
  • the stream_content field may describe the type of the stream (video, audio or EBU-data). Coding of this field is described in the table.
  • the Component_type field describes the type of a video, audio or EBU-data component.
  • the component_tag field has the same value as the component_tag field in the stream identifier descriptor (if present in the PSI program map section) for the component stream.
  • the ISO_639_language_code field identifies the language of the component (in the case of audio or EBU-data) and identifies the text description contained within this descriptor.
  • the ISO_639_language_code field may include a 3-character code described in ISO 639-2. Each character is coded into 8 bits and inserted into the 24-bit field. For example, French has the 3-character code "fre", coded as "0110 0110 0111 0010 0110 0101".
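The 24-bit packing described above can be sketched as follows; the function name is illustrative, and the "fre" example matches the bit pattern quoted in the text.

```python
def encode_iso639_language_code(code):
    """Pack a 3-character ISO 639-2 code into the 24-bit
    ISO_639_language_code field, one 8-bit coded character at a time."""
    assert len(code) == 3
    value = 0
    for ch in code:
        value = (value << 8) | ord(ch)  # append next 8-bit character
    return value
```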
  • the Text_char field may have a string that describes the text description of the component stream. Textual information is coded using character sets and methods.
  • the image display apparatus may identify a 3D service or 3D event of a corresponding service or event through the component descriptor.
  • Table 6:
    Stream_content  Component_type  Description
    0x01            0x11            MPEG-2 video, frame-compatible 3D video, 25 Hz
    0x01            0x12            MPEG-2 video, frame-compatible 3D video, 30 Hz
    0x03            0x14            DVB subtitles (normal) for display on 3D monitor
    0x03            0x24            DVB subtitles (for the hard of hearing) for display on 3D monitor
    0x05            0x11            AVC/H.264 video, frame-compatible 3D video, 25 Hz
    0x05            0x12            AVC/H.264 video, frame-compatible 3D video, 30 Hz
  • when stream_content is 0x01, it indicates MPEG-2 video; component_type 0x11 then indicates a frame-compatible 3D video of 25 Hz, and component_type 0x12 indicates a frame-compatible 3D video of 30 Hz.
  • likewise, when stream_content is 0x05 (AVC/H.264 video), component_type 0x11 indicates a frame-compatible 3D video of 25 Hz, and component_type 0x12 indicates a frame-compatible 3D video of 30 Hz.
  • when stream_content is 0x03 and component_type is 0x15, DVB subtitles (normal) for display on a 3D monitor are indicated.
  • when stream_content is 0x03 and component_type is 0x25, DVB subtitles for the hard of hearing for display on a 3D monitor are indicated.
  • translation subtitles (normal) are usually white and centered on the screen; for hearing audiences, the subtitles only identify speakers and sound effects in addition to the necessary dialog. Hard-of-hearing subtitles recognize the extra needs of deaf/hard-of-hearing audiences. In short, normal subtitles are dialog-centric, while hard-of-hearing subtitles also include overall contextual information, such as who is talking, for people who cannot hear well.
  • the image display apparatus may parse the component descriptor of FIG. 13, extract stream_content and component_type field values, identify whether the corresponding service is a 3D service, and also know whether to decode and output the corresponding service or event.
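The identification step described above can be sketched as a lookup over the (stream_content, component_type) pairs of Table 6; the table and function names are illustrative.

```python
# (stream_content, component_type) pairs from Table 6 that identify
# 3D-related components.
FRAME_COMPATIBLE_3D = {
    (0x01, 0x11): "MPEG-2 video, frame-compatible 3D video, 25 Hz",
    (0x01, 0x12): "MPEG-2 video, frame-compatible 3D video, 30 Hz",
    (0x03, 0x14): "DVB subtitles (normal) for display on 3D monitor",
    (0x03, 0x24): "DVB subtitles (for the hard of hearing) for display on 3D monitor",
    (0x05, 0x11): "AVC/H.264 video, frame-compatible 3D video, 25 Hz",
    (0x05, 0x12): "AVC/H.264 video, frame-compatible 3D video, 30 Hz",
}

def describe_component(stream_content, component_type):
    """Return the Table 6 description when the pair identifies a
    3D-related component, or None otherwise."""
    return FRAME_COMPATIBLE_3D.get((stream_content, component_type))
```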
  • FIG. 14 illustrates an example of a bitstream syntax of a linkage descriptor according to the present invention
  • FIG. 15 is a diagram illustrating an example of a method of signaling a corresponding 3D service using a linkage descriptor according to the present invention.
  • the linkage descriptor may be included in, for example, the SDT or the EIT, and the image display apparatus may identify a 3D service or event corresponding to a specific 2D service_id currently being viewed or a specific 2D event_id to be broadcasted in the future.
  • the linkage_type included in the linkage descriptor is 0x05 (service replacement service), and in addition, the replacement type may be designated as 3D in the private_data_byte region.
  • when linkage descriptors are transmitted in the EIT, linkage_type may be set to 0x0D (event linkage), and the existence of the corresponding 3D service may be determined by using the 3D service descriptor or component descriptor for the target_event_id obtained through the EIT.
  • another method is to specify a new linkage_type value of 0x0E and define its description as "3D service".
  • yet another method uses linkage_type 0x05 (service replacement service), but the receiver directly parses the SDT and EIT for the target service and determines from its service_type whether it is 3D. This method can also be used to find the 2D service corresponding to a 3D service.
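A hedged sketch of locating the 3D replacement service for the 2D service currently being viewed: the dict representation of a parsed linkage descriptor is hypothetical, with field names following the FIG. 14 syntax, and the replacement_type value 0x02 for 3D follows the SDT signaling example described later in this document.

```python
def find_3d_replacement(linkage_descriptors, replacement_type_3d=0x02):
    """Return the service_id of a linked 3D service signaled with
    linkage_type 0x05 (service replacement service) and a 3D replacement
    type in the private data, or None if no such link is present."""
    for d in linkage_descriptors:
        if d.get("linkage_type") == 0x05 and \
           d.get("replacement_type") == replacement_type_3d:
            return d["service_id"]
    return None

# Hypothetical parsed descriptors for a 2D service under view.
parsed = [
    {"linkage_type": 0x0D, "service_id": 9},  # event linkage, ignored here
    {"linkage_type": 0x05, "replacement_type": 0x02, "service_id": 3},
]
```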
  • the linkage descriptor is provided to identify the service upon the user's request for additional information related to the particular entity described by the SI system.
  • the location of the linkage descriptor in the syntax indicates the entity for which additional information is available.
  • linkage descriptors located in NITs indicate services that provide additional information on the network
  • linkage descriptors in BAT provide links to service information for things such as bouquets.
  • the CA replacement service may be identified using a linkage descriptor. This service may be automatically selected by the IRD if the CA denies access to the particular entity described by the SI system.
  • the service replacement service may be identified using a linkage descriptor. This service replacement service may be automatically selected by the IRD when the running state of the current service is set to not_running.
  • the transport_stream_id field identifies a TS including the indicated information service.
  • the original_network_id field may give a label that identifies the network_id of the original delivery system of the indicated information service.
  • the service_id field uniquely identifies the information service in the TS.
  • Service_id has the same value as program_number in the corresponding program_map_section. If the linkage_type field has a value of 0x04, the service_id field is not relevant and is set to 0x0000.
  • the linkage_type field describes the type of linkage.
  • when linkage_type is 0x0D or 0x0E, the descriptor is valid only when it is transmitted in the EIT.
  • the mobile_hand-over_info () field is coded according to a predefined manner.
  • the event_linkage_info () field is also coded according to a predefined method.
  • the extended_event_linkage_info () field is also coded according to a predefined method.
  • private_data_byte is an 8-bit field and has a privately defined value.
  • a PAT defines a program_number value and a PMT_PID of a corresponding program.
  • the image display apparatus extracts and parses the PMT from the PAT.
  • in the case of a 2D service, the PMT indicates the stream_type and program_number of the corresponding program.
  • for example, the corresponding stream may be an audio stream, in which case the PID of the audio ES may be 0x111.
  • when program_number is 0xbc, it may indicate that the corresponding stream is a video stream, in which case the PID of the video ES is 0x112.
  • the PMT further defines one more program_number in addition to the above-described stream_type and program_number.
  • when program_number is 0xbd, it may indicate a 3D extension; in this case, it may indicate that the PID of the video ES is 0x113.
  • an image display apparatus capable of supporting 3D services may identify and process 3D services by extracting and parsing one stream_type value and two program_number values from the PMT.
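The PMT example above can be sketched as follows: two program_number values share the service, 0xbc signaling the 2D streams and 0xbd the 3D extension. The dict layout is illustrative, not a bitstream parser.

```python
PMT_EXAMPLE = {
    0xBC: {"audio_es_pid": 0x111, "video_es_pid": 0x112},  # 2D service
    0xBD: {"video_es_pid": 0x113},                         # 3D extension
}

def video_pids_to_filter(pmt, supports_3d):
    """A legacy receiver filters only the base video PID; a 3D-capable
    receiver additionally filters the extension-view PID when present."""
    pids = [pmt[0xBC]["video_es_pid"]]
    if supports_3d and 0xBD in pmt:
        pids.append(pmt[0xBD]["video_es_pid"])
    return pids
```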
  • the SDT may signal the corresponding service by mapping the program_number of the PMT through the service_id.
  • when the service_id of the SDT is 2, it corresponds to program_number 2 of the PMT; the service descriptor in the SDT may define service_type as 0x1B (H.264 HD) to signal that it is a 2D service. The linkage descriptor may then indicate service_id 3 with linkage_type 0x05 (service replacement service) and indicate 3D through private_data() and replacement_type (0x02), thereby signaling the presence and processing of the 3D service corresponding to service_id 2.
  • service_type may be defined as 0x1C in the service descriptor to signal a 3D service.
  • a relationship between services may be defined as shown in Table 8 to identify whether it is HD simcast or 3D.
  • FIG. 16 is a flowchart illustrating an example of outputting a stereoscopic video signal using 3D signaling information according to the present invention.
  • the demultiplexer 230 filters and parses the SDT sections from the received digital broadcast signal.
  • the signaling information processor 240 acquires and stores information about a service having a legacy service type among service descriptors in a service loop in the parsed SDT. That is, the signaling information processor acquires and stores PMT information for the 2D service (S1604).
  • the signaling information processor 240 acquires and stores information about a service having a 3D service type in a service loop in the parsed SDT. That is, the signaling information processor acquires and stores PMT information about the 3D service (S1606).
  • the signaling information processor 240 parses the linkage descriptor from the signaling information, and grasps the legacy 3D service ID information connected using the parsed linkage descriptor information (S1608).
  • the signaling information processor 240 determines PID information on the extended view stream using the 3D PMT information (S1610).
  • the digital receiver is set to the viewing mode (S1612).
  • the flow is divided into two branches according to the viewing mode. First, the case of setting the 3D viewing mode will be described.
  • the digital receiver selects service_id that provides 3D stereoscopic video (S1614).
  • the service_type of the service_id may be, for example, a frame compatible 3DTV.
  • the digital receiver performs PID filtering and video / audio ES decoding on the basic A / V stream (S1616).
  • the controller 270 controls to output 3D stereoscopic video decoded by the 3D image formatter using the 3D service descriptor information (S1618).
  • the 3D video having passed through the 3D image formatter is output on the screen through the output unit (S1620).
  • the digital receiver selects service_id providing 2D video (base view video) (S1622).
  • the channel with the service_id is, for example, a legacy channel.
  • the controller 270 controls the demultiplexing and the decoder to perform PID filtering and video / audio ES decoding (Base View Video Decoder) on the base A / V stream (S1624).
  • the controller 270 outputs the decoded 2D video through the output unit (S1626).
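The output branch of FIG. 16 (steps S1612 to S1626) can be sketched compactly as follows; all names are illustrative, with decode_av and format_3d standing in for the demultiplexer/decoder and the 3D image formatter.

```python
def decode_av(service_id):
    # placeholder for PID filtering and video/audio ES decoding
    return f"decoded<{service_id}>"

def format_3d(frames):
    # placeholder for the 3D image formatter producing stereoscopic output
    return f"stereo<{frames}>"

def output_video(viewing_mode, services):
    if viewing_mode == "3D":
        svc = services["frame_compatible_3dtv"]  # S1614: select 3D service_id
        return format_3d(decode_av(svc))         # S1616-S1620: decode, format, output
    svc = services["legacy_2d"]                  # S1622: select 2D service_id
    return decode_av(svc)                        # S1624-S1626: base view output
```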
  • FIG. 17 is a diagram illustrating an example of a UI configured according to the present invention.
  • the receiver may know whether the corresponding channel provides the 3D service. For example, it is possible to determine whether to provide a 3D service of a corresponding channel in advance from the service list descriptor of FIG. 4, the service descriptor of FIG. 6, the 3D service descriptor of FIG. 9, or the component descriptor of FIG. 13.
  • a UI including a 3D indicator configured according to the present invention is illustrated on a channel banner appearing in a channel search process.
  • FIG. 17(b) is an example of an OSD screen configured to inform that the accessed channel provides a 3D service.
  • FIG. 17(c) illustrates another example of an OSD screen in which a 3D indicator is displayed together with the title of the corresponding service to inform that the accessed channel provides a 3D service.
  • when a viewer directly searches for channels without prior information and a currently accessed channel provides a 3D service, the receiver can switch to an appropriate viewing mode for watching that channel.
  • whether a channel provides a 3D service is displayed in advance on the OSD screen. Accordingly, through the OSD screen, the viewer can appropriately skip the channel or change the viewing mode to watch the 3D service provided by the channel.
  • the following relates to the EPG screen.
  • FIGS. 18 to 20 illustrate, for example, EPG screens configured by combining 3D service/event-related data that the receiver extracts by parsing each table section or each descriptor described above.
  • the EPG screen 1800 of FIG. 18 includes a first item 1805 displaying a current channel, a second item 1810 showing a time zone-specific content list in the corresponding channel, a third item 1820 where a preview image of the specific program selected in the second item 1810 is displayed, a fourth item 1830 including additional information related to the preview image displayed in the third item, and a fifth item 1840 on which other menu items are displayed.
  • FIG. 18 illustrates an EPG screen 1800 including a 3D indicator in various ways in accordance with the present invention.
  • in FIG. 18(a), the 3D indicator is displayed on the preview image of the third item 1820 without displaying the 3D indicator on the content list of the second item 1810: there is no 3D indicator on the selected content 1811 on the content list of the second item 1810, and there is a 3D indicator 1825 only on the preview image of the third item 1820.
  • the 3D indicator is displayed only on the content list of the second item 1810 without displaying the 3D indicator on the preview image of the third item 1820.
  • 3D indicators 1813 and 1815 are displayed on two contents of the contents list of the second item.
  • the 3D indicator may be implemented in 2D or 3D, and may indicate that the corresponding content is 3D content using color or depth information on the EPG screen together with or separately from the indicator.
  • FIG. 19 is a guide screen 1900 illustrating detailed information on specific content selected from the EPG screen of FIG. 18.
  • the guide screen 1900 of FIG. 19 includes a first item 1910 displaying a current channel and current time information, a second item 1920 including a content title and time information on the corresponding content, and a preview image. And a fourth item 1940 on which the third item 1930 and detailed information about the corresponding content are displayed.
  • the image signal processing apparatus may display the 3D indicator 1925 on the second item 1920 of the guide screen, display the 3D indicator 1935 on the third item 1930, or display a 3D indicator on both the second item 1920 and the third item 1930.
  • the 3D indicator may be configured in 2D or 3D format.
  • the EPG screen 2000 of FIG. 20 illustrates an EPG screen in which only 3D content is displayed according to a setting.
  • although FIG. 20 is configured as an EPG screen, 3D content may also be presented in a manner other than the EPG screen.
  • FIGS. 21 and 22 illustrate still another example of an EPG screen according to the present invention.
  • the image processing apparatus may determine whether a corresponding 2D / 3D service exists for each service by using the linkage descriptors of FIGS. 14 and 15 described above. Therefore, when there are 2D and 3D services corresponding to each other, the image processing apparatus identifies a service pair, and when the service pair is provided, the EPG screen as shown in FIG. 21 or 22 is provided. Can provide.
  • the image processing apparatus may download the service pair for a service according to a user's setting or automatically. If the service pair has also been downloaded, when the user presses the 2D/3D switch button while playing the stored service or content, the apparatus switches to and plays the corresponding content, providing convenience to the user.
  • the receiver may schedule downloads so that a selected service or content pair is received automatically.
  • the service_id corresponding to the reserved and recorded content is found, and all corresponding content is received and stored.
  • the receiver may use the service_id value for each content from the parsed EIT. Therefore, when the user presses the 2D/3D switch button while playing the stored content, the receiver switches to and plays the corresponding content, providing convenience to the user.
  • FIG. 23 is a diagram illustrating an example of a UI indicating whether a 3D version exists according to the present invention, FIG. 24 is a diagram illustrating another example of an EPG according to the present invention, and FIG. 25 is a diagram illustrating an example of a detailed UI of FIG. 24.
  • when the receiver determines through the EIT that a 3D version of stored 2D content exists, it may scroll a text bar as illustrated in FIG. 23 to indicate such information.
  • the present invention is not necessarily limited to that shown in FIG. 23, and a separate UI may be configured to select and set a control operation related to the existence of the 3D version on the on-screen display (OSD) screen.
  • an EPG provided according to a user's request provides an indicator for identifying whether the corresponding content is 2D or 3D for each content.
  • for example, together with information indicating that the 2D version of the NRT content 'Wife Has Returned, episode 22' is provided on SBS at 12 o'clock, it is indicated that the corresponding 3D version of 'Wife Has Returned, episode 22' is available on SBS from 15:30.
  • the 3D version of the content is not necessarily limited to the same episode; for example, it may be content for another episode (episode 21, episode 23, a special, etc.).
  • although FIG. 24 exemplarily provides information on corresponding content only within the same channel, the present invention is not limited thereto.
  • information about other channels may be provided, as well as information about corresponding content on different media.
  • FIG. 25 illustrates detailed information and related processing when the user selects the 'Taejo Wanggun episode 30 (3D)' content illustrated in FIG. 24.
  • the selected content is a 3D version of the recorded 2D content 'Taejo Wanggun episode 30', and a recording reservation function and a back function are provided.
  • the receiver may also provide a variety of more detailed information about the corresponding content, for example, plot information, related episode information, broadcast start time information, broadcast end time information, and thumbnail information, using signaling information obtained from the SDT or the EIT.
  • video format transitions are described below with reference to the above-described content.
  • the frame compatible stereoscopic 3DTV service can switch the video format between the two frame compatible stereoscopic video formats, or between one of the frame compatible stereoscopic video formats and the HDTV video format (i.e., a non-frame-compatible stereoscopic 3DTV video format). Format switching between the side-by-side and top-and-bottom frame packing arrangements is unlikely to be applied, but such a transition is not prohibited.
  • Video format switching can only be applied at a random access point (RAP) with Instantaneous Decoding Refresh (IDR) video frames. Due to the lack of tight synchronization between the generation of pictures in the video stream and the generation of PMT in the TS, if the video format is switched during a running frame compatible stereoscopic 3DTV service, inconsistencies may occur for a short period of time.
  • the transport of HDTV (ie, non-3DTV) video format content generally means that the frame packing arrangement Supplemental Enhancement Information (SEI) message does not apply.
  • an IRD encountering such a format conversion may not handle the transition correctly because of a temporary mismatch with the information contained in the previous occurrence of the PMT. An example is the conversion of 1080i 25 Hz side-by-side frame compatible stereoscopic 3DTV video into 1080i 25 Hz HDTV video.
  • format transition assistance signaling is defined to ensure the robustness of the decoding process in the IRD. It is recommended that the frame compatible stereoscopic 3DTV service apply this signaling when it includes periods of content in a non-3DTV video format.
  • the format transition assistance signaling consists of the inclusion of frame packing arrangement SEI messages in a video stream containing HDTV format video content.
  • the video stream signals that the frame compatible stereoscopic 3DTV video format is not currently transmitted by setting frame_packing_arrangement_cancel_flag to 1.
  • it is recommended that the frame compatible stereoscopic 3DTV service apply the frame packing arrangement SEI message during HDTV format transmission for at least two seconds before and after a format conversion between the HDTV video format and the frame compatible stereoscopic 3DTV video format.
  • in that case, frame_packing_arrangement_cancel_flag in the frame packing arrangement SEI message must be set to 1 to indicate that a non-3DTV video format is being transmitted.
  • the transport of the frame packing arrangement SEI message with frame_packing_arrangement_cancel_flag set to 1 should be maintained during HDTV video format content when the service provider is aware of situations such as an IRD hopping from another service to the frame compatible stereoscopic 3DTV service; in such situations it may be much more convenient to keep applying the same signaling than to stop the transport.
  • the frame packing arrangement SEI message signaling may be consistent with the video format being transmitted and may be predicted through other signaling related to the video format. If a temporary mismatch occurs according to the above-described PMT, as described above, it may be alleviated by the application of format transition assistance signaling.
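The receiver-side use of the frame packing arrangement (FPA) SEI message described above can be sketched as follows; the dict representation of a parsed SEI message is hypothetical.

```python
def current_video_format(fpa_sei):
    """Classify the video format from the frame packing arrangement SEI
    message: None when no FPA SEI message accompanies the stream, else a
    dict carrying frame_packing_arrangement_cancel_flag."""
    if fpa_sei is None:
        return "HDTV"  # absence of the FPA SEI generally means non-3DTV content
    if fpa_sei["frame_packing_arrangement_cancel_flag"] == 1:
        return "HDTV"  # format transition assistance signaling: 3D not in use
    return "frame-compatible 3DTV"
```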
  • the present invention proposes a method of receiving a broadcast service using corresponding signaling information and an operation and implementation method of a 3D TV for controlling a stereoscopic display output.
  • a 3DTV service and a 2D legacy TV service using the present invention can be divided and received through separate, independent logical channels, so that the user can switch between the 2D and 3DTV services through a channel change.
  • the receiver can clearly recognize the correspondence relationship between the two services, know the existence of the 2D service and the corresponding 3D service, and respond when the user requests service switching.
  • the present invention may be applied in whole or in part to a digital broadcasting system.
  • 3D image representation methods may include a stereoscopic image method that considers two viewpoints and a multi-vision image method that considers three or more viewpoints.
  • a conventional single-view image may be referred to as a monoscopic image method.
  • the stereoscopic image method uses a pair of left and right images obtained by photographing the same object with a left camera and a right camera spaced apart from each other by a predetermined distance.
  • the multi vision image uses at least three sets of images obtained by photographing with at least three different cameras spaced apart from each other by a predetermined distance and arranged at different angles.
  • although the stereoscopic image method according to the embodiment of the present invention is described, the idea of the present invention can also be applied to the multi-vision image method.
  • the term stereoscopic may be abbreviated as stereo.
  • the stereoscopic or multi-visual image is compressed and encoded in a Moving Picture Experts Group (MPEG) format or by using various methods, thereby being transmitted.
  • stereoscopic or multi-visual images are compressed-encoded using the H.264 / AVC (Advanced Video Coding) method and thereby transmitted.
  • the receiving system performs a decoding process on the received image as a process opposite to the H.264 / AVC method, thereby obtaining a 3D image.
  • any one of the left visual image or the right visual image of the stereoscopic image, or any one of the multi visual images may be assigned as the base layer image, and the remaining images may be assigned as the enhancement layer image.
  • the base layer picture can then be encoded by using the same method used to encode the mono picture.
  • for the enhancement layer image, only the relevant information between the base layer image and the enhancement layer image is encoded, and the processed images are then transmitted.
  • Examples of compression-encoding methods for the base layer include JPEG, MPEG-1, MPEG-2, MPEG-4, and H.264 / AVC. And in this embodiment of the present invention, the H.264 / AVC method is adopted. Furthermore, according to an embodiment of the present invention, a H.264 / SVC (Scalable Video Coding) or MVC (Multi-view Video Coding) method is adopted for the compression-encoding process of an enhancement layer image.
  • terrestrial DTV transmission and reception is based on 2D video content. Therefore, in order to service 3D TV broadcast content, transmission and reception standards for 3D TV broadcast content must be additionally defined.
  • the receiver receives broadcast signals in accordance with the added transmission and reception standards to fully process the received signals and thereby support 3D broadcast services.
  • information for processing broadcast content is included in the system information and thereby transmitted.
  • System information may be referred to as service information, for example.
  • the system information may include channel information, program information, event information, and the like.
  • this system information may be transmitted and received by being included in the Program Specific Information / Program and System Information Protocol (PSI / PSIP).
  • however, the present invention is not limited only to this example; in the case of any protocol that transmits system information in a table format, that protocol can be applied to the present invention regardless of its term (i.e., name).
  • the PSI table may include a Program Association Table (PAT) and a Program Map Table (PMT).
  • PAT Program Association Table
  • PMT Program Map Table
  • the PAT corresponds to special information transmitted by a data packet whose PID is '0'.
  • the PAT may transmit PID information of a corresponding PMT for each program.
  • the PMT transmits the PID information of the Transport Stream (TS) packets carrying the corresponding program (i.e., the program identification number and the PIDs of the individual bit streams of video and audio data constituting the program), and also transmits the PID information of the PCR. Then, by parsing the PMT obtained from the PAT, correlation information between the elements constituting the corresponding program can also be obtained.
  • TS Transport Stream
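  • As an illustrative sketch of the PAT-to-PMT chain described above, the following hypothetical Python parser extracts the program_number-to-PMT-PID map from a raw PAT section (field layout per ISO/IEC 13818-1); the function name and input framing are assumptions, not part of this document.

```python
def parse_pat_programs(section: bytes) -> dict:
    """Map program_number -> PMT PID from one PAT section.

    Hypothetical minimal parser: assumes `section` begins at table_id
    and ends with the 4-byte CRC_32 (ISO/IEC 13818-1 layout).
    """
    section_length = ((section[1] & 0x0F) << 8) | section[2]
    # The program loop starts after the 8-byte fixed header; the last
    # 4 bytes of the section are the CRC_32.
    start, end = 8, 3 + section_length - 4
    programs = {}
    for i in range(start, end, 4):
        program_number = (section[i] << 8) | section[i + 1]
        pid = ((section[i + 2] & 0x1F) << 8) | section[i + 3]
        if program_number != 0:  # 0 maps to the network PID, not a PMT
            programs[program_number] = pid
    return programs
```

For program_number 0 the associated PID refers to the network information rather than a PMT, so it is skipped here.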
  • the PSIP table may include a Virtual Channel Table (VCT), a System Time Table (STT), a Rating Region Table (RRT), an Extended Text Table (ETT), a Direct Channel Change Table (DCCT), a Direct Channel Change Selection Code Table (DCCSCT), an Event Information Table (EIT), and a Master Guide Table (MGT).
  • VCT virtual channel table
  • STT system time table
  • RRT rating region table
  • ETT extended text table
  • DCCT direct channel change table
  • DCCSCT Direct Channel Change Selection Code Table
  • EIT Event Information Table
  • MGT Master Guide Table
  • the VCT may transmit information about virtual channels (such as channel information for selecting channels) and information such as packet identifiers (PIDs) for receiving audio and / or video data. More specifically, when the VCT is parsed, a PID of audio / video data of a broadcast program, which is transmitted through the channel along with the channel name and the channel number, may be obtained.
  • the STT may transmit information about the current date and timing information, and the RRT may transmit information about regions and rating organizations for program ratings.
  • the ETT can send additional descriptions of specific channels and broadcast programs, and the EIT can send information about virtual channel events.
  • the DCCT / DCCSCT may transmit information related to automatic (or direct) channel change, and the MGT may transmit version and PID information of each table in the PSIP.
  • Transmission formats of stereoscopic images include a single video stream format and a multi-video stream format.
  • the single video stream format corresponds to a method of transmitting this single video stream by multiplexing two perspectives of video data into a single video stream.
  • a single video stream format has an advantage in that the bandwidth required for providing 3D broadcast service is not widened.
  • the multi-video stream format corresponds to a method of transmitting multiple video data into multiple video streams.
  • the multi-video stream format has an advantage in that high picture quality video data can be displayed because high capacity data can be transmitted.
  • FIG. 26 illustrates a stereoscopic image multiplexing format of various image formats, according to an embodiment of the present invention.
  • the video formats of the 3D broadcast service include the side-by-side format shown in (a), the top-bottom format shown in (b), the interlaced format shown in (c), the frame sequential format shown in (d), the checker board format shown in (e), and the anaglyph format shown in (f).
  • the side-by-side format shown in (a) corresponds to a format in which the left image and the right image are 1/2 down-sampled in the horizontal direction.
  • One of the sampled images is located at the left side and the other sampled image is positioned at the right side to generate a single stereoscopic image.
  • the top-bottom format shown in (b) corresponds to a format in which the left image and the right image are 1/2 down-sampled in the vertical direction.
  • one of the sampled images is located at the upper side and the other sampled image is positioned at the lower side to generate a single stereoscopic image.
  • the interlaced format shown in (c) corresponds to a format in which the left image and the right image are 1/2 down-sampled in the horizontal direction, and the two images are alternated line by line to generate a single stereoscopic image.
  • alternatively, the left image and the right image may be 1/2 down-sampled in the vertical direction, and the two images alternated line by line, thereby also generating a single stereoscopic image.
  • the frame sequential format shown in (d) corresponds to a format in which the left image and the right image are alternately temporally composed of a single video stream.
  • in the checker board format shown in (e), the left image and the right image are 1/2 down-sampled so that pixels of the left image and the right image alternate in both the horizontal direction and the vertical direction, thereby composing the two images into a single image.
  • the anaglyph format shown in (f) corresponds to a format that composes an image so that the image can provide a three-dimensional effect by using complementary color contrast.
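  • To illustrate how the 1/2 down-sampling formats above pack two views into one frame, here is a small sketch using nested lists as frames; the helper names and the choice of keeping even-numbered columns/rows are assumptions for illustration only.

```python
def pack_side_by_side(left, right):
    """1/2 down-sample horizontally (keep every other column) and place
    the left view in the left half, the right view in the right half."""
    return [l[::2] + r[::2] for l, r in zip(left, right)]


def pack_top_bottom(left, right):
    """1/2 down-sample vertically (keep every other row) and place the
    left view on top and the right view below."""
    return left[::2] + right[::2]
```

Either packed frame has the same pixel count as one original view, which is why these formats fit in a conventional 2D broadcast channel.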
  • Digital broadcasting currently provides a broadcast service by using limited system resources.
  • System resources of the digital broadcast environment include transmission bandwidth, processing capability, and the like.
  • the bandwidth that can be used in the allocation of frequencies (ie, allocation) is limited.
  • the corresponding 3D broadcast service will also use the limited resources used in the digital broadcast environment.
  • in order to provide a 3D broadcast service, a left-view image and a right-view image must both be transmitted. Therefore, it is difficult to transmit the two images at high resolution using the bandwidth of conventional digital broadcasting. For example, when transmitting full-resolution video data using the bandwidth of digital broadcasting, it is difficult to transmit two sets of full-resolution video data within the same bandwidth. Thus, a method of transmitting two sets of half-resolution video data has been proposed.
  • FIG. 27 is a conceptual diagram of a 3D broadcast service according to an embodiment of the present invention.
  • the 3D broadcast service 2010 that provides a full-resolution image will be referred to as 3D service 2.0 or 3D service Spec-B below.
  • the 3D broadcast service 2020 that provides the half-resolution image will be referred to as 3D service 1.0 or 3D service Spec-A below.
  • the 3D service 1.0 2020 may provide a half-resolution left image and a half-resolution right image.
  • the 3D service 2.0 (2010), which provides full-resolution video, should maintain compatibility with the 3D service 1.0 (2020), rather than transmitting a new, separate full-resolution video stream.
  • a method of providing differential data or additional data for transmitting a full-resolution image may be used. More specifically, as shown in FIG. 27, by adding a complementary video element 2030 of 3D service 2.0 to a half-resolution video element of 3D service 1.0 2020, a full-resolution 3D broadcast service ( 2010) may be provided.
  • a broadcast receiver capable of supporting 3D service 1.0 may provide a half-resolution image by receiving and processing the data of the 3D service 1.0 2020, and a broadcast receiver capable of supporting 3D service 2.0 may provide the full-resolution image by receiving and processing the data of the 3D service 1.0 2020 and the complementary data of the 3D service 2.0.
  • FIG. 28 illustrates a conceptual block diagram illustrating a method for providing a full-resolution 3D broadcast service according to an embodiment of the present invention.
  • a digital broadcast receiver 3030 capable of providing full-resolution 3D video and a digital broadcast receiver 3040 capable of supporting half-resolution 3D video may be provided.
  • a broadcast system that provides a 3D broadcast service may transmit half-resolution 3D video data through the base layer 3020 and additional half-resolution for providing full-resolution 3D video through the enhancement layer 3010. 3D video data can be transmitted.
  • a digital broadcast receiver 3040 capable of supporting half-resolution 3D video may provide half-resolution 3D video by receiving and processing video data of the base layer 3020.
  • the digital broadcast receiver 3030 capable of providing full-resolution 3D video receives and processes the full-resolution 3D video by receiving and processing the video data of the base layer 3020 and the video data of the enhancement layer 3010.
  • the video data or video component of the base layer may be referred to as base video data or a base video component, respectively, and the video data or video component of the enhancement layer may be referred to as complementary video data or a complementary video component, respectively.
  • FIG. 29 illustrates a method for providing a 3D broadcast service according to an embodiment of the present invention.
  • the 3D service Spec-A 4010 indicates 3D video data transmitted through a base layer, and according to the embodiment of FIG. 29, the 3D video data is provided in a half-resolution top-bottom image format.
  • the 3D service Spec-B 4020 transmits complementary data about an image of each perspective through an enhancement layer.
  • the receiving system receives the transmitted complementary data.
  • the received complementary data is further processed into 3D video data transmitted from the 3D service Spec-A 4010 to enable the receiving system to provide full-resolution stereoscopic images.
  • FIG. 30 illustrates a method for providing a 3D broadcast service according to another embodiment of the present invention.
  • the 3D service Spec-A 5010 may correspond to a top-bottom image format and may include spatial half-resolution 3D video data and temporal full-resolution 3D video data.
  • video data of the 3D service Spec-A 5010 may be interpolated in the receiving system to be provided in spatial full-resolution and temporal half-resolution.
  • the receiving system of the 3D service Spec-B 5020 may further process complementary information to provide both spatial full-resolution images and temporal full-resolution images.
  • Video data may include frame-unit images.
  • the distance in time between frame-unit images may also be defined, along with the resolution of the images. For example, due to a limitation in bandwidth, if the set of transmittable video data is spatially half-resolution and temporally full-resolution, then when spatial full-resolution images are transmitted within the same bandwidth limitation, only temporal half-resolution video data (e.g., twice the frame distance of the temporal full-resolution case) can be transmitted.
  • Various embodiments of a method for processing video data may be available depending on the resolution at the receiving system.
  • the receiving system of the 3D service Spec-A 5010 may perform interpolation on the received image (Lb or Rb) to provide a near full-resolution image (Lb' or Rb') (shown in the lower left of FIG. 30).
  • the receiving system of the 3D service Spec-B 5020 may use the video data received at the base layer and the video data received at the enhancement layer.
  • the receiving system can interleave and combine the horizontal lines of the received image (Lb or Rb) of the base layer and the received image (Le or Re) of the enhancement layer, thereby providing a full-resolution image (Lf or Rf).
  • the receiving system may perform low-pass filtering on the received image (Lb or Rb) of the base layer and high-pass filtering on the received image (Le or Re) of the enhancement layer, whereby the two images are combined to reconstruct the full-resolution image (Lf or Rf).
  • the receiving system may perform interpolation on the received image (Lb or Rb) of the base layer and complement the interpolated near full-resolution image (Lb' or Rb') with the complementary information image (Le or Re), thereby providing a full-resolution image (Lf or Rf) (shown in the lower right of FIG. 30).
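  • The horizontal line-interleaving combination described above can be sketched as follows; the assumption that base-layer rows occupy the even lines is illustrative (in practice the ordering would be signaled, e.g., by complementary_first_flag).

```python
def interleave_lines(base_rows, enh_rows):
    """Rebuild a full-resolution view by alternating base-layer rows
    (assumed even lines) with enhancement-layer rows (odd lines)."""
    full = []
    for b, e in zip(base_rows, enh_rows):
        full.append(b)  # line carried by the half-resolution base layer
        full.append(e)  # complementary line from the enhancement layer
    return full
```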
  • FIG. 31 illustrates a method for providing a 3D broadcast service according to another embodiment of the present invention.
  • the 3D service Spec-A 6010 corresponds to a side-by-side image format and may include spatial half-resolution 3D video data and temporal full-resolution 3D video data.
  • video data of the 3D service Spec-A 6010 may be interpolated in a receiving system to be provided in spatial full-resolution and temporal half-resolution.
  • the receiving system of the 3D service Spec-B 6020 may further process the complementary information to provide both spatial full-resolution images and temporal full-resolution images.
  • the remaining description of FIG. 31 is the same as that of FIG. 30, except that the video format corresponds to the side-by-side image format. Accordingly, duplicate descriptions will be omitted for the sake of brevity.
  • in the case of interleaving the received image (Lb or Rb) of the base layer and the received image (Le or Re) of the enhancement layer, the reception system of the 3D service Spec-B 6020 can interleave and combine the vertical lines, thereby providing a full-resolution image.
  • FIG. 32 illustrates a method for providing a 3D broadcast service according to another embodiment of the present invention.
  • the 3D service Spec-A 7010 may correspond to a frame sequential picture format and may include spatial full-resolution 3D video data and temporal half-resolution 3D video data.
  • video data of the 3D service Spec-A 7010 may be format-converted in the receiving system to be provided in spatial half-resolution and temporal full-resolution.
  • the receiving system of the 3D service Spec-B 7020 may further process complementary information to provide both spatial full-resolution images and temporal full-resolution images.
  • the receiving system of the 3D service Spec-A 7010 may perform decimation on the received image (Lb or Rb), thereby generating a half-resolution image (Lb' or Rb') in the top-bottom format or the side-by-side format. While performing the decimation, the receiving system obtains a pair of temporally extended (e.g., doubled) half-resolution images through frame rate conversion, and can thereby provide a spatial half-resolution image at temporal full-resolution.
  • the receiving system of the 3D service Spec-B 7020 may insert each image (Le or Re) received through the enhancement layer between the successive images (Lb or Rb) received through the base layer, so that a spatial full-resolution image and a temporal full-resolution image can be provided.
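  • The temporal interleaving step described above can be sketched as follows, under the illustrative assumption that each base-layer frame precedes its matching enhancement-layer frame (in practice the order would be signaled, e.g., by complementary_first_flag).

```python
def interleave_frames(base_frames, enh_frames):
    """Double the temporal resolution by placing each enhancement-layer
    frame after the corresponding base-layer frame."""
    out = []
    for b, e in zip(base_frames, enh_frames):
        out.extend([b, e])
    return out
```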
  • in order to provide a high-resolution 3D broadcast service, complementary video data must be provided in addition to the 3D broadcast service of the resolution currently being provided, and together with the complementary video data, signaling information for that data must also be transmitted, received, and processed.
  • the complementary video data may use a H.264 / SVC (Scalable Video Coding) or MVC (Multi-view Video Coding) method as a layered image compression encoding method.
  • the complementary video data may be transmitted through an enhancement layer.
  • the transmitted signaling information about the complementary video data may be referred to as 3D complementary video information.
  • the 3D complementary video information may be provided in a descriptor or table format according to an embodiment of the present invention, wherein the 3D complementary video information may be referred to as a 3D complementary video descriptor or a 3D complementary video table.
  • the 3D complementary video information may be included in the PSIP (which is transmitted from the ATSC broadcasting system), in particular in the TVCT (or VCT) of the PSIP, thereby being transmitted.
  • the 3D complementary video information may be included in the PSI (which is transmitted from the ATSC broadcasting system), and particularly may be included in the PMT of the PSI.
  • the 3D complementary video information may be included in the complementary video stream, and in particular, may be included in the header information of the complementary video ES (Elementary Stream), thereby being transmitted.
  • the present invention provides examples of how full forward and backward interoperability works between current or next-generation source devices and both upcoming half-resolution 3DTVs and next-generation full-resolution 3DTVs.
  • Spec-A content currently played on BD players/STBs has two modes: a mode in which consumers view half-resolution 3D stereoscopic content on upcoming 3DTVs, and a mode in which consumers view half-resolution 3D stereoscopic content on next-generation 3DTVs.
  • the spatial half-resolution methods in the present invention, such as top-bottom and side-by-side, are well supported in existing BD/DVD authoring systems and, without modification or through slight processing, have the following characteristics.
  • 3D subtitles using the presentation graphics mode, and 3D graphics that place shifted objects in the top & bottom portion of the frame, without having to edit each frame.
  • Facilitates the application of effects across the entire clip and BD Live content authoring.
  • FIG. 34 illustrates a service model that provides compatibility between a first generation 3DTV and a second generation 3DTV.
  • here, each image of the first-generation 3DTV service is half-resolution, and the future stereoscopic 3DTV service may be provided at high resolution.
  • since the conventional video element already supports half-resolution, in order to support full resolution, a differential signal is transmitted through the complementary video element.
  • as a result, a receiver supporting Spec-B can provide a full-resolution 3DTV service by adding the complementary video element to Spec-A.
  • the present invention provides a method of transmitting a complementary video element to support 3DTV service for Spec-B.
  • FIG. 35 illustrates a syntax structure of a TVCT including 3D complementary video information according to an embodiment of the present invention.
  • the 'table_id' field is an 8-bit unsigned integer number field that indicates the type of table section.
  • the 'section_syntax_indicator' field is a one-bit field, and this field will be set to '1' for the 'terrestrial_virtual_channel_table_section ()' field.
  • the 'private_indicator' field is a one-bit field to be set to '1'.
  • the 'section_length' field is a 12-bit field in which the first two bits are set to '00', and specifies the number of bytes of the section starting immediately after the 'section_length' field and including the CRC.
  • the 'transport_stream_id' field indicates a 16-bit MPEG-2 transport stream (TS) ID.
  • the 'transport_stream_id' field distinguishes the Terrestrial Virtual Channel Table (TVCT) from others that can be broadcast in different PTCs.
  • TVCT Terrestrial Virtual Channel Table
  • the 'version_number' field, serving as a 5-bit field, indicates the version number of the virtual channel table (VCT).
  • VCT virtual channel table
  • the 'current_next_indicator' field is a one (1) -bit indicator.
  • if the bit of the 'current_next_indicator' field is set to '1', this means that the transmitted Virtual Channel Table (VCT) is currently applicable. If the bit of the 'current_next_indicator' field is set to '0', this means that the transmitted table is not yet applicable and the next table will be valid.
  • the 'section_number' field is an 8-bit field giving the number of this section.
  • the 'last_section_number' field, serving as an 8-bit field, specifies the number of the last section (i.e., the section with the highest section_number value) of the complete Terrestrial Virtual Channel Table.
  • the 'protocol_version' field which serves as an 8-bit unsigned integer field, is used in the future to allow a table type to carry parameters (which may be configured differently than defined in the current protocol).
  • the 'num_channels_in_section' field, serving as an 8-bit field, specifies the number of virtual channels in this VCT section.
  • the 'short_name' field may indicate the name of the virtual channel, which is represented as a sequence of 1 to 7 16-bit code values interpreted according to the UTF-16 standard for Unicode character data.
  • the 'major_channel_number' field indicates a 10-bit number representing a 'major' channel number associated with a virtual channel defined as a repetition of a 'for' loop.
  • the 'minor_channel_number' field indicates a 10-bit number in a range of '0' to '999' to indicate a 'minor' or 'sub' channel number. Along with the 'major_channel_number' field, this 'minor_channel_number' field may indicate a two-part channel number, where minor_channel_number indicates a second or right part of the number.
  • a 'modulation_mode' field containing an 8-bit unsigned integer may indicate the modulation mode for the transmitted carrier associated with the virtual channel.
  • the 'carrier_frequency' field may indicate an allowed carrier frequency.
  • the 'channel_TSID' field is a 16-bit unsigned integer field in the range of 0x0000 to 0xFFFF.
  • the 'channel_TSID' field indicates an MPEG-2 transport stream (TS) ID associated with a transport stream (TS) carrying an MPEG-2 program referenced by a virtual channel.
  • the 'program_number' field contains a 16-bit unsigned integer that associates the virtual channel defined herein with MPEG-2 program association and TS program map tables.
  • the 'ETM_location' field, serving as a 2-bit field, specifies the presence and location of an extended text message (ETM).
  • the 'access_controlled' field indicates a 1-bit Boolean flag. If the Boolean flag of the 'access_controlled' field is set, this means that access to events related to the virtual channel can be controlled.
  • the 'hidden' field indicates a 1-bit boolean flag. If the boolean flag of the 'hidden' field is set, this means that the virtual channel is not accessed by the user by a direct entry of the virtual channel number.
  • the 'hide_guide' field indicates a Boolean flag. If the Boolean flag of the 'hide_guide' field is set to zero ('0') for a hidden channel, this means that the virtual channel and its events may appear in the EPG display.
  • the 'service_type' field is a 6-bit enumerated type field that can identify the type of service carried in the virtual channel.
  • the 'source_id' field contains a 16-bit unsigned integer that can identify a programming source associated with the virtual channel.
  • the 'descriptors_length' field may indicate the total length of the descriptors for the virtual channel (unit is byte).
  • the 'descriptor ()' field may include zero (0) or one or more descriptors determined as appropriate for the 'descriptor ()' field.
  • the 'additional_descriptors_length' field may indicate the total length (unit is byte) of the VCT descriptor list.
  • the 'CRC_32' field is a 32-bit field containing the CRC value that ensures a zero output of the registers in the decoder, as defined in Annex A of ISO/IEC 13818-1 "MPEG-2 Systems", after processing of the entire Terrestrial Virtual Channel Table (TVCT) section.
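  • As a sketch, the fixed TVCT header fields enumerated above could be unpacked from raw section bytes as follows (bit positions follow the field widths given in the text; the function name is illustrative, not from the source).

```python
def parse_tvct_header(section: bytes) -> dict:
    """Pull the fixed header fields of a terrestrial virtual channel
    table section out of raw section bytes (ATSC-style layout)."""
    return {
        "table_id": section[0],
        "section_syntax_indicator": (section[1] >> 7) & 0x01,
        "private_indicator": (section[1] >> 6) & 0x01,
        "section_length": ((section[1] & 0x0F) << 8) | section[2],
        "transport_stream_id": (section[3] << 8) | section[4],
        "version_number": (section[5] >> 1) & 0x1F,
        "current_next_indicator": section[5] & 0x01,
        "section_number": section[6],
        "last_section_number": section[7],
        "protocol_version": section[8],
        "num_channels_in_section": section[9],
    }
```

The per-channel loop (short_name, major/minor channel numbers, service_type, and the descriptor loop) would follow these ten bytes.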
  • when a 3D broadcast service is provided through a virtual channel, the service_type field 8010 indicates this information. For example, if the field value of the service_type field 8010 is 0x13, this indicates that a 3D broadcast program (including audio, video, and complementary video data for displaying a 3D stereoscopic image) is being provided through the corresponding virtual channel.
  • the descriptor field 8020 includes 3D complementary video information, which will be described in detail below with reference to the accompanying drawings.
  • FIG. 36 illustrates a syntax structure of a 3D complementary video descriptor included in TVCT according to an embodiment of the present invention.
  • the number_elements field indicates the number of video elements that make up each virtual channel.
  • the broadcast receiver may receive a 3DTV service location descriptor and parse the information included in the fields below the number_elements field as many times as the number of video elements constituting each virtual channel.
  • the complementary_type field represents a method of configuring complementary video data or complementary video streams. If a full-resolution image is output, the receiving system uses the information in this field to reconstruct (or reconstruct) the base video data and the complementary video data into the full-resolution image.
  • the naive_subsampling_flag field indicates whether subsampling is being performed or low-pass filtering and high-pass filtering are being performed when the base video component and the complementary video component are configured. For example, if the field value of the naive_subsampling_flag field is equal to 1, this indicates that subsampling is being performed. And, if this field value is equal to 0, this indicates that low-pass filtering and high-pass filtering are being performed.
  • the codec_type field indicates the type of video codec used to encode or compress the complementary video component.
  • a coding scheme such as MPEG-2, AVC / H.264, SVC extension, or the like may be indicated according to the field value of the codec_type field.
  • the horizontal_size field, the vertical_size field, and the frame_rate field indicate the horizontal size, vertical size, and frame rate of the complementary video component, respectively.
  • the horizontal size and the vertical size may indicate a spatial resolution
  • the frame rate may indicate a temporal resolution.
  • the spatial / temporal resolution of the complementary video component may be full-resolution.
  • the interpolation_filter_available_flag field indicates whether an extra customized filter is used when interpolation is performed on the base video component.
  • information such as filter coefficients for implementing a filter may be included in a descriptor loop for a complementary video component in TVCT or PMT, and may be provided in a descriptor format. And in accordance with another embodiment of the present invention, this information may be included in the header information or the message information in the video element, thereby being provided.
  • the left_image_first_flag field indicates which of the two video data occurs first (or occurs). According to an embodiment of the present invention, when video data corresponding to the left eye is first received, the field value of the left_image_first_flag field may be set to one.
  • the complementary_first_flag field indicates the order of combining the base video component and the complementary video component during the procedure of constructing the full-resolution image. According to an embodiment of the present invention, when the video data corresponding to the base video component precedes the video data corresponding to the complementary video component, the field value of the complementary_first_flag field may be set to one.
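  • The descriptor fields just described can be collected into a small structure, as in the sketch below; the numeric value mapping for complementary_type is an assumption for illustration (the excerpt does not give the values), following the order in which the configuration modes are described.

```python
from dataclasses import dataclass


@dataclass
class ComplementaryVideoInfo:
    """3D complementary video descriptor fields as described in the
    text (the normative bit layout is not reproduced here)."""
    complementary_type: int         # how base + complementary combine
    naive_subsampling_flag: bool    # True: subsampling; False: LPF/HPF
    codec_type: int                 # e.g. MPEG-2, AVC/H.264, SVC ext.
    horizontal_size: int            # spatial resolution of the
    vertical_size: int              #   complementary component
    frame_rate: int                 # temporal resolution
    interpolation_filter_available_flag: bool
    left_image_first_flag: bool     # left-eye video data comes first
    complementary_first_flag: bool  # base precedes complementary


def reconstruction_mode(info: ComplementaryVideoInfo) -> str:
    # Hypothetical value mapping, in the order the modes are described.
    return {0: "line-interleave", 1: "pixel-interleave",
            2: "frame-interleave", 3: "field-interleave",
            4: "residual"}.get(info.complementary_type, "reserved")
```

A receiver would branch on the returned mode to pick the corresponding full-resolution reconstruction procedure.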
  • FIG. 37 illustrates a method of composing an image according to a field value of a complementary_type field included in 3D complementary video information according to an embodiment of the present invention.
  • the complementary_type field included in FIG. 36 indicates a method of configuring complementary video data or complementary video streams.
  • the receiving system uses the information in this field to reconstruct (or reconstruct) the base video data and the complementary video data into a full-resolution image.
  • the reconstruction (or reconstruction) of the full-resolution image according to the field value of the complementary_type field may be variously performed as shown in FIG. 37.
  • the complementary_type field may indicate that the complementary video data is line-interleaved and carries video data for the complementary lines.
  • the complementary video data may include video data for even or odd lines, which are added to the base video data to construct a full-resolution image.
  • video data for the even lines or odd lines may be line-interleaved horizontally or vertically, depending on the multiplexing format of the base video data, and thereby generated.
  • when the base video data corresponds to the side-by-side format, vertical line-interleaving may be performed, and when the base video data corresponds to the top-bottom format, horizontal line-interleaving may be performed.
  • the complementary_type field may indicate that the complementary video data is pixel-interleaved and carries order information on the perspective of the image, which alternates (or changes) for each line.
  • the order information here corresponds to information about pixels for reconstructing a full-resolution image.
  • Complementary video data can be interleaved pixel by pixel, thereby transmitting in a checkerboard format.
  • pixels of the left image and pixels of the right image may be alternated pixel by pixel (or pixel by pixel) within a single line.
  • therefore, this information regarding the order of alternation is required by the receiving system.
  • the complementary_first_flag field indicates to which perspective or layer the video data included in the first pixel corresponds.
  • the complementary_type field may indicate that the complementary video data is frame-interleaved and includes complementary frames for reconstructing (or recovering) the full-resolution image.
  • full-resolution means temporal resolution.
  • the complementary video data may include interleaved image data on a frame-by-frame (or frame-by-frame) basis and may also include video data on a frame-by-frame (or frame-by-frame) basis.
  • the complementary_first_flag field may inform the receiving system whether the video frame received through the complementary video component is located before or after the video frame being received via the base video component.
  • the complementary_type field may indicate that the complementary video data is field-interleaved and includes complementary fields for reconstructing (or recovering) the full-resolution image.
  • full-resolution means temporal resolution.
  • the complementary video data may include interleaved image data on a field basis (or per field), and may also include video data on a field basis.
  • the complementary_first_flag field may inform the receiving system whether a video field received through the complementary video component corresponds to an even field or an odd field for a full-resolution image.
  • the complementary_type field may indicate that the complementary video data includes residual or incremental data for reconstructing (or recovering) the full-resolution image.
  • the complementary video component includes residual or incremental data for reconstructing (or reconstructing) the full-resolution image.
  • the receiving system may perform interpolation or doubling on the base video data.
  • the 3D complementary video descriptor in the PMT provides information on the complementary video elements that make up a full-resolution 3D stereoscopic program.
  • the 3D_complementary_video_descriptor_PMT is located after the ES_info_length field in the PMT and includes information corresponding to the elementary stream. The meaning of each field is the same as 3D_complementary_video_descriptor_VCT. codec_type may be replaced with a stream_type field in the PMT, in which case the 3D complementary video descriptor may be omitted.
  • FIG. 39 illustrates a syntax structure of a PMT including 3D complementary video information according to an embodiment of the present invention.
  • the fields included in the PMT of FIG. 39 are described as follows.
  • the 'table_id' field is an 8-bit field that will always be set to '0x02' in the 'TS_program_map_section' field.
  • the 'section_syntax_indicator' field is a 1-bit field to be set to '1'.
  • the 'section_length' field is a 12-bit field in which the first two bits are set to '00', and specifies the number of bytes of the section starting immediately after the 'section_length' field and including the CRC.
  • the 'program_number' field is a 16-bit field, which specifies the program to which the 'program_map_PID' field is applicable.
  • the 'version_number' field is a 5-bit field, which indicates the version number of the 'TS_program_map_section' field.
  • the 'current_next_indicator' field is a 1-bit field. If the bit of the 'current_next_indicator' field is set to '1', this means that the transmitted 'TS_program_map_section' field is currently applicable. If the bit of the 'current_next_indicator' field is set to '0', this means that the transmitted 'TS_program_map_section' field is not applicable yet and the next 'TS_program_map_section' field will be valid.
  • the 'section_number' field is an 8-bit field whose value shall be '0x00'.
  • the 'last_section_number' field is an 8-bit field whose value shall be '0x00'.
  • 'PCR_PID' is a 13-bit field indicating the PID of transport stream (TS) packets that will contain valid PCR fields for the program specified by the 'program_number' field. In this case, if no PCR is associated with the program definition for the private stream, this field will take the value '0x1FFF'.
  • the 'program_info_length' field is a 12-bit field, the first two bits of which will be '00'.
  • the 'program_info_length' field specifies the number of bytes of descriptors immediately following the 'program_info_length' field.
  • the 'stream_type' field is an 8-bit field that specifies the type of elementary stream or payload carried in packets with a PID whose value is specified by the 'elementary_PID' field.
  • the 'stream_type' field may indicate a coding type of a corresponding video element.
  • JPEG, MPEG-2, MPEG-4, H.264 / AVC, H.264 / SVC or H.264 / MVC techniques can be used.
  • the 'elementary_PID' field is a 13-bit field that specifies the PID of transport stream (TS) packets carrying the associated elementary stream or payload. The stream identified by this PID may carry primary video data or secondary video data.
  • the 'ES_info_length' field is a 12-bit field, the first two bits of which will be '00'.
  • the 'ES_info_length' field may specify the number of bytes of descriptors of the associated elementary stream immediately following the 'ES_info_length' field.
  • the 'CRC_32' field is a 32-bit field containing a CRC value that provides zero output of the registers in the decoder defined in Annex B after processing the entire transport stream program map section.
  • the descriptor field 11010 includes 3D complementary video information, which will be described later in detail with reference to the accompanying drawings.
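The fixed header fields above can be sketched as a byte-level parser. This is a minimal illustration following the ISO/IEC 13818-1 TS_program_map_section layout; a real receiver would also verify CRC_32 and walk the descriptor and elementary-stream loops, including the descriptor field 11010:

```python
def parse_pmt_header(section: bytes) -> dict:
    """Parse the fixed fields of a TS_program_map_section (ISO/IEC 13818-1).

    A minimal sketch of the field layout described above; CRC checking and
    the descriptor/ES loops are omitted for brevity.
    """
    return {
        "table_id": section[0],                        # always 0x02 for a PMT
        "section_syntax_indicator": section[1] >> 7,   # shall be '1'
        "section_length": ((section[1] & 0x0F) << 8) | section[2],
        "program_number": (section[3] << 8) | section[4],
        "version_number": (section[5] >> 1) & 0x1F,
        "current_next_indicator": section[5] & 0x01,
        "section_number": section[6],                  # shall be 0x00
        "last_section_number": section[7],             # shall be 0x00
        "PCR_PID": ((section[8] & 0x1F) << 8) | section[9],
        "program_info_length": ((section[10] & 0x0F) << 8) | section[11],
    }
```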
  • FIG. 40 illustrates a syntax structure of picture extension and user data of a video ES including 3D complementary video information according to an embodiment of the present invention.
  • the ATSC communication system may include 3D complementary video information in the header information of the video ES, instead of the PSIP layer, and may signal the corresponding information. More specifically, 3D complementary video information (complementary_video_info ()) 13030 may be included in the complementary video ES and thereby transmitted; by parsing the corresponding information in the video decoder, the receiving system can obtain the information required to control the display output.
  • the 3D complementary video information may be included in user_data () 13010 of the picture extension and user data, and thereby transmitted.
  • the picture extension and user data are received after the picture header and picture coding extension, and then decoded.
  • the field value of the user_data_start_code field is fixed to 0x000001B2.
  • the field value of the user_data_identifier (or ATSC_identifier) field corresponds to a 32-bit code given a value of 0x47413934.
  • the user_data_type_code field may indicate the data type of the ATSC user data 13020 and may have an 8-bit field value. According to an embodiment of the present invention, a value of 0x10 in this field may indicate that the 3D complementary video information 13030 is included.
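Locating this user data in a video ES can be sketched as follows; the function name is illustrative, and a real decoder would parse user_data() at the picture level rather than scan the whole stream:

```python
USER_DATA_START_CODE = b"\x00\x00\x01\xB2"
ATSC_IDENTIFIER = b"\x47\x41\x39\x34"        # ASCII "GA94", per the field above
TYPE_3D_COMPLEMENTARY = 0x10                 # user_data_type_code per the text above

def find_3d_complementary_user_data(es: bytes):
    """Return the offset of complementary_video_info() in a video ES, or None.

    Minimal sketch: find user_data_start_code, check the ATSC identifier,
    then test user_data_type_code against 0x10.
    """
    pos = es.find(USER_DATA_START_CODE)
    while pos != -1:
        body = pos + 4
        if (len(es) >= body + 5
                and es[body:body + 4] == ATSC_IDENTIFIER
                and es[body + 4] == TYPE_3D_COMPLEMENTARY):
            return body + 5                  # start of complementary_video_info()
        pos = es.find(USER_DATA_START_CODE, pos + 1)
    return None
```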
  • the corresponding information is transmitted in the Supplemental Enhancement Information (SEI) region, as illustrated in FIG. 41.
  • user_identifier and user_structure are included in user_data_registered_itu_t_135 ().
  • the corresponding information is passed to the SEI payload instead of user_data ().
  • FIG. 42 illustrates a method for providing a full-resolution image using base video data, complementary video data, and 3D complementary video information received from a 3D video service Spec-B according to an embodiment of the present invention.
  • an image of the base video data is received in a top-bottom format, where the left image is located above and the right image is located below.
  • the field value of the complementary_type field is indicated as '0x0000'
  • the field value of the naive_subsampling_flag field is indicated as '1'
  • the field value of the left_image_first_flag field is indicated as '1'
  • the field value of the complementary_first_flag field is indicated as '0'.
  • the 3D complementary video information indicates that the complementary video data is processed with line-interleaving, that low-pass filtering and high-pass filtering are not performed during subsampling, that the video data corresponding to the left eye is presented first, and that the video data corresponding to the base video precedes the video data corresponding to the complementary video.
  • the receiving system extracts the left image parts (Lb1 to Lb5) from the base video frame 16010 of the top-bottom format and the left image parts (Lc1 to Lc5) from the complementary video frame 16020, and reconstructs (or rebuilds) the extracted video data line by line, thereby obtaining a full-resolution left image 16030.
  • the receiving system extracts the right image parts (Rb1 to Rb5) from the base video frame 16010 of the top-bottom format and the right image parts (Rc1 to Rc5) from the complementary video frame 16020, and reconstructs (or rebuilds) the extracted video data line by line, thereby obtaining a full-resolution right image 16040.
  • the reception system may display the obtained full-resolution left image 16030 and right image 16040 through a frame sequential technique. In this case, since two frames 16030 and 16040 are generated from one frame 16010 on a frame-by-frame basis, a temporal full-resolution display becomes available.
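The line-by-line reconstruction of FIG. 42 can be sketched as follows, treating each frame as a list of image rows (the function name and row representation are illustrative, not part of the specification):

```python
def reconstruct_full_res_top_bottom(base, comp, left_image_first=True):
    """Rebuild full-resolution left/right images (FIG. 42 case: top-bottom
    base frame, line-interleaved complementary frame, base line first).

    Frames are lists of rows; this is a sketch, not the normative process."""
    h = len(base) // 2
    left_half, right_half = (base[:h], base[h:]) if left_image_first else (base[h:], base[:h])
    comp_left, comp_right = comp[:h], comp[h:]

    def interleave(base_rows, comp_rows):
        full = []
        for b, c in zip(base_rows, comp_rows):
            full += [b, c]                   # base line precedes complementary line
        return full

    return interleave(left_half, comp_left), interleave(right_half, comp_right)
```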
  • FIG. 43 illustrates a method for providing a full-resolution image using base video data, complementary video data, and 3D complementary video information received from a 3D video service Spec-B according to another embodiment of the present invention.
  • an image of the base video data is received in a top-bottom format, where the left image is located above and the right image is located below.
  • the field value of the complementary_type field is indicated as '0x0000'
  • the field value of the naive_subsampling_flag field is indicated as '0'
  • the field value of the left_image_first_flag field is indicated as '1'
  • the field value of the complementary_first_flag field is indicated as '0'.
  • the 3D complementary video information indicates that the complementary video data is processed with line-interleaving, that low-pass filtering and high-pass filtering should be performed during subsampling, that the video data corresponding to the left eye is presented first, and that the video data corresponding to the base video precedes the video data corresponding to the complementary video.
  • the receiving system performs low-pass filtering on the base video frame, so that filtered base video frames (Lb1' to Lb5' and Rb1' to Rb5') are obtained.
  • the receiving system performs high-pass filtering on the complementary video frame, so that filtered complementary video frames (Lc1' to Lc5' and Rc1' to Rc5') are obtained.
  • the receiving system extracts the low-pass filtered left image parts (Lb1' to Lb5') from the base video frame of the top-bottom format and extracts the high-pass filtered left image parts (Lc1' to Lc5') from the complementary video frame.
  • the receiving system reconstructs (or rebuilds) the extracted video data line by line, thereby obtaining a full-resolution left image 17030.
  • the receiving system extracts the low-pass filtered right image parts (Rb1' to Rb5') from the base video frame of the top-bottom format and extracts the high-pass filtered right image parts (Rc1' to Rc5') from the complementary video frame.
  • the receiving system reconstructs (or rebuilds) the extracted video data line by line, thereby obtaining a full-resolution right image 17040.
  • the reception system may display the obtained full-resolution left image 17030 and right image 17040 through a frame sequential technique. In this case, since two frames 17030 and 17040 are generated from one frame 17010 on a frame-by-frame basis, a temporal full-resolution display becomes available.
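The FIG. 43 variant differs from the FIG. 42 case only in that the base lines are low-pass filtered and the complementary lines are high-pass filtered before interleaving. A sketch, with caller-supplied stand-in filters since the text fixes no kernels:

```python
def reconstruct_filtered(base, comp, lp, hp):
    """FIG. 43 case: low-pass filter each base line, high-pass filter each
    complementary line, then interleave line by line (base line first).

    lp/hp are caller-supplied 1-D filters; frames are lists of rows."""
    full = []
    for b, c in zip(base, comp):
        full += [lp(b), hp(c)]               # filtered base line, then comp. line
    return full
```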
  • FIG. 44 illustrates a method for providing a full-resolution image using base video data, complementary video data, and 3D complementary video information received from a 3D video service Spec-B according to another embodiment of the present invention.
  • the image of the base video data is received in a top-bottom format, where the left image is located at the top and the right image is located at the bottom.
  • the field value of the complementary_type field is indicated as '0x0004'
  • the field value of the naive_subsampling_flag field is indicated as '1'
  • the field value of the left_image_first_flag field is indicated as '1'
  • the field value of the complementary_first_flag field is indicated as '0'.
  • the 3D complementary video information indicates that the complementary video data includes residual video data for the base video data (0x0004), that low-pass filtering and high-pass filtering are not performed during subsampling, that the video data corresponding to the left eye is presented first, and that the video data corresponding to the base video precedes the video data corresponding to the complementary video.
  • the receiving system first performs line-by-line interpolation on the received base frame 18010, thereby obtaining a spatially doubled video frame 18040.
  • the receiving system then combines the interpolated lines (Li1, Li2, ..., Ri5) with the residual data lines (Lc1 to Lc10 and Rc1 to Rc10) of the complementary video frame 18020. Then, by placing the combined lines line by line among the lines of the base video frame, a full-resolution left image 18050 and right image 18060 are obtained.
  • for example, the line Li1 of the interpolated base video frame 18040 is combined with the data of the lines Lc1 and Lc2 of the complementary video frame 18020, thereby obtaining the line image Lc1 of the full-resolution image 18050. Subsequently, by placing this line image Lc1 between the line images Lb1 and Lb2, a full-resolution left image 18050 can be obtained.
  • the receiving system may display the obtained full-resolution left image 18050 and right image 18060 through a frame sequential technique.
  • a temporal full-resolution display becomes available.
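The FIG. 44 case can be sketched as follows; averaging adjacent lines stands in for the unspecified interpolation, and one residual line per interpolated line simplifies the pairing described above:

```python
def reconstruct_with_residual(base, residual):
    """FIG. 44 case (complementary_type 0x0004): interpolate between adjacent
    base lines, add the residual line, and interleave with the base lines.

    Averaging stands in for the unspecified interpolation; frames are lists
    of numeric rows, and edge lines are doubled."""
    full = []
    for i, b in enumerate(base):
        full.append(b)                                    # original base line
        nxt = base[i + 1] if i + 1 < len(base) else b     # line doubling at edge
        interp = [(x + y) / 2 for x, y in zip(b, nxt)]    # spatial doubling
        full.append([p + r for p, r in zip(interp, residual[i])])
    return full
```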
  • FIG. 45 illustrates a method for providing a full-resolution image using base video data, complementary video data, and 3D complementary video information received from a 3D video service Spec-B according to another embodiment of the present invention.
  • an image of the base video data is received in a checkerboard format, where the left image is located at the uppermost pixel of the left-end portion.
  • the field value of the complementary_type field is indicated as '0x0001'
  • the field value of the naive_subsampling_flag field is indicated as '1'
  • the field value of the left_image_first_flag field is indicated as '1'
  • the field value of the complementary_first_flag field is indicated as '0'.
  • the 3D complementary video information indicates that the complementary video data is arranged in an alternating order with respect to the base video image (0x0001), that low-pass filtering and high-pass filtering are not performed during subsampling, that the video data corresponding to the left eye is presented first, and that the video data corresponding to the base video precedes the video data corresponding to the complementary video.
  • the receiving system uses the 3D complementary video information to align, line by line and in the proper order, the left-eye pixels and right-eye pixels contained in the received base video frame 19010 and those contained in the received complementary video frame 19020. Accordingly, the full-resolution left image 19030 and right image 19040 may be obtained.
  • the receiving system reconstructs (or reconstructs) the received base video frame 19010 and the complementary video frame 19020 into a side-by-side format or a top-bottom image format. The receiving system then aligns the reconstructed video frames according to the 3D complementary video information so that a full-resolution left image 19030 and a right image 19040 are obtained.
  • the receiving system may display the obtained full-resolution left image 19030 and right image 19040 through a frame sequential technique. In this case, since two frames 19030 and 19040 are generated from one frame 19010 on a frame-by-frame basis, a temporal full-resolution display becomes available.
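The checkerboard realignment of FIG. 45 can be sketched as follows, assuming the usual checkerboard convention that the eye assignment alternates with pixel parity (the function name and convention are illustrative):

```python
def dealign_checkerboard(base, comp, left_first=True):
    """FIG. 45 case: gather the left-eye and right-eye pixels scattered in
    checkerboard order across the base and complementary frames into
    full-resolution left/right images.

    Assumes the eye assignment alternates with pixel parity (x + y)."""
    left, right = [], []
    for y, (brow, crow) in enumerate(zip(base, comp)):
        lrow, rrow = [], []
        for x, (bp, cp) in enumerate(zip(brow, crow)):
            base_is_left = left_first == ((x + y) % 2 == 0)
            if base_is_left:
                lrow.append(bp)   # base pixel belongs to the left image
                rrow.append(cp)   # complementary pixel fills the right image
            else:
                lrow.append(cp)
                rrow.append(bp)
        left.append(lrow)
        right.append(rrow)
    return left, right
```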
  • the operation of a receiving system for obtaining a full-resolution video component by combining a base video component and a complementary video component may be performed in various ways according to the above-described embodiments of the present invention.
  • the base video component is referred to as B
  • the complementary video component is referred to as C
  • the full-resolution video component is referred to as F
  • the following operating scenarios may be possible.
  • B 'and C' correspond to B and C processed by interpolation / filtering, respectively.
  • Case 1 corresponds to an example in which a field value of the naive_subsampling_flag field is equal to '1'.
  • this case corresponds to the embodiment where two subsampled video components are interleaved and aligned.
  • Case 2 corresponds to the example where B is processed with interpolation / filtering and then combined with C, whereby F is obtained.
  • C may correspond to a residual / incremental data format. (In particular, this form of combination may be performed when the SVC coding technique is used.)
  • Case 3 corresponds to an example in which a field value of the naive_subsampling_flag field is equal to '0'.
  • this case corresponds to an embodiment where both B and C are processed with interpolation / filtering and B 'is combined with C', whereby F is obtained.
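The three scenarios can be summarized in one dispatch sketch, with frames flattened to lists of per-line values; interp and filt are caller-supplied stand-ins, since the text fixes neither:

```python
def combine_to_full_res(B, C, complementary_type, naive_subsampling_flag,
                        interp=None, filt=None):
    """Dispatch among the three scenarios above (B, C -> F); names are
    illustrative, not from the specification.

    Case 2 keys off complementary_type 0x0004 (C is residual/incremental);
    Cases 1 and 3 key off naive_subsampling_flag."""
    if complementary_type == 0x0004:
        Bp = interp(B)                                   # B' = interpolated B
        return [b + c for b, c in zip(Bp, C)]            # Case 2: F = B' + C
    if naive_subsampling_flag == 1:
        return [x for pair in zip(B, C) for x in pair]   # Case 1: interleave B, C
    Bp, Cp = filt(B), filt(C)
    return [x for pair in zip(Bp, Cp) for x in pair]     # Case 3: interleave B', C'
```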
  • FIG. 46 illustrates another embodiment of signaling a 3DTV service using the SDT.
  • the service descriptor includes a service type that indicates that it is a 3DTV 2.0 service (signaling whether video data is included for Spec-B support).
  • the descriptor () includes information on the complementary video component constituting the 3DTV service corresponding to Spec-B.
  • FIG. 47 illustrates a service type of a full-resolution stereoscopic 3DTV service for supporting Spec-B according to an embodiment of the present invention.
  • the value of the service type may be included in the descriptor loop of the SDT included in the service descriptor of the DVB. Improvements according to the present invention compared to the conventional Spec-A are as follows.
  • Spec-A and Spec-B services are defined separately, but the streams constituting each service are shared.
  • the service type is defined as described in FIG.
  • the base layer streams that make up the service can be shared, and furthermore, the Spec-B service also includes an enhancement layer stream to provide full-resolution 3DTV service.
  • FIG. 48 exemplifies a Service_type added to signal a 3DTV service using an SDT. FIG. 49 illustrates the syntax of a conventional component descriptor.
  • FIG. 50 exemplifies definition and description of stream_content and component_type for displaying full-resolution 3D stereoscopic service in a DVB broadcasting system.
  • in DVB, signaling for each configured elementary stream is performed by adding a component descriptor to the descriptor of the SDT.
  • stream_content and component_type are defined as shown in FIG. 50 to separate 3D complementary video to provide full-resolution 3D stereoscopic service.
  • the stream_content that indicates the type of the stream is defined as 0x01 for MPEG-2 video, and for H.264 video it is defined as 0x05.
  • FIG. 51 illustrates a syntax structure of a 3D complementary video descriptor included in an SDT according to an embodiment of the present invention.
  • the 3D_complementary_video_descriptor_SDT is located in the descriptor of the descriptors_loop_length field in the SDT and contains information about the 3D complementary elementary stream.
  • the meaning of each field is the same as 3D_complementary_video_descriptor_VCT as illustrated in FIG. codec_type may be replaced with the stream_content and component_type fields in the component descriptor in the SDT, in which case it may also be omitted from the 3D complementary video descriptor.
  • component_tag may be used to indicate the relationship between the ES of the ES_loop of the PMT and the component descriptor.
  • a receiver operating process for receiving a 3D complementary video descriptor via TVCT is described.
  • a receiver supporting Spec-B can determine whether a full-resolution stereoscopic 3DTV service is provided or not by the presence of a 3D complementary video descriptor by using the same service_type as the half-resolution stereoscopic 3DTV service.
  • elementary_PID information (PID_B) of the 3D stereoscopic base video component is received using the stereoscopic format descriptor.
  • elementary_PID information (PID_C) regarding the complementary video component is received using the 3D complementary video descriptor.
  • the base video corresponding to PID_B is decoded, and then the complementary video signal corresponding to PID_C is decoded.
  • the full-resolution left and right images are obtained by combining the base video and the complementary video signals using the complementary_type, left_image_first_flag, and complementary_first_flag included in the 3D complementary video descriptor.
  • the left image and the right image are then output to the full-resolution stereoscopic display to provide the 3D display to the user.
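The receiver steps above can be sketched in Python; every dictionary key and callable here is a hypothetical stand-in, since the document describes behavior rather than an API:

```python
def spec_b_receiver_flow(tvct, decode, combine, display):
    """Sketch of the TVCT-based receiver process described above.

    Presence of the 3D complementary video descriptor marks a full-resolution
    service under the same service_type as the half-resolution service."""
    desc = tvct.get("3d_complementary_video_descriptor")
    if desc is None:
        return "half-resolution"                           # Spec-A behavior only
    pid_b = tvct["stereoscopic_format_descriptor"]["elementary_PID"]   # PID_B
    pid_c = desc["elementary_PID"]                                     # PID_C
    base, comp = decode(pid_b), decode(pid_c)              # base first, then comp.
    left, right = combine(base, comp,
                          desc["complementary_type"],
                          desc["left_image_first_flag"],
                          desc["complementary_first_flag"])
    display(left, right)                                   # full-res stereo output
    return "full-resolution"
```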
  • FIG. 52 illustrates how a 3D complementary video descriptor is received via PMT, in accordance with an embodiment of the present invention.
  • the stream corresponding to Spec-A is identified in the elementary stream signaled from the PMT.
  • the complementary video stream is identified in the elementary stream signaled from the PMT. Mapping is performed using information provided through TVCT and a program_number field. The base video is then decoded with the decoding of the complementary video signal.
  • FIG. 53 illustrates a flow process for outputting a stereoscopic video signal by parsing a 3D signal. This process is described below.
  • an SDT is obtained, and TS packets are filtered. Then, PMT information about the corresponding service is obtained.
  • Spec-B 3D service type information is obtained and stored.
  • PMT information corresponding to the service is obtained and stored.
  • Spec-A and Spec-B information are determined through a linkage descriptor.
  • PMT information for Spec-B is used to determine the PID information for the complementary video stream.
  • if the receiver can receive Spec-B, the service_id providing Spec-B 3D video is selected, PID filtering of the complementary video stream and the conventional A/V stream is performed, and ES decoding of the video/audio is performed. Then, full-resolution 3D video is output by reconstruction control, and conversion of the 3D video output is performed using the complementary video descriptor information.
  • if the receiver can receive Spec-A, the service_id providing the frame-compatible video of Spec-A is selected. Then, a PID filter and video/audio ES decoding are applied to the A/V stream. Finally, half-resolution 3D video is output.
  • FIG. 55 illustrates that the location of 3D_complementary_video_descriptor is in event_information_table_section () to provide a full-resolution 3D TV service guide for ATSC PSIP EIT.
  • the descriptor () is in a for loop to indicate whether a full-resolution 3D TV service is available for each program and event.
  • 3D_Complementary_video_descriptor_TVCT is included in the EIT to signal full-resolution 3DTV; for DVB, in addition to the same method as for ATSC, component descriptors are also used.
  • FIG. 57 illustrates a process for parsing and rendering a 3D complementary video descriptor for an ATSC PSIP EIT.
  • FIG. 58 illustrates a process for DVB SI EIT.
  • filtering is performed on TS packets having a PID value of 0x1FFB. Then, section data having a table_id equal to 0xC7 or 0xC8 is parsed. Then, information about the PID of the stream carrying the EIT is obtained from the MGT. Then, TS packets are filtered using the obtained EIT PID. Information about the 3D complementary video of each virtual channel event is obtained using the 3D complementary video descriptor of each event in the EIT.
  • the availability of the full-resolution 3D service is indicated in the broadcast guide information so that the full-resolution mode can be viewed for the 3D broadcast event.
  • the PID information of the basic A/V stream is obtained using the SLD in the TVCT. Information on the 3D complementary video is acquired through the 3D complementary video descriptor from the EIT. Next, the PID of the basic A/V stream is filtered and ES decoding of the video/audio is performed.
  • full-resolution 3D video is output through reconstruction control and conversion by the output formatter, using the complementary video descriptor information.
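The PSIP walk above (base PID 0x1FFB, MGT table_id 0xC7, EIT lookup) can be sketched over an in-memory stand-in for TS section filtering; the dictionary shapes are assumptions for illustration only:

```python
def acquire_eit_3d_info(ts_sections):
    """Collect (event_id, descriptor) pairs for events carrying a 3D
    complementary video descriptor, following the PSIP steps above.

    ts_sections maps PID -> list of parsed section dicts -- an assumed
    stand-in for real TS packet filtering and section parsing."""
    base_pid_tables = [t for t in ts_sections.get(0x1FFB, [])
                       if t["table_id"] in (0xC7, 0xC8)]     # MGT and companion
    mgt = next(t for t in base_pid_tables if t["table_id"] == 0xC7)
    eit_pid = mgt["eit_pid"]                                 # EIT PID from the MGT
    events = []
    for eit in ts_sections.get(eit_pid, []):                 # filter the EIT PID
        for ev in eit["events"]:
            desc = ev.get("3d_complementary_video_descriptor")
            if desc is not None:
                events.append((ev["event_id"], desc))
    return events
```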
  • FIG. 58 shows a process of parsing and rendering a 3D complementary video descriptor for DVB SI EIT.
  • the difference between ATSC and DVB is that 3D complementary video descriptors or component descriptors in DVB can be used to determine the presence of a 3D complementary video stream.
  • FIG. 59 illustrates a block diagram of a broadcast receiver with a 3D video decoder.
  • Video streams in the two layers go through a new-generation broadcast receiver and the base-layer video stream is decoded in the primary video decoder.
  • the enhancement-layer video stream is decoded in the secondary video decoder.
  • the PSI / PSIP / SI processor parses 3D stereoscopic information from the new-generation ATSC spec and DVB spec, where the PMT / TVCT / SDT includes new signaling syntaxes to support 3D services.
  • the next-generation receiver can convert the full-resolution 3D video format into a specific stereoscopic format according to various types of 3DTV or 3D displays.
  • FIG. 60 illustrates a conceptual diagram of a 3D service 2.0 (Spec-B), according to an embodiment of the present invention.
  • FIG. 1A shows a conceptual diagram of a 3D service according to an embodiment of the present invention.
  • the 3D service (C10000) for providing a full-resolution image will be referred to as 3D service 2.0 (Spec-B) and / or 3D service 2B.
  • 3D service 1.0 (C10010) for providing a half-resolution image will be referred to as a Frame Compatible 3D service (FC-3D service), 3D service 1.0 (Spec-A), and / or 3D service 1A.
  • the 3D service 1.0 (Spec-A) C10010 may be serviced as a half-resolution left image and a half-resolution right image.
  • 3D service 2.0 (Spec-B) C10020, which provides full-resolution video, must be compatible with 3D service 1.0 (Spec-A) C10010, rather than transmitting a separate new full-resolution video.
  • a method for maintaining the image transmission of 3D service 1.0 (Spec-A) C10010 and providing differential data or additional data for transmitting full-resolution images may be used. More specifically, the full-resolution 3D service C10000 may be provided by adding the complementary video element C10020 of 3D service 2.0 (Spec-B) to the half-resolution video element of 3D service 1.0 (Spec-A) C10010.
  • a broadcast receiver capable of supporting 3D service 1.0 (Spec-A) may provide half-resolution video by receiving and processing the data of 3D service 1.0 (Spec-A) C10010.
  • a broadcast receiver capable of supporting 3D service 2.0 (Spec-B) can provide full-resolution video by receiving and processing the data of 3D service 1.0 (Spec-A) C10010 together with the complementary data of 3D service 2.0 (Spec-B).
  • FIG. 61 is a diagram illustrating a method for signaling 3D service 2.0 (Spec-B) according to an embodiment of the present invention.
  • Signaling information signaling 3D service 2.0 may be defined at the system level and / or video level.
  • signaling information may be included in PSI, ATSC-PSIP, and / or DVB-SI.
  • the PSI table may include a PAT and / or a PMT.
  • the DVB SI table may include NIT, SDT, and / or EIT.
  • the ATSC PSIP table may also include VCT, STT, RRT, ETT, DCCT, DDCSCT, EIT, and / or MGT. Details of the PSI table, the DVB SI table, and / or the ATSC PSIP table are the same as those described above, and thus will be replaced with the above description. In the following description, the case where signaling information is defined in a PSI table and / or a DVB SI table will be described.
  • when defined at the video level, signaling information may be included in the header of the video ES. If video data is encoded using an MPEG-2 or MPEG-4 video coding technique, signaling information may be included in user_data () 13010 of the picture extension and user data. If video data is encoded using an H.264 / AVC or H.265 / HEVC video coding technique, signaling information may be included in a Supplemental Enhancement Information (SEI) message. Definition of signaling information at the video level may be equally applied to DVB transmission as well as ATSC transmission. Hereinafter, signaling information will be described based on encoding of video data using the H.264 / AVC video coding technique.
  • the signaling information may include a service descriptor, a component descriptor of the base layer, and / or a component descriptor of the enhancement layer. Details of the service descriptor, the component descriptor of the base layer, and / or the component descriptor of the enhancement layer will be described later.
  • signaling information for signaling of 3D service 2.0 may be a descriptor and / or field included in at least one of PMT, video ES, subtitle, SDT, and / or EIT.
  • the PMT may include an AVC_video_descriptor and / or a subtitling descriptor.
  • each video ES may include an SEI message.
  • the subtitles may include disparity information and / or Disparity Signaling Segment (DSS).
  • the SDT may include a service descriptor and / or a component descriptor.
  • the EIT may include extended event linkage, component descriptor, content descriptor, and / or video depth range descriptor. Details of the signaling information will be described below.
  • a receiver capable of supporting 3D service 1.0 may provide half-resolution 3D video by receiving and processing base video data and / or 3D video information of a base layer based on signaling information.
  • a receiver capable of supporting 3D service 2.0 may use not only base video data and / or 3D video information of the base layer based on signaling information, but also complementary video data and / or 3D complementary information of the enhancement layer. Receiving and processing video information can provide full-resolution 3D video.
  • signaling information may include AVC_video_descriptor.
  • the AVC_video_descriptor may include basic information identifying coding parameters of an associated AVC video stream, such as a profile and / or level parameters included in a Sequence Parameter Set (SPS) of the AVC video stream.
  • AVC_video_descriptor may be included in a descriptor loop for each elementary stream entry in the PMT of the transport stream for transmitting the 3D service.
  • the SPS is header information that contains information that spans the entire sequence, such as profile and / or level.
  • the AVC_video_descriptor may include a descriptor_tag field, a descriptor_length field, a profile_idc field, a constraint_set0_flag field, a constraint_set1_flag field, a constraint_set2_flag field, an AVC_compatible_flags field, a level_idc field, an AVC_still_present field, an AVC_24_hour_picture_flag field, and / or a reserved field.
  • the descriptor_tag field may include information for identifying AVC_video_descriptor.
  • the descriptor_length field may include information indicating the size of the AVC_video_descriptor.
  • the profile_idc field may indicate a profile on which the bitstream is based.
  • the profile may include a baseline profile, a main profile, and / or an extended profile.
  • the constraint_set0_flag field may indicate whether the baseline profile can be decoded.
  • the constraint_set1_flag field may indicate whether the main profile can be decoded.
  • the constraint_set2_flag field may indicate whether the extended profile can be decoded.
  • the AVC_compatible_flags field may be 5 bits of data having a value of '0'. The decoder can ignore these 5 bits.
  • the level_idc field may indicate the level of the bitstream. In each profile, the level is determined according to the size of the image. The level defines the restriction of the parameter according to the size of the corresponding image.
  • the level_idc field may include five levels and intermediate levels thereof.
  • the AVC_still_present field may indicate whether the AVC video stream includes AVC still pictures.
  • the AVC still pictures may include an AVC Access Unit including an IDR picture.
  • the IDR picture may be followed by a Sequence Parameter Set (SPS) NAL unit and / or a Picture Parameter Set (PPS) NAL unit that transmits sufficient information to correctly decode the IDR picture.
  • the AVC_24_hour_picture_flag field may indicate whether the associated AVC video stream includes AVC 24-hour pictures.
  • An AVC 24-hour picture is an AVC Access Unit with a presentation time more than 24 hours in the future.
  • the AVC_video_descriptor may further include a Frame_Packing_SEI_not_present_flag field.
  • the Frame_Packing_SEI_not_present_flag field may indicate whether a frame packing arrangement SEI message is present in the coded video sequence (or video ES).
  • the Frame_Packing_SEI_not_present_flag field may indicate whether the video sequence is a 3DTV video format or an HDTV video format.
  • the Frame_Packing_SEI_not_present_flag field may be set to '0' to signal that a frame packing arrangement SEI message is present in the coded video sequence.
  • the Frame_Packing_SEI_not_present_flag field may be set to '1' to signal that no frame packing arrangement SEI message is present in the coded video sequence when only the HDTV video format is used, when no format transition to the 3D video format is expected to occur, and / or when no format transition to the 3D video format has occurred.
  • the receiver may process base video data based on the signaling information.
  • the video stream of the enhancement layer may indicate “MVC video sub-bitstream of an AVC video stream”.
  • the receiver may identify whether a frame packing arrangement SEI message is present in the bitstream based on the Frame_Packing_SEI_not_present_flag field of the AVC_video_descriptor.
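The field list above can be illustrated with a small parser. This is a minimal sketch, assuming the byte layout of the AVC_video_descriptor from ISO/IEC 13818-1 (descriptor_tag 0x28) and assuming the Frame_Packing_SEI_not_present_flag occupies the third bit of the final byte; it is illustrative only, not the implementation claimed here.

```python
def parse_avc_video_descriptor(data: bytes) -> dict:
    """Parse an AVC_video_descriptor (sketch; byte layout assumed)."""
    tag, length = data[0], data[1]
    body = data[2:2 + length]
    flags = body[1]          # constraint flags + AVC_compatible_flags
    last = body[3]           # still-picture / 24-hour / frame-packing flags
    return {
        "descriptor_tag": tag,                       # 0x28 identifies the descriptor
        "profile_idc": body[0],                      # profile the bitstream is based on
        "constraint_set0_flag": (flags >> 7) & 1,    # baseline profile decodable
        "constraint_set1_flag": (flags >> 6) & 1,    # main profile decodable
        "constraint_set2_flag": (flags >> 5) & 1,    # extended profile decodable
        "AVC_compatible_flags": flags & 0x1F,        # 5 bits, expected to be 0
        "level_idc": body[2],                        # level of the bitstream
        "AVC_still_present": (last >> 7) & 1,
        "AVC_24_hour_picture_flag": (last >> 6) & 1,
        "Frame_Packing_SEI_not_present_flag": (last >> 5) & 1,  # 0 => FPA SEI present
    }
```

A flag value of 0 tells the receiver that a frame packing arrangement SEI message is present, i.e., that the coded video sequence may be in a 3DTV video format.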
  • the signaling information may include a service descriptor.
  • a service descriptor is a descriptor that describes the type of service, service provider, and / or name of the service.
  • the service descriptor may be included in the PMT, the SDT, and / or the EIT.
  • the service descriptor may include a descriptor_tag field, a descriptor_length field, a service_type field, a service_provider_name_length field, a char field indicating a name of a service provider, a service_name_length field, and / or a char field indicating a name of a service. Details of the service descriptor will be replaced with the above description.
  • the service_type field may indicate the type of service.
  • the assignment of service_type for a service will be replaced with the above description.
  • when the value of the service_type field is “0x0B”, it may indicate “advanced codec mosaic service” and / or “H.264 / AVC mosaic service”. If the value of the service_type field is “0x16”, it may indicate “advanced codec SD digital television service” and / or “H.264 / AVC SD digital television service”. If the value of the service_type field is “0x17”, it may indicate “advanced codec SD NVOD time-shifted service” and / or “H.264 / AVC SD NVOD time-shifted service”.
  • if the value of the service_type field is “0x18”, it may indicate “advanced codec SD NVOD reference service” and / or “H.264 / AVC SD NVOD reference service”. If the value of the service_type field is “0x19”, it may indicate “advanced codec HD digital television service” and / or “H.264 / AVC HD digital television service”. If the value of the service_type field is “0x1A”, it may indicate “advanced codec HD NVOD time-shifted service” and / or “H.264 / AVC HD NVOD time-shifted service”. If the value of the service_type field is “0x1B”, it may indicate “advanced codec HD NVOD reference service” and / or “H.264 / AVC HD NVOD reference service”.
  • if the value of the service_type field is “0x1C”, it may indicate “advanced codec frame compatible plano-stereoscopic HD digital television service” and / or “H.264 / AVC frame compatible plano-stereoscopic HD digital television service”. If the value of the service_type field is “0x1D”, it may indicate “advanced codec frame compatible plano-stereoscopic HD NVOD time-shifted service” and / or “H.264 / AVC frame compatible plano-stereoscopic HD NVOD time-shifted service”.
  • if the value of the service_type field is “0x1E”, it may indicate “advanced codec frame compatible plano-stereoscopic HD NVOD reference service” and / or “H.264 / AVC frame compatible plano-stereoscopic HD NVOD reference service”.
  • if the value of the service_type field is one of “0x1C”, “0x1D”, and / or “0x1E”, it may indicate the “H.264 / AVC frame compatible HD” service. If the value of the service_type field of the base video data and / or the enhancement video data is one of “0x1C”, “0x1D”, and / or “0x1E”, the video service may be determined to be “H.264 / AVC frame compatible HD”.
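The service_type assignments listed above lend themselves to a simple lookup table. The sketch below collects only the values quoted in this description; it is illustrative, not a complete DVB service_type table.

```python
# service_type values quoted in this description (subset, not the full DVB table)
SERVICE_TYPE = {
    0x0B: "advanced codec mosaic service",
    0x16: "advanced codec SD digital television service",
    0x17: "advanced codec SD NVOD time-shifted service",
    0x18: "advanced codec SD NVOD reference service",
    0x19: "advanced codec HD digital television service",
    0x1A: "advanced codec HD NVOD time-shifted service",
    0x1B: "advanced codec HD NVOD reference service",
    0x1C: "advanced codec frame compatible plano-stereoscopic HD digital television service",
    0x1D: "advanced codec frame compatible plano-stereoscopic HD NVOD time-shifted service",
    0x1E: "advanced codec frame compatible plano-stereoscopic HD NVOD reference service",
}

def is_frame_compatible_hd(service_type: int) -> bool:
    # 0x1C..0x1E identify the "H.264/AVC frame compatible HD" service family
    return service_type in (0x1C, 0x1D, 0x1E)
```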
  • the signaling information may include a content descriptor (content_descriptor).
  • the content descriptor may provide classification information of the event.
  • the content descriptor may be included in the EIT and / or SIT. Details of the content descriptor will be described later.
  • the signaling information may include a Frame Packing Arrangement SEI message.
  • 3D Service 2.0 may provide signaling for dynamic switching between 2D events and 3D events.
  • the encoded video stream may include a Frame Packing Arrangement (FPA) Supplemental Enhancement Information (SEI) message to signal 3D service 2.0 (Spec-B).
  • 3D services that can switch between the 3DTV video format and the HDTV video format may include a Frame Packing Arrangement SEI message not only during the transmission time of the 3DTV video format video stream, but also during the transmission time of the HDTV video format video stream.
  • the Frame Packing Arrangement SEI message may include information signaling 3D service 2.0 (Spec-B).
  • the Frame Packing Arrangement SEI message may include a Frame_packing_arrangement_cancel_flag field and other characteristic information, such as the format of the 3D service 2.0 (Spec-B) video stream.
  • the Frame_packing_arrangement_cancel_flag field may indicate whether the video stream is a 3D video format or an HDTV video format.
  • if the value of the Frame_packing_arrangement_cancel_flag field is '0', it may indicate that the 3D video format is used, and the other fields of the message may signal the format and / or other characteristics of the 3D video stream.
  • if the value of the Frame_packing_arrangement_cancel_flag field is '1', it may indicate that an HDTV video format (i.e., a non-3D video format) is used.
  • the receiver may rearrange the samples based on the Frame Packing Arrangement SEI message and process it to be suitable for displaying the samples of the constituent frames (left image and / or right image).
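The receiver behaviour described above can be sketched as follows: when Frame_packing_arrangement_cancel_flag is '1' the frame is passed through as HDTV video, and when it is '0' the samples are rearranged into the constituent left and right frames for 3D display. This minimal sketch handles only the side-by-side arrangement and represents a frame as a list of pixel rows; it is not the actual receiver implementation.

```python
def process_frame(frame, fpa_cancel_flag, arrangement="side_by_side"):
    """Rearrange samples per the Frame Packing Arrangement SEI (sketch)."""
    if fpa_cancel_flag == 1:
        # HDTV (non-3D) video format: output the frame as-is
        return {"mode": "2D", "frame": frame}
    if arrangement == "side_by_side":
        # 3D video format: split each row into left/right constituent frames
        half = len(frame[0]) // 2
        left = [row[:half] for row in frame]
        right = [row[half:] for row in frame]
        return {"mode": "3D", "left": left, "right": right}
    raise NotImplementedError(arrangement)
```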
  • the Frame Packing Arrangement SEI message may include fields included in the aforementioned 3D service descriptor and / or fields included in the aforementioned 3D complementary video information.
  • the 3D service descriptor may include signaling information about the base video data.
  • the 3D complementary video information may include signaling information about the enhancement video data. Since the 3D service descriptor and / or 3D complementary video information is the same as the above description, it will be replaced with the above description.
  • the receiver may display additional content such as subtitles and / or graphics so that the 3D image is not obscured.
  • Subtitles are an important element of 3D services, as they are of SDTV and / or HDTV. As with on-screen graphics, it is very important for subtitles to be positioned accurately on the 3D video content, in terms of both depth and timing.
  • the signaling information may include Disparity Signaling Segment (DSS).
  • DSS may include the definition of sub-areas within a region.
  • Disparity Signaling Segment may be included in a subtitle.
  • the signaling information may include information about the DVB subtitle.
  • the signaling information may include information about the DVB subtitle according to the value of the component_type field.
  • the signaling information may include a subtitling descriptor.
  • the subtitling descriptor may include information about the DVB subtitle.
  • the subtitling descriptor may be included in the PMT.
  • the signaling information may include disparity information.
  • Disparity information is information indicating how close an object appears in front of the screen or how far it appears behind the screen. Disparity information may be included in the subtitle.
  • the signaling information may include a video depth range descriptor.
  • the video depth range descriptor may indicate the intended depth range of the 3D video.
  • the signaling information may include multi-region disparity.
  • Multi-region disparity provides disparity information at the video level.
  • FIG. 62 illustrates a stream_content field and / or a component_type field of a component descriptor according to an embodiment of the present invention.
  • the signaling information may include a component descriptor.
  • the signaling information may include a component descriptor of the base layer and / or a component descriptor of the enhancement layer.
  • the component descriptor may be used to identify the type of component stream and provide a text description of the elementary stream.
  • Component descriptors may be included in the SDT and / or the EIT. That is, the component descriptor may be defined as a descriptor of the SDT to determine whether the corresponding service is a 3DTV service, and as a descriptor of the EIT, the component descriptor may also determine whether the corresponding event is 3DTV.
  • the component descriptor may include a descriptor_tag field, descriptor_length field, reserved_future_use field, stream_content field, component_type field, component_tag field, ISO_639_language_code field, and / or text_char field. Details of the component descriptor will be replaced with the above description.
  • the type of component stream may include at least one parameter.
  • the parameter may include bitstream information, codec information of a video stream, profile information, resolution information, aspect ratio information, frame rate information, image format information, and / or Bitdepth information.
  • when the stream_content field is '0x01' and the component_type field is '0x11', it may indicate 25 Hz frame-compatible 3D video. If the component_type field is '0x12', it may indicate 30 Hz frame-compatible 3D video.
  • when the stream_content field is '0x05', it indicates H.264 / AVC standard definition (SD) video.
  • when the component_type field is '0x11', it indicates 25 Hz frame-compatible 3D video. If the component_type field is '0x12', it may indicate 30 Hz frame-compatible 3D video.
  • DVB subtitles (normal) for display on a 3D monitor may be indicated.
  • when the stream_content field is '0x03' and the component_type field is '0x24', DVB subtitles (for the hard of hearing) for display on a 3D monitor may be indicated.
  • a component descriptor may include a new stream_content field and / or a component_type field applicable to 3D service 2.0 (Spec-B).
  • the base video data may indicate H.264 / AVC plano-stereoscopic frame compatible high definition video, 16:9 aspect ratio, 25 Hz, and side-by-side.
  • the base video data may indicate H.264 / AVC plano-stereoscopic frame compatible high definition video, 16:9 aspect ratio, 25 Hz, and top-and-bottom.
  • the base video data may indicate H.264 / AVC plano-stereoscopic frame compatible high definition video, 16:9 aspect ratio, 30 Hz, and side-by-side.
  • the base video data may indicate H.264 / AVC plano-stereoscopic frame compatible high definition video, 16:9 aspect ratio, 30 Hz, and top-and-bottom.
  • a new component_type field may be included for the enhancement layer.
  • the stream_content field and / or component_type field of the enhancement layer may have the same values as those of the corresponding base layer. However, according to an embodiment, each may have a different value.
  • the enhancement video data may indicate H.264 / MVC dependent view and plano-stereoscopic service compatible video.
  • the value of the component_type field is not a fixed value and may be changed according to an embodiment. For example, even when the stream_content field of the enhancement layer is '0x05' and the component_type field is '0x84', the H.264 / MVC dependent view and the plano-stereoscopic service compatible video may be indicated.
  • the receiver may identify the type of the video stream based on the stream_content field and / or the component_type field of the component descriptor. For example, when the stream_content field of the base layer is '0x05' and the component_type field is '0x80', it may indicate that the format of the base video data is a frame-compatible video format. In addition, when the stream_content field of the enhancement layer is '0x05' and the component_type field is '0x85', it may indicate that the format of the enhancement video data is a frame-compatible video format. In this case, the format of the base video data and / or enhancement video data may be one of side-by-side and / or top-bottom.
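The identification logic described above might be sketched as follows, using only the (stream_content, component_type) pairs quoted in this description ('0x80' for the base layer, '0x84'/'0x85' for the enhancement layer); as the text notes, these values are not fixed and may change by embodiment.

```python
def classify_component(stream_content, component_type, layer):
    """Classify a video component from its component descriptor (sketch)."""
    if stream_content != 0x05:                 # H.264/AVC video per this description
        return "unknown"
    if layer == "base" and component_type == 0x80:
        # format of the base video data is frame-compatible (side-by-side or top-bottom)
        return "frame-compatible base video"
    if layer == "enhancement":
        if component_type == 0x85:
            return "frame-compatible enhancement video"
        if component_type == 0x84:
            return "H.264/MVC dependent view"
    return "unknown"
```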
  • the 3D service 2.0 (Spec-B) may be indicated.
  • FIG. 63 illustrates a linkage_type field and / or a link_type field of a linkage descriptor according to an embodiment of the present invention.
  • the signaling information may include a linkage descriptor.
  • the linkage descriptor may indicate a service that may exist when additional information related to a specific entity in the SI system is required.
  • the linkage descriptor may exist in a corresponding table requiring a service for providing additional information. In the case of a service replacement service, the linkage descriptor may be used.
  • the receiver may automatically select the replacement service.
  • the linkage descriptor may be included in the SDT and / or the EIT, and the receiver may identify a 3D service or event corresponding to a specific 2D service_id currently being viewed or a specific 2D event_id to be broadcasted based on the linkage descriptor.
  • the linkage descriptor may include a descriptor_tag field, descriptor_length field, transport_stream_id field, original_network_id field, service_id field, linkage_type field, mobile_hand-over_info () field, event_linkage_info () field, extended_event_linkage_info () field, and / or private_data_byte field. Details of the linkage descriptor will be replaced with the above description.
  • the mobile_hand-over_info () field may identify a service handed over by the mobile receiver. This service can be automatically selected by the receiver when the actual service can no longer be received through the service_id.
  • the event_linkage_info () field may be used when two events are signaled identically.
  • the linked event may be a simulcast or a time offset. If the target event has a higher quality, the event_simulcast field may be set.
  • the event_linkage_info () field may include a target_event_id field, a target_listed field, an event_simulcast field, and / or a reserved field.
  • the target_event_id field may include event identification (event_id) information for identifying a target event corresponding to the source event.
  • the source event is an event to which the corresponding linkage_descriptor belongs and is an event identified by the location of the linkage_descriptor.
  • the target event is an event specified by the corresponding linkage_descriptor and is an event transmitted to a service defined by original_network_id, transport_stream_id, and / or service_id.
  • the target_listed field may indicate whether a service defined by the original_network_id field, the transport_stream_id field, and / or the service_id field is included in the SDT transmitted to the TS.
  • the event_simulcast field may indicate whether the target event and the source event are simulcasted.
  • the extended_event_linkage_info () field may be used when signaling at least one event equally.
  • the linked event may be a simulcast or a time offset.
  • the extended_event_linkage_info () field may include loop_length and / or at least one loop. Each loop can represent a linked event. Each loop may include a target_event_id field, target_listed field, event_simulcast field, link_type field, target_id_type field, original_network_id_flag field, service_id_flag field, user_defined_id field, target_transport_stream_id field, target_original_network_id field, and / or target_service_id field.
  • the loop_length field may indicate the size of a byte unit of a following loop.
  • the target_event_id may include event identification (event_id) information for identifying a target event corresponding to the source event.
  • the target_listed field may indicate whether a service defined by the original_network_id field, the transport_stream_id field, and / or the service_id field is included in the SDT transmitted to the TS.
  • the event_simulcast field may indicate whether the target event and the source event are simulcasted.
  • the link_type field may indicate the type of the target service.
  • when the linkage_type field is "0x0E" and the link_type field is "0", the linkage type is "extended event linkage" and the type of the target service may indicate standard definition video (SD).
  • when the linkage_type field is "0x0E" and the link_type field is "1", the linkage type is "extended event linkage" and the type of the target service may indicate high definition video (HD).
  • when the linkage_type field is "0x0E" and the link_type field is "2", the linkage type is "extended event linkage" and the type of the target service may indicate frame compatible plano-stereoscopic H.264 / AVC.
  • when the linkage_type field is "0x0E" and the link_type field is "3", the linkage type is "extended event linkage" and the type of the target service may indicate service compatible plano-stereoscopic MVC.
  • the target_id_type field may identify the target service or target services together with the original_network_id_flag field and / or the service_id_flag field.
  • if the target_id_type field is “0”, this indicates that the transport_stream_id field can be used to identify a single target service. If the target_id_type field is “1”, this indicates that the target_transport_stream_id field may be used instead of the transport_stream_id field to identify a single target service. If the target_id_type field is “2”, it indicates that the target services are included in at least one transport stream (wildcarded TSid). If the target_id_type field is “3”, target services may be matched by a user defined identifier.
  • the original_network_id_flag field may indicate whether the target_original_network_id field is used instead of the original_network_id field to determine the target service.
  • the service_id_flag field may indicate whether target_service_id is used instead of service_id to determine the target service.
  • the linkage descriptor may be included in the range of the private data specifier descriptor. Accordingly, the receiver may determine the meaning of the user_defined_id field.
  • the target_transport_stream_id field may identify an alternate TS including an information service indicated under the control of the target_id_type field, the original_network_id_flag field, and / or the service_id_flag field.
  • the target_original_network_id field may include network_id information of an alternate originating delivery system including an information service indicated under the control of the target_id_type field, the original_network_id_flag field, and / or the service_id_flag field.
  • the target_service_id may identify an alternative information service indicated under the control of the target_id_type field, the original_network_id_flag field, and / or the service_id_flag field.
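The interaction of the target_id_type, original_network_id_flag, and service_id_flag fields described above can be sketched as a resolution function. Field names follow the descriptor fields in this description; the dictionary-based representation is an assumption for illustration.

```python
def resolve_target_service(loop, current):
    """Resolve the target service of one extended_event_linkage loop (sketch)."""
    if loop["target_id_type"] == 3:
        # target services are matched by a user defined identifier
        return {"user_defined_id": loop["user_defined_id"]}
    svc = {
        # flags select the target_* field over the identifier of the current service
        "original_network_id": loop["target_original_network_id"]
            if loop["original_network_id_flag"] else current["original_network_id"],
        "service_id": loop["target_service_id"]
            if loop["service_id_flag"] else current["service_id"],
    }
    if loop["target_id_type"] == 0:
        svc["transport_stream_id"] = current["transport_stream_id"]
    elif loop["target_id_type"] == 1:
        svc["transport_stream_id"] = loop["target_transport_stream_id"]
    # target_id_type == 2: wildcarded TSid, target services may span several TSs
    return svc
```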
  • the linkage type may indicate “service replacement service”.
  • the replacement type may be designated as 3D service 2.0 (Spec-B) in the private_data_byte area.
  • when the linkage descriptor is transmitted in the EIT, if linkage_type is '0x0D', the linkage type may indicate “event linkage”.
  • the receiver may signal the corresponding 3D service 2.0 (Spec-B) using the 3D service descriptor or component descriptor for the target_event_id through the EIT.
  • when linkage_type is '0x0E', the linkage type may indicate “extended event linkage”.
  • the type of target service may indicate frame compatible plano-stereoscopic H.264 / AVC.
  • the linkage_type field is '0x0E' and the link_type field is '2', it may indicate 3D service 2.0 (Spec-B).
  • the receiver may signal 3D service 2.0 (Spec-B) based on the linkage_type field and / or the link_type field.
  • the receiver may signal the corresponding 3D service 2.0 (Spec-B) using the 3D service descriptor or component descriptor for the target_event_id through the EIT.
  • a new value of '0x0F' may be specified in the linkage_type field and the description may be designated as “3D Service 2.0 (Spec-B)”.
  • the linkage type may indicate “service replacement service”.
  • the receiver may signal the 3D service 2.0 (Spec-B) by directly parsing the SDT, the EIT, etc. for the service for the target service_type. Based on this method, the receiver can find a 2D service corresponding to 3D.
  • FIG. 64 is a block diagram of a receiver according to an embodiment of the present invention.
  • the receiver according to an embodiment of the present invention may include the functions of the above-described 3D image display apparatus and / or broadcast receiver.
  • a receiver includes a receiver C10210, a demodulator C10220, a demultiplexer C10230, a signaling information processor C10240, a 3D video decoder C10250, and / or an output formatter C10260.
  • the receiver C10210 receives a broadcast signal through a radio frequency (RF) channel.
  • the demodulator C10220 demodulates the received broadcast signal.
  • the demultiplexer C10230 demultiplexes audio data, video data, and / or signaling information from the demodulated broadcast signal.
  • the demultiplexer C10230 may demultiplex a broadcast signal by filtering using a PID (Packet IDentifier).
  • the demultiplexer C10230 outputs the demultiplexed video signal to the subsequent 3D video decoder C10250 and outputs signaling information to the signaling information processor C10240.
  • the signaling information may include system information such as the above-described PSI, ATSC-PSIP, and / or DVB-SI.
  • the signaling information processor C10240 may process the signaling information received from the demultiplexer C10230.
  • the signaling information processor C10240 may include a database (DB) for temporarily storing the processed signaling information, either internally or externally.
  • the signaling information processor C10240 may process signaling information signaling the aforementioned 3D service 2.0 (Spec-B).
  • the signaling information may include information identifying whether the 3D service is present, 3D service information (3D service descriptor) that provides specific information about the base video data, and / or 3D complementary video information that provides specific information about the enhancement video data.
  • the information identifying whether the 3D service is 2D or 3D may include information for identifying whether the content is 3D service 1.0 (Spec-A) or information for identifying 3D service 2.0 (Spec-B).
  • the video decoder C10250 receives and decodes the demultiplexed video data.
  • the video decoder C10250 may decode video data based on the signaling information.
  • the video decoder C10250 may include a base video decoder C10252 that decodes base video data and / or an enhancement video decoder C10254 that decodes the enhancement video data.
  • the output formatter C10260 formats the 3D video data decoded by the 3D video decoder C10250 according to the output format and outputs the 3D video data to an output unit (not shown).
  • the output formatter C10260 may format the output format for 3D output. If the decoded video data is 2D video data, the output formatter C10260 may output the video data as it is without processing the video data according to an output format for 2D output.
  • the demultiplexer C10230 filters and parses the SDT and / or EIT sections from the received broadcast signal.
  • the demultiplexer may filter the SDT and / or EIT sections through PID filtering.
  • the signaling information processing unit C10240 obtains and stores information on a service having a 3D service 1.0 (Spec-A) type in a service loop in the parsed SDT and / or EIT. That is, the signaling information processor acquires and stores PMT information for the 3D service 1.0 (Spec-A).
  • the signaling information processor C10240 acquires and stores information about a service having a 3D service 2.0 (Spec-B) type in a service loop in the parsed SDT and / or EIT. That is, the signaling information processor acquires and stores PMT information for 3D service 2.0 (Spec-B).
  • the signaling information processor C10240 may obtain, from the PMT information for the 3D service 1.0 (Spec-A) and / or the PMT information for the 3D service 2.0 (Spec-B), the PID information for the base video stream and / or the PID information for the enhancement video stream.
  • the operation of the receiver can be divided into two parts depending on the type of service.
  • the receiver selects a service_id that provides 2D video (base view video).
  • the channel with service_id is, for example, a legacy channel.
  • the 3D video decoder C10250 performs PID filtering and video ES decoding on a video stream corresponding to 2D data. The receiver then outputs the decoded 2D video through the output unit.
  • the receiver selects a service_id that provides a 3D service 1.0 (Spec-A).
  • the service_type of the service_id may be, for example, a half-resolution 3D service.
  • the 3D video decoder C10250 performs PID filtering and video ES decoding for the base video stream of the base layer.
  • the output formatter C10260 formats the 3D video data according to the output format based on the signaling information and / or the 3D service descriptor and outputs the 3D video data to an output unit (not shown).
  • the receiver outputs the half-resolution 3D video on the screen through the output unit.
  • the receiver selects a service_id that provides a 3D service 2.0 (Spec-B).
  • the service_type of the service_id may be, for example, a full-resolution 3D service.
  • the 3D video decoder C10250 performs PID filtering and video ES decoding for the base video stream of the base layer. In addition, the 3D video decoder C10250 performs PID filtering and video ES decoding for the enhancement video stream of the enhancement layer.
  • the output formatter C10260 formats the 3D video data according to the output format based on the signaling information, the 3D service information, and / or the 3D complementary video information, and outputs the 3D video data to an output unit (not shown).
  • the receiver outputs full-resolution 3D video on the screen via the output.
  • the signaling information processing unit C10240 parses the linkage descriptor from the signaling information and, using the information of the parsed linkage descriptor, may identify the linkage information between the 2D service, the 3D service 1.0 (Spec-A), and / or the 3D service 2.0 (Spec-B).
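The three service paths above (2D, Spec-A, Spec-B) can be summarized in a small dispatch sketch. The service-type labels and return values are assumptions for illustration; the actual receiver selects a path from the parsed signaling information.

```python
def select_decode_path(service_type):
    """Return the decode/output path for a selected service (sketch)."""
    if service_type == "2D":
        # legacy channel: decode the base view only, output 2D
        return ("base", "2D output")
    if service_type == "Spec-A":
        # 3D service 1.0: base stream only, half-resolution 3D after formatting
        return ("base", "half-resolution 3D output")
    if service_type == "Spec-B":
        # 3D service 2.0: base + enhancement streams, full-resolution 3D
        return ("base+enhancement", "full-resolution 3D output")
    raise ValueError(service_type)
```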
  • the figure shows a conceptual diagram of 3D service 3.0 (C20000) according to an embodiment of the present invention.
  • 3D service 3.0 (C20000) according to an embodiment of the present invention is for implementing a function for transmitting and / or receiving a broadcast signal of a video stream encoded using the H.265 / HEVC (High efficiency video coding) technique.
  • 3D service 3.0 (C20000) provides a service-frame-compatible 3D service (SFC-3D service) for services (e.g., a video stream, video data, etc.) encoded using the H.265 / HEVC (High efficiency video coding) coding scheme.
  • the SFC-3D service is a frame-compatible service (FC-3D service) for selectively transmitting signaling information in a video bitstream.
  • the signaling information may include control information for extracting any one image (e.g., the left image) from the two images (e.g., the left image and the right image) included in the video stream of the FC-3D service.
  • the signaling information may include control information for up-scaling the extracted image in order to simulate reception of the HDTV service.
  • the HDTV receiver may extract any one image (e.g., the left image) from the two images (e.g., the left image and the right image) included in the video stream of the FC-3D service based on the signaling information.
  • the extracted image may be up-scaled to simulate the reception of the HDTV service.
  • SFC-3D service differs from HDTV service in that the video component of the 3D service is a frame compatible video format bitstream.
  • the video bitstream of the SFC-3D service follows the video format requirements of the HDTV service except for signaling information of the video layer.
  • the 3DTV service represents a DVB service capable of transmitting 3DTV events.
  • the SFC-3D service may be signaled by a service_type field (to be described later) indicating “HEVC digital television service”.
  • the SFC-3D event (or SFC-3DTV event) is a DVB service event that includes a video stream in SFC-3D format.
  • the signaling information may include HD video encoding parameters (eg, codec / profile, resolution, and / or frame rate, etc.) for the SFC-3D service.
  • the signaling information may include information signaling a video format transition for the 3D service (or 3DTV service) switching between the 3DTV mode and the HDTV mode.
  • the service-frame-compatible 3DTV service may be represented as an SFC-3DTV service and / or an SFC-3D service.
  • the SFC-3D service may include a service-compatible 3D service and a frame-compatible 3D service.
  • the frame compatible 3D service (FC-3D service) is a service that spatially multiplexes and arranges the left image C20010 and the right image C20020 constituting video content. Therefore, the 3D receiver supporting the frame compatible 3D service may display the 3D image based on the received left image C20010 and the right image C20020. However, the existing 2D receiver which does not support the frame compatible 3D service outputs the received left image (C20010) and the right image (C20020) all on one screen as if it were a normal 2D image (HDTV image).
  • the SFC-3D service allows an existing 2D receiver (e.g., an HDTV receiver) to extract the 2D version of the video content from the frame-compatible 3D service.
  • the SFC-3D service and the existing HDTV service can be multiplexed onto one MPEG-2 Transport Stream and delivered to the receiver.
  • the transport system for the SFC-3D service is applicable to any broadcast and / or delivery channel that uses the DVB MPEG-2 Transport Stream to transport DVB services.
  • the service provider may provide a video stream capable of displaying 3D video to an existing 2D receiver (eg, an HDTV infrastructure).
  • a conventional 2D receiver receives a video stream encoded using the H.265 / HEVC (High efficiency video coding) coding scheme and may extract one of the left image (C20010) and the right image (C20020) included in the received video stream. Then, the existing 2D receiver may up-scale the extracted image (e.g., the left image C20010) and display it as a 2D image (e.g., HDTV).
  • the 3D receiver may receive a video stream encoded using the H.265 / HEVC coding scheme and determine whether to output the received video stream as a 2D image or as a 3D image. If output as a 2D image, as described above, one of the left image (C20010) and the right image (C20020) included in the received video stream may be extracted; the 3D receiver may then upscale the extracted image and display it as a 2D image (e.g., HDTV). If output as a 3D image, the 3D image may be displayed by formatting the left image (C20010) and the right image (C20020) included in the received video stream.
  • the 3D receiver receives a 2D video stream and a 3D video stream encoded using the H.265 / HEVC (High efficiency video coding) scheme and may identify the relationship between the video streams based on relationship information (e.g., the relationship information may be included in the signaling information).
  • the 2D image and the 3D image of the video content may be selectively switched and output.
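The 2D extraction path described above (pick one constituent view of a frame-compatible frame, then upscale it) can be sketched in Python. This is an illustrative sketch, not receiver code: the frame is modeled as a list of pixel rows, and the upscaler is simple sample duplication rather than a real interpolation filter.

```python
def extract_2d_from_side_by_side(frame, pick="left"):
    """Extract one constituent view from a side-by-side packed frame.

    `frame` is a 2D list of pixel samples; the left image occupies the
    left half of each row and the right image the right half.
    """
    width = len(frame[0])
    half = width // 2
    if pick == "left":
        view = [row[:half] for row in frame]
    else:
        view = [row[half:] for row in frame]
    # Upscale horizontally back to full width by sample duplication
    # (a real receiver would use a proper interpolation filter).
    return [[px for px in row for _ in (0, 1)] for row in view]

# A 2x4 packed frame: 'L' samples on the left, 'R' samples on the right.
packed = [["L1", "L2", "R1", "R2"],
          ["L3", "L4", "R3", "R4"]]
print(extract_2d_from_side_by_side(packed))
# -> [['L1', 'L1', 'L2', 'L2'], ['L3', 'L3', 'L4', 'L4']]
```

A 3D receiver would instead keep both halves and format them for stereoscopic display; a 2D receiver keeps only one.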
  • the SFC-3D service may provide signaling information.
  • signaling information for signaling an SFC-3D service will be described in detail.
  • FIG. 66 is a diagram illustrating a method for signaling an SFC-3DTV service according to an embodiment of the present invention.
  • the signaling information signaling the SFC-3D service may represent all control information for transmitting, receiving, and / or processing a video stream encoded using an H.265 / HEVC coding scheme.
  • the signaling information may include at least one of a Frame Packing Arrangement SEI message, Default Display Window (DDW) information, AFD / bar data, an HEVC_video_descriptor, a content descriptor (content_descriptor), a Disparity Signaling Segment (DSS), DVB subtitle information, a subtitling descriptor, disparity information, a video depth range descriptor, a multi-region disparity, a service descriptor, a component descriptor, and / or a linkage descriptor.
  • the signaling information signaling the SFC-3D service (3D service 3.0) may be included in at least one of a video stream, a transport layer, and / or subtitles.
  • the signaling information may be included in the video ES.
  • when video data is encoded using an MPEG-2 or MPEG-4 video coding technique, the signaling information may be included in user_data() (13010) of the picture extension and user data.
  • when video data is encoded using an H.264 / AVC or H.265 / HEVC video coding technique, the signaling information may be included in a Supplemental Enhancement Information (SEI) message.
  • when the signaling information is included in the transport layer, the signaling information may be included in the PSI, ATSC-PSIP, and / or DVB-SI. Details of the PSI, ATSC-PSIP, and / or DVB-SI are the same as those described above and will be replaced with the above description. Hereinafter, a case in which the signaling data is included in the PSI and / or DVB-SI will be described.
  • when the signaling information is included in the subtitles, the signaling information may include disparity information and may signal the subtitles together with other information included in the PSI and / or DVB-SI.
  • the signaling information may include a Frame Packing Arrangement SEI message.
  • the coded video stream of the SFC-3D service may include a Frame Packing Arrangement SEI message to signal the format of the video component of the FC-3D service. If the video is in the format of the SFC-3D service, the coded video stream of the SFC-3D service may include a Frame Packing Arrangement SEI message for every video frame.
  • the Frame Packing Arrangement SEI message may include a Frame_packing_arrangement_cancel_flag field that identifies whether a frame compatible video format is used.
  • the Frame Packing Arrangement SEI message may include information signaling the format and / or other characteristics of the 3D video stream.
  • the Frame Packing Arrangement SEI message may include fields included in the aforementioned 3D service descriptor.
  • the 3D service descriptor is a descriptor including signaling information about video data. Since the 3D service descriptor is the same as the above description, it will be replaced with the above description.
  • the Frame_packing_arrangement_cancel_flag field may indicate whether the video stream is a 3D video format or an HDTV video format.
  • if the value of the Frame_packing_arrangement_cancel_flag field is '0', it may indicate that the 3D video format is used.
  • other fields of the Frame Packing Arrangement SEI message may signal the format and / or other characteristics of the 3D video stream.
  • the value of the Frame_packing_arrangement_cancel_flag field is '1', it may indicate that a non-3D video format is used. For example, if the value of the Frame_packing_arrangement_cancel_flag field is '1', it may indicate that the HDTV video format is used.
  • SFC-3D services capable of switching 3DTV and non-3DTV may transmit Frame Packing Arrangement SEI messages even during transmission of a video stream in HDTV format.
  • the receiver may identify the SFC-3D service based on the Frame_packing_arrangement_cancel_flag field of the Frame Packing Arrangement SEI message.
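The receiver-side decision described above can be sketched as a small function. This is an illustrative sketch (function name and return labels are not from any standard): flag value '1' means a non-3D (HDTV) format, '0' means the frame-compatible 3D format, and a 2D-only receiver extracts one view.

```python
def select_output_format(fpa_cancel_flag, receiver_supports_3d):
    """Decide how to render based on Frame_packing_arrangement_cancel_flag.

    fpa_cancel_flag: 0 -> frame-compatible 3D format in use,
                     1 -> non-3D (HDTV) video format.
    """
    if fpa_cancel_flag == 1:
        return "HDTV"           # non-3D video format, display as-is
    if receiver_supports_3d:
        return "3D"             # format the two constituent views for 3D
    return "2D-extracted"       # extract and upscale one constituent view

print(select_output_format(0, True))  # -> 3D
```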
  • the receiver may rearrange the samples based on the Frame Packing Arrangement SEI message and process it to be suitable for displaying the samples of the configuration frames (left image and / or right image).
  • the Frame Packing Arrangement SEI message may include a frame_packing_arrangement_id field, a frame_packing_arrangement_cancel_flag field, a frame_packing_arrangement_type field, a quincunx_sampling_flag field, a content_interpretation_type field, a spatial_flipping_flag field, a frame0_flipped_flag field, a field_views_flag field, a current_frame_is_frame0_flag field, a frame0_self_contained_flag field, a frame1_self_contained_flag field, a frame0_grid_position_x field, a frame0_grid_position_y field, a frame1_grid_position_x field, a frame1_grid_position_y field, a frame_packing_arrangement_reserved_byte field, a frame_packing_arrangement_repetition_period field, and / or a frame_packing_arrangement_extension_flag field.
  • the frame_packing_arrangement_id field may include information for identifying a frame packing arrangement SEI message.
  • the frame_packing_arrangement_cancel_flag field may indicate whether the frame packing arrangement SEI message cancels persistence of a previous frame packing arrangement SEI message in output order. For example, if the value of the frame_packing_arrangement_cancel_flag field is “0”, the frame packing arrangement information may follow previous frame packing arrangement information.
  • the frame_packing_arrangement_type field may indicate the type of packing arrangement.
  • the quincunx_sampling_flag field may indicate whether a color component plane of each component frame (left image and / or right image) has been quincunx sampled.
  • the content_interpretation_type field may indicate an intended interpretation of the configuration frame (left image and / or right image). For example, if the value of the content_interpretation_type field is "0", it may indicate that there is no relation between the configuration frames. If the value of the content_interpretation_type field is “1”, two component frames may indicate a left image and a right image of the stereo view scene, respectively, “frame 0” indicates a left image and “frame 1” indicates a right image can do. If the value of the content_interpretation_type field is "2”, two component frames may indicate left and right images of the stereo view scene, respectively, “frame 0" indicates a right image and “frame 1” indicates a left image can do.
  • the spatial_flipping_flag field may indicate whether one image among the left image and the right image is spatially flipped compared to the originally intended direction. For example, in the case of a side-by-side format, when the value of the spatial_flipping_flag field is “1”, it may indicate that one image among the left image and the right image is horizontally flipped. In the case of the top-bottom format, when the value of the spatial_flipping_flag field is “1”, it may indicate that one image among the left image and the right image is vertically flipped.
  • the frame0_flipped_flag field may indicate which of the two constituent frames (left image and right image) is flipped.
  • the field_views_flag field may indicate whether all pictures in the current coded video sequence are coded as complementary field pairs.
  • the current_frame_is_frame0_flag field may indicate whether a frame currently decoded is a left image or a right image. For example, when the value of the current_frame_is_frame0_flag field is “1”, the frame currently decoded may be a left image and the next frame decoded may indicate a right image.
  • the frame0_self_contained_flag field may indicate whether inter prediction operations in the decoding process for the samples of the first constituent frame (e.g., the left image) of the coded video sequence refer to samples of the second constituent frame (e.g., the right image).
  • the frame1_self_contained_flag field may indicate whether inter prediction operations in the decoding process for the samples of the second constituent frame (e.g., the right image) of the coded video sequence refer to samples of the first constituent frame (e.g., the left image).
  • the frame0_grid_position_x field indicates the x component of the (x, y) coordinate pair for the first configuration frame (eg, the left image).
  • the frame0_grid_position_y field indicates the y component of the (x, y) coordinate pair for the first configuration frame (eg, the left image).
  • the frame1_grid_position_x field indicates an x component of a pair of (x, y) coordinates for a second composition frame (eg, a right image).
  • the frame1_grid_position_y field indicates the y component of the (x, y) coordinate pair for the second composition frame (eg, the right image).
  • the frame_packing_arrangement_reserved_byte field may be a reserved byte that can be used in the future.
  • the frame_packing_arrangement_repetition_period field may indicate the persistence of the frame packing arrangement SEI message.
  • the frame_packing_arrangement_repetition_period field may indicate a frame order count interval. Within that interval, another frame packing arrangement SEI message with the same frame_packing_arrangement_id value, or the end of the coded video sequence, may appear in the bitstream.
  • the frame_packing_arrangement_extension_flag field may indicate whether there is additional data in the frame packing arrangement SEI message.
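The first few fields of the message can be read with a small bit reader. The sketch below is a minimal illustration: it follows the field order listed above, with an Exp-Golomb-coded id and fixed-width flags as in the H.264 / H.265 SEI syntax, but the byte values and the `parse_fpa_sei` helper are fabricated for the example and stop after frame0_flipped_flag.

```python
class BitReader:
    """Minimal MSB-first bit reader for an SEI payload."""
    def __init__(self, data):
        self.bits = "".join(f"{b:08b}" for b in data)
        self.pos = 0

    def u(self, n):  # fixed-length unsigned integer
        v = int(self.bits[self.pos:self.pos + n], 2)
        self.pos += n
        return v

    def ue(self):  # unsigned Exp-Golomb code
        zeros = 0
        while self.bits[self.pos] == "0":
            zeros += 1
            self.pos += 1
        self.pos += 1  # consume the terminating '1'
        return (1 << zeros) - 1 + (self.u(zeros) if zeros else 0)

def parse_fpa_sei(data):
    r = BitReader(data)
    sei = {"frame_packing_arrangement_id": r.ue(),
           "frame_packing_arrangement_cancel_flag": r.u(1)}
    if sei["frame_packing_arrangement_cancel_flag"] == 0:
        sei["frame_packing_arrangement_type"] = r.u(7)   # e.g., 3 = side-by-side
        sei["quincunx_sampling_flag"] = r.u(1)
        sei["content_interpretation_type"] = r.u(6)      # 1 = frame 0 is the left image
        sei["spatial_flipping_flag"] = r.u(1)
        sei["frame0_flipped_flag"] = r.u(1)
    return sei

# 0x81 0x81 0x00 encodes: id=0, cancel=0, type=3 (side-by-side),
# quincunx=0, content_interpretation_type=1, no flipping.
print(parse_fpa_sei(bytes([0x81, 0x81, 0x00])))
```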
  • the signaling information may include Default Display Window (DDW) information.
  • the DDW information may include information for extracting a 2D image from the 3D image.
  • the DDW information may include information for extracting one image from the left image and the right image from the 3D image.
  • the DDW may include a coordinate range corresponding to the left image or the right image in the 3D image.
  • the DDW may be included inside the Video Usability Information (VUI) of the SPS.
  • the signaling information may include AFD / bar data.
  • AFD / bar data may include information for extracting an effective region from the entire encoded video region. For example, if the transmitter transmits video content in a first aspect ratio (e.g., 21:9) and the receiver can process video content in a second aspect ratio (e.g., 16:9), the AFD / bar data may include information for extracting the region corresponding to the second aspect ratio. AFD / bar data may be included in the video ES.
  • the signaling information may include HEVC_video_descriptor.
  • the HEVC_video_descriptor may include information identifying coding parameters related to the HEVC video stream.
  • the HEVC_video_descriptor may include a descriptor_tag field, descriptor_length field, profile_idc field, reserved_zero_8bits field, level_idc field, temporal_layer_subset_flag field, HEVC_still_present_flag field, HEVC_24hr_picture_present_flag field, temporal_id_min field, and / or temporal_id_max field.
  • the descriptor_tag field may include information for identifying the HEVC_video_descriptor.
  • the descriptor_length field may include information indicating the size of the HEVC_video_descriptor.
  • the profile_idc field may indicate a profile on which the bitstream is based.
  • the profile may include a Main profile, a Main 10 profile, and / or a Main Still Picture profile defined in the HEVC specification.
  • the reserved_zero_8bits field may include 8 bits of information immediately following the profile_idc field in sequence_parameter_set according to the HEVC specification.
  • the level_idc field may indicate the level of the bitstream defined in the HEVC specification. In each profile, the level is determined according to the size of the image. The level defines the restriction of the parameter according to the size of the corresponding image.
  • the level_idc field may include six levels and intermediate levels thereof.
  • the temporal_layer_subset_flag field may indicate whether a syntax element for describing a subset of temporal layers exists in the HEVC_video_descriptor.
  • the HEVC_still_present_flag field may indicate whether the HEVC video stream includes HEVC still pictures.
  • the HEVC still pictures may include an HEVC Access Unit including an IDR picture.
  • the IDR picture may be followed by a Sequence Parameter Set (SPS) NAL unit and / or a Picture Parameter Set (PPS) NAL unit that transmits sufficient information to correctly decode the IDR picture.
  • the HEVC_24hr_picture_present_flag field may indicate whether the associated HEVC video stream includes HEVC 24-hour pictures.
  • HEVC 24-hour picture means an HEVC Access Unit that contains more than 24 hours of future presentation time.
  • the temporal_id_min field indicates the minimum value of the temporal_id field in the NAL unit header for the related video ES.
  • the temporal_id_max field indicates the maximum value of the temporal_id field in the NAL unit header for the associated video ES.
  • the HEVC_video_descriptor may further include a non_packed_constraint_flag field.
  • the non_packed_constraint_flag field may signal whether a Frame Packing Arrangement SEI message is present in the video stream.
  • the non_packed_constraint_flag field may indicate whether the video stream is a 3DTV video format or an HDTV video format.
  • the non_packed_constraint_flag field may be set to '0' to signal that a Frame Packing Arrangement SEI message is present in the coded video sequence.
  • the non_packed_constraint_flag field may be set to '1' to signal that no Frame Packing Arrangement SEI message is present in the coded video sequence, when only the HDTV video format is used and no format transition to the 3D video format is expected to occur and / or has occurred.
  • the receiver may identify whether a frame packing arrangement SEI message is present in the bitstream based on the non_packed_constraint_flag field.
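The descriptor fields listed above can be unpacked from the descriptor bytes. The sketch below is an illustrative simplification, not the normative DVB syntax: the field widths (8-bit profile_idc, 8-bit level_idc, 1-bit flags) and the placement of non_packed_constraint_flag inside the flags byte are assumptions for the example, and the sample bytes are fabricated.

```python
def parse_hevc_video_descriptor(data):
    """Parse a simplified HEVC_video_descriptor layout (illustrative)."""
    d = {
        "descriptor_tag": data[0],
        "descriptor_length": data[1],
        "profile_idc": data[2],          # e.g., 1 = Main profile
        "reserved_zero_8bits": data[3],
        "level_idc": data[4],
    }
    flags = data[5]
    d["temporal_layer_subset_flag"] = (flags >> 7) & 1
    d["HEVC_still_present_flag"] = (flags >> 6) & 1
    d["HEVC_24hr_picture_present_flag"] = (flags >> 5) & 1
    # Assumed placement: non_packed_constraint_flag in the same flags byte.
    # 0 -> a Frame Packing Arrangement SEI message is present in the stream.
    d["non_packed_constraint_flag"] = (flags >> 4) & 1
    if d["temporal_layer_subset_flag"]:
        d["temporal_id_min"] = data[6] & 0x07
        d["temporal_id_max"] = data[7] & 0x07
    return d

d = parse_hevc_video_descriptor(bytes([0x38, 6, 1, 0, 93, 0b10010000, 0, 6]))
print(d["profile_idc"], d["level_idc"], d["non_packed_constraint_flag"])
```

A receiver following the logic above would expect Frame Packing Arrangement SEI messages in the video stream only when non_packed_constraint_flag is 0.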
  • the signaling information may include a content descriptor (content_descriptor).
  • the content descriptor may provide classification information of the event.
  • the content descriptor may be included in the EIT and / or SIT.
  • the content descriptor may include a descriptor_tag field, a descriptor_length field, and / or at least one for loop.
  • Each for loop may include classification information of an event.
  • Each for loop may include a content_nibble_level_1 field, a content_nibble_level_2 field, and / or a user_byte field.
  • the descriptor_tag field may include information for identifying a content descriptor.
  • the descriptor_length field may indicate the size, in bytes, of the following for loop(s).
  • the content_nibble_level_1 field may be 4 bits and represents a content identifier of the first level.
  • the content_nibble_level_2 field may be 4 bits and represents a content identifier of the second level.
  • the second level may be a detailed field of the first level.
  • the user_byte field may be 8 bits and may be defined by the transmitter.
  • the content_nibble_level_1 field may have a value of "0x0" to "0xF”. If the content_nibble_level_1 field is “0x0”, the content classification information of the first level may indicate “undefined content”. In addition, when the content_nibble_level_1 field is “0x1”, the content classification information of the first level may indicate “Movie / Drama”. In addition, when the content_nibble_level_1 field is “0x2”, the content classification information of the first level may indicate “News / Current affairs”. In addition, when the content_nibble_level_1 field is “0x3”, the content classification information of the first level may indicate “Show / Game show”.
  • In addition, when the content_nibble_level_1 field is “0x4”, the content classification information of the first level may indicate “Sports”.
  • In addition, when the content_nibble_level_1 field is “0x5”, the content classification information of the first level may indicate “Children's / Youth programmes”.
  • In addition, when the content_nibble_level_1 field is “0x6”, the content classification information of the first level may indicate “Music / Ballet / Dance”.
  • In addition, when the content_nibble_level_1 field is “0x7”, the content classification information of the first level may indicate “Arts / Culture (without music)”.
  • In addition, when the content_nibble_level_1 field is “0x8”, the content classification information of the first level may indicate “Social / Political issues / Economics”.
  • In addition, when the content_nibble_level_1 field is “0x9”, the content classification information of the first level may indicate “Education / Science / Factual topics”.
  • In addition, when the content_nibble_level_1 field is “0xA”, the content classification information of the first level may indicate “Leisure hobbies”.
  • In addition, when the content_nibble_level_1 field is “0xB”, the content classification information of the first level may indicate “Special characteristics”.
  • In addition, when the content_nibble_level_1 field is “0xC” to “0xE”, the content classification information of the first level may indicate “Reserved for future use”.
  • In addition, when the content_nibble_level_1 field is “0xF”, the content classification information of the first level may indicate “User defined”.
  • the content_nibble_level_2 field may have a value of “0x0” to “0xF”.
  • the content_nibble_level_2 field may have a specific value for each content_nibble_level_1 field and may indicate more detailed classification information with respect to the content classification information of the first level.
  • the content_nibble_level_1 field is "0xB”
  • the content classification information of the first level is "Special characteristics”
  • the content_nibble_level_2 field is "0x4"
  • the second level content classification information is "plano-stereoscopic 3DTV format”. Can be represented.
  • the receiver may identify the classification information of the video stream based on the content_nibble_level_1 field and / or the content_nibble_level_2 field.
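The two-nibble classification above can be sketched as a lookup. This is an illustrative sketch (the `classify` helper is not from the specification); the first-level labels follow the table in the text, and the 0xB / 0x4 combination is mapped to the plano-stereoscopic 3DTV format.

```python
LEVEL1 = {
    0x0: "undefined content", 0x1: "Movie / Drama", 0x2: "News / Current affairs",
    0x3: "Show / Game show", 0x4: "Sports", 0x5: "Children's / Youth programmes",
    0x6: "Music / Ballet / Dance", 0x7: "Arts / Culture (without music)",
    0x8: "Social / Political issues / Economics",
    0x9: "Education / Science / Factual topics",
    0xA: "Leisure hobbies", 0xB: "Special characteristics", 0xF: "User defined",
}

def classify(content_byte):
    """Split a content descriptor byte into its two 4-bit identifiers."""
    nibble1 = (content_byte >> 4) & 0x0F
    nibble2 = content_byte & 0x0F
    label = LEVEL1.get(nibble1, "Reserved for future use")
    # content_nibble_level_1 = 0xB with level_2 = 0x4 signals 3D content.
    if nibble1 == 0xB and nibble2 == 0x4:
        label = "plano-stereoscopic 3DTV format"
    return nibble1, nibble2, label

print(classify(0xB4))  # -> (11, 4, 'plano-stereoscopic 3DTV format')
```

An EPG implementation could use such a lookup to highlight events classified as 3D, as described above.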
  • the receiver may highlight an event supporting the 3D service in the EPG based on the content descriptor.
  • the receiver may display additional content such as subtitles and / or graphics so that the 3D image is not obscured.
  • Subtitles are an important element of 3D services, as they are of SDTV and / or HDTV services. As with on-screen graphics, it is very important that subtitles be accurately positioned on the 3D video content, in both depth and timing.
  • the signaling information may include Disparity Signaling Segment (DSS).
  • DSS may include the definition of sub-areas within a region.
  • Disparity Signaling Segment may be included in a subtitle.
  • disparity values may be sent for each sub-area to enable placement of subtitles at varying depths within the page and / or area.
  • the disparity information can be transmitted along with the sub-pixel accuracy information.
  • the sub-pixel accuracy information may include information for optimally placing subtitles in the 3D scene.
  • the DSS may support subtitling of 3D content based on disparity values for the region or subregion.
  • the signaling information may include information about the DVB subtitle.
  • the signaling information may include information about the DVB subtitle according to the value of the component_type field.
  • if the value of the stream_content field is “0x03” and the value of the component_type field is “0x14”, it may indicate “DVB subtitles (normal) for display on a high definition monitor”.
  • if the value of the stream_content field is “0x03” and the value of the component_type field is “0x15”, it may indicate “DVB subtitles (normal) with plano-stereoscopic disparity for display on a high definition monitor”.
  • the signaling information may include a subtitling descriptor.
  • the subtitling descriptor may include information about the DVB subtitle.
  • the subtitling descriptor may be included in the PMT.
  • the subtitling descriptor may include an ISO_639_language_code field, a subtitling_type field, a composition_page_id field, and / or an ancillary_page_id field.
  • the ISO_639_language_code field may indicate a language code expressed in three letter text.
  • the subtitling_type field may include information on the content of the subtitle and the display to be applied.
  • the subtitling_type field may indicate a code defined for each component_type field when the value of the stream_content field is “0x03” in the component_descriptor.
  • the composition_page_id field may include information for identifying a composition page.
  • the ancillary_page_id field may include information for identifying an ancillary page.
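The four fields above can be unpacked from the descriptor loop. The sketch below assumes the 8-byte-per-entry layout implied by the field list (3-byte ISO 639 language code, 1-byte subtitling_type, two 16-bit page ids); the sample bytes are fabricated, with subtitling_type 0x15 matching the plano-stereoscopic component_type mentioned above.

```python
def parse_subtitling_descriptor(payload):
    """Parse the entry loop of a subtitling descriptor body (after tag/length).

    Assumed entry layout: 3-byte ISO_639_language_code, 1-byte
    subtitling_type, 2-byte composition_page_id, 2-byte ancillary_page_id.
    """
    entries = []
    for i in range(0, len(payload), 8):
        chunk = payload[i:i + 8]
        entries.append({
            "ISO_639_language_code": chunk[0:3].decode("ascii"),
            "subtitling_type": chunk[3],
            "composition_page_id": (chunk[4] << 8) | chunk[5],
            "ancillary_page_id": (chunk[6] << 8) | chunk[7],
        })
    return entries

body = b"eng" + bytes([0x15, 0x00, 0x01, 0x00, 0x02])
print(parse_subtitling_descriptor(body))
```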
  • the signaling information may include disparity information.
  • Subtitles may include disparity information.
  • Disparity information is information indicating how far an object appears in front of or behind the screen.
  • the disparity information may be included in the video depth range descriptor and / or the multi region descriptor.
  • the signaling information may include a video depth range descriptor.
  • the video depth range descriptor may indicate the intended depth range of the 3D video.
  • the video depth range descriptor provides disparity information at the service and / or event level.
  • the disparity information may have a static value for the duration of the service and / or the event.
  • the video depth range descriptor may include a range_type field, a range_length field, a production_disparity_hint_info () field, and / or a range_selector_byte field.
  • the range_type field indicates the type of depth range. If the value of the range_type field is '0x00', the depth range type is “production disparity hint”. If the value of the range_type field is “0x01”, the depth range type may indicate “multi-region disparity SEI present”. If the value of the range_type field is '0x01', it may indicate that multi-region disparity SEI data exists in the video ES. In this case, the value of the range_length field may be '0'.
  • the signaling information may include multi-region disparity.
  • Multi-region disparity provides disparity information at the video level.
  • the multi-region disparity can have dynamic values for every frame of 3D video and / or other spatial regions of each frame.
  • the receiver may use depth information delivered in the form of disparity values, based on the multi-region disparity, to overlay additional information (graphics, menus, etc.).
  • the receiver can avoid depth violations between the 3D video (plano-stereoscopic video) and the graphics.
  • one maximum disparity value may be sent. Regions may be defined according to a set of predefined image partitioning patterns. For each region of each frame, exactly one minimum disparity value can be transmitted.
  • the multi-region disparity may include a multi_region_disparity_length field, a max_disparity_in_picture field, and / or a min_disparity_in_region_i field.
  • the multi_region_disparity_length field may be 8 bits and may define the number of bytes immediately following the byte defined as the value of the multi_region_disparity_length field in multi_region_disparity ().
  • the multi_region_disparity_length field may signal the type of the region pattern.
  • the multi_region_disparity_length field may include a limited set of values corresponding to predefined image partitioning patterns. Each image partitioning pattern can define several areas of the image.
  • when the value of the multi_region_disparity_length field is '0', it may indicate “no disparity information is to be delivered”.
  • when the value of the multi_region_disparity_length field is '2', it may indicate “one minimum_disparity_in_region is coded as representing the minimum value in overall picture”. If the value of the multi_region_disparity_length field is '3', it may indicate “two vertical minimum_disparity_in_regions are coded”. If the value of the multi_region_disparity_length field is '4', it may indicate “three vertical minimum_disparity_in_regions are coded”.
  • the value of the multi_region_disparity_length field is '5', it may indicate “four minimum_disparity_in_regions are coded”. If the value of the multi_region_disparity_length field is '10', it may indicate “nine minimum_disparity_in_regions are coded”. If the value of the multi_region_disparity_length field is '17', it may indicate “sixteen minimum_disparity_in_regions are coded”.
  • the max_disparity_in_picture field may define a maximum disparity value in a picture.
  • the min_disparity_in_region_i field may define a minimum disparity value in the i th region.
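The length-driven layout above (one length byte counting the max_disparity_in_picture byte plus one min_disparity_in_region byte per region) can be sketched as follows. This is an illustrative sketch: the assumption that disparity values are 8-bit two's-complement integers, and the sample bytes, are not from the text.

```python
def parse_multi_region_disparity(data):
    """Parse multi_region_disparity() as described above (illustrative)."""
    def s8(b):  # interpret a byte as 8-bit two's-complement (assumption)
        return b - 256 if b > 127 else b

    length = data[0]                       # multi_region_disparity_length
    if length == 0:
        return None                        # "no disparity information is to be delivered"
    return {
        "max_disparity_in_picture": s8(data[1]),
        # length counts max_disparity_in_picture plus one byte per region,
        # so length 5 -> "four minimum_disparity_in_regions are coded".
        "min_disparity_in_regions": [s8(b) for b in data[2:1 + length]],
    }

print(parse_multi_region_disparity(bytes([5, 10, 246, 250, 0, 3])))
```

A graphics compositor could then place overlays in front of the minimum disparity of the region they cover, avoiding the depth violations mentioned above.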
  • FIG. 67 is a diagram illustrating a service_type field of a service descriptor according to an embodiment of the present invention.
  • the signaling information may include a service descriptor.
  • Service descriptors are descriptors describing the type of service, service provider, and / or name of the service.
  • the service descriptor may be included in the SDT and / or the Selection Information Table (SIT).
  • the service descriptor may include a descriptor_tag field, a descriptor_length field, a service_type field, a service_provider_name_length field, a char field indicating the name of the service provider, a service_name_length field, and / or a char field indicating the name of the service. Details of the service descriptor will be replaced with the above description.
  • the service_type field may describe the type of service.
  • the assignment of service_type for a service will be replaced with the above description.
  • if the value of the service_type field is “0x1F”, the service type (or service type information) may indicate “HEVC digital television service” and / or “H.265 / HEVC frame compatible plano-stereoscopic HD digital television service”. If the value of the service_type field is “0x20”, the service type may indicate “HEVC digital television service” and / or “H.265 / HEVC frame compatible plano-stereoscopic HD NVOD time-shifted service”. If the value of the service_type field is “0x21”, the service type may indicate “HEVC digital television service” and / or “H.265 / HEVC frame compatible plano-stereoscopic HD NVOD reference service”.
  • the service_type field may signal whether an HEVC video service exists in a broadcast signal (e.g., MPEG-TS). For example, if the value of the service_type field is one of “0x1F”, “0x20”, and / or “0x21”, the service type may indicate “HEVC digital television service” and / or “H.265 / HEVC frame compatible HD service”.
  • the service descriptor may include a value of the service_type field (the value of the service_type field is “0x1F”) used in the existing 2D service (2D HEVC).
  • the existing 2D receiver may display 2D video by processing video data based on the value of the service_type field used in the existing 2D service.
  • a receiver supporting the SFC-3D service may display 2D video and / or 3D video by processing video data based on a value of a newly defined service_type field.
  • the receiver may identify whether an HEVC video service exists in the broadcast signal (e.g., MPEG-TS) based on the service_type field. For example, if the value of the service_type field of the video stream is “0x1F”, the receiver may determine that the service type is “HEVC digital television service” and / or “H.265 / HEVC frame compatible HD service”.
  • if the value of the service_type field is one of “0x20” and / or “0x21”, the receiver may determine that the corresponding video service is an HEVC video service, and that the service type is “HEVC digital television service” and / or “H.265 / HEVC frame compatible HD service”.
  • FIG. 68 is a diagram illustrating a service_type field of a service descriptor according to an embodiment of the present invention.
  • the signaling information may include a service descriptor. Since the details of the service descriptor are the same as those described above, they will be replaced with the above description.
  • Bitdepth may indicate the number of bits used to represent the colors (or the levels of brightness, in the case of gray scale images) that can be reproduced.
  • Bitdepth may include 8-bit Bitdepth and / or 10-bit Bitdepth.
  • if the value of the service_type field is “0x1F”, the service type may indicate “HEVC digital television service” and / or “H.265 / HEVC frame compatible plano-stereoscopic 8 bit bitdepth HD digital television service”. If the value of the service_type field is “0x20”, the service type may indicate “HEVC digital television service” and / or “H.265 / HEVC frame compatible plano-stereoscopic 8 bit bitdepth HD NVOD time-shifted service”. If the value of the service_type field is “0x21”, the service type may indicate “HEVC digital television service” and / or “H.265 / HEVC frame compatible plano-stereoscopic 8 bit bitdepth HD NVOD reference service”.
  • if the value of the service_type field is “0x22”, the service type may indicate “HEVC digital television service” and / or “H.265 / HEVC frame compatible plano-stereoscopic 10 bit bitdepth HD digital television service”. If the value of the service_type field is “0x23”, the service type may indicate “HEVC digital television service” and / or “H.265 / HEVC frame compatible plano-stereoscopic 10 bit bitdepth HD NVOD time-shifted service”. If the value of the service_type field is “0x24”, the service type may indicate “HEVC digital television service” and / or “H.265 / HEVC frame compatible plano-stereoscopic 10 bit bitdepth HD NVOD reference service”.
  • the service_type field may signal whether an HEVC video service exists in a broadcast signal (e.g., MPEG-TS). For example, if the value of the service_type field is one of “0x1F”, “0x20”, “0x21”, “0x22”, “0x23”, and / or “0x24”, the service type may indicate “HEVC digital television service” and / or “H.265 / HEVC frame compatible HD service”.
  • the receiver may determine whether an HEVC video service exists in the broadcast signal (eg, MPEG-TS) based on the service_type field. For example, if the value of the service_type field of the video stream is "0x1F", the receiver may determine that the service type is "HEVC digital television service" and/or "H.265/HEVC frame compatible HD service".
  • likewise, if the value of the service_type field of the video stream is one of “0x20”, “0x21”, “0x22”, “0x23”, and/or “0x24”, the receiver may determine that the service type is “HEVC digital television service” and/or “H.265/HEVC frame compatible HD service”.
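The service_type check described above can be sketched as follows. This is an illustrative sketch only: the set of values follows the text above, and the function names are hypothetical, not part of any standard API.

```python
# Receiver-side check of the DVB service descriptor's service_type field.
# The value set below follows the text above; names are illustrative.
HEVC_SERVICE_TYPES = {0x1F, 0x20, 0x21, 0x22, 0x23, 0x24}

def is_hevc_service(service_type: int) -> bool:
    """Return True if the service_type signals an HEVC video service."""
    return service_type in HEVC_SERVICE_TYPES

def is_sfc_3d_candidate(service_type: int) -> bool:
    """In this sketch, the newly defined values 0x20-0x24 (the frame
    compatible plano-stereoscopic service types) mark SFC-3D candidates."""
    return service_type in HEVC_SERVICE_TYPES - {0x1F}
```

A receiver could apply such a check while building its channel map, before inspecting any component descriptors.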
  • FIG. 69 illustrates a stream_content field and / or a component_type field of a component descriptor according to an embodiment of the present invention.
  • the signaling information may include a component descriptor.
  • the component descriptor may be used to identify the type of component stream and provide a text description of the elementary stream.
  • Component descriptors may be included in the SDT and / or the EIT.
  • the component descriptor may include a descriptor_tag field, descriptor_length field, reserved_future_use field, stream_content field (or first stream content information), component_type field (or component type information), component_tag field, ISO_639_language_code field, and/or text_char field. The details of the component descriptor are the same as those described above.
  • the component descriptor may include a value of a stream_content field and / or a value of a component_type field used in an existing 2D service (2D HEVC).
  • the component descriptor may include a value of a new stream_content field and / or a component_type field applicable to the SFC-3D service.
  • the SFC-3D service may identify component type information (eg, the type of a bitstream, a component stream, and/or a video stream) based on one component descriptor.
  • the component type information may include at least one parameter.
  • the parameter may include bitstream information, codec information (encoding type information) of a video stream, profile information, resolution information, aspect ratio information, frame rate information, image format information, and / or Bitdepth information.
  • Bitstream information is information about a type of a bitstream.
  • the bitstream information may include a video stream, an audio stream, and / or a subtitle stream.
  • Codec information is information about a codec encoding a bitstream.
  • the codec information may include H.265 / HEVC.
  • Resolution information indicates the resolution of the video in the bitstream.
  • For example, the resolution information may include high definition and/or standard definition.
  • Aspect ratio information is the ratio of the horizontal size to the vertical size of a frame when displaying video data.
  • For example, the aspect ratio information may include a 16:9 aspect ratio.
  • Frame rate information indicates how many video frames are output in one second.
  • For example, the frame rate information may include 25 Hz and/or 30 Hz.
  • the image format information is information about an arrangement of left and right images included in a video stream.
  • the image format information may include side-by-side and / or top-and-bottom.
  • Bitdepth information indicates the number of bits used to represent each color (or, in the case of gray-scale images, each level of brightness), and thus the number of colors (or shades of gray) that can be reproduced.
  • Bitdepth may include 8-bit Bitdepth and / or 10-bit Bitdepth.
  • for example, if the value of the stream_content field is '0x09' and the value of the component_type field is '0x80', the video data may indicate “H.265/HEVC plano-stereoscopic frame compatible high definition video, 16:9 aspect ratio, 25 Hz, Side-by-Side”.
  • if the value of the stream_content field is '0x09', H.265/HEVC video may be indicated.
  • since the value '0x09' of the stream_content field and/or the value '0x80' of the component_type field are not fixed values, they may be changed.
  • “H.265/HEVC plano-stereoscopic frame compatible high definition video” may indicate that the bitstream is a video stream, that the video stream is encoded using H.265/HEVC video coding, and that the video stream is high definition video supporting a frame-compatible 3D service.
  • “16:9 aspect ratio” indicates the aspect ratio information of the video stream, “25 Hz” indicates the frame rate information of the video stream, and “Side-by-Side” indicates the image format information of the left and/or right images included in the video stream. In the following, the same expressions have the same meanings and are therefore described only briefly.
  • the video data may indicate “H.265/HEVC plano-stereoscopic frame compatible high definition video, 16:9 aspect ratio, 25 Hz, Top-and-Bottom”.
  • the video data may indicate “H.265/HEVC plano-stereoscopic frame compatible high definition video, 16:9 aspect ratio, 30 Hz, Side-by-Side”.
  • the video data may indicate “H.265/HEVC plano-stereoscopic frame compatible high definition video, 16:9 aspect ratio, 30 Hz, Top-and-Bottom”.
  • the existing 2D receiver may display 2D video by processing video data based on the value of the stream_content field and / or the value of the component_type field used in the existing 2D service.
  • a receiver supporting the SFC-3D service may display 2D video and/or 3D video by processing video data based on the value of a newly defined stream_content field and/or the value of a component_type field.
  • FIG. 70 is a diagram illustrating a stream_content field and/or a component_type field of a component descriptor according to an embodiment of the present invention.
  • the signaling information may include a component descriptor.
  • the component descriptor may identify the type of component stream. The details of the component descriptor are the same as those described above.
  • Bitdepth may indicate the number of bits used to represent each color (or, in the case of gray-scale images, each level of brightness), and thus the number of colors (or shades of gray) that can be reproduced.
  • Bitdepth may include 8-bit Bitdepth and / or 10-bit Bitdepth.
  • for example, if the value of the stream_content field is '0x09' and the value of the component_type field is '0x80', the video data may indicate “H.265/HEVC plano-stereoscopic frame compatible high definition video, 16:9 aspect ratio, 25 Hz, Side-by-Side, 8 bit bitdepth”. If the value of the stream_content field is '0x09' and the value of the component_type field is '0x81', the video data may indicate “H.265/HEVC plano-stereoscopic frame compatible high definition video, 16:9 aspect ratio, 25 Hz, Top-and-Bottom, 8 bit bitdepth”.
  • if the value of the stream_content field is '0x09' and the value of the component_type field is '0x82', the video data may indicate “H.265/HEVC plano-stereoscopic frame compatible high definition video, 16:9 aspect ratio, 30 Hz, Side-by-Side, 8 bit bitdepth”. If the value of the stream_content field is '0x09' and the value of the component_type field is '0x83', the video data may indicate “H.265/HEVC plano-stereoscopic frame compatible high definition video, 16:9 aspect ratio, 30 Hz, Top-and-Bottom, 8 bit bitdepth”.
  • if the value of the stream_content field is '0x09' and the value of the component_type field is '0x84', the video data may indicate “H.265/HEVC plano-stereoscopic frame compatible high definition video, 16:9 aspect ratio, 25 Hz, Side-by-Side, 10 bit bitdepth”. If the value of the stream_content field is '0x09' and the value of the component_type field is '0x85', the video data may indicate “H.265/HEVC plano-stereoscopic frame compatible high definition video, 16:9 aspect ratio, 25 Hz, Top-and-Bottom, 10 bit bitdepth”.
  • if the value of the stream_content field is '0x09' and the value of the component_type field is '0x86', the video data may indicate “H.265/HEVC plano-stereoscopic frame compatible high definition video, 16:9 aspect ratio, 30 Hz, Side-by-Side, 10 bit bitdepth”. If the value of the stream_content field is '0x09' and the value of the component_type field is '0x87', the video data may indicate “H.265/HEVC plano-stereoscopic frame compatible high definition video, 16:9 aspect ratio, 30 Hz, Top-and-Bottom, 10 bit bitdepth”.
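The component_type values above follow a regular bit pattern, which a receiver could decode as sketched below. This is an illustrative sketch under that assumption; the function name is hypothetical, and the even-numbered values are inferred to mirror the odd-numbered entries given in the text.

```python
# Illustrative decode of the component_type values used with
# stream_content 0x09 (H.265/HEVC video) in the SFC-3D examples above.
def decode_hevc_fc3d_component_type(component_type: int) -> dict:
    if not 0x80 <= component_type <= 0x87:
        raise ValueError("not an SFC-3D HEVC component_type in this sketch")
    bits = component_type - 0x80
    return {
        "codec": "H.265/HEVC",
        "aspect_ratio": "16:9",
        # bit 1: frame rate, bit 0: packing, bit 2: bitdepth (assumed pattern)
        "frame_rate_hz": 30 if bits & 0x2 else 25,
        "image_format": "Top-and-Bottom" if bits & 0x1 else "Side-by-Side",
        "bitdepth": 10 if bits & 0x4 else 8,
    }
```

For instance, 0x85 would decode to 25 Hz, Top-and-Bottom, 10 bit bitdepth, matching the table above.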
  • the receiver may identify the type of component stream (eg, video stream) based on the component descriptor. For example, the receiver may identify that the video stream is high definition video encoded using the H.265 / HEVC video coding technique based on the component descriptor. In addition, the receiver may identify that the image format information of the left and right images included in the video stream is “top and bottom” based on the component descriptor.
  • FIG. 71 is a diagram illustrating a stream_content field and/or a component_type field of a component descriptor according to an embodiment of the present invention.
  • the signaling information may include a plurality of component descriptors.
  • the component descriptor may be used to identify the type of component stream (eg, video stream) and provide a text description of the elementary stream.
  • Component descriptors may be included in the SDT and / or the EIT.
  • the component descriptor may include a descriptor_tag field, descriptor_length field, stream_content_ext field, reserved_future_use field, stream_content field (first stream content information), component_type field (component type information), component_tag field, ISO_639_language_code field, and / or text_char field.
  • the stream_content_ext field (second stream content information) may be combined with the stream_content field to identify the type of the stream.
  • the component descriptor may further include a stream_content_ext field, expanding the space in which stream_content values can be defined.
  • the component_tag field may include information for identifying a component stream. That is, the component_tag field may identify which video stream the component descriptor is for. For example, the component_tag field has the same value as the component_tag field in the stream identifier descriptor (if present in the PSI program map section) for the component stream.
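As a rough sketch, a component descriptor carrying the fields above could be parsed as follows. This assumes the DVB SI byte layout (descriptor tag 0x50, with stream_content_ext in the upper nibble and stream_content in the lower nibble of the first body byte); the function name and dictionary keys are illustrative.

```python
# Minimal parser for a component descriptor, assuming the DVB SI layout:
# tag (0x50), length, stream_content_ext/stream_content byte,
# component_type, component_tag, 3-byte ISO 639 language code, text.
def parse_component_descriptor(data: bytes) -> dict:
    if len(data) < 8 or data[0] != 0x50:  # 0x50 = component_descriptor tag
        raise ValueError("not a component descriptor")
    length = data[1]
    body = data[2:2 + length]
    return {
        "stream_content_ext": body[0] >> 4,
        "stream_content": body[0] & 0x0F,
        "component_type": body[1],
        "component_tag": body[2],
        "iso_639_language_code": body[3:6].decode("ascii"),
        "text": body[6:].decode("latin-1"),
    }
```

The component_tag value recovered here is what lets the receiver match this descriptor against the stream identifier descriptor of a particular elementary stream.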
  • the SDT and / or EIT may each include a plurality of component descriptors.
  • the SDT may include one component descriptor
  • the EIT may include a plurality of component descriptors.
  • there may be a plurality of component descriptors for one component stream.
  • an additional component descriptor may include image format information of a left image and a right image included in a video stream.
  • the image format information may indicate “Top-and-Bottom”.
  • the image format information may indicate "Side-by-Side”.
  • the value of the stream_content field, the value of the stream_content_ext field, and / or the value of the component_type field are not fixed values and may be changed.
  • the signaling information may include a plurality of component descriptors.
  • the at least one component descriptor may identify the same bitstream.
  • some of the at least one component descriptor may be component descriptors having the same content, identifying the same bitstream.
  • the remaining component descriptors may have different content while still identifying the same bitstream.
  • for example, each component descriptor may include at least one of: encoding type information (compression information) of a bitstream (for example, a video stream and/or an audio stream), image format information of the left and right images included in a video stream providing a 3D service, profile information of the video data (eg, HEVC profile information), aspect ratio information, frame rate information, and/or information that can identify the type of other bitstreams (including video data, subtitles, and/or audio data).
  • the receiver may combine the at least one component descriptor to obtain encoding type information of the bitstream, image format information of the left and right images included in the video stream providing the 3D service, and/or additional information about the bitstream.
  • the receiver may combine at least one component descriptor to obtain information that the bitstream has been encoded using the H.265 / HEVC video coding technique and / or the image format form is a side-by-side format.
  • the receiver may combine the at least one component descriptor to obtain information that the bitstream provides the SFC-3D service.
  • the component descriptor may include a value of a stream_content field, a value of a stream_content_ext field, and/or a value of a component_type field used in an existing 2D service (2D HEVC). For example, if the value of the stream_content field of the component descriptor is '0x09', the value of the stream_content_ext field is '0x0', and the value of the component_type field is '0x00', the video data may indicate “HEVC Main Profile high definition video, 50 Hz”.
  • if the value of the component_type field is '0x01', the video data may indicate “HEVC Main 10 Profile high definition video, 50 Hz”.
  • if the value of the component_type field is '0x02', the video data may indicate “HEVC Main Profile high definition video, 60 Hz”.
  • if the value of the component_type field is '0x03', the video data may indicate “HEVC Main 10 Profile high definition video, 60 Hz”.
  • if the value of the component_type field is '0x04', the video data may indicate “HEVC ultra high definition video”.
  • the component descriptor may include a value of a new stream_content field, a value of a stream_content_ext field, and / or a value of a component_type field applicable to an SFC-3D service.
  • the existing 2D receiver may display 2D video by processing video data based on the value of the stream_content field, the value of the stream_content_ext field, and / or the value of the component_type field used in the existing 2D service.
  • a receiver supporting the SFC-3D service may display 2D video and/or 3D video by processing video data based on a newly defined stream_content field value, a stream_content_ext field value, and/or a component_type field value.
  • the SFC-3D service may identify the same type of one bitstream (eg, a video stream) based on multiple component descriptors.
  • the signaling information may include a first component descriptor, a second component descriptor, a third component descriptor, and / or a fourth component descriptor.
  • the first component descriptor, the second component descriptor, the third component descriptor, and / or the fourth component descriptor may identify the same bit stream with a value of the same component_tag field.
  • for example, the first component descriptor may indicate that the video data is “HEVC Main Profile high definition video, 50 Hz”.
  • the second component descriptor may indicate that the video data is “plano-stereoscopic top and bottom (TaB) frame-packing”.
  • the third component descriptor may indicate that the video data is “16:9 aspect ratio”.
  • the fourth component descriptor may indicate that the video data is “plano-stereoscopic top and bottom (TaB) frame-packing”.
  • the second component descriptor and the fourth component descriptor are component descriptors having the same contents identifying the same stream.
  • the signaling information may identify the same one video stream by combining at least one of the first component descriptor, the second component descriptor, the third component descriptor, and / or the fourth component descriptor.
  • the receiver may identify the type of the received bitstream based on the at least one component descriptor.
  • the receiver may receive a service even though it does not recognize some of the plurality of component descriptors. That is, the relationship between the plurality of component descriptors may be processed using an “or” operation, so that a service can be received based on only some of the component descriptors.
  • the receiver may identify that the received video stream is high definition video encoded using the H.265 / HEVC video coding technique.
  • when the receiver receives the first component descriptor and the second component descriptor, it may combine them to identify that the received video stream is high definition video encoded using an H.265/HEVC video coding technique, and that the image format information of the left and right images included in the video stream is “plano-stereoscopic top and bottom (TaB) frame-packing”.
  • even if some component descriptors are missing, the receiver may identify the same video stream from the remaining component descriptors. For example, even if the receiver does not receive the fourth component descriptor, it may identify, from the second component descriptor, that the image format information of the left and right images included in the video stream is “plano-stereoscopic top and bottom (TaB) frame-packing”.
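The "or" combination of descriptors sharing one component_tag might be sketched as follows. The dictionary representation and function name are hypothetical; the point is that unrecognized or missing descriptors are simply skipped, and the first recognized value for each attribute wins.

```python
# Combine several component descriptors that share the same component_tag
# (i.e. that describe the same video stream). Descriptors for other tags,
# and attributes the receiver does not understand, are simply ignored.
def merge_descriptors(descriptors, component_tag):
    """descriptors: iterable of dicts with 'component_tag' plus any known
    attribute keys (e.g. 'codec', 'image_format', 'aspect_ratio')."""
    merged = {}
    for d in descriptors:
        if d.get("component_tag") != component_tag:
            continue
        for key, value in d.items():
            if key != "component_tag" and value is not None:
                merged.setdefault(key, value)  # first recognized value wins
    return merged
```

With this scheme, dropping the (duplicate) fourth descriptor still leaves the TaB frame-packing information available from the second one.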
  • FIG. 72 is a diagram illustrating a linkage_type field and / or a link_type field of a linkage descriptor according to an embodiment of the present invention.
  • the signaling information may include a linkage descriptor.
  • the linkage descriptor may indicate a service that may exist when additional information related to a specific entity in the SI system is required.
  • the linkage descriptor may exist in the corresponding table requiring the service that provides additional information. The linkage descriptor may also be used to specify a service replacement service.
  • in that case, the receiver may select the replacement service automatically.
  • linkage descriptors may be included in the SDT and / or the EIT. However, when the value of the linkage_type field is in the range of '0x0D' to '0x1F', the linkage descriptor may be included only in the EIT.
  • the receiver may identify a 3D service or event corresponding to a specific 2D service_id currently being viewed or a specific 2D event_id to be broadcasted later based on the linkage descriptor.
  • the linkage descriptor may include a descriptor_tag field, descriptor_length field, transport_stream_id field, original_network_id field, service_id field, linkage_type field, mobile_hand-over_info () field, event_linkage_info () field, extended_event_linkage_info () field, and/or private_data_byte field. The details of the linkage descriptor are the same as those described above.
  • FIG. 73 is a block diagram of a receiver according to an embodiment of the present invention.
  • a receiver according to an embodiment of the present invention includes a receiver C20210, a demodulator C20220, a demultiplexer C20230, a signaling information processor C20240, a decoder C20250, and/or an output unit C20260.
  • the above description may be applied to a broadcast signal transmission apparatus.
  • the receiver C20210 receives a broadcast signal through a radio frequency (RF) channel.
  • the receiver C20210 may receive a broadcast signal including a video stream for the video component and signaling information signaling the video stream.
  • the video stream may provide one of a frame compatible 3-Dimensional Television (3DTV) service and a High Definition Television (HDTV) service.
  • the signaling information includes first information indicating whether the video stream is a frame compatible 3DTV video format or a high definition television (HDTV) video format, service type information indicating that the service type of the video stream is a high efficiency video coding (HEVC) service, and a plurality of component descriptors indicating the type of the HEVC service with respect to the video stream.
  • the signaling information may include at least one of a Frame Packing Arrangement SEI message, Default Display Window (DDW) information, AFD/bar data, an HEVC_video_descriptor, a content descriptor (content_descriptor), a Disparity Signaling Segment (DSS), DVB subtitle information, a subtitling descriptor, disparity information, a video depth range descriptor, a multi-region disparity, a service descriptor, a component descriptor, and/or a linkage descriptor.
  • the demodulator C20220 demodulates the received broadcast signal.
  • the demultiplexer C20230 demultiplexes audio data, video data, and / or signaling information from the demodulated broadcast signal. To this end, the demultiplexer C20230 may demultiplex a broadcast signal by filtering using a PID (Packet IDentifier). The demultiplexer C20230 outputs the demultiplexed video signal to the decoding unit C20250 at a later stage, and outputs signaling information to the signaling information processor C20240.
  • the signaling information processor C20240 may process the signaling information received from the demultiplexer C20230. In addition, the signaling information processor C20240 may identify the type of the video stream based on the signaling information.
  • the signaling information processor C20240 may include a database (DB) for temporarily storing the processed signaling information, either internally or externally.
  • the signaling information processor C20240 may instruct switching between the 3DTV video format and the HDTV video format based on the signaling information. Meanwhile, the receiver may receive, from the user, switching information indicating that the video stream should be output in 2D video mode. In this case, the signaling information may include the switching information.
  • the decoding unit C20250 receives and decodes the demultiplexed video data.
  • the decoder C20250 may decode video data based on the signaling information.
  • the decoding unit C20250 may decode a video stream in a 3DTV video format and / or an HDTV video format.
  • the output unit C20260 may format the decoded video data according to an output format and output it.
  • the output unit C20260 may output the video stream in one of a 3DTV video format and an HDTV video format.
  • for 3D output, the output unit may format the decoded video data according to the 3D output format.
  • the output unit C20260 may output the video data as it is, without separately processing the output data for the 2D output format.
  • the signaling information processor C20240 processes the signaling information included in the PMT.
  • the signaling information processor C20240 may obtain the received channel information based on the signaling information and generate a channel map.
  • the signaling information processor C20240 may identify whether the received video stream is a 3DTV video format or an HDTV video format.
  • the signaling information processor C20240 checks non_packed_constraint_flag of the HEVC video descriptor included in the PMT.
  • the non_packed_constraint_flag field may indicate whether the video stream is a 3DTV video format or an HDTV video format.
  • the non_packed_constraint_flag field may signal whether a Frame Packing Arrangement SEI message is present in the video stream.
  • if the non_packed_constraint_flag field indicates that the video stream is a 3DTV video format, a Frame Packing Arrangement SEI message may be present in the video stream.
  • if the non_packed_constraint_flag field indicates that the video stream is an HDTV video format, the Frame Packing Arrangement SEI message may not be present.
  • the decoding unit C20250 may decode the video stream of the HDTV video format.
  • the output unit C20260 may output the decoded video stream as a 2D image.
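The decision path just described can be summarized in a small sketch. It is illustrative only, under the assumption stated above that non_packed_constraint_flag signals the absence of the Frame Packing Arrangement SEI message; the function name is hypothetical.

```python
# Receiver decision based on the HEVC video descriptor's
# non_packed_constraint_flag: when the flag signals that no Frame Packing
# Arrangement SEI message is present, the stream is treated as HDTV (2D);
# otherwise the receiver checks for the SEI message before outputting 3D.
def select_output_mode(non_packed_constraint_flag: bool,
                       frame_packing_sei_present: bool) -> str:
    if non_packed_constraint_flag:
        # HDTV video format: no frame packing SEI expected -> 2D output
        return "2D"
    return "3D" if frame_packing_sei_present else "2D"
```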
  • the signaling information processor C20240 may identify a type of a service and a type of a component stream of the video stream.
  • the signaling information processor C20240 may identify a type of a service based on service type information (service_type field) of the service descriptor.
  • the service_type field may signal whether an HEVC video service exists in a broadcast signal (eg, MPEG-TS). For example, if the value of the service_type field is "0x1F", the service type may indicate "HEVC digital television service” and / or "H.265 / HEVC frame compatible HD service”.
  • the signaling information processor C20240 may identify the type of the component stream based on the first stream content information (stream_content field), the second stream content information, and the component type information (component_type field) of the component descriptor.
  • each component descriptor may include first stream content information indicating the type of the video stream, second stream content information indicating, in combination with the first stream content information, the type of the video stream, and component type information indicating the type of the video component.
  • the component descriptor may indicate the type of the HEVC service based on the first stream content information, the second stream content information, and the component type information.
  • the SDT and / or EIT may each include a plurality of component descriptors.
  • the SDT may include one component descriptor
  • the EIT may include a plurality of component descriptors.
  • there may be a plurality of component descriptors for one component stream.
  • for example, the first component descriptor may indicate that the video data is “HEVC Main Profile high definition video, 50 Hz”.
  • the second component descriptor may indicate that the video data is “plano-stereoscopic top and bottom (TaB) frame-packing”.
  • the service type information and the first component descriptor may be included in the SDT, and the second component descriptor may be included in the EIT.
  • the SDT and / or the EIT may include a plurality of component descriptors.
  • the SDT and / or EIT may include at least one first component descriptor and / or at least one second component descriptor.
  • the signaling information processor C20240 may combine the first component descriptor and the second component descriptor to identify that the received video stream is high definition video encoded using an H.265/HEVC video coding technique, and that the image format information of the left and right images included in the video stream is “plano-stereoscopic top and bottom (TaB) frame-packing”.
  • the signaling information processor C20240 may obtain a Frame Packing Arrangement SEI message included in the video stream.
  • the signaling information processor C20240 may obtain specific information about the video stream based on the Frame Packing Arrangement SEI message.
  • the Frame Packing Arrangement SEI message may include a frame_packing_arrangement_id field, a frame_packing_arrangement_cancel_flag field, a frame_packing_arrangement_type field, a quincunx_sampling_flag field, a content_interpretation_type field, a spatial_flipping_flag field, a frame0_flipped_flag field, a field_views_flag field, a current_frame_is_frame0_flag field, a frame0_self_contained_flag field, a frame1_self_contained_flag field, a frame0_grid_position_x field, a frame0_grid_position_y field, a frame1_grid_position_x field, a frame1_grid_position_y field, and/or a frame_packing_arrangement_reserved field.
  • the output unit C20260 may output the video stream as a 3D image based on the Frame Packing Arrangement SEI message.
  • the output unit C20260 may separate the 3D image into a left image and a right image based on the Frame Packing Arrangement SEI message.
  • the output unit C20260 may format the left image and the right image and output the 3D image.
  • the output unit C20260 may further include a formatter (not shown) for formatting the left image and the right image according to the 3D output format.
  • the output unit C20260 may output the 3D image by efficiently changing the aspect ratio of the image according to the aspect ratio of the receiver based on the AFD / bar data.
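The separation of a frame-packed picture into left and right images can be sketched as follows. Frames are modeled as simple lists of pixel rows purely for illustration; the function name and format strings are hypothetical.

```python
# Separate a frame-packed picture into left and right images for the two
# image formats discussed above (Top-and-Bottom and Side-by-Side).
def split_packed_frame(frame, image_format):
    rows, cols = len(frame), len(frame[0])
    if image_format == "top-and-bottom":
        # upper half carries the left image, lower half the right image
        return frame[:rows // 2], frame[rows // 2:]
    if image_format == "side-by-side":
        left = [row[:cols // 2] for row in frame]
        right = [row[cols // 2:] for row in frame]
        return left, right
    raise ValueError("unknown image format")
```

A formatter would then interleave or page-flip the two views according to the display's 3D output format.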
  • the receiver may receive, from the user, switching information indicating that the video stream should be output in 2D video mode.
  • the signaling information may include switching information.
  • the signaling information processor C20240 may extract one of the left image and the right image from the video stream.
  • the output unit C20260 may output the extracted one image as a 2D image.
  • the signaling information processor C20240 may acquire Default Display Window (DDW) information and/or AFD/bar data. That is, in the case of a stream including a Frame Packing Arrangement SEI message, the DDW information may include information for extracting a 2D image from the 3D image, and the AFD/bar data may include information for extracting an effective region from the entire encoded video region.
  • the output unit C20260 may extract the 2D image from the 3D image based on the Default Display Window (DDW) information, and separate the effective area based on the AFD / bar data.
  • the output unit C20260 may upscale the extracted one image.
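A minimal sketch of this 2D fallback path: extract one view from a side-by-side frame and upscale it back to full width. Nearest-neighbour pixel duplication is used purely for illustration; a real receiver would use a proper scaler, and the function name is hypothetical.

```python
# 2D fallback: take one view (left or right) from a side-by-side frame
# and upscale it horizontally back to the original width.
def extract_2d_view(frame, take="left"):
    cols = len(frame[0])
    half = cols // 2
    view = [row[:half] if take == "left" else row[half:] for row in frame]
    # nearest-neighbour upscale: duplicate each pixel horizontally
    return [[px for px in row for _ in (0, 1)] for row in view]
```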
  • the broadcast signal transmission apparatus may include a first encoder (not shown), a second encoder (not shown), and / or a transmitter (not shown).
  • the first encoder can generate a video stream for the video component.
  • the video stream may provide one of a frame compatible 3-Dimensional Television (3DTV) service and a High Definition Television (HDTV) service.
  • the signaling information includes first information indicating whether the video stream is a frame compatible 3DTV video format or a high definition television (HDTV) video format, and service type information indicating that the service type of the video stream is a high efficiency video coding (HEVC) service.
  • the transmitter may transmit a broadcast signal including a video stream and signaling information.
  • the broadcast signal transmission method according to an embodiment of the present invention may be performed using the above-described broadcast signal transmission apparatus, and may be performed in a reverse process of the broadcast signal reception method.
  • the foregoing may be applied to a broadcast signal transmission apparatus and / or a broadcast signal transmission method.
  • AFD (Active Format Description) is information describing a specific area of interest within a coded video frame.
  • the broadcast transmitter may transmit active_format information to signal a specific region of interest, and the receiver may use active_format information to identify a specific region of interest.
  • Bar data includes size information of a top or bottom letter box displayed on the top or bottom of the screen, or a side box displayed on the left or right side of the screen.
  • the AFD may be used for solving a compatibility problem between a source format of a program, a format for transmitting a program, and a format of a target receiver consuming the program.
  • for example, 14:9 content (or a program) designed for widescreen may be transmitted in the form of a letter box within a 4:3 coded frame for users of receivers with a 4:3 screen.
  • however, a user with a widescreen receiver may then see an unwanted letter box, or may consume the 14:9 content at the aspect ratio of the screen.
  • signaling information may be provided so that the receiver may identify an area of interest in the corresponding content and display the area, which may correspond to AFD and / or bar data.
  • the "region of interest" may be defined as the area of the screen that contains the content of the content.
  • The active_format information can be used together with the source aspect ratio at the receiver.
  • The source aspect ratio represents the aspect ratio of the source content being transmitted.
  • The source aspect ratio may be provided as signaling information, or may be calculated by the receiver from the provided signaling information according to the coding scheme of the transmitted broadcast data.
  • If the source aspect ratio and the active_format indicate the same ratio, the entire area of the transmitted frame becomes the 'region of interest' for the receiver.
  • In other combinations, the receiver may know that a letter box exists in the transmitted frame, or that a pillar box exists in the transmitted frame. For example, if the source aspect ratio indicates 21:9 and the active_format indicates 16:9 center, the receiver may know that a pillar box exists in the transmitted frame.
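As an illustration, a receiver could compare the aspect ratio of the coded frame with the aspect ratio of the signaled 'region of interest' to decide which bars are present. A minimal sketch, assuming only this geometric comparison (the function and names are illustrative, not from the patent):

```python
# Illustrative sketch: decide whether a coded frame carries a letterbox,
# a pillarbox, or no bars, from the frame's aspect ratio and the aspect
# ratio of the signaled region of interest (ROI).
from fractions import Fraction

def bar_type(coded_frame_ar: Fraction, roi_ar: Fraction) -> str:
    """Return 'none', 'letterbox' (bars top/bottom) or 'pillarbox' (bars left/right)."""
    if roi_ar == coded_frame_ar:
        return "none"          # region of interest fills the coded frame
    if roi_ar > coded_frame_ar:
        return "letterbox"     # ROI is wider than the frame -> bars above/below
    return "pillarbox"         # ROI is narrower than the frame -> bars at the sides

# e.g. 21:9 content carried in a 16:9 coded frame -> letterbox
print(bar_type(Fraction(16, 9), Fraction(21, 9)))  # letterbox
print(bar_type(Fraction(16, 9), Fraction(4, 3)))   # pillarbox
```

The same comparison can be made against the source aspect ratio when the ROI dimensions are not directly signaled.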
  • The active_format information can be included in and delivered with the broadcast stream.
  • If the value of the active_format information is 0000, it indicates that the 'region of interest' is not defined or is not available.
  • If the value is 0010, the 'region of interest' has a 16:9 ratio and is located at the top of the screen. That is, a receiver with a different aspect ratio may display a bar at the bottom.
  • If the value is 0011, the 'region of interest' has a 14:9 ratio and is located at the top of the screen. That is, a receiver with a different aspect ratio may display a bar at the bottom.
  • If the value is 0100, the 'region of interest' corresponds to an area having an aspect ratio greater than 16:9, and the 'region of interest' is displayed in the center of the screen. In this case, the size (horizontal or vertical length) of the image should be calculated using the bar data described below.
  • If the value of the active_format information is 1000, it indicates that the aspect ratio of the 'region of interest' is the same as the aspect ratio of the coded frame.
  • If the value is 1001, the 'region of interest' has a 4:3 ratio and is located at the center of the screen.
  • If the value is 1010, the 'region of interest' has a 16:9 ratio and is located at the center of the screen.
  • If the value is 1011, the 'region of interest' has a 14:9 ratio and is located at the center of the screen.
  • If the value is 1101, the 'region of interest' has a 4:3 ratio, and the frame is protected based on a 14:9 ratio.
  • If the value is 1110, the 'region of interest' has a 16:9 ratio, and the frame is protected based on a 14:9 ratio.
  • If the value is 1111, the 'region of interest' has a 16:9 ratio, and the frame is protected based on a 4:3 ratio.
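The value assignments above can be collected into a lookup table. A minimal sketch in Python, assuming the 4-bit AFD code assignments used in DVB/ATSC-style systems (the table contents and helper name are illustrative):

```python
# Illustrative sketch: 4-bit active_format codes mapped to the aspect ratio
# and placement of the 'region of interest' (ROI), per common AFD assignments.
ACTIVE_FORMAT = {
    0b0000: "ROI undefined / not available",
    0b0010: "16:9 ROI, top of frame",
    0b0011: "14:9 ROI, top of frame",
    0b0100: "ROI wider than 16:9, centered (use bar data for size)",
    0b1000: "ROI same as coded frame",
    0b1001: "4:3 ROI, centered",
    0b1010: "16:9 ROI, centered",
    0b1011: "14:9 ROI, centered",
    0b1101: "4:3 ROI, protected for 14:9",
    0b1110: "16:9 ROI, protected for 14:9",
    0b1111: "16:9 ROI, protected for 4:3",
}

def describe_active_format(code: int) -> str:
    """Return a human-readable description of a 4-bit active_format code."""
    return ACTIVE_FORMAT.get(code, "reserved")

print(describe_active_format(0b1000))  # ROI same as coded frame
```

A receiver would consult this mapping, together with bar data where signaled, to crop or scale the decoded frame.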
  • FIG. 75 is a diagram illustrating an example of a display of a receiver according to the active_format and the aspect ratio of a transmitted frame, according to an embodiment of the present invention.
  • In one case, a 4:3 coded frame includes a bar at the bottom of the frame, and a 16:9 coded frame does not include a separate bar.
  • In another case, a 4:3 coded frame includes a bar at the bottom of the frame, and a 16:9 coded frame includes bars at the sides.
  • In another case, a 4:3 coded frame includes a bar at the bottom of the frame, and a 16:9 coded frame also includes a bar at the bottom of the frame.
  • In another case, neither the 4:3 coded frame nor the 16:9 coded frame includes a separate bar.
  • In another case, a 4:3 coded frame does not include a separate bar, while a 16:9 coded frame includes bars at the sides.
  • FIG. 76 illustrates an example of a display of a receiver according to an active_format and an aspect ratio of a transmitted frame according to another embodiment of the present invention.
  • In one case, a 4:3 coded frame includes bars in the upper and lower regions of the frame, and a 16:9 coded frame does not include a separate bar.
  • In another case, a 4:3 coded frame includes bars in the upper and lower regions of the frame, and a 16:9 coded frame includes bars on the left and right sides of the frame.
  • In another case, a 4:3 coded frame does not include a separate bar, while a 16:9 coded frame includes bars on the left and right sides of the frame.
  • FIG. 77 is a diagram illustrating bar data according to an embodiment of the present invention.
  • The bar data can be included in the video user data. Bar data may be included in the video stream, or alternatively may be transmitted through separate signaling.
  • The bar data includes top_bar_flag information, bottom_bar_flag information, left_bar_flag information, right_bar_flag information, marker_bits information, line_number_end_of_top_bar information, line_number_start_of_bottom_bar information, pixel_number_end_of_left_bar information, and/or pixel_number_start_of_right_bar information.
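A minimal sketch of how these fields could be held and used to derive the active image rectangle inside a coded frame; the container layout and helper are assumptions for illustration, with field names mirroring the listed syntax:

```python
# Illustrative sketch: container for bar data fields and a helper that
# derives the active image rectangle (region of interest) of a coded frame.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class BarData:
    line_number_end_of_top_bar: Optional[int] = None       # last line of the top bar
    line_number_start_of_bottom_bar: Optional[int] = None  # first line of the bottom bar
    pixel_number_end_of_left_bar: Optional[int] = None     # last pixel of the left bar
    pixel_number_start_of_right_bar: Optional[int] = None  # first pixel of the right bar

def active_rectangle(bars: BarData, frame_w: int, frame_h: int) -> Tuple[int, int, int, int]:
    """Return (x0, y0, x1, y1) of the region of interest, excluding any bars."""
    x0 = (bars.pixel_number_end_of_left_bar + 1
          if bars.pixel_number_end_of_left_bar is not None else 0)
    y0 = (bars.line_number_end_of_top_bar + 1
          if bars.line_number_end_of_top_bar is not None else 0)
    x1 = (bars.pixel_number_start_of_right_bar
          if bars.pixel_number_start_of_right_bar is not None else frame_w)
    y1 = (bars.line_number_start_of_bottom_bar
          if bars.line_number_start_of_bottom_bar is not None else frame_h)
    return (x0, y0, x1, y1)

# 21:9 content letterboxed in a 1920x1080 frame: bars occupy lines 0-131 and 948-1079
print(active_rectangle(BarData(line_number_end_of_top_bar=131,
                               line_number_start_of_bottom_bar=948), 1920, 1080))
# -> (0, 132, 1920, 948)
```

The flag fields (top_bar_flag and the like) would indicate which of the size fields are present in the stream.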

Abstract

The present invention relates to a broadcast reception apparatus comprising: a tuner for receiving a broadcast signal including signaling information and component data for a service, the component data including first component data for a 2D image and second component data for a 3D image; a decoder for decoding at least one of the first component data and the second component data on the basis of the signaling information; and an output unit for displaying the 3D image on the basis of only the decoded first component data among the first component data and the second component data.
PCT/KR2015/009905 2014-09-25 2015-09-22 Method and apparatus for processing 3D broadcast signals WO2016047985A1 (fr)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201462055620P 2014-09-25 2014-09-25
US62/055,620 2014-09-25
US201462073005P 2014-10-30 2014-10-30
US62/073,005 2014-10-30
US201462092798P 2014-12-16 2014-12-16
US62/092,798 2014-12-16

Publications (1)

Publication Number Publication Date
WO2016047985A1 (fr) 2016-03-31

Family

ID=55581443

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2015/009905 WO2016047985A1 (fr) 2014-09-25 2015-09-22 Method and apparatus for processing 3D broadcast signals

Country Status (1)

Country Link
WO (1) WO2016047985A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10552973B2 (en) 2016-11-14 2020-02-04 Samsung Electronics Co., Ltd. Image vision processing method, device and equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002218502A (ja) * 2001-01-22 2002-08-02 Nippon Television Network Corp 立体映像信号の伝送方法及び、そのシステム
KR20130044264A (ko) * 2010-10-13 2013-05-02 경희대학교 산학협력단 스테레오스코픽 영상 정보의 전송 방법 및 장치
KR20130044266A (ko) * 2010-12-13 2013-05-02 한국전자통신연구원 스테레오스코픽 비디오 서비스 위한 시그널링 방법 및 이러한 방법을 사용하는 장치
KR20130096289A (ko) * 2010-10-28 2013-08-29 엘지전자 주식회사 모바일 환경에서 3차원 방송 신호를 수신하기 위한 수신 장치 및 방법
KR20130108245A (ko) * 2010-09-19 2013-10-02 엘지전자 주식회사 방송 수신기 및 3d 비디오 데이터 처리 방법
KR20140018254A (ko) * 2011-03-07 2014-02-12 엘지전자 주식회사 디지털 방송 신호 송/수신 방법 및 장치

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15843777

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15843777

Country of ref document: EP

Kind code of ref document: A1