WO2016006971A1 - Method and apparatus for transmitting and receiving broadcast signals - Google Patents
Method and apparatus for transmitting and receiving broadcast signals
- Publication number
- WO2016006971A1 (application PCT/KR2015/007203)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- caption
- subtitle
- metadata
- data
- xml
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/235—Processing of additional data, e.g. scrambling of additional data or processing content descriptors
- H04N21/2353—Processing of additional data specifically adapted to content descriptors, e.g. coding, compressing or processing of metadata
- H04N21/236—Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream; Assembling of a packetised elementary stream
- H04N21/2362—Generation or processing of Service Information [SI]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/44008—Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
- H04N21/47—End-user applications
- H04N21/488—Data services, e.g. news ticker
- H04N21/4884—Data services for displaying subtitles
- H04N21/4886—Data services for displaying a ticker, e.g. scrolling banner for news, stock exchange, weather data
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8543—Content authoring using a description language, e.g. Multimedia and Hypermedia information coding Expert Group [MHEG], eXtensible Markup Language [XML]
- H04N7/00—Television systems
- H04N7/015—High-definition television systems
- H04N7/08—Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division
- H04N7/087—Systems with signal insertion during the vertical blanking interval only
- H04N7/088—Systems with signal insertion during the vertical blanking interval only, the inserted signal being digital
- H04N7/0884—Systems with digital signal insertion during the vertical blanking interval, for the transmission of additional display-information, e.g. menu for programme or channel selection
- H04N7/0885—Systems with digital signal insertion during the vertical blanking interval, for the transmission of subtitles
Definitions
- the present invention relates to an apparatus and method for transmitting and receiving broadcast signals.
- broadcast subtitle services are currently provided in the form of closed captions or DVB (Digital Video Broadcasting) subtitles.
- DVB subtitles are provided as bitmap images. Therefore, either subtitles of different sizes must be provided for images of various sizes, or subtitles of a single size must be scaled. In the former case, bandwidth efficiency decreases; in the latter case, sharpness is degraded by scaling.
- as high-definition broadcast services through UHDTV are actively being discussed, a need for a new broadcast subtitle service is emerging to solve these problems.
- An object of the present invention is to improve transmission efficiency in a method and apparatus for transmitting a broadcast signal.
- Another object of the present invention is to provide a transmission apparatus and method for providing a caption service in a broadcasting network.
- Another object of the present invention is to provide a broadcast apparatus and method for improving the quality of a subtitle service.
- the broadcast signal transmission method may include generating a broadcast signal including video data and subtitle data and transmitting the generated broadcast signal.
- the caption data may include XML caption data.
- the XML caption data may include caption text and caption metadata.
- the caption metadata may include information corresponding to an extended color gamut and a high dynamic range for high-quality broadcasting.
- the caption metadata may include information about the color gamut of the caption, the dynamic range of the caption, and the bit depth of the caption.
- the broadcast signal receiving method may include receiving a broadcast signal including video data and caption data, and processing and outputting the video data and caption data.
- the caption data may include XML caption data.
- the XML caption data may include caption text and caption metadata.
- the caption metadata may include information corresponding to an extended color gamut and a high dynamic range for high quality broadcasting.
- the caption metadata may include information about a color gamut of captions, a dynamic range of captions, and bit depths of captions.
- the video data may further include video metadata.
- the broadcast signal receiving method may further include detecting whether the subtitle metadata and the video metadata match.
- the broadcast signal receiving method may further include converting the caption metadata when the caption metadata and the video metadata do not match each other.
- a high quality subtitle service can be provided in a broadcasting network.
- the broadcast reception device may extract and display a subtitle included in a broadcast signal.
- FIG. 1 is a diagram illustrating an XML-based broadcast subtitle service according to an embodiment of the present invention.
- FIG. 2 is a diagram illustrating an operation of a receiver for XML-based subtitles according to an embodiment of the present invention.
- FIG. 3 illustrates a preprocessing process for a subtitle of a broadcast receiver according to an embodiment of the present invention.
- FIG. 4 illustrates dynamic range mapping for brightness according to an embodiment of the present invention.
- FIG. 5 illustrates a method of transmitting meta data about a subtitle according to an embodiment of the present invention.
- FIG. 6 shows a detailed description of elements of meta data of a caption according to an embodiment of the present invention.
- FIG. 7 illustrates a detailed description of elements of meta data of a caption according to an embodiment of the present invention.
- FIG. 8 shows an additional description of elements of meta data of a subtitle according to an embodiment of the present invention.
- FIG. 9 is a block diagram illustrating a broadcast transmitter according to an embodiment of the present invention.
- FIG. 10 is a block diagram illustrating a broadcast receiver according to an embodiment of the present invention.
- FIG. 11 illustrates a detailed configuration of a broadcast receiver according to an embodiment of the present invention.
- FIG. 12 is a diagram illustrating a method of transmitting a broadcast signal including XML subtitles according to an embodiment of the present invention.
- FIG. 13 is a diagram illustrating a method for receiving a broadcast signal including XML subtitles according to an embodiment of the present invention.
- the present invention describes a method for providing a digital subtitle service using XML subtitles (TTML, SMPTE-TT, EBU-TT-D, etc.) in existing and new receivers, based on high-definition image quality elements such as WCG, HDR, and higher bit depth.
- XML subtitles: TTML, SMPTE-TT, EBU-TT-D, etc.
- TTML: Timed Text Markup Language
- EBU-TT: EBU Timed Text
- XML-based subtitles can support various video sizes and are suitable for IP-streaming-based services, so they are considered a candidate for the UHD-based next-generation subtitle service. This is because the change from HD to UHD will not only improve resolution but also bring changes in various image quality aspects such as dynamic range, color gamut, and bit depth, and these video elements need to be considered in a next-generation subtitle service.
- current TTML-based XML subtitles do not consider these elements, and elements such as WCG and HDR must be considered to cope with various service environments.
- the present invention describes a service method using XML subtitle metadata capable of delivering information on the XML subtitle production environment, in order to support receivers and displays with various capabilities in terms of HDR and WCG.
- in a situation where the bitdepth expression of XML subtitles is limited to 8 bits, the present invention proposes a method of extending the expression up to 16 bits while maintaining support for the existing 8-bit system.
- a receiver operation may be described in terms of color gamut, dynamic range, and bit depth in a case where the subtitle production environment and the image reproduction environment are different.
- High Efficiency Video Coding (HEVC) is a high-efficiency video coding standard that offers the same video quality at approximately twice the compression rate of conventional H.264/AVC technology.
- Extensible Markup Language (XML) is a markup language designed as an improvement over HTML, with improved document-building and search functions and easier handling of complex client-system data.
- XML is used as the language for the caption data, and an XML caption may be composed of a head and a body.
- PTS: Presentation Time Stamp
- the PTS can be used to synchronize the video ES and the subtitle ES.
- ES: Elementary Stream
- outputs of the video encoder and the audio encoder may be defined as a video ES and an audio ES, respectively.
- the XML subtitle ES can be defined and used.
- TS: Transport Stream
- TS is a transport stream including one or several programs in an MPEG-2 system and can be used for a transmission medium in which a transport error exists.
- TS may mean a transport stream in which at least two of a video ES, an audio ES, and a subtitle ES are multiplexed and transmitted.
- FIG. 1 is a diagram illustrating an XML-based broadcast subtitle service according to an embodiment of the present invention, showing the transmitter, the receiver, and the end-to-end system of a digital broadcast service.
- the XML-based subtitles used in the present invention can be applied to UHD, HD, and SD alike, since they are not affected by the size of the image.
- the transmitting end may transmit the compressed image and the modified XML subtitle for transmission through a multiplexer.
- the receiver demultiplexes the received signal and provides subtitles through image decoding and XML parser, and the graphic engine may modify the subtitle expression method according to the receiver's environment and output it to the display processor.
- the display processor may output the decoded image and the subtitle.
- the transmitting end may receive video data and subtitle information.
- the resolution of the video data input to the transmitter may be UHD, HD or SD.
- the caption information input to the transmitting end may be written in XML.
- Video data input to the transmitting end may be encoded by an encoder at the transmitting end (101).
- the transmitting end may use HEVC (High Efficiency Video Coding) as an encoding method for video data.
- the transmitting end may synchronize the encoded video data with XML subtitles and multiplex using a multiplexer (102).
- the XML subtitle can be modified for transmission.
- the transformation method for the XML subtitle and the metadata generation method for the XML subtitle will be described in detail below.
- the transmitter may perform channel coding and modulation on the synchronized and multiplexed data and then transmit it as a broadcast signal.
- the receiving end may receive a broadcast signal and perform demodulation and transport packet decoding.
- the receiving end may perform video decoding and XML parsing after demultiplexing the decoded transport packet.
- XML parsing can be performed through an XML parser.
- the video decoder and the XML parser may exchange metadata. Such metadata may be used as additional information when displaying an image and a subtitle together.
- the receiving end may demodulate the received broadcast signal and perform transport packet decoding (104).
- the decoded transport packet is input to the video decoder 106 and the XML parser 107 after demultiplexing 105.
- the video decoder 106 may decode the UHD, HD or SD video data according to the resolution of the received video data.
- the XML parser 107 may extract XML captions.
- the receiving end may consider an image element in displaying video data and XML subtitles using meta data.
- the image element may include, for example, a dynamic range, a color gamut, a bit depth, and the like.
- WCG wide color gamut
- HDR high dynamic range
- information about the image quality elements assumed as the standard for subtitle production may be provided to the receiver.
- the receiver can change the color or brightness of the subtitle appropriately for the display environment.
- the graphics engine 108 may modify a method of expressing XML subtitles corresponding to the above-described image element.
- the decoded video data and the XML subtitle in which the presentation method is modified may be processed and displayed by the display processor 109.
- the receiver can analyze the content through the XML parser for XML-based subtitles.
- the receiver may transmit the contents of the caption, information for representing the caption, and spatial information of the caption to the graphics engine.
- the information for expressing the caption may include at least one of font, color and / or size information.
- the spatial information of the subtitle may include at least one of region and / or resolution information.
- the receiver of the present invention may perform a preprocessing process before delivering subtitles and subtitle information to the graphics engine.
- the receiver can deliver the modified subtitle information to the graphics engine through preprocessing.
- the graphics engine may generate subtitles using the contents of the subtitles and information about the modified subtitles and transmit them to the display processor.
- the preprocessing process may include detecting and converting a subtitle production environment and a display environment.
- the receiver may detect or determine a match based on metadata about the target video format of the subtitle and metadata about the display of the receiver.
- the metadata about the target video format of the subtitle may include bitdepth, dynamic range, and color gamut information.
- the criterion may be transmitted through metadata in XML; in the case of EBU-TT-D, it may be delivered to the receiver through ebuttm:RefGamut, ebuttm:RefDynamicRange, ebuttm:EOTF, and ebuttm:RefBitDepth.
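As a minimal sketch of how a receiver might read these reference elements, the following parses a hypothetical EBU-TT-D document. The namespace URI, element placement, and value formats here are illustrative assumptions for the example, not definitions taken from the standard:

```python
import xml.etree.ElementTree as ET

# Hypothetical EBU-TT-D document carrying the reference metadata described
# above (namespace URIs and value formats are illustrative assumptions).
DOC = """<?xml version="1.0" encoding="UTF-8"?>
<tt xmlns="http://www.w3.org/ns/ttml"
    xmlns:ebuttm="urn:ebu:tt:metadata">
  <head>
    <metadata>
      <ebuttm:RefGamut>BT2020</ebuttm:RefGamut>
      <ebuttm:RefDynamicRange>PeakBrightness=2000 BlackLevel=0.0001</ebuttm:RefDynamicRange>
      <ebuttm:EOTF>SMPTE2084</ebuttm:EOTF>
      <ebuttm:RefBitDepth>12</ebuttm:RefBitDepth>
    </metadata>
  </head>
  <body/>
</tt>
"""

NS = {"tt": "http://www.w3.org/ns/ttml", "ebuttm": "urn:ebu:tt:metadata"}

def parse_ref_metadata(xml_text):
    """Extract the subtitle reference metadata used by the preprocessing step."""
    root = ET.fromstring(xml_text)
    meta = root.find("tt:head/tt:metadata", NS)
    get = lambda tag: meta.findtext(f"ebuttm:{tag}", default=None, namespaces=NS)
    return {
        "gamut": get("RefGamut"),
        "dynamic_range": get("RefDynamicRange"),
        "eotf": get("EOTF"),
        "bitdepth": int(get("RefBitDepth")),
    }

ref = parse_ref_metadata(DOC)
print(ref["gamut"], ref["bitdepth"])  # BT2020 12
```

The parsed dictionary is what the preprocessing step would compare against the display's own metadata.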
- the case of defining the EBU-TT-D metadata has been described as an example, but the same information may be defined in the TTML metadata (ttm), parameter (ttp), and style (tts).
- the elements newly defined in the present invention can be extended to XML-based subtitle standards such as TTML, EBU-TT, SMPTE-TT, CFF-TT, and Youview.
- the comparison criteria are gamut, dynamic range, and bitdepth aspects, and a comparison in terms of resolution and aspect ratio can also be performed.
- the metadata of the display of the receiver may include display environment information, and may include bitdepth, dynamic range, and color gamut information of the receiver display.
- if they match, the caption text data and the metadata about the caption are transmitted to the graphics engine for a high-end display. That is, if the target video format of the subtitle is determined to match, or be acceptable to, the metadata of the video or the display, processing proceeds to the next step without further conversion.
- the target video format of the subtitle matching, or being acceptable to, the metadata of the video or the display may mean, for example, that the video is an HDR/WCG image or that the display is detected as an HDR/WCG display.
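The match check above can be sketched as follows. The field names and the simple equality comparison are illustrative assumptions, since the document leaves the exact matching rule open:

```python
# A minimal sketch of the preprocessing match check: compare the subtitle's
# target video format metadata with the receiver display's metadata.
# Field names and the strict-equality notion of a "match" are assumptions.

SUBTITLE_TARGET = {"gamut": "BT2020", "dynamic_range": "HDR", "bitdepth": 12}

def needs_conversion(subtitle_meta, display_meta):
    """Return the elements that differ between the subtitle's target video
    format and the receiver display, triggering the conversion step."""
    return [k for k in ("gamut", "dynamic_range", "bitdepth")
            if subtitle_meta.get(k) != display_meta.get(k)]

hdr_display = {"gamut": "BT2020", "dynamic_range": "HDR", "bitdepth": 12}
sdr_display = {"gamut": "BT709", "dynamic_range": "SDR", "bitdepth": 8}

print(needs_conversion(SUBTITLE_TARGET, hdr_display))  # [] -> pass through
print(needs_conversion(SUBTITLE_TARGET, sdr_display))  # all three differ
```

An empty result corresponds to the pass-through path to the graphics engine; a non-empty result corresponds to the conversion path described next.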
- the graphics engine may generate captions using the received caption text data and meta data about the captions and deliver the captions to the display processor.
- the preprocessing may include converting the subtitles in terms of color and brightness.
- the caption expression method included in the caption metadata may be converted based on the metadata of the target video format of the caption and the metadata of the display of the receiver. That is, the modified bitdepth, modified dynamic range, and modified color gamut may be transmitted to the graphics engine by converting bitdepth, dynamic range, and color gamut included in the caption metadata.
- the converted meta data and subtitle text can be delivered to a graphics engine for mid-end or low-end display.
- the graphics engine may generate captions using the received caption text data and the converted meta data about the captions, and deliver the captions to the display processor.
- in the conversion step, each element is converted based on the color gamut, dynamic range, EOTF, and bitdepth reference information passed through the newly defined ebuttm:RefGamut, ebuttm:RefDynamicRange, ebuttm:EOTF, and ebuttm:RefBitDepth in the metadata.
- color gamut mapping or dynamic range mapping can be performed.
- the graphics engine converts the text information into video information, and the receiver processes the output of the graphics engine, combining the video, subtitles, and other video elements to form the image for final display.
- FIG. 4 illustrates dynamic range mapping for brightness according to an embodiment of the present invention, that is, dynamic range mapping of the luminance of XML-based subtitles. More specifically, it is an example of a method for reproducing XML subtitles produced for HDR video in an SDR display environment. If the luminance range used in the HDR image is wider than the brightness range supported by the receiver's display, the brightness of the image is changed by dynamic range mapping. In this case, only the brightness of the image may be considered, without considering the brightness range of the subtitles.
- the subtitles may then not be suitable for the changed surrounding brightness.
- the subtitle brightness may be excessively high compared to the overall brightness of the image, as shown in (a), or excessively low compared to the brightness of the image, as shown in (c).
- the subtitle brightness may be matched using a conversion equation that is the same as or similar to that used for video dynamic range mapping for the subtitle brightness values.
- the receiver may use reference information about the brightness of the caption.
- the broadcast transmission device may insert the dynamic range information of the caption production environment, or of the targeted caption reproduction environment, into the XML metadata.
- XML metadata can contain dynamic range information about the targeted subtitle production or playback environment (ebuttm:RefDynamicRange), so that the receiver can apply a dynamic range mapping appropriate to the playback environment of its display.
- captions can then be reproduced by converting the caption expression method to a brightness suitable for the receiver environment.
- the brightness range of the video and the brightness range of the subtitles may differ. In this case, the brightness range of the subtitles needs to be converted to match the brightness range of the video, and the present invention proposes criteria for making this determination and performing the conversion.
- ebuttm:RefDynamicRange, the reference information about the dynamic range, can be used for this purpose.
- the receiver can convert the subtitle colors into a displayable color gamut by performing the same color gamut mapping as is performed for the image. If necessary, the above-mentioned dynamic range mapping or color gamut mapping information may be transmitted with the XML subtitle.
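The dynamic range conversion described above can be sketched with a simple linear luminance remapping. A real receiver would typically use a perceptual tone-mapping curve, so this is only an illustration of applying the same mapping to the subtitle brightness as to the video:

```python
def map_luminance(nits, src_black, src_peak, dst_black, dst_peak):
    """Linearly remap a luminance value from the source (production) dynamic
    range to the destination (display) dynamic range. Real receivers would
    use a perceptual tone-mapping curve; linear mapping keeps this simple."""
    nits = min(max(nits, src_black), src_peak)  # clamp into source range
    t = (nits - src_black) / (src_peak - src_black)
    return dst_black + t * (dst_peak - dst_black)

# Subtitle authored for a 0.0001-2000 nit HDR environment, shown on a
# 0.05-100 nit SDR display: applying the same mapping used for the video
# keeps the subtitle brightness consistent with the remapped image.
sub_brightness = map_luminance(2000, 0.0001, 2000, 0.05, 100)
print(round(sub_brightness, 4))  # 100.0
```

The source range here matches the example reference dynamic range given later in the document; the SDR display range is an assumed example.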
- the metadata about the caption may include information on at least one of color gamut, dynamic range, EOTF, and / or bitdepth.
- reference information about a subtitle may be set.
- the color gamut may be set to BT.2020
- the dynamic range may be set to 0.0001nits to 2000nits
- the bitdepth may be set to 12bit.
- the XML caption metadata may include color gamut information about the caption.
- the XML caption metadata may include dynamic range information about the caption.
- 200000 may refer to the contrast ratio between the minimum brightness and the maximum brightness. A detailed description will be given below.
- the XML subtitle metadata may include EOTF (Electro-Optical Transfer Function) information about the subtitle.
- the XML caption metadata may include bitdepth information about the caption.
- metadata about the subtitle of the present invention can be applied to EBU-TT-D as described above, and can be applied in a similar manner to XML-based subtitle standards such as TTML, SMPTE-TT, CFF-TT, Youview, and EBU-TT.
- FIG. 6 shows a detailed description of elements of meta data of a caption according to an embodiment of the present invention.
- (a) shows a color gamut field.
- the ebuttm:RefGamut element included in the metadata indicates the color gamut considered in caption production.
- ebuttm:RefGamut can designate an existing color gamut such as BT.709 or BT.2020.
- ebuttm:RefGamut can also provide information on an arbitrary color gamut by directly specifying CIExy coordinates, as shown.
- CIExy coordinates (xRed, yRed, xGreen, yGreen, xBlue, yBlue, xWhite, yWhite) for red, green, blue and white points may be transmitted.
- the delivered value is 10,000 times the original coordinate value (value = originalValue * 10000).
- the BT709 or BT2020 attribute may be specified in advance, and as shown, the <namedGamut> attribute may be used to indicate that the color gamut is BT2020.
- the color gamut field may be used for determining whether the color space of the subtitle production and the display environment (or image) match and, if necessary, for color gamut mapping.
- the dynamic range field is an element representing a dynamic range of an image to be considered when producing a subtitle.
- the dynamic range field may include PeakBrightness, BlackLevel, and ContrastRatio, indicating the maximum brightness, minimum brightness, and contrast ratio of the dynamic range.
- ContrastRatio may represent the ratio between the maximum and minimum brightness; for example, a ratio of 10,000:1 is delivered as the value 10,000.
- a standardized dynamic range, such as that of HD, can also be signaled by name; for example, the SMPTE Reference HDTV standard can be indicated using the <namedDynamicRange> attribute, as shown.
- further standardized dynamic ranges can be defined and used in namedDynamicRange in the future.
- the dynamic range field may be used to determine whether the brightness ranges of the subtitles production and the display environment (or the image) match and, if necessary, for dynamic range mapping.
- the color gamut and the dynamic range may be used to provide information about the subtitle production environment as described above, but may be used in an extended meaning, such as providing color gamut and dynamic range information of a target video / display.
- the EOTF field can deliver the EOTF information used in relation to the dynamic range.
- the EOTF field may carry previously defined EOTF information such as BT.1886 and SMPTE 2084. In the previous embodiment, a case of using SMPTE 2084 is illustrated, and an EOTF element may be used to deliver any EOTF later.
- the EOTF field may be used for luminance linearization before applying dynamic range mapping.
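As an example of such luminance linearization, the SMPTE ST 2084 (PQ) EOTF maps a non-linear code value in [0, 1] to absolute luminance in nits; the constants below are the published ST 2084 values:

```python
# Luminance linearization with the SMPTE ST 2084 (PQ) EOTF, one of the
# transfer functions the ebuttm:EOTF field can signal. A receiver would
# apply this before dynamic range mapping. Constants are from ST 2084.
M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_eotf(signal):
    """Map a non-linear PQ code value in [0, 1] to linear luminance in nits."""
    p = signal ** (1 / M2)
    num = max(p - C1, 0.0)
    den = C2 - C3 * p
    return 10000.0 * (num / den) ** (1 / M1)

print(pq_eotf(0.0))  # 0.0
print(pq_eotf(1.0))  # 10000.0 nits, the PQ peak luminance
```

After linearization, the dynamic range mapping discussed earlier operates on physical luminance rather than code values.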
- the UHD broadcast transmission device may transmit a service based on bitdepth of 8 bits or more to provide improved image quality.
- a service is provided based on 10 bits in DVB UHD-1 phase 1
- a service may be provided based on at least 10 bits in UHD-1 phase 2 in which quality elements such as WCG and HDR are added.
- in next-generation storage media such as BD UHD-FE or SCSA, a bitdepth of more than 10 bits is also considered.
- the EBU-TT-D standard restricts the display of colors based on 8 bits. Therefore, it is necessary to define a new bitdepth expression method or to extend bitdepth expression while supporting an existing system.
- the present invention describes a method of extending and representing bitdepth while supporting an existing system.
- the metadata of the caption in the present invention may indicate, through ebuttm:RefBitDepth, the bitdepth that can be provided by the caption system.
- bitdepth may indicate the number of bits of information representing a color.
- the range of bitdepth can range from 8 to 16.
- bitdepth can be set to 8, 10, 12 or 16.
- this element can also be used to convey information about the palette used in the subtitles.
- this field may be used as a reference for the receiver to compare the bitdepth of the subtitle with the bitdepth of the receiver or the image during preprocessing. It may also be used to notify the receiver that a bitdepth greater than 8 bits is used, or to let the receiver detect this. If ebuttm:RefBitDepth is used and has a value of 9 or higher, tts:color and ebutts:colorExtent can be used together to express colors.
- when an EBU-TT-D based subtitle service needs to support a high bitdepth for UHD, that is, when ebuttm:RefBitDepth is set to a value of 9 or more in the present invention, the existing color representation method needs to be extended.
- in EBU-TT-D, colors are defined using tts:color, and as shown in (b), the color representation method defined in <ebuttdt:distributionColorType> can express colors with an 8-bit bitdepth for red, green, blue, and (alpha).
- an extension field may be additionally defined as shown in (c).
- the upper 8 bits of the red, green, blue, and (alpha) components of each color expressed in the total bitdepth can be represented through the previously defined tts:color, and the lower bits other than the upper 8 bits can be expressed through ebutts:colorExtent.
- receivers targeting UHD can interpret the 8-bit basic color information through tts:color and the high bitdepth exceeding 8 bits through ebutts:colorExtent.
- the existing expression method <ebuttdt:distributionColorType> can be used as it is.
- the lower bits other than the upper 8 bits of the bitdepth defined in ebuttm:RefBitDepth are expressed as 8 bits. Embodiments thereof are as described above.
- the color can be expressed as rgb (r-value, g-value, b-value) together with #rrggbb described above.
- (a) shows an example of the metadata representation of caption color in TTML, and EBU-TT may define the color representation as <ebuttdt:colorType> in the same manner.
- in order to express the extended bit depth, a method of defining an extension field tts:colorExtent may be used, as in the above-described embodiment.
- (b) and (c) are examples of defining tts:colorExtent in TTML.
- since the extension field has no independent meaning, <namedColor> may not be used.
- the extension field may be defined as <ebuttdt:colorTypeExtension>.
- for a 12-bit bit depth, for example, the upper 8 bits are represented by the existing color representation method and the lower 4 bits are represented using the extension field. That is, the upper 8 bits may be represented by tts:color and the lower 4 bits by tts:colorExtent.
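The 12-bit split described above can be sketched as follows: the upper 8 bits of each component go into the existing tts:color value and the remaining low bits into the extension field. How the low bits are packed into the 8-bit extension field is not fully specified in the text, so this sketch right-justifies them as one plausible convention:

```python
def split_color(r, g, b, bit_depth=12):
    """Split full-bit-depth RGB into (tts:color, colorExtent) strings.

    Upper 8 bits of each component -> existing #rrggbb tts:color value.
    Remaining low bits -> extension field, padded to 8 bits per component
    (right-justified packing is an assumption of this sketch).
    """
    low = bit_depth - 8
    upper = [(c >> low) & 0xFF for c in (r, g, b)]
    lower = [c & ((1 << low) - 1) for c in (r, g, b)]
    color = "#" + "".join(f"{c:02x}" for c in upper)
    extent = "#" + "".join(f"{c:02x}" for c in lower)
    return color, extent

# 12-bit components 0xABC, 0x123, 0xFF0: upper bytes AB/12/FF,
# low nibbles C/3/0 carried in the extension field.
color, extent = split_color(0xABC, 0x123, 0xFF0)
```

A legacy 8-bit receiver can render `color` alone; a UHD receiver reassembles each component as `(upper << low) | lower`.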
- by adding the extension field while maintaining the existing color representation method, the metadata about the caption according to an embodiment of the present invention can extend the color gamut of the caption and express captions of various colors.
- the broadcast transmitter 1701 may include an encoder 1702, a multiplexer 1703, and/or a transmitter 1704.
- the resolution of video data input to the broadcast transmitter 1701 may be UHD, HD, or SD.
- the caption information input to the broadcast transmitter 1701 may be written in XML.
- Video data input to the broadcast transmitter 1701 may be encoded by the encoder 1702.
- the transmitting end may use High Efficiency Video Coding (HEVC) as the encoding method for the video data.
- the transmitting end may synchronize the encoded video data and the XML subtitle and multiplex them using the multiplexer 1703.
- the XML subtitle may include metadata about the subtitle and may include information about color gamut, dynamic range, EOTF, and / or bitdepth of the subtitle.
- the transmitter 1704 may transmit the transport stream output from the multiplexer 1703 as a broadcast signal.
- the transport stream may be channel coded and modulated before being transmitted as a broadcast signal.
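The transmitter-side flow above (encode, synchronize and multiplex, then channel-code and modulate) can be sketched as follows. All class and function names are illustrative assumptions; a real multiplexer follows MPEG-2 TS rules rather than this simplified timestamp interleave:

```python
from dataclasses import dataclass

@dataclass
class TransportPacket:
    pid: int        # stream identifier (illustrative values below)
    pts: int        # presentation timestamp used for A/V/subtitle sync
    payload: bytes

def multiplex(video_pkts, subtitle_pkts):
    """Interleave video and XML-subtitle packets in timestamp order,
    one simple way of keeping the two streams synchronized."""
    return sorted(video_pkts + subtitle_pkts, key=lambda p: p.pts)

video = [TransportPacket(0x100, t, b"hevc-frame") for t in (0, 3000, 6000)]
subs = [TransportPacket(0x200, 3000, b"<tt>...</tt>")]  # XML subtitle sample
ts = multiplex(video, subs)
```

The resulting packet list stands in for the transport stream that would then be channel coded and modulated.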
- the broadcast receiver 1801 may include a receiver 1802, a demultiplexer 1803, and / or a decoder 1804.
- the broadcast signal received by the receiver 1802 may be demodulated and then channel decoded.
- the channel decoded broadcast signal may be input to the demultiplexer 1803 and demultiplexed into a video stream and a subtitle stream.
- the output of the demultiplexer may be input to the decoder 1804.
- the decoder may include a video decoder and an XML parser. That is, the video stream may be decoded by the video decoder, and the subtitle stream may be decoded by a subtitle decoder or parsed by an XML parser, so that video data and subtitle data are output, respectively.
- the video decoder and the XML parser may exchange meta data. That is, as described above, the XML parser may compare metadata about video and metadata about subtitles.
- the metadata to be compared may include the dynamic range, color gamut, bit depth, etc. of the video and the subtitle. It is also possible to compare the metadata of the receiver's display with the metadata of the subtitles.
- the metadata to be compared may include the display environment and the dynamic range, color gamut, and bit depth of the subtitle.
- information about the image quality elements used as the standard in subtitle production may be provided to the receiver.
- the receiver can change the color or brightness of the subtitle appropriately for the display environment.
- the broadcast signal receiver may modify the method of expressing the XML subtitles in correspondence with the above-described video elements, and the receiver can synchronize and display the video data and subtitle data.
- the broadcast receiver may include a receiver 1901, a demodulator 1902, a demultiplexer 1903, a video decoder 1904, an XML subtitle decoder 1905, an audio/video/subtitle synchronizer (A/V/S sync, 1906), a system information processor (SI processor, 1907), a graphics engine 1908, and/or a display processor 1909.
- the receiver 1901 may receive a broadcast signal transmitted by a transmitter.
- the received broadcast signal may be input to the demodulator 1902.
- the demodulator 1902 can demodulate a broadcast signal and output a transport stream (TS).
- the TS may be input to the demultiplexer 1903 and demultiplexed.
- the demultiplexed TS may include a HEVC bitstream, an XML subtitle, and system information (SI).
- the XML subtitle may include metadata.
- the video decoder 1904 may receive an HEVC bitstream, decode it, and output a video frame.
- the XML subtitle decoder 1905 may receive an XML subtitle and extract a subtitle.
- the XML subtitle decoder 1905 may parse metadata included in the XML subtitle and compare it with metadata about an image or a display environment.
- the metadata to be compared may include a dynamic range, color gamut, bit depth, and the like.
- the XML subtitle decoder 1905 may perform conversion of the subtitle metadata according to whether the compared metadata match.
- when the compared metadata match, the XML caption decoder 1905 may deliver the caption metadata and the caption to the graphics engine without performing a separate conversion.
- when they do not match, the XML caption decoder 1905 may convert the metadata of the caption and deliver the caption data and the converted metadata to the graphics engine. This can improve the consistency between the subtitles and the video.
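The decoder-side decision described above can be sketched as follows. The field names and the trivial "conversion" are illustrative assumptions; a real receiver would apply actual gamut, dynamic range, and bit depth mapping rather than overwriting values:

```python
FIELDS = ("color_gamut", "dynamic_range", "bit_depth")

def process_subtitle(sub_meta, target_meta):
    """Pass caption metadata through unchanged when it matches the
    video/display metadata; otherwise return it converted to the target
    values (placeholder for real gamut/EOTF/bit-depth conversion)."""
    if all(sub_meta[f] == target_meta[f] for f in FIELDS):
        return sub_meta, False           # match: no conversion needed
    converted = dict(sub_meta)
    for f in FIELDS:
        converted[f] = target_meta[f]    # illustrative conversion step
    return converted, True

sub = {"color_gamut": "BT.709", "dynamic_range": "SDR", "bit_depth": 8}
disp = {"color_gamut": "BT.2020", "dynamic_range": "HDR", "bit_depth": 10}
meta, did_convert = process_subtitle(sub, disp)
```

The returned pair maps directly onto the two paths in the text: pass-through to the graphics engine, or conversion followed by delivery.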
- the system information processor 1907 may receive the system information (SI) output from the demultiplexer and extract OSD information.
- the graphics engine 1908 may receive captions and caption metadata from the XML caption decoder 1905 and output caption images.
- the caption image may be generated based on the caption and the caption metadata, and the color, brightness, etc. of the output caption image may vary depending on whether the caption metadata was converted.
- the display processor 1909 may receive the video frame and the subtitle and output the display frame.
- the display processor 1909 may receive OSD (On Screen Display) information in addition to the video frame and the subtitle to output the display frame.
- the output display frame may be displayed by the image output device, and the XML caption and video frame described in the present invention may be displayed together.
- the method for transmitting a broadcast signal including XML subtitles may include generating video data by encoding a video stream (S2210), generating a broadcast signal including the generated video data and subtitle information (S2220), and transmitting the generated broadcast signal (S2230).
- In generating video data by encoding the video stream, a video stream having a resolution of UHD, HD, or SD may be received and encoded to generate the video data.
- the video stream may be encoded by High Efficiency Video Coding (HEVC).
- XML caption data can be generated.
- the XML subtitle data may include metadata, and the metadata may include XML subtitle related data suitable for UHD broadcasting. That is, the metadata may include dynamic range, color gamut, bit depth, and EOTF information, and this information may have values corresponding to the wide color gamut (WCG) and high dynamic range (HDR) of UHD broadcasting.
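A sketch of attaching such metadata to XML subtitle data at the transmitting end. The element names and values below are illustrative assumptions, not the names defined in any standard:

```python
import xml.etree.ElementTree as ET

def build_caption_metadata(gamut, dynamic_range, eotf, bit_depth):
    """Build an illustrative metadata element carrying the UHD-oriented
    values (WCG, HDR, EOTF, bit depth) described in the text."""
    meta = ET.Element("metadata")
    ET.SubElement(meta, "colorGamut").text = gamut
    ET.SubElement(meta, "dynamicRange").text = dynamic_range
    ET.SubElement(meta, "eotf").text = eotf
    ET.SubElement(meta, "refBitDepth").text = str(bit_depth)
    return meta

# Values chosen to correspond to WCG/HDR UHD broadcasting.
meta = build_caption_metadata("BT.2020", "HDR", "SMPTE-ST-2084", 10)
xml_bytes = ET.tostring(meta)
```

A receiver would parse this fragment back out and compare it against its own display metadata, as described in the reception steps below.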
- In generating a broadcast signal including the generated video data and subtitle data, a broadcast signal frame may be built and the broadcast signal may be generated using a modulation process.
- the generated broadcast signal may be transmitted as a broadcast signal.
- the method for receiving a broadcast signal including XML captions may include receiving a broadcast signal (S2310), demultiplexing the received broadcast signal into video data and caption data (S2320), and decoding the video data (S2330).
- the broadcast signal received using the receiver may be demodulated and then channel decoded.
- In demultiplexing the received broadcast signal into video data and caption data, the channel decoded broadcast signal may be demultiplexed into video data and caption data using a demultiplexer.
- In decoding the video data and the caption data, respectively, the video data may be decoded and obtained using the video decoder.
- the caption data may be acquired using a caption decoder or an XML parser.
- the receiver may extract the caption by receiving the XML subtitle.
- the receiver can parse the metadata contained in the XML subtitle and compare it with the metadata for the video or the display environment.
- the metadata to be compared may include a dynamic range, color gamut, bit depth, and the like.
- the receiver may perform conversion of the subtitle metadata according to whether the compared metadata match. If they match, the receiver may deliver the subtitle metadata and the subtitle to the graphics engine without performing a separate conversion. In contrast, if the caption metadata does not match the metadata of the video or the display, the receiver may convert the caption metadata and output the converted metadata together with the caption data. This can improve the consistency between the subtitles and the video.
- As the image quality factors of UHD diversify, the variety of contents and receivers is likely to expand. However, for text-based subtitles, it is inefficient to create a separate version for each video or receiver type. XML subtitles have the advantage of being applicable independently of the image size, but changing elements such as WCG and HDR have not previously been considered; the present invention can provide the same quality of subtitle service for various image quality elements.
- Although the present invention is described from the perspective of a receiver, it can also be used in a production or subtitle authoring environment. In addition, it can be used not only in IP streaming based services but in all broadcast services (e.g. the DVB UHD-1 service) that use an XML-based subtitle service.
- The apparatus and method according to the present invention are not limited to the configurations and methods of the embodiments described above; all or parts of the embodiments may be selectively combined so that various modifications can be made.
- the image processing method of the present invention can be implemented as a processor-readable code on a processor-readable recording medium provided in the network device.
- the processor-readable recording medium includes all kinds of recording devices that store data readable by a processor. Examples of the processor-readable recording medium include ROM, RAM, CD-ROM, magnetic tapes, floppy disks, and optical data storage devices, and it may also be implemented in the form of a carrier wave, such as transmission over the Internet.
- the processor-readable recording medium can also be distributed over network coupled computer systems so that the processor-readable code is stored and executed in a distributed fashion.
- the present invention has industrial applicability, being usable and repeatable in the field of broadcast and video signal processing.
Abstract
Description
Claims (15)
- A broadcast signal transmission method comprising: generating a broadcast signal including video data and caption data; and transmitting the generated broadcast signal.
- The method of claim 1, wherein the caption data includes XML caption data.
- The method of claim 2, wherein the XML caption data includes caption text and caption metadata.
- The method of claim 3, wherein the caption metadata includes information corresponding to an extended color gamut and a high dynamic range for high-quality broadcasting.
- The method of claim 3, wherein the caption metadata includes information about a color gamut of the caption, a dynamic range of the caption, and a bit depth of the caption.
- A broadcast signal reception method comprising: receiving a broadcast signal including video data and caption data; and processing and outputting the video data and the caption data.
- The method of claim 6, wherein the caption data includes XML caption data.
- The method of claim 7, wherein the XML caption data includes caption text and caption metadata.
- The method of claim 8, wherein the caption metadata includes information corresponding to an extended color gamut and a high dynamic range for high-quality broadcasting.
- The method of claim 8, wherein the caption metadata includes information about a color gamut of the caption, a dynamic range of the caption, and a bit depth of the caption.
- The method of claim 8, wherein the video data further includes video metadata.
- The method of claim 9, further comprising detecting whether the caption metadata and the video metadata match.
- The method of claim 12, further comprising converting the caption metadata when the caption metadata and the video metadata do not match each other.
- A broadcast signal transmission apparatus comprising: an encoder configured to generate a broadcast signal including video data and caption data; and a transmitter configured to transmit the generated broadcast signal.
- A broadcast signal reception apparatus comprising: a receiver configured to receive a broadcast signal including video data and caption data; and a decoder configured to decode the video data and the caption data.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020167033752A KR20170007333A (ko) | 2014-07-11 | 2015-07-10 | 방송 신호 송수신 방법 및 장치 |
US15/322,597 US10582269B2 (en) | 2014-07-11 | 2015-07-10 | Method and device for transmitting and receiving broadcast signal |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201462023198P | 2014-07-11 | 2014-07-11 | |
US62/023,198 | 2014-07-11 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016006971A1 true WO2016006971A1 (ko) | 2016-01-14 |
Family
ID=55064521
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2015/007203 WO2016006971A1 (ko) | 2014-07-11 | 2015-07-10 | 방송 신호 송수신 방법 및 장치 |
Country Status (3)
Country | Link |
---|---|
US (1) | US10582269B2 (ko) |
KR (1) | KR20170007333A (ko) |
WO (1) | WO2016006971A1 (ko) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018004291A1 (ko) * | 2016-07-01 | 2018-01-04 | 엘지전자 주식회사 | 방송 신호 송신 방법, 방송 신호 수신 방법, 방송 신호 송신 장치 및 방송 신호 수신 장치 |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10410398B2 (en) * | 2015-02-20 | 2019-09-10 | Qualcomm Incorporated | Systems and methods for reducing memory bandwidth using low quality tiles |
US10735755B2 (en) * | 2015-04-21 | 2020-08-04 | Arris Enterprises Llc | Adaptive perceptual mapping and signaling for video coding |
JP6519329B2 (ja) * | 2015-06-09 | 2019-05-29 | ソニー株式会社 | 受信装置、受信方法、送信装置および送信方法 |
EP3349470A4 (en) * | 2015-09-09 | 2019-01-16 | LG Electronics Inc. | BROADCAST SIGNAL TRANSMITTING DEVICE, BROADCAST SIGNAL RECEIVING DEVICE, BROADCASTING SIGNAL TRANSMITTING METHOD, AND BROADCAST SIGNAL RECEIVING METHOD |
WO2017135673A1 (ko) * | 2016-02-01 | 2017-08-10 | 엘지전자 주식회사 | 방송 신호 송신 장치, 방송 신호 수신 장치, 방송 신호 송신 방법, 및 방송 신호 수신 방법 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060136803A1 (en) * | 2004-12-20 | 2006-06-22 | Berna Erol | Creating visualizations of documents |
WO2010095838A2 (ko) * | 2009-02-17 | 2010-08-26 | 삼성전자 주식회사 | 그래픽 화면 처리 방법 및 장치 |
US20100220175A1 (en) * | 2009-02-27 | 2010-09-02 | Laurence James Claydon | Systems, apparatus and methods for subtitling for stereoscopic content |
US20110242104A1 (en) * | 2008-12-01 | 2011-10-06 | Imax Corporation | Methods and Systems for Presenting Three-Dimensional Motion Pictures with Content Adaptive Information |
US20140009677A1 (en) * | 2012-07-09 | 2014-01-09 | Caption Colorado Llc | Caption extraction and analysis |
KR20150007201A (ko) * | 2014-04-11 | 2015-01-20 | 주식회사 우리이앤씨 | 중앙에 주름부를 갖는 지수판 |
Family Cites Families (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100939711B1 (ko) | 2002-12-12 | 2010-02-01 | 엘지전자 주식회사 | 텍스트 기반의 서브타이틀 재생장치 및 방법 |
KR100739680B1 (ko) | 2004-02-21 | 2007-07-13 | 삼성전자주식회사 | 스타일 정보를 포함하는 텍스트 기반 서브타이틀을 기록한저장 매체, 재생 장치 및 그 재생 방법 |
JP4980018B2 (ja) | 2006-09-21 | 2012-07-18 | パナソニック株式会社 | 字幕生成装置 |
US8942289B2 (en) | 2007-02-21 | 2015-01-27 | Microsoft Corporation | Computational complexity and precision control in transform-based digital media codec |
KR20080078217A (ko) | 2007-02-22 | 2008-08-27 | 정태우 | 영상에 포함된 객체 색인 방법과 그 색인 정보를 이용한부가 서비스 방법 및 그 영상 처리 장치 |
JP5219183B2 (ja) * | 2007-03-01 | 2013-06-26 | 任天堂株式会社 | 映像コンテンツ表示プログラム、情報処理装置、映像コンテンツ表示方法および映像コンテンツ表示システム |
CA2745021C (en) | 2008-12-02 | 2014-10-28 | Lg Electronics Inc. | Method for displaying 3d caption and 3d display apparatus for implementing the same |
KR20100115845A (ko) | 2009-04-21 | 2010-10-29 | 문강환 | 동영상 데이터 스트리밍 상에 실시간 자막송출 시스템 |
US8718367B1 (en) * | 2009-07-10 | 2014-05-06 | Intuit Inc. | Displaying automatically recognized text in proximity to a source image to assist comparibility |
CN102065234B (zh) | 2009-11-12 | 2014-11-05 | 新奥特(北京)视频技术有限公司 | 基于分布式字幕处理系统的字幕制播方法及系统 |
CN102939748B (zh) | 2010-04-14 | 2015-12-16 | 三星电子株式会社 | 用于产生用于具有字幕的数字广播的广播比特流的方法和设备以及用于接收用于具有字幕的数字广播的广播比特流的方法和设备 |
KR20110138151A (ko) * | 2010-06-18 | 2011-12-26 | 삼성전자주식회사 | 자막 서비스를 포함하는 디지털 방송 서비스를 제공하기 위한 비디오 데이터스트림 전송 방법 및 그 장치, 자막 서비스를 포함하는 디지털 방송 서비스를 제공하는 비디오 데이터스트림 수신 방법 및 그 장치 |
JP5627319B2 (ja) | 2010-07-05 | 2014-11-19 | キヤノン株式会社 | 画像検出装置及び画像検出方法 |
CN103733219B (zh) | 2011-05-11 | 2016-11-23 | I3研究所股份有限公司 | 图像处理装置、图像处理方法、及储存程序的储存媒体 |
RU2618373C2 (ru) * | 2011-07-29 | 2017-05-03 | Сони Корпорейшн | Устройство и способ распределения потоковой передачи данных, устройство и способ приема потоковой передачи данных, система потоковой передачи данных, программа и носитель записи |
US8899378B2 (en) * | 2011-09-13 | 2014-12-02 | Black & Decker Inc. | Compressor intake muffler and filter |
CN103179093B (zh) | 2011-12-22 | 2017-05-31 | 腾讯科技(深圳)有限公司 | 视频字幕的匹配系统和方法 |
JP6003992B2 (ja) * | 2012-08-27 | 2016-10-05 | ソニー株式会社 | 受信装置および受信方法 |
US10055866B2 (en) * | 2013-02-21 | 2018-08-21 | Dolby Laboratories Licensing Corporation | Systems and methods for appearance mapping for compositing overlay graphics |
US9161066B1 (en) | 2013-03-14 | 2015-10-13 | Google Inc. | Methods, systems, and media for generating and presenting supplemental content based on contextual information |
DE202014011407U1 (de) * | 2013-05-03 | 2020-04-20 | Kofax, Inc. | Systeme zum Erkennen und Klassifizieren von Objekten in durch Mobilgeräte aufgenommenen Videos |
US10171787B2 (en) * | 2013-07-12 | 2019-01-01 | Sony Corporation | Reproduction device, reproduction method, and recording medium for displaying graphics having appropriate brightness |
JP6419807B2 (ja) | 2013-07-19 | 2018-11-07 | コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. | Hdrメタデータ転送 |
CN103826114B (zh) | 2013-11-15 | 2017-04-19 | 青岛海信电器股份有限公司 | 一种立体显示方法及自由立体显示装置 |
KR101880467B1 (ko) | 2014-02-24 | 2018-07-20 | 엘지전자 주식회사 | 방송 신호 송신 장치, 방송 신호 수신 장치, 방송 신호 송신 방법, 및 방송 신호 수신 방법 |
JP6217462B2 (ja) | 2014-03-05 | 2017-10-25 | ソニー株式会社 | 画像処理装置及び画像処理方法、並びに画像処理システム |
EP3154271A4 (en) | 2014-06-09 | 2017-11-29 | LG Electronics Inc. | Service guide information transmission method, service guide information reception method, service guide information transmission device, and service guide information reception device |
-
2015
- 2015-07-10 US US15/322,597 patent/US10582269B2/en not_active Expired - Fee Related
- 2015-07-10 KR KR1020167033752A patent/KR20170007333A/ko not_active Application Discontinuation
- 2015-07-10 WO PCT/KR2015/007203 patent/WO2016006971A1/ko active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060136803A1 (en) * | 2004-12-20 | 2006-06-22 | Berna Erol | Creating visualizations of documents |
US20110242104A1 (en) * | 2008-12-01 | 2011-10-06 | Imax Corporation | Methods and Systems for Presenting Three-Dimensional Motion Pictures with Content Adaptive Information |
WO2010095838A2 (ko) * | 2009-02-17 | 2010-08-26 | 삼성전자 주식회사 | 그래픽 화면 처리 방법 및 장치 |
US20100220175A1 (en) * | 2009-02-27 | 2010-09-02 | Laurence James Claydon | Systems, apparatus and methods for subtitling for stereoscopic content |
US20140009677A1 (en) * | 2012-07-09 | 2014-01-09 | Caption Colorado Llc | Caption extraction and analysis |
KR20150007201A (ko) * | 2014-04-11 | 2015-01-20 | 주식회사 우리이앤씨 | 중앙에 주름부를 갖는 지수판 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018004291A1 (ko) * | 2016-07-01 | 2018-01-04 | 엘지전자 주식회사 | 방송 신호 송신 방법, 방송 신호 수신 방법, 방송 신호 송신 장치 및 방송 신호 수신 장치 |
US10841627B2 (en) | 2016-07-01 | 2020-11-17 | Lg Electronics Inc. | Broadcast signal transmission method, broadcast signal reception method, broadcast signal transmission apparatus, and broadcast signal reception apparatus |
Also Published As
Publication number | Publication date |
---|---|
US10582269B2 (en) | 2020-03-03 |
US20170155966A1 (en) | 2017-06-01 |
KR20170007333A (ko) | 2017-01-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2016006970A1 (ko) | 방송 신호 송수신 방법 및 장치 | |
WO2016006971A1 (ko) | 방송 신호 송수신 방법 및 장치 | |
US11445228B2 (en) | Apparatus for transmitting broadcast signal, apparatus for receiving broadcast signal, method for transmitting broadcast signal and method for receiving broadcast signal | |
WO2016017961A1 (ko) | 방송 신호 송수신 방법 및 장치 | |
KR101366912B1 (ko) | 통신 시스템, 송신 장치 및 수신 장치, 통신 방법, 및프로그램 | |
US8886007B2 (en) | Apparatus and method of transmitting/receiving multimedia playback enhancement information, VBI data, or auxiliary data through digital transmission means specified for multimedia data transmission | |
US20170223336A1 (en) | Transmitting apparatus, stereo image data transmitting method, receiving apparatus, and stereo image data receiving method | |
US20170018109A1 (en) | Image processing apparatus and image processing method | |
US10595099B2 (en) | Method and device for transmitting and receiving broadcast signal for broadcast service on basis of XML subtitle | |
WO2010143820A2 (ko) | 3차원 pip 영상 제공 장치 및 그 방법 | |
US7339959B2 (en) | Signal transmitter and signal receiver | |
JP2009296383A (ja) | 信号送信装置、信号送信方法、信号受信装置及び信号受信方法 | |
WO2017171391A1 (ko) | 방송 신호 송수신 방법 및 장치 | |
US20090015655A1 (en) | Transmitting apparatus and transmitting/receiving apparatus | |
WO2012015288A2 (en) | Method and apparatus for transmitting and receiving extended broadcast service in digital broadcasting | |
WO2021060578A1 (ko) | 영상표시장치, 이의 립싱크 보정방법 및 영상표시시스템 | |
WO2017164551A1 (ko) | 방송 신호 송수신 방법 및 장치 | |
WO2016036012A1 (ko) | 방송 신호 송수신 방법 및 장치 | |
US20210195254A1 (en) | Device for transmitting broadcast signal, device for receiving broadcast signal, method for transmitting broadcast signal, and method for receiving broadcast signal | |
Supa’at et al. | HIGH DEFINITION TELEVISION SYSTEM |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15819733 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20167033752 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15322597 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 15819733 Country of ref document: EP Kind code of ref document: A1 |