WO2016125691A1 - Transmission device, transmission method, reception device, and reception method - Google Patents

Transmission device, transmission method, reception device, and reception method

Info

Publication number
WO2016125691A1
WO2016125691A1 (PCT/JP2016/052594)
Authority
WO
WIPO (PCT)
Prior art keywords
subtitle
luminance
level adjustment
luminance level
stream
Prior art date
Application number
PCT/JP2016/052594
Other languages
English (en)
Japanese (ja)
Inventor
塚越 郁夫
Original Assignee
ソニー株式会社 (Sony Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ソニー株式会社 (Sony Corporation)
Priority to US15/542,524 priority Critical patent/US10542304B2/en
Priority to RU2017126901A priority patent/RU2712433C2/ru
Priority to JP2016573323A priority patent/JP6891492B2/ja
Priority to EP16746522.8A priority patent/EP3255892B1/fr
Priority to CN201680007336.8A priority patent/CN107211169B/zh
Publication of WO2016125691A1 publication Critical patent/WO2016125691A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N21/2353Processing of additional data, e.g. scrambling of additional data or processing content descriptors specifically adapted to content descriptors, e.g. coding, compressing or processing of metadata
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/22Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of characters or indicia using display control signals derived from coded signals representing the characters or indicia, e.g. with a character-code memory
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10Digital recording or reproducing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8543Content authoring using a description language, e.g. Multimedia and Hypermedia information coding Expert Group [MHEG], eXtensible Markup Language [XML]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/278Subtitling

Definitions

  • the present technology relates to a transmission device, a transmission method, a reception device, and a reception method, and more particularly to a transmission device that transmits subtitle information together with image data.
  • conventionally, subtitle information has been transmitted as bitmap data.
  • subtitle information can also be transmitted as text character codes, that is, on a text basis.
  • from the viewpoint of suppressing visual fatigue, it is necessary to appropriately adjust the luminance level of the caption according to the image content.
  • the purpose of this technology is to be able to satisfactorily adjust the subtitle brightness level on the receiving side.
  • a video encoder that generates a video stream with image data
  • a subtitle encoder that generates a subtitle stream having subtitle information
  • An adjustment information insertion unit that inserts luminance level adjustment information for adjusting the luminance level of subtitles into the video stream and / or the subtitle stream
  • a transmission apparatus includes a transmission unit that transmits a container of a predetermined format including the video stream and the subtitle stream.
  • a video stream having image data is generated by a video encoder. For example, a video stream including transmission video data obtained by performing high dynamic range photoelectric conversion on high dynamic range image data is generated.
  • a subtitle stream having caption information is generated by the subtitle encoder. For example, a subtitle stream including a segment having subtitle text information as a constituent element is generated.
  • the adjustment information insertion unit inserts luminance level adjustment information for adjusting the luminance level of subtitles into the video stream and / or subtitle stream.
  • the luminance level adjustment information may be luminance level adjustment information corresponding to the entire screen and/or luminance level adjustment information corresponding to each divided region obtained by dividing the screen into a predetermined number of regions.
  • the luminance level adjustment information inserted into the video stream may include a maximum luminance value, a minimum luminance value, and an average luminance value generated based on the image data.
  • the luminance level adjustment information inserted into the video stream may further include a high luminance threshold, a low luminance threshold, and an average luminance threshold set based on the electro-optic conversion characteristics.
  • the luminance level adjustment information inserted into the subtitle stream may include subtitle luminance range restriction information.
  • the luminance level adjustment information inserted into the subtitle stream may further include a high luminance threshold, a low luminance threshold, and an average luminance threshold set based on the electro-optic conversion characteristics.
  • the luminance level adjustment information inserted into the subtitle stream may further include color space information.
  • the subtitle encoder generates a subtitle stream based on subtitle text information having a TTML structure or a similar structure, and the adjustment information insertion unit may insert the luminance level adjustment information using a metadata element or a styling extension element. Further, for example, the subtitle encoder may generate the subtitle stream with segments as components, and the adjustment information insertion unit may insert a segment including the luminance level adjustment information into the subtitle stream.
  • the sending unit sends a container of a predetermined format including a video stream and a subtitle stream.
  • the container may be a transport stream (MPEG-2 TS) adopted in the digital broadcasting standard.
  • the container may be MP4 used for Internet distribution or the like, or a container of other formats.
  • the luminance level adjustment information for adjusting the luminance level of the caption is inserted into the video stream and / or the subtitle stream. For this reason, it is possible to satisfactorily adjust the luminance level of the caption on the receiving side.
  • an identification information insertion unit that inserts identification information indicating that luminance level adjustment information is inserted into a video stream into a container may be further provided.
  • the receiving side can easily recognize that the luminance level adjustment information is inserted into the video stream by this identification information.
  • an identification information insertion unit that inserts identification information indicating that the luminance level adjustment information is inserted into the subtitle stream may be further provided.
  • information indicating the insertion position of the luminance level adjustment information in the subtitle stream may be added to the identification information.
  • the receiving side can easily recognize that the luminance level adjustment information is inserted in the subtitle stream based on the identification information.
  • a receiving unit for receiving a container of a predetermined format including a video stream having image data and a subtitle stream having subtitle information;
  • a video decoding unit that decodes the video stream to obtain image data;
  • a subtitle decoding unit that decodes the subtitle stream to obtain bitmap data of subtitles;
  • a brightness level adjustment unit that performs brightness level adjustment processing on the bitmap data based on the brightness level adjustment information;
  • a receiving apparatus includes a video superimposing unit that superimposes the bitmap data after the luminance level adjustment obtained by the luminance level adjusting unit on the image data obtained by the video decoding unit.
  • the receiving unit receives a container having a predetermined format including a video stream having image data and a subtitle stream having subtitle information.
  • the video stream includes, for example, transmission video data obtained by performing high dynamic range photoelectric conversion on high dynamic range image data.
  • the subtitle stream includes, for example, bitmap data or caption text information as the caption information.
  • the video decoding unit performs decoding processing on the video stream to obtain image data.
  • the subtitle decoding unit performs decoding processing on the subtitle stream to obtain subtitle bitmap data.
  • the luminance level adjustment unit performs luminance level adjustment processing on the bitmap data based on the luminance level adjustment information. Then, the video superimposing unit superimposes the bitmap data after the luminance level adjustment on the image data.
  • the luminance level adjustment unit may adjust the luminance level using luminance level adjustment information inserted in the video stream and / or the subtitle stream.
  • a luminance level adjustment information generation unit that generates luminance level adjustment information may be further provided, in which case the luminance level adjustment unit performs the luminance level adjustment using the information generated by this unit.
  • the caption bitmap data superimposed on the image data is subjected to luminance level adjustment processing based on the luminance level adjustment information. Therefore, the luminance of the subtitle can be set according to the background image, which suppresses visual fatigue caused by a large luminance difference between the background image and the subtitle and avoids spoiling the atmosphere of the background image.
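The receiving-side flow just described (decode the video, decode the subtitle, adjust its luminance, superimpose) can be sketched as follows. This is a minimal illustration rather than the disclosed implementation: the grayscale list-of-rows image format and the single `gain` factor (standing in for whatever adjustment the luminance level adjustment unit derives) are assumptions for brevity.

```python
def superimpose(image, subtitle_bitmap, position, gain):
    """Final step of the receiver pipeline: scale the luminance of the
    decoded subtitle bitmap by `gain` (standing in for the luminance
    level adjustment unit) and paste it onto the decoded image at
    `position` = (x, y).  Images are lists of rows of 8-bit luminance
    samples; a real receiver would also handle color and alpha blending."""
    x0, y0 = position
    out = [row[:] for row in image]  # copy of the decoded picture
    for dy, row in enumerate(subtitle_bitmap):
        for dx, v in enumerate(row):
            out[y0 + dy][x0 + dx] = min(255, int(v * gain))
    return out
```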
  • a transmission unit that transmits a video stream including transmission video data obtained by performing high dynamic range photoelectric conversion on high dynamic range image data in a container of a predetermined format;
  • the transmission apparatus includes an identification information insertion unit that inserts identification information indicating that the video stream corresponds to a high dynamic range into the container.
  • a video stream including transmission video data obtained by performing high dynamic range photoelectric conversion on high dynamic range image data is transmitted by a transmission unit in a container of a predetermined format.
  • the identification information insertion unit inserts identification information indicating that the video stream corresponds to the high dynamic range into the container.
  • identification information indicating that the video stream corresponds to the high dynamic range is inserted into the container. For this reason, the receiving side can easily recognize that the video stream corresponds to the high dynamic range based on the identification information.
  • a transmission apparatus includes a transmission unit that transmits a video stream including image data and a subtitle stream having subtitle text information in a container of a predetermined format, and an identification information insertion unit that inserts, into the container, identification information indicating that the caption is transmitted as a text code.
  • the transmission unit transmits a video stream including image data and a subtitle stream having subtitle text information in a container of a predetermined format.
  • the identification information insertion unit inserts identification information indicating that the caption is transmitted as a text code into the container.
  • the identification information indicating that the caption is transmitted in the text code is inserted into the container. Therefore, the receiving side can easily recognize that the caption is transmitted as a text code by this identification information.
  • the luminance level of captions can be adjusted satisfactorily on the receiving side. Note that the effects described in the present specification are merely examples; the present technology is not limited to them and may provide additional effects.
  • FIG. 1 shows a configuration example of a transmission / reception system 10 as an embodiment.
  • the transmission / reception system 10 includes a transmission device 100 and a reception device 200.
  • the transmission apparatus 100 generates an MPEG-2 transport stream TS as a container, and transmits the transport stream TS on broadcast waves or in network packets.
  • This transport stream TS includes a video stream having image data.
  • the transport stream TS includes a subtitle stream having caption information.
  • Luminance level adjustment information for adjusting the luminance level of subtitles is inserted into the video stream and/or the subtitle stream.
  • the receiving apparatus 200 receives the transport stream TS sent from the transmitting apparatus 100.
  • the receiving device 200 performs decoding processing on the video stream to obtain image data, and also performs decoding processing on the subtitle stream to obtain caption bitmap data.
  • the receiving apparatus 200 performs luminance level adjustment processing on the caption bitmap data based on the luminance level adjustment information inserted in the video stream and/or subtitle stream, and superimposes the adjusted bitmap data on the image data. Note that the receiving apparatus 200 generates and uses luminance level adjustment information itself when none is inserted in the video stream and/or subtitle stream.
  • Fig. 2 shows an outline of subtitle brightness level adjustment.
  • the horizontal axis represents time
  • the vertical axis represents the luminance level of the background image (image based on image data).
  • the maximum luminance and the minimum luminance of the background image change with time.
  • the luminance range D from the minimum luminance to the maximum luminance is very wide.
  • the overall luminance level of the subtitle is adjusted according to the luminance (maximum, minimum, and average luminance) of the background image, and the subtitle luminance range is also kept within the limit R.
  • for subtitles, subtitles with borders are generally used. A subtitle with a border has a rectangular border portion around the character portion.
  • the caption luminance range in this case means the entire luminance range including both the character portion and the border portion.
  • there are also subtitles without borders, but their luminance level is adjusted in the same manner as subtitles with borders, with the region surrounding the characters corresponding to the border portion.
  • in the following, the description will be given taking a caption with a border as an example.
  • the overall brightness level of the caption (having the character part “ABC”) is adjusted to be high. At this time, the caption luminance range is within the limit of R1 set in advance in association with the caption.
  • the overall luminance level of the caption (having the character portion of “DEF”) is adjusted to be low. At this time, the caption luminance range is within the limit of R2 set in advance in association with the caption.
  • as shown in FIG. 3(a), there is luminance level adjustment information corresponding to the entire screen, and, as shown in FIG. 3(b), luminance level adjustment information corresponding to each divided area (hereinafter referred to as a “partition” as appropriate) obtained by dividing the screen into a predetermined number of areas.
  • FIG. 3B shows an example in which the screen is divided into 24 and 24 partitions from P0 to P23 are formed.
  • the maximum luminance value “global_content_level_max”, the minimum luminance value “global_content_level_min”, and the average luminance value “global_content_level_ave” corresponding to the entire screen are inserted, and the maximum luminance value “partition_content_level_max”, the minimum luminance value “partition_content_level_min”, and the average luminance value “partition_content_level_ave” corresponding to each partition are inserted.
  • a high luminance threshold “Th_max”, a low luminance threshold “Th_min”, and an average luminance threshold “Th_ave” for determining how to adjust the subtitle luminance on the receiving side are inserted into the video stream. These values are obtained based on the electro-optic conversion characteristics (EOTF characteristics).
  • that is, the luminance values corresponding to the dark, average, and bright threshold values recommended for broadcasting/distribution services are set as “Th_min”, “Th_ave”, and “Th_max”. Note that these values need not be inserted into the video stream as long as they are inserted into the subtitle stream.
  • the high luminance threshold “Th_max”, the low luminance threshold “Th_min”, and the average luminance threshold “Th_ave” are also inserted into the subtitle stream. Note that these values need not be inserted into the subtitle stream as long as they are inserted into the video stream. In addition, subtitle luminance range restriction information “renderingdrange” and color space information “colorspace” are inserted into the subtitle stream.
  • in the video stream, the luminance level adjustment information is inserted, for example, as an SEI message. Therefore, as shown in FIG. 5A, the luminance level adjustment information is inserted into the video stream in units of pictures, for example. It is also possible to insert it in units of GOPs or other units. The luminance level adjustment information described above is inserted into the subtitle stream, for example, in units of caption display.
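To make the metadata above concrete, here is a sketch, in Python, of how a transmitter-side analyzer might compute the per-screen and per-partition statistics. The field names mirror the ones quoted in the text; the function name, the list-of-rows input format, and the 4×6 grid default (giving the 24 partitions of FIG. 3(b)) are assumptions for illustration.

```python
def luminance_stats(luma, rows=4, cols=6):
    """Compute per-screen and per-partition luminance statistics for one
    picture.  `luma` is a list of rows of luminance samples; rows=4 and
    cols=6 yield the 24 partitions P0..P23 of FIG. 3(b)."""
    flat = [v for line in luma for v in line]
    stats = {
        "global_content_level_max": max(flat),
        "global_content_level_min": min(flat),
        "global_content_level_ave": sum(flat) / len(flat),
        "partitions": [],
    }
    h, w = len(luma), len(luma[0])
    for r in range(rows):
        for c in range(cols):
            # samples belonging to partition (r, c)
            part = [v
                    for line in luma[r * h // rows:(r + 1) * h // rows]
                    for v in line[c * w // cols:(c + 1) * w // cols]]
            stats["partitions"].append({
                "partition_content_level_max": max(part),
                "partition_content_level_min": min(part),
                "partition_content_level_ave": sum(part) / len(part),
            })
    return stats
```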
  • the subtitle brightness level adjustment on the receiving side will be described.
  • when the luminance contrast between the background image and the subtitle is high, objects with a large luminance difference coexist on the screen, leading to visual fatigue.
  • the luminance level of the caption is adjusted while maintaining the HDR effect of the background image.
  • the foreground portion (the character portion of the subtitle) and the background portion (the border portion) are controlled separately according to the luminance of the background image.
  • referring to FIGS. 6(a-1) and (a-2), the luminance level adjustment of subtitles in the case of type a, a “bright scene”, will be described.
  • a high luminance area exists in the background image.
  • as shown in FIG. 6(a-1), when the subtitle is directly superimposed on such a background image, the luminance difference between the low-luminance background portion and the adjacent high-luminance region of the background image is large, which makes the subtitle difficult to see and causes visual fatigue.
  • in addition, the low-luminance background portion is conspicuous, and the bright atmosphere is impaired. Therefore, in this case, as shown in FIG. 6(a-2), the subtitles are superimposed after adjusting the luminance level to raise the background portion from low to medium luminance.
  • referring to FIGS. 6(b-1) and (b-2), the luminance level adjustment of the caption in the case of type b, a “dark scene”, will be described.
  • the background image has a low luminance area. If the subtitle is directly superimposed on such a background image as shown in FIG. 6 (b-1), the high-intensity foreground portion becomes conspicuous and the dark atmosphere is impaired. Therefore, in this case, as shown in FIG. 6 (b-2), the subtitles are superimposed after adjusting the luminance level to lower the luminance level of the foreground portion from high luminance to medium luminance.
  • referring to FIGS. 7(c-1) and (c-2), the luminance level adjustment of captions in the case of type c, a “scene with high luminance in the dark”, will be described.
  • a high-luminance area exists in the overall dark background image, and the background image has a high contrast ratio.
  • if subtitles are directly superimposed on such a background image as shown in FIG. 7(c-1), the high-luminance foreground portion is conspicuous and the dark atmosphere is impaired. Therefore, in this case, as shown in FIG. 7(c-2), the subtitles are superimposed after adjusting the luminance level to lower the foreground portion from high to medium luminance.
  • referring to FIGS. 7(d-1) and (d-2), the luminance level adjustment of subtitles in the case of type d, a “scene with low luminance while being bright”, will be described.
  • the background image has a high contrast ratio in which a low-luminance area exists in the overall bright background image.
  • as shown in FIG. 7(d-1), when the subtitle is directly superimposed, the luminance difference between the low-luminance background portion and the adjacent high-luminance region of the background image is large, which makes the subtitle difficult to see and causes visual fatigue.
  • in addition, the low-luminance background portion is conspicuous, and the bright atmosphere is impaired. Therefore, in this case, as shown in FIG. 7(d-2), the subtitles are superimposed after adjusting the luminance level to raise the background portion from low to medium luminance.
  • Subtitle brightness level adjustment may be based on global parameters per screen or on parameters for each partition.
  • subtitle brightness level adjustment using global parameters per screen will be described.
  • the adjustment control uses a high luminance threshold “Th_max”, a low luminance threshold “Th_min”, and an average luminance threshold “Th_ave”.
  • the luminance “Lf” of the foreground portion of the caption and the luminance “Lb” of the background portion of the caption are used.
  • FIG. 9A shows an example of TTML (Timed Text Markup Language).
  • the 6-digit color code is based on the table shown in FIG. 9(b).
  • “color” indicates the color of the foreground part that is the character part of the caption
  • “backgroundColor” indicates the color of the background part that is the border part of the caption.
  • for the first style, the foreground color is “#ffff00”, that is, “Yellow”, and the background color is “#000000”, that is, “Black”.
  • for the second style, the foreground color is “#ffffff”, that is, “White”, and the background color is “#000000”, that is, “Black”.
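As an illustrative sketch only, the color pairs above could be pulled out of a TTML fragment as follows. A real receiver would use a proper XML parser and resolve style references; the small name table here stands in for the full color-code table of the figure, and the function name is invented.

```python
import re

# A few 6-digit codes mapped to the color names used in the text
# (stand-in for the full color-code table of FIG. 9(b)).
NAMED = {"#ffff00": "Yellow", "#000000": "Black", "#ffffff": "White"}

def parse_style_colors(ttml_fragment):
    """Extract (tts:color, tts:backgroundColor) pairs from TTML style
    elements with a simple regex; names are substituted where known."""
    pairs = []
    for m in re.finditer(
            r'tts:color="(#[0-9a-fA-F]{6})"\s+'
            r'tts:backgroundColor="(#[0-9a-fA-F]{6})"', ttml_fragment):
        fg, bg = m.group(1).lower(), m.group(2).lower()
        pairs.append((NAMED.get(fg, fg), NAMED.get(bg, bg)))
    return pairs
```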
  • subtitle color information is transmitted separately for the foreground portion and the background portion, and in both cases it is usually expressed in the RGB domain.
  • the relationship between perceived visibility and luminance is not linear. Therefore, the subtitle luminance level adjustment control is performed after converting from the RGB domain to the YCbCr domain (luminance/color-difference domain).
  • the color conversion depends on the color space.
  • the luminances Lf and Lb are obtained by performing color conversion on the color information (R, G, B) of the foreground and background portions of the caption. As described above, the color conversion depends on the color space; therefore, in this embodiment, the color space information “colorspace” of the color information (R, G, B) is inserted into the subtitle stream.
  • the receiving side can obtain the luminances Lf and Lb from the CLUT output.
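Since the conversion equations themselves are not reproduced in the text, here is a hedged sketch using the standard luma weightings of BT.709 and BT.2020; which coefficient set applies would be decided by the “colorspace” information, and the function and table names are illustrative.

```python
# Luma coefficients (Kr, Kb) per color space; the luma equation is
# Y = Kr*R + (1 - Kr - Kb)*G + Kb*B with R, G, B normalized to 0..1.
LUMA_COEFFS = {
    "bt709":  (0.2126, 0.0722),
    "bt2020": (0.2627, 0.0593),
}

def hex_to_rgb(code):
    """'#ffff00' -> (1.0, 1.0, 0.0), components normalized to 0..1."""
    code = code.lstrip("#")
    return tuple(int(code[i:i + 2], 16) / 255.0 for i in (0, 2, 4))

def luminance(code, colorspace="bt709"):
    """Luminance of a subtitle color code in the given color space."""
    kr, kb = LUMA_COEFFS[colorspace]
    r, g, b = hex_to_rgb(code)
    return kr * r + (1.0 - kr - kb) * g + kb * b

# Lf / Lb for the foreground and background of the FIG. 9(a) example:
Lf = luminance("#ffff00")  # yellow characters
Lb = luminance("#000000")  # black border
```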
  • the subtitle luminance level adjustment in the “bright scene” of type a shown in FIG. 6 will be described.
  • the chart of FIG. 10 (a-1) corresponds to FIG. 6 (a-1).
  • in the background image, the maximum luminance value is “max”, the minimum luminance value is “min”, and the average luminance value is “ave”. Since Th_max < max, it is determined that a very bright area exists in the background image. Since Th_ave < ave, the background image is judged to be bright overall. Since Th_min < min, it is determined that no very low luminance area exists in the background image. The luminance Lf of the foreground portion of the caption and the luminance Lb of the background portion of the caption are assumed to be at the illustrated levels.
  • in this case, the luminance Lb of the background portion of the caption is adjusted upward to the luminance Lb′ so that the luminance range satisfies the subtitle luminance range restriction information “renderingdrange”.
  • “renderingdrange” indicates, for example, the ratio of the foreground luminance to the background luminance, and the level of the luminance Lb′ is adjusted so that Lf/Lb′ is equal to or less than that ratio.
  • the chart in FIG. 10(b-1) corresponds to FIG. 6(b-1). Since Th_max > max, it is determined that no very bright area exists in the background image. Since Th_ave > ave, the background image is judged to be dark overall. Since Th_min > min, it is determined that a very low luminance area exists in the background image. The luminance Lf of the foreground portion of the caption and the luminance Lb of the background portion of the caption are assumed to be at the illustrated levels.
  • in this case, the luminance Lf of the foreground portion of the caption is adjusted downward to the luminance Lf′ so that the luminance range satisfies the subtitle luminance range restriction information “renderingdrange”.
  • that is, the level of the luminance Lf′ is adjusted so that Lf′/Lb is equal to or less than the ratio indicated by “renderingdrange”.
  • the chart in FIG. 11(c-1) corresponds to FIG. 7(c-1). Since Th_max < max, it is determined that a very bright area exists in the background image. Since Th_ave > ave, the background image is judged to be dark overall. Since Th_min > min, it is determined that a very low luminance area exists in the background image. The luminance Lf of the foreground portion of the caption and the luminance Lb of the background portion of the caption are assumed to be at the illustrated levels.
  • in this case, the luminance Lf of the foreground portion of the caption is adjusted downward to the luminance Lf′ so that the luminance range satisfies the subtitle luminance range restriction information “renderingdrange”.
  • that is, the level of the luminance Lf′ is adjusted so that Lf′/Lb is equal to or less than the ratio indicated by “renderingdrange”.
  • the chart in FIG. 11(d-1) corresponds to FIG. 7(d-1). Since Th_max < max, it is determined that a very bright area exists in the background image. Since Th_ave < ave, the background image is judged to be bright overall. Since Th_min > min, it is determined that a very low luminance area exists in the background image. The luminance Lf of the foreground portion of the caption and the luminance Lb of the background portion of the caption are assumed to be at the illustrated levels.
  • the luminance Lb of the background portion of the caption is adjusted upward to the luminance Lb ′ so that the luminance range satisfies the subtitle luminance range restriction information “renderingdrange”.
  • that is, the level of the luminance Lb ′ is adjusted so that Lf / Lb ′ is equal to or less than the ratio indicated by “renderingdrange”.
  • the maximum luminance value, the minimum luminance value, and the average luminance value per screen cannot indicate a local luminance distribution.
  • the screen is divided into 24 to form 24 partitions from P0 to P23. It is assumed that the subtitles are superimposed so as to straddle the 8 partitions A, B, C, D, E, F, G, and H.
  • a broken-line rectangle indicates a superimposed position and size of a caption (caption with a border).
  • In this case, the maximum luminance value “partition_content_level_max”, the minimum luminance value “partition_content_level_min”, and the average luminance value “partition_content_level_ave” corresponding to the eight partitions A, B, C, D, E, F, G, and H are used. A wider range of partitions than these eight may also be included. In addition, the high luminance threshold “Th_max”, the low luminance threshold “Th_min”, and the average luminance threshold “Th_ave” are used, together with the luminance “Lf” of the foreground portion of the caption and the luminance “Lb” of the background portion of the caption.
  • the same determination as in the subtitle luminance level adjustment using the per-screen parameters described above is performed for each partition, and the final determination is made by majority decision or by priority.
  • In this example, the judgment in partition C is employed (see FIGS. 11 (d-1) and (d-2)).
  • the luminance Lb of the background portion of the caption is adjusted upward to the luminance Lb ′ so that the luminance range satisfies the subtitle luminance range restriction information “renderingdrange”. That is, the level of the luminance Lb ′ is adjusted so that Lf / Lb ′ is equal to or less than the ratio indicated by “renderingdrange”.
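The per-partition judgment and majority decision described above can be sketched as follows. This is a minimal illustrative sketch, not the patent's normative algorithm: the decision table (raise the caption background Lb when the partition is bright, otherwise lower the caption foreground Lf), the handling of Th_min (omitted for brevity), and the sample values are all assumptions made for the example.

```python
# Hypothetical sketch of the partition-based decision. Names (Th_max, Th_ave,
# Lf, Lb) follow the text; the decision table itself is an assumption.

def decide_for_partition(p_max, p_min, p_ave, th_max, th_min, th_ave):
    """Return the adjustment direction for one partition.

    Bright background (very bright area present AND bright overall) ->
    raise the caption background Lb, as in FIG. 11 (d-1); otherwise lower
    the caption foreground Lf, as in FIG. 10 (b-1). Th_min handling is
    omitted for brevity.
    """
    bright_area = p_max >= th_max       # very bright area present
    bright_overall = p_ave >= th_ave    # background bright overall
    return "raise_background" if (bright_area and bright_overall) else "lower_foreground"

def final_decision(partition_stats, th_max, th_min, th_ave):
    """Majority decision over the partitions the caption straddles."""
    votes = [decide_for_partition(*s, th_max, th_min, th_ave) for s in partition_stats]
    return max(set(votes), key=votes.count)

# Eight partitions A..H as illustrative (max, min, ave) triples:
stats = [(900, 5, 300), (950, 10, 400), (980, 8, 450), (920, 6, 420),
         (100, 5, 40), (120, 7, 50), (110, 4, 45), (990, 9, 500)]
print(final_decision(stats, th_max=800, th_min=20, th_ave=250))
```

With these sample values, five of the eight partitions vote to raise the background, so the majority decision selects that adjustment.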
  • FIG. 14 illustrates a configuration example of the transmission device 100.
  • the transmission device 100 includes a control unit 101, an HDR camera 102, an HDR photoelectric conversion unit 103, an RGB/YCbCr conversion unit 104, a video encoder 105, a luminance level calculation unit 106, a threshold setting unit 107, a subtitle generation unit 108, a text format conversion unit 109, a subtitle encoder 110, a system encoder 111, and a transmission unit 112.
  • the control unit 101 includes a CPU (Central Processing Unit), and controls the operation of each unit of the transmission device 100 based on a control program.
  • the HDR camera 102 images a subject and outputs HDR (High Dynamic Range) video data (image data).
  • This HDR video data has a contrast ratio of 0 to 100% * N (N is a number greater than 1), for example 0 to 1000%, exceeding the brightness of the white peak of the conventional SDR image.
  • the level of 100% corresponds to a white luminance value of 100 cd/m², for example.
  • the master monitor 103a is a monitor for grading HDR video data obtained by the HDR camera 102.
  • the master monitor 103a has a display luminance level corresponding to the HDR video data or suitable for grading the HDR video data.
  • the HDR photoelectric conversion unit 103 applies the HDR photoelectric conversion characteristics to the HDR video data obtained by the HDR camera 102 to obtain transmission video data V1.
  • the RGB / YCbCr conversion unit 104 converts the transmission video data V1 from the RGB domain to the YCbCr (luminance / color difference) domain.
  • Based on the transmission video data V1 converted into the YCbCr domain, the luminance level calculation unit 106 calculates, for each picture, the maximum luminance value “global_content_level_max”, the minimum luminance value “global_content_level_min”, and the average luminance value “global_content_level_ave” corresponding to the entire screen, as well as the maximum luminance value “partition_content_level_max”, the minimum luminance value “partition_content_level_min”, and the average luminance value “partition_content_level_ave” corresponding to each divided region (partition) obtained by dividing the screen into a predetermined number.
  • FIG. 15 shows a configuration example of the luminance level calculation unit 106.
  • the luminance level calculation unit 106 includes pixel value comparison units 106a and 106b.
  • the transmission video data V1 is input to the pixel value comparison unit 106a, and the screen division size is designated by the control unit 101. Instead of designating the screen division size, the number of screen divisions may be designated.
  • the pixel value comparison unit 106a compares the pixel values for each partition (divided region) to obtain the maximum luminance value “partition_content_level_max”, the minimum luminance value “partition_content_level_min”, and the average luminance value “partition_content_level_ave”.
  • Each value for each partition obtained by the pixel value comparison unit 106a is input to the pixel value comparison unit 106b.
  • the pixel value comparison unit 106b compares the values of the partitions to obtain the maximum luminance value “global_content_level_max”, the minimum luminance value “global_content_level_min”, and the average luminance value “global_content_level_ave” corresponding to the entire screen.
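The two-stage comparison in FIG. 15 can be sketched as follows: per-partition statistics first (corresponding to unit 106a), then aggregation over the partitions (corresponding to unit 106b). The 2x2 partition grid and the luminance samples are illustrative assumptions, not values from the patent.

```python
# Minimal sketch of the two-stage pixel value comparison (106a then 106b).

def partition_stats(luma, rows, cols):
    """Stage 106a: (max, min, ave) for each divided region (partition)."""
    h, w = len(luma), len(luma[0])
    ph, pw = h // rows, w // cols
    stats = []
    for r in range(rows):
        for c in range(cols):
            pixels = [luma[y][x]
                      for y in range(r * ph, (r + 1) * ph)
                      for x in range(c * pw, (c + 1) * pw)]
            stats.append((max(pixels), min(pixels), sum(pixels) / len(pixels)))
    return stats

def global_stats(stats):
    """Stage 106b: compare partition values to get whole-screen values."""
    g_max = max(s[0] for s in stats)
    g_min = min(s[1] for s in stats)
    g_ave = sum(s[2] for s in stats) / len(stats)  # partitions are equal-sized
    return g_max, g_min, g_ave

luma = [[10, 20, 30, 40],
        [50, 60, 70, 80],
        [90, 100, 110, 120],
        [130, 140, 150, 160]]
parts = partition_stats(luma, rows=2, cols=2)
print(global_stats(parts))  # (160, 10, 85.0)
```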
  • the threshold setting unit 107 sets, based on the electro-optic conversion characteristic (EOTF characteristic), a high luminance threshold “Th_max”, a low luminance threshold “Th_min”, and an average luminance threshold “Th_ave” for determining how subtitle luminance adjustment should be performed on the reception side (see FIG. 4).
  • the video encoder 105 performs encoding such as MPEG4-AVC or HEVC on the transmission video data V1, and generates a video stream (PES stream) VS including encoded image data. Also, the video encoder 105 inserts luminance level adjustment information for adjusting the luminance level of the subtitle (subtitle) into the video stream.
  • the luminance maximum value “global_content_level_max”, the luminance minimum value “global_content_level_min”, and the luminance average value “global_content_level_ave”, which are obtained by the luminance level calculation unit 106 and correspond to the entire screen, are inserted into the video stream.
  • the maximum luminance value “partition_content_level_max”, the minimum luminance value “partition_content_level_min”, and the average luminance value “partition_content_level_ave” corresponding to the partition are inserted.
  • the high luminance threshold “Th_max”, the low luminance threshold “Th_min”, and the average luminance threshold “Th_ave” set by the threshold setting unit are inserted into the video stream.
  • the video encoder 105 inserts a newly defined luma dynamic range SEI message (Luma_dynamic_range SEI message) in the “SEIs” portion of the access unit (AU).
  • FIG. 16 shows the top access unit of GOP (Group Of Pictures) when the encoding method is HEVC.
  • FIG. 17 shows an access unit other than the head of the GOP when the encoding method is HEVC.
  • a decoding SEI message group “Prefix_SEIs” is arranged before the slices in which the pixel data is encoded, and a display SEI message group “Suffix_SEIs” is arranged after the slices.
  • the luma dynamic range SEI message may be arranged as an SEI message group “Suffix_SEIs”.
  • FIGS. 18 and 19 show an example of the structure (Syntax) of the luma dynamic range SEI message.
  • FIG. 20 shows the contents (Semantics) of main information in the structural example.
  • the 1-bit flag information of “Luma_dynamic_range_cancel_flag” indicates whether to refresh the “Luma_dynamic_range” message. “0” indicates that the message “Luma_dynamic_range” is refreshed. “1” indicates that the message “Luma_dynamic_range” is not refreshed, that is, the previous message is maintained as it is.
  • the 8-bit field of “coded_data_bit_depth” indicates the number of encoded pixel bits.
  • An 8-bit field of “number_of_partitions” indicates the number of partition areas (partitions) in the screen. When this value is less than “2”, it indicates that the image is not divided.
  • An 8-bit field of “block_size” indicates a block size, that is, a size corresponding to an area obtained by dividing the entire screen by the number of divided areas.
  • the 16-bit field of “global_content_level_max” indicates the maximum luminance value of the entire screen.
  • a 16-bit field of “global_content_level_min” indicates the minimum luminance value of the entire screen.
  • a 16-bit field of “global_content_level_ave” indicates an average luminance value of the entire screen.
  • a 16-bit field of “content_threshold_max” indicates a high luminance threshold.
  • a 16-bit field of “content_threshold_min” indicates a low luminance threshold.
  • a 16-bit field of “content_threshold_ave” indicates a threshold value of average luminance.
  • a 16-bit field of “partition_content_level_max” indicates the maximum luminance value in the partition.
  • a 16-bit field of “partition_content_level_min” indicates a minimum luminance value in the partition.
  • a 16-bit field of “partition_content_level_ave” indicates an average luminance value in the partition.
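The field widths enumerated above (8-bit counts and sizes, 16-bit luminance values and thresholds) suggest the following parsing sketch. The byte-aligned big-endian layout is an illustrative assumption: a real SEI payload is bit-oriented and also requires emulation-prevention byte handling, both omitted here.

```python
# Hedged sketch of unpacking the luma dynamic range SEI fields listed above.
import struct

def parse_luma_dynamic_range(payload: bytes):
    # 8-bit fields: coded_data_bit_depth, number_of_partitions, block_size
    bit_depth, n_part, block_size = struct.unpack_from(">BBB", payload, 0)
    # 16-bit fields: global max/min/ave, then the three thresholds
    g_max, g_min, g_ave, th_max, th_min, th_ave = struct.unpack_from(">6H", payload, 3)
    partitions = []
    off = 15
    if n_part >= 2:  # a value less than 2 means the image is not divided
        for _ in range(n_part):
            partitions.append(struct.unpack_from(">3H", payload, off))  # max, min, ave
            off += 6
    return {"bit_depth": bit_depth, "block_size": block_size,
            "global": (g_max, g_min, g_ave),
            "thresholds": (th_max, th_min, th_ave),
            "partitions": partitions}

payload = struct.pack(">BBB6H", 10, 2, 64, 940, 4, 300, 800, 16, 250)
payload += struct.pack(">3H", 900, 8, 280) + struct.pack(">3H", 700, 4, 200)
print(parse_luma_dynamic_range(payload))
```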
  • the caption generation unit 108 generates text data (character code) DT as caption information.
  • the text format conversion unit 109 converts the text data DT into subtitle text information in a predetermined format having display timing information. In this embodiment, conversion to TTML is performed.
  • FIG. 21 shows a TTML structure.
  • TTML is described on an XML basis.
  • Each element such as metadata, styling, and layout exists in the header (head).
  • FIG. 22A shows a structural example of metadata (TTM: TTML Metadata). This metadata includes metadata title information and copyright information.
  • FIG. 22B shows a structural example of styling (TTS: TTML Styling).
  • This styling includes information such as a color (color), a font (fontFamily), a size (fontSize), and an alignment (textAlign) in addition to the identifier (id).
  • FIG. 22C shows a structural example of a layout (region: TTML layout). This layout includes information such as an extent (extent), an offset (padding), a background color (backgroundColor), and an alignment (displayAlign) in addition to the identifier (id) of the region in which the subtitle is arranged.
  • FIG. 23 shows an example of the structure of the body.
  • information on three subtitles, subtitle 1, subtitle 2, and subtitle 3, is included.
  • for each subtitle, a display start timing and a display end timing are described, together with the text data.
  • for example, for subtitle 1, the display start timing is “0.76 s”, the display end timing is “3.45 s”, and the text data is “It seems a paradox, does it not,”.
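A body element like the one in FIG. 23 can be constructed programmatically as below. This is a simplified sketch: the TTML namespace declarations that a conforming document requires are omitted, and the element and attribute choices follow the figure's description only.

```python
# Illustrative construction of a TTML-style body: each <p> carries display
# start/end timings and its text data, as described for FIG. 23.
import xml.etree.ElementTree as ET

body = ET.Element("body")
div = ET.SubElement(body, "div")
p = ET.SubElement(div, "p", {"xml:id": "subtitle1",
                             "begin": "0.76s", "end": "3.45s"})
p.text = "It seems a paradox, does it not,"

xml_text = ET.tostring(body, encoding="unicode")
print(xml_text)
```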
  • the subtitle encoder 110 converts the TTML obtained by the text format conversion unit 109 into various segments, and generates a subtitle stream SS composed of PES packets in which those segments are arranged in the payload.
  • luminance level adjustment information for adjusting the luminance level of the subtitle is inserted into the subtitle stream SS. Specifically, the high luminance threshold “Th_max”, the low luminance threshold “Th_min”, the average luminance threshold “Th_ave”, the subtitle luminance range restriction information “renderingdrange”, and the subtitle color space information “colorspace” are inserted.
  • the insertion of the luminance level adjustment information is performed by the text format conversion unit 109 or the subtitle encoder 110.
  • the luminance level adjustment information is inserted by the text format conversion unit 109, for example, it is inserted by using an element of metadata (metadata) existing in the header of the TTML structure.
  • FIG. 24 shows a structural example of metadata (TTM: TTML Metadata) in that case.
  • “Ttm-ext: colorspace” indicates color space information, followed by “ITUR2020”, “ITUR709”, or the like. In this example, “ITUR2020” is described.
  • “Ttm-ext: dynamicrange” indicates dynamic range information, that is, the type of HDR EOTF characteristics, followed by “ITUR202x”, “ITUR709”, or the like. In this example, “ITUR202x” is described.
  • Ttm-ext: renderingcontrol indicates rendering control information as luminance level adjustment information.
  • Ttm-ext: lumathmax indicates a threshold value of high luminance, and subsequently “TH_max” that is an actual value is described.
  • Ttm-ext: lumathmin indicates a low-luminance threshold value, followed by “TH_min” that is the actual value.
  • Ttm-ext: lumathave indicates a threshold value of average luminance, and subsequently, “TH_ave” which is an actual value thereof is described.
  • “Ttm-ext: renderingdrange” indicates restriction information on the subtitle luminance range, followed by “Maxminratio”.
  • “Maxminratio” is a ratio obtained by dividing the maximum luminance value of the subtitles by the minimum luminance value of the subtitles. When this value is “4”, for example, it means that the maximum luminance of the subtitles after luminance adjustment must fit within 4 times the minimum luminance.
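Enforcing the Maxminratio restriction can be sketched as below. Only the "lower the foreground Lf" case is shown; whether Lf is lowered or Lb is raised follows the background judgments described earlier, and the sample values are illustrative assumptions.

```python
# Sketch of the subtitle luminance range restriction: after adjustment, the
# maximum subtitle luminance must stay within Maxminratio times the minimum.

def limit_foreground(lf, lb, maxminratio):
    """Lower the foreground luminance Lf so that Lf' / Lb <= Maxminratio."""
    return min(lf, lb * maxminratio)

print(limit_foreground(lf=800, lb=100, maxminratio=4))  # 400
print(limit_foreground(lf=300, lb=100, maxminratio=4))  # 300 (already within range)
```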
  • When the text format conversion unit 109 inserts the luminance level adjustment information, it may also be inserted using a styling extension element existing in the header of the TTML structure. In this case, independent rendering control (luminance level adjustment) is possible for each “xml: id”.
  • FIG. 25 shows a structural example of a styling extension (TTML Styling Extension) in that case.
  • “Ttse: colorspace” indicates color space information, followed by “ITUR2020”, “ITUR709”, or the like. In this example, “ITUR2020” is described.
  • “Ttse: dynamicrange” indicates dynamic range information, that is, the type of HDR EOTF characteristics, followed by “ITUR202x”, “ITUR709”, or the like. In this example, “ITUR202x” is described.
  • Ttse: renderingcontrol: lumathmax indicates a high-luminance threshold, followed by “TH_max” which is the actual value.
  • Ttse: renderingcontrol: lumathmin indicates a low-luminance threshold value, followed by “TH_min” that is the actual value.
  • Ttse: renderingcontrol: lumathave indicates an average luminance threshold value, followed by “TH_ave” that is the actual value.
  • Ttse: renderingcontrol: renderingdrange indicates restriction information on the subtitle luminance range, followed by “Maxminratio”.
  • a segment including luminance level adjustment information is inserted into the subtitle stream.
  • a newly defined subtitle rendering control segment (SRCS: Subtitle_rendering_control_segment) is inserted.
  • FIG. 26 shows a structural example (syntax) of the subtitle / rendering / control segment.
  • This structure includes information of “sync_byte”, “segment_type”, “page_id”, “segment_length”, “version_number”, and “number_of_resion”.
  • An 8-bit field of “segment_type” indicates a segment type, and here indicates a subtitle rendering control segment.
  • An 8-bit field of “segment_length” indicates the length (size) of the segment.
  • the 8-bit field “number_of_resion” indicates the number of regions.
  • An 8-bit field of “resion_id” indicates an identifier for identifying a region.
  • An 8-bit field of “colorspace_type” indicates color space information.
  • the 8-bit field of “dynamicrange_type” indicates dynamic range information, that is, the type of HDR EOTF characteristic.
  • a 16-bit field of “luma_th_max” indicates a high luminance threshold.
  • a 16-bit field of “luma_th_min” indicates a low luminance threshold.
  • a 16-bit field of “luma_th_ave” indicates an average luminance threshold value.
  • the 8-bit field of “renderingdrange” indicates restriction information on the subtitle luminance range.
  • This restriction information is, for example, a ratio obtained by dividing the maximum luminance value of the subtitles by the minimum luminance value of the subtitles. When this value is “4”, for example, it means that the maximum luminance of the subtitles after luminance adjustment must fit within four times the minimum luminance.
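The per-region fields of the subtitle rendering control segment listed above (8-bit identifiers and types, 16-bit thresholds, 8-bit renderingdrange) can be serialized as below. The segment header and the exact byte ordering are simplifying assumptions for illustration.

```python
# Hedged sketch of packing/unpacking one region entry of the subtitle
# rendering control segment with the field widths given in the text.
import struct

def pack_region(region_id, colorspace_type, dynamicrange_type,
                luma_th_max, luma_th_min, luma_th_ave, renderingdrange):
    return struct.pack(">BBB3HB", region_id, colorspace_type, dynamicrange_type,
                       luma_th_max, luma_th_min, luma_th_ave, renderingdrange)

def unpack_region(data):
    return struct.unpack(">BBB3HB", data)

blob = pack_region(1, 0, 0, 800, 16, 250, 4)
print(unpack_region(blob))  # (1, 0, 0, 800, 16, 250, 4)
```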
  • the system encoder 111 generates a transport stream TS including the video stream VS generated by the video encoder 105 and the subtitle stream SS generated by the subtitle encoder 110.
  • the transmission unit 112 places the transport stream TS on a broadcast wave or in net packets and transmits it to the receiving device 200.
  • the system encoder 111 inserts, into the transport stream TS as a container, identification information indicating that the luminance level adjustment information is inserted into the video stream.
  • the system encoder 111 inserts an HDR rendering support descriptor (HDR_rendering_support_descriptor) under a program map table (PMT: Program Map Table).
  • FIG. 27A shows a structural example (Syntax) of the HDR rendering support descriptor.
  • FIG. 27B shows the contents (Semantics) of main information in the structural example.
  • An 8-bit field of “descriptor_tag” indicates a descriptor type, and here indicates that it is an HDR rendering support descriptor.
  • the 8-bit field of “descriptor_length” indicates the length (size) of the descriptor, and indicates the number of subsequent bytes as the descriptor length.
  • the flag “HDR_flag” indicates whether the service stream (video stream) is HDR compatible. “1” indicates that HDR is supported, and “0” indicates that HDR is not supported.
  • the flag “composition_control_flag” indicates whether a luma dynamic range SEI message (Luma_dynamic_Range SEI message) is encoded in the video stream, that is, whether the luminance level adjustment information is inserted in the video stream. “1” indicates that it is encoded, and “0” indicates that it is not encoded.
  • the 8-bit field of “EOTF_type” indicates the type of video EOTF characteristic (the VUI value of the video stream).
  • the system encoder 111 also inserts, into the transport stream TS as a container, identification information indicating that the luminance level adjustment information is inserted into the subtitle stream.
  • the system encoder 111 inserts a subtitle rendering metadata descriptor (Subtitle_rendering_metadata_descriptor) under the program map table (PMT: Program Map Table).
  • FIG. 28A shows a structural example (Syntax) of the subtitle rendering metadata descriptor.
  • FIG. 28B shows the contents (Semantics) of main information in the structural example.
  • An 8-bit field of “descriptor_tag” indicates the descriptor type, and here indicates that it is a subtitle rendering metadata descriptor.
  • the 8-bit field of “descriptor_length” indicates the length (size) of the descriptor, and indicates the number of subsequent bytes as the descriptor length.
  • the “subtitle_text_flag” flag indicates whether the subtitle (caption) is transmitted as a text code. “1” indicates that the subtitle is a text code, and “0” indicates that it is not.
  • the flag “subtitle_rendering_control_flag” indicates whether the luminance adjustment meta information of the subtitle is encoded, that is, whether the luminance level adjustment information is inserted into the subtitle stream. “1” indicates that it is encoded, and “0” indicates that it is not encoded.
  • A 3-bit field of “meta_container_type” indicates the storage location, that is, the insertion position, of the luminance adjustment meta information (luminance level adjustment information). “0” indicates the subtitle rendering control segment, “1” indicates the metadata element existing in the header of the TTML structure, and “2” indicates the styling extension element existing in the header of the TTML structure.
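The descriptor flags above can be read as in the sketch below. The bit positions within the byte are assumptions for illustration; only the field names and their meanings come from the text.

```python
# Sketch of interpreting the subtitle rendering metadata descriptor fields.
# Bit layout within the flags byte is a hypothetical choice.

def parse_subtitle_rendering_metadata(flags_byte, meta_container_type):
    return {
        "subtitle_text_flag": bool(flags_byte & 0b1000_0000),
        "subtitle_rendering_control_flag": bool(flags_byte & 0b0100_0000),
        "meta_container": {0: "subtitle rendering control segment",
                           1: "TTML metadata element",
                           2: "TTML styling extension element"}[meta_container_type],
    }

print(parse_subtitle_rendering_metadata(0b1100_0000, 1))
```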
  • HDR video data obtained by imaging with the HDR camera 102 is supplied to the HDR photoelectric conversion unit 103.
  • the HDR video data obtained by the HDR camera 102 is graded using the master monitor 103a.
  • In the HDR photoelectric conversion unit 103, the HDR photoelectric conversion characteristic (HDR OETF curve) is applied to the HDR video data to perform photoelectric conversion, and the transmission video data V1 is obtained.
  • the transmission video data V1 is converted from the RGB domain to the YCbCr (luminance / color difference) domain by the RGB / YCbCr conversion unit 104.
  • the transmission video data V1 converted into the YCbCr domain is supplied to the video encoder 105 and the luminance level calculation unit 106.
  • the luminance level calculation unit 106 obtains, for each picture, the maximum luminance value “global_content_level_max”, the minimum luminance value “global_content_level_min”, and the average luminance value “global_content_level_ave” corresponding to the entire screen, as well as the maximum luminance value “partition_content_level_max”, the minimum luminance value “partition_content_level_min”, and the average luminance value “partition_content_level_ave” corresponding to each divided region (partition) obtained by dividing the screen into a predetermined number (see FIG. 15). Each obtained value is supplied to the video encoder 105.
  • The threshold setting unit 107 is supplied with information on the EOTF characteristics (electro-optic conversion characteristics). Based on this information, the high luminance threshold “Th_max”, the low luminance threshold “Th_min”, and the average luminance threshold “Th_ave” are set (see FIG. 4).
  • Each set value is supplied to the video encoder 105.
  • Each set value is supplied to the text format conversion unit 109 or the subtitle encoder 110.
  • the video encoder 105 performs encoding such as MPEG4-AVC or HEVC on the transmission video data V1, and generates a video stream (PES stream) VS including encoded image data. Also, the video encoder 105 inserts luminance level adjustment information for adjusting the luminance level of the subtitle (subtitle) into the video stream. That is, the video encoder 105 inserts a newly defined luma dynamic range SEI message in the “SEIs” portion of the access unit (AU) (see FIG. 16).
  • the subtitle generating unit 108 generates text data (character code) DT as subtitle information.
  • This text data DT is supplied to the text format conversion unit 109.
  • the text format conversion unit 109 converts the subtitle text information having display timing information, that is, TTML, based on the text data DT (see FIG. 21). This TTML is supplied to the subtitle encoder 110.
  • In the subtitle encoder 110, the TTML obtained by the text format conversion unit 109 is converted into various segments, and a subtitle stream SS composed of PES packets in which those segments are arranged in the payload is generated.
  • the luminance level adjustment information for adjusting the luminance level of the subtitle (subtitle) is inserted into the subtitle stream SS.
  • the insertion of the brightness level adjustment information is performed by the text format conversion unit 109 or the subtitle encoder 110.
  • an element of metadata (metadata) existing in the header of the TTML structure or an element of styling extension (styling extension) existing in the header of the TTML structure is used (See FIGS. 24 and 25).
  • a newly defined subtitle rendering control segment is inserted into the subtitle stream (see FIG. 26).
  • the video stream VS generated by the video encoder 105 is supplied to the system encoder 111.
  • the subtitle stream SS generated by the subtitle encoder 110 is supplied to the system encoder 111.
  • a transport stream TS including the video stream VS and the subtitle stream SS is generated.
  • the transport stream TS is transmitted to the receiving device 200 by the transmitting unit 112 on a broadcast wave or a net packet.
  • the system encoder 111 inserts identification information indicating that luminance level adjustment information is inserted into the video stream into the transport stream TS. That is, in the system encoder 111, the HDR rendering support descriptor is inserted under the program map table (PMT) (see FIG. 27A). In the system encoder 111, identification information indicating that the luminance level adjustment information is inserted into the subtitle stream SS is inserted into the transport stream TS. That is, in the system encoder 111, a subtitle rendering metadata descriptor is inserted under the program map table (PMT) (see FIG. 28A).
  • FIG. 29 illustrates a configuration example of the transport stream TS.
  • In this configuration example, there is a PES packet “video PES1” of the video stream identified by PID1.
  • In the video stream, a luma dynamic range SEI message in which the luminance level adjustment information (background image luminance values, thresholds for composition, and the like) is described is inserted.
  • In the subtitle stream, the luminance level adjustment information (color space information, thresholds for composition, subtitle luminance range restriction information, and the like) is inserted in the metadata element existing in the header of the TTML structure, in the styling extension element existing in the header of the TTML structure, or in the subtitle rendering control segment.
  • the transport stream TS includes a PMT (Program Map Table) as PSI (Program Specific Information).
  • PSI is information describing to which program each elementary stream included in the transport stream belongs.
  • the PMT has a program loop (Program loop) that describes information related to the entire program.
  • the PMT also has an elementary stream loop having information related to each elementary stream. In this configuration example, there are a video elementary stream loop (video ES loop) and a subtitle elementary stream loop (Subtitle ES loop).
  • In the video elementary stream loop (video ES loop), information such as the stream type and the PID (packet identifier) is arranged corresponding to the video stream, and a descriptor describing information related to the video stream is also arranged.
  • the value of “Stream_type” of this video stream is set to a value indicating, for example, the HEVC video stream, and the PID information indicates PID1 given to the PES packet “video PES1” of the video stream.
  • As the descriptors, an HEVC descriptor and the newly defined HDR rendering support descriptor are inserted.
  • In the subtitle elementary stream loop (Subtitle ES loop), information such as the stream type and the PID (packet identifier) is arranged corresponding to the subtitle stream, and a descriptor describing information related to the subtitle stream is also arranged.
  • the value of “Stream_type” of this subtitle stream is set to, for example, a value indicating a private stream, and the PID information indicates PID2 assigned to the PES packet “Subtitle PES2” of the subtitle stream.
  • As a descriptor, the newly defined subtitle rendering metadata descriptor and the like are inserted.
  • FIG. 30 illustrates a configuration example of the receiving device 200.
  • the receiving device 200 includes a control unit 201, a reception unit 202, a system decoder 203, a video decoder 204, a subtitle text decoder 205, a font development unit 206, an RGB/YCbCr conversion unit 208, and a luminance level adjustment unit 209.
  • the receiving apparatus 200 includes a video superimposing unit 210, a YCbCr / RGB converting unit 211, an HDR electro-optic converting unit 212, an HDR display mapping unit 213, and a CE monitor 214.
  • the control unit 201 includes a CPU (Central Processing Unit) and controls the operation of each unit of the receiving device 200 based on a control program.
  • the reception unit 202 receives the transport stream TS transmitted from the transmission device 100 on broadcast waves or net packets.
  • the system decoder 203 extracts the video stream VS and the subtitle stream SS from the transport stream TS. Further, the system decoder 203 extracts various information inserted in the transport stream TS (container) and sends it to the control unit 201.
  • the extracted information includes an HDR rendering support descriptor (see FIG. 27A) and a subtitle rendering metadata descriptor (see FIG. 28A).
  • the control unit 201 recognizes that the video stream (service stream) is HDR compatible since the “HDR_flag” flag of the HDR rendering support descriptor is “1”. Further, since the “composition_control_flag” flag of the HDR rendering support descriptor is “1”, the control unit 201 recognizes that a luma dynamic range SEI message is encoded in the video stream, that is, that the luminance level adjustment information is inserted in the video stream.
  • the control unit 201 recognizes that the subtitle (caption) is transmitted as a text code because the “subtitle_text_flag” flag of the subtitle rendering metadata descriptor is “1”. In addition, since the “subtitle_rendering_control_flag” flag of the subtitle rendering metadata descriptor is “1”, the control unit 201 recognizes that the luminance adjustment meta information of the subtitle is encoded, that is, that the luminance level adjustment information is inserted into the subtitle stream.
  • the video decoder 204 performs a decoding process on the video stream VS extracted by the system decoder 203 and outputs transmission video data V1. Further, the video decoder 204 extracts a parameter set and SEI message inserted in each access unit constituting the video stream VS, and sends necessary information to the control unit 201.
  • In this case, since the control unit 201 recognizes that the luma dynamic range SEI message is encoded in the video stream as described above, the video decoder 204, under the control of the control unit 201, reliably extracts the SEI message and acquires the luminance level adjustment information such as the background image luminance values and the thresholds for composition.
  • the subtitle text decoder 205 performs a decoding process on the segment data of each region included in the subtitle stream SS to obtain the text data and control code of each region.
  • the subtitle text decoder 205 also acquires the luminance level adjustment information such as the color space information, the thresholds for composition, and the subtitle luminance range restriction information from the subtitle stream SS.
  • In this case, since the control unit 201 recognizes that the luminance adjustment meta information of the subtitle is encoded as described above, the subtitle text decoder 205, under the control of the control unit 201, reliably acquires the luminance level adjustment information.
  • the font development unit 206 develops the font based on the text data and control code of each region obtained by the subtitle text decoder 205 to obtain bitmap data of each region.
  • the RGB / YCbCr converter 208 converts the bitmap data from the RGB domain to the YCbCr (luminance / color difference) domain. In this case, the RGB / YCbCr conversion unit 208 performs conversion using a conversion formula corresponding to the color space based on the color space information.
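Selecting a conversion formula by color space, as the RGB/YCbCr conversion unit does, can be sketched as follows. The BT.709 and BT.2020 luma coefficients are the standard published values; the normalized 0..1 signal range and the color space labels ("ITUR709", "ITUR2020", matching the identifiers used in this document) are illustrative choices.

```python
# Sketch of RGB -> YCbCr conversion with coefficients chosen by the color
# space information carried with the subtitle data.

COEFFS = {"ITUR709": (0.2126, 0.7152, 0.0722),    # BT.709 Kr, Kg, Kb
          "ITUR2020": (0.2627, 0.6780, 0.0593)}   # BT.2020 Kr, Kg, Kb

def rgb_to_ycbcr(r, g, b, colorspace):
    kr, kg, kb = COEFFS[colorspace]
    y = kr * r + kg * g + kb * b
    cb = (b - y) / (2 * (1 - kb))
    cr = (r - y) / (2 * (1 - kr))
    return y, cb, cr

# Pure red yields a different luma depending on the signaled color space:
print(rgb_to_ycbcr(1.0, 0.0, 0.0, "ITUR709")[0])   # 0.2126
print(rgb_to_ycbcr(1.0, 0.0, 0.0, "ITUR2020")[0])  # 0.2627
```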
  • the luminance level adjustment unit 209 performs luminance level adjustment on the caption bitmap data converted into the YCbCr domain using the background image luminance value, the synthesis threshold value, and the caption luminance range restriction information.
  • Subtitle luminance level adjustment using global parameters for the entire screen (see FIG. 8) or subtitle luminance level adjustment using parameters for each partition (see FIG. 12) is performed.
  • the video superimposing unit 210 superimposes the bitmap data of each region whose luminance level is adjusted by the luminance level adjusting unit 209 on the transmission video data V1 obtained by the video decoder 204.
  • the YCbCr / RGB conversion unit 211 converts the transmission video data V1 ′ on which the bitmap data is superimposed from the YCbCr (luminance / color difference) domain to the RGB domain. In this case, the YCbCr / RGB conversion unit 211 performs conversion using a conversion formula corresponding to the color space based on the color space information.
  • the HDR electro-optic conversion unit 212 applies the HDR electro-optic conversion characteristics to the transmission video data V1 ′ converted into the RGB domain, and obtains display video data for displaying an HDR image.
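The patent does not bind the description to one specific HDR electro-optic conversion characteristic. Purely as an illustration, one widely used characteristic is the SMPTE ST 2084 (PQ) EOTF, which maps a normalized non-linear code value to absolute display luminance:

```python
# Illustrative HDR EOTF: SMPTE ST 2084 (PQ), one characteristic the HDR
# electro-optic conversion unit 212 could apply. Maps a non-linear code
# value in [0, 1] to display luminance in cd/m^2 (nits).

M1 = 2610 / 16384            # 0.1593017578125
M2 = 2523 / 4096 * 128       # 78.84375
C1 = 3424 / 4096             # 0.8359375
C2 = 2413 / 4096 * 32        # 18.8515625
C3 = 2392 / 4096 * 32        # 18.6875

def pq_eotf(code):
    """ST 2084 EOTF: normalized code value -> luminance in nits."""
    e = code ** (1.0 / M2)
    num = max(e - C1, 0.0)
    den = C2 - C3 * e
    return 10000.0 * (num / den) ** (1.0 / M1)
```

With these constants, a code value of 1.0 maps to 10,000 cd/m² and 0.0 maps to 0.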
  • the HDR display mapping unit 213 performs display luminance adjustment on the display video data according to the maximum luminance display capability of the CE monitor 214.
  • the CE monitor 214 displays an HDR image based on display video data for which display brightness adjustment has been performed.
  • The CE monitor 214 is, for example, an LCD (Liquid Crystal Display), an organic EL display (organic electroluminescence display), or the like.
  • the reception unit 202 receives the transport stream TS transmitted from the transmission device 100 on broadcast waves or net packets.
  • This transport stream TS is supplied to the system decoder 203.
  • the system decoder 203 extracts the video stream VS and the subtitle stream SS from the transport stream TS.
  • various information inserted in the transport stream TS is extracted and sent to the control unit 201.
  • the extracted information includes an HDR rendering support descriptor (see FIG. 27A) and a subtitle rendering metadata descriptor (see FIG. 28A).
  • the control unit 201 recognizes that the video stream (service stream) corresponds to HDR since the flag of “HDR_flag” of the HDR rendering support descriptor is “1”. Further, the control unit 201 recognizes that the luma dynamic range SEI message is encoded in the video stream because the flag of “composition_control_flag” of the HDR rendering support descriptor is “1”.
  • The control unit 201 recognizes that the subtitle (caption) is transmitted as a text code because the flag “subtitle_text_flag” of the subtitle rendering metadata descriptor is “1”. Further, the control unit 201 recognizes that the luminance adjustment meta information of the subtitle is encoded because the flag “subtitle_rendering_control_flag” of the subtitle rendering metadata descriptor is “1”.
  • the video stream VS extracted by the system decoder 203 is supplied to the video decoder 204.
  • The video decoder 204 performs a decoding process on the video stream VS to obtain transmission video data V1. Also, the video decoder 204 extracts the luma dynamic range SEI message from the video stream VS, and acquires luminance level adjustment information such as the background image luminance value and the composition threshold.
  • the subtitle stream SS extracted by the system decoder 203 is supplied to the subtitle text decoder 205.
  • the segment data of each region included in the subtitle stream SS is decoded, and the text data and control code of each region are obtained.
  • the subtitle text decoder 205 acquires luminance level adjustment information such as color space information, composition threshold value, and subtitle luminance range restriction information from the subtitle stream SS.
  • the text data and control codes of each region are supplied to the font expansion unit 206.
  • the font expansion unit 206 expands the font based on the text data and control code of each region, and obtains bitmap data of each region.
  • This bitmap data is converted from the RGB domain to the YCbCr domain based on the color space information S by the RGB / YCbCr conversion unit 208 and supplied to the luminance level adjustment unit 209.
  • In the luminance level adjustment unit 209, luminance level adjustment is performed on the bitmap data of each region converted into the YCbCr domain by using the background image luminance value, the composition threshold, and the caption luminance range restriction information.
  • Subtitle luminance level adjustment using global parameters for the entire screen (see FIG. 8) or subtitle luminance level adjustment using parameters for each partition (see FIG. 12) is performed.
  • the transmission video data V1 obtained by the video decoder 204 is supplied to the video superimposing unit 210.
  • The bitmap data of each region after the luminance level adjustment obtained by the luminance level adjustment unit 209 is supplied to the video superimposing unit 210.
  • the bitmap data of each region is superimposed on the transmission video data V1.
  • the transmission video data V1 ′ obtained by superimposing the bitmap data obtained by the video superimposing unit 210 is converted from the YCbCr (luminance / color difference) domain to the RGB domain in the YCbCr / RGB conversion unit 211 based on the designation of the color space information V. And is supplied to the HDR electro-optic conversion unit 212.
  • In the HDR electro-optic conversion unit 212, the HDR electro-optic conversion characteristic is applied to the transmission video data V1′, and display video data for displaying an HDR image is obtained.
  • the display video data is supplied to the HDR display mapping unit 213.
  • In the HDR display mapping unit 213, display luminance adjustment is performed on the display video data in accordance with the maximum luminance display capability of the CE monitor 214 and the like.
  • the display video data whose display brightness is adjusted in this way is supplied to the CE monitor 214.
  • On the CE monitor 214, an HDR image is displayed based on the display video data.
  • the receiving apparatus 200 further includes a subtitle bitmap decoder 215 in order to cope with the case where the subtitle information included in the subtitle stream SS is bitmap data.
  • the subtitle bitmap decoder 215 performs decoding processing on the subtitle stream SS to obtain caption bitmap data.
  • the subtitle bitmap data is supplied to the luminance level adjustment unit 209.
  • The caption information (transmission data) included in the subtitle stream SS passes through the CLUT, and the CLUT output can be in the YCbCr domain. Therefore, the caption bitmap data obtained by the subtitle bitmap decoder 215 is directly supplied to the luminance level adjustment unit 209. In this case, the reception side can obtain the luminance Lf of the foreground portion of the caption and the luminance Lb of the background portion of the caption from the CLUT output.
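Because the CLUT output is already in the YCbCr domain, Lf and Lb fall out of the lookup directly. The minimal sketch below illustrates this; the two-entry CLUT and its foreground/background index assignment are hypothetical, not taken from the patent.

```python
# Sketch: when subtitle bitmap data is indexed through a CLUT whose output
# is YCbCr, the Y component of the entries used for the caption foreground
# and background gives Lf and Lb directly. The CLUT contents and index
# assignment here are hypothetical.

clut = {
    0: (0.05, 0.0, 0.0),   # background entry: (Y, Cb, Cr)
    1: (0.90, 0.0, 0.0),   # foreground entry
}

def caption_luminances(clut, fg_index=1, bg_index=0):
    lf = clut[fg_index][0]   # Lf: luminance of caption foreground
    lb = clut[bg_index][0]   # Lb: luminance of caption background
    return lf, lb
```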
  • In order to cope with the case where the luma dynamic range SEI message is not encoded in the video stream VS and the background image luminance value cannot be obtained from the SEI message, the receiving apparatus 200 further includes a luminance level calculation unit 216.
  • the luminance level calculation unit 216 is configured in the same manner as the luminance level calculation unit 106 in the transmission device 100 shown in FIG. 14 (see FIG. 15).
  • Based on the transmission video data V1 obtained by the video decoder 204, the luminance level calculation unit 216 obtains, for each picture, the maximum luminance value “global_content_level_max”, the minimum luminance value “global_content_level_min”, and the average luminance value “global_content_level_ave” for the entire screen, as well as the maximum luminance value “partition_content_level_max”, the minimum luminance value “partition_content_level_min”, and the average luminance value “partition_content_level_ave” corresponding to each divided area (partition) obtained by dividing the screen into a predetermined number (see FIG. 15).
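The statistics listed above can be computed directly from per-pixel luminance. The sketch below (plain Python over a 2-D list of luminance values, with a hypothetical partition grid size) shows one way; it is illustrative, not the unit 216's actual implementation.

```python
# Sketch of the per-picture statistics the luminance level calculation
# unit 216 derives: global max/min/average luminance, plus the same
# triple for each partition of a rows x cols grid (grid size assumed).

def luminance_stats(luma, rows=2, cols=2):
    h, w = len(luma), len(luma[0])
    flat = [v for row in luma for v in row]
    stats = {
        "global_content_level_max": max(flat),
        "global_content_level_min": min(flat),
        "global_content_level_ave": sum(flat) / len(flat),
        "partitions": [],
    }
    for r in range(rows):
        for c in range(cols):
            block = [luma[y][x]
                     for y in range(r * h // rows, (r + 1) * h // rows)
                     for x in range(c * w // cols, (c + 1) * w // cols)]
            stats["partitions"].append({
                "partition_content_level_max": max(block),
                "partition_content_level_min": min(block),
                "partition_content_level_ave": sum(block) / len(block),
            })
    return stats
```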
  • The receiving apparatus 200 includes a threshold setting unit 217 in order to cope with the case where the luma dynamic range SEI message is not encoded in the video stream VS, or is encoded but does not include a composition threshold, and the composition threshold is not inserted in the subtitle stream SS either.
  • the threshold setting unit 217 is configured in the same manner as the threshold setting unit 107 in the transmission device 100 illustrated in FIG.
  • Based on the electro-optic conversion characteristic (EOTF characteristic), the threshold setting unit 217 sets a high luminance threshold “Th_max”, a low luminance threshold “Th_min”, and an average luminance threshold “Th_ave” for determining how to adjust the luminance of captions on the receiving side (see FIG. 4).
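FIG. 4 is not reproduced here, and the patent does not spell out the exact mapping from the EOTF characteristic to the three thresholds. As one hedged illustration, the thresholds could be taken as the code values that an inverse EOTF (here ST 2084/PQ) assigns to chosen luminance anchors; the anchor luminances below are hypothetical.

```python
# One way (an assumption, not the patent's definition) to derive the
# thresholds Th_max / Th_min / Th_ave from an electro-optic conversion
# characteristic: pick anchor luminances in nits and take the code
# values that the inverse EOTF (here ST 2084 / PQ) assigns to them.

M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_inverse_eotf(nits):
    """ST 2084 inverse EOTF: luminance in nits -> normalized code value."""
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

# Hypothetical anchor luminances for "too bright", "too dark", "average".
Th_max = pq_inverse_eotf(1000.0)   # high luminance threshold
Th_min = pq_inverse_eotf(0.1)      # low luminance threshold
Th_ave = pq_inverse_eotf(100.0)    # average luminance threshold
```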
  • In step ST1, the receiving apparatus 200 reads the subtitle rendering metadata descriptor of the subtitle stream SS, and detects whether the encoded data of the subtitle information is text-based and whether the luminance adjustment meta information is present.
  • In step ST2, the receiving apparatus 200 determines whether there is luminance adjustment meta information.
  • When there is, in step ST3, the receiving apparatus 200 reads the arrangement location of the meta information, and obtains the meta information (color space information, composition threshold, and caption luminance range restriction information) from that location.
  • After this step ST3, the receiving apparatus 200 proceeds to the process of step ST5.
  • When there is not, in step ST4, the receiving apparatus 200 sets the color space to the conventional type, and sets the composition threshold and the caption luminance range restriction information by itself. After this step ST4, the receiving apparatus 200 proceeds to the process of step ST5.
  • In step ST5, the receiving apparatus 200 determines whether or not the encoded data of the caption information is text-based. In the case of the text base, the receiving apparatus 200 decodes the text-based subtitle in step ST6, develops the font from the character code together with the subtitle synthesis position, and generates bitmap data. At this time, the bitmap data is created with the development size and the foreground and background colors.
  • In step ST7, the receiving apparatus 200 obtains the subtitle foreground luminance Lf and the subtitle background luminance Lb based on the color space information. After this step ST7, the receiving apparatus 200 proceeds to the process of step ST16.
  • When the encoded data is not text-based, the receiving apparatus 200 decodes the subtitle stream in step ST8, and obtains the subtitle bitmap data and the subtitle synthesis position.
  • In step ST9, the receiving apparatus 200 obtains the subtitle foreground luminance Lf and the subtitle background luminance Lb through the CLUT specified by the stream. After this step ST9, the receiving apparatus 200 proceeds to the process of step ST16.
  • In step ST11, the receiving apparatus 200 reads the HDR rendering support descriptor and detects the presence or absence of the luma dynamic range SEI message of the video stream VS.
  • In step ST12, the receiving apparatus 200 determines whether there is a luma dynamic range SEI message. If present, the receiving apparatus 200 reads each element of the SEI message and detects the background image luminance value and the composition threshold in step ST13. After this step ST13, the receiving apparatus 200 proceeds to the process of step ST15. On the other hand, if not present, in step ST14, the receiving apparatus 200 calculates the luminance level of the decoded image to obtain the background image luminance value, and sets a composition threshold. After this step ST14, the receiving apparatus 200 proceeds to the process of step ST15.
  • In step ST15, the receiving apparatus 200 determines whether there is partition division information.
  • When there is, the receiving apparatus 200 determines in step ST16 whether low-luminance and high-luminance objects are separated from the caption synthesis (superimposition) position.
  • When they are not, the receiving apparatus 200 proceeds to the process of step ST18.
  • When they are, the receiving apparatus 200 performs luminance level adjustment processing based on the partition division information in step ST17. After this step ST17, the receiving apparatus 200 proceeds to the process of step ST19.
  • In step ST18, the receiving apparatus 200 performs the global luminance level adjustment processing. After this step ST18, the receiving apparatus 200 proceeds to the process of step ST19.
  • In step ST19, the receiving apparatus 200 synthesizes (superimposes) the caption on the background image with the adjusted luminance. After step ST19, the receiving apparatus 200 ends the process in step ST20.
  • In step ST21, the receiving apparatus 200 starts processing.
  • In step ST22, the receiving apparatus 200 determines whether the maximum luminance value is higher than the high luminance threshold. When it is, the receiving apparatus 200 determines in step ST23 whether the minimum luminance value is lower than the low luminance threshold. When it is not, the receiving apparatus 200 corrects the luminance level of the background of the caption in step ST24 so that it falls within the range when the maximum/minimum ratio of the caption is specified (see FIG. 10(a-2)).
  • When the minimum luminance value is lower than the low luminance threshold, the receiving apparatus 200 determines in step ST25 whether the average luminance value is higher than the average luminance threshold. When it is, the receiving apparatus 200 corrects the luminance level of the background of the caption in step ST26 so that it falls within the range when the maximum/minimum ratio of the caption is specified (see FIG. 11(d-2)). On the other hand, when it is not, the receiving apparatus 200 corrects the luminance level of the foreground of the caption in step ST27 so that it falls within the range when the maximum/minimum ratio of the caption is specified (see FIG. 11(c-2)).
  • When the maximum luminance value is not higher than the high luminance threshold, the receiving apparatus 200 determines in step ST28 whether the minimum luminance value is lower than the low luminance threshold. When it is, the receiving apparatus 200 corrects the luminance level of the foreground of the caption in step ST29 so that it falls within the range when the maximum/minimum ratio of the caption is specified (see FIG. 10(b-2)). On the other hand, when it is not, the receiving apparatus 200 does not adjust the luminance of the caption in step ST30.
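The branch structure of steps ST21 to ST30 just described can be condensed into a small decision function. This is a sketch of the flow of FIG. 33, with the terminal steps returned as labels; the comparison directions follow the description above.

```python
# Sketch of the decision flow of steps ST21-ST30: given the background
# luminance statistics and the thresholds, return which terminal step
# of FIG. 33 applies to the caption.

def caption_adjustment_step(lmax, lmin, lave, th_max, th_min, th_ave):
    if lmax > th_max:                          # ST22: very bright background?
        if lmin < th_min:                      # ST23: also very dark areas?
            if lave > th_ave:                  # ST25: bright on average?
                return "ST26"  # dim caption background (cf. FIG. 11(d-2))
            return "ST27"      # limit caption foreground (cf. FIG. 11(c-2))
        return "ST24"          # dim caption background (cf. FIG. 10(a-2))
    if lmin < th_min:                          # ST28: dark background only?
        return "ST29"          # limit caption foreground (cf. FIG. 10(b-2))
    return "ST30"              # no luminance adjustment
```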
  • In the luminance level adjustment using parameters for each partition, the receiving apparatus 200 performs the flow processing of FIG. 33 for each partition, and makes a final decision among the results of the divisions either by majority decision or by assigning priorities (starting from 1) and applying them in that order.
  • For example, the result corresponding to step ST24 in FIG. 33 is given priority 1 (the result of division D in the example of FIG. 13), the result corresponding to step ST26 in FIG. 33 is given priority 2 (the result of division C in the example of FIG. 13), the result corresponding to step ST29 in FIG. 33 is given priority 3 (the result of division F in the example of FIG. 13), and the result corresponding to step ST27 in FIG. 33 is given priority 4 (the result of division E in the example of FIG. 13).
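The partition-based final decision described above can be sketched as follows: the FIG. 33 flow is run per partition, and the results are combined either by majority vote or by the stated priority order (ST24, then ST26, ST29, ST27). The fallback to ST30 when no partition needs adjustment is an assumption for completeness.

```python
# Sketch of the final decision over per-partition results of the FIG. 33
# flow: either majority vote or the stated priority order
# (ST24 first, then ST26, ST29, ST27).

from collections import Counter

PRIORITY = ["ST24", "ST26", "ST29", "ST27"]  # priorities 1..4

def final_decision(partition_results, mode="priority"):
    if mode == "majority":
        # most frequent per-partition result wins
        return Counter(partition_results).most_common(1)[0][0]
    for step in PRIORITY:                 # mode == "priority"
        if step in partition_results:
            return step
    return "ST30"                         # no partition needs adjustment
```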
  • luminance level adjustment information for adjusting the luminance level of captions is inserted into the video stream VS and the subtitle stream SS. Therefore, it is possible to satisfactorily adjust the subtitle brightness level on the receiving side, and it is possible to suppress visual fatigue and to prevent the atmosphere of the background image from being damaged.
  • identification information indicating that luminance level adjustment information is inserted into the video stream VS is inserted into the transport stream TS (container). Therefore, the receiving side can easily recognize that the luminance level adjustment information is inserted into the video stream VS by this identification information.
  • identification information indicating that luminance level adjustment information is inserted into the subtitle stream SS is inserted into the transport stream TS (container). Therefore, the receiving side can easily recognize that the luminance level adjustment information is inserted into the subtitle stream SS based on this identification information.
  • In the above-described embodiment, the container is a transport stream (MPEG-2 TS). However, the transport is not limited to the TS, and the video layer can be realized by the same method even in the case of other packets such as ISOBMFF and MMT.
  • Also, the subtitle stream need not be limited to one composed of PES packets in which the TTML is placed in segments arranged in the multiplexed payload; the present technology can also be realized by directly arranging the TTML in the PES packets or section portions arranged in the multiplexed payload.
  • this technique can also take the following structures.
  • a video encoder that generates a video stream having image data
  • a subtitle encoder that generates a subtitle stream having subtitle information
  • An adjustment information insertion unit that inserts luminance level adjustment information for adjusting the luminance level of subtitles into the video stream and / or the subtitle stream
  • a transmission apparatus comprising: a transmission unit configured to transmit a container having a predetermined format including the video stream and the subtitle stream.
  • The transmission device described above, wherein the luminance level adjustment information is luminance level adjustment information corresponding to the entire screen and/or luminance level adjustment information corresponding to each divided area obtained by dividing the screen into a predetermined number.
  • The transmission device described above, wherein the luminance level adjustment information inserted into the video stream includes a maximum luminance value, a minimum luminance value, and an average luminance value generated based on the image data.
  • The transmission device described above, wherein the luminance level adjustment information inserted into the video stream further includes a high luminance threshold, a low luminance threshold, and an average luminance threshold set based on the electro-optic conversion characteristics.
  • The transmission device described above, wherein the luminance level adjustment information inserted into the subtitle stream further includes a high luminance threshold, a low luminance threshold, and an average luminance threshold set based on the electro-optic conversion characteristics.
  • the transmission device according to (5) or (6), wherein the luminance level adjustment information inserted into the subtitle stream further includes color space information.
  • the subtitle encoder generates the subtitle stream based on the text information of the TTML caption, The transmission apparatus according to any one of (1) to (7), wherein the adjustment information insertion unit inserts the luminance level adjustment information using an element of metadata existing in a header having a TTML structure.
  • the subtitle encoder generates the subtitle stream based on text information of TTML subtitles, The transmission apparatus according to any one of (1) to (7), wherein the adjustment information insertion unit inserts the luminance level adjustment information using a styling extension element present in a header having a TTML structure.
  • the subtitle encoder generates the subtitle stream including segments as components, The transmission apparatus according to any one of (1) to (7), wherein the adjustment information insertion unit inserts a segment including the luminance level adjustment information into the subtitle stream.
  • (13) The transmission device according to (12), wherein information indicating an insertion position of the luminance level adjustment information in the subtitle stream is added to the identification information.
  • a video encoding step for generating a video stream including image data
  • a subtitle encoding step for generating a subtitle stream including subtitle data
  • An adjustment information insertion step of inserting luminance level adjustment information for adjusting the luminance level of subtitles into the video stream and / or the subtitle stream
  • A transmission method having a transmission step of transmitting, by a transmission unit, a container of a predetermined format including the video stream and the subtitle stream.
  • a receiving unit that receives a container in a predetermined format including a video stream having image data and a subtitle stream having subtitle information;
  • a video decoding unit that decodes the video stream to obtain image data;
  • a subtitle decoding unit that decodes the subtitle stream to obtain bitmap data of subtitles;
  • a brightness level adjustment unit that performs brightness level adjustment processing on the bitmap data based on the brightness level adjustment information;
  • a receiving apparatus comprising: a video superimposing unit that superimposes the bitmap data after the luminance level adjustment obtained by the luminance level adjusting unit on the image data obtained by the video decoding unit.
  • the luminance level adjustment unit performs luminance level adjustment using the luminance level adjustment information inserted in the video stream and / or the subtitle stream.
  • a luminance level adjustment information generation unit that generates the luminance level adjustment information is further provided.
  • the receiving device wherein the luminance level adjustment unit performs luminance level adjustment using the luminance level adjustment information generated by the luminance level adjustment information generation unit.
  • a receiving method comprising: a video superimposing step of superimposing the bitmap data after the luminance level adjustment obtained in the luminance level adjusting step on the image data obtained in the video decoding step.
  • a transmission unit that transmits a video stream including transmission video data obtained by performing high dynamic range photoelectric conversion on high dynamic range image data in a container of a predetermined format;
  • a transmission apparatus comprising: an identification information insertion unit that inserts identification information indicating that the video stream corresponds to a high dynamic range into the container.
  • a transmission unit that transmits a video stream including image data and a subtitle stream having subtitle text information in a container of a predetermined format;
  • a transmission apparatus comprising: an identification information insertion unit that inserts identification information indicating that subtitles are transmitted in a text code into the container.
  • the main feature of this technology is that the luminance level adjustment information for adjusting the luminance level of the subtitle is inserted into the video stream VS and the subtitle stream SS, so that the luminance level of the subtitle can be adjusted satisfactorily on the receiving side. In other words, it is possible to suppress visual fatigue and to prevent the atmosphere of the background image from being impaired (see FIG. 27).
  • DESCRIPTION OF SYMBOLS 10 ... Transmission/reception system, 100 ... Transmission apparatus, 101 ... Control unit, 102 ... HDR camera, 103 ... HDR photoelectric conversion unit, 103a ... Master monitor, 104 ... RGB/YCbCr conversion unit, 105 ... Video encoder, 106 ... Luminance level calculation unit, 106a, 106b ... Pixel value comparison units, 107 ... Threshold setting unit, 108 ... Subtitle generation unit, 109 ... Text format conversion unit, 110 ... Subtitle encoder, 111 ... System encoder, 112 ... Transmission unit, 200 ... Reception apparatus, 201 ... Control unit, 202 ... Reception unit, 203 ... System decoder, 204 ... Video decoder, 205 ... Subtitle text decoder, 206 ... Font development unit, 208 ... RGB/YCbCr conversion unit, 209 ... Luminance level adjustment unit, 210 ... Video superimposing unit, 211 ... YCbCr/RGB conversion unit, 212 ... HDR electro-optic conversion unit, 213 ... HDR display mapping unit, 214 ... CE monitor

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Security & Cryptography (AREA)
  • Library & Information Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Studio Circuits (AREA)
  • Television Systems (AREA)

Abstract

The problem addressed by the present invention is to enable subtitle luminance level adjustment to be carried out satisfactorily on the reception side. The solution according to the present invention is a method in which a video stream having image data is generated by a video encoder. A subtitle stream having subtitle information is generated by a subtitle encoder. Luminance level adjustment information for adjusting the luminance level of the subtitles is inserted into the video stream and/or the subtitle stream by an adjustment information insertion unit. A container in a predetermined format including the video stream and the subtitle stream is transmitted by a transmission unit.
PCT/JP2016/052594 2015-02-03 2016-01-29 Dispositif d'émission, procédé d'émission, dispositif de réception et procédé de réception WO2016125691A1 (fr)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US15/542,524 US10542304B2 (en) 2015-02-03 2016-01-29 Transmission device, transmission method, reception device, and reception method
RU2017126901A RU2712433C2 (ru) 2015-02-03 2016-01-29 Передающее устройство, способ передачи, приемное устройство и способ приема
JP2016573323A JP6891492B2 (ja) 2015-02-03 2016-01-29 送信装置、送信方法、受信装置および受信方法
EP16746522.8A EP3255892B1 (fr) 2015-02-03 2016-01-29 Dispositif d'émission, procédé d'émission, dispositif de réception et procédé de réception
CN201680007336.8A CN107211169B (zh) 2015-02-03 2016-01-29 发送装置、发送方法、接收装置以及接收方法

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015-019761 2015-02-03
JP2015019761 2015-02-03

Publications (1)

Publication Number Publication Date
WO2016125691A1 true WO2016125691A1 (fr) 2016-08-11

Family

ID=56564038

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/052594 WO2016125691A1 (fr) 2015-02-03 2016-01-29 Dispositif d'émission, procédé d'émission, dispositif de réception et procédé de réception

Country Status (6)

Country Link
US (1) US10542304B2 (fr)
EP (1) EP3255892B1 (fr)
JP (1) JP6891492B2 (fr)
CN (1) CN107211169B (fr)
RU (1) RU2712433C2 (fr)
WO (1) WO2016125691A1 (fr)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107450814A (zh) * 2017-07-07 2017-12-08 深圳Tcl数字技术有限公司 菜单亮度自动调节方法、用户设备及存储介质
JPWO2016152684A1 (ja) * 2015-03-24 2018-01-11 ソニー株式会社 送信装置、送信方法、受信装置および受信方法
JP2018129706A (ja) * 2017-02-09 2018-08-16 シャープ株式会社 受信装置、テレビジョン受像機、映像信号生成装置、送信装置、映像信号伝送システム、受信方法、プログラム、及び記録媒体
WO2018147196A1 (fr) * 2017-02-09 2018-08-16 シャープ株式会社 Dispositif d'affichage, récepteur de télévision, procédé de traitement vidéo, procédé de commande de rétroéclairage, dispositif de réception, dispositif de génération de signal vidéo, dispositif de transmission, système de transmission de signal vidéo, procédé de réception, programme, programme de commande et support d'enregistrement
JP6407496B1 (ja) * 2017-08-23 2018-10-17 三菱電機株式会社 映像再生装置
JP2019004267A (ja) * 2017-06-13 2019-01-10 エヌ・ティ・ティ・コムウェア株式会社 情報提供システム、及び情報提供方法
JP2019506817A (ja) * 2015-11-24 2019-03-07 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. 複数のhdr画像ソースの処理
JP2019040659A (ja) * 2018-08-07 2019-03-14 三菱電機株式会社 映像コンテンツ媒体
WO2019087775A1 (fr) * 2017-10-31 2019-05-09 ソニー株式会社 Dispositif de reproduction, procédé de reproduction, programme et support d'enregistrement
JPWO2018070255A1 (ja) * 2016-10-11 2019-07-25 ソニー株式会社 送信装置、送信方法、受信装置および受信方法
JP2019124869A (ja) * 2018-01-18 2019-07-25 日本放送協会 表示制御装置及びプログラム
JP2019153939A (ja) * 2018-03-02 2019-09-12 日本放送協会 文字スーパー合成装置及びプログラム
JP2020162160A (ja) * 2020-06-18 2020-10-01 三菱電機株式会社 映像再生方法
JP2022171984A (ja) * 2016-09-14 2022-11-11 ソニーグループ株式会社 送信装置、送信方法、受信装置および受信方法
WO2023095718A1 (fr) * 2021-11-29 2023-06-01 パナソニックIpマネジメント株式会社 Procédé de traitement vidéo, dispositif de traitement vidéo et programme
JP7502902B2 (ja) 2020-05-29 2024-06-19 キヤノン株式会社 画像処理装置、撮像装置、制御方法及びプログラム

Families Citing this family (4)

Publication number Priority date Publication date Assignee Title
US10536665B2 (en) * 2016-02-01 2020-01-14 Lg Electronics Inc. Device for transmitting broadcast signal, device for receiving broadcast signal, method for transmitting broadcast signal, and method for receiving broadcast signal
WO2020000135A1 (fr) 2018-06-25 2020-01-02 华为技术有限公司 Procédé et appareil de traitement d'une vidéo à plage dynamique élevée comprenant des sous-titres
CN111279687A (zh) * 2018-12-29 2020-06-12 深圳市大疆创新科技有限公司 视频的字幕处理方法和导播系统
CN115334348A (zh) * 2021-05-10 2022-11-11 腾讯科技(北京)有限公司 一种视频字幕调整方法、装置、电子设备和存储介质

Citations (5)

Publication number Priority date Publication date Assignee Title
WO2005036550A1 (fr) * 2003-10-14 2005-04-21 Lg Electronics Inc. Support d'enregistrement possedant une structure de donnees destinee a gerer la reproduction de sous-titre texte et procedes et appareils d'enregistrement et de reproduction
US20050117813A1 (en) * 2002-11-29 2005-06-02 Matsushita Electric Industrial Co., Ltd. Image reproducing apparatus and image reproducing method
US20050123283A1 (en) * 2003-12-08 2005-06-09 Li Adam H. File format for multiple track digital data
WO2012172460A1 (fr) * 2011-06-14 2012-12-20 Koninklijke Philips Electronics N.V. Traitement graphique pour des données vidéo hdr (gamme dynamique élevée)
WO2015007910A1 (fr) * 2013-07-19 2015-01-22 Koninklijke Philips N.V. Transport de métadonnées hdr

Family Cites Families (15)

Publication number Priority date Publication date Assignee Title
KR0144427B1 (ko) * 1994-11-30 1998-10-01 이형도 광 주사장치
JPH08265661A (ja) * 1995-03-23 1996-10-11 Sony Corp 字幕データ符号化/復号化方法および装置、および符号化字幕データ記録媒体
US5721792A (en) * 1996-08-29 1998-02-24 Sensormatic Electronics Corporation Control of brightness of text/graphics overlay
JP2001333350A (ja) * 2000-03-15 2001-11-30 Sony Corp 画質調整方法および画質調整装置
US6741323B2 (en) * 2002-08-12 2004-05-25 Digital Theater Systems, Inc. Motion picture subtitle system and method
JP2004194311A (ja) 2002-11-29 2004-07-08 Matsushita Electric Ind Co Ltd 映像再生装置及び映像再生方法
RU2358337C2 (ru) * 2003-07-24 2009-06-10 Эл Джи Электроникс Инк. Носитель записи, имеющий структуру данных для управления воспроизведением данных текстовых субтитров, записанных на нем, и устройства и способы записи и воспроизведения
KR100599118B1 (ko) * 2004-07-20 2006-07-12 삼성전자주식회사 자막신호표시상태를 변경하는 데이터재생장치 및 그 방법
JP4518194B2 (ja) * 2008-06-10 2010-08-04 ソニー株式会社 生成装置、生成方法、及び、プログラム
JP5685969B2 (ja) 2011-02-15 2015-03-18 ソニー株式会社 表示制御方法、表示制御装置
CN102843603A (zh) * 2012-08-17 2012-12-26 Tcl集团股份有限公司 一种智能电视及其字幕控制的方法
PL3783883T3 (pl) * 2013-02-21 2024-03-11 Dolby Laboratories Licensing Corporation Systemy i sposoby odwzorowywania wyglądu w przypadku komponowania grafiki nakładkowej
WO2015050857A1 (fr) * 2013-10-02 2015-04-09 Dolby Laboratories Licensing Corporation Transmission de métadonnées de gestion d'affichage sur hdmi
CN103905744B (zh) * 2014-04-10 2017-07-11 中央电视台 一种渲染合成方法及系统
US10419718B2 (en) * 2014-07-11 2019-09-17 Lg Electronics Inc. Method and device for transmitting and receiving broadcast signal

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050117813A1 (en) * 2002-11-29 2005-06-02 Matsushita Electric Industrial Co., Ltd. Image reproducing apparatus and image reproducing method
WO2005036550A1 (fr) * 2003-10-14 2005-04-21 Lg Electronics Inc. Recording medium having a data structure for managing reproduction of text subtitles, and recording and reproducing methods and apparatuses
US20050123283A1 (en) * 2003-12-08 2005-06-09 Li Adam H. File format for multiple track digital data
WO2012172460A1 (fr) * 2011-06-14 2012-12-20 Koninklijke Philips Electronics N.V. Graphics processing for HDR (high dynamic range) video data
WO2015007910A1 (fr) * 2013-07-19 2015-01-22 Koninklijke Philips N.V. HDR metadata transport

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
See also references of EP3255892A4 *
ZENJI NISHIKAWA: "Zenji Nishikawa's Big-Screen Mania No. 200: What is 'Ultra HD Blu-ray', the 4K Blu-ray coming into view?", 13 January 2015 (2015-01-13), XP009505167, Retrieved from the Internet <URL:http://av.watch.impress.co.jp/docs/series/dg/20150113_683374.html> [retrieved on 20160404] *

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2016152684A1 (ja) * 2015-03-24 2018-01-11 Sony Corporation Transmission device, transmission method, reception device, and reception method
JP2019506817A (ja) * 2015-11-24 2019-03-07 Koninklijke Philips N.V. Handling of multiple HDR image sources
JP2022171984A (ja) * 2016-09-14 2022-11-11 Sony Group Corporation Transmission device, transmission method, reception device, and reception method
JP7397938B2 (ja) 2016-09-14 2023-12-13 Saturn Licensing LLC Transmission device, transmission method, reception device, and reception method
JPWO2018070255A1 (ja) * 2016-10-11 2019-07-25 Sony Corporation Transmission device, transmission method, reception device, and reception method
JP7113746B2 (ja) 2016-10-11 2022-08-05 Sony Group Corporation Transmission device, transmission method, reception device, and reception method
WO2018147196A1 (fr) * 2017-02-09 2018-08-16 Sharp Corporation Display device, television receiver, video processing method, backlight control method, reception device, video signal generation device, transmission device, video signal transmission system, reception method, program, control program, and recording medium
JP2018129706A (ja) * 2017-02-09 2018-08-16 Sharp Corporation Reception device, television receiver, video signal generation device, transmission device, video signal transmission system, reception method, program, and recording medium
JP2019004267A (ja) * 2017-06-13 2019-01-10 NTT Comware Corporation Information providing system and information providing method
CN107450814A (zh) * 2017-07-07 2017-12-08 Shenzhen TCL Digital Technology Co., Ltd. Automatic menu brightness adjustment method, user equipment, and storage medium
CN107450814B (zh) * 2017-07-07 2021-09-28 Shenzhen TCL Digital Technology Co., Ltd. Automatic menu brightness adjustment method, user equipment, and storage medium
WO2019038848A1 (fr) * 2017-08-23 2019-02-28 Mitsubishi Electric Corporation Video content medium and video reproduction device
JP6407496B1 (ja) * 2017-08-23 2018-10-17 Mitsubishi Electric Corporation Video reproduction device
CN110999280B (zh) * 2017-08-23 2022-02-25 Mitsubishi Electric Corporation Video content medium and video reproduction device
CN110999280A (zh) * 2017-08-23 2020-04-10 Mitsubishi Electric Corporation Video content medium and video reproduction device
KR102558213B1 (ko) * 2017-10-31 2023-07-24 Sony Group Corporation Reproduction device, reproduction method, program, and recording medium
EP3706412A4 (fr) * 2017-10-31 2020-11-04 Sony Corporation Reproduction device, reproduction method, program, and recording medium
JPWO2019087775A1 (ja) * 2017-10-31 2020-11-19 Sony Corporation Reproduction device, reproduction method, program, and recording medium
KR20200077513A (ko) * 2017-10-31 2020-06-30 Sony Corporation Reproduction device, reproduction method, program, and recording medium
US11153548B2 (en) 2017-10-31 2021-10-19 Sony Corporation Reproduction apparatus, reproduction method, program, and recording medium
JP7251480B2 (ja) 2017-10-31 2023-04-04 Sony Group Corporation Reproduction device, reproduction method, and program
WO2019087775A1 (fr) * 2017-10-31 2019-05-09 Sony Corporation Reproduction device, reproduction method, program, and recording medium
JP7002948B2 (ja) 2018-01-18 2022-01-20 Japan Broadcasting Corporation Display control device and program
JP2019124869A (ja) * 2018-01-18 2019-07-25 Japan Broadcasting Corporation Display control device and program
JP2019153939A (ja) * 2018-03-02 2019-09-12 Japan Broadcasting Corporation Superimposed character compositing device and program
JP7012562B2 (ja) 2018-03-02 2022-01-28 Japan Broadcasting Corporation Superimposed character compositing device and program
JP2019040659A (ja) * 2018-08-07 2019-03-14 Mitsubishi Electric Corporation Video content medium
JP7502902B2 (ja) 2020-05-29 2024-06-19 Canon Inc. Image processing device, imaging device, control method, and program
JP7069378B2 (ja) 2020-06-18 2022-05-17 Mitsubishi Electric Corporation Video content medium
JP2021119663A (ja) * 2020-06-18 2021-08-12 Mitsubishi Electric Corporation Video reproduction method and video content medium
JP2020162160A (ja) * 2020-06-18 2020-10-01 Mitsubishi Electric Corporation Video reproduction method
WO2023095718A1 (fr) * 2021-11-29 2023-06-01 Panasonic Intellectual Property Management Co., Ltd. Video processing method, video processing device, and program

Also Published As

Publication number Publication date
EP3255892A1 (fr) 2017-12-13
EP3255892B1 (fr) 2021-12-29
RU2712433C2 (ru) 2020-01-28
RU2017126901A3 (fr) 2019-07-25
CN107211169B (zh) 2020-11-20
JPWO2016125691A1 (ja) 2017-11-09
JP6891492B2 (ja) 2021-06-18
RU2017126901A (ru) 2019-01-28
US20180270512A1 (en) 2018-09-20
EP3255892A4 (fr) 2018-12-26
CN107211169A (zh) 2017-09-26
US10542304B2 (en) 2020-01-21

Similar Documents

Publication Publication Date Title
JP6891492B2 (ja) Transmission device, transmission method, reception device, and reception method
US11716493B2 (en) Transmission device, transmission method, reception device, reception method, display device, and display method
JP6519329B2 (ja) Reception device, reception method, transmission device, and transmission method
CN101218827B (zh) Method and device for encoding video content comprising an image sequence and a logo
US20200296322A1 (en) Transmission device, transmission method, reception device, and reception method
US11330303B2 (en) Transmission device, transmission method, reception device, and reception method
JP6738413B2 (ja) Conveyance of high dynamic range and wide color gamut content in transport streams
EP3324637B1 (fr) Transmission device, transmission method, reception device, and reception method
KR20180069805A (ko) Transmission device, transmission method, reception device, and reception method
US10382834B2 (en) Transmission device, transmission method, receiving device, and receiving method
JP2024015131A (ja) Transmission device, transmission method, reception device, and reception method
US10904592B2 (en) Transmission apparatus, transmission method, image processing apparatus, image processing method, reception apparatus, and reception method
JP2016076957A (ja) Transmission device, transmission method, reception device, and reception method

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 16746522

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2016573323

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 15542524

Country of ref document: US

ENP Entry into the national phase

Ref document number: 2017126901

Country of ref document: RU

Kind code of ref document: A

REEP Request for entry into the european phase

Ref document number: 2016746522

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE