US20220167019A1 - Method and device for reconstructing image data from decoded image data - Google Patents

Method and device for reconstructing image data from decoded image data Download PDF

Info

Publication number
US20220167019A1
US20220167019A1 (Application US17/669,218)
Authority
US
United States
Prior art keywords
image data
parameters
image
data
recovery mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US17/669,218
Inventor
Pierre Andrivon
David Touze
Nicolas Caramelli
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
InterDigital VC Holdings Inc
Original Assignee
InterDigital VC Holdings Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from EP17305212.7A external-priority patent/EP3367684A1/en
Priority claimed from JP2017239281A external-priority patent/JP7086587B2/en
Application filed by InterDigital VC Holdings Inc filed Critical InterDigital VC Holdings Inc
Priority to US17/669,218 priority Critical patent/US20220167019A1/en
Assigned to INTERDIGITAL VC HOLDINGS, INC. reassignment INTERDIGITAL VC HOLDINGS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: THOMSON LICENSING SAS
Assigned to THOMSON LICENSING reassignment THOMSON LICENSING ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: Andrivon, Pierre, CARAMELLI, NICOLAS, TOUZE, DAVID
Publication of US20220167019A1 publication Critical patent/US20220167019A1/en
Pending legal-status Critical Current

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/65Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using error resilience
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/92Dynamic range modification of images or parts thereof based on global image properties
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/184Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/20Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/44Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/98Adaptive-dynamic-range coding [ADRC]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/434Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
    • H04N21/4344Remultiplexing of multiplex streams, e.g. by modifying time stamps or remapping the packet identifiers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/68Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits
    • H04N9/69Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits for modifying the colour signals by gamma correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20208High dynamic range [HDR] image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • the present principles generally relate to image/video reconstruction from decoded image/video data. Particularly, but not exclusively, the technical field of the present principles is related to recovering parameters for reconstructing an image from another image.
  • image data refer to one or several arrays of samples (pixel values) in a specific image/video format which specifies all information relative to the pixel values of an image (or a video) and all information which may be used by a display and/or any other device to visualize and/or decode an image (or video), for example.
  • An image comprises a first component, in the shape of a first array of samples, usually representative of the luminance (or luma) of the image, and second and third components, in the shape of other arrays of samples, usually representative of the color (or chroma) of the image.
  • the same information may also be represented by a set of arrays of color samples, such as the traditional tri-chromatic RGB representation.
  • a pixel value is represented by a vector of C values, where C is the number of components.
  • Each value of a vector is represented with a number of bits which defines a maximal dynamic range of the pixel values.
  • Standard-Dynamic-Range images are images whose luminance values are represented with a limited number of bits (typically 8). This limited representation does not allow correct rendering of small signal variations, in particular in dark and bright luminance ranges.
  • In high-dynamic-range images (HDR images), pixel values representing luminance levels are usually represented in floating-point format (typically at least 10 bits per component, namely float or half-float), the most popular format being the OpenEXR half-float format (16 bits per RGB component, i.e. 48 bits per pixel), or in integers with a long representation, typically at least 16 bits.
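  • As an illustration of these representations, the following sketch computes the number of code levels and the storage cost per pixel implied by the bit depths above (the helper names are ours; the arithmetic follows directly from the text):

```python
def code_levels(bits_per_component: int) -> int:
    # An n-bit integer component can distinguish 2**n luminance levels.
    return 2 ** bits_per_component

def bits_per_pixel(bits_per_component: int, components: int = 3) -> int:
    # Storage cost of one pixel, e.g. 3 components for RGB.
    return bits_per_component * components

print(code_levels(8))        # 256 levels: SDR, prone to banding in dark/bright ranges
print(code_levels(10))       # 1024 levels: 10-bit HDR signals such as PQ10 or HLG10
print(bits_per_pixel(16))    # 48 bits per pixel: OpenEXR half-float RGB
```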
  • HEVC: High Efficiency Video Coding (Recommendation ITU-T H.265, Telecommunication Standardization Sector of ITU (10/2014), Series H: Audiovisual and Multimedia Systems, Infrastructure of audiovisual services—Coding of moving video).
  • WCG: wide color gamut; HDR: high dynamic range; SDR: standard dynamic range.
  • SDR backward compatibility with decoding and rendering devices is an important feature in some video distribution systems, such as broadcasting or multicasting systems.
  • a solution based on a single layer coding/decoding process may be backward compatible, e.g. SDR compatible, and may leverage legacy distribution networks and services already in place.
  • Such a single layer based distribution solution enables both high quality HDR rendering on HDR-enabled Consumer Electronic (CE) devices, while also offering high quality SDR rendering on SDR-enabled CE devices.
  • Such a single layer based distribution solution generates an encoded signal, e.g. SDR signal, and associated metadata (of a few bytes per video frame or scene) that can be used to reconstruct another signal, e.g. HDR signal, from a decoded signal, e.g. SDR signal.
  • Metadata are parameter values used for the reconstruction of the signal and may be static or dynamic.
  • Static metadata means metadata that remains the same for a video (set of images) and/or a program.
  • Static metadata are valid for the whole video content (scene, movie, clip . . . ) and may not depend on the image content. They may define, for example, the image format, the color space, or the color gamut. For instance, SMPTE ST 2086:2014, “Mastering Display Color Volume Metadata Supporting High Luminance and Wide Color Gamut Images”, is such a kind of static metadata for use in a production environment.
  • the Mastering Display Colour Volume (MDCV) SEI (Supplemental Enhancement Information) message is the distribution flavor of ST 2086 for both H.264/AVC (“Advanced video coding for generic audiovisual services”, Series H: Audiovisual and Multimedia Systems, Recommendation ITU-T H.264, Telecommunication Standardization Sector of ITU, January 2012) and HEVC video codecs.
  • Dynamic metadata are content-dependent, that is, the metadata can change with the image/video content, e.g. for each image or for each group of images.
  • The SMPTE ST 2094:2016 family of standards, “Dynamic Metadata for Color Volume Transform”, is dynamic metadata for use in a production environment.
  • SMPTE ST 2094-30 can be distributed along an HEVC coded video stream thanks to the Color Remapping Information (CRI) SEI message.
  • For example, HDR 10-bit image data (e.g. image data whose signal is represented as an HLG10 or PQ10 signal as specified in Rec. ITU-R BT.2100-0, “Image parameter values for high dynamic range television for use in production and international programme exchange”) may be reconstructed from an input signal (typically 12 or 16 bits) and associated metadata, the dynamic range of the reconstructed signal being adapted according to the associated metadata, which may depend on characteristics of a target display.
  • However, the single layer based distribution solutions cannot work without the presence of dynamic metadata, some of which are critical for guaranteeing the success of the reconstruction of the video signal.
  • Similar issues may also occur when dynamic metadata are not aligned with an image to which graphics or an overlay has been added. This occurs, for example, when graphics (overlays, OSD, . . . ) are inserted in (added to) an image outside the distribution chain, because the metadata, computed for said image, are still applied once the graphics are inserted in (added to) the image. The metadata are then considered as not being aligned with that image because they may not be adapted to the part of said image which contains the graphics or overlay.
  • the present principles set out to remedy at least one of the drawbacks of the prior art with a method and a device for reconstructing image data representative of original image data from decoded image data and parameters obtained from a bitstream, said parameters having been processed from said original image data.
  • the method comprises:
  • the information data is explicitly signaled by a syntax element in a bitstream.
  • the information data (ID) is implicitly signaled.
  • the information data identifies the processing applied to the original image data to process the parameters.
  • a parameter is considered as being lost when it is not retrieved from a bitstream.
  • a parameter is considered as being corrupted when at least one of the following conditions is fulfilled:
  • a recovery mode is to replace all the parameters by recovered parameters, even if only some of the parameters are lost, corrupted or not aligned with the decoded image data to which graphics or an overlay has been added.
  • a recovery mode is to replace each lost, corrupted or not aligned parameter by a recovered parameter.
  • a recovery mode is to replace a lost, corrupted or not aligned parameter by a value from a set of pre-determined parameter values previously stored.
  • a recovery mode is selected according to either at least one characteristic of the original image data, or of a mastering display used to grade the original image data or the image data to be reconstructed, or according to at least one characteristic of the reconstructed image data or of a target display.
  • the present principles also relate to a device comprising means for implementing the above method, and to a non-transitory processor-readable medium carrying program code instructions to execute the steps of the above method when this program is executed on a computer.
  • FIG. 1 shows a diagram of the steps of a method for reconstructing an image I 3 representative of original image I 1 from a decoded image in accordance with an example of the present principles
  • FIG. 2 shows an end-to-end workflow supporting content production and delivery to HDR and SDR enabled CE displays in accordance with an example of the present principles
  • FIG. 3 shows a variant of the end-to-end workflow of FIG. 2 in accordance with an embodiment of the present principles
  • FIG. 4 shows a variant of the end-to-end workflow of FIG. 2 in accordance with another embodiment of the present principles
  • FIG. 5 a shows an illustration of a perceptual transfer function
  • FIG. 5 b shows an example of a piece-wise curve used for mapping
  • FIG. 5 c shows an example of a curve used for converting back a signal to a linear light domain
  • FIG. 6 shows another example of the use of a method for reconstructing an image from a decoded image data and parameters obtained from a bitstream in accordance with an example of the present principles
  • FIG. 7 shows an example of an architecture of a device in accordance with an example of present principles
  • each block represents a circuit element, module, or portion of code which comprises one or more executable instructions for implementing the specified logical function(s).
  • the function(s) noted in the blocks may occur out of the order noted. For example, two blocks shown in succession may, in fact, be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending on the functionality involved.
  • capital symbols, for example (C 1 , C 2 , C 3 ), designate components of a first image, and
  • lower-case symbols, for example (c 1 , c 2 , c 3 ), designate components of another image whose dynamic range of the luminance is lower than the dynamic range of the luminance of the first image.
  • the dynamic range of the luminance of an image is the ratio of the maximum to the minimum of the luminance values of said image.
  • For example, the dynamic range of the luminance of an SDR image is 500 (100 cd/m2 over 0.2 cd/m2), while it is 10000 (1000 cd/m2 over 0.1 cd/m2) for an HDR image.
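  • Written as a worked equation, with L max and L min the bounds of the luminance values:

```latex
DR = \frac{L_{\max}}{L_{\min}},\qquad
DR_{\mathrm{SDR}} = \frac{100\ \mathrm{cd/m^2}}{0.2\ \mathrm{cd/m^2}} = 500,\qquad
DR_{\mathrm{HDR}} = \frac{1000\ \mathrm{cd/m^2}}{0.1\ \mathrm{cd/m^2}} = 10000
```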
  • prime symbols designate gamma-compressed components of a first image when those prime symbols are capital symbols and prime symbols, for example (y′,u′,v′), designate gamma-compressed components of a second image when those prime symbols are lower-case symbols.
  • the present principles are described for coding/decoding/reconstructing an image but extend to the coding/decoding/reconstruction of a sequence of images (video) because each image of the sequence is sequentially encoded/decoded/reconstructed as described below.
  • FIG. 1 shows a diagram of the steps of a method for reconstructing an image I 3 representative of original image I 1 from a decoded image in accordance with an example of the present principles.
  • a set of parameters SP is obtained to reconstruct the image I 3 .
  • These parameters are either parameters P obtained from the bitstream B, or recovered parameters P r when at least one parameter P is lost, corrupted or not aligned with a decoded image to which graphics or an overlay has been added.
  • a module M 1 obtains the decoded image and in step 12 , a module M 2 reconstructs the image I 3 from the decoded image by using the set of parameters SP.
  • the decoded image data is obtained from the bitstream (signal) B or any other bitstream and, possibly, said bitstreams may be stored on a local memory or any other storage medium.
  • a module M 3 obtains the parameters P required to reconstruct the image I 3 .
  • a module M 4 checks if at least one of the parameters P is lost, corrupted or not aligned with the decoded image to which graphics or an overlay has been added.
  • the set of parameters SP only comprises the parameters P.
  • a module M 5 obtains an information data ID indicating how said parameters have been processed, in sub-step 104 (of step 10 ), a module M 6 selects a recovery mode RM i according to said information data ID, and in sub-step 105 (of step 10 ), a module M 7 recovers said at least one lost, corrupted or not aligned parameter by applying the selected recovery mode RM i .
  • the at least one recovered parameter P r is added to the set of parameters SP.
  • step 12 the image I 3 is then reconstructed taking also into account said at least one recovered parameter P r .
  • the method is advantageous because it allows obtaining parameters for a single layer based distribution solution when multiple single layer based distribution solutions share a same set of syntax elements for carrying a common set of parameters, and when said single layer based distribution solutions require different recovery modes (processes) for recovering lost, corrupted or not aligned parameters, thus guaranteeing the success of the reconstruction of the image I 3 for each of said single layer based distribution solutions.
  • the method is also advantageous when a CE device, typically a set-top-box or a player, inserts graphics on top of a decoded image, because the method selects a specific recovery mode to replace the not aligned parameters by parameters adapted to the decoded image I 2 plus the graphics (or overlay), and reconstructs the image I 3 by using said recovered parameters from said decoded image to which the graphics or overlay have been added, thus avoiding flickering artefacts or undesired effects impacting the reconstructed image quality. The overall control flow is sketched after this paragraph.
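  • A minimal sketch of steps 10 - 12 is given below, assuming hypothetical helper functions for decoding, parsing, checking and post-processing (none of these names come from the patent; this is not the normative process):

```python
def reconstruct(bitstream):
    decoded = decode_image(bitstream)              # step 11: obtain the decoded image
    params = parse_parameters(bitstream)           # steps 101/102: parameters P from metadata
    sp = dict(params)                              # set of parameters SP

    bad = [name for name, p in params.items()
           if is_lost(p) or is_corrupted(p) or not is_aligned(p, decoded)]
    if bad:                                        # step 100: at least one parameter unusable
        info_id = get_information_data(bitstream)  # step 103: how the parameters were processed
        mode = select_recovery_mode(info_id)       # step 104: recovery mode RMi
        for name in bad:
            sp[name] = mode.recover(name)          # step 105: recovered parameter Pr
    return post_process(decoded, sp)               # step 12: reconstruct the image I3
```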
  • the method as described in reference with FIG. 1 may be used in various applications when an image must be reconstructed from a decoded image.
  • FIG. 2 shows an end-to-end workflow supporting content production and delivery to HDR and SDR enabled CE displays in accordance with an example of the present principles.
  • This workflow involves a single layer based distribution solution with associated metadata and illustrates an example of the use of a method for reconstructing an image I 3 representative of original image data I 1 from a decoded image data and a set of parameters SP obtained in accordance with an example of the present principles illustrated in FIG. 1 .
  • this single layer based distribution solution comprises a pre-processing part and a post-processing part.
  • a pre-processing stage 20 decomposes the original image I 1 into an output image I 12 and a set of parameters SP, and a switching step 24 determines if either the original image I 1 or the output image I 12 is encoded in the bitstream B (step 23 ).
  • the image I 2 may be encoded with any legacy video codec and the bitstream B is carried throughout an existing legacy distribution network with accompanying associated metadata (set of parameters SP) conveyed on a specific channel or embedded in the bitstream B.
  • the bitstream B with accompanying metadata may be stored on a storage medium such as a Blu-ray disc, or a memory or a register of a Set-Top-Box, for example.
  • the accompanying associated metadata is carried by another specific channel or stored on a separate storage medium.
  • the video is coded with the H.265/HEVC codec (Recommendation ITU-T H.265, Telecommunication Standardization Sector of ITU (10/2014), Series H: Audiovisual and Multimedia Systems, Infrastructure of audiovisual services—Coding of moving video, High Efficiency Video Coding) or with H.264/AVC (“Advanced video coding for generic audiovisual services”, Series H: Audiovisual and Multimedia Systems, Recommendation ITU-T H.264, Telecommunication Standardization Sector of ITU, January 2012).
  • when the information data ID determines that the original image I 1 (possibly represented by the components (C 1 , U′, V′) or as a Y′CbCr 4:2:0 PQ10 or HLG10 video signal) is encoded in step 23 , said original image I 1 may be encoded with the HEVC Main 10 profile.
  • the output image I 12 which can be represented as a Y′CbCr 4:2:0 gamma transfer characteristics (Standard Dynamic Range) signal may be encoded with any HEVC profile including Main 10 or Main profiles.
  • the information data ID may be also conveyed as associated metadata (step 23 ).
  • a decoded image is obtained from the bitstream B (step 11 )
  • a set of parameters SP is obtained as explained in FIG. 1 (step 10 )
  • a post-processing stage 12 which is the functional inverse of the pre-processing stage 20 , reconstructs an image I 3 from the decoded image and the set of parameters SP.
  • This single layer based distribution solution may also comprise optional format adapting steps 21 , 22 , 25 , 26 .
  • the format of the original image I 1 may be adapted to a specific format (C 1 , U′, V′) of the input of the pre-processing stage 20
  • the format (c, u′, v′) of the output image I 12 may also be adapted to a specific output format before encoding
  • the format of the decoded image may be adapted to a specific format of the input of the post-processing stage 12
  • the image I 3 may be adapted to at least one characteristic of a targeted apparatus (e.g. a Set-Top-Box, a connected TV, an HDR/SDR enabled CE device, a Blu-ray disc player).
  • an inverse gamut mapping may be used when the decoded image and the image I 3 or the original image I 1 are represented in different color spaces and/or gamut.
  • Said format adaptation steps may include color space conversion and/or color gamut mapping.
  • Usual format adapting processes may be used such as RGB-to-YUV or YUV-to-RGB conversion, BT.709-to-BT.2020 or BT.2020-to-BT.709, down-sampling or up-sampling chroma components, etc.
  • the well-known YUV color space refers also to the well-known YCbCr in the prior art.
  • Annex E of the ETSI recommendation TS 103 433 V1.1.1, release 2016-08, provides an example of format adapting processes, and Annex D an example of inverse gamut mapping.
  • Said input format adaptation step 21 may also include adapting the bit depth of the original image I 1 to a specific bit depth, such as 10 bits for example, by applying a transfer function on the original image I 1 .
  • For example, a PQ or HLG transfer function may be used (Rec. ITU-R BT.2100-0).
  • the pre-processing stage 20 comprises steps 200 - 202 .
  • a first component c 1 of the output image I 12 is obtained by mapping a first component C 1 of the original image I 1 : c 1 = TM(C 1 ), where TM is a mapping function.
  • The mapping function TM may reduce or increase the dynamic range of the luminance of the original image I 1 , and its inverse may increase or reduce the dynamic range of the luminance of an image.
  • second and third components u′, v′ of the output image I 12 are derived by correcting the second and third components U′, V′ of the original image I 1 according to the first component c 1 .
  • the correction of the chroma components may be maintained under control by tuning the parameters of the mapping.
  • the color saturation and hue are thus under control.
  • the second and third components U′ and V′ are divided by a scaling function β 0 (c 1 ) whose value depends on the first component c 1 .
  • the first component c 1 may be adjusted to further control the perceived saturation, as follows:
  • This step 202 allows controlling the luminance of the output image I 12 to guarantee the perceived color matching between the colors of the output image I 12 and the colors of the original image I 1 .
  • the set of parameters SP may comprise parameters relative to the mapping function TM or its inverse ITM, and to the scaling function β 0 (c 1 ). These parameters are associated with dynamic metadata and carried in a bitstream, for example the bitstream B.
  • the parameters a and b may also be carried in a bitstream.
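  • Gathering steps 200 - 202 and their functional inverses 120 - 122 , the processing can be sketched as follows. This is a reconstruction in the style of ETSI TS 103 433 (SL-HDR1); the exact normative equations, including clipping and fixed-point details, are given in that specification:

```latex
% pre-processing (steps 200-202)
c_1 = \mathrm{TM}(C_1), \qquad
u' = \frac{U'}{\beta_0(c_1)}, \quad v' = \frac{V'}{\beta_0(c_1)}, \qquad
c_1 \leftarrow c_1 - \max\!\left(0,\ a\,u' + b\,v'\right)

% post-processing (steps 120-122), the functional inverse
c_1 = c + \max\!\left(0,\ a\,u' + b\,v'\right), \qquad
C_1 = \mathrm{ITM}(c_1), \qquad
U' = u'\,\beta_0(c_1), \quad V' = v'\,\beta_0(c_1)
```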
  • step 10 a set of parameters SP is obtained as explained in FIG. 1 .
  • the set of parameters SP is carried by static/dynamic metadata obtained from a specific channel or from a bitstream, including the bitstream B, possibly store on a storage medium.
  • step 11 the module M 1 obtains a decoded image by decoding the bitstream B and the decoded image is then available for either an SDR or HDR enabled CE display.
  • the post-processing stage 12 comprises steps 120 - 122 .
  • the first component c of the decoded image may be adjusted as follows:
  • step 121 the first component C 1 of the image I 3 is obtained by inverse-mapping the first component c 1 :
  • step 122 the second and third component U′, V′ of the image I 3 are derived by inverse correcting the second and third components u′, v′ of the decoded image according to the component c 1 .
  • second and third components u′ and v′ are multiplied by a scaling function β 0 (c 1 ) whose value depends on the first component c 1 .
  • the first component C 1 of the original image I 1 is a linear-light luminance component L obtained from the RGB component of the original image I 1 by:
  • the second and third component U′, V′ are derived by applying a pseudo-gammatization using square-root (close to BT.709 OETF) to the RGB components of the original image I 1 :
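  • As an illustration, the luminance and the pseudo-gammatized chroma components of this embodiment may be written as below. The BT.709-style luminance weights and the 2×3 chroma matrix M are shown as assumptions (the actual coefficients depend on the color primaries of the original image I 1 ):

```latex
L = 0.2126\,R + 0.7152\,G + 0.0722\,B
\qquad\text{and}\qquad
\begin{pmatrix} U' \\ V' \end{pmatrix}
= M \begin{pmatrix} \sqrt{R} \\ \sqrt{G} \\ \sqrt{B} \end{pmatrix}
```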
  • step 200 the first component y 1 of the output image I 12 is obtained by mapping said linear-light luminance component L:
  • step 201 , the second and third components u′, v′ of the output image I 12 are derived by correcting the second and third components U′, V′ according to the first component y 1 .
  • a linear-light luminance component L of the image I 3 is obtained by inverse-mapping the first component c 1 :
  • step 122 the second and third component U′, V′ of the image I 3 are derived by inverse correcting the second and third components u′, v′ of the output image I 12 according to the first component y 1 .
  • step 122 , the second and third components u′ and v′ are multiplied by a scaling function β 0 (y 1 ) whose value depends on the first component y 1 .
  • the first component C 1 of the original image I 1 is a component Y′ obtained from the gamma-compressed RGB components of the original image I 1 by:
  • γ may be a gamma factor, preferably equal to 2.4.
  • the component Y′, which is a non-linear signal, is different from the linear-light luminance component L.
  • step 200 the first component y′ 1 of the output image I 12 is obtained by mapping said component Y′:
  • step 121 a reconstructed component is obtained by inverse-mapping the first component y′ 1 :
  • ITM is the inverse of the mapping function TM.
  • the values of the reconstructed component thus belong to the dynamic range of the values of the component Y′.
  • step 201 , the second and third components u′, v′ of the output image I 12 are derived by correcting the second and third components U′, V′ according to the first component y′ 1 and the reconstructed component .
  • This step 201 allows controlling the colors of the output image I 12 and guarantees their matching to the colors of the original image I 1 .
  • the correction of the chroma components may be maintained under control by tuning the parameters of the mapping (inverse mapping).
  • the color saturation and hue are thus under control.
  • Such a control is not possible, usually, when a non-parametric perceptual transfer function is used.
  • the second and third components U′ and V′ are divided by a scaling function β 0 (y′ 1 ) whose value depends on the ratio of the reconstructed component over the component y′ 1 :
  • where the constant depends on the color primaries of the original image I 1 (equal to 1.3 for BT.2020, for example).
  • a component of the image I 3 is obtained by inverse-mapping the first component y′ 1 :
  • step 122 the second and third component U′, V′ of the image I 3 are derived by inverse correcting the second and third components u′, v′ of the decoded image according to the first component y′ 1 and the component .
  • second and third components u′ and v′ are multiplied by the scaling function β 0 (y′ 1 ).
  • the mapping function TM is based on a perceptual transfer function, whose goal is to convert a component of an original image I 1 into a component of an output image I 12 , thus reducing (or increasing) the dynamic range of the values of their luminance.
  • the values of a component of an output image I 12 belong thus to a lower (or greater) dynamic range than the values of the component of an original image I 1 .
  • Said perceptual transfer function uses a limited set of control parameters.
  • FIG. 5 a shows an illustration of a perceptual transfer function which may be used for mapping luminance components but a similar perceptual transfer function for mapping luma components may be used.
  • the mapping is controlled by a mastering display peak luminance parameter (equal to 5000 cd/m 2 in FIG. 5 a ).
  • a signal stretching between content-dependent black and white levels is applied.
  • the converted signal is mapped using a piece-wise curve constructed out of three parts, as illustrated in FIG. 5 b .
  • the lower and upper sections are linear, the steepness being determined by the shadowGain and highlightGain parameters respectively.
  • the mid-section is a parabola providing a smooth bridge between the two linear sections.
  • the width of the cross-over is determined by the midToneWidthAdjFactor parameter.
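  • A minimal sketch of such a three-part curve is shown below. It is our own illustrative construction, not the normative SMPTE ST 2094-20 curve: two linear sections whose slopes play the role of shadowGain and highlightGain, bridged by a parabola (a quadratic Bezier tangent to both lines) whose width plays the role of midToneWidthAdjFactor:

```python
def tone_map_curve(x, shadow_gain=1.5, highlight_gain=0.5, mid_width=0.3):
    """Map a normalized luminance x in [0, 1]: linear shadows, linear
    highlights, and a smooth parabolic mid-section of width mid_width.
    Assumes shadow_gain != highlight_gain."""
    lo = lambda t: shadow_gain * t                      # lower linear section
    hi = lambda t: highlight_gain * (t - 1.0) + 1.0     # upper linear section
    # Cross-over point: intersection of the two linear sections.
    xc = (highlight_gain - 1.0) / (highlight_gain - shadow_gain)
    x0, x1 = xc - mid_width / 2.0, xc + mid_width / 2.0
    if x <= x0:
        return lo(x)
    if x >= x1:
        return hi(x)
    # Quadratic Bezier from (x0, lo(x0)) to (x1, hi(x1)) with control point
    # (xc, lo(xc)): tangent to both lines, i.e. a smooth parabolic bridge.
    t = (x - x0) / (x1 - x0)
    return (1 - t) ** 2 * lo(x0) + 2 * (1 - t) * t * lo(xc) + t ** 2 * hi(x1)
```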
  • All the parameters controlling the mapping may be conveyed as metadata, for example by using an SEI message as defined in JCTVC-W0133 to carry the SMPTE ST 2094-20 metadata.
  • FIG. 5 c shows an example of the inverse of the perceptual transfer function TM ( FIG. 5 a ) to illustrate how a perceptual optimized video signal may be converted back to the linear light domain based on a targeted legacy display maximum luminance, for example 100 cd/m 2 .
  • step 10 the set of parameters SP is obtained to reconstruct an image I 3 from a decoded image .
  • The set of parameters SP may be obtained from metadata carried in a bitstream, for example the bitstream B.
  • Clause 6 of the recommendation ETSI TS 103 433 V1.1.1 (2016-08) provides an example of the syntax of said metadata.
  • the syntax of the recommendation ETSI TS 103 433 v1.1.1 is described for reconstructing an HDR video from an SDR video but this syntax may extend to the reconstruction of any image I 3 from any decoded image .
  • the post-processing (step 12 ) operates on an inverse mapping function ITM and a scaling function β 0 (.) that are derived from dynamic metadata because they depend on the first component c 1 .
  • said dynamic metadata may be conveyed according to either a so-called parameter-based mode or a table-based mode.
  • the parameter-based mode may be of interest for distribution workflows whose primary goal is to provide direct SDR backward compatible services with very low additional payload or bandwidth usage for carrying the dynamic metadata.
  • the table-based mode may be of interest for workflows equipped with low-end terminals or when a higher level of adaptation is required for representing properly both HDR and SDR streams.
  • dynamic metadata to be conveyed are luminance mapping parameters representative of the inverse function ITM, i.e. the parameters controlling the mapping curve, such as the shadowGain, highlightGain and midToneWidthAdjFactor parameters introduced above, and
  • color correction parameters saturationGainNumVal, saturationGainX(i) and saturationGainY(i) used to define the function β 0 (.) (ETSI recommendation TS 103 433 V1.1.1 clauses 6.3.5 and 6.3.6).
  • parameters a and b may be respectively carried/hidden in the saturationGain function parameters as explained above.
  • Typical dynamic metadata payload is about 25 bytes per scene.
  • in step 101 , the CVRI (Colour Volume Reconstruction Information) SEI message is parsed to obtain the mapping parameters and the color-correction parameters.
  • step 12 the inverse mapping function ITM (so-called lutMapY) is reconstructed (derived) from the obtained mapping parameters (see recommendation ETSI TS 103 433 V1.1.1 clause 7.2.3.1 for more details).
  • step 12 , the scaling function β 0 (.) (so-called lutCC) is also reconstructed (derived) from the obtained color-correction parameters (see recommendation ETSI TS 103 433 V1.1.1 clause 7.2.3.2 for more details).
  • dynamic metadata to be conveyed are pivot points of a piece-wise linear curve representative of the inverse mapping function ITM.
  • the dynamic metadata are luminanceMappingNumVal that indicates the number of the pivot points, luminanceMappingX that indicates the x values of the pivot points, and luminanceMappingY that indicates the y values of the pivot points (see recommendation ETSI TS 103 433 V1.1.1 clauses 6.2.7 and 6.3.7 for more details).
  • other dynamic metadata to be conveyed may be pivot points of a piece-wise linear curve representative of the scaling function β 0 (.).
  • the dynamic metadata are colorCorrectionNumVal that indicates the number of pivot points, colorCorrectionX that indicates the x values of pivot points, and colorCorrectionY that indicates the y values of the pivot points (see the recommendation ETSI TS 103 433 V1.1.1 clauses 6.2.8 and 6.3.8 for more details).
  • Typical payload is about 160 bytes per scene.
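  • A decoder-side sketch of evaluating such a piece-wise linear curve from its conveyed pivot points is shown below (ordinary linear interpolation between adjacent pivots; variable names are ours):

```python
import bisect

def eval_piecewise_linear(x, pivot_x, pivot_y):
    """Evaluate a piece-wise linear curve given its pivot points, e.g.
    (luminanceMappingX, luminanceMappingY) for the inverse mapping function
    ITM, or (colorCorrectionX, colorCorrectionY) for the scaling function."""
    if x <= pivot_x[0]:
        return pivot_y[0]
    if x >= pivot_x[-1]:
        return pivot_y[-1]
    i = bisect.bisect_right(pivot_x, x) - 1     # segment containing x
    t = (x - pivot_x[i]) / (pivot_x[i + 1] - pivot_x[i])
    return pivot_y[i] + t * (pivot_y[i + 1] - pivot_y[i])

# The look-up table (so-called lutMapY) can then be tabulated once per scene:
lut = [eval_piecewise_linear(i / 1023.0, [0.0, 0.5, 1.0], [0.0, 0.8, 1.0])
       for i in range(1024)]
```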
  • step 102 the CRI (Colour Remapping Information) SEI message (as specified in HEVC/H.265 version published in December 2016) is parsed to obtain the pivot points of a piece-wise linear curve representative of the inverse mapping function ITM and the pivot points of a piece-wise linear curve representative of the scaling function ⁇ 0 (.), and the chroma to luma injection parameters a and b.
  • step 12 , the inverse mapping function ITM is derived from those pivot points relative to a piece-wise linear curve representative of the inverse mapping function ITM (see recommendation ETSI TS 103 433 V1.1.1 clause 7.2.3.3 for more details).
  • step 12 , the scaling function β 0 (.) is also derived from those pivot points relative to a piece-wise linear curve representative of the scaling function β 0 (.) (see recommendation ETSI TS 103 433 V1.1.1 clause 7.2.3.4 for more details).
  • static metadata also used by the post-processing stage may be conveyed by an SEI message.
  • the selection of either the parameter-based mode or table-based mode may be carried by the Information (TSI) user data registered SEI message (payloadMode) as defined by the recommendation ETSI TS 103 433 V1.1.1 (clause A.2.2).
  • Static metadata such as, for example, the color primaries or the maximum mastering display luminance are conveyed by a Mastering Display Colour Volume (MDCV) SEI message as defined in AVC and HEVC.
  • the information data ID is explicitly signaled by a syntax element in a bitstream and thus obtained by parsing the bitstream.
  • said syntax element is a part of an SEI message.
  • said information data ID identifies the processing applied to the original image I 1 to process the set of parameters SP.
  • the information data ID may then be used to deduce how to use the parameters to reconstruct the image I 3 (step 12 ).
  • the information data ID indicates that the parameters SP have been obtained by applying the pre-processing stage (step 20 ) to an original HDR image I 1 and that the decoded image is an SDR image.
  • the information data ID indicates that the parameters have been obtained by applying the pre-processing stage (step 20 ) to an HDR 10-bit image (input of step 20 ), that the decoded image is an HDR10 image, and that the mapping function TM is a PQ transfer function.
  • the information data ID indicates that the parameters have been obtained by applying the pre-processing stage (step 20 ) to an HDR10 image (input of step 20 ), that the decoded image is an HLG10 image, and that the mapping function TM is an HLG transfer function applied to the original image I 1 .
  • the information data ID is implicitly signaled.
  • the syntax element transfer_characteristics present in the VUI of HEVC (Annex E) or AVC (Annex E) usually identifies a transfer function (mapping function TM) to be used. Because different single layer distribution solutions use different transfer functions (PQ, HLG, . . . ), the syntax element transfer_characteristics may be used to identify implicitly the recovery mode to be used.
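  • For illustration, such an implicit selection may reduce to a table lookup. The code points follow HEVC Annex E (1 = BT.709, 16 = SMPTE ST 2084/PQ, 18 = ARIB STD-B67/HLG); the recovery-mode labels are placeholders:

```python
# VUI transfer_characteristics code points (HEVC Annex E) -> recovery mode.
TRANSFER_TO_RECOVERY_MODE = {
    16: "recovery_mode_pq",   # PQ-based single layer distribution solution
    18: "recovery_mode_hlg",  # HLG-based single layer distribution solution
    1:  "recovery_mode_sdr",  # SDR/BT.709-based solution
}

def implicit_recovery_mode(transfer_characteristics: int) -> str:
    # Fall back to a default mode for unhandled code points.
    return TRANSFER_TO_RECOVERY_MODE.get(transfer_characteristics,
                                         "recovery_mode_default")
```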
  • the information data ID may also be implicitly signaled by a service defined at a higher transport or system layer.
  • a peak luminance value and the color space of the image I 3 may be obtained by parsing the MDCV SEI message carried by the bitstream, and the information data ID may be deduced from specific combinations of peak luminance values and color spaces (color primaries).
  • a parameter P is considered as being lost when it is not present in (not retrieved from) the bitstream.
  • a parameter P is considered as being lost (not present) when the SEI message is not transmitted in the bitstream or when the parsing of the SEI message fails.
  • a parameter P is considered as being corrupted when at least one of the following conditions is fulfilled:
  • a recovery mode RM i is to replace all the parameters P by recovered parameters P r even if only some of the parameters P are corrupted, lost or not aligned with the decoded image to which graphics or an overlay has been added.
  • another recovery mode RM i is to replace each lost, corrupted or not aligned parameter P by a recovered parameter P r .
  • a recovery mode RM i is to replace a lost, corrupted or not aligned parameter P by a value of a set of pre-determined parameter values previously stored.
  • a set of pre-determined parameter values may gather a pre-determined value for at least one metadata carried by the CRI and/or CVRI SEI message.
  • a specific set of pre-determined parameter values may be determined, for example, for each single layer based distribution solution identified by the information data ID.
  • Table 1 is a non-limitative example of specific sets of predetermined values for three different single layer based distribution solutions.
  • a recovery mode RMi is selected according to either at least one characteristic of the original video (image I 1 ), typically the peak luminance of the original content, or of a mastering display used to grade the input image data or the image data to be reconstructed, or at least one characteristic of another video, typically the peak luminance of the reconstructed image I 3 , or of a target display.
  • a recovery mode RMi is to check if a characteristic of the original video (I 1 ) or of a mastering display used to grade the input image data or the image data to be reconstructed (e.g. a characteristic as defined in ST 2086) is present and to compute at least one recovered parameter from said characteristic. If said characteristic of the input video is not present and a characteristic of a mastering display is not present, one checks if a characteristic of the reconstructed image I 3 or of the target display is present (e.g. the peak luminance as defined in CTA-861.3) and computes at least one recovered parameter from said characteristic.
  • At least one recovered parameter is a fixed value (e.g. fixed by a video standardization committee or an industry forum, for example 1000 cd/m2).
  • Table 2 provides examples of recovery values for some parameters used by the post-processing stage, depending on the presence of available information on the input/output content and the mastering/target displays.
  • the parameters matrix_coefficient_value[i] may be set according to the input/output video color space, BT.709 or BT.2020 (characteristic of the input or output video), obtained by parsing an MDCV SEI/ST 2086 message if present.
  • the recovery mode depends on said color spaces.
  • the parameter shadow_gain_control may be computed according to a value obtained by parsing an MDCV SEI/ST 2086 message if present.
  • an information representative of the peak luminance of a mastering display is obtained from said MDCV SEI/ST 2086 message, and the parameter shadow_gain_control is computed from it (recovery mode 1) when hdrDisplayMaxLuminance is known.
  • This value may also be set to the peak luminance of a target (presentation) display when this characteristic is available. Otherwise (recovery mode 2), it is arbitrarily set to a default value, typically 1000 cd/m2. This default value corresponds to a currently observed reference maximum display mastering luminance in most of the current HDR markets. A sketch of this fallback chain is given below.
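  • A minimal sketch of the fallback chain, with the actual derivation abstracted behind a hypothetical helper (the normative formula of recovery mode 1 is not reproduced here):

```python
DEFAULT_MASTERING_PEAK = 1000.0  # cd/m2, observed reference maximum in current HDR markets

def recover_shadow_gain_control(mdcv_peak=None, target_display_peak=None):
    # Recovery mode 1: derive from the mastering display peak luminance
    # (MDCV SEI / ST 2086) or, failing that, from the target display peak
    # luminance (e.g. CTA-861.3). Recovery mode 2: use the default value.
    peak = mdcv_peak if mdcv_peak is not None else target_display_peak
    if peak is None:
        peak = DEFAULT_MASTERING_PEAK
    return shadow_gain_from_peak_luminance(peak)  # hypothetical derivation helper
```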
  • FIG. 6 shows another example of the use of a method for reconstructing an image I 3 from a decoded image data and a set of parameters SP obtained from a bitstream B in accordance with an example of the present principles.
  • Said example is intended to be implemented, at least partially, in any (mid-)device implementing an overlay inserting and mixing mechanism (e.g. a Set-Top-Box or an UltraHD Blu-ray player) and signaling/sending an event (typically an overlay_present_flag set to 1) to a decision module when an overlay has to be added to the decoded image.
  • the set of parameters SP is obtained (step 10 )
  • the decoded image is obtained (step 11 )
  • the image I 3 is reconstructed (step 12 ) as described in FIG. 1 .
  • the decoded image is obtained (step 11 ) and, in step 60 , a composite image I′ 2 is obtained by adding graphics (overlay) to the decoded image .
  • the information data ID is then obtained (step 103 ), a recovery mode is selected (step 104 ) and the selected recovery mode RMi is applied (step 105 ) to obtain recovered parameters P r .
  • the image I 3 is then reconstructed (step 12 ) from the recovered parameters P r and the decoded image .
  • the parameters P r are obtained by training on a large set of images of different aspects (bright, dark, with logos, . . . ).
  • the step 12 may be implemented in a remote device such as a TV set.
  • In that case, either the decoded image plus the parameters P, or the composite image I′ 2 plus the parameters P r , are transmitted to said TV set.
  • the modules are functional units, which may or may not be in relation with distinguishable physical units. For example, these modules or some of them may be brought together in a unique component or circuit, or contribute to functionalities of a software. A contrario, some modules may potentially be composed of separate physical entities.
  • apparatus which are compatible with the present principles are implemented using either pure hardware, for example using dedicated hardware such as an ASIC, an FPGA or VLSI (respectively «Application Specific Integrated Circuit», «Field-Programmable Gate Array», «Very Large Scale Integration»), or from several integrated electronic components embedded in a device, or from a blend of hardware and software components.
  • FIG. 7 represents an exemplary architecture of a device 70 which may be configured to implement a method described in relation with FIG. 1-6 .
  • Device 70 comprises the following elements that are linked together by a data and address bus 71 :
  • the battery 76 is external to the device.
  • the word «register» used in the specification can correspond to an area of small capacity (some bits) or to a very large area (e.g. a whole program or a large amount of received or decoded data).
  • the ROM 73 comprises at least a program and parameters.
  • the ROM 73 may store algorithms and instructions to perform techniques in accordance with the present principles. When switched on, the CPU 72 loads the program into the RAM and executes the corresponding instructions.
  • the RAM 74 comprises, in a register, the program executed by the CPU 72 and uploaded after switch-on of the device 70 , input data in a register, intermediate data in different states of the method in a register, and other variables used for the execution of the method in a register.
  • the implementations described herein may be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method or a device), the implementation of features discussed may also be implemented in other forms (for example a program).
  • An apparatus may be implemented in, for example, appropriate hardware, software, and firmware.
  • the methods may be implemented in, for example, an apparatus such as, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, computers, cell phones, portable/personal digital assistants (“PDAs”), and other devices that facilitate communication of information between end-users.
  • the input video or an original image of an input video is obtained from a source.
  • the source belongs to a set comprising:
  • the bitstreams carrying the metadata are sent to a destination, e.g. a video memory ( 74 ), a RAM ( 74 ), or a hard disk ( 73 ).
  • at least one of the bitstreams is sent to a storage interface ( 75 ), e.g. an interface with a mass storage, a flash memory, ROM, an optical disc or a magnetic support and/or transmitted over a communication interface ( 75 ), e.g. an interface to a point to point link, a communication bus, a point to multipoint link or a broadcast network.
  • the bitstream carrying the metadata is obtained from a source.
  • the bitstream is read from a local memory, e.g. a video memory ( 74 ), a RAM ( 74 ), a ROM ( 73 ), a flash memory ( 73 ) or a hard disk ( 73 ).
  • the bitstream is received from a storage interface ( 75 ), e.g. an interface with a mass storage, a RAM, a ROM, a flash memory, an optical disc or a magnetic support and/or received from a communication interface ( 75 ), e.g. an interface to a point to point link, a bus, a point to multipoint link or a broadcast network.
  • device 70 being configured to implement the method as described above, belongs to a set comprising:
  • Implementations of the various processes and features described herein may be embodied in a variety of different equipment or applications.
  • Examples of such equipment include an encoder, a decoder, a post-processor processing output from a decoder, a pre-processor providing input to an encoder, a video coder, a video decoder, a video codec, a web server, a set-top box, a laptop, a personal computer, a cell phone, a PDA, and any other device for processing an image or a video, or other communication devices.
  • the equipment may be mobile and even installed in a mobile vehicle.
  • a computer readable storage medium can take the form of a computer readable program product embodied in one or more computer readable medium(s) and having computer readable program code embodied thereon that is executable by a computer.
  • a computer readable storage medium as used herein is considered a non-transitory storage medium given the inherent capability to store the information therein as well as the inherent capability to provide retrieval of the information therefrom.
  • a computer readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. It is to be appreciated that the following, while providing more specific examples of computer readable storage mediums to which the present principles can be applied, is merely an illustrative and not exhaustive listing as is readily appreciated by one of ordinary skill in the art: a portable computer diskette; a hard disk; a read-only memory (ROM); an erasable programmable read-only memory (EPROM or Flash memory); a portable compact disc read-only memory (CD-ROM); an optical storage device; a magnetic storage device; or any suitable combination of the foregoing.
  • the instructions may form an application program tangibly embodied on a processor-readable medium.
  • Instructions may be, for example, in hardware, firmware, software, or a combination. Instructions may be found in, for example, an operating system, a separate application, or a combination of the two.
  • a processor may be characterized, therefore, as, for example, both a device configured to carry out a process and a device that includes a processor-readable medium (such as a storage device) having instructions for carrying out a process. Further, a processor-readable medium may store, in addition to or in lieu of instructions, data values produced by an implementation.
  • implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted.
  • the information may include, for example, instructions for performing a method, or data produced by one of the described implementations.
  • a signal may be formatted to carry as data the rules for writing or reading the syntax of a described example of the present principles, or to carry as data the actual syntax-values written by a described example of the present principles.
  • Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal.
  • the formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream.
  • the information that the signal carries may be, for example, analog or digital information.
  • the signal may be transmitted over a variety of different wired or wireless links, as is known.
  • the signal may be stored on a processor-readable medium.


Abstract

A method and a device for reconstructing image data representative of original image data from decoded image data and from parameters obtained from a bitstream, said parameters having been processed from said original image data, are described. The method includes checking whether the parameters are lost, corrupted or not aligned with the decoded image data to which graphics or an overlay has been added; when at least one of the parameters is lost, corrupted or not aligned, selecting a recovery mode according to an information data indicating how the parameters have been processed; and recovering the at least one lost, corrupted or not aligned parameter by applying the selected recovery mode, the reconstruction of the image data then also taking the recovered parameters into account.

Description

    1. REFERENCE TO RELATED APPLICATIONS
  • This application claims priority from European Patent Application No. 17305212.7 entitled “METHOD AND DEVICE FOR RECONSTRUCTING IMAGE DATA FROM DECODED IMAGE DATA”, filed on Feb. 24, 2017; European Patent Application No. 17158481.6, entitled “METHOD AND DEVICE FOR RECONSTRUCTING IMAGE DATA FROM DECODED IMAGE DATA”, filed on Feb. 28, 2017; and Japanese Patent Application No. 2017-239281, entitled “METHOD AND DEVICE FOR RECONSTRUCTING IMAGE DATA FROM DECODED IMAGE DATA”, filed on Dec. 14, 2017, the contents of which are hereby incorporated by reference in their entirety.
  • 2. FIELD
  • The present principles generally relate to image/video reconstruction from decoded image/video data. Particularly, but not exclusively, the technical field of the present principles is related to recovering parameters for reconstructing an image from another image.
  • 3. BACKGROUND
  • The present section is intended to introduce the reader to various aspects of art, which may be related to various aspects of the present principles that are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present principles. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
  • In the following, image data refer to one or several arrays of samples (pixel values) in a specific image/video format which specifies all information relative to the pixel values of an image (or a video) and all information which may be used by a display and/or any other device to visualize and/or decode an image (or video), for example. An image comprises a first component, in the shape of a first array of samples, usually representative of the luminance (or luma) of the image, and second and third components, in the shape of other arrays of samples, usually representative of the color (or chroma) of the image. Equivalently, the same information may also be represented by a set of arrays of color samples, such as the traditional tri-chromatic RGB representation.
  • A pixel value is represented by a vector of C values, where C is the number of components. Each value of a vector is represented with a number of bits which defines a maximal dynamic range of the pixel values.
  • Standard-Dynamic-Range images (SDR images) are images whose luminance values are represented with a limited number of bits (typically 8). This limited representation does not allow small signal variations to be rendered correctly, in particular in dark and bright luminance ranges. In high-dynamic range images (HDR images), the signal representation is extended to maintain a high accuracy of the signal over its entire range. In HDR images, pixel values representing luminance levels are usually represented in floating-point format (typically at least 10 bits per component, namely float or half-float), the most popular format being the OpenEXR half-float format (16 bits per RGB component, i.e. 48 bits per pixel), or in integers with a long representation, typically at least 16 bits.
  • The arrival of the High Efficiency Video Coding (HEVC) standard (ITU-T H.265 Telecommunication standardization sector of ITU (10/2014), series H: audiovisual and multimedia systems, infrastructure of audiovisual services—coding of moving video, High efficiency video coding, Recommendation ITU-T H.265) enables the deployment of new video services with enhanced viewing experience, such as Ultra HD broadcast services. In addition to an increased spatial resolution, Ultra HD can bring a wider color gamut (WCG) and a higher dynamic range (HDR) than the Standard dynamic range (SDR) HD-TV currently deployed. Different solutions for the representation and coding of HDR/WCG video have been proposed (SMPTE ST 2084:2014, "High Dynamic Range Electro-Optical Transfer Function of Mastering Reference Displays"; or Diaz, R., Blinstein, S. and Qu, S., "Integrating HEVC Video Compression with a High Dynamic Range Video Pipeline", SMPTE Motion Imaging Journal, Vol. 125, Issue 1, February 2016, pp. 14-21).
  • SDR backward compatibility with decoding and rendering devices is an important feature in some video distribution systems, such as broadcasting or multicasting systems.
  • A solution based on a single layer coding/decoding process may be backward compatible, e.g. SDR compatible, and may leverage legacy distribution networks and services already in place.
  • Such a single layer based distribution solution enables high quality HDR rendering on HDR-enabled Consumer Electronic (CE) devices, while also offering high quality SDR rendering on SDR-enabled CE devices.
  • Such a single layer based distribution solution generates an encoded signal, e.g. SDR signal, and associated metadata (of a few bytes per video frame or scene) that can be used to reconstruct another signal, e.g. HDR signal, from a decoded signal, e.g. SDR signal.
  • Metadata store parameter values used for the reconstruction of the signal and may be static or dynamic. Static metadata means metadata that remain the same for a video (set of images) and/or a program.
  • Static metadata are valid for the whole video content (scene, movie, clip . . . ) and may not depend on the image content. They may define, for example, the image format, the color space or the color gamut. For instance, SMPTE ST 2086:2014, "Mastering Display Color Volume Metadata Supporting High Luminance and Wide Color Gamut Images", is such a kind of static metadata for use in a production environment. The Mastering Display Colour Volume (MDCV) SEI (Supplemental Enhancement Information) message is the distribution flavor of ST 2086 for both the H.264/AVC ("Advanced video coding for generic audiovisual Services", SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS, Recommendation ITU-T H.264, Telecommunication Standardization Sector of ITU, January 2012) and HEVC video codecs.
  • Dynamic metadata are content-dependent, that is, metadata that can change with the image/video content, e.g. for each image or for each group of images. As an example, the SMPTE ST 2094:2016 standards family, "Dynamic Metadata for Color Volume Transform", is dynamic metadata for use in a production environment. SMPTE ST 2094-30 can be distributed along an HEVC coded video stream thanks to the Colour Remapping Information (CRI) SEI message.
  • Other single layer based distribution solutions exist on distribution networks for which display adaptation dynamic metadata are delivered along with a legacy video signal. These single layer based distribution solutions may produce HDR 10-bit image data (e.g. image data whose signal is represented as an HLG10 or PQ10 signal as specified in Rec. ITU-R BT.2100-0, "Image parameter values for high dynamic range television for use in production and international programme exchange") and associated metadata from an input signal (typically 12 or 16 bits), encode said HDR 10-bit image data using, for example, an HEVC Main 10 profile encoding scheme, and reconstruct a video signal from a decoded video signal and said associated metadata. The dynamic range of the reconstructed signal is adapted according to the associated metadata, which may depend on characteristics of a target display.
  • Dynamic metadata transmission in actual real-world production and distribution facilities is hard to guarantee, and metadata may be lost or corrupted because of splicing, insertion of overlay layers, bitstream pruning by professional equipment, stream handling by affiliates, and the current lack of standardization for the carriage of metadata throughout the post-production/professional plant.
  • The single layer based distribution solutions cannot work without the presence of several sets of dynamic metadata, some of them being critical for guaranteeing the success of the reconstruction of the video signal.
  • Similar issues may also occur when dynamic metadata are not aligned with an image to which graphics or an overlay has been added. This occurs, for example, when graphics (overlays, OSD, . . . ) are inserted in (added to) an image outside the distribution chain, because the metadata, computed for said image before insertion, are also applied once the graphics are inserted in (added to) the image. The metadata are then considered as being not aligned with the image to which graphics or an overlay has been added, because they may not be adapted to the part of said image which contains the graphics or overlay.
  • These issues may be characterized by image flickering on fixed graphics portions when the decoded image is displayed over time, or by undesirable effects (saturation, clipping, . . . ) on portions of the image containing graphics or an overlay processed with inappropriate metadata (e.g. a bright OSD processed with metadata generated for a dark content).
  • 4. SUMMARY
  • The following presents a simplified summary of the present principles in order to provide a basic understanding of some aspects of the present principles. This summary is not an extensive overview of the present principles. It is not intended to identify key or critical elements of the present principles. The following summary merely presents some aspects of the present principles in a simplified form as a prelude to the more detailed description provided below.
  • The present principles set out to remedy at least one of the drawbacks of the prior art with a method and a device for reconstructing image data representative of original image data from decoded image data and parameters obtained from a bitstream, said parameters having been processed from said original image data. The method comprises:
      • checking whether said parameters are lost, corrupted or not aligned with the decoded image data to which graphics or an overlay has been added;
      • when at least one of said parameters is lost, corrupted or not aligned with the decoded image data to which graphics or an overlay has been added,
      • selecting a recovery mode according to an information data indicating how said parameters have been processed; and
      • recovering said at least one lost, corrupted or not aligned parameter by applying the selected recovery mode, said reconstruction of image data then also taking said recovered parameters into account.
  • According to an embodiment, the information data is explicitly signaled by a syntax element in a bitstream.
  • According to an embodiment, the information data (ID) is implicitly signaled.
  • According to an embodiment, the information data identifies the processing applied to the original image data to obtain the parameters.
  • According to an embodiment, a parameter is considered as being lost when it is not retrieved from a bitstream.
  • According to an embodiment, a parameter is considered as being corrupted when at least one of the following conditions is fulfilled:
      • its value is out of a determined range of values;
      • said parameter does not have a coherent value according to other parameter values.
  • According to an embodiment, a recovery mode is to replace all the parameters by recovered parameters even if only some of the parameters are lost, corrupted or not aligned with the decoded image data to which graphics or an overlay has been added.
  • According to an embodiment, a recovery mode is to replace each lost, corrupted or not aligned parameter by a recovered parameter.
  • According to an embodiment, a recovery mode is to replace a lost, corrupted or not aligned parameter by a value from a set of pre-determined parameter values previously stored.
  • According to an embodiment, a recovery mode is selected according to either at least one characteristic of original image data, or of a mastering display used to grade the original image data or the image data to be reconstructed, or at least one characteristic of reconstructed image data or of a target display.
  • According to other of their aspects, the present principles also relate to a device comprising means for implementing the above method, and to a non-transitory processor-readable medium carrying program code instructions for executing the steps of the above method when this program is executed on a computer.
  • 5. BRIEF DESCRIPTION OF DRAWINGS
  • In the drawings, examples of the present principles are illustrated. It shows:
  • FIG. 1 shows a diagram of the steps of a method for reconstructing an image I3 representative of original image I1 from a decoded image Î2 in accordance with an example of the present principles;
  • FIG. 2 shows an end-to-end workflow supporting content production and delivery to HDR and SDR enabled CE displays in accordance with an example of the present principles;
  • FIG. 3 shows a variant of the end-to-end workflow of FIG. 2 in accordance with an embodiment of the present principles;
  • FIG. 4 shows a variant of the end-to-end workflow of FIG. 2 in accordance with another embodiment of the present principles;
  • FIG. 5a shows an illustration of a perceptual transfer function;
  • FIG. 5b shows an example of a piece-wise curve used for mapping;
  • FIG. 5c shows an example of a curve used for converting back a signal to a linear light domain;
  • FIG. 6 shows another example of the use of a method for reconstructing an image from decoded image data and parameters obtained from a bitstream in accordance with an example of the present principles; and
  • FIG. 7 shows an example of an architecture of a device in accordance with an example of the present principles.
  • Similar or same elements are referenced with the same reference numbers.
  • 6. DESCRIPTION OF EXAMPLE OF THE PRESENT PRINCIPLES
  • The present principles will be described more fully hereinafter with reference to the accompanying figures, in which examples of the present principles are shown. The present principles may, however, be embodied in many alternate forms and should not be construed as limited to the examples set forth herein. Accordingly, while the present principles are susceptible to various modifications and alternative forms, specific examples thereof are shown by way of examples in the drawings and will herein be described in detail. It should be understood, however, that there is no intent to limit the present principles to the particular forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present principles as defined by the claims.
  • The terminology used herein is for the purpose of describing particular examples only and is not intended to be limiting of the present principles. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises”, “comprising,” “includes” and/or “including” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Moreover, when an element is referred to as being “responsive” or “connected” to another element, it can be directly responsive or connected to the other element, or intervening elements may be present. In contrast, when an element is referred to as being “directly responsive” or “directly connected” to other element, there are no intervening elements present. As used herein the term “and/or” includes any and all combinations of one or more of the associated listed items and may be abbreviated as “/”.
  • It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element without departing from the teachings of the present principles.
  • Although some of the diagrams include arrows on communication paths to show a primary direction of communication, it is to be understood that communication may occur in the opposite direction to the depicted arrows.
  • Some examples are described with regard to block diagrams and operational flowcharts in which each block represents a circuit element, module, or portion of code which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in other implementations, the function(s) noted in the blocks may occur out of the order noted. For example, two blocks shown in succession may, in fact, be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending on the functionality involved.
  • Reference herein to "in accordance with an example" or "in an example" means that a particular feature, structure, or characteristic described in connection with the example can be included in at least one implementation of the present principles. The appearances of the phrase "in accordance with an example" or "in an example" in various places in the specification are not necessarily all referring to the same example, nor are separate or alternative examples necessarily mutually exclusive of other examples.
  • Reference numerals appearing in the claims are by way of illustration only and shall have no limiting effect on the scope of the claims. While not explicitly described, the present examples and variants may be employed in any combination or sub-combination.
  • In the following, the capital symbols, for example (C1, C2, C3), designate components of a first image, and lower-case symbols, for example (c1, c2, c3), designate components of another image whose dynamic range of the luminance is lower than the dynamic range of the luminance of the first image.
  • The dynamic range of the luminance of an image is the ratio between the maximum over the minimum of the luminance values of said image. Typically, the dynamic range of the luminance of a SDR image is 500 (100 cd/m2 over 0.2 cd/m2) and 10000 (1000 cd/m2 over 0.1 cd/m2) for an HDR image.
  • Prime symbols, in the following, for example (Y′ = Y^(1/γ), U′ = U^(1/γ), V′ = V^(1/γ)), designate gamma-compressed components of a first image when those prime symbols are capital symbols, and prime symbols, for example (y′, u′, v′), designate gamma-compressed components of a second image when those prime symbols are lower-case symbols.
  • The present principles are described for coding/decoding/reconstructing an image but extend to the coding/decoding/reconstruction of a sequence of images (video), because each image of the sequence is sequentially encoded/decoded/reconstructed as described below.
  • FIG. 1 shows a diagram of the steps of a method for reconstructing an image I3 representative of original image I1 from a decoded image Î2 in accordance with an example of the present principles.
  • In step 10, a set of parameters SP is obtained to reconstruct the image I3. These parameters are either parameters P obtained from the bitstream B, or recovered parameters Pr when at least one parameter P is lost, corrupted or not aligned with a decoded image Î2 to which graphics or an overlay has been added.
  • In step 11, a module M1 obtains the decoded image Î2, and in step 12, a module M2 reconstructs the image I3 from the decoded image Î2 by using the set of parameters SP.
  • The decoded image data Î2 is obtained from the bitstream (signal) B or from any other bitstream; possibly, said bitstreams may be stored in a local memory or on any other storage medium.
  • In sub-step 101 (of step 10), a module M3 obtains the parameters P required to reconstruct the image I3.
  • In sub-step 102 (of step 10), a module M4 checks if at least one of the parameters P is lost, corrupted or not aligned with the decoded image Î2 to which graphics or an overlay has been added.
  • When none of the parameters P is lost, corrupted or not aligned with the decoded image Î2 to which graphics or an overlay has been added, the set of parameters SP only comprises the parameters P.
  • When at least one of the parameters P is either lost, corrupted or not aligned with the decoded image Î2 to which graphics or an overlay has been added, then, in sub-step 103 (of step 10), a module M5 obtains an information data ID indicating how said parameters have been processed; in sub-step 104 (of step 10), a module M6 selects a recovery mode RMi according to said information data ID; and in sub-step 105 (of step 10), a module M7 recovers said at least one lost, corrupted or not aligned parameter by applying the selected recovery mode RMi. The at least one recovered parameter Pr is added to the set of parameters SP.
  • In step 12, the image I3 is then reconstructed by also taking into account said at least one recovered parameter Pr.
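  • The parameter-recovery logic of step 10 (sub-steps 101 to 105) may be summarized as in the following sketch. This is a minimal, illustrative Python rendering under stated assumptions: all names (obtain_parameter_set, is_corrupted, recovery_modes, . . . ) are hypothetical, and the individual tests stand in for the checks described above; it is not the normative process.

```python
from typing import Any, Callable, Dict, List

def obtain_parameter_set(
    params: Dict[str, Any],                     # P, as parsed from bitstream B (sub-step 101)
    required: List[str],                        # parameter names needed by step 12
    is_corrupted: Callable[[str, Any], bool],   # corruption test, e.g. range checks
    aligned: bool,                              # False when graphics/overlay were added
    info_id: int,                               # information data ID (sub-step 103)
    recovery_modes: Dict[int, Callable[[str], Any]],  # recovery modes RMi keyed by ID
) -> Dict[str, Any]:
    """Build the set of parameters SP used to reconstruct the image I3."""
    bad = [name for name in required
           if name not in params                # lost (not retrieved from the bitstream)
           or is_corrupted(name, params[name])  # corrupted
           or not aligned]                      # not aligned with the decoded image
    if bad:                                     # sub-step 102 detected a problem
        recover = recovery_modes[info_id]       # sub-step 104: select RMi from ID
        for name in bad:
            params[name] = recover(name)        # sub-step 105: recovered parameter Pr
    return params                               # SP = P, plus Pr where needed
```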
  • The method is advantageous because it allows parameters to be obtained for a single layer based distribution solution when multiple single layer based distribution solutions share a same set of syntax elements for carrying a common set of parameters, and when said single layer based distribution solutions require different recovery modes (processes) for recovering lost, corrupted or not aligned parameters, thus guaranteeing the success of the reconstruction of the image I3 for each of said single layer based distribution solutions.
  • The method is also advantageous when a CE device, typically a set-top-box or a player, inserts graphics on top of a decoded image Î2, because the method selects a specific recovery mode to replace the not aligned parameters by parameters adapted to the decoded image Î2 plus the graphics (or overlay), and reconstructs the image I3 by using said recovered parameters from said decoded image Î2 to which graphics or an overlay has been added, thus avoiding some flickering artefacts or undesired effects impacting the quality of the reconstructed image.
  • The method as described in reference with FIG. 1 may be used in various applications when an image must be reconstructed from a decoded image.
  • FIG. 2 shows an end-to-end workflow supporting content production and delivery to HDR and SDR enabled CE displays in accordance with an example of the present principles.
  • This workflow involves a single layer based distribution solution with associated metadata and illustrates an example of the use of a method for reconstructing an image I3 representative of original image data I1 from decoded image data Î2 and a set of parameters SP obtained in accordance with an example of the present principles illustrated in FIG. 1.
  • Basically, this single layer based distribution solution comprises a pre-processing part and a post-processing part.
  • At the pre-processing part, a pre-processing stage 20 decomposes the original image I1 into an output image I12 and a set of parameters SP, and a switching step 24 determines whether the original image I1 or the output image I12 is encoded in the bitstream B (step 23).
  • In step 23, the image I2 may be encoded with any legacy video codec and the bitstream B is carried throughout an existing legacy distribution network with accompanying associated metadata (set of parameters SP) conveyed on a specific channel or embedded in the bitstream B.
  • In a variant, the bitstream B with accompanying metadata is stored on a storage medium such as a Blu-ray disc or a memory or a register of a Set-Top-Box, for example.
  • In a variant, the accompanying associated metadata is carried by another specific channel or stored on a separate storage medium.
  • Preferably, the video is coded with H.265/HEVC codec (ITU-T H.265 Telecommunication standardization sector of ITU (10/2014), series H: audiovisual and multimedia systems, infrastructure of audiovisual services—coding of moving video, High efficiency video coding, Recommendation ITU-T H.265) or H.264/AVC (“Advanced video coding for generic audiovisual Services”, SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS, Recommendation ITU-T H.264, Telecommunication Standardization Sector of ITU, January 2012).
  • In the case where the information data ID indicates that the original image I1 (possibly represented by the components (C1, U′, V′) or a Y′CbCr 4:2:0 PQ10 or HLG10 video signal) is encoded in step 23, said original image I1 may be encoded with the HEVC Main 10 profile.
  • In the case where the information data ID indicates that the output image I12 is encoded in step 23, the output image I12, which can be represented as a Y′CbCr 4:2:0 gamma transfer characteristics (Standard Dynamic Range) signal, may be encoded with any HEVC profile, including the Main 10 or Main profiles.
  • The information data ID may also be conveyed as associated metadata (step 23). At the post-processing part, a decoded image Î2 is obtained from the bitstream B (step 11), a set of parameters SP is obtained as explained in FIG. 1 (step 10), and a post-processing stage 12, which is the functional inverse of the pre-processing stage 20, reconstructs an image I3 from the decoded image Î2 and the set of parameters SP.
  • This single layer based distribution solution may also comprise optional format adapting steps 21, 22, 25, 26.
  • For example, in step 21 (optional), the format of the original image I1 may be adapted to a specific format (C1, U′, V′) of the input of the pre-processing stage 20, and in step 22 (optional), the format (c, u′, v′) of the output image I12 may also be adapted to a specific output format before encoding. In step 25, the format of the decoded image Î2 may be adapted to a specific format of the input of the post-processing stage 12, and in step 26, the image I3 may be adapted to at least one characteristic of a targeted apparatus (e.g. a Set-Top-Box, a connected TV, an HDR/SDR enabled CE device, a Blu-ray disc player), and/or an inverse gamut mapping may be used when the decoded image Î2 and the image I3 or the original image I1 are represented in different color spaces and/or gamuts.
  • Said format adaptation steps (21, 22, 25, 26) may include color space conversion and/or color gamut mapping. Usual format adapting processes may be used, such as RGB-to-YUV or YUV-to-RGB conversion, BT.709-to-BT.2020 or BT.2020-to-BT.709, down-sampling or up-sampling of chroma components, etc. Note that the well-known YUV color space also refers to the well-known YCbCr in the prior art. Annex E of the ETSI recommendation ETSI TS 103 433 V1.1.1 (2016-08) provides an example of format adapting processes, and its Annex D an example of inverse gamut mapping.
  • Said input format adaptation step 21 may also include adapting the bit depth of the original image I1 to specific bit depth such as 10 bits for example, by applying a transfer function on the original image I1. For example, a PQ or HLG transfer function may be used (Rec. ITU-R BT.2100-0).
  • In more details, the pre-processing stage 20 comprises steps 200-202.
  • In step 200, a first component c1 of the output image I12 is obtained by mapping a first component C1 of the original image I1: c1 = TM(C1), with TM being a mapping function. The mapping function TM may reduce or increase the dynamic range of the luminance of the original image I1, and its inverse may increase or reduce the dynamic range of the luminance of an image.
  • In step 201, a second and third component u′, v′ of the output image I12 are derived by correcting second and third components U′, V′ of the original image I1 according to the first component c1.
  • The correction of the chroma components may be maintained under control by tuning the parameters of the mapping. The color saturation and hue are thus under control.
  • According to an embodiment of step 201, the second and third components U′ and V′ are divided by a scaling function β0(c1) whose value depends on the first component c1.
  • Mathematically speaking, the second and third components u′, v′ are given by: [u′; v′] = (1/β0(c1)) · [U′; V′].
  • Optionally, in step 202, the first component c1 may be adjusted to further control the perceived saturation, as follows: c = c1 − max(0, a·u′ + b·v′), where a and b are two parameters of the set of parameters SP.
  • This step 202 allows the luminance of the output image I12 to be controlled, in order to guarantee the perceived color matching between the colors of the output image I12 and the colors of the original image I1.
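  • As an illustration, the pre-processing steps 200-202 may be sketched as follows, assuming the mapping function TM and the scaling function β0 are supplied as callables operating on numpy arrays; the function name and the array layout are assumptions made for the example, not part of the present principles.

```python
import numpy as np

def pre_process(C1, Up, Vp, tm, beta0, a=0.0, b=0.0):
    """Sketch of the pre-processing stage 20 (steps 200-202).

    C1, Up, Vp: components of the original image I1.
    tm: mapping function TM; beta0: scaling function beta0(c1).
    a, b: saturation-control parameters of the set SP.
    """
    c1 = tm(C1)                                 # step 200: map the dynamic range
    up = Up / beta0(c1)                         # step 201: chroma correction
    vp = Vp / beta0(c1)
    c = c1 - np.maximum(0.0, a * up + b * vp)   # step 202 (optional): saturation control
    return c, up, vp
```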
  • The set of parameters SP may comprise parameters relative to the mapping function TM or its inverse ITM, and to the scaling function β0(c1). These parameters are associated with dynamic metadata and carried in a bitstream, for example the bitstream B. The parameters a and b may also be carried in a bitstream.
  • In more details, at the post-processing part, in step 10, a set of parameters SP is obtained as explained in FIG. 1.
  • According to an embodiment of step 10, the set of parameters SP is carried by static/dynamic metadata obtained from a specific channel or from a bitstream, including the bitstream B, possibly stored on a storage medium.
  • In step 11, the module M1 obtains a decoded image Î2 by decoding the bitstream B, and the decoded image Î2 is then available for either an SDR or HDR enabled CE display.
  • In more details, the post-processing stage 12 comprises steps 120-122.
  • In optional step 120, the first component c of the decoded image Î2 may be adjusted as follows: c1 = c + max(0, a·u′ + b·v′), where a and b are two parameters of the set of parameters SP.
  • In step 121, the first component C1 of the image I3 is obtained by inverse-mapping the first component c1: C1 = ITM(c1).
  • In step 122, the second and third components U′, V′ of the image I3 are derived by inverse correcting the second and third components u′, v′ of the decoded image Î2 according to the component c1.
  • According to an embodiment, the second and third components u′ and v′ are multiplied by a scaling function β0(c1) whose value depends on the first component c1.
  • Mathematically speaking, the second and third components U′, V′ are given by: [U′; V′] = β0(c1) · [u′; v′].
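  • Under the same assumptions as the pre-processing sketch above, the post-processing steps 120-122 may be sketched as its functional inverse (itm denotes the inverse mapping function ITM):

```python
import numpy as np

def post_process(c, up, vp, itm, beta0, a=0.0, b=0.0):
    """Sketch of the post-processing stage 12 (steps 120-122)."""
    c1 = c + np.maximum(0.0, a * up + b * vp)   # step 120 (optional): undo step 202
    C1 = itm(c1)                                # step 121: inverse mapping
    Up = beta0(c1) * up                         # step 122: inverse chroma correction
    Vp = beta0(c1) * vp
    return C1, Up, Vp
```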
  • According to a first embodiment of the method of FIG. 2, as illustrated in FIG. 3, at the pre-processing part, the first component C1 of the original image I1 is a linear-light luminance component L obtained from the RGB components of the original image I1 by: C1 = L = A1 · [R; G; B],
  • and the second and third components U′, V′ are derived by applying a pseudo-gammatization using square-root (close to BT.709 OETF) to the RGB components of the original image I1: [U′; V′] = [A2; A3] · [√R; √G; √B] × 1024.
  • In step 200, the first component y1 of the output image I12 is obtained by mapping said linear-light luminance component L: y1 = TM(L).
  • In step 201, the second and third components u′, v′ of the output image I12 are derived by correcting the second and third components U′, V′ according to the first component y1.
  • At the post-processing part, in step 121, a linear-light luminance component L of the image I3 is obtained by inverse-mapping the first component y1: L = ITM(y1).
  • In step 122, the second and third component U′, V′ of the image I3 are derived by inverse correcting the second and third components u′, v′ of the output image I12 according to the first component y1.
  • According to an embodiment of step 122, the second and third components u′ and v′ are multiplied by a scaling function β0(y1) whose value depends on the first component y1.
  • Mathematically speaking, the second and third components U′, V′ are given by: [U′; V′] = β0(y1) · [u′; v′].
  • According to a second embodiment of the method of FIG. 2, as illustrated in FIG. 4, at the pre-processing part, the first component C1 of the original image I1 is a component Y′ obtained from the gamma-compressed RGB components of the original image I1 by: Y′ = A1 · [R′; G′; B′],
  • and the second and third components U′, V′ are derived by applying a gammatization to the RGB components of the original image I1: [U′; V′] = [A2; A3] · [R^(1/γ); G^(1/γ); B^(1/γ)] × 1024, where γ may be a gamma factor, preferably equal to 2.4.
  • Note that the component Y′, which is a non-linear signal, differs from the linear-light luminance component L.
  • In step 200, the first component y′1 of the output image I12 is obtained by mapping said component Y′: y′1 = TM(Y′).
  • In step 121, a reconstructed component Ŷ′ is obtained by inverse-mapping the first component y′1: Ŷ′ = ITM(y′1), where ITM is the inverse of the mapping function TM.
  • The values of the reconstructed component Ŷ′ thus belong to the dynamic range of the values of the component Y′.
  • In step 201, the second and third components u′, v′ of the output image I12 are derived by correcting the second and third components U′, V′ according to the first component y′1 and the reconstructed component Ŷ′.
  • This step 201 allows the colors of the output image I12 to be controlled and guarantees that they match the colors of the original image I1.
  • The correction of the chroma components may be maintained under control by tuning the parameters of the mapping (inverse mapping). The color saturation and hue are thus under control. Such a control is usually not possible when a non-parametric perceptual transfer function is used.
  • According to an embodiment of step 201, the second and third components U′ and V′ are divided by a scaling function β0(y′1) whose value depends on the ratio of the reconstructed component Ŷ′ over the component y′1: β0(y′1) = ITM(y′1) · Ω / y′1 = Ŷ′ · Ω / y′1, where Ω is a constant value depending on the color primaries of the original image I1 (equal to 1.3 for BT.2020, for example).
  • At the post-processing part, in step 121, a component Ŷ′ of the image I3 is obtained by inverse-mapping the first component y′1: Ŷ′ = ITM(y′1).
  • In step 122, the second and third components U′, V′ of the image I3 are derived by inverse correcting the second and third components u′, v′ of the decoded image Î2 according to the first component y′1 and the component Ŷ′.
  • According to an embodiment of step 122, the second and third components u′ and v′ are multiplied by the scaling function β0(y′1).
  • Mathematically speaking, the second and third components U′, V′ are given by: [U′; V′] = β0(y′1) · [u′; v′].
  • The mapping function TM is based on a perceptual transfer function, whose goal is to convert a component of an original image I1 into a component of an output image I12, thus reducing (or increasing) the dynamic range of the values of their luminance. The values of a component of an output image I12 belong thus to a lower (or greater) dynamic range than the values of the component of an original image I1.
  • Said perceptual transfer function uses a limited set of control parameters.
  • FIG. 5a shows an illustration of a perceptual transfer function which may be used for mapping luminance components; a similar perceptual transfer function for mapping luma components may be used instead.
  • The mapping is controlled by a mastering display peak luminance parameter (equal to 5000 cd/m2 in FIG. 5a ). To better control the black and white levels, a signal stretching between content-dependent black and white levels is applied. Then the converted signal is mapped using a piece-wise curve constructed out of three parts, as illustrated in FIG. 5b . The lower and upper sections are linear, the steepness being determined by the shadowGain and highlightGain parameters respectively. The mid-section is a parabola providing a smooth bridge between the two linear sections. The width of the cross-over is determined by the midToneWidthAdjFactor parameter.
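  • A possible rendering of such a piece-wise curve is sketched below: two linear sections whose steepness is set by shadowGain and highlightGain, bridged by a parabola whose slope blends continuously from one to the other over a width set by midToneWidthAdjFactor. The normalization to [0, 1], the mid-point and the exact bridge construction are assumptions made for illustration, not the normative curve.

```python
def piecewise_tm(x, shadow_gain, highlight_gain, mid_width, x_mid=0.5):
    """Illustrative piece-wise mapping curve in the spirit of FIG. 5b."""
    x0 = x_mid - mid_width / 2.0          # start of the parabolic mid-section
    x1 = x_mid + mid_width / 2.0          # end of the parabolic mid-section
    if x <= x0:                           # lower linear (shadow) section
        return shadow_gain * x
    y0 = shadow_gain * x0                 # value at x0 (continuous with lower section)
    if x <= x1:                           # parabolic bridge: slope goes from
        t = x - x0                        # shadow_gain at x0 to highlight_gain at x1
        return y0 + shadow_gain * t + (highlight_gain - shadow_gain) * t * t / (2.0 * mid_width)
    y1 = y0 + (shadow_gain + highlight_gain) * mid_width / 2.0   # value at x1
    return y1 + highlight_gain * (x - x1)  # upper linear (highlight) section
```

    Note that both the value and the slope of this curve are continuous at x0 and x1, which is what provides the smooth bridge between the two linear sections.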
  • All the parameters controlling the mapping may be conveyed as metadata for example by using a SEI message as defined in JCTVC-W0133 to carry the SMPTE ST 2094-20 metadata.
  • FIG. 5c shows an example of the inverse of the perceptual transfer function TM (FIG. 5a ) to illustrate how a perceptual optimized video signal may be converted back to the linear light domain based on a targeted legacy display maximum luminance, for example 100 cd/m2.
  • In step 10 (FIG. 1), the set of parameters SP is obtained to reconstruct an image I3 from a decoded image Î2.
  • These parameters may be obtained from metadata obtained from a bitstream, for example the bitstream B.
  • The recommendation ETSI TS 103 433 V1.1.1 clause 6, 2016-08 provides an example of syntax of said metadata.
  • The syntax of the recommendation ETSI TS 103 433 V1.1.1 is described for reconstructing an HDR video from an SDR video, but this syntax may extend to the reconstruction of any image I3 from any decoded image Î2.
  • The post-processing (step 12) operates on an inverse mapping function ITM and a scaling function β0(.) that are derived from dynamic metadata because they depend on the first component c1.
  • According to the recommendation ETSI TS 103 433 V1.1.1, said dynamic metadata may be conveyed according to either a so-called parameter-based mode or a table-based mode.
  • The parameter-based mode may be of interest for distribution workflows whose primary goal is to provide direct SDR backward compatible services with very low additional payload or bandwidth usage for carrying the dynamic metadata. The table-based mode may be of interest for workflows equipped with low-end terminals or when a higher level of adaptation is required for representing properly both HDR and SDR streams.
  • In the parameter-based mode, dynamic metadata to be conveyed are luminance mapping parameters representative of the inverse function ITM, i.e.
  • tmInputSignalBlackLevelOffset;
  • tmInputSignalWhiteLevelOffset;
  • shadowGain;
  • highlightGain;
  • midToneWidthAdjFactor;
  • tmOutputFineTuning parameters.
  • Moreover, other dynamic metadata to be conveyed are color correction parameters (saturationGainNumVal, saturationGainX(i) and saturationGainY(i)) used to define the function β0(.) (ETSI recommendation ETSI TS 103 433 V1.1.1 clauses 6.3.5 and 6.3.6).
  • Note the parameters a and b may be respectively carried/hidden in the saturationGain function parameters as explained above.
  • These dynamic metadata may be conveyed using the HEVC Colour Volume Reconstruction Information (CVRI) user data registered SEI message whose syntax is based on the SMPTE ST 2094-20 specification (recommendation ETSI TS 103 433 V1.1.1 Annex A.3).
  • Typical dynamic metadata payload is about 25 bytes per scene.
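  • For illustration, these parameter-based dynamic metadata may be gathered in a simple container as sketched below; the parameter names follow the list above, but the grouping into a Python dataclass (and the default values) is purely an assumption for the example.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ParameterBasedMetadata:
    """Illustrative container for the TS 103 433 parameter-based metadata."""
    tm_input_signal_black_level_offset: int = 0
    tm_input_signal_white_level_offset: int = 0
    shadow_gain: int = 0
    highlight_gain: int = 0
    mid_tone_width_adj_factor: int = 0
    tm_output_fine_tuning: List[Tuple[int, int]] = field(default_factory=list)
    # color-correction parameters used to define the function beta0(.)
    saturation_gain_num_val: int = 0
    saturation_gain: List[Tuple[int, int]] = field(default_factory=list)  # (x(i), y(i)) pairs
```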
  • In step 101, the CVRI SEI message is parsed to obtain the mapping parameters and the color-correction parameters.
  • In step 12, the inverse mapping function ITM (so-called lutMapY) is reconstructed (derived) from the obtained mapping parameters (see recommendation ETSI TS 103 433 V1.1.1 clause 7.2.3.1 for more details).
  • In step 12, the scaling function β0(.) (so-called lutCC) is also reconstructed (derived) from the obtained color-correction parameters (see recommendation ETSI TS 103 433 V1.1.1 clause 7.2.3.2 for more details).
  • In the table-based mode, dynamic data to be conveyed are pivot points of a piece-wise linear curve representative of the inverse mapping function ITM. For example, the dynamic metadata are luminanceMappingNumVal, which indicates the number of pivot points; luminanceMappingX, which indicates the x values of the pivot points; and luminanceMappingY, which indicates the y values of the pivot points (see recommendation ETSI TS 103 433 V1.1.1 clauses 6.2.7 and 6.3.7 for more details).
  • Moreover, other dynamic metadata to be conveyed may be pivot points of a piece-wise linear curve representative of the scaling function β0(.). For example, the dynamic metadata are colorCorrectionNumVal, which indicates the number of pivot points; colorCorrectionX, which indicates the x values of the pivot points; and colorCorrectionY, which indicates the y values of the pivot points (see recommendation ETSI TS 103 433 V1.1.1 clauses 6.2.8 and 6.3.8 for more details).
  • These dynamic metadata may be conveyed using the HEVC Colour Remapping Information (CRI) SEI message whose syntax is based on the SMPTE ST 2094-30 specification (recommendation ETSI TS 103 433 V1.1.1 Annex A.4).
  • Typical payload is about 160 bytes per scene.
  • In step 101, the CRI (Colour Remapping Information) SEI message (as specified in the HEVC/H.265 version published in December 2016) is parsed to obtain the pivot points of a piece-wise linear curve representative of the inverse mapping function ITM, the pivot points of a piece-wise linear curve representative of the scaling function β0(.), and the chroma to luma injection parameters a and b.
  • In step 12, the inverse mapping function ITM is derived from those pivot points relative to a piece-wise linear curve representative of the inverse mapping function ITM (see recommendation ETSI TS 103 433 V1.1.1 clause 7.2.3.3 for more details).
  • In step 12, the scaling function β0(.) is also derived from those pivot points relative to a piece-wise linear curve representative of the scaling function β0(.) (see recommendation ETSI TS 103 433 V1.1.1 clause 7.2.3.4 for more details).
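  • The table-based derivation amounts to a piece-wise linear interpolation between the transmitted pivot points, as sketched below; the look-up-table size and sampling grid are assumptions, and the normative derivation is the one given in clauses 7.2.3.3 and 7.2.3.4.

```python
import numpy as np

def lut_from_pivots(pivot_x, pivot_y, size=1024):
    """Rebuild a look-up table (e.g. lutMapY for ITM, or lutCC for beta0)
    from pivot points by piece-wise linear interpolation."""
    pivot_x = np.asarray(pivot_x, dtype=float)   # x values of the pivot points
    pivot_y = np.asarray(pivot_y, dtype=float)   # y values of the pivot points
    grid = np.linspace(pivot_x[0], pivot_x[-1], size)
    return np.interp(grid, pivot_x, pivot_y)
```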
  • Note that static metadata also used by the post-processing stage may be conveyed by SEI messages. For example, the selection of either the parameter-based mode or the table-based mode may be carried by the Information (TSI) user data registered SEI message (payloadMode) as defined by the recommendation ETSI TS 103 433 V1.1.1 (clause A.2.2). Static metadata such as, for example, the color primaries or the maximum display mastering luminance are conveyed by a Mastering Display Colour Volume (MDCV) SEI message as defined in AVC and HEVC.
  • According to an embodiment of step 103, the information data ID is explicitly signaled by a syntax element in a bitstream and thus obtained by parsing the bitstream.
  • For example, said syntax element is a part of an SEI message.
  • According to an embodiment, said information data ID identifies the processing applied to the original image I1 to obtain the set of parameters SP.
  • According to this embodiment, the information data ID may then be used to deduce how to use the parameters to reconstruct the image I3 (step 12).
  • For example, when equal to 1, the information data ID indicates that the parameters SP have been obtained by applying the pre-processing stage (step 20) to an original HDR image I1 and that the decoded image Î2 is an SDR image.
  • When equal to 2, the information data ID indicates that the parameters have been obtained by applying the pre-processing stage (step 20) to an HDR 10-bit image (input of step 20), that the decoded image Î2 is an HDR10 image, and that the mapping function TM is a PQ transfer function.
  • When equal to 3, the information data ID indicates that the parameters have been obtained by applying the pre-processing stage (step 20) to an HDR10 image (input of step 20), that the decoded image Î2 is an HLG10 image, and that the mapping function TM is an HLG transfer function applied to the original image I1.
  • According to an embodiment of step 103, the information data ID is implicitly signaled.
  • For example, the syntax element transfer_characteristics present in the VUI of HEVC (Annex E) or AVC (Annex E) usually identifies the transfer function (mapping function TM) to be used. Because different single layer distribution solutions use different transfer functions (PQ, HLG, . . . ), the syntax element transfer_characteristics may be used to identify implicitly the recovery mode to be used.
  • The information data ID may also be implicitly signaled by a service defined at a higher transport or system layer.
  • In accordance with another example, a peak luminance value and the color space of the image I3 may be obtained by parsing the MDCV SEI message carried by the bitstream, and the information data ID may be deduced from specific combinations of peak luminance values and color spaces (color primaries).
  • According to an embodiment of step 102, a parameter P is considered as being lost when it is not present in (not retrieved from) the bitstream.
  • For example, when the parameters P are carried by SEI message such as the CVRI or CRI SEI messages as described above, a parameter P is considered as being lost (not present) when the SEI message is not transmitted in the bitstream or when the parsing of the SEI message fails.
  • According to an embodiment of step 102, a parameter P is considered as being corrupted when at least one of the following conditions is fulfilled (a minimal check is sketched after this list):
      • its value is out of a determined range of values (e.g. saturation_gain_num_val equal to 10 when the compliant range is 0 to 6);
      • said parameter does not have a coherent value according to other parameter values (e.g. saturation_gain_y[i] contains an outlier, i.e. a value that is far from the other saturation_gain_y[i] values; typically saturation_gain_y[0] to saturation_gain_y[4] are equal to values in the range 0 to 16, while saturation_gain_y[1] = 255).
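  • A minimal corruption check in the spirit of these two conditions is sketched below; the thresholds and the outlier test (deviation from the median) are illustrative assumptions only.

```python
def looks_corrupted(sp: dict) -> bool:
    """Illustrative corruption test on the saturation-gain metadata."""
    n = sp.get("saturation_gain_num_val", 0)
    if not 0 <= n <= 6:                            # value out of its compliant range
        return True
    ys = sp.get("saturation_gain_y", [])
    if ys:
        median = sorted(ys)[len(ys) // 2]
        if any(abs(y - median) > 64 for y in ys):  # one value far from the others
            return True
    return False
```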
  • According to an embodiment of the method, a recovery mode RMi is to replace all the parameters P by recovered parameters Pr even if only some of the parameters P are corrupted, lost or not aligned with the decoded image Î2 to which graphics or an overlay has been added.
  • According to an embodiment of the method, another recovery mode RMi is to replace each lost, corrupted or not aligned parameter P by a recovered parameter Pr.
  • According to an embodiment of the method, a recovery mode RMi is to replace a lost, corrupted or not aligned parameter P by a value of a set of pre-determined parameter values previously stored.
  • For example, a set of pre-determined parameter values may gather a pre-determined value for at least one metadata carried by the CRI and/or CVRI SEI message.
  • A specific set of pre-determined parameter values may be determined, for example, for each single layer based distribution solution identified by the information data ID.
  • Table 1 is a non-limitative example of specific sets of predetermined values for three different single layer based distribution solutions.
  • TABLE 1

        Information data ID | ETSI TS 103 433 parameters
        --------------------+---------------------------------------------
        0                   | Shadow gain: 1.16; Highlight gain: 2.0;
                            | MidTones Adjustment: 1.5; White stretch: 0;
                            | Black stretch: 0; Saturation Gain [ ]:
                            | {(0,64); (24,64); (62,59); (140,61);
                            | (252,64); (255,64)}
        1                   | Shadow gain: 1.033; Highlight gain: 2.0;
                            | MidTones Adjustment: 1.5; White stretch: 0;
                            | Black stretch: 0
        2                   | Shadow gain: 1.115; Highlight gain: 2.0;
                            | MidTones Adjustment: 1.5; White stretch: 0
  • According to Table 1, three different sets of predetermined values are defined according to the information data ID. These sets of predetermined values define recovered values for some parameters used by the post-processing stage; the other parameters are set to fixed values that are common to the different single layer solutions.
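  • The pre-determined sets of Table 1 may be held, for example, in a simple look-up structure keyed by the information data ID, as sketched below (the dictionary layout is illustrative; the values are those of Table 1):

```python
# Pre-determined parameter values of Table 1, keyed by information data ID.
PRESETS = {
    0: {"shadow_gain": 1.16, "highlight_gain": 2.0, "midtones_adjustment": 1.5,
        "white_stretch": 0, "black_stretch": 0,
        "saturation_gain": [(0, 64), (24, 64), (62, 59), (140, 61), (252, 64), (255, 64)]},
    1: {"shadow_gain": 1.033, "highlight_gain": 2.0, "midtones_adjustment": 1.5,
        "white_stretch": 0, "black_stretch": 0},
    2: {"shadow_gain": 1.115, "highlight_gain": 2.0, "midtones_adjustment": 1.5,
        "white_stretch": 0},
}

def recover_from_presets(info_id: int, name: str):
    """Recovery mode replacing a lost/corrupted parameter by its stored value."""
    return PRESETS[info_id][name]
```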
  • According to an embodiment of step 104, a recovery mode RMi is selected according to either at least one characteristic of the original video (image I1), typically the peak luminance of the original content, or of a mastering display used to grade the input image data or the image data to be reconstructed, or at least one characteristic of another video, typically the peak luminance of the reconstructed image I3, or of a target display.
  • According to an embodiment, a recovery mode RMi is to check whether a characteristic of the original video (I1), or of a mastering display used to grade the input image data or the image data to be reconstructed (e.g. a characteristic as defined in ST 2086), is present, and to compute at least one recovered parameter from said characteristic. If said characteristic of the input video is not present and a characteristic of a mastering display is not present, one checks whether a characteristic of the reconstructed image I3 or of the target display is present (e.g. the peak luminance as defined in CTA-861.3) and computes at least one recovered parameter from said characteristic. If said characteristic of the reconstructed image I3 is not present and said characteristic of the target display is not present, at least one recovered parameter is set to a fixed value (e.g. fixed by a video standardization committee or an industry forum, such as, for example, 1000 cd/m2).
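  • This selection cascade may be sketched as follows; the function and argument names are illustrative, and the 1000 cd/m2 default is the example value given above.

```python
def recover_peak_luminance(mastering_peak=None, target_peak=None, default=1000.0):
    """Fallback chain: mastering display (ST 2086/MDCV), then target
    display (e.g. CTA-861.3 peak luminance), else a fixed value."""
    if mastering_peak is not None:
        return mastering_peak     # characteristic of the mastering display
    if target_peak is not None:
        return target_peak        # characteristic of the target display
    return default                # fixed value, e.g. 1000 cd/m2
```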
  • In accordance with a non-limitative example, Table 2 provides examples of recovery values for some parameters used by the post-processing stage, which depend on the presence of available information on the input/output content and the mastering/target displays.
  • TABLE 2

        Syntax element              | Recovery value
        ----------------------------+----------------------------------------
        matrix_coefficient_value[i] | {889; 470; 366; 994} if BT.2020;
                                    | {915; 464; 392; 987} if BT.709
        shadow_gain_control         | recovery mode 1 if an MDCV SEI message
                                    | is present; recovery mode 2 otherwise
  • The parameters matrix_coefficient_value[i] may be set according to the input/output video color space, BT.709 or BT.2020 (a characteristic of the input or output video), obtained by parsing an MDCV SEI/ST 2086 message if present. The recovery mode depends on said color spaces.
  • The parameter shadow_gain_control may be computed according to a value obtained by parsing an MDCV SEI/ST 2086 message if present.
  • For example, an information representative of the peak luminance of a mastering display is obtained from said MDCV SEI/ST 2086 message and the parameter shadow_gain_control is computed by (recovery mode 1): shadow_gain_control = Clip3(0; 255; Floor(rs(hdrDisplayMaxLuminance) × 127.5 + 0.5)), with rs(x) = 7.5 / ln(1 + 4.7 · (x/100)^(1/2.4)) − 2, and Clip3(x; y; z) = x if z < x, y if z > y, and z otherwise.
  • It is likely that, at service level or for a specific workflow, the value of hdrDisplayMaxLuminance is known. This value may also be set to the peak luminance of a target (presentation) display when this characteristic is available. Otherwise (recovery mode 2), it is arbitrarily set to a default value, typically 1000 cd/m2. This default value corresponds to a currently observed reference maximum display mastering luminance in most of the current HDR markets.
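  • A direct transcription of recovery mode 1 is sketched below. Note that the formula for rs( ) is reconstructed from the garbled text above and should be checked against ETSI TS 103 433; this reading, like the function names, is an assumption.

```python
import math

def recover_shadow_gain_control(hdr_display_max_luminance=1000.0):
    """Sketch of recovery mode 1 for shadow_gain_control."""
    def rs(x):
        # reconstructed reading of the formula above (assumption)
        return 7.5 / math.log(1.0 + 4.7 * (x / 100.0) ** (1.0 / 2.4)) - 2.0

    def clip3(lo, hi, z):
        return lo if z < lo else hi if z > hi else z

    return clip3(0, 255, math.floor(rs(hdr_display_max_luminance) * 127.5 + 0.5))
```

    With the default of 1000 cd/m2, this sketch yields a code value of about 115, i.e. within the expected 0-255 range.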
  • FIG. 6 shows another example of the use of a method for reconstructing an image I3 from decoded image data Î2 and a set of parameters SP obtained from a bitstream B in accordance with an example of the present principles.
  • Said example is intended to be implemented, at least partially, in any (mid-)device implementing an overlay inserting and mixing mechanism (e.g. a Set-Top-Box or an UltraHD Blu-ray player) and signaling/sending an event (typically an overlay_present_flag set to 1) to a decision module indicating that an overlay has to be added to the decoded image Î2.
  • When an overlay (graphics) does not have to be added to a decoded image Î2, the set of parameters SP is obtained (step 10), the decoded image Î2 is obtained (step 11) and the image I3 is reconstructed (step 12) as described in FIG. 1.
  • When an overlay has to be added to a decoded image Î2, the decoded image Î2 is obtained (step 11) and, in step 60, a composite image I′2 is obtained by adding the graphics (overlay) to the decoded image Î2.
  • The information data ID is then obtained (step 103), a recovery mode is selected (step 104) and the selected recovery mode RMi is applied (step 105) to obtain recovered parameters Pr.
  • The image I3 is then reconstructed (step 12) from the recovered parameters Pr and the decoded image Î2.
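  • The FIG. 6 decision flow may be sketched as follows; the helper callables (compose, post_process, the recovery_modes table) are supplied by the caller and all names are illustrative.

```python
def reconstruct_with_overlay(decoded_image, params, overlay, info_id,
                             recovery_modes, compose, post_process):
    """Sketch of the FIG. 6 flow: parameters P are treated as not aligned
    as soon as an overlay is mixed into the decoded image."""
    if overlay is None:
        return post_process(decoded_image, params)      # steps 10, 11, 12
    composite = compose(decoded_image, overlay)          # step 60: composite image I'2
    pr = recovery_modes[info_id](params, composite)      # steps 103-105: recovered Pr
    return post_process(composite, pr)                   # step 12 uses Pr
```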
  • According to an embodiment, the parameters Pr are obtained by training over a large set of images with different characteristics (bright, dark, with logos, . . . ).
  • Optionally (not shown in FIG. 6), step 12 may be implemented in a remote device such as a TV set. In that case, either the decoded image Î2 plus the parameters P, or the composite image I′2 plus the parameters Pr, are transmitted to said TV set.
  • In FIGS. 1-6, the modules are functional units, which may or may not be in relation with distinguishable physical units. For example, these modules or some of them may be brought together in a unique component or circuit, or contribute to functionalities of a software program. A contrario, some modules may potentially be composed of separate physical entities. The apparatus which are compatible with the present principles are implemented using either pure hardware, for example using dedicated hardware such as an ASIC or an FPGA or VLSI, respectively «Application Specific Integrated Circuit», «Field-Programmable Gate Array», «Very Large Scale Integration», or from several integrated electronic components embedded in a device, or from a blend of hardware and software components.
  • FIG. 7 represents an exemplary architecture of a device 70 which may be configured to implement a method described in relation with FIGS. 1-6.
  • Device 70 comprises the following elements, linked together by a data and address bus 71:
      • a microprocessor 72 (or CPU), which is, for example, a DSP (or Digital Signal Processor);
      • a ROM (or Read Only Memory) 73;
      • a RAM (or Random Access Memory) 74;
      • an I/O interface 75 for receiving, from an application, data to transmit; and
      • a battery 76.
  • In accordance with an example, the battery 76 is external to the device. In each of the mentioned memories, the word «register» used in the specification can correspond to an area of small capacity (some bits) or to a very large area (e.g. a whole program or a large amount of received or decoded data). The ROM 73 comprises at least a program and parameters. The ROM 73 may store algorithms and instructions to perform techniques in accordance with the present principles. When switched on, the CPU 72 uploads the program into the RAM 74 and executes the corresponding instructions.
  • The RAM 74 comprises, in a register, the program executed by the CPU 72 and uploaded after switch-on of the device 70, input data in a register, intermediate data in different states of the method in a register, and other variables used for the execution of the method in a register.
  • The implementations described herein may be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method or a device), the implementation of features discussed may also be implemented in other forms (for example a program). An apparatus may be implemented in, for example, appropriate hardware, software, and firmware. The methods may be implemented in, for example, an apparatus such as, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, computers, cell phones, portable/personal digital assistants (“PDAs”), and other devices that facilitate communication of information between end-users.
  • In accordance with an example, the input video or an original image of an input video is obtained from a source. For example, the source belongs to a set comprising:
      • a local memory (73 or 74), e.g. a video memory or a RAM (or Random Access Memory), a flash memory, a ROM (or Read Only Memory), a hard disk;
      • a storage interface (75), e.g. an interface with a mass storage, a RAM, a flash memory, a ROM, an optical disc or a magnetic support;
      • a communication interface (75), e.g. a wireline interface (for example a bus interface, a wide area network interface, a local area network interface) or a wireless interface (such as an IEEE 802.11 interface or a Bluetooth® interface); and
      • an image capturing circuit (e.g. a sensor such as, for example, a CCD (or Charge-Coupled Device) or CMOS (or Complementary Metal-Oxide-Semiconductor)).
  • In accordance with examples, the bitstreams carrying the metadata are sent to a destination. As an example, one or both of these bitstreams are stored in a local or remote memory, e.g. a video memory (74), a RAM (74) or a hard disk (73). In a variant, at least one of the bitstreams is sent to a storage interface (75), e.g. an interface with a mass storage, a flash memory, a ROM, an optical disc or a magnetic support, and/or transmitted over a communication interface (75), e.g. an interface to a point-to-point link, a communication bus, a point-to-multipoint link or a broadcast network.
  • In accordance with other examples, the bitstream carrying the metadata is obtained from a source. Exemplarily, the bitstream is read from a local memory, e.g. a video memory (74), a RAM (74), a ROM (73), a flash memory (73) or a hard disk (73). In a variant, the bitstream is received from a storage interface (75), e.g. an interface with a mass storage, a RAM, a ROM, a flash memory, an optical disc or a magnetic support, and/or received from a communication interface (75), e.g. an interface to a point-to-point link, a bus, a point-to-multipoint link or a broadcast network.
  • In accordance with examples, the device 70, being configured to implement a method described above, belongs to a set comprising:
      • a mobile device;
      • a communication device;
      • a game device;
      • a tablet (or tablet computer);
      • a laptop;
      • a still image camera;
      • a video camera;
      • an encoding/decoding chip;
      • a TV set;
      • a set-top-box;
      • a display;
      • a still image server; and
      • a video server (e.g. a broadcast server, a video-on-demand server or a web server).
  • Implementations of the various processes and features described herein may be embodied in a variety of different equipment or applications. Examples of such equipment include an encoder, a decoder, a post-processor processing output from a decoder, a pre-processor providing input to an encoder, a video coder, a video decoder, a video codec, a web server, a set-top box, a laptop, a personal computer, a cell phone, a PDA, and any other device for processing an image or a video, as well as other communication devices. As should be clear, the equipment may be mobile and even installed in a mobile vehicle.
  • Additionally, the methods may be implemented by instructions being performed by a processor, and such instructions (and/or data values produced by an implementation) may be stored on a computer readable storage medium. A computer readable storage medium can take the form of a computer readable program product embodied in one or more computer readable medium(s) and having computer readable program code embodied thereon that is executable by a computer. A computer readable storage medium as used herein is considered a non-transitory storage medium given the inherent capability to store the information therein as well as the inherent capability to provide retrieval of the information therefrom. A computer readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. It is to be appreciated that the following, while providing more specific examples of computer readable storage mediums to which the present principles can be applied, is merely an illustrative and not exhaustive listing as is readily appreciated by one of ordinary skill in the art: a portable computer diskette; a hard disk; a read-only memory (ROM); an erasable programmable read-only memory (EPROM or Flash memory); a portable compact disc read-only memory (CD-ROM); an optical storage device; a magnetic storage device; or any suitable combination of the foregoing.
  • The instructions may form an application program tangibly embodied on a processor-readable medium.
  • Instructions may be, for example, in hardware, firmware, software, or a combination. Instructions may be found in, for example, an operating system, a separate application, or a combination of the two. A processor may be characterized, therefore, as, for example, both a device configured to carry out a process and a device that includes a processor-readable medium (such as a storage device) having instructions for carrying out a process. Further, a processor-readable medium may store, in addition to or in lieu of instructions, data values produced by an implementation.
  • As will be evident to one of skill in the art, implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted. The information may include, for example, instructions for performing a method, or data produced by one of the described implementations. For example, a signal may be formatted to carry as data the rules for writing or reading the syntax of a described example of the present principles, or to carry as data the actual syntax-values written by a described example of the present principles. Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal. The formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream. The information that the signal carries may be, for example, analog or digital information. The signal may be transmitted over a variety of different wired or wireless links, as is known. The signal may be stored on a processor-readable medium.
  • A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made. For example, elements of different implementations may be combined, supplemented, modified, or removed to produce other implementations. Additionally, one of ordinary skill will understand that other structures and processes may be substituted for those disclosed and the resulting implementations will perform at least substantially the same function(s), in at least substantially the same way(s), to achieve at least substantially the same result(s) as the implementations disclosed. Accordingly, these and other implementations are contemplated by this application.

Claims (16)

1-22. (canceled)
23. A method comprising:
obtaining decoded image data and parameters, wherein the parameters were processed from original image data;
in response to a determination that a graphic is to be overlaid on the decoded image:
selecting a recovery mode according to information data indicating how the parameters were processed;
recovering the parameters by applying the selected recovery mode; and
providing, to a display device, data to reconstruct image data based on the decoded image data and the recovered parameters;
wherein the decoded image data and the reconstructed image data have different dynamic ranges.
24. The method of claim 23, wherein the information data is explicitly signaled by a syntax element in a bitstream.
25. The method of claim 23, wherein the information data is implicitly signaled.
26. The method of claim 23, wherein the information data identifies the processing applied to the original image data to process the parameters.
27. The method of claim 23, wherein selecting the recovery mode comprises selecting a recovery mode based on at least one characteristic of the original image data or of a mastering display used to grade the original image data, or at least one characteristic of the reconstructed image data or of a target display.
28. A device comprising at least one processor and at least one memory having stored instructions operative, when executed by the at least one processor, to cause the device to:
obtain decoded image data and parameters, wherein the parameters were processed from original image data;
in response to a determination that a graphic is to be overlaid on the decoded image:
select a recovery mode according to information data indicating how the parameters were processed;
recover the parameters by applying the selected recovery mode; and
provide, to a display device, data to reconstruct image data based on the decoded image data and the recovered parameters;
wherein the decoded image data and the reconstructed image data have different dynamic ranges.
29. The device of claim 28, wherein the information data is explicitly signaled by a syntax element in a bitstream.
30. The device of claim 28, wherein the information data is implicitly signaled.
31. The device of claim 28, wherein the information data identifies the processing applied to the original image data to process the parameters.
32. The device of claim 28, wherein selecting the recovery mode comprises selecting a recovery mode based on at least one characteristic of the original image data or of a mastering display used to grade the original image data, or at least one characteristic of the reconstructed image data or of a target display.
33. A non-transitory computer-readable storage medium having stored instructions that, when executed by a processor, cause the processor to:
obtain decoded image data and parameters, wherein the parameters were processed from original image data;
in response to a determination that a graphic is to be overlaid on the decoded image:
select a recovery mode according to information data indicating how the parameters were processed;
recover the parameters by applying the selected recovery mode; and
provide, to a display device, data to reconstruct image data based on the decoded image data and the recovered parameters;
wherein the decoded image data and the reconstructed image data have different dynamic ranges.
34. The non-transitory computer-readable storage medium of claim 33, wherein the information data is explicitly signaled by a syntax element in a bitstream.
35. The non-transitory computer-readable storage medium of claim 33, wherein the information data is implicitly signaled.
36. The non-transitory computer-readable storage medium of claim 33, wherein the information data identifies the processing applied to the original image data to process the parameters.
37. The non-transitory computer-readable storage medium of claim 33, wherein selecting the recovery mode comprises selecting a recovery mode based on at least one characteristic of the original image data or of a mastering display used to grade the original image data, or at least one characteristic of the reconstructed image data or of a target display.
US17/669,218 2017-02-24 2022-02-10 Method and device for reconstructing image data from decoded image data Pending US20220167019A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/669,218 US20220167019A1 (en) 2017-02-24 2022-02-10 Method and device for reconstructing image data from decoded image data

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
EP17305212.7A EP3367684A1 (en) 2017-02-24 2017-02-24 Method and device for decoding a high-dynamic range image
EP17305212.7 2017-02-24
EP17158481 2017-02-28
EP17158481.6 2017-02-28
JP2017239281A JP7086587B2 (en) 2017-02-24 2017-12-14 Method and device for reconstructing image data from decoded image data
JP2017-239281 2017-12-14
US15/868,111 US11310532B2 (en) 2017-02-24 2018-01-11 Method and device for reconstructing image data from decoded image data
US17/669,218 US20220167019A1 (en) 2017-02-24 2022-02-10 Method and device for reconstructing image data from decoded image data

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/868,111 Continuation US11310532B2 (en) 2017-02-24 2018-01-11 Method and device for reconstructing image data from decoded image data

Publications (1)

Publication Number Publication Date
US20220167019A1 true US20220167019A1 (en) 2022-05-26

Family

ID=63245424

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/868,111 Active 2038-06-15 US11310532B2 (en) 2017-02-24 2018-01-11 Method and device for reconstructing image data from decoded image data
US17/669,218 Pending US20220167019A1 (en) 2017-02-24 2022-02-10 Method and device for reconstructing image data from decoded image data

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/868,111 Active 2038-06-15 US11310532B2 (en) 2017-02-24 2018-01-11 Method and device for reconstructing image data from decoded image data

Country Status (2)

Country Link
US (2) US11310532B2 (en)
JP (1) JP7340659B2 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3367658A1 (en) * 2017-02-24 2018-08-29 Thomson Licensing Method and device for reconstructing an hdr image
US10939158B2 (en) * 2017-06-23 2021-03-02 Samsung Electronics Co., Ltd. Electronic apparatus, display apparatus and control method thereof
US11928796B2 (en) * 2017-12-01 2024-03-12 Interdigital Patent Holdings, Inc. Method and device for chroma correction of a high-dynamic-range image
US10880491B2 (en) * 2018-09-15 2020-12-29 Panasonic Intellectual Property Management Co., Ltd. Imaging apparatus including controller for converting dynamic range image data
JP7341656B2 (en) * 2018-12-11 2023-09-11 キヤノン株式会社 Image processing device, control method, program, and storage medium
EP3906681A4 (en) * 2019-02-01 2022-06-01 Beijing Bytedance Network Technology Co., Ltd. Interactions between in-loop reshaping and inter coding tools
EP3925216A4 (en) 2019-03-23 2022-06-15 Beijing Bytedance Network Technology Co., Ltd. Restrictions on adaptive-loop filtering parameter sets
CN112637589A (en) * 2020-07-31 2021-04-09 西安诺瓦星云科技股份有限公司 Data processing method and device and video processing equipment
US11606605B1 (en) * 2021-09-30 2023-03-14 Samsung Electronics Co., Ltd. Standard dynamic range (SDR) / hybrid log-gamma (HLG) with high dynamic range (HDR) 10+

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5856973A (en) * 1996-09-10 1999-01-05 Thompson; Kenneth M. Data multiplexing in MPEG server to decoder systems
JP2006279388A (en) 2005-03-29 2006-10-12 Matsushita Electric Ind Co Ltd Apparatus and method for decoding moving picture
WO2008010023A1 (en) * 2006-07-12 2008-01-24 Freescale Semiconductor, Inc. A method for gamma correction and a device having gamma correction capabilities
WO2010151555A1 (en) 2009-06-24 2010-12-29 Dolby Laboratories Licensing Corporation Method for embedding subtitles and/or graphic overlays in a 3d or multi-view video data
EP2375751A1 (en) 2010-04-12 2011-10-12 Panasonic Corporation Complexity reduction of edge-detection based spatial interpolation
GB2500330A (en) 2010-12-03 2013-09-18 Lg Electronics Inc Receiving device and method for receiving multiview three-dimensional broadcast signal
JP2013066075A (en) 2011-09-01 2013-04-11 Sony Corp Transmission device, transmission method and reception device
CN103237168A (en) 2013-04-02 2013-08-07 清华大学 Method for processing high-dynamic-range image videos on basis of comprehensive gains
US9648351B2 (en) 2013-10-24 2017-05-09 Dolby Laboratories Licensing Corporation Error control in multi-stream EDR video codec
US9819947B2 (en) 2014-01-02 2017-11-14 Vid Scale, Inc. Methods, apparatus and systems for scalable video coding with mixed interlace and progressive content
CN111246050B (en) 2014-02-25 2022-10-28 苹果公司 System, apparatus and method for video data processing
US20150264345A1 (en) * 2014-03-13 2015-09-17 Mitsubishi Electric Research Laboratories, Inc. Method for Coding Videos and Pictures Using Independent Uniform Prediction Mode
PT3324629T (en) 2014-05-28 2019-10-08 Koninklijke Philips Nv Methods and apparatuses for encoding an hdr images, and methods and apparatuses for use of such encoded images
WO2015196456A1 (en) 2014-06-27 2015-12-30 深圳市大疆创新科技有限公司 High dynamic range video record method and apparatus based on bayer color filter array
CN106937121B (en) 2015-12-31 2021-12-10 中兴通讯股份有限公司 Image decoding and encoding method, decoding and encoding device, decoder and encoder
JP7086587B2 (en) 2017-02-24 2022-06-20 インターデジタル ヴイシー ホールディングス, インコーポレイテッド Method and device for reconstructing image data from decoded image data

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080019565A1 (en) * 2003-06-26 2008-01-24 Fotonation Vision Limited Digital Image Adjustable Compression and Resolution Using Face Detection Information
US20120051635A1 (en) * 2009-05-11 2012-03-01 Dolby Laboratories Licensing Corporation Light Detection, Color Appearance Models, and Modifying Dynamic Range for Image Display
US20120099794A1 (en) * 2009-07-06 2012-04-26 Koninklijke Philips Electronics N.V. Retargeting of image with overlay graphic
US20170287433A1 (en) * 2016-03-29 2017-10-05 Bby Solutions, Inc. Dynamic display device adjustment for streamed video

Also Published As

Publication number Publication date
US20180249182A1 (en) 2018-08-30
JP7340659B2 (en) 2023-09-07
US11310532B2 (en) 2022-04-19
JP2022130436A (en) 2022-09-06

Similar Documents

Publication Publication Date Title
US20220167019A1 (en) Method and device for reconstructing image data from decoded image data
US11024017B2 (en) Tone mapping adaptation for saturation control
EP3367685B1 (en) Method and device for reconstructing image data from decoded image data
US11741585B2 (en) Method and device for obtaining a second image from a first image when the dynamic range of the luminance of the first image is greater than the dynamic range of the luminance of the second image
US10600163B2 (en) Method and device for reconstructing a display adapted HDR image
US11062432B2 (en) Method and device for reconstructing an HDR image
US11423522B2 (en) Method and apparatus for colour correction during HDR to SDR conversion
EP3557872A1 (en) Method and device for encoding an image or video with optimized compression efficiency preserving image or video fidelity
US11989855B2 (en) Saturation control for high-dynamic range reconstruction
EP3367684A1 (en) Method and device for decoding a high-dynamic range image
CA2986520A1 (en) Method and device for reconstructing a display adapted hdr image
RU2776101C1 (en) Method and device for recovering hdr image adapted to the display
JP2019097013A (en) Method for restructuring display-adaptive hdr image and device
US11575944B2 (en) Method and apparatus for encoding an image
EP3528201A1 (en) Method and device for controlling saturation in a hdr image

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: THOMSON LICENSING, FRANCE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ANDRIVON, PIERRE;CARAMELLI, NICOLAS;TOUZE, DAVID;REEL/FRAME:059213/0380

Effective date: 20180118

Owner name: INTERDIGITAL VC HOLDINGS, INC., DELAWARE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:THOMSON LICENSING SAS;REEL/FRAME:059356/0911

Effective date: 20180223

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED