WO2015177125A1 - Method and device for encoding a frame and/or decoding a bitstream representing a frame - Google Patents

Method and device for encoding a frame and/or decoding a bitstream representing a frame

Info

Publication number
WO2015177125A1
WO2015177125A1 · PCT/EP2015/060965
Authority
WO
WIPO (PCT)
Prior art keywords
frame
component
decoded
backlight
residual
Prior art date
Application number
PCT/EP2015/060965
Other languages
English (en)
Inventor
Sebastien Lasserre
Fabrice Leleannec
Yannick Olivier
David Touze
Original Assignee
Thomson Licensing
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Licensing
Publication of WO2015177125A1

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G3/34 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters by control of light from an independent source
    • G09G3/3406 Control of illumination source
    • G09G3/342 Control of illumination source using several illumination sources separately controlled corresponding to different display panel areas, e.g. along one dimension such as lines
    • G09G3/3426 Control of illumination source using several illumination sources separately controlled corresponding to different display panel areas, e.g. along one dimension such as lines the different display panel areas being distributed in two dimensions, e.g. matrix
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/184 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/98 Adaptive-dynamic-range coding [ADRC]
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00 Control of display operating conditions
    • G09G2320/06 Adjustment of display parameters
    • G09G2320/0626 Adjustment of display parameters for control of overall brightness
    • G09G2320/0646 Modulation of illumination source brightness and image signal correlated to each other
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00 Aspects of display data processing
    • G09G2340/02 Handling of images in compressed format, e.g. JPEG, MPEG
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2350/00 Solving problems of bandwidth in display systems
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00 Aspects of the architecture of display systems
    • G09G2360/16 Calculation or use of calculated indices related to luminance levels in display data

Definitions

  • the present disclosure generally relates to frame encoding and decoding.
  • the technical field of the present disclosure is related to encoding/decoding of a frame whose pixel values belong to a high-dynamic range.
  • LDR frames: Low-Dynamic-Range frames.
  • HDR frames: High-Dynamic-Range frames.
  • pixel values are usually represented in floating-point format (either 32-bit or 16-bit for each component, namely float or half-float), the most popular format being the OpenEXR half-float format (16 bits per RGB component, i.e. 48 bits per pixel), or in integers with a long representation, typically at least 16 bits.
  • a typical approach for encoding an HDR frame is to reduce the dynamic range of the frame in order to encode the frame by means of a legacy encoding scheme (initially configured to encode LDR frames).
  • a backlight frame is determined from the luminance component of the input HDR frame.
  • a residual frame is then obtained by dividing the input HDR frame by the backlight frame, and both the backlight frame and the residual frame are encoded by a legacy encoder such as H.264/AVC ("Advanced video coding for generic audiovisual services", SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS, Recommendation ITU-T H.264, Telecommunication Standardization Sector of ITU, January 2012) or HEVC ("High Efficiency Video Coding", SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS, Recommendation ITU-T H.265, Telecommunication Standardization Sector of ITU, April 2013).
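As a concrete illustration of this two-stream approach, the sketch below builds a crude backlight by block-averaging the frame and derives the residual by pixel-per-pixel division. It is a minimal sketch: the NumPy array input, the block-mean backlight and all names are illustrative assumptions, and the legacy encoding of the two streams (H.264/AVC or HEVC) is left out.

```python
import numpy as np

def split_into_backlight_and_residual(hdr, block=32, eps=1e-6):
    """Coarse backlight (per-block mean, upsampled) and residual = hdr / backlight."""
    h, w = hdr.shape
    bh, bw = h // block, w // block              # assumes h, w >= block
    # Block-average the luminance to get a low-frequency "backlight" frame.
    core = hdr[:bh * block, :bw * block].reshape(bh, block, bw, block).mean(axis=(1, 3))
    backlight = np.kron(core, np.ones((block, block)))
    # Pad back to the original size if h or w is not a multiple of block.
    backlight = np.pad(backlight,
                       ((0, h - backlight.shape[0]), (0, w - backlight.shape[1])),
                       mode='edge')
    residual = hdr / np.maximum(backlight, eps)  # pixel-per-pixel division
    return backlight, residual

# Example: a synthetic 256x256 HDR luminance frame with one very bright patch.
hdr = np.full((256, 256), 50.0)
hdr[100:120, 100:120] = 5000.0
ba, res = split_into_backlight_and_residual(hdr)
```

The residual then has a much lower dynamic range than the input, which is what makes a legacy LDR encoder applicable to it.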
  • the following presents a simplified summary of the disclosure in order to provide a basic understanding of some aspects of the disclosure. This summary is not an extensive overview of the disclosure. It is not intended to identify key or critical elements of the disclosure. The following summary merely presents some aspects of the disclosure in a simplified form as a prelude to the more detailed description provided below.
  • mapping each component of the residual frame such that the mapping of each pixel of a component of the residual frame depends on the pixel value of either the backlight frame or a decoded version of the backlight frame associated with this pixel;
  • mapping a component of the residual frame comprises applying a mapping function to the pixel values of this component and multiplying the pixel values of the resulting frame by a scaling factor.
  • the method further comprises obtaining the scaling factor from the pixel value of either the backlight frame or a decoded version of the backlight frame associated with this pixel and a maximum bound of the pixel values of the associated component of the frame to be encoded.
  • the scaling factor is obtained also from a minimum bound of the pixel values of a component of the frame to be encoded.
  • the processor is further configured for encoding information which represents either a maximum bound or a minimum bound, or both, of the pixel values of the frame to be encoded.
  • the information is a minimum bound or a maximum bound or both for each pixel of the frame.
  • the present disclosure further relates to a method for decoding a frame from at least one bitstream representing a backlight frame obtained from the frame and a residual frame calculated by dividing the frame by the backlight frame.
  • the decoding method comprises a processor configured for obtaining a decoded backlight frame and a decoded residual frame by at least partially decoding the at least one bitstream.
  • the method is characterized in that the processor is further configured, before obtaining the decoded frame, for unmapping each component of the decoded residual frame such that the unmapping of each pixel of a component of the decoded residual frame depends on the pixel value of the decoded backlight frame associated with this pixel.
  • unmapping a component of the decoded residual frame comprises dividing the pixel values of the component by a scaling factor and applying a mapping function to the pixel values of the resulting component.
  • the processor is further configured for obtaining the scaling factor from the pixel value of the decoded backlight frame associated with this pixel and a maximum bound of the pixel values of the associated component of the frame to be decoded.
  • the processor is further configured for obtaining the maximum bound from information obtained by at least partially decoding at least one bitstream.
  • the present disclosure relates to a computer program product, to a processor-readable medium and to a non-transitory storage medium.
  • Fig. 1 shows a block diagram of the steps of a method for encoding a frame I in accordance with an embodiment of the disclosure;
  • Fig. 1a shows a block diagram of the steps of a method for encoding a frame I in accordance with a variant of the embodiment of Fig. 1;
  • Fig. 2 shows a block diagram of a step of the method in accordance with an embodiment of the disclosure;
  • Fig. 3 shows a block diagram of a step of the method in accordance with an embodiment of the disclosure;
  • Fig. 4 shows a block diagram of a step of the method in accordance with an embodiment of the disclosure;
  • Fig. 5 shows a block diagram of the steps of a method, in accordance with an embodiment of the disclosure, for decoding a bitstream representing a residual frame calculated by dividing a frame by a backlight frame;
  • Fig. 6 shows an example of an architecture of a device in accordance with an embodiment of the disclosure;
  • Fig. 7 shows two remote devices communicating over a communication network in accordance with an embodiment of the disclosure.
  • each block represents a circuit element, module, or portion of code which comprises one or more executable instructions for implementing the specified logical function(s).
  • the function(s) noted in the blocks may occur out of the order noted. For example, two blocks shown in succession may, in fact, be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending on the functionality involved.
  • Fig. 1 shows a block diagram of the steps of a method for encoding a frame I in accordance with an embodiment of the disclosure.
  • a module IC obtains the luminance component L and potentially at least one color component C(i) of the frame I to be encoded.
  • in the following, a component L or C(i) of the frame I is denoted Y.
  • the luminance component L is obtained, for instance in the 709 gamut, by a linear combination of the RGB components which is given by: L = 0.2126·R + 0.7152·G + 0.0722·B.
  • a module BAM determines a backlight frame Bal from the luminance component L of the frame I.
  • a module BI determines a backlight frame Ba as being a weighted linear combination of shape functions ψ_i given by: Ba = Σ_i a_i·ψ_i   (1), with a_i the weighting coefficients.
  • determining the backlight frame Bal thus consists in finding optimal weighting coefficients a_i (and potentially also optimal shape functions ψ_i if not known beforehand) in order that the backlight frame Ba fits the luminance component L.
  • there are many well-known methods to find the weighting coefficients a_i. For example, one may use a least mean square method to minimize the mean square error between the backlight frame Ba and the luminance component L (a small sketch of this option follows below).
  • the disclosure is not limited to any specific method to obtain the backlight frame Ba.
  • shape functions may be the true physical response of a display backlight (made of LEDs for instance, each shape function then corresponding to the response of one LED) or may be a pure mathematical construction in order to best fit the luminance component.
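To make the least-mean-square option above concrete, the sketch below fits the weighting coefficients a_i of equation (1) by linear least squares. The Gaussian shape functions centered on a regular grid are an illustrative assumption, not the patent's shape functions.

```python
import numpy as np

def fit_backlight_weights(L, grid=8, sigma=None):
    """Least-squares fit of a_i so that Ba = sum_i a_i * psi_i approximates L.

    The psi_i are assumed Gaussian bumps centered on a grid x grid lattice.
    """
    h, w = L.shape
    sigma = sigma if sigma is not None else max(h, w) / grid   # assumed bump width
    yy, xx = np.mgrid[0:h, 0:w]
    centers = [(h * (i + 0.5) / grid, w * (j + 0.5) / grid)
               for i in range(grid) for j in range(grid)]
    # Each shape function psi_i becomes one column of the design matrix.
    psi = np.stack([np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2)).ravel()
                    for cy, cx in centers], axis=1)
    # Minimize ||psi @ a - L||^2, the mean square error between Ba and L.
    a, *_ = np.linalg.lstsq(psi, L.ravel(), rcond=None)
    return a, (psi @ a).reshape(h, w)
```

Only the vector of weights a_i (plus the shape functions when they are not known beforehand) then needs to go into the bitstream F1, which is what keeps the backlight stream cheap.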
  • the backlight frame Bal, output from step 11, is the backlight frame Ba given by equation (1).
  • in a variant, a module BM modulates the backlight frame Ba (given by equation (1)) with a mean luminance value L_MEAN of the frame I obtained by means of a module HL.
  • the backlight frame Bal, output from step 11, is then the modulated backlight frame.
  • in an embodiment, the module HL is configured to calculate the mean luminance value L_MEAN over the whole luminance component L.
  • in another embodiment, the module HL is configured to calculate the mean luminance value L_MEAN by a robust estimate that limits the influence of extreme pixel values (one possible form is sketched below).
  • this last embodiment is advantageous because it prevents the mean luminance value L_MEAN from being influenced by a few pixels with extremely high values, which usually leads to very annoying temporal mean-brightness instability when the frame I belongs to a sequence of frames.
  • the disclosure is not limited to a specific embodiment for calculating the mean luminance value L_MEAN.
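One possible robust estimate, sketched below under the assumption that clipping the brightest percentile before averaging is acceptable (the text does not pin the formula down), is:

```python
import numpy as np

def robust_mean_luminance(L, percentile=99.0):
    """Mean luminance that a few extreme pixels cannot dominate: clip the
    luminance at a high percentile before averaging."""
    cap = np.percentile(L, percentile)
    return float(np.mean(np.minimum(L, cap)))
```

Because the brightest pixels are capped, L_MEAN changes smoothly from frame to frame even when small specular highlights appear or disappear.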
  • a module N normalizes the backlight frame Ba (given by equation (1)) by its mean value E(Ba) such that one gets a mid-gray-at-one backlight frame Ba_gray for the frame (or for all frames if the frame I belongs to a sequence of frames): Ba_gray = Ba / E(Ba).
  • the module BM is configured to modulate the mid-gray-at-one backlight frame Ba_gray with the mean luminance value L_mean of the frame I, by using the following relation: Ba_mod ≅ cst_mod · L_mean^a · Ba_gray   (2),
  • cst_mod being a modulation coefficient and a being another modulation coefficient less than 1, typically 1/3.
  • the backlight frame Bal, output from step 11, is the modulated backlight frame Ba_mod given by equation (2).
  • the modulation coefficient cst_mod is tuned to get a good-looking brightness for the residual frame and highly depends on the process used to obtain the backlight frame. For example, cst_mod ≈ 1.7 for a backlight frame obtained by least mean squares.
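Combining the normalization and the modulation, here is a minimal sketch; cst_mod and the exponent a default to the values quoted above, and the robust mean is the percentile-clipped one sketched earlier (an assumption, since the text leaves the estimator open):

```python
import numpy as np

def modulate_backlight(Ba, L, cst_mod=1.7, a=1.0 / 3.0, percentile=99.0):
    """Normalize Ba to mid-gray-at-one, then modulate it with a robust mean
    luminance of the frame, following the relation of equation (2)."""
    Ba_gray = Ba / np.mean(Ba)                  # mid-gray-at-one backlight
    cap = np.percentile(L, percentile)          # robust mean, as sketched above
    L_mean = float(np.mean(np.minimum(L, cap)))
    return cst_mod * (L_mean ** a) * Ba_gray
```

The sub-unit exponent compresses the influence of L_mean, so bright and dark frames in a sequence end up with residuals of comparable brightness.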
  • in step 12, the data needed to determine the backlight frame Bal, output from step 11, are encoded by means of an encoder ENC1 and added to a bitstream F1 which may be stored on a local or remote memory and/or transmitted through a communication interface (e.g. to a bus or over a communication network or a broadcast network).
  • the data to be encoded are limited to the weighting coefficients a_i when known non-adaptive shape functions are used, but the shape functions ψ_i may also be a priori unknown and then encoded in the bitstream F1, for instance in the case of a somewhat optimal mathematical construction for better fitting. So, all the weighting coefficients a_i (and potentially the shape functions ψ_i) are encoded in the bitstream F1.
  • the weighting coefficients a_i are quantized before being encoded in order to reduce the size of the bitstream F1.
  • in step 13, at least one component Y_Res of a residual frame Res is calculated by dividing each component Y of the frame I, obtained from the module IC, by the decoded version Bal of the backlight frame: Y_Res = Y / Bal.
  • this division is done pixel by pixel.
  • in step 14, the decoded version Bal of the backlight frame is obtained by at least partially decoding the bitstream F1 by means of a decoder DEC1.
  • as explained above, some data needed to determine the backlight frame, output of step 11, have been encoded in step 12 and are then obtained by at least partially decoding the bitstream F1.
  • a module MAP maps each component Y_Res of the residual frame Res such that the mapping of each pixel Y_Res,p of a component Y_Res of the residual frame Res depends on the pixel value Bal_p of the backlight frame Bal associated with this pixel p (the pixel Y_Res,p and the pixel of the backlight frame have the same spatial position).
  • in a variant, the module MAP maps each component Y_Res of the residual frame Res such that the mapping of each pixel Y_Res,p of a component Y_Res of the residual frame Res depends on the pixel value Bal_p of the decoded version Bal of the backlight frame associated with this pixel p (the pixel Y_Res,p and the pixel of the decoded backlight frame have the same spatial position).
  • in the following, the pixel value Bal_p stands for either a pixel value of the backlight frame Bal or a pixel value of the decoded version of the backlight frame.
  • the mapping thus depends on the values of the backlight frame (or of its decoded version).
  • such a modulation of the mapping by the backlight frame (or its decoded version) allows a better usage of the mapped space by avoiding useless codewords out of the data range, thus leading to a better representation of the mapped residual frame. As a consequence, better compression performance is reached.
  • the variant is advantageous because using the decoded version of the backlight frame rather than the backlight frame Bal itself avoids having to encode the backlight frame losslessly, which would otherwise be mandatory to ensure the same modulation of the mapping at both the encoder and decoder sides.
  • mapping a component Y_Res of the residual frame Res comprises applying a mapping function mf(.) to the pixel values of this component and multiplying the pixel values of the resulting frame by a scaling factor S(.).
  • according to an embodiment, the mapping function is a gamma function.
  • the resulting (gamma-corrected) residual frame Res_R is then given, for example, by: Res_R = Res^γ, γ being a coefficient of a gamma curve equal, for example, to 1/2.4.
  • according to another embodiment, the mapping function is an S-Log function.
  • the S-Logged residual frame Res_SL is given, for example, by: Res_SL = a·ln(Res + b) + c,
  • where a, b and c are coefficients of an S-Log curve determined such that 0 and 1 are invariant, and the derivative of the S-Log curve is continuous at 1 when prolonged by a gamma curve below 1.
  • thus, a, b and c are functions of the parameter γ.
  • according to a further embodiment, the mapping function mf(.) is either a gamma correction or an S-Log correction according to the pixel values of the residual frame.
  • the module MAP applies either the gamma correction or the S-Log correction according to the pixel values of the residual frame Res: for pixel values below 1 the gamma correction is applied, and otherwise the S-Log correction is applied.
  • the resulting residual frame usually has a mean value more or less close to 1 depending on the brightness of the frame I, making the use of the above gamma/S-Log combination particularly efficient, as sketched below.
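To make that combination concrete, here is a minimal sketch in Python. The closed form of the S-Log coefficients is not quoted in the text, so they are solved numerically from the three constraints just stated (0 and 1 invariant, slope equal to the gamma curve's at 1); the bisection solver and the default γ = 1/2.4 are the only extra assumptions.

```python
import numpy as np

def slog_coefficients(gamma=1.0 / 2.4):
    """Solve a, b, c in f(x) = a*ln(x + b) + c so that f(0) = 0, f(1) = 1 and
    f'(1) = gamma (the S-Log curve then prolongs x**gamma smoothly at 1)."""
    # The three constraints reduce to one equation in b; bisection solves it.
    g = lambda b: gamma * (1 + b) * np.log((1 + b) / b) - 1.0
    lo, hi = 1e-6, 10.0                       # g(lo) > 0, g(hi) < 0 for gamma < 1
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    b = 0.5 * (lo + hi)
    a = gamma * (1 + b)                       # from f'(1) = a / (1 + b) = gamma
    c = -a * np.log(b)                        # from f(0) = 0
    return a, b, c

def map_residual(res, gamma=1.0 / 2.4):
    """Gamma correction below 1, S-Log correction at or above 1."""
    a, b, c = slog_coefficients(gamma)
    res = np.clip(res, 0.0, None)
    return np.where(res < 1.0, res ** gamma, a * np.log(res + b) + c)
```

By construction the two branches meet at 1 with matching value and slope, so the mapping stays smooth around the mid-gray level that the modulated backlight pushes the residual towards.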
  • the parameter γ of the gamma/S-Log curve is encoded by means of an encoder ENC3 and added to a bitstream F3 which may be stored on a local or remote memory and/or transmitted through a communication interface (e.g. to a bus or over a communication network or a broadcast network).
  • in step 16, a module Mn obtains the scaling factor S(.) from the pixel value Bal_p of the backlight frame Bal (or of the decoded version of the backlight frame) associated with a pixel p of a component of the residual frame Res and from a maximum bound M of the pixel values of the associated component Y of the frame I to be encoded.
  • this bound may be provided by the input format of the frame I; the maximum luminance of the format may be fixed to a certain number of nits, say 10000 nits for instance.
  • N is a number of bits which defines the range of the mapped pixel value Y_Resm,p.
  • this embodiment is advantageous because it ensures that the pixel values of the mapped residual frame Resm belong to the range [0, 2^N - 1] and thus the mapped residual frame Resm may be encoded with an N-bit-depth encoder ENC4 (step 18); a plausible form of S(.) is sketched below.
  • in a variant, the module Mn is further configured to obtain a minimum bound m of the pixel values of a component Y of the frame I to be encoded.
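The exact expression of S(.) is not spelled out here. One plausible reading, sketched below under that assumption, is that since the residual at pixel p cannot exceed M / Bal_p, the scaling factor pins the mapped version of this extreme value onto the top code of an N-bit encoder:

```python
import numpy as np

def scaling_factor(Bal_p, M, N=10, gamma=1.0 / 2.4):
    """Assumed form of S(.): map mf(M / Bal_p), the largest reachable mapped
    residual at pixel p, onto 2**N - 1 (the gamma branch of mf is used here
    for simplicity)."""
    peak = np.maximum(M / np.maximum(Bal_p, 1e-6), 1e-6)   # max residual at p
    return (2 ** N - 1) / (peak ** gamma)
```

With this choice, S(Bal_p) · mf(Y_Res,p) falls in [0, 2^N - 1] for every pixel whose frame value stays below the bound M, which is the property the embodiment claims.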
  • information Inf which represents either the maximum bound M or the minimum bound m, or both, is encoded by means of an encoder ENC2 and added to a bitstream F2 which may be stored on a local or remote memory and/or transmitted through a communication interface (e.g. to a bus or over a communication network or a broadcast network).
  • in a variant, the information Inf is a minimum bound or a maximum bound, or both, for each pixel of the frame.
  • the information may be carried as is or compressed.
  • for instance, a second backlight Ba_M may carry the information for the maximum bound M such that, for each pixel p, the value M to be taken is Ba_M,p.
  • similarly, a third backlight Ba_m may be used to carry the information of the lower bound m.
  • the information of these extra backlights Ba_M and/or Ba_m can be encoded in the bitstream F2 in a similar way as the backlight Bal is encoded in the bitstream F1.
  • in step 18, the mapped residual frame Resm is encoded by means of an encoder ENC4 in a bitstream F4 which may be stored on a local or remote memory and/or transmitted through a communication interface (e.g. on a bus or over a communication network or a broadcast network).
  • the mapping of the residual frame is a parametric process.
  • the parameters may be fixed or not; in the latter case they may be encoded in the bitstreams F2 and F3 as explained before.
  • Fig. 5 shows a block diagram of the steps of a method, in accordance with an embodiment of the disclosure, for decoding a frame from at least one bitstream representing a backlight frame obtained from the frame and a residual frame calculated by dividing the frame by the backlight frame.
  • a decoded backlight frame Bal is obtained, for example, by at least partially decoding a bitstream F1 by means of the decoder DEC1.
  • the bitstream F1 may have been stored locally or received from a communication network.
  • in step 53, a decoded residual frame Resm is obtained by at least partially decoding a bitstream F4 by means of a decoder DEC4.
  • the bitstream F4 may have been stored locally or received from a communication network.
  • a module IMAP unmaps each component of the decoded residual frame Resm such that the unmapping of each pixel of a component of the decoded residual frame Resm depends on the pixel value Bal_p of the decoded backlight frame Bal associated with this pixel p.
  • Unmapping a component of a frame stands for applying an inverse mapping to this previously mapped component.
  • unmapping a component of the decoded residual frame Resm comprises dividing the pixel values of the component by a scaling factor S(.) and applying an inverse mapping function imf(.) to the pixel values of the resulting component.
  • Y_Resm,p is the value of a pixel p of a component of the decoded residual frame Resm.
  • as in step 16, the scaling factor S(.) is given by the module Mn according to any one of its embodiments or variants described above in relation with Fig. 1.
  • the scaling factor is obtained from the pixel value Bal_p of the decoded backlight frame associated with a pixel p and from a maximum bound M of the pixel values of the associated component Y of the frame to be decoded.
  • the scaling factor S(.) is obtained from parameters.
  • these parameters are obtained from information Inf obtained by at least partially decoding the bitstream F2 by means of a decoder DEC2.
  • the bitstream F2 may have been stored locally or received from a communication network.
  • the information Inf represents either the maximum bound M or the minimum bound m, or both, as explained before. But the information Inf may also represent other values according to the embodiments and variants explained before in relation with Fig. 1.
  • the mapping function imf(.) is a parametric function and its parameters, such as for example the parameter γ, are obtained, in step 56, by at least partially decoding the bitstream F3 by means of a decoder DEC3.
  • the bitstream F3 may have been stored locally or received from a communication network.
  • the parameters of the mapping function imf(.) may be the same as those of the mapping function mf(.) described above in relation with Fig. 1.
  • in step 51, a decoded frame Î is obtained by multiplying the decoded residual frame Resm, output of step 53, by the decoded backlight frame Bal.
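Putting the decoder side together, the sketch below mirrors the unmapping and the final multiplication of step 51, reusing the same assumed S(.) as on the encoder side; for brevity only the gamma branch of the mapping is inverted, and the decoders DEC1 and DEC4 are abstracted into already-decoded arrays.

```python
import numpy as np

def decode_frame(res_m, Bal, M, N=10, gamma=1.0 / 2.4):
    """Divide the decoded mapped residual by the (assumed) scaling factor,
    invert the gamma branch of the mapping, then multiply by the decoded
    backlight frame to recover the decoded frame."""
    peak = np.maximum(M / np.maximum(Bal, 1e-6), 1e-6)
    S = (2 ** N - 1) / (peak ** gamma)                   # same assumed S(.) as encoder
    res = np.clip(res_m / S, 0.0, None) ** (1.0 / gamma)  # inverse gamma mapping
    return res * Bal                                      # frame = residual * backlight
```

Because the decoder rebuilds S(.) from the decoded backlight and the transmitted bound M, no per-pixel scaling data has to travel in the bitstream.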
  • the decoders DEC1-2-3-4 are respectively configured to decode data which have been encoded by the encoders ENC1-2-3-4.
  • the encoders ENC1-2-3-4 (and decoders DEC1-2-3-4) are not limited to a specific encoder (decoder), but when an entropy encoder (decoder) is required, an entropy encoder such as a Huffman coder, an arithmetic coder or a context-adaptive coder like CABAC used in H.264/AVC or HEVC is advantageous.
  • the encoders ENC1-2-3-4 (and decoders DEC1-2-3-4) are not limited to a specific codec, which may be, for example, a lossy frame/video coder such as JPEG, JPEG2000, MPEG2, H.264/AVC or HEVC.
  • the encoders ENC1-2-3-4 may be one and the same encoder, and the decoders DEC1-2-3-4 may be one and the same decoder.
  • the modules are functional units, which may or may not be in relation with distinguishable physical units. For example, these modules or some of them may be brought together in a unique component or circuit, or contribute to functionalities of a software. A contrario, some modules may potentially be composed of separate physical entities.
  • the apparatus which are compatible with the disclosure are implemented using either pure hardware, for example using dedicated hardware such as an ASIC ("Application-Specific Integrated Circuit"), an FPGA ("Field-Programmable Gate Array") or VLSI ("Very-Large-Scale Integration"), or from several integrated electronic components embedded in a device, or from a blend of hardware and software components.
  • Fig. 6 represents an exemplary architecture of a device 60 which may be configured to implement a method described in relation with Fig. 1-5.
Device 60 comprises the following elements that are linked together by a data and address bus 61:

  • a microprocessor 62 (or CPU), which is, for example, a DSP (Digital Signal Processor);
  • a ROM (Read Only Memory) 63;
  • a RAM (Random Access Memory) 64.
  • the battery 66 is external to the device.
  • the word « register » used in the specification can correspond to an area of small capacity (a few bits) or to a very large area (e.g. a whole program or a large amount of received or decoded data).
  • the ROM 63 comprises at least a program and parameters. The algorithm of the methods according to the disclosure is stored in the ROM 63.
  • when switched on, the CPU 62 uploads the program into the RAM and executes the corresponding instructions.
  • the RAM 64 comprises, in a register, the program executed by the CPU 62 and uploaded after switch-on of the device 60, input data in a register, intermediate data in different states of the method in a register, and other variables used for the execution of the method in a register.
  • the implementations described herein may be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method or a device), the implementation of features discussed may also be implemented in other forms (for example a program).
  • An apparatus may be implemented in, for example, appropriate hardware, software, and firmware.
  • the methods may be implemented in, for example, an apparatus such as, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, computers, cell phones, portable/personal digital assistants ("PDAs"), and other devices that facilitate communication of information between end-users.
  • the frame I is obtained from a source.
  • the source belongs to a set comprising:
  • a local memory, e.g. a video memory or a RAM (Random Access Memory), a flash memory, a ROM (Read Only Memory), a hard disk;
  • a storage interface, e.g. an interface with a mass storage, a RAM, a flash memory, a ROM, an optical disc or a magnetic support;
  • a communication interface, e.g. a wireline interface (for example a bus interface, a wide area network interface, a local area network interface) or a wireless interface (such as an IEEE 802.11 interface or a Bluetooth® interface); and
  • a frame capturing circuit, e.g. a sensor such as, for example, a CCD (Charge-Coupled Device) or CMOS (Complementary Metal-Oxide-Semiconductor) sensor.
  • the decoded frame Î is sent to a destination; specifically, the destination belongs to a set comprising:
  • a local memory e.g. a video memory or a RAM, a flash memory, a hard disk ;
  • a storage interface e.g. an interface with a mass storage, a RAM, a flash memory, a ROM, an optical disc or a magnetic support;
  • a communication interface, e.g. a wireline interface (for example a bus interface (e.g. USB (Universal Serial Bus)), a wide area network interface, a local area network interface, an HDMI (High Definition Multimedia Interface) interface) or a wireless interface (such as an IEEE 802.11 interface, a WiFi® or a Bluetooth® interface); and
  • the bitstream BF is sent to a destination.
  • the bitstream BF is stored in a local or remote memory, e.g. a video memory (64) or a RAM (64), a hard disk (63).
  • the bitstream BF is sent to a storage interface (65), e.g. an interface with a mass storage, a flash memory, ROM, an optical disc or a magnetic support and/or transmitted over a communication interface (65), e.g. an interface to a point to point link, a communication bus, a point to multipoint link or a broadcast network.
  • a storage interface e.g. an interface with a mass storage, a flash memory, ROM, an optical disc or a magnetic support
  • a communication interface e.g. an interface to a point to point link, a communication bus, a point to multipoint link or a broadcast network.
  • the bitstream BF is obtained from a source.
  • the bitstream is read from a local memory, e.g. a video memory (64), a RAM (64), a ROM (63), a flash memory (63) or a hard disk (63).
  • the bitstream is received from a storage interface (65), e.g. an interface with a mass storage, a RAM, a ROM, a flash memory, an optical disc or a magnetic support and/or received from a communication interface (65), e.g. an interface to a point to point link, a bus, a point to multipoint link or a broadcast network.
  • device 60 being configured to implement an encoding method described in relation with Fig. 1-4, belongs to a set comprising:
  • a video server e.g. a broadcast server, a video-on-demand server or a web server.
  • device 60 being configured to implement a decoding method described in relation with Fig. 5, belongs to a set comprising:
  • the device A comprises means which are configured to implement a method for encoding a frame as described in relation with the Fig. 1-4 and the device B comprises means which are configured to implement a method for decoding as described in relation with Fig. 5.
  • the network is a broadcast network, adapted to broadcast still frames or video frames from device A to decoding devices including the device B.
  • Implementations of the various processes and features described herein may be embodied in a variety of different equipment or applications.
  • examples of such equipment include an encoder, a decoder, a post-processor processing output from a decoder, a pre-processor providing input to an encoder, a video coder, a video decoder, a video codec, a web server, a set-top box, a laptop, a personal computer, a cell phone, a PDA, and other communication devices.
  • the equipment may be mobile and even installed in a mobile vehicle.
  • the methods may be implemented by instructions being performed by a processor, and such instructions (and/or data values produced by an implementation) may be stored on a processor-readable medium such as, for example, an integrated circuit, a software carrier or other storage device such as, for example, a hard disk, a compact diskette ("CD"), an optical disc (such as, for example, a DVD, often referred to as a digital versatile disc or a digital video disc), a random access memory ("RAM"), or a read-only memory ("ROM").
  • the instructions may form an application program tangibly embodied on a processor-readable medium. Instructions may be, for example, in hardware, firmware, software, or a combination.
  • a processor may be characterized, therefore, as, for example, both a device configured to carry out a process and a device that includes a processor-readable medium (such as a storage device) having instructions for carrying out a process.
  • a processor-readable medium may store, in addition to or in lieu of instructions, data values produced by an implementation.
  • implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted. The information may include, for example, instructions for performing a method, or data produced by one of the described implementations.
  • a signal may be formatted to carry as data the rules for writing or reading the syntax of a described embodiment, or to carry as data the actual syntax-values written by a described embodiment.
  • Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal.
  • the formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream.
  • the information that the signal carries may be, for example, analog or digital information.
  • the signal may be transmitted over a variety of different wired or wireless links, as is known.
  • the signal may be stored on a processor-readable medium.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present disclosure generally relates to a method and a device for encoding a frame. The method and the device comprise a processor configured for: encoding (12) a backlight frame determined (11) from the frame; obtaining (13) at least one component of a residual frame by dividing each component of the frame by a decoded version of the backlight frame; mapping each component (YRes) of the residual frame (Res) such that the mapping of each pixel (YRes,p) of a component (YRes) of the residual frame Res depends on the pixel value (Balp) of either the backlight frame (Bal) or a decoded version of the backlight frame (Bal) associated with this pixel (p); and encoding (18) the mapped residual frame; mapping a component (YRes) of the residual frame (Res) comprises applying a mapping function (mf(.)) to the pixel values of this component and multiplying the pixel values of the resulting frame by a scaling factor. The disclosure also relates to a corresponding decoding method and device.
PCT/EP2015/060965 2014-05-20 2015-05-19 Method and device for encoding a frame and/or decoding a bitstream representing a frame WO2015177125A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP14305739.6 2014-05-20
EP14305739 2014-05-20

Publications (1)

Publication Number Publication Date
WO2015177125A1 true WO2015177125A1 (fr) 2015-11-26

Family

ID=50884321

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2015/060965 WO2015177125A1 (fr) 2014-05-20 2015-05-19 Procédé et dispositif pour coder une trame et/ou de décoder un train de bits représentant une trame

Country Status (2)

Country Link
TW (1) TW201547261A (fr)
WO (1) WO2015177125A1 (fr)

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DAVID TOUZÉ ET AL: "HDR Video Coding based on Local LDR Quantization", HDRI2014 -SECOND INTERNATIONAL CONFERENCE AND SME WORKSHOP ON HDR IMAGING, 4 March 2014 (2014-03-04), XP055112158, Retrieved from the Internet <URL:http://people.irisa.fr/Ronan.Boitard/articles/2014/HDR%20Video%20Coding%20based%20on%20Local%20LDR%20Quantization.pdf> [retrieved on 20140404] *
TAKAO JINNO ET AL: "New local tone mapping and two-layer coding for HDR images", 2012 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2012) : KYOTO, JAPAN, 25 - 30 MARCH 2012 ; [PROCEEDINGS], IEEE, PISCATAWAY, NJ, 25 March 2012 (2012-03-25), pages 765 - 768, XP032227239, ISBN: 978-1-4673-0045-2, DOI: 10.1109/ICASSP.2012.6287996 *

Also Published As

Publication number Publication date
TW201547261A (zh) 2015-12-16

Similar Documents

Publication Publication Date Title
US9924178B2 (en) Method and device for encoding a high-dynamic range image and/or decoding a bitstream
US10148958B2 (en) Method and device for encoding and decoding a HDR picture and a LDR picture
US10574987B2 (en) Method and device for encoding a high-dynamic range image
WO2015177123A1 (fr) Méthode et dispositif pour coder une trame et/ou décoder un train de bits représentant une trame
EP3146718B1 (fr) Procédé et dispositif de codage echelloné d&#39;une trame à plage dynamique étendue et/ou le décodage d&#39;un train de bits représentant une telle trame
EP3096520B1 (fr) Méthode de codage et décodage d&#39;un bloc vidéo
WO2015177125A1 (fr) Procédé et dispositif pour coder une trame et/ou de décoder un train de bits représentant une trame
WO2015177119A1 (fr) Procédé et dispositif pour coder une trame et/ou de décoder un train de bits représentant une trame
WO2015177126A1 (fr) Procédé et dispositif de codage d&#39;une trame et/ou de décodage d&#39;un flux binaire représentant une trame
EP2887665A1 (fr) Procédé et dispositif de codage d&#39;une image de plage dynamique élevée
WO2015097135A1 (fr) Procédé et dispositif de codage d&#39;une image à plage dynamique étendue
EP2947881A1 (fr) Procédé et dispositif de codage échelonné d&#39;une trame à plage dynamique élevée et/ou de décodage d&#39;un train binaire
WO2015097126A1 (fr) Procédé et dispositif pour coder une image de gamme dynamique élevée et/ou décoder un flux binaire
EP2938083A1 (fr) Procédé et dispositif de traitement de données d&#39;informations indiquant qu&#39;une trame contient des échantillons d&#39;au moins deux trames constitutives conditionnées distinctes selon un schéma d&#39;arrangement de conditionnement de trame spécifique
WO2015097124A1 (fr) Procédé et dispositif d&#39;encodage d&#39;une image de plage hautement dynamique et/ou de décodage d&#39;un flux binaire
WO2015097129A1 (fr) Procédé et dispositif pour coder une image de gamme dynamique élevée
WO2015097131A1 (fr) Procédé et dispositif d&#39;encodage d&#39;une image de plage hautement dynamique
WO2015097134A1 (fr) Procédé et dispositif pour coder une image à large plage dynamique et/ou décoder un train de bits
WO2015177136A1 (fr) Procédé et dispositif pour coder une trame de plage dynamique élevée et/ou décoder un train de bits
WO2015177133A1 (fr) Méthode et dispositif pour coder une trame à grande gamme dynamique et/ou décoder un train de bits
WO2015177139A1 (fr) Procédé et dispositif pour coder une trame de plage dynamique élevée et/ou décoder un train de bits

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15723925

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15723925

Country of ref document: EP

Kind code of ref document: A1