WO2017001344A1 - Method and apparatus for determining prediction of current block of enhancement layer - Google Patents
Method and apparatus for determining prediction of current block of enhancement layer
- Publication number
- WO2017001344A1 (PCT/EP2016/064868)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- block
- patch
- prediction
- base layer
- enhancement layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/33—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the spatial domain
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/11—Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/36—Scalability techniques involving formatting the layers as a function of picture distortion after decoding, e.g. signal-to-noise [SNR] scalability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
Definitions
- the present disclosure relates to a method and an apparatus for determining a prediction of a current block of an enhancement layer.
- TMO Tone Mapping Operator
- HDR High Dynamic Range
- LDR Low Dynamic Range
- the TMO is directly applied to the HDR signal so as to obtain an LDR image, and this image can be displayed on a classical LDR display.
- There is a wide variety of TMOs, and many of them are non-linear operators.
- the Sk can be set at a non-null minimum value ε instead.
- the value v is then rounded to obtain an integer in the interval [0, 2^n − 1].
- in order to apply the inverse tone mapping (iTMO), the decoder must know the curve in FIG. 1.
- the term "decoded" here corresponds to a de-quantization operation, which is different from the term "decoded" as used for the video coder/decoder.
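As a purely illustrative sketch of the quantization and de-quantization steps described above, the following Python fragment maps a tone-mapped value to an n-bit integer and back; the assumption that the tone-mapped value lies in [0, 1] and the function names are choices of this sketch, not part of the disclosure.

```python
import numpy as np

def quantize_ldr(v, n=8):
    # Forward step sketch: round a tone-mapped value v (assumed in [0, 1])
    # to an integer code in the interval [0, 2^n - 1].
    return int(round(float(np.clip(v, 0.0, 1.0)) * (2 ** n - 1)))

def dequantize_ldr(code, n=8):
    # Inverse ("decoded") step sketch: recover an approximate tone-mapped
    # value from its integer code; the decoder then applies the iTMO curve.
    return code / (2 ** n - 1)
```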
- a TMO based on the Laplacian pyramid may be used, based on the disclosure of Peter J. Burt and Edward H. Adelson, "The Laplacian Pyramid as a Compact Image Code," IEEE Transactions on Communications, vol. COM-31, no. 4, April 1983, and Burt P.J., "The Pyramid as a Structure for Efficient Computation."
- the TMO operates by extracting different intermediate LDR images from an HDR image, where each intermediate LDR image corresponds to a different exposure.
- the over-exposed LDR image contains the fine details in the dark regions, while the bright regions (of the original HDR image) are saturated.
- the under-exposed LDR image contains the fine details in the bright regions, while the dark regions are clipped.
- each LDR image is decomposed into a Laplacian pyramid of n levels, where the highest level holds the lowest resolution and the other levels provide the different spectral (gradient) bands. At this stage, each LDR image therefore corresponds to a Laplacian pyramid, and each LDR image can be rebuilt from its Laplacian pyramid by an inverse decomposition or "collapse", provided there is no rounding error.
- the tone mapping is implemented by fusing the different pyramid levels of the set of intermediate LDR images, and the resulting blended pyramid is collapsed so as to give the final LDR image.
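A minimal sketch of such an exposure-fusion TMO is given below, assuming NumPy and OpenCV are available; the exposure factors, the well-exposedness weighting and the number of pyramid levels are illustrative assumptions of this sketch, not values taken from the disclosure.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    # Build a Laplacian pyramid; the last entry is the low-resolution residual.
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels - 1):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)
        cur = down
    pyr.append(cur)
    return pyr

def collapse(pyr):
    # Inverse decomposition ("collapse") of a Laplacian pyramid.
    cur = pyr[-1]
    for band in reversed(pyr[:-1]):
        cur = cv2.pyrUp(cur, dstsize=(band.shape[1], band.shape[0])) + band
    return cur

def fusion_tmo(hdr, exposures=(0.25, 1.0, 4.0), levels=4):
    # Illustrative exposure-fusion TMO: expose, weight, blend the pyramids, collapse.
    ldr_stack = [np.clip(hdr * e, 0.0, 1.0) for e in exposures]
    # "Well-exposedness" weights (an assumption of this sketch, not from the disclosure).
    weights = [np.exp(-((ldr - 0.5) ** 2) / (2 * 0.2 ** 2)) for ldr in ldr_stack]
    norm = np.sum(weights, axis=0) + 1e-6
    weights = [w / norm for w in weights]
    blended = None
    for ldr, w in zip(ldr_stack, weights):
        bands = laplacian_pyramid(ldr, levels)
        w_bands = [cv2.resize(w, (b.shape[1], b.shape[0])) for b in bands]
        contrib = [b * wb for b, wb in zip(bands, w_bands)]
        blended = contrib if blended is None else [a + c for a, c in zip(blended, contrib)]
    return np.clip(collapse(blended), 0.0, 1.0)
```

As noted above, the collapse is exact up to rounding, but the fusion across exposures makes the overall mapping non-linear, which is the difficulty addressed by the present disclosure.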
- since this tone mapping is non-linear, it is difficult to implement the inverse tone mapping of the LDR image so as to give an acceptable prediction for a current block of the HDR layer in the case of SNR (Signal-to-Noise Ratio) or spatial video scalability.
- SNR Signal-to-Noise Ratio
- WO2010/018137 discloses a method for modifying a reference block of a reference image, a method for encoding or decoding a block of an image with the help of a reference block, a device therefor, and a storage medium or signal carrying a block encoded with the help of a modified reference block.
- a transfer function is estimated from neighboring mean values, and this function is used to correct an inter-image prediction.
- the approach was limited to the mean value so as to give a first approximation of the current block and the collocated one.
- a method is provided, comprising: building a first intermediate patch of a low dynamic range from the neighboring pixels of a collocated block of a base layer and a first prediction block predicted from those neighboring pixels with a coding mode of the base layer; building a second intermediate patch of a high dynamic range from the neighboring pixels of a current block of an enhancement layer and a second prediction block predicted from those neighboring pixels with the same coding mode; building a patch by applying a transfer function to a transformed initial patch of the base layer in a transform domain and then applying an inverse transform to the resulting patch so as to return to the pixel domain, wherein the transfer function is determined so as to transform the first intermediate patch into the second intermediate patch in the transform domain; determining a prediction of the current block of the enhancement layer by extracting a block from the patch, the extracted block being collocated, within the patch, with the current block of the enhancement layer in the second intermediate patch; and encoding a residual prediction error between the current block of the enhancement layer and the prediction.
- an apparatus is provided, comprising: a first intermediate patch creation unit configured to predict a first prediction block from the neighboring pixels of a collocated block of a base layer with a coding mode of the base layer and to build a first intermediate patch of a low dynamic range from those neighboring pixels and the first prediction block; a second intermediate patch creation unit configured to predict a second prediction block from the neighboring pixels of a current block of an enhancement layer with the same coding mode and to build a second intermediate patch of a high dynamic range from those neighboring pixels and the second prediction block; a unit configured to determine a transfer function that transforms the first intermediate patch into the second intermediate patch in a transform domain, to build a patch by applying the transfer function to a transformed initial patch of the base layer in the transform domain and then applying an inverse transform to the resulting patch so as to return to the pixel domain, and to determine a prediction of the current block of the enhancement layer by extracting a block from the patch, the extracted block being collocated, within the patch, with the current block of the enhancement layer in the second intermediate patch; and an encoder for encoding a residual prediction error between the current block of the enhancement layer and the prediction.
- a method is provided, comprising: decoding a residual prediction error; building a first intermediate patch of a low dynamic range from the neighboring pixels of a collocated block of a base layer and a first prediction block predicted from those neighboring pixels with a coding mode of the base layer; building a second intermediate patch of a high dynamic range from the neighboring pixels of a current block of an enhancement layer and a second prediction block predicted from those neighboring pixels with the same coding mode; building a patch by applying a transfer function to a transformed initial patch of the base layer in a transform domain and then applying an inverse transform to the resulting patch so as to return to the pixel domain, wherein the transfer function is determined so as to transform the first intermediate patch into the second intermediate patch in the transform domain; determining a prediction of the current block of the enhancement layer by extracting a block from the patch; and reconstructing the current block by adding the decoded residual prediction error to the prediction.
- an apparatus is provided, comprising: a decoder for decoding a residual prediction error; a first intermediate patch creation unit configured to build a first intermediate patch of a low dynamic range from the neighboring pixels of a collocated block of a base layer and a first prediction block predicted from those neighboring pixels with a coding mode of the base layer; a second intermediate patch creation unit configured to build a second intermediate patch of a high dynamic range from the neighboring pixels of a current block of an enhancement layer and a second prediction block predicted from those neighboring pixels with the same coding mode; and a unit configured to build a patch by applying a transfer function to a transformed initial patch of the base layer in a transform domain and then applying an inverse transform to the resulting patch so as to return to the pixel domain, wherein the transfer function transforms the first intermediate patch into the second intermediate patch in the transform domain, and to determine a prediction of the current block of the enhancement layer by extracting a block from the patch, the extracted block being collocated, within the patch, with the current block of the enhancement layer in the second intermediate patch.
- FIGs. 2A and 2B are an image of a reconstructed base layer and an image of a current block of an enhancement layer to be encoded;
- FIGs. 3A through 3J are drawings illustrating an example of Intra 4x4 prediction specified in the H.264 standard;
- FIGs. 4A and 4B are block diagrams illustrating an apparatus for determining a prediction of a current block of an enhancement layer according to a first embodiment of the present disclosure, FIG. 4A showing an encoder side and FIG. 4B a decoder side;
- FIGs. 5A and 5B are block diagrams illustrating a configuration of an apparatus for determining a prediction of a current block of an enhancement layer according to a second embodiment of the present disclosure, FIG. 5A showing an encoder side and FIG. 5B a decoder side;
- FIG. 6 is a block diagram illustrating a configuration of an apparatus for determining a prediction of a current block of an enhancement layer according to a fourth embodiment of the present disclosure.
- FIG. 7 is a flow diagram illustrating an exemplary method for determining a prediction of a current block of an enhancement layer according to an embodiment of the present disclosure.
- the embodiments of the present disclosure aim to improve the processing of the inverse Tone Mapping Operation (hereinafter called "iTMO") when the previous TMO was applied in a global or local (possibly non-linear) manner, provided the base layer signal is still usable.
- iTMO inverse Tone Mapping Operation
- the idea relates to, for example, HDR SNR scalable video coding with a first tone-mapped base layer l_b, using a given TMO, dedicated to the LDR video encoding, and a second enhancement layer l_e dedicated to the HDR video encoding.
- with SNR scalability, for a current block b_e (to be encoded) of the enhancement layer, a prediction block extracted from the base layer (the collocated block) should be found, and this block has to be processed by inverse tone mapping.
- a transformation function should be estimated that allows the pixels of the base-layer patch p_b (composed of a virtual block homologous to the collocated block b_b and its neighborhood) to be transformed into the current patch p_e (composed of a virtual block homologous to b_e and its neighborhood).
- this transformation function can then be applied to the initial base-layer patch (composed of the reconstructed block b_b and its neighborhood), giving a transformed patch.
- the last step is the extraction, from this transformed patch, of the block collocated with the current block.
- this extracted block corresponds to the prediction of the current block b_e.
- to do so, the coding mode of the collocated block b_b of the base layer is needed, or a prediction mode has to be estimated from the reconstructed image of the base layer l_b among the set of coding modes available in the encoder of the enhancement layer.
- in the case of SNR scalability, a prediction block extracted from the base layer (the collocated block) should be found for a current block b_e (to be encoded) of the enhancement layer, and this prediction block has to be processed by inverse tone mapping.
- FIGs. 2A and 2B respectively illustrate an image of a reconstructed base layer and an image containing a current block of the enhancement layer to be encoded.
- the current patch is X (formula (4)), composed of the current block X_U to predict and its template (neighboring reconstructed pixels) X_T.
- the collocated block (known) of the base layer, which is effectively collocated with the current block to predict of the enhancement layer, is Y_B.
- the collocated patch (collocated with X) is Y, composed of the collocated block Y_B and its template Y_T.
- the goal is to determine a prediction block for the current block X_U from the collocated block Y_B.
- the transformation will be estimated between the patches Y and X, this transformation corresponding to a kind of inverse tone mapping.
- the block X_U is not available (remember that the decoder will implement the same processing), but there are many possible prediction modes that could provide a first approximation (more precisely, a prediction) of the current block X_U.
- this first approximation of the current block X_U and its neighborhood compose the intermediate patch X' corresponding to the patch X.
- the first approximation of the block X_U is used to find a transformation function Trf(l_b → l_e) which allows the intermediate patch of Y to be transformed into the intermediate patch of X (respectively noted Y' and X'), and this transformation is finally applied to the initial patch Y, allowing the definitive prediction block to be provided.
- the first embodiment of the present disclosure concerns SNR scalability, that is to say, the same spatial resolution for the LDR base layer and the HDR enhancement layer.
- it is assumed that the collocated block of the current block X_U has been encoded with one of the intra coding modes of the coder of the enhancement layer, for example the intra modes of the H.264 standard defined in MPEG-4 AVC/H.264 and described in the document ISO/IEC 14496-10.
- Intra 4x4 and Intra 8x8 predictions correspond to a spatial estimation of the pixels of the current block to be coded based on the neighboring reconstructed pixels.
- the H.264 standard specifies different directional prediction modes in order to elaborate the pixel prediction.
- Nine (9) intra prediction modes are defined for the 4x4 and 8x8 block sizes of the macroblock (MB). As depicted in FIG. 3, eight (8) of these modes consist of a 1D directional extrapolation of the pixels (from the left column and the top line) surrounding the current block to predict.
- the intra prediction mode 2 (DC mode) defines the predicted block pixels as the average of available surrounding pixels.
- for example, in the horizontal mode the pixels e, f, g, and h (second row of the 4x4 block) are predicted from the reconstructed pixel J of the left column.
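For illustration, a minimal sketch of three of these 4x4 intra prediction modes (vertical, horizontal and DC) is given below; the function name and array layout are assumptions of this sketch rather than the standard's normative description.

```python
import numpy as np

def intra_4x4_predict(top, left, mode):
    """Toy H.264-style 4x4 intra prediction.
    top:  the 4 reconstructed pixels above the block (A..D)
    left: the 4 reconstructed pixels to its left (I..L)
    mode: 0 = vertical, 1 = horizontal, 2 = DC
    """
    top = np.asarray(top, dtype=np.int32)
    left = np.asarray(left, dtype=np.int32)
    if mode == 0:            # vertical: copy the top row downwards
        return np.tile(top, (4, 1))
    if mode == 1:            # horizontal: copy the left column rightwards
        return np.tile(left[:, None], (1, 4))
    if mode == 2:            # DC: average of the available surrounding pixels
        dc = (top.sum() + left.sum() + 4) >> 3
        return np.full((4, 4), dc, dtype=np.int32)
    raise ValueError("only modes 0, 1 and 2 are sketched here")

# Example: horizontal mode predicts row 1 (pixels e, f, g, h) from the left pixel J.
pred = intra_4x4_predict(top=[100, 102, 104, 106], left=[90, 92, 94, 96], mode=1)
```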
- two intermediate patches X' and Y' can be composed as the following formulas (6) and (7) .
- the desired transform Trf is computed between Y' and X' , in a Transform Domain (TF) , and the transformation could be Hadamard, Discrete Cosine Transform (DCT) , Discrete Sine Transform (DST) or Fourier transform and the like.
- TF Transform Domain
- DCT Discrete Cosine Transform
- DST Discrete Sine Transform
- the following formulas (8) and (9) are provided.
- TF(Y') corresponds to the 2D transform "TF" (for example, the DCT) of the patch Y', and TF(X') to that of the patch X'; their coefficients are noted T_Y'(u,v) and T_X'(u,v).
- the next step is to compute the transfer function Trf that allows T_Y' to be transformed into T_X', where the following formulas (10) and (11) are applied to each pair of coefficients:
- if |T_Y'(u,v)| > th, then Trf(u,v) = T_X'(u,v) / T_Y'(u,v) (10); else Trf(u,v) = 0 (11).
- u and v are the transform-domain coordinates of the coefficients of T_X', T_Y' and Trf.
- th is a threshold of a given value, which avoids singularities in the Trf transfer function.
- th could be equal to 1 in the context of H.264 or HEVC standards compression.
- HEVC High Efficiency Video Coding
- the function Trf is then applied to the transform (TF) of the initial base-layer patch Y, which gives the patch Y'' after the inverse transform (TF⁻¹).
- the patch Y'' is composed of the template Y''_T and the block Y''_B, as shown by formulas (12) through (14).
- the prediction of the current block X_U is obtained by extracting the block Y''_B from the patch Y'', the index m indicating that the prediction block is built with the help of the intra mode of index m of the base layer.
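As an illustration of formulas (8) through (14), a minimal sketch is given below, assuming a 2D DCT as the transform TF and NumPy/SciPy as available libraries; the patch layout, the function name and the default threshold th = 1 are assumptions of this sketch, not a normative implementation.

```python
import numpy as np
from scipy.fft import dctn, idctn

def transfer_function_prediction(patch_Y, patch_Yp, patch_Xp, block_pos, block_size, th=1.0):
    """Inter-layer prediction sketch following formulas (8)-(14).
    patch_Y  : initial (reconstructed) base-layer patch Y, block + template
    patch_Yp : intermediate base-layer patch Y' (intra prediction + template)
    patch_Xp : intermediate enhancement-layer patch X' (intra prediction + template)
    block_pos, block_size : position and size of the block inside the patch
    th       : threshold avoiding singularities, formulas (10)-(11)
    """
    T_Y = dctn(patch_Y, norm="ortho")      # TF(Y)
    T_Yp = dctn(patch_Yp, norm="ortho")    # TF(Y'), formula (8)
    T_Xp = dctn(patch_Xp, norm="ortho")    # TF(X'), formula (9)

    trf = np.zeros_like(T_Y)               # transfer function Trf(u, v)
    mask = np.abs(T_Yp) > th               # formula (10): Trf = T_X' / T_Y'
    trf[mask] = T_Xp[mask] / T_Yp[mask]    # formula (11): Trf = 0 elsewhere

    patch_Ypp = idctn(trf * T_Y, norm="ortho")   # Y'' = TF^-1(Trf . TF(Y))
    r, c = block_pos
    return patch_Ypp[r:r + block_size, c:c + block_size]  # prediction of X_U (Y''_B)
```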
- FIGs. 4A and 4B are block diagrams illustrating an apparatus for determining a prediction of a current block of an enhancement layer according to the first embodiment; the principle of intra SNR scalability described above is also illustrated in FIGs. 4A and 4B.
- SVC Scalable Video Coding
- T: the neighborhood of the block (usually called the "template" in the video compression domain)
- Y, X, Y', X', and Y'' are patches composed of a block and a template, with reference to FIGs. 2A and 2B
- An original block b_e 401 is tone mapped using the TMO 406, which gives the original tone-mapped block.
- the motion estimation unit 429 finds the best inter-image prediction block with a given motion vector, and the temporal prediction (Temp Pred) unit 430 gives the temporal prediction block. From the available intra prediction modes (illustrated in FIG. 3 in the case of H.264) and the neighboring reconstructed (or decoded) pixels, the spatial prediction (Sp Pred) unit 428 gives the intra prediction block.
- the mode decision process (unit 425) chooses the intra-image prediction mode (of index m, from the N available intra modes); the residual prediction error r_b is computed (by the combiner 421) as the difference between the original tone-mapped block and the prediction block.
- the residual prediction error r_b is transformed and quantized to r_bq by the T/Q unit 422 and finally entropy coded by the entropy coder unit 423 and sent in the base layer bitstream.
- the decoded block is locally rebuilt by adding (with the combiner 427) the prediction error block r_bdq, inverse transformed and dequantized by the T⁻¹/Q⁻¹ unit 424, to the prediction block, giving the reconstructed (base layer) block.
- Enhancement layer (el)
- the structure of the coder of the enhancement layer is similar to that of the coder of the base layer; for example, the units 407, 408, 409 and 413 have the same function as the respective units 425, 426, 429 and 430 of the coder of the base layer in terms of coding mode decision, temporal prediction and reference frames buffer.
- the original enhancement layer block b_e is to be encoded.
- the entropy decoder unit 471 decodes the quantized prediction error r_bq and the associated intra coding mode of index m.
- the decoded block is locally rebuilt by adding (with the combiner 473) the decoded and dequantized prediction error block r_bdq to the prediction block, giving the reconstructed block of the base layer.
- Enhancement layer (el)
- the entropy decoder 451 decodes the quantized prediction error r_eq.
- the residual prediction error r_eq is dequantized and inverse transformed by the T⁻¹/Q⁻¹ unit 452, which outputs r_edq. If the coding mode of the block to decode corresponds to the proposed inter-layer mode, then the intra mode (of index m) of the collocated block of the base layer is considered.
- the apparatus of the first embodiment can be configured as illustrated by FIGs. 4A and 4B, by which the method of the first embodiment can be performed.
- the prediction of the current block of the enhancement layer can be readily and accurately obtained.
- the intra prediction mode of the base layer can be used with the objective of obtaining a first approximation of the current block and of the collocated block, the next steps corresponding to the algorithm detailed with formulas (8) through (14).
- a simple example can correspond to a base layer encoded with JPEG2000 (e.g., as described in The JPEG-2000 Still Image Compression Standard, ISO/IEC JTC 1/SC29/WG1, 2005, and the JasPer Software Reference Manual (Version 1.900.0), ISO/IEC JTC 1/SC29/WG1, 2005) and an enhancement layer encoded with H.264.
- in such a case, the first embodiment is not applicable, because the intra mode m is not available in the (for example, JPEG2000) base layer.
- the prediction modes available in the encoder of the enhancement layer are therefore tested on the pixels of the base layer (whose decoded pixels are obviously available), and the best intra mode is finally selected according to a given criterion.
- the current patch X and the collocated patch Y are defined as in the first embodiment.
- a virtual prediction Y_prd,j of the collocated block Y_B is computed according to a given mode of index j, and an error of virtual prediction ER_j between the block Y_B and the virtual prediction Y_prd,j is computed as shown by the following formula (18): ER_j = Σ_p (Y_B(p) − Y_prd,j(p))²
- p corresponds to the coordinates of a pixel in the block to predict Y_B and in the block of virtual prediction Y_prd,j;
- Y_B(p) is a pixel value of the block to predict Y_B;
- Y_prd,j(p) is a pixel value of the block of virtual prediction according to the intra mode of index j.
- the best virtual prediction mode jmode is given by the minimum of the virtual prediction error over the N available intra prediction modes, as in the following formula (19): jmode = argmin_j ER_j.
- the metric used to calculate the virtual prediction error by formula (18) is not limited to the sum of squared errors (SSE); other metrics are possible, such as the sum of absolute differences (SAD) or the sum of absolute Hadamard-transformed differences (SATD).
- the virtual prediction Y_prd,jmode appropriate to the collocated block Y_B is thus obtained, and the same mode (jmode) is then used to compute a virtual prediction of the current block at the enhancement layer level.
- the new intermediate patches Y' and X' are provided as in the following formulas (20) and (21).
- the transfer function is then estimated between these intermediate patches as in formulas (10) and (11), and this function is applied to the transform of the patch Y, which gives, after inverse transform, the patch Y'' from which the desired prediction is extracted, as shown by formula (22).
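A minimal sketch of this virtual mode selection (formulas (18) and (19)) is given below, reusing the toy intra_4x4_predict helper sketched earlier; the SSE metric, the 4x4 block size and the restriction to three modes are illustrative assumptions.

```python
import numpy as np

def select_virtual_mode(Y_B, top, left, modes=(0, 1, 2)):
    """Second-embodiment sketch: pick the intra mode that best predicts the
    reconstructed collocated base-layer block Y_B (formulas (18)-(19))."""
    best_mode, best_err = None, np.inf
    for j in modes:
        Y_prd_j = intra_4x4_predict(top, left, j)              # virtual prediction of Y_B
        err_j = np.sum((Y_B.astype(np.int64) - Y_prd_j) ** 2)  # SSE, formula (18)
        if err_j < best_err:
            best_mode, best_err = j, err_j
    return best_mode                                           # jmode, formula (19)
```

The selected jmode then replaces the base-layer intra mode m in the transfer_function_prediction sketch given for the first embodiment.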
- FIGs. 5A and 5B are block diagrams illustrating a configuration of an apparatus for determining a prediction of a current block of an enhancement layer according to the second embodiment of the present disclosure.
- An original HDR image im_el, composed of the block b_e 501, is tone mapped using the TMO 506, which gives the original tone-mapped image im_bl.
- Enhancement layer (el)
- these units are respectively dedicated to the classical coding mode decision and to the motion estimation for the inter-image prediction.
- FIG. 5B (unit 550): Base layer (bl)
- the base layer sequence is decoded with the decoder 584.
- the reconstructed image buffer 582 stores the decoded frames used for the inter-layer prediction.
- Enhancement layer (el)
- the entropy decoder 551 decodes the quantized prediction error r_eq; the residual prediction error r_eq is dequantized and inverse transformed by the T⁻¹/Q⁻¹ unit 552 to generate r_edq; if the coding mode of the block to decode corresponds to the proposed inter-layer mode, then an intra mode (of index jmode) of the collocated block of the base layer is needed.
- with this intra mode (of index jmode) of the base layer, the following are determined:
- an intermediate intra prediction block at the enhancement layer level, with the el spatial prediction (Sp Pred) unit 555,
- an inverse transform (for example, DCT⁻¹) is computed on Trf·TF(Y), giving Y'' = TF⁻¹(Trf·TF(Y)), where the resulting patch is composed as in formula (22).
- the reconstructed (or decoded) image is stored in the (el) reference frames buffer 556, the decoded frames being used for the next (el) intra-image prediction and for the inter-image prediction using the motion compensation unit 558.
- According to the method and apparatus for determining a prediction of a current block of an enhancement layer, even when the coding mode of the base layer is different from that of the enhancement layer, an appropriate inter-layer coding mode is selected, and the prediction of the current block can then be obtained.
- in the case of spatial scalability, the spatial resolutions of the base layer (l_b) and the enhancement layer (l_e) are different from each other, but regarding the availability of the prediction mode of the base layer, there are different possibilities.
- if the prediction mode m of the base layer is available, it can be utilized, and the processing explained in the first embodiment can be applied to this case. For example (in the case of spatial scalability NxN → 2Nx2N), a given 8x8 current block has a 4x4 collocated block in the base layer.
- the intra mode m corresponds to the intra coding mode used to encode this 4x4 block (of the l_b layer), and the 8x8 prediction block Y_prd,m could be the up-sampled prediction of the base layer (4x4 → 8x8), or the prediction Y_prd,m could be computed on the up-sampled image of the base layer with the same coding mode m.
- the base layer and enhancement layer intermediate prediction blocks are built. Then, from the two intermediate patches, the transfer function is estimated using formulas (8) to (11). Finally, the transfer function is applied to the up-sampled and transformed (e.g., DCT) patch of the base layer, the inter-layer prediction being extracted as in the first embodiment.
- in the other case, the coding mode m is not available; the principle explained in the second embodiment can then be used. In other words, the best coding mode m has to be estimated on the up-sampled base layer, the remaining processing (dedicated to the inter-layer prediction) being the same as in the second embodiment, knowing that the estimated transfer function (Trf) is applied to the up-sampled and transformed (e.g., DCT) base-layer patch.
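As an illustration of the spatial-scalability case, a minimal up-sampling sketch is given below, assuming a simple bilinear up-sampler from OpenCV; an actual codec would use its own normative up-sampling filter, and the function name is an assumption of this sketch.

```python
import cv2
import numpy as np

def upsample_base_patch(patch_bl, scale=2):
    """Spatial-scalability sketch: bring the base-layer patch to the
    enhancement-layer resolution before estimating and applying Trf."""
    h, w = patch_bl.shape
    return cv2.resize(patch_bl.astype(np.float32), (w * scale, h * scale),
                      interpolation=cv2.INTER_LINEAR)

# The up-sampled patch can then be fed to transfer_function_prediction() as patch_Y.
```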
- a fourth embodiment of the present disclosure provides a coding mode choice algorithm for the block of the base layer, in order to re-use the selected mode to build the prediction (l_b → l_e) with the technique provided in the first embodiment.
- the choice of the coding mode at the base layer level affects the distortions inherent to the two layers.
- the RDO (Rate-Distortion Optimization) technique serves to take into account the distortions of the LDR and HDR blocks and the coding costs of the current HDR and collocated LDR blocks; the RDO criterion gives the prediction mode that provides the best compromise in terms of reconstruction errors and coding costs of the base and enhancement layers.
- the classical RDO criteria for the two layers are provided as the following formulas (23) and (24).
- the rate terms in formulas (23) and (24) are composed of the coding cost of the DCT coefficients of the prediction error residual of the base layer and of the enhancement layer, respectively, and of the syntax elements (block size, coding mode, ...) contained in the headers of the blocks, which allow the predictions to be rebuilt at the decoder side.
- the quantized coefficients of the prediction error residual are inverse quantized and inverse transformed (for example, DCT⁻¹).
- this residual error, added to the prediction, provides the reconstructed (or decoded) block (Y_REC).
- the base layer distortion associated with this block is provided as the following formula (25).
- the best mode (according to formula (29)) is the mode of the base layer that produces the minimum global cost Cst among the N coding modes of the base layer, as shown by the following formula (30).
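A minimal sketch of such a joint mode choice is given below; since formulas (23) through (30) are not reproduced here, the Lagrangian form J = D + λ·R and the blending parameter α are assumptions of this sketch, not the exact criterion of the disclosure.

```python
import numpy as np

def joint_rdo_mode(costs_bl, costs_el, alpha=0.5, lam=1.0):
    """Fourth-embodiment sketch: joint base/enhancement-layer mode choice.
    costs_bl[m] = (D_bl, R_bl) and costs_el[m] = (D_el, R_el) for intra mode m.
    alpha is the blending parameter, lam the Lagrangian multiplier (both assumed)."""
    best_mode, best_cost = None, np.inf
    for m, ((d_bl, r_bl), (d_el, r_el)) in enumerate(zip(costs_bl, costs_el)):
        j_bl = d_bl + lam * r_bl                   # base-layer cost, cf. formula (23)
        j_el = d_el + lam * r_el                   # enhancement-layer cost, cf. formula (24)
        cst = alpha * j_bl + (1 - alpha) * j_el    # blended global cost, cf. formula (29)
        if cst < best_cost:
            best_mode, best_cost = m, cst
    return best_mode                               # jmode, cf. formula (30)
```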
- if the choice of the coding mode focuses principally on the enhancement layer, there is a risk of the base layer containing a lot of visual artifacts.
- FIG. 6 is a block diagram illustrating an apparatus for determining a prediction of a current block of an enhancement layer according to the fourth embodiment.
- An original block b_e 601 is tone mapped using the TMO 606, which gives the original tone-mapped block.
- the units 625 and 607 (corresponding to the coding mode decision units of the base and enhancement layers) are not used.
- the unit 642 replaces the coding mode decision of the units 625 and 607: the unit 642 selects the best intra mode jmode using formula (30) and sends that mode (jmode) to the units 625 and 607.
- Base layer intra coding mode selection (jmode) in unit 642: for a given blending parameter that allows a global compromise between the base layer and the enhancement layer as in formula (29), and for each of the N available intra prediction modes (illustrated in FIG. 3 in the case of H.264), N iterations are operated on the coding modes:
- a) with the neighboring reconstructed (or decoded) pixels of the base layer and a given intra coding mode m, the spatial prediction (Sp Pred) unit 658 gives an intra base layer prediction block; b) with the neighboring reconstructed (or decoded) pixels of the enhancement layer and the same intra coding mode m, the spatial prediction (Sp Pred) unit 612 gives an intermediate intra enhancement layer prediction block.
- the unit 611 builds the patch of the base layer, composed of the intra base layer neighborhood and the prediction block of step (a).
- the unit 611 builds the patch of the enhancement layer, composed of the intra enhancement layer neighborhood and the prediction block of step (b).
- the motion estimation unit 629 finds the best inter-image prediction block with a given motion vector, and the temporal prediction (Temp Pred) unit 630 gives the temporal prediction block.
- the residual prediction error r_b is computed (by the combiner 621) as the difference between the original tone-mapped block and the prediction block.
- the residual prediction error r_b is transformed and quantized to r_bq by the T/Q unit 622 and finally entropy coded by the entropy coder unit 623 and sent in the base layer bitstream.
- the decoded block is locally rebuilt by adding (with the combiner 657) the prediction error block r_bdq, inverse transformed and dequantized by the T⁻¹/Q⁻¹ unit 624, to the prediction block, giving the reconstructed (base layer) block.
- the reconstructed (or decoded) frame is stored in the (bl) reference frames buffer 626.
- Enhancement layer (el)
- the structure of the coder of the enhancement layer is similar to that of the coder of the base layer; for example, the units 607, 608, 609 and 613 have the same function as the respective units 625, 626, 629 and 630 of the coder of the base layer in terms of coding mode decision, temporal prediction and reference frames buffer.
- the original enhancement layer block b_e is to be encoded.
- the decoded block is locally rebuilt by adding (with the combiner 610) the prediction error block r_edq, inverse transformed and dequantized by the T⁻¹/Q⁻¹ unit 605, to the prediction, and the reconstructed (or decoded) image is stored in the (el) reference frames buffer 608.
- the embodiments of the present disclosure relate to SNR and spatially scalable LDR/HDR video encoding, with the same or different encoders for the two layers.
- the LDR video can be implemented from the HDR video with any tone mapping operators: global or local, linear or non-linear.
- the inter layer prediction is implemented on the fly without additional specific meta-data.
- the embodiments of the present disclosure concern both the encoder and the decoder.
- the embodiments of the present disclosure can be applied to image and video compression.
- the embodiments of the present disclosure may be submitted to the ITU-T or MPEG standardization groups as part of the development of a new generation encoder dedicated to the archiving and distribution of LDR/HDR video content.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Priority Applications (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US15/741,251 US20180199032A1 (en) | 2015-06-30 | 2016-06-27 | Method and apparatus for determining prediction of current block of enhancement layer |
| KR1020177037683A KR20180021733A (ko) | 2015-06-30 | 2016-06-27 | 강화 계층의 현재 블록의 예측을 결정하기 위한 방법 및 장치 |
| JP2017567607A JP2018524916A (ja) | 2015-06-30 | 2016-06-27 | 拡張レイヤの現在ブロックの予測を求める方法および装置 |
| CN201680050362.9A CN107950025A (zh) | 2015-06-30 | 2016-06-27 | 用于确定增强层的当前块的预测的方法和装置 |
| EP16733456.4A EP3318062A1 (en) | 2015-06-30 | 2016-06-27 | Method and apparatus for determining prediction of current block of enhancement layer |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP15306049.6 | 2015-06-30 | ||
| EP15306049.6A EP3113492A1 (en) | 2015-06-30 | 2015-06-30 | Method and apparatus for determining prediction of current block of enhancement layer |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2017001344A1 true WO2017001344A1 (en) | 2017-01-05 |
Family
ID=53724154
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2016/064868 WO2017001344A1 (en) Ceased | Method and apparatus for determining prediction of current block of enhancement layer | 2015-06-30 | 2016-06-27 |
Country Status (6)
| Country | Link |
|---|---|
| US (1) | US20180199032A1 (en) |
| EP (2) | EP3113492A1 (en) |
| JP (1) | JP2018524916A (en) |
| KR (1) | KR20180021733A (en) |
| CN (1) | CN107950025A (en) |
| WO (1) | WO2017001344A1 (en) |
Families Citing this family (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP3301925A1 (en) * | 2016-09-30 | 2018-04-04 | Thomson Licensing | Method for local inter-layer prediction intra based |
| CN111491168A (zh) * | 2019-01-29 | 2020-08-04 | 华为软件技术有限公司 | 视频编解码方法、解码器、编码器和相关设备 |
| US11570479B2 (en) * | 2020-04-24 | 2023-01-31 | Samsung Electronics Co., Ltd. | Camera module, image processing device and image compression method |
| CN119496901A (zh) * | 2023-08-15 | 2025-02-21 | 华为技术有限公司 | 编码方法、解码方法和相关设备 |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20140140392A1 (en) * | 2012-11-16 | 2014-05-22 | Sony Corporation | Video processing system with prediction mechanism and method of operation thereof |
| WO2014082982A1 (en) * | 2012-11-29 | 2014-06-05 | Thomson Licensing | Method for predicting a block of pixels from at least one patch |
| WO2014106746A1 (en) * | 2013-01-04 | 2014-07-10 | Canon Kabushiki Kaisha | Method and device for processing prediction information for encoding an image |
Family Cites Families (17)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7809200B2 (en) * | 2005-11-15 | 2010-10-05 | Teledyne Licensing, Llc | Dynamic range compression of high dynamic range imagery |
| US8014445B2 (en) * | 2006-02-24 | 2011-09-06 | Sharp Laboratories Of America, Inc. | Methods and systems for high dynamic range video coding |
| CN101601300B (zh) * | 2006-12-14 | 2012-07-18 | 汤姆逊许可公司 | 用自适应增强层预测对位深度可分级视频数据进行编码和/或解码的方法和设备 |
| US8665942B2 (en) * | 2007-01-23 | 2014-03-04 | Sharp Laboratories Of America, Inc. | Methods and systems for inter-layer image prediction signaling |
| WO2009003499A1 (en) * | 2007-06-29 | 2009-01-08 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Scalable video coding supporting pixel value refinement scalability |
| EP2835976B1 (en) * | 2008-04-16 | 2016-09-21 | GE Video Compression, LLC | Bit-depth scalability |
| US9338475B2 (en) * | 2008-04-16 | 2016-05-10 | Intel Corporation | Tone mapping for bit-depth scalable video codec |
| EP2144444B1 (en) * | 2008-07-10 | 2012-06-27 | The University Of Warwick | HDR video data compression devices and methods |
| EP2154893A1 (en) | 2008-08-13 | 2010-02-17 | Thomson Licensing | Method for modifying a reference block of a reference image, method for encoding or decoding a block of an image by help of a reference block and device therefor and storage medium or signal carrying a block encoded by help of a modified reference block |
| CN102473295B (zh) | 2009-06-29 | 2016-05-04 | 汤姆森特许公司 | 基于区的色调映射 |
| GB2500835B (en) * | 2010-12-10 | 2014-02-12 | Ibm | High-dynamic range video tone mapping |
| US20140003527A1 (en) * | 2011-03-10 | 2014-01-02 | Dolby Laboratories Licensing Corporation | Bitdepth and Color Scalable Video Coding |
| KR101759954B1 (ko) * | 2013-09-06 | 2017-07-21 | 엘지전자 주식회사 | 디지털 방송 시스템에서 광역 밝기 표현을 위한 초고화질 방송 신호 송수신 방법 및 장치 |
| KR101797505B1 (ko) * | 2013-11-13 | 2017-12-12 | 엘지전자 주식회사 | Hdr 방송 서비스 제공을 위한 방송 신호 송수신 방법 및 장치 |
| US10536731B2 (en) * | 2014-05-12 | 2020-01-14 | Apple Inc. | Techniques for HDR/WCR video coding |
| US20160286226A1 (en) * | 2015-03-24 | 2016-09-29 | Nokia Technologies Oy | Apparatus, a method and a computer program for video coding and decoding |
| US20190138786A1 (en) * | 2017-06-06 | 2019-05-09 | Sightline Innovation Inc. | System and method for identification and classification of objects |
-
2015
- 2015-06-30 EP EP15306049.6A patent/EP3113492A1/en not_active Withdrawn
-
2016
- 2016-06-27 CN CN201680050362.9A patent/CN107950025A/zh active Pending
- 2016-06-27 JP JP2017567607A patent/JP2018524916A/ja not_active Withdrawn
- 2016-06-27 KR KR1020177037683A patent/KR20180021733A/ko not_active Withdrawn
- 2016-06-27 EP EP16733456.4A patent/EP3318062A1/en not_active Withdrawn
- 2016-06-27 WO PCT/EP2016/064868 patent/WO2017001344A1/en not_active Ceased
- 2016-06-27 US US15/741,251 patent/US20180199032A1/en not_active Abandoned
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20140140392A1 (en) * | 2012-11-16 | 2014-05-22 | Sony Corporation | Video processing system with prediction mechanism and method of operation thereof |
| WO2014082982A1 (en) * | 2012-11-29 | 2014-06-05 | Thomson Licensing | Method for predicting a block of pixels from at least one patch |
| WO2014106746A1 (en) * | 2013-01-04 | 2014-07-10 | Canon Kabushiki Kaisha | Method and device for processing prediction information for encoding an image |
Non-Patent Citations (2)
| Title |
|---|
| WINKEN M ET AL: "CE2: SVC bit-depth scalability", 24. JVT MEETING; 81. MPEG MEETING; 29-6-2007 - 5-7-2007; GENEVA, CH;(JOINT VIDEO TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ),, no. JVT-X057, 4 July 2007 (2007-07-04), XP030007164, ISSN: 0000-0082 * |
| WU Y ET AL: "CE1: SVC study on inter-layer prediction: bit-depth scalability", 25. JVT MEETING; 82. MPEG MEETING; 21-10-2007 - 26-10-2007; SHENZHEN,CN; (JOINT VIDEO TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ),, no. JVT-Y081, 21 October 2007 (2007-10-21), XP030007285, ISSN: 0000-0137 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN107950025A (zh) | 2018-04-20 |
| EP3318062A1 (en) | 2018-05-09 |
| JP2018524916A (ja) | 2018-08-30 |
| EP3113492A1 (en) | 2017-01-04 |
| KR20180021733A (ko) | 2018-03-05 |
| US20180199032A1 (en) | 2018-07-12 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| KR101232420B1 (ko) | 컨텍스트-적응형 가변 길이 코딩 (cavlc) 을 위한 레이트-왜곡 양자화 | |
| JP5290325B2 (ja) | Cabacコーダのためのレート歪みモデリングに基づいた量子化 | |
| US8798158B2 (en) | Method and apparatus for block-based depth map coding and 3D video coding method using the same | |
| RU2697929C1 (ru) | Устройство декодирования изображения и способ декодирования изображения | |
| CN111819852A (zh) | 用于变换域中残差符号预测的方法及装置 | |
| CN101653003A (zh) | 用于混合视频编码的量化 | |
| KR20130135659A (ko) | 다계층 비디오 코딩을 위한 레이트 제어 방법, 이를 이용한 비디오 인코딩 장치 및 비디오 신호 처리 시스템 | |
| CA2837755A1 (en) | Method and system for structural similarity based perceptual video coding | |
| CN101009839A (zh) | 基于正交变换和向量量化的视频编码/解码的方法和设备 | |
| CN101548549A (zh) | 精细粒度可伸缩图像编码和解码 | |
| EP3520409A1 (en) | Method for local inter-layer prediction intra based | |
| EP3318062A1 (en) | Method and apparatus for determining prediction of current block of enhancement layer | |
| KR101529903B1 (ko) | 블록기반 깊이정보 맵의 코딩 방법과 장치, 및 이를 이용한 3차원 비디오 코딩 방법 | |
| CN103959788B (zh) | 通过模式匹配在解码器层面的运动估计 | |
| CN112188195A (zh) | 图像编码/解码方法和设备以及相应的计算机可读介质 | |
| JP5583762B2 (ja) | 原画像を符号化する方法及び装置並びに復号化する方法及び装置 | |
| KR102020953B1 (ko) | 카메라 영상의 복호화 정보 기반 영상 재 부호화 방법 및 이를 이용한 영상 재부호화 시스템 | |
| Arrivukannamma et al. | A study on CODEC quality metric in video compression techniques | |
| Choi et al. | Implicit line-based intra 16× 16 prediction for H. 264/AVC high-quality video coding | |
| HK1140348B (en) | Encoding/decoding method and encoder/decoder for quantization for hybrid video coding |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16733456 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 2017567607 Country of ref document: JP Kind code of ref document: A |
|
| ENP | Entry into the national phase |
Ref document number: 20177037683 Country of ref document: KR Kind code of ref document: A |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2016733456 Country of ref document: EP |