GB2516022A - Method, device, and computer program for processing high bit-depth content in video encoder and decoder


Info

Publication number
GB2516022A
GB2516022A (application GB1312144.7A)
Authority
GB
United Kingdom
Prior art keywords
sub
samples
transformation
applying
depth
Prior art date
Legal status
Withdrawn
Application number
GB1312144.7A
Other versions
GB201312144D0 (en)
Inventor
Christophe Gisquet
Patrice Onno
Guillaume Laroche
Edouard Francois
Current Assignee
Canon Inc
Original Assignee
Canon Inc
Application filed by Canon Inc
Priority to GB1312144.7A
Publication of GB201312144D0
Publication of GB2516022A

Links

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/132: Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N19/182: Adaptive coding characterised by the coding unit, the unit being a pixel
    • H04N19/184: Adaptive coding characterised by the coding unit, the unit being bits, e.g. of the compressed video stream
    • H04N19/30: Coding using hierarchical techniques, e.g. scalability
    • H04N19/34: Scalability techniques involving progressive bit-plane based encoding of the enhancement layer, e.g. fine granular scalability [FGS]
    • H04N19/60: Coding using transform coding
    • H04N19/85: Coding using pre-processing or post-processing specially adapted for video compression
    • H04N19/88: Pre-processing or post-processing involving rearrangement of data among different coding units, e.g. shuffling, interleaving, scrambling or permutation of pixel data or permutation of transform coefficient data among different blocks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Encoding an image portion by splitting 501 each sample or pixel value into at least two sub-samples 502 of a fixed bit-depth and encoding each set of sub-samples using transformation 504 and/or quantisation 505 steps. The method is useful for encoding a high bit-depth monochrome image using known coding architectures such as HEVC. Some least significant bits may be dropped to create truncated samples before splitting. The first sub-sample may be a set of most significant bits and the second sub-sample may be a set of least significant bits. Alternatively, each sub-sample may contain bits extracted from the set of least significant bits. The transformation type, quantisation parameters or coding partitioning used to encode the second sub-sample array may be dependent on those used to encode the first sub-sample array. For example, the partitioning of each sub-sample picture into transformation blocks may be identical. An equivalent decoding method and corresponding device claims are included.

Description

METHOD, DEVICE, AND COMPUTER PROGRAM FOR PROCESSING HIGH BIT-
DEPTH CONTENT IN VIDEO ENCODER AND DECODER
FIELD OF THE INVENTION
The invention generally relates to the field of video coding and decoding.
More particularly, the invention concerns a method, device, and computer program for processing high bit-depth content in video encoder and decoder.
In embodiments of the invention the image is composed of blocks of pixels and is part of a digital video sequence. Embodiments of the invention relate to the field of video coding, in particular to video coding applicable to the High Efficiency Video Coding (HEVC) standard and to its extension for professional applications (HEVC Range Extensions, known as HEVC RExt).
BACKGROUND OF THE INVENTION
Video data is typically composed of a series of still images which are shown rapidly in succession as a video sequence to give the impression of a moving image. Video applications are continuously moving towards higher and higher resolution. A large quantity of video material is distributed in digital form over broadcast channels, digital networks and packaged media, with a continuous evolution towards higher quality and resolution (e.g. a higher number of pixels per frame, higher frame rate, higher bit-depth or extended colour gamut). This technological evolution puts higher pressure on the distribution networks, which are already facing difficulties in bringing HDTV resolution and high data rates economically to the end user.
Video coding is a way of transforming a series of video images into a compact bit stream so that the capacities required for transmitting and storing the video images can be reduced. Video coding techniques typically use the spatial and temporal redundancies of images in order to generate data bit streams of reduced size compared with the original video sequences. Spatial prediction techniques (also referred to as Intra coding) exploit the mutual correlation between neighbouring image pixels, while temporal prediction techniques (also referred to as Inter coding) exploit the correlation between sequential images. Such compression techniques render the transmission and/or storage of the video sequences more effective since they reduce the capacity required of a transfer network, or storage device, to transmit or store the bit stream.
An original video sequence to be encoded or decoded generally comprises a succession of digital images which may be represented by one or more matrices the coefficients of which represent pixels. An encoding device is used to code the video images, with an associated decoding device being available to reconstruct the bit stream for display and viewing.
Common standardized approaches have been adopted for the format and method of the coding process. One video standard being standardized is HEVC, in which the macroblocks of earlier standards are replaced by what are referred to as Coding Units; these are partitioned and adjusted according to the characteristics of the original image segment under consideration. This allows more detailed coding of areas of the video image which contain relatively more information, and less coding effort for those areas with fewer features.
The video images may be processed by coding each smaller image portion individually, in a manner resembling the digital coding of still images or pictures.
Different coding models provide prediction of an image portion in one frame, from neighboring pixels in that frame, or by association with a similar portion in a neighboring frame. This allows use of already available coded information, thereby reducing the amount of coding bit-rate needed overall. In general, the more information that can be compressed at a given visual quality, the better the performance in terms of compression efficiency.
It is known that, to ensure good image rendering after the encoding and decoding steps, the bit-length of the variables used to process images (encoded or not) must be higher than or equal to the bit-depth of the image data, in particular because of the rounding and clipping processes applied in the transformation and quantization steps. It also turns out that when the bit-depth of the images is too high, issues may arise in the encoding and decoding processes regarding the precision of the internal operations. This may result in noticeable coding efficiency losses and in a saturation of the reconstructed image quality, whatever the bit-rate. Accordingly, video encoders and decoders have to be adapted to the bit-depth of the processed images.
However, it can be desirable to re-use state-of-the-art video encoders and decoders, with few changes, for processing images having high bit-depth content, in particular for processing monochrome images having very high bit-depth content such as medical images.
The present invention has been devised to address one or more of the foregoing concerns.
SUMMARY OF THE INVENTION
Faced with these constraints, the inventors provide a method and a device for processing high bit-depth content in video encoder and decoder.
It is a broad object of the invention to remedy the shortcomings of the prior art as described above.
According to a first aspect of the invention there is provided a method of processing an image portion in a process of encoding an image from which is obtained the image portion, the image portion comprising a plurality of samples of a bit-depth determined as a function of a bit-depth of data of the image, the method comprises: splitting each of the samples of the image portion into at least a first sub-sample and a second sub-sample, each of the at least first and second sub-samples being of a predetermined bit-depth, the bit-depth of the samples being higher than the predetermined bit-depth of each of the at least first and second sub-samples; and applying at least one of a transformation and a quantization steps to the first sub-samples to obtain first sub-data; applying at least one of a transformation and a quantization steps to the second sub-samples to obtain second sub-data; and merging the first and the second sub-data to encode the image portion.
Such a method provides an efficient way of processing high bit-depth content in video encoder and decoder, for example encoder and decoder conforming to the HEVC standard. According to this embodiment, few changes are required in the current design of such encoders and decoders in order to re-use as far as possible the existing design, taking into account rounding and clipping processes applied in the transformation and quantization steps. In addition, some steps of the method can be executed in a parallel way so as to improve processing efficiency in terms of response time.
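By way of illustration, the encoder-side flow of this aspect can be sketched as follows. This is a minimal, non-normative sketch in Python: split, transform, quantize and merge are hypothetical placeholders standing for the bit splitting described above and for an existing lower bit-depth coding chain (for example an HEVC-style one); they are not functions defined by the invention.

```python
# Minimal sketch of the encoder-side processing of an image portion:
# split each high bit-depth sample into two lower bit-depth sub-sample
# arrays, apply transformation/quantization to each array, then merge
# the resulting first and second sub-data to encode the portion.

def encode_portion(samples, split, transform, quantize, merge):
    # 1. Split every sample into a first and a second sub-sample.
    first_plane, second_plane = split(samples)

    # 2. Apply at least one of a transformation and a quantization step
    #    to each sub-sample array; the two lines below are independent
    #    of each other and could be executed in parallel.
    first_sub_data = quantize(transform(first_plane))
    second_sub_data = quantize(transform(second_plane))

    # 3. Merge the first and second sub-data; the merged data can then
    #    be entropy coded into the bit stream.
    return merge(first_sub_data, second_sub_data)
```

The decoder mirrors this flow with inverse quantization and inverse transformation, as set out in the second aspect below.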
In an embodiment, the steps of applying at least one of a transformation and a quantization steps to the first sub-samples and of applying at least one of a transformation and a quantization steps to the second sub-samples are performed independently.
In an embodiment, the method further comprises a step of entropy coding the merged first and second sub-data.
In an embodiment, the step of splitting each of the samples of the image portion into at least a first sub-sample and a second sub-sample comprises a step of extracting a set of most significant bits from each of the samples to form the first sub-samples and a step of extracting a set of least significant bits from each of the samples to form the second sub-samples.
In an embodiment, the method further comprises a preliminary step of withdrawing a predetermined number of least significant bits in each of the samples to obtain truncated samples.
In an embodiment, the predetermined number of least significant bits to be withdrawn in each of the samples is determined so that the truncated samples comprise an even number of bits.
In an embodiment, the step of splitting each of the samples of the image portion into at least a first sub-sample and a second sub-sample comprises a step of extracting a set of most significant bits from each of the truncated samples to form the first sub-samples and a step of extracting a set of least significant bits from each of the truncated samples to form the second sub-samples.
In an embodiment, the predetermined number of least significant bits is equal to one and the bit-depth of data of the image is equal to sixteen.
In an embodiment, the set of most significant bits and the set of least significant bits each comprise eight bits.
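For this sixteen-bit example, the bit manipulation can be made concrete with a short sketch (illustrative only; the helper name is hypothetical). It assumes input samples of bit-depth seventeen, one more than the sixteen-bit image data, as can occur for prediction residuals; withdrawing one least significant bit then leaves a truncated sample with an even number of bits, which is split into two eight-bit sub-samples.

```python
def truncate_and_split(sample: int, drop: int = 1, low_bits: int = 8):
    """Withdraw `drop` least significant bits, then split the truncated
    sample into (MSB sub-sample, LSB sub-sample) of `low_bits` bits each."""
    truncated = sample >> drop                # e.g. 17 bits -> 16 bits
    msb = truncated >> low_bits               # set of most significant bits
    lsb = truncated & ((1 << low_bits) - 1)   # set of least significant bits
    return msb, lsb

# Example: dropping the LSB of the 17-bit sample 0x1579B leaves 0xABCD,
# which splits into the sub-samples 0xAB (MSBs) and 0xCD (LSBs).
assert truncate_and_split(0x1579B) == (0xAB, 0xCD)
```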
In an embodiment, the step of splitting each of the samples of the image portion into at least a first sub-sample and a second sub-sample comprises, for each of the samples, a step of extracting a set of least significant bits from the sample and a step of splitting the set of least significant bits so that each of the first and second sub-samples comprises bits of the set of least significant bits.
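The actual bit re-arrangements contemplated here are those shown in Figure 6 (configurations 6a to 6c). The fragment below is purely illustrative of the idea that both sub-samples can receive bits taken from the set of least significant bits; the interleaving pattern it uses (even-indexed bits to the first sub-sample, odd-indexed bits to the second) is an assumption made for the example, not a configuration taken from the description.

```python
def split_lsb_set(sample: int, low_bits: int = 16):
    """Distribute the bits of the least significant bit set between two
    sub-samples; here, by interleaving, so that each sub-sample receives
    low_bits / 2 bits extracted from the LSB set."""
    lsb_set = sample & ((1 << low_bits) - 1)   # extract the LSB set
    first = second = 0
    for i in range(low_bits):
        bit = (lsb_set >> i) & 1
        if i % 2 == 0:
            first |= bit << (i // 2)    # even-indexed bits -> first sub-sample
        else:
            second |= bit << (i // 2)   # odd-indexed bits -> second sub-sample
    return first, second
```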
In an embodiment, the step of applying at least one of a transformation and a quantization steps to the first sub-samples comprises a discrete cosine or sine transform step or a transformation step of the transform skip type.
In an embodiment, the step of applying at least one of a transformation and a quantization steps to the second sub-samples comprises a transformation step of the transform skip type if the step of applying at least one of a transformation and a quantization steps to the first sub-samples comprises a transformation step of the transform skip type.
In an embodiment, the step of applying at least one of a transformation and a quantization steps to the first sub-samples comprises a transformation step of a type different than the transform skip type if the step of applying at least one of a transformation and a quantization steps to the second sub-samples comprises a transformation step of a type different than the transform skip type.
In an embodiment, each of the steps of applying at least one of a transformation and a quantization steps to the first sub-samples and of applying at least one of a transformation and a quantization steps to the second sub-samples comprises a step of partitioning a block of sub-samples into transformation units, the partitioning steps being performed independently of each other.
In an embodiment, each of the steps of applying at least one of a transformation and a quantization steps to the first sub-samples and of applying at least one of a transformation and a quantization steps to the second sub-samples comprises a step of partitioning a block of sub-samples into transformation units, one partitioning step depending on another partitioning step.
In an embodiment, each of the steps of applying at least one of a transformation and a quantization steps to the first sub-samples and of applying at least one of a transformation and a quantization steps to the second sub-samples comprises a step of partitioning a block of sub-samples into transformation units, the partitioning steps being identical.
In an embodiment, the step of applying at least one of a transformation and a quantization steps to the first sub-samples comprises a quantization step using a first quantization parameter and the step of applying at least one of a transformation and a quantization steps to the second sub-samples comprises a quantization step using a second quantization parameter, the second quantization parameter being equal to the first quantization parameter plus a value depending on the bit-depth of data of the image and on the predetermined bit-depth of each of the at least first and second sub-samples.
In an embodiment, the value is equal to six times the predetermined bit-depth of the second sub-samples.
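One way to read this offset: in HEVC the quantization step size doubles for every increase of six in the quantization parameter, so adding six times the bit-depth b of the second sub-samples multiplies its quantization step by 2^b, which matches the weight separating the two sub-samples in the reconstructed sample. A tiny sketch (illustrative only; a real HEVC codec additionally clips QP to its allowed range):

```python
def second_qp(first_qp: int, second_bit_depth: int) -> int:
    """QP for the second sub-sample array: the first QP plus six times
    the predetermined bit-depth of the second sub-samples (each step of
    6 in QP doubles the quantization step size in HEVC)."""
    return first_qp + 6 * second_bit_depth

# Example: QP 22 on the first (MSB) array and eight-bit second
# sub-samples give QP 22 + 6 * 8 = 70 for the second (LSB) array.
assert second_qp(22, 8) == 70
```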
In an embodiment, the image portion is a prediction residual block.
In an embodiment, the encoded format conforms to the HEVC standard.
In an embodiment, the encoded format conforms to the HEVC RExt standard.
According to a second aspect of the invention there is provided a method of processing received data in a process of decoding an image, the processed received data being representative of an image portion comprising a plurality of samples of a bit-depth determined as a function of a bit-depth of data of the image, the method comprises: splitting received data into at least first sub-data and second sub-data; applying at least one of an inverse quantization and an inverse transformation steps to the first sub-data to obtain first sub-samples; applying at least one of an inverse quantization and an inverse transformation steps to the second sub-data to obtain second sub-samples; merging the first and the second sub-samples to create samples of the image portion, wherein each of the first and second sub-samples is of a predetermined bit-depth, the bit-depth of the samples being higher than the predetermined bit-depth of each of the first and second sub-samples.
Such a method provides an efficient way of processing high bit-depth content in video encoder and decoder, for example encoder and decoder conforming to the HEVC standard. According to this embodiment, few changes are required in the current design of such encoders and decoders in order to re-use as far as possible the existing design, taking into account rounding and clipping processes applied in the transformation and quantization steps. In addition, some steps of the method can be executed in a parallel way so as to improve processing efficiency in terms of response time.
In an embodiment, the steps of applying at least one of an inverse transformation and an inverse quantization steps to the first sub-data and of applying at least one of an inverse transformation and an inverse quantization steps to the second sub-data are performed independently.
In an embodiment, the method further comprises a preliminary step of entropy decoding the received data.
In an embodiment, the step of merging the first and second sub-samples comprises a step of obtaining a first set of bits from the first sub-sample and a step of obtaining a second set of bits from the second sub-sample, the first set of bits forming a set of most significant bits and the second set of bits forming a set of least significant bits, the sets of most significant bits and of least significant bits being used to form one sample.
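By way of illustration of this merging, the sketch below (Python; hypothetical helper name) rebuilds one sample in the MSB/LSB configuration. It takes all the bits of each sub-sample, which is only one particular case of the embodiments described here, and it assumes that one least significant bit was withdrawn at the encoder and is restored as zero; an actual decoder would also clip the result to the valid sample range.

```python
def merge_subsamples(msb: int, lsb: int, low_bits: int = 8,
                     withdrawn_bits: int = 1) -> int:
    """Rebuild one sample: the first sub-sample supplies the set of most
    significant bits, the second the set of least significant bits;
    bits withdrawn at the encoder are restored as zeros."""
    truncated = (msb << low_bits) | lsb
    return truncated << withdrawn_bits

# Inverse of the encoder-side example above: (0xAB, 0xCD) -> 0xABCD,
# shifted back to 0x1579A (the withdrawn LSB of 0x1579B is lost).
assert merge_subsamples(0xAB, 0xCD) == 0x1579A
```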
In an embodiment, the number of bits obtained from the first sub-sample is lower than the bit-depth of the first sub-sample.
In an embodiment, the number of bits obtained from the second sub-sample is lower than the bit-depth of the second sub-sample.
In an embodiment, the difference between the number of bits obtained from the first sub-sample and the bit-depth of the first sub-sample and the difference between the number of bits obtained from the second sub-sample and the bit-depth of the second sub-sample are each equal to one, and the bit-depth of data of the image is equal to sixteen.
In an embodiment, the first and second sets of bits each comprise eight bits.
In an embodiment, the step of merging the first and second sub-samples comprises a step of obtaining a first set of bits from the first sub-sample, a step of obtaining a second set of bits from the second sub-sample, and a step of forming a set of least significant bits from bits of the first set of bits and from bits of the second set of bits, the set of least significant bits being used to form one sample.
In an embodiment, the step of applying at least one of an inverse transformation and an inverse quantization steps to the first sub-data comprises an inverse discrete cosine or sine transform step or a transformation step of the transform skip type.
In an embodiment, the step of applying at least one of an inverse transformation and an inverse quantization steps to the second sub-data comprises a transformation step of the transform skip type if the step of applying at least one of an inverse transformation and an inverse quantization steps to the first sub-data comprises a transformation step of the transform skip type.
In an embodiment, the step of applying at least one of an inverse transformation and an inverse quantization steps to the first sub-data comprises a transformation step of a type different than the transform skip type if the step of applying at least one of an inverse transformation and an inverse quantization steps to the second sub-data comprises a transformation step of a type different than the transform skip type.
In an embodiment, the step of applying at least one of an inverse transformation and an inverse quantization steps to the first sub-data comprises an inverse quantization step using a first quantization parameter and the step of applying at least one of an inverse transformation and an inverse quantization steps to the second sub-data comprises an inverse quantization step using a second quantization parameter, the second quantization parameter being equal to the first quantization parameter plus a value depending on the bit-depth of data of the image and on the predetermined bit-depth of each of the at least first and second sub-samples.
In an embodiment, the value is equal to six times the predetermined bit-depth of the second sub-samples.
In an embodiment, the image portion is a prediction residual block.
In an embodiment, the encoded format conforms to the HEVC standard.
In an embodiment, the encoded format conforms to the HEVC RExt standard.
According to a third aspect of the invention there is provided a device for processing an image portion in a process of encoding an image from which is obtained the image portion, the image portion comprising a plurality of samples of a bit-depth determined as a function of a bit-depth of data of the image, the device comprising at least one microprocessor configured for carrying out the steps of: splitting each of the samples of the image portion into at least a first sub-sample and a second sub-sample, each of the at least first and second sub-samples being of a predetermined bit-depth, the bit-depth of the samples being higher than the predetermined bit-depth of each of the at least first and second sub-samples; and applying at least one of a transformation and a quantization steps to the first sub-samples to obtain first sub-data; applying at least one of a transformation and a quantization steps to the second sub-samples to obtain second sub-data; and merging the first and the second sub-data to encode the image portion.
Such a device provides an efficient way of processing high bit-depth content in video encoder and decoder, for example encoder and decoder conforming to the HEVC standard. According to this embodiment, few changes are required in the current design of such encoders and decoders in order to re-use as far as possible the existing design, taking into account rounding and clipping processes applied in the transformation and quantization steps. In addition, some steps of the method can be executed in a parallel way so as to improve processing efficiency in terms of response time.
In an embodiment, the at least one microprocessor is further configured so that the steps of applying at least one of a transformation and a quantization steps to the first sub-samples and of applying at least one of a transformation and a quantization steps to the second sub-samples are performed independently.
In an embodiment, the at least one microprocessor is further configured for carrying out the step of entropy coding the merged first and second sub-data.
In an embodiment, the at least one microprocessor is further configured so that the step of splitting each of the samples of the image portion into at least a first sub-sample and a second sub-sample comprises a step of extracting a set of most significant bits from each of the samples to form the first sub-samples and a step of extracting a set of least significant bits from each of the samples to form the second sub-samples.
In an embodiment, the at least one microprocessor is further configured for carrying out the preliminary step of withdrawing a predetermined number of least significant bits in each of the samples to obtain truncated samples.
In an embodiment, the at least one microprocessor is further configured so that the predetermined number of least significant bits to be withdrawn in each of the samples is determined so that the truncated samples comprise an even number of bits.
In an embodiment, the at least one microprocessor is further configured so that the step of splitting each of the samples of the image portion into at least a first sub-sample and a second sub-sample comprises a step of extracting a set of most significant bits from each of the truncated samples to form the first sub-samples and a step of extracting a set of least significant bits from each of the truncated samples to form the second sub-samples.
In an embodiment, the predetermined number of least significant bits is equal to one and the bit-depth of data of the image is equal to sixteen.
In an embodiment, the set of most significant bits and the set of least significant bits each comprise eight bits.
In an embodiment, the at least one microprocessor is further configured so that the step of splitting each of the samples of the image portion into at least a first sub-sample and a second sub-sample comprises, for each of the samples, a step of extracting a set of least significant bits from the sample and a step of splitting the set of least significant bits so that each of the first and second sub-samples comprises bits of the set of least significant bits.
In an embodiment, the at least one microprocessor is further configured so that the step of applying at least one of a transformation and a quantization steps to the first sub-samples comprises a discrete cosine or sine transform step or a transformation step of the transform skip type.
In an embodiment, the at least one microprocessor is further configured so that the step of applying at least one of a transformation and a quantization steps to the second sub-samples comprises a transformation step of the transform skip type if the step of applying at least one of a transformation and a quantization steps to the first sub-samples comprises a transformation step of the transform skip type.
In an embodiment, the at least one microprocessor is further configured so that the step of applying at least one of a transformation and a quantization steps to the first sub-samples comprises a transformation step of a type different than the transform skip type if the step of applying at least one of a transformation and a quantization steps to the second sub-samples comprises a transformation step of a type different than the transform skip type.
In an embodiment, the at least one microprocessor is further configured so that each of the steps of applying at least one of a transformation and a quantization steps to the first sub-samples and of applying at least one of a transformation and a quantization steps to the second sub-samples comprises a step of partitioning a block of sub-samples into transformation units, the partitioning steps being performed independently of each other.
In an embodiment, the at least one microprocessor is further configured so that each of the steps of applying at least one of a transformation and a quantization steps to the first sub-samples and of applying at least one of a transformation and a quantization steps to the second sub-samples comprises a step of partitioning a block of sub-samples into transformation units, one partitioning step depending on another partitioning step.
In an embodiment, the at least one microprocessor is further configured so that each of the steps of applying at least one of a transformation and a quantization steps to the first sub-samples and of applying at least one of a transformation and a quantization steps to the second sub-samples comprises a step of partitioning a block of sub-samples into transformation units, the partitioning steps being identical.
In an embodiment, the at least one microprocessor is further configured so that the step of applying at least one of a transformation and a quantization steps to the first sub-samples comprises a quantization step using a first quantization parameter and the step of applying at least one of a transformation and a quantization steps to the second sub-samples comprises a quantization step using a second quantization parameter, the second quantization parameter being equal to the first quantization parameter plus a value depending on the bit-depth of data of the image and on the predetermined bit-depth of each of the at least first and second sub-samples.
In an embodiment, the value is equal to six times the predetermined bit-depth of the second sub-samples.
In an embodiment, the image portion is a prediction residual block.
In an embodiment, the encoded format conforms to the HEVC standard.
In an embodiment, the encoded format conforms to the HEVC RExt standard.
According to a fourth aspect of the invention there is provided a video encoder comprising the device as described above.
According to a fifth aspect of the invention there is provided a device for processing received data in a process of decoding an image, the processed received data being representative of an image portion comprising a plurality of samples of a bit-depth determined as a function of a bit-depth of data of the image, the device comprising at least one microprocessor configured for carrying out the steps of: splitting received data into at least first sub-data and second sub-data; applying at least one of an inverse quantization and an inverse transformation steps to the first sub-data to obtain first sub-samples; applying at least one of an inverse quantization and an inverse transformation steps to the second sub-data to obtain second sub-samples; merging the first and the second sub-samples to create samples of the image portion, wherein each of the first and second sub-samples is of a predetermined bit-depth, the bit-depth of the samples being higher than the predetermined bit-depth of each of the first and second sub-samples.
Such a device provides an efficient way of processing high bit-depth content in video encoder and decoder, for example encoder and decoder conforming to the HEVC standard. According to this embodiment, few changes are required in the current design of such encoders and decoders in order to re-use as far as possible the existing design, taking into account rounding and clipping processes applied in the transformation and quantization steps. In addition, some steps of the method can be executed in a parallel way so as to improve processing efficiency in terms of response time.
In an embodiment, the at least one microprocessor is further configured so that the steps of applying at least one of an inverse transformation and an inverse quantization steps to the first sub-data and of applying at least one of an inverse transformation and an inverse quantization steps to the second sub-data are performed independently.
In an embodiment, the at least one microprocessor is further configured for carrying out the preliminary step of entropy decoding the received data.
In an embodiment, the at least one microprocessor is further configured so that the step of merging the first and second sub-samples comprises a step of obtaining a first set of bits from the first sub-sample and a step of obtaining a second set of bits from the second sub-sample, the first set of bits forming a set of most significant bits and the second set of bits forming a set of least significant bits, the sets of most significant bits and of least significant bits being used to form one sample.
In an embodiment, the number of bits obtained from the first sub-sample is lower than the bit-depth of the first sub-sample.
In an embodiment, the number of bits obtained from the second sub-sample is lower than the bit-depth of the second sub-sample.
In an embodiment, the difference between the number of bits obtained from the first sub-sample and the bit-depth of the first sub-sample and the difference between the number of bits obtained from the second sub-sample and the bit-depth of the second sub-sample are each equal to one, and the bit-depth of data of the image is equal to sixteen.
In an embodiment, the first and second sets of bits each comprise eight bits.
In an embodiment, the at least one microprocessor is further configured so that the step of merging the first and second sub-samples comprises a step of obtaining a first set of bits from the first sub-sample, a step of obtaining a second set of bits from the second sub-sample, and a step of forming a set of least significant bits from bits of the first set of bits and from bits of the second set of bits, the set of least significant bits being used to form one sample.
In an embodiment, the at least one microprocessor is further configured so that the step of applying at least one of an inverse transformation and an inverse quantization steps to the first sub-data comprises an inverse discrete cosine or sine transform step or a transformation step of the transform skip type.
In an embodiment, the at least one microprocessor is further configured so that the step of applying at least one of an inverse transformation and an inverse quantization steps to the second sub-data comprises a transformation step of the transform skip type if the step of applying at least one of an inverse transformation and an inverse quantization steps to the first sub-data comprises a transformation step of the transform skip type.
In an embodiment, the at least one microprocessor is further configured so that the step of applying at least one of an inverse transformation and an inverse quantization steps to the first sub-data comprises a transformation step of a type different than the transform skip type if the step of applying at least one of an inverse transformation and an inverse quantization steps to the second sub-data comprises a transformation step of a type different than the transform skip type.
In an embodiment, the at least one microprocessor is further configured so that the step of applying at least one of an inverse transformation and an inverse quantization steps to the first sub-data comprises an inverse quantization step using a first quantization parameter and the step of applying at least one of an inverse transformation and an inverse quantization steps to the second sub-data comprises an inverse quantization step using a second quantization parameter, the second quantization parameter being equal to the first quantization parameter plus a value depending on the bit-depth of data of the image and on the predetermined bit-depth of each of the at least first and second sub-samples.
In an embodiment, the value is equal to six times the predetermined bit-depth of the second sub-samples.
In an embodiment, the image portion is a prediction residual block.
In an embodiment, the encoded format conforms to the HEVC standard.
In an embodiment, the encoded format conforms to the HEVC RExt standard.
According to a sixth aspect of the invention there is provided a video decoder comprising the device as described above.
Since the present invention can be implemented in software, the present invention can be embodied as computer readable code for provision to a programmable apparatus on any suitable carrier medium. A tangible carrier medium may comprise a storage medium such as a floppy disk, a CD-ROM, a hard disk drive, a magnetic tape device or a solid state memory device and the like. A transient carrier medium may include a signal such as an electrical signal, an electronic signal, an optical signal, an acoustic signal, a magnetic signal or an electromagnetic signal, e.g. a microwave or RF signal.
BRIEF DESCRIPTION OF THE DRAWINGS
Further advantages of the present invention will become apparent to those skilled in the art upon examination of the drawings and detailed description. It is intended that any additional advantages be incorporated herein.
Embodiments of the invention will now be described, by way of example only, and with reference to the following drawings in which:
Figure 1 schematically illustrates an example of a data structure used in HEVC;
Figure 2 illustrates the architecture of an example of an HEVC video encoder;
Figure 3 illustrates the architecture of an example of an HEVC video decoder;
Figure 4, comprising Figures 4a and 4b, illustrates the transform and quantization processes as carried out in an encoder and a decoder, respectively;
Figure 5, comprising Figure 5a and Figure 5b, illustrates an embodiment of the invention at the encoder and decoder ends, respectively; and
Figure 6, comprising Figure 6a, Figure 6b, and Figure 6c, illustrates different possible configurations for re-arranging the bits of a sample into two sub-samples according to a particular embodiment of the invention.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
Figure 1 illustrates an example of coding structure used in HEVC.
According to HEVC and one of its predecessors, the original video sequence 101 is a succession of digital images ("images i"). A digital image is represented by one or more matrices the coefficients of which represent pixels.
It should be noted that the word "image" should be broadly interpreted as video images in the following. For instance, it designates the pictures (or frames) in a video sequence.
The images 102 are divided into slices 103. A slice is a part of the image or the entire image. In HEVC these slices are divided into non-overlapping Largest Coding Units (LCUs), also referred to as Coding Tree Blocks (CTB) 104, generally blocks of size 64 pixels x 64 pixels. Each CTB may in its turn be iteratively divided into smaller variable-size Coding Units (CUs) 105 using a quadtree decomposition. Coding units are the elementary coding elements and are constituted of two kinds of sub-units, the Prediction Units (PU) and the Transform Units (TU), of maximum size equal to the CU's size. The Prediction Units correspond to the partition of the CU for the prediction of pixel values. Each CU can be further partitioned into a maximum of 4 square Partition Units or 2 rectangular Partition Units 106. Transform units are used to represent the elementary units that are spatially transformed with the DCT (standing for Discrete Cosine Transform). A CU can be partitioned into TUs based on a quadtree representation (107).
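The quadtree decomposition of a CTB into CUs can be sketched as follows. This is an illustrative fragment only: the split predicate should_split is a hypothetical placeholder for the encoder's actual, rate-distortion driven, decision.

```python
def quadtree_partition(x, y, size, min_size, should_split):
    """Recursively split a square block (e.g. a 64x64 CTB) into coding
    units: return the list of (x, y, size) leaves of the quadtree."""
    if size > min_size and should_split(x, y, size):
        half = size // 2
        leaves = []
        for dx in (0, half):
            for dy in (0, half):
                leaves += quadtree_partition(x + dx, y + dy, half,
                                             min_size, should_split)
        return leaves
    return [(x, y, size)]

# Example: split the CTB once, then split only its top-left quadrant.
leaves = quadtree_partition(0, 0, 64, 8,
                            lambda x, y, s: (x, y, s) in
                            {(0, 0, 64), (0, 0, 32)})
assert (0, 0, 16) in leaves and (32, 32, 32) in leaves
```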
Each slice is embedded in one NAL unit. In addition, the coding parameters of the video sequence are stored in dedicated NAL units referred to as parameter sets.
In HEVC and H.264/AVC, two kinds of parameter set NAL units are employed. First, the Sequence Parameter Set (SPS) NAL unit comprises all parameters that are unchanged during the whole video sequence; typically, it handles the coding profile, the size of the video frames and other parameters. Second, Picture (Image) Parameter Sets (PPS) code the different values that may change from one frame to another.
An additional structure, referred to as a Tile, is also defined in HEVC. A Tile is a rectangular set of LCUs. The division of each image or slice into tiles is a partitioning. This structure is well adapted to specify Regions of Interest. In HEVC, a Tile is built (coded and decoded) independently from neighbouring tiles of the same image. However a tile may be predicted from several tiles from previously processed images.
In summary, in HEVC, specific syntax headers or parameter sets are defined for the different levels:
- video level: a Video Parameter Set (VPS) is defined to specify the structure of the video; a video is made of several layers, corresponding to several versions of the same content, for instance different views of the same scene or different spatial resolutions of the same view; the VPS specifies the layered structure of the video content;
- sequence level: a Sequence Parameter Set (SPS) is defined to specify the structure of the sequence; in particular it defines the spatial resolution of the images, the frame rate, the chroma format, and the bit-depth of luma and chroma samples; an SPS refers to a VPS via a VPS id;
- image level: a Picture (Image) Parameter Set (PPS) is defined to specify a set of features relating to images of the sequence; parameters such as the default luma and chroma quantization parameters, the weighted prediction usage, the tiles usage, and the loop filtering parameters are signalled in the PPS; a PPS refers to an SPS via an SPS id;
- slice level: a Slice Header (referred to in the HEVC specification as a Slice Segment Header) is defined to specify a set of features relating to the slice of the image; similarly to the PPS, it specifies settings for the coding tools, such as the slice type (intra, inter), the reference images used for temporal prediction, the activation of coding tools, and the number and structure of tiles composing the slice; a Slice Segment Header refers to a PPS via a PPS id.
In what follows, coding tools and coding modes will be described. Coding tools are the different processes that apply in the coding/decoding processes: for instance, intra coding, inter coding, motion compensation, transform, quantization, entropy coding, and deblocking filtering. Coding modes relate to coding tools and correspond to the different available parameterizations of these coding tools. For simpler terminology, both terms are considered equivalent and can be used in the same way.
Figure 2 illustrates a schematic diagram of an example of an HEVC video encoder 200 that can be considered as a superset of one of its predecessors (H.264/AVC).
Each frame of the original video sequence 101 is first divided into a grid of coding units (CU) in a module referenced 201 which is also used to control the definition of the coding slices.
The subdivision of the largest coding units (LCUs) into coding units (CUs) and the partitioning of the coding units into transform units (TUs) and prediction units (PUs) is determined as a function of a rate-distortion criterion. Each prediction unit of the coding unit being processed is predicted spatially by an "Intra" predictor during a step carried out in a module referenced 217, or temporally by an "Inter" predictor during a step carried out by a module referenced 218. Each predictor is a block of pixels issued from the same image (i.e. the processed image) or another image, from which a difference block (or "residual") is derived. Thanks to the identification of a predictor block and the coding of the residual, it is possible to reduce the quantity of information actually to be encoded.
The encoded frames are of two types: temporally predicted frames (predicted either from one reference frame, called P-frames, or from two reference frames, called B-frames) and non-temporally predicted frames (called Intra frames or I-frames). In I-frames, only Intra prediction is considered for encoding coding units and prediction units. In P-frames and B-frames, both Intra and Inter prediction are considered for encoding coding units and prediction units.
In the "Intra" prediction processing module 217, the current block is predicted by means of an "Intra" predictor that is to say a block of pixels constructed from the information already encoded in the current image. More precisely, the module 202 determines an intra prediction mode that is to be used to predict pixels from the neighbouring PU pixels. In HEVC, up to 35 intra prediction modes are considered.
A residual block is obtained by computing the difference between the intra-predicted block and the current block of pixels. An intra-predicted block therefore comprises a prediction mode and a residual. The coding of the intra prediction mode is inferred from the prediction modes of the neighbouring prediction units. This process for inferring a prediction mode, carried out in module 203, enables the coding rate of the intra prediction mode to be reduced. The intra prediction processing module also uses the spatial dependencies of the frame, both for predicting the pixels and for inferring the intra prediction mode of the prediction unit.
With regard to the second processing module 218, which is directed to "Inter" coding, two prediction types are possible. The first prediction type, referred to as mono-prediction and denoted P-type, consists in predicting a block by referring to one reference block from one reference picture. The second prediction type, referred to as bi-prediction and denoted B-type, consists in predicting a block by referring to two reference blocks from one or two reference pictures.
An estimation of motion between the current prediction unit and blocks of pixels of reference images 215 is made in module 204 in order to identify, in one or several of these reference images, one (P-type) or several (B-type) blocks of pixels to be used as predictors to encode the current block. In cases where several predictors are to be used (B-type), they are merged to generate a single prediction block. It is to be recalled that reference images are images in a video sequence that have already been coded and then reconstructed (by decoding).
The reference block is identified in the reference frame by a motion vector (MV) that is equal to the displacement between the prediction unit in the current frame and the reference block. After having determined a reference block, the difference between the prediction block and current block is computed in module 205 of processing module 218 carrying out the inter prediction process. This block of differences represents the residual of the inter predicted block. At the end of the inter prediction process, the current PU is composed of one motion vector and a residual.
Thanks to the spatial dependencies of movements between neighbouring prediction units, HEVC provides a method to predict a motion vector for each prediction unit. To that end, several types of motion vector predictors are employed (generally two types, one of the spatial type and one of the temporal type). Typically, the motion vectors associated with the prediction units located on the top, the left, and the top left corner of the current prediction unit form a first set of spatial predictors. A temporal motion vector candidate is generally also used; it is typically the one associated with the collocated prediction unit in a reference frame (i.e. the prediction unit at the same coordinates). According to the HEVC standard, one of the predictors is selected based on a criterion that minimizes the difference between the motion vector predictor and the motion vector associated with the current prediction unit. According to the HEVC standard, this process is referred to as AMVP (standing for Advanced Motion Vector Prediction).
After having been determined, the motion vector of the current prediction unit is coded in module 206, using an index that identifies the predictor within the set of motion vector candidates and a motion vector difference (MVD) between the prediction unit motion vector and the selected motion vector candidate. The Inter prediction processing module relies also on spatial dependencies between motion information of prediction units to increase the compression ratio of inter predicted coding units.
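As an illustration of the AMVP selection just described, the sketch below picks the candidate closest to the actual motion vector and derives the motion vector difference (MVD) to be signalled. The L1 cost used here is an assumption made for the example; an HEVC encoder is free to use its own selection criterion.

```python
def select_mv_predictor(mv, candidates):
    """Return (index of the selected predictor, MVD): the predictor that
    minimizes the difference to the actual motion vector is selected,
    and the MVD is what gets coded alongside the predictor index."""
    costs = [abs(mv[0] - c[0]) + abs(mv[1] - c[1]) for c in candidates]
    index = costs.index(min(costs))
    best = candidates[index]
    return index, (mv[0] - best[0], mv[1] - best[1])

# Example: with candidates (3, 4) and (10, 0), the motion vector (9, 1)
# selects the predictor at index 1 and signals the MVD (-1, 1).
assert select_mv_predictor((9, 1), [(3, 4), (10, 0)]) == (1, (-1, 1))
```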
The spatial coding and the temporal coding (modules 217 and 218) thus supply several texture residuals (i.e. the difference between a current block and a predictor block), which are compared with each other in module 216 in order to select the best coding mode to be used.
The residual obtained at the end of the inter or intra prediction process is then transformed in module 207. The transform applies to a transform unit that is included in a coding unit. A transform unit can be further split into smaller transform units using a so-called Residual QuadTree (RQT) decomposition. According to the HEVC standard, two or three levels of decomposition are generally used, and the authorized transform block sizes are 32x32, 16x16, 8x8, and 4x4. The transform function is derived from the discrete cosine transform (DCT).
The transformed residual coefficients are then quantized in module 208, and the coefficients of the quantized transformed residual are coded by means of entropy coding in module 209 to be added to the compressed bit stream 210. Coding syntax elements are also coded in module 209. This entropy coding module uses spatial dependencies between syntax elements to increase the coding efficiency.
In order to calculate the "Intra" predictors or to make an estimation of the motion for the "Inter" predictors, the encoder performs a decoding of the blocks already encoded. This is done by means of a so-called "decoding" loop carried out in modules 211, 212, 213, and 214. This decoding loop makes it possible to reconstruct blocks and images from the quantized transformed residuals.
According to the decoding loop, a quantized transformed residual is dequantized in module 211 by applying an inverse quantization that corresponds to the inverse of the one provided in module 208. Next, the residual is reconstructed in module 212 by applying the inverse transform of the transform applied in module 207.
On the one hand, if the residual comes from an "Intra" coding, that is to say from module 217, the used "Intra" predictor is added to the decoded residual in order to recover a reconstructed block corresponding to the original processed block (i.e. the block lossily modified by the transform and quantization modules 207 and 208).
On the other hand, if the residual comes from an "Inter" coding module 218, the blocks pointed to by the current motion vectors (these blocks belong to the reference images 215 referred to by the current image indices) are merged before being added to the processed received residual.
A final loop filter processing module 219 is used to filter the reconstructed residuals in order to reduce the effects resulting from heavy quantization of the residuals and thus reduce encoding artefacts. According to the HEVC standard, several types of loop filters are used, among which are the deblocking filter and the sample adaptive offset (SAO), carried out in modules 213 and 214, respectively. The parameters used by these filters are coded and transmitted to the decoder using a header of the bit stream, typically a slice header.
The filtered images, also called reconstructed images, are stored as reference images 215 in order to allow the subsequent "Inter" predictions taking place during the compression of the following images of the current video sequence.
In the context of HEVC, it is possible to use several reference images 215 for the estimation of motion vectors and for motion compensation of blocks of the current image. In other words, the motion estimation is carried out on a set of several images. Thus, the best "Inter" predictors of the current block, for the motion compensation, are selected in some of the multiple reference images. Consequently two adjoining blocks may have two predictor blocks that come from two distinct reference images. This is in particular the reason why, in the compressed bit stream, the index of the reference image (in addition to the motion vector) used for the predictor block is indicated.
The use of multiple reference images (the Video Coding Experts Group recommends limiting the number of reference images to four) is useful for withstanding errors and improving the compression efficiency.
It is to be noted that the resulting bit stream 210 of the encoder 200 comprises a set of NAL units that corresponds to parameter sets and coded slices.
Figure 3 illustrates a schematic diagram of a video decoder of the HEVC type. The illustrated decoder 300 receives as an input a bit stream, for example the bit stream 210 corresponding to video sequence 101 compressed by encoder 200 of the HEVC type as described by reference to Figure 2.
During the decoding process, the bit stream 210 is parsed in an entropy decoding module 301. This processing module uses the previously entropy-decoded elements to decode the encoded data. In particular, it decodes the parameter sets of the video sequence to initialize the decoder. It also decodes the largest coding units of each video frame. The NAL units that correspond to coded slices are then decoded.
The partition of a current largest coding unit is parsed and the subdivisions of coding units, prediction units, and transform units are identified. The decoder successively processes each coding unit in intra processing module 307 or inter processing module 306 and in inverse quantization module 311, inverse transform module 312, and loop filter processing module 319.
It is to be noted that inverse quantization module 311, inverse transform module 312, and loop filter processing module 319 are similar to inverse quantization module 211, inverse transform module 212, and loop filter processing module 219 as described by reference to Figure 2.
The "Inter" or "Intra" prediction mode for the current block is parsed from the bit stream 210 in parsing process module 301. Depending on the prediction mode, either intra prediction processing module 307 or inter prediction processing module 306 is selected to be used.
If the prediction mode of the current block is of the "Intra" type, the prediction mode is extracted from the bit stream and decoded with the help of the prediction modes of neighbouring blocks in module 304 of intra prediction processing module 307. The intra predicted block is then computed in module 303 with the decoded prediction mode and the already decoded pixels at the boundaries of the current prediction unit. The residual associated with the current block is recovered from the bit stream and then entropy decoded in module 301.
On the contrary, if the prediction mode of the current block indicates that this block is of the "Inter" type, the motion information is extracted from the bit stream and decoded in module 304 and the AMVP process is carried out. Motion information of the neighbouring prediction units already decoded is also used to compute the motion vector of the current prediction unit. This motion vector is used in the reverse motion compensation module 305 in order to determine the "Inter" predictor block contained in the reference images 215 of the decoder 300. In a similar way to what is done in the encoder, the reference images 215 are composed of images that precede in decoding order the image currently being decoded and that are reconstructed from the bit stream (and therefore decoded previously).
A following decoding step consists in decoding the residual block corresponding to the current coding unit, which has been transmitted in the bit stream.
The parsing module 301 extracts the residual coefficients from the bit stream and performs successively an inverse quantization in module 311 and an inverse transform in module 312 to obtain the residual block. This residual block is added to the predicted block obtained at the output of the intra or inter processing module.
After having decoded all the blocks of a current image, loop filter processing module 319 is used to eliminate block effects and to reduce encoding artefacts in order to obtain reference images 215. Like the encoder and as described by reference to Figure 2, loop filter processing module 319 may comprise a deblocking filter and sample adaptive offset.
As illustrated, the decoded images are used as an output video signal 308 of the decoder, which can then be displayed.
As mentioned above, the transform carried out in module 207 and the inverse transform carried out in modules 212 and 312 can apply to blocks having a size varying from 4x4 to 32x32. It is also possible to skip the transform for 4x4 blocks when it turns out that the transformed coefficients are more costly to encode than the non-transformed residual signal (this is known as the Transform-Skip mode).
The DCT-like transform matrices used in HEVC are all derived from a 32x32 matrix denoted T32x32 whose content is given in the Appendix. The transform matrix of size NxN (N = 4, 8, 16) is generally derived from the T32x32 transform matrix according to the following relation: TNxN[x, y] = T32x32[x, y*S] with S = 32/N, for x = 0..N-1 and y = 0..N-1. For 4x4 blocks, it is also possible to use a DST-like transform (DST standing for Discrete Sine Transform) as defined in the Appendix.
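For the sake of illustration, this derivation can be sketched in C as follows (an illustrative sketch only; the array t32, assumed to hold the T32x32 coefficients of the Appendix, and the function name are not part of the standard text):

    #include <stdint.h>

    /* t32 is assumed to hold the coefficients of the HEVC T32x32 matrix
     * given in the Appendix. */
    extern const int16_t t32[32][32];

    /* Derive the NxN transform matrix (N = 4, 8 or 16) by keeping every
     * S-th column of T32x32, with S = 32/N. */
    static void derive_tnxn(int n, int16_t out[32][32])
    {
        int s = 32 / n;
        for (int x = 0; x < n; x++)
            for (int y = 0; y < n; y++)
                out[x][y] = t32[x][y * s];
    }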
Quantization parameters are used to control the quantization process and the deblocking filtering process. They can vary from one coding unit to another. According to the HEVC standard, a first set of quantization parameters, common to all the coding units of a slice, is signalled within the bit stream and, if required, further delta quantization parameters are transmitted, one per coding unit.
Regarding the quantization process, the quantization parameter referred to as QPcu that is associated with a particular coding unit does not directly apply.
Quantization is done by using a quantization parameter referred to as QP which depends on the input quantization parameter QPcu and the bit-depth B according to the following relation: QP = QPcu + 6*(B - 8) (for instance, for 10-bit content, QP = QPcu + 12). Regarding the deblocking filter, the input quantization parameter known as QPcu is used without any change.
Figure 4, comprising Figures 4a and 4b, illustrates the transform and quantization processes as carried out in an encoder and a decoder, respectively.
For the sake of illustration, consider a prediction residual block to be processed, denoted R and referenced 401 in Figure 4. R is the output of a prediction process, for example the output of module 217 or 218 described by reference to Figure 2, depending on the coding mode. It is assumed that the size of R is NxN, with N = 2^n, and that B is the bit-depth of the image data.
Regarding the encoder and as illustrated in Figure 4a, a first step (step 402) aims at computing a block of transform coefficients D1. It may be computed according to the following relation: D1 = TNxN x R^t where '^t' is the matrix transposition operator and 'x' the matrix multiplication operator.
Next, in step 403, each sample of the block of transform coefficients D1 is shifted according to the following relation: C1[x, y] = D1[x, y] >> (n + B - 9) which means that each sample of the block of transform coefficients D1 is shifted (n + B - 9) bits to the right (the (n + B - 9) least significant bits being withdrawn).
After the block D1 samples have been shifted, the resulting block C1 is used to compute a block of transform coefficients D2 according to the following relation (step 404): D2 = TNxN x C1. Next, in step 405, each sample of the block of transform coefficients D2 is shifted according to the following relation: C2[x, y] = D2[x, y] >> (n + 6) which means that each sample of the block of transform coefficients D2 is shifted (n + 6) bits to the right (the (n + 6) least significant bits being withdrawn).
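For the sake of illustration, steps 402 to 405 can be sketched in C as follows (an illustrative sketch under the notations above, with n_log2 standing for n = log2(N); 32-bit intermediate storage and 64-bit accumulators are used here for simplicity, whereas the design goal of the standard is to keep intermediates within 16-bit registers):

    #include <stdint.h>

    #define NMAX 32

    /* d = t x m^t, as in step 402 (D1 = TNxN x R^t). */
    static void mul_mt(int n, int16_t t[NMAX][NMAX],
                       int32_t m[NMAX][NMAX], int32_t d[NMAX][NMAX])
    {
        for (int x = 0; x < n; x++)
            for (int y = 0; y < n; y++) {
                int64_t acc = 0;
                for (int k = 0; k < n; k++)
                    acc += (int64_t)t[x][k] * m[y][k]; /* m^t[k][y] = m[y][k] */
                d[x][y] = (int32_t)acc;
            }
    }

    /* d = t x m, as in step 404 (D2 = TNxN x C1). */
    static void mul_m(int n, int16_t t[NMAX][NMAX],
                      int32_t m[NMAX][NMAX], int32_t d[NMAX][NMAX])
    {
        for (int x = 0; x < n; x++)
            for (int y = 0; y < n; y++) {
                int64_t acc = 0;
                for (int k = 0; k < n; k++)
                    acc += (int64_t)t[x][k] * m[k][y];
                d[x][y] = (int32_t)acc;
            }
    }

    /* Steps 402 to 405; b is the bit-depth B. */
    static void forward_transform(int n_log2, int b, int16_t t[NMAX][NMAX],
                                  int32_t r[NMAX][NMAX], int32_t c2[NMAX][NMAX])
    {
        int n = 1 << n_log2;
        int32_t d1[NMAX][NMAX], c1[NMAX][NMAX], d2[NMAX][NMAX];

        mul_mt(n, t, r, d1);                             /* step 402 */
        for (int x = 0; x < n; x++)                      /* step 403 */
            for (int y = 0; y < n; y++)
                c1[x][y] = d1[x][y] >> (n_log2 + b - 9);
        mul_m(n, t, c1, d2);                             /* step 404 */
        for (int x = 0; x < n; x++)                      /* step 405 */
            for (int y = 0; y < n; y++)
                c2[x][y] = d2[x][y] >> (n_log2 + 6);
    }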
It is to be noted that if the Transform Skip mode is to be used, steps 402 to 405 are replaced by a single step according to which the resulting block C2 is obtained by shifting the prediction residual block R according to the following relation: C2[x, y] = R[x, y] << (15 - n - B) which means that each sample of the prediction residual block R is shifted (15 - n - B) bits to the left.
Next, in step 406, each coefficient of the resulting block C2 is quantized according to the following relation: C3[x, y] = (C2[x, y] * qScale[QP%6] + rnd) >> (14 + QP/6 + tShift) where QP is the quantization parameter used in the quantization process, tShift is a variable set to the value (15 - n - B), and rnd is a rounding parameter, and where qScale is an array defined as follows: qScale[k] = INT{ 2^20 / levScale[k] } where INT{ . } is the nearest integer operator and levScale[] is defined as follows: levScale[k] = { 40, 45, 51, 57, 64, 72 } with k = 0..5.
This actually leads to the following definition: qScale[k] = { 26214, 23302, 20560, 18396, 16384, 14564 } with k = 0..5.
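The quantization of a single coefficient (step 406) can then be sketched as follows (an illustrative sketch; the rounding value rnd is taken here as half of the divisor step, which is an assumption, the standard deriving it from the coding mode):

    #include <stdint.h>

    static const int32_t qScale[6] = { 26214, 23302, 20560, 18396, 16384, 14564 };

    /* Step 406 for one coefficient: qp is the quantization parameter,
     * n_log2 = log2(N) and b is the bit-depth B. */
    static int32_t quantize_coeff(int32_t c2, int qp, int n_log2, int b)
    {
        int t_shift = 15 - n_log2 - b;
        int shift   = 14 + qp / 6 + t_shift;
        int64_t rnd = (int64_t)1 << (shift - 1);   /* assumed rounding value */
        return (int32_t)(((int64_t)c2 * qScale[qp % 6] + rnd) >> shift);
    }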
After having quantized the coefficients of the resulting block C2, each coefficient of the resulting block C3 is clipped (step 407). According to the HEVC standard, each value C3[x, y] is clipped between -32,768 and 32,767 (i.e. between -2^15 and 2^15 - 1). The clipped values represent the quantized coefficients 408.
It is to be recalled that the clipping operation of a variable x between values A and B, with B >= A, consists in forcing the value of x to A or B if it does not belong to the range [A; B], as follows:
- if x < A, x = A;
- otherwise, if x > B, x = B;
- otherwise, x is not modified.
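Expressed as code, this is the classic three-way clipping helper (a direct transcription of the definition above):

    #include <stdint.h>

    /* Clip x to the range [a; b], with b >= a. */
    static inline int32_t clip3(int32_t a, int32_t b, int32_t x)
    {
        if (x < a) return a;
        if (x > b) return b;
        return x;
    }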
The quantized coefficients are encoded (entropy encoding) and transmitted to the decoder within the bit stream.
Accordingly, at the decoder end, the block of decoded coefficients Q, referenced 409 in Figure 4b, actually corresponding to block C3 as described above by reference to Figure 4a, can be obtained from an entropy decoder, for example the entropy decoder 301 described by reference to Figure 3.
After having been obtained, each of the decoded quantized coefficients is dequantized according to the following relation (step 410): d[x, y] = { [Q[x, y] * m[x, y] * levScale[QP%6] << (QP/6)] + (1 << (bdS - 1)) } >> bdS where bdS is set to the value (B + n - 5) and m[x, y] is a coefficient of a scaling matrix that is received within the bit stream. By default (i.e. when no quantization matrix is used), m[x, y] is set to the value 16 (m[x, y] = 16).
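For the sake of illustration, step 410 can be sketched for one coefficient as follows (an illustrative sketch under the notations above, with n_log2 standing for n = log2(N)):

    #include <stdint.h>

    static const int32_t levScale[6] = { 40, 45, 51, 57, 64, 72 };

    /* Step 410 for one decoded coefficient; m is the scaling-matrix entry
     * (16 when no quantization matrix is used) and bdS = B + n - 5. */
    static int32_t dequantize_coeff(int32_t q, int m, int qp, int n_log2, int b)
    {
        int bdS = b + n_log2 - 5;
        int64_t v = ((int64_t)q * m * levScale[qp % 6]) << (qp / 6);
        return (int32_t)((v + ((int64_t)1 << (bdS - 1))) >> bdS);
    }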
Next, in step 411, each of the dequantized coefficients d[x, y] is clipped between -32,768 and 32,767 (i.e. between -2^15 and 2^15 - 1). The resulting values are stored in the block of decoded coefficients Q (replacing the decoded values).
Next, in step 412, a block of transform coefficients E1 is computed as a function of the block Q of dequantized and clipped coefficients and of the transform matrix TNxN according to the following relation: E1^t = TNxN^t x Q. Each sample of E1 is then shifted in step 413 according to the following relation: G[x, y] = (E1[x, y] + 64) >> 7 which means that the value 64 is added to the processed sample and that the result is shifted seven bits to the right (the seven least significant bits being withdrawn).
Each resulting value G[x, y] is then clipped in step 414 between -32,768 and 32,767 (i.e. between -2^15 and 2^15 - 1).
Next, in step 415, the block of decoded residual samples E2, denoted 416 in Figure 4b, is computed as a function of the clipped resulting values G and of the transform matrix TNxN according to the following relation: E2^t = TNxN^t x G (a sketch of steps 412 to 415 is given below).
If the Transform Skip mode is to be used, steps 411 to 415 are replaced by a single step according to which each sample of the decoded residual samples E2 is obtained by shifting the corresponding decoded quantized coefficient Q[x, y] according to the following relation: E2[x, y] = Q[x, y] << 7.
Despite the advantages offered by the HEVC standard, it has been primarily designed to handle 8-bit content, most of the intermediate computations involved in the transform / inverse transform processes being achieved using 16-bit integer registers. The Range Extension of HEVC makes it possible to support more than 8 bits but, beyond 12 bits, it is no longer possible to guarantee that internal computations can be achieved using 16-bit integer registers.
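For the sake of illustration, the chain of steps 412 to 415 described above can be sketched in C as follows (an illustrative sketch only; 32-bit intermediates and 64-bit accumulators are used for simplicity, and the transposition convention follows the relations given above):

    #include <stdint.h>

    #define NMAX 32

    /* dst = src^t x t, which realizes dst^t = t^t x src (steps 412 and 415). */
    static void inv_stage(int n, int16_t t[NMAX][NMAX],
                          int32_t src[NMAX][NMAX], int32_t dst[NMAX][NMAX])
    {
        for (int x = 0; x < n; x++)
            for (int y = 0; y < n; y++) {
                int64_t acc = 0;
                for (int k = 0; k < n; k++)
                    acc += (int64_t)src[k][x] * t[k][y];
                dst[x][y] = (int32_t)acc;
            }
    }

    /* Steps 412 to 415: first inverse stage, shift and clip, second stage. */
    static void inverse_transform(int n, int16_t t[NMAX][NMAX],
                                  int32_t q[NMAX][NMAX], int32_t e2[NMAX][NMAX])
    {
        int32_t e1[NMAX][NMAX], g[NMAX][NMAX];

        inv_stage(n, t, q, e1);                    /* E1^t = T^t x Q  (412) */
        for (int x = 0; x < n; x++)
            for (int y = 0; y < n; y++) {
                int32_t v = (e1[x][y] + 64) >> 7;  /* step 413 */
                g[x][y] = v < -32768 ? -32768 : (v > 32767 ? 32767 : v); /* 414 */
            }
        inv_stage(n, t, g, e2);                    /* E2^t = T^t x G  (415) */
    }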
While this feature can be considered as offering satisfactory results for data bit-depths of 12 bits or less, higher bit-depths lead to problems that are linked, in particular, to the transform process to be applied, because the right-shifting and clipping sub-processes may cause significant losses, even before applying the quantization process.
Therefore, the transform process conforming to the current HEVC standard is not adapted to handle high bit-depth content, especially when 16-bit content is considered, which can be encountered for instance in medical applications.
According to a particular embodiment, the samples of prediction residual blocks are split, during an encoding process, into two sets of sub-samples of lower bit-depth. These sub-samples are then processed in two different tracks of transform and quantization. Next, the outputs of these two tracks are encoded by an entropy coder before being transmitted in a bit stream.
Symmetrically, at the decoder end, once coded coefficients are decoded, they are split into two sets of coefficients. These two sets are processed through two different tracks of inverse quantization and inverse transform, resulting in two sets of sub-samples of limited bit-depth that are further re-organized to generate prediction residual block samples of full bit-depth.
Figure 5, comprising Figure 5a and Figure 5b, illustrates an embodiment of the invention at the encoder and decoder ends, respectively.
More precisely, Figure 5a illustrates a transform process and a quantization process applied to an image portion, typically a prediction residual block. For the sake of illustration, the image portion can be the prediction residual block 401 described by reference to Figure 4.
According to a first step, the samples of the prediction residual block 401 are processed in a bit distribution module 501 which splits the bits of each of these samples into two sub-samples referenced 502 (sub-samples 1) and 503 (sub-samples 2). The set of sub-samples 1 obtained from the samples of the prediction residual block 401 forms a sub-image portion denoted sub-image portion 1. Likewise, the set of sub-samples 2 obtained from the samples of the prediction residual block 401 forms a sub-image portion denoted sub-image portion 2.
The bit-depth of sub-sample 1 and of sub-sample 2 is lower than that of the samples of the prediction residual block 401.
Next, sub-image portion 1 is processed in a first track comprising a transform module referenced 504 (and denoted transform 1) and a quantization module referenced 505 (and denoted quantization 1). This process applied to sub-image portion 1 results in a block of data referenced 508 (and denoted sub-data 1). It corresponds to the quantized transform coefficients of sub-samples 1.
Similarly, sub-image portion 2 is processed in a second track comprising a transform module referenced 506 (and denoted transform 2) and a quantization module referenced 507 (and denoted quantization 2). This process applied to sub-image portion 2 results in a block of data referenced 509 (and denoted sub-data 2). It corresponds to the quantized transform coefficients of sub-samples 2.
Steps carried out in the first track (i.e. in modules 504 and 505) and steps carried out in the second track (i.e. in modules 506 and 507) are preferably performed simultaneously. However, they can also be performed sequentially or in an interleaved way.
Sub-data 1 and sub-data 2 are then combined in module 510, the resulting combined data being of higher bit-depth than sub-data 1 and sub-data 2. Next, the combined data are processed in entropy coding module 511 to be added into a final bit stream 512.
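For the sake of illustration, the overall encoder-side flow of Figure 5a can be sketched as follows (the function names are purely illustrative stand-ins for the modules of the figure, not an actual API):

    #include <stdint.h>

    #define BLK_MAX (32 * 32)

    /* Illustrative prototypes standing for the modules of Figure 5a. */
    void bit_distribution(const int32_t *samples, int count,
                          int16_t *sub1, int16_t *sub2);           /* module 501 */
    void transform_quantize_1(const int16_t *sub, int count,
                              int16_t *data);                      /* modules 504-505 */
    void transform_quantize_2(const int16_t *sub, int count,
                              int16_t *data);                      /* modules 506-507 */
    void combine_and_entropy_code(const int16_t *data1,
                                  const int16_t *data2, int count); /* modules 510-511 */

    void encode_residual_block(const int32_t *samples, int count)
    {
        int16_t sub1[BLK_MAX], sub2[BLK_MAX];   /* sub-image portions 1 and 2 */
        int16_t data1[BLK_MAX], data2[BLK_MAX]; /* sub-data 1 and 2 */

        bit_distribution(samples, count, sub1, sub2);
        /* The two tracks are independent and may run simultaneously. */
        transform_quantize_1(sub1, count, data1);
        transform_quantize_2(sub2, count, data2);
        combine_and_entropy_code(data1, data2, count);
    }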
Figure 5b illustrates an inverse transform process and an inverse quantization process applied to encoded prediction residual blocks received in a bit stream 512.
As illustrated, a first step is an entropy decoding step carried out in module 513 to decode received data. The decoded data are then split in module 514 into two sets of blocks of data. A first set of blocks of data represents sub-data 1 (referenced 515) and a second set of blocks of data represents sub-data 2 (referenced 516) as described by reference to Figure 5a. Sub-data 1 can be distinguished from sub-data 2 according to a particular data structure or as a function of an item of information.
Next, sub-data 1 are processed in a first track comprising an inverse quantization module referenced 517 (and denoted inverse quantization 1) and an inverse transform module referenced 518 (and denoted inverse transform 1). This process applied to sub-data 1 results in a block of sub-samples referenced 521 (and denoted decoded sub-samples 1).
Similarly, sub-data 2 are processed in a second track comprising an inverse quantization module referenced 519 (and denoted inverse quantization 2) and an inverse transform module referenced 520 (and denoted inverse transform 2). This process applied to sub-data 2 results in a block of sub-samples referenced 522 (and denoted decoded sub-samples 2).
Steps carried out in the first track (i.e. in modules 517 and 518) and steps carried out in the second track (i.e. in modules 519 and 520) are preferably performed simultaneously. However, they can also be performed sequentially or in an interleaved way.
Bits of sub-samples 1 and of sub-samples 2 are then combined in module 523 to generate decoded samples of higher bit-depth, corresponding to the decoded residual samples referenced 524.
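For the sake of illustration, module 523 can be sketched for one sample as follows (an illustrative sketch assuming the arrangement of Figure 6a described below, i.e. 8 most significant bits in sub-sample 1 and 8 least significant bits in sub-sample 2):

    #include <stdint.h>

    /* Rebuild a full bit-depth residual sample from the two decoded
     * sub-samples: sub1 carries the 8 MSBs (including the sign) and sub2
     * the 8 LSBs of the truncated 16-bit residual. */
    static int32_t merge_sub_samples(int16_t sub1, int16_t sub2)
    {
        return ((int32_t)sub1 << 8) | (sub2 & 0xFF);
    }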
As illustrated with arrows between modules 504 and 506, 505 and 507, 517 and 519, and 518 and 520, some parameters can be shared and exploited between transform and quantization modules of the two tracks of the encoder and between inverse transform and inverse quantization modules of the two tracks of the decoder.
For the sake of illustration, the transform type of transform 1 can be used to determine the transform type of transform 2. Another example is directed to the inter-dependency of the quadtree decomposition into transform units (i.e. TUs partitioning) of blocks of sub-samples 1 and 2. According to another example, the quantization parameter of quantization 1 can be used to derive the quantization parameter of quantization 2.
As described above, it is possible to perform the steps carried out by the two tracks of the encoder and/or of the decoder in a parallel way, since the two tracks are used to process the two blocks of sub-samples 1 and 2, as is commonly done for luma and chroma blocks in actual encoder and decoder implementations.
For the sake of illustration, transform module 504 (transform 1) may carry out one of the transform processes among the DCT and the DST. It may also be of the transform skip type.
Still for the sake of illustration, transform module 506 (transform 2) may be of the transform skip type.
According to a particular embodiment, if transform module 504 is of the transform skip type then transform module 506 is enforced to be of the transform skip type.
Still according to a particular embodiment, if transform module 506 is not of the transform skip type then transform module 504 cannot be of the transform skip type.
According to a particular embodiment, the partitioning of transform units of sub-samples 1 performed in transform module 504 is fully independent from the partitioning of transform units of sub-samples 2 performed in transform module 506 and the signalizations of these sub-samples in the bit stream are independent.
Still according to a particular embodiment, the partitioning of transform units of sub-samples 2 performed in transform module 506 is dependent on the partitioning of transform units of sub-samples 1 performed in transform module 504.
Still according to a particular embodiment, the partitioning of transform units of sub-samples 1 performed in transform module 504 is identical to the partitioning of transform units of sub-samples 2 performed in transform module 506.
According to a particular embodiment, the quantization parameter (QP1) used in quantization module 505 (quantization 1) and the quantization parameter (QP2) used in quantization module 507 (quantization 2) depend on each other according to the following relation: QP2 = QP1 + DQP where DQP is a parameter depending on the bit-depth of the original image samples, on the bit-depth A of sub-samples 1, and on the bit-depth B of sub-samples 2.
Still according to a particular embodiment, DQP is set equal to 6*B.
According to a particular embodiment, bit distribution module 501 splits each of the samples of a prediction residual block to be processed into a set of a number A of most significant bits (MSBs) and a set of a number B of least significant bits (LSBs). A and B are pre-defined numbers that are lower than the actual bit-depth of the prediction residual block samples.
According to a particular embodiment, bit distribution module 501 is configured to perform first a truncation step for withdrawing a number C of least significant bits from each of the samples of the prediction residual blocks to be processed, C being a pre-defined number.
Still according to a particular embodiment, the number C is defined in order to obtain a representation of the truncated samples on an even number of bits.
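For the sake of illustration, the truncation step followed by the split into A most significant bits and B least significant bits can be sketched as follows (an illustrative sketch; sign handling is simplified by letting the sign travel with the most significant part):

    #include <stdint.h>

    /* Sketch of bit distribution module 501 with prior truncation:
     * withdraw the C least significant bits, then split the remaining
     * bits into A MSBs (sub-sample 1) and B LSBs (sub-sample 2);
     * A is implicit, equal to the truncated bit-depth minus B. */
    static void split_sample(int32_t sample, int c, int b,
                             int16_t *sub1, int16_t *sub2)
    {
        int32_t truncated = sample >> c;               /* truncation step */
        *sub1 = (int16_t)(truncated >> b);             /* A most significant bits */
        *sub2 = (int16_t)(truncated & ((1 << b) - 1)); /* B least significant bits */
    }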
Figure 6, comprising Figure 6a, Figure 6b, and Figure 6c, illustrates different possible configurations for re-arranging the bits of a sample into two sub-samples.
For the sake of illustration, the samples of the image from which the prediction residual blocks to be processed are obtained comprise 16 bits (i.e. bit-depth = 16).
As illustrated in Figure 6a, if the samples of the processed image are coded on 16 bits, each of the samples of the prediction residual blocks to be processed, referenced 601, is coded on 17 bits (because it corresponds to a difference between a predicted image portion and the corresponding original image portion, which is potentially of double range compared to the original image).
According to the illustrated example, a truncation of the least significant bit (i.e. bit-number 0), which corresponds to C = 1, is performed in order to represent each of the prediction residual block samples on 16-bits.
Next, the remaining 16 bits are split into most significant bits (sub-sample 1) and least significant bits (sub-sample 2), referenced 602.
According to a particular embodiment, the remaining 16 bits are split into a first set of bits (sub-sample 1) comprising the 8 most significant bits and a second set of bits (sub-sample 2) comprising the 8 least significant bits.
Still according to a particular embodiment, the least significant bits are split among the two sets of bits, that is to say between sub-samples 1 and sub-samples 2. Such an embodiment is illustrated in Figure 6b where least significant bits 0, 2, 4 and 6 are stored in sub-sample 2 while least significant bits 1, 3, 5 and 7 are stored in sub-sample 1. In the illustrated example, some bits of sub-sample 2 are actually empty and are preferably filled with the value 0 (not represented for the sake of clarity).
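For the sake of illustration, this interleaved arrangement can be sketched as follows (an illustrative sketch; the exact positions at which the interleaved bits are placed inside each sub-sample are an assumption made for the sketch):

    #include <stdint.h>

    /* Figure 6b arrangement for a 16-bit value: the 8 MSBs go to
     * sub-sample 1; among the 8 LSBs, bits 1, 3, 5, 7 are appended to
     * sub-sample 1 and bits 0, 2, 4, 6 go to sub-sample 2 (its remaining
     * bit positions are left at zero). */
    static void split_interleaved(uint16_t v, uint16_t *sub1, uint16_t *sub2)
    {
        uint16_t odd = 0, even = 0;
        for (int i = 0; i < 4; i++) {
            odd  |= (uint16_t)(((v >> (2 * i + 1)) & 1) << i); /* bits 1,3,5,7 */
            even |= (uint16_t)(((v >> (2 * i)) & 1) << i);     /* bits 0,2,4,6 */
        }
        *sub1 = (uint16_t)(((v >> 8) << 4) | odd); /* 8 MSBs + 4 interleaved bits */
        *sub2 = even;
    }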
Still according to a particular embodiment illustrated in Figure 6c, the 16 bits of the samples of the processed image are split into a first set of bits (sub-sample 1) comprising the ten most significant bits and a second set of bits (sub-sample 2) comprising the six least significant bits. Then, the ten most significant bits are put into the most significant bits of sub-samples 1, whose length is equal to 12 bits in this example, and the six least significant bits are put into the most significant bits of sub-samples 2, whose length is equal to 8 bits in this example. The empty bits of both sub-samples 1 and sub-samples 2 are filled with the value zero (not represented for the sake of clarity).
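For the sake of illustration, this arrangement and its inverse (as performed at the decoder end in module 523) can be sketched as follows:

    #include <stdint.h>

    /* Figure 6c arrangement: the 10 MSBs of a 16-bit value are left-aligned
     * in a 12-bit sub-sample 1 and the 6 LSBs are left-aligned in an 8-bit
     * sub-sample 2; the padding bits are zero. */
    static void split_10_6(uint16_t v, uint16_t *sub1, uint16_t *sub2)
    {
        *sub1 = (uint16_t)((v >> 6) << 2);   /* 10 MSBs in a 12-bit container */
        *sub2 = (uint16_t)((v & 0x3F) << 2); /* 6 LSBs in an 8-bit container */
    }

    /* Inverse rearrangement, recovering the original 16-bit value. */
    static uint16_t merge_10_6(uint16_t sub1, uint16_t sub2)
    {
        return (uint16_t)(((uint16_t)(sub1 >> 2) << 6) | (sub2 >> 2));
    }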
Naturally, in order to satisfy local and specific requirements, a person skilled in the art may apply to the solution described above many modifications and alterations all of which, however, are included within the scope of protection of the invention as defined by the following claims.
APPENDIX
[Coefficients of the HEVC T32x32 transform matrix: a 32x32 table of signed integers, the first row consisting of thirty-two entries equal to 64; the remainder of the table is too garbled in the source scan to be reproduced reliably and is omitted here.]
HEVC T32x32 matrix
29 55 74 84
74 74 0 -74
84 -29 -74 55
55 -84 74 -29
Typical HEVC T4x4 matrix

Claims (84)

  1. 1. A method of processing an image portion in a process of encoding an image from which is obtained the image portion, the image portion comprising a plurality of samples of a bit-depth determined as a function of a bit-depth of data of the image, the method comprises: splitting each of the samples of the image portion into at least a first sub-sample and a second sub-sample, each of the at least first and second sub-samples being of a predetermined bit-depth, the bit-depth of the samples being higher than the predetermined bit-depth of each of the at least first and second sub-samples; and applying at least one of a transformation and a quantization steps to the first sub-samples to obtain first sub-data; applying at least one of a transformation and a quantization steps to the second sub-samples to obtain second sub-data; and merging the first and the second sub-data to encode the image portion.
  2. 2. The method according to claim 1 wherein the steps of applying at least one of a transformation and a quantization steps to the first sub-samples and of applying at least one of a transformation and a quantization steps to the second sub-samples are performed independently.
  3. 3. The method according to claim 1 further comprising a step of entropy coding the merged first and second sub-data.
  4. 4. The method according to claim 1 wherein the step of splitting each of the samples of the image portion into at least a first sub-sample and a second sub-sample comprises a step of extracting a set of most significant bits from each of the samples to form the first sub-samples and a step of extracting a set of least significant bits from each of the samples to form the second sub-samples.
  5. 5. The method according to claim 1 further comprising a preliminary step of withdrawing a predetermined number of least significant bits in each of the samples to obtain truncated samples.
  6. 6. The method according to claim 5 wherein the predetermined number of least significant bits to be withdrawn in each of the samples is determined so that the truncated samples comprise an even number of bits.
  7. 7. The method according to claim 5 wherein the step of splitting each of the samples of the image portion into at least a first sub-sample and a second sub-sample comprises a step of extracting a set of most significant bits from each of the truncated samples to form the first sub-samples and a step of extracting a set of least significant bits from each of the truncated samples to form the second sub-samples.
  8. 8. The method according to any one of the claims 5 to 7 wherein the predetermined number of least significant bits is equal to one and wherein the bit-depth of data of the image is equal to sixteen.
  9. 9. The method according to any one of the claims 4 to 8 wherein the set of most significant bits and the set of least significant bits each comprises eight bits.
  10. 10. The method according to claim 1 wherein the step of splitting each of the samples of the image portion into at least a first sub-sample and a second sub-sample comprises, for each of the samples, a step of extracting a set of least significant bits from the sample and a step of splitting the set of least significant bits so that each of the first and second sub-samples comprises bits of the set of least significant bits.
  11. 11. The method according to claim 1 wherein the step of applying at least one of a transformation and a quantization steps to the first sub-samples comprises a discrete cosine or sine transform step or a transformation step of the transform skip type.
  12. 12. The method according to claim 1 wherein the step of applying at least one of a transformation and a quantization steps to the second sub-samples comprises a transformation step of the transform skip type if the step of applying at least one of a transformation and a quantization steps to the first sub-samples comprises a transformation step of the transform skip type.
  13. 13. The method according to claim 1 wherein the step of applying at least one of a transformation and a quantization steps to the first sub-samples comprises a transformation step of a type different than the transform skip type if the step of applying at least one of a transformation and a quantization steps to the second sub-samples comprises a transformation step of a type different than the transform skip type.
  14. 14. The method according to claim 1 wherein each of the steps of applying at least one of a transformation and a quantization steps to the first sub-samples and of applying at least one of a transformation and a quantization steps to the second sub-samples comprises a step of partitioning a block of sub-samples into transformation units, the partitioning steps being performed independently of each other.
  15. 15. The method according to claim 1 wherein each of the steps of applying at least one of a transformation and a quantization steps to the first sub-samples and of applying at least one of a transformation and a quantization steps to the second sub-samples comprises a step of partitioning a block of sub-samples into transformation units, one partitioning step depending on another partitioning step.
  16. 16. The method according to claim 1 wherein each of the steps of applying at least one of a transformation and a quantization steps to the first sub-samples and of applying at least one of a transformation and a quantization steps to the second sub-samples comprises a step of partitioning a block of sub-samples into transformation units, the partitioning steps being identical.
  17. 17. The method according to claim 1 wherein the step of applying at least one of a transformation and a quantization steps to the first sub-samples comprises a quantization step using a first quantization parameter and wherein the step of applying at least one of a transformation and a quantization steps to the second sub-samples comprises a quantization step using a second quantization parameter, the second quantization parameter being equal to the first quantization parameter plus a value depending on the bit-depth of data of the image and of the predetermined bit-depth of each of the at least first and second sub-samples.
  18. 18. The method according to claim 17 wherein the value is equal to six times the predetermined bit-depth of the second sub-samples.
  19. 19. The method according to claim 1 wherein the image portion is a prediction residual block.
  20. 20. The method according to any one of claims 1 to 19 wherein the encoded format conforms to the HEVC standard.
  21. 21. The method according to any one of claims 1 to 19 wherein the encoded format conforms to the HEVC RExt standard.
  22. 22. A method of processing received data in a process of decoding an image, the processed received data being representative of an image portion comprising a plurality of samples of a bit-depth determined as a function of a bit-depth of data of the image, the method comprises: splitting received data into at least first sub-data and second sub-data; applying at least one of an inverse quantization and an inverse transformation steps to the first sub-data to obtain first sub-samples; applying at least one of an inverse quantization and an inverse transformation steps to the second sub-data to obtain second sub-samples; merging the first and the second sub-samples to create samples of the image portion, wherein each of the first and second sub-samples are of a predetermined bit-depth, the bit-depth of the samples being higher than the predetermined bit-depth of each of the first and second sub-samples.
  23. 23. The method according to claim 22 wherein the steps of applying at least one of an inverse transformation and an inverse quantization steps to the first sub-data and of applying at least one of an inverse transformation and an inverse quantization steps to the second sub-data are performed independently.
  24. 24. The method according to claim 22 further comprising a preliminary step of entropy decoding the received data.
  25. 25. The method according to claim 22 wherein the step of merging the first and second sub-samples comprises a step of obtaining a first set of bits from the first sub-sample and a step of obtaining a second set of bits from the second sub-sample, the first set of bits forming a set of most significant bits and the second set of bits forming a set of least significant bits, the sets of most significant bits and of least significant bits being used to form one sample.
  26. 26. The method according to claim 25 wherein the number of bits obtained from the first sub-sample is lower than the bit-depth of the first sub-sample.
  27. 27. The method according to claim 25 wherein the number of bits obtained from the second sub-sample is lower than the bit-depth of the second sub-sample.
  28. 28. The method according to any one of the claims 25 to 27 wherein the difference between the number of bits obtained from the first sub-sample and the bit-depth of the first sub-sample and the difference between the number of bits obtained from the second sub-sample and the bit-depth of the second sub-sample is equal to one and wherein the bit-depth of data of the image is equal to sixteen.
  29. 29. The method according to any one of the claims 25 to 28 wherein the first and second sets of bits each comprises eight bits.
  30. 30. The method according to claim 22 wherein the step of merging the first and second sub-samples comprises a step of obtaining a first set of bits from the first sub-sample, a step of obtaining a second set of bits from the second sub-sample, and a step of forming a set of least significant bits from bits of the first set of bits and from bits of the second set of bits, the set of least significant bits being used to form one sample.
  31. 31. The method according to claim 22 wherein the step of applying at least one of an inverse transformation and an inverse quantization steps to the first sub-data comprises an inverse discrete cosine or sine transform step or a transformation step of the transform skip type.
  32. 32. The method according to claim 22 wherein the step of applying at least one of an inverse transformation and an inverse quantization steps to the second sub-data comprises a transformation step of the transform skip type if the step of applying at least one of an inverse transformation and an inverse quantization steps to the first sub-data comprises a transformation step of the transform skip type.
  33. 33. The method according to claim 22 wherein the step of applying at least one of an inverse transformation and an inverse quantization steps to the first sub-data comprises a transformation step of a type different than the transform skip type if the step of applying at least one of an inverse transformation and an inverse quantization steps to the second sub-data comprises a transformation step of a type different than the transform skip type.
  34. 34. The method according to claim 22 wherein the step of applying at least one of an inverse transformation and an inverse quantization steps to the first sub-data comprises an inverse quantization step using a first quantization parameter and wherein the step of applying at least one of an inverse transformation and an inverse quantization steps to the second sub-data comprises an inverse quantization step using a second quantization parameter, the second quantization parameter being equal to the first quantization parameter plus a value depending on the bit-depth of data of the image and of the predetermined bit-depth of each of the at least first and second sub-samples.
  35. 35. The method according to claim 34 wherein the value is equal to six times the predetermined bit-depth of the second sub-samples.
  36. 36. The method according to claim 22 wherein the image portion is a prediction residual block.
  37. 37. The method according to any one of claims 22 to 36 wherein the encoded format conforms to the HEVC standard.
  38. 38. The method according to any one of claims 22 to 36 wherein the encoded format conforms to the HEVC RExt standard.
  39. 39. A computer program product for a programmable apparatus, the computer program product comprising instructions for carrying out each step of the method according to any one of claims 1 to 38 when the program is loaded and executed by a programmable apparatus.
  40. 40. A computer-readable storage medium storing instructions of a computer program for implementing the method according to any one of claims 1 to 38.
  41. 41. A device of processing an image portion in a process of encoding an image from which is obtained the image portion, the image portion comprising a plurality of samples of a bit-depth determined as a function of a bit-depth of data of the image, the device comprising at least one microprocessor configured for carrying out the steps of: splitting each of the samples of the image portion into at least a first sub-sample and a second sub-sample, each of the at least first and second sub-samples being of a predetermined bit-depth, the bit-depth of the samples being higher than the predetermined bit-depth of each of the at least first and second sub-samples; and applying at least one of a transformation and a quantization steps to the first sub-samples to obtain first sub-data; applying at least one of a transformation and a quantization steps to the second sub-samples to obtain second sub-data; and merging the first and the second sub-data to encode the image portion.
  42. 42. The device according to claim 41 wherein the at least one microprocessor is further configured so that the steps of applying at least one of a transformation and a quantization steps to the first sub-samples and of applying at least one of a transformation and a quantization steps to the second sub-samples are performed independently.
  43. 43. The device according to claim 41 wherein the at least one microprocessor is further configured for carrying out the step of entropy coding the merged first and second sub-data.
  44. 44. The device according to claim 41 wherein the at least one microprocessor is further configured so that the step of splitting each of the samples of the image portion into at least a first sub-sample and a second sub-sample comprises a step of extracting a set of most significant bits from each of the samples to form the first sub-samples and a step of extracting a set of least significant bits from each of the samples to form the second sub-samples.
  45. 45. The device according to claim 41 wherein the at least one microprocessor is further configured for carrying out the preliminary step of withdrawing a predetermined number of least significant bits in each of the samples to obtain truncated samples.
  46. 46. The device according to claim 45 wherein the at least one microprocessor is further configured so that the predetermined number of least significant bits to be withdrawn in each of the samples is determined so that the truncated samples comprise an even number of bits.
  47. 47. The device according to claim 45 wherein the at least one microprocessor is further configured so that the step of splitting each of the samples of the image portion into at least a first sub-sample and a second sub-sample comprises a step of extracting a set of most significant bits from each of the truncated samples to form the first sub-samples and a step of extracting a set of least significant bits from each of the truncated samples to form the second sub-samples.
  48. 48. The device according to any one of the claims 45 to 47 wherein the predetermined number of least significant bits is equal to one and wherein the bit-depth of data of the image is equal to sixteen.
  49. 49. The device according to any one of the claims 44 to 48 wherein the set of most significant bits and the set of least significant bits each comprises eight bits.
  50. 50. The device according to claim 41 wherein the at least one microprocessor is further configured so that the step of splitting each of the samples of the image portion into at least a first sub-sample and a second sub-sample comprises, for each of the samples, a step of extracting a set of least significant bits from the sample and a step of splitting the set of least significant bits so that each of the first and second sub-samples comprises bits of the set of least significant bits.
  51. 51. The device according to claim 41 wherein the at least one microprocessor is further configured so that the step of applying at least one of a transformation and a quantization steps to the first sub-samples comprises a discrete cosine or sine transform step or a transformation step of the transform skip type.
  52. 52. The device according to claim 41 wherein the at least one microprocessor is further configured so that the step of applying at least one of a transformation and a quantization steps to the second sub-samples comprises a transformation step of the transform skip type if the step of applying at least one of a transformation and a quantization steps to the first sub-samples comprises a transformation step of the transform skip type.
  53. 53. The device according to claim 41 wherein the at least one microprocessor is further configured so that the step of applying at least one of a transformation and a quantization steps to the first sub-samples comprises a transformation step of a type different than the transform skip type if the step of applying at least one of a transformation and a quantization steps to the second sub-samples comprises a transformation step of a type different than the transform skip type.
  54. 54. The device according to claim 41 wherein the at least one microprocessor is further configured so that each of the steps of applying at least one of a transformation and a quantization steps to the first sub-samples and of applying at least one of a transformation and a quantization steps to the second sub-samples comprises a step of partitioning a block of sub-samples into transformation units, the partitioning steps being performed independently of each other.
  55. 55. The device according to claim 41 wherein the at least one microprocessor is further configured so that each of the steps of applying at least one of a transformation and a quantization steps to the first sub-samples and of applying at least one of a transformation and a quantization steps to the second sub-samples comprises a step of partitioning a block of sub-samples into transformation units, one partitioning step depending on another partitioning step.
  56. 56. The device according to claim 41 wherein the at least one microprocessor is further configured so that each of the steps of applying at least one of a transformation and a quantization steps to the first sub-samples and of applying at least one of a transformation and a quantization steps to the second sub-samples comprises a step of partitioning a block of sub-samples into transformation units, the partitioning steps being identical.
  57. 57. The device according to claim 41 wherein the at least one microprocessor is further configured so that the step of applying at least one of a transformation and a quantization steps to the first sub-samples comprises a quantization step using a first quantization parameter and wherein the step of applying at least one of a transformation and a quantization steps to the second sub-samples comprises a quantization step using a second quantization parameter, the second quantization parameter being equal to the first quantization parameter plus a value depending on the bit-depth of data of the image and of the predetermined bit-depth of each of the at least first and second sub-samples.
  58. 58. The device according to claim 57 wherein the value is equal to six times the predetermined bit-depth of the second sub-samples.
  59. 59. The device according to claim 41 wherein the image portion is a prediction residual block.
  60. 60. The device according to any one of claims 41 to 59 wherein the encoded format conforms to the HEVC standard.
  61. 61. The device according to any one of claims 41 to 59 wherein the encoded format conforms to the HEVC RExt standard.
  62. 62. A device of processing received data in a process of decoding an image, the processed received data being representative of an image portion comprising a plurality of samples of a bit-depth determined as a function of a bit-depth of data of the image, the device comprising at least one microprocessor configured for carrying out the steps of: splitting received data into at least first sub-data and second sub-data; applying at least one of an inverse quantization and an inverse transformation steps to the first sub-data to obtain first sub-samples; applying at least one of an inverse quantization and an inverse transformation steps to the second sub-data to obtain second sub-samples; merging the first and the second sub-samples to create samples of the image portion, wherein each of the first and second sub-samples are of a predetermined bit-depth, the bit-depth of the samples being higher than the predetermined bit-depth of each of the first and second sub-samples.
  63. 63. The device according to claim 62 wherein the at least one microprocessor is further configured so that the steps of applying at least one of an inverse transformation and an inverse quantization steps to the first sub-data and of applying at least one of an inverse transformation and an inverse quantization steps to the second sub-data are performed independently.
  64. 64. The device according to claim 62 wherein the at least one microprocessor is further configured for carrying out the preliminary step of entropy decoding the received data.
  65. 65. The device according to claim 62 wherein the at least one microprocessor is further configured so that the step of merging the first and second sub-samples comprises a step of obtaining a first set of bits from the first sub-sample and a step of obtaining a second set of bits from the second sub-sample, the first set of bits forming a set of most significant bits and the second set of bits forming a set of least significant bits, the sets of most significant bits and of least significant bits being used to form one sample.
  66. 66. The device according to claim 65 wherein the number of bits obtained from the first sub-sample is lower than the bit-depth of the first sub-sample.
  67. 67. The device according to claim 65 wherein the number of bits obtained from the second sub-sample is lower than the bit-depth of the second sub-sample.
  68. 68. The device according to any one of the claims 65 to 67 wherein the difference between the number of bits obtained from the first sub-sample and the bit-depth of the first sub-sample and the difference between the number of bits obtained from the second sub-sample and the bit-depth of the second sub-sample is equal to one and wherein the bit-depth of data of the image is equal to sixteen.
  69. 69. The device according to any one of the claims 65 to 68 wherein the first and second sets of bits each comprises eight bits.
  70. 70. The device according to claim 62 wherein the at least one microprocessor is further configured so that the step of merging the first and second sub-samples comprises a step of obtaining a first set of bits from the first sub-sample, a step of obtaining a second set of bits from the second sub-sample, and a step of forming a set of least significant bits from bits of the first set of bits and from bits of the second set of bits, the set of least significant bits being used to form one sample.
  71. 71. The device according to claim 62 wherein the at least one microprocessor is further configured so that the step of applying at least one of an inverse transformation and an inverse quantization steps to the first sub-data comprises an inverse discrete cosine or sine transform step or a transformation step of the transform skip type.
  72. 72. The device according to claim 62 wherein the at least one microprocessor is further configured so that the step of applying at least one of an inverse transformation and an inverse quantization steps to the second sub-data comprises a transformation step of the transform skip type if the step of applying at least one of an inverse transformation and an inverse quantization steps to the first sub-data comprises a transformation step of the transform skip type.
  73. 73. The device according to claim 62 wherein the at least one microprocessor is further configured so that the step of applying at least one of an inverse transformation and an inverse quantization steps to the first sub-data comprises a transformation step of a type different than the transform skip type if the step of applying at least one of an inverse transformation and an inverse quantization steps to the second sub-data comprises a transformation step of a type different than the transform skip type.
  74. 74. The device according to claim 62 wherein the at least one microprocessor is further configured so that the step of applying at least one of an inverse transformation and an inverse quantization steps to the first sub-data comprises an inverse quantization step using a first quantization parameter and wherein the step of applying at least one of an inverse transformation and an inverse quantization steps to the second sub-data comprises an inverse quantization step using a second quantization parameter, the second quantization parameter being equal to the first quantization parameter plus a value depending on the bit-depth of data of the image and of the predetermined bit-depth of each of the at least first and second sub-samples.
  75. 75. The device according to claim 74 wherein the value is equal to six times the predetermined bit-depth of the second sub-samples.
  76. 76. The device according to claim 62 wherein the image portion is a prediction residual block.
  77. 77. The device according to any one of claims 62 to 76 wherein the encoded format conforms to the HEVC standard.
  78. 78. The device according to any one of claims 62 to 76 wherein the encoded format conforms to the HEVC RExt standard.
  79. 79. A video encoder comprising the device according to any one of the claims 41 to 61.
  80. 80. A video decoder comprising the device according to any one of the claims 62 to 78.
  81. 81. A method of processing an image portion in a process of encoding an image from which is obtained the image portion, the image portion comprising a plurality of samples of a bit-depth determined as a function of a bit-depth of data of the image, substantially as hereinbefore described with reference to, and as shown in Figure 5a.
  82. 82. A method of processing received data in a process of decoding an image, the processed received data being representative of an image portion comprising a plurality of samples of a bit-depth determined as a function of a bit-depth of data of the image, substantially as hereinbefore described with reference to, and as shown in Figure 5b.
  83. 83. A device for processing an image portion in a process of encoding an image from which is obtained the image portion, the image portion comprising a plurality of samples of a bit-depth determined as a function of a bit-depth of data of the image, substantially as hereinbefore described with reference to, and as shown in Figure 2.
  84. 84. A device for processing received data in a process of decoding an image, the processed received data being representative of an image portion comprising a plurality of samples of a bit-depth determined as a function of a bit-depth of data of the image, substantially as hereinbefore described with reference to, and as shown in Figure 3.
GB1312144.7A 2013-07-05 2013-07-05 Method, device, and computer program for processing high bit-depth content in video encoder and decoder Withdrawn GB2516022A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1312144.7A GB2516022A (en) 2013-07-05 2013-07-05 Method, device, and computer program for processing high bit-depth content in video encoder and decoder

Publications (2)

Publication Number Publication Date
GB201312144D0 GB201312144D0 (en) 2013-08-21
GB2516022A true GB2516022A (en) 2015-01-14

Family

ID=49033431

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1312144.7A Withdrawn GB2516022A (en) 2013-07-05 2013-07-05 Method, device, and computer program for processing high bit-depth content in video encoder and decoder

Country Status (1)

Country Link
GB (1) GB2516022A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070217704A1 (en) * 2006-02-08 2007-09-20 Zhifang Zeng Encoding method, encoding apparatus, decoding method, and decoding apparatus
US20100002943A1 (en) * 2008-07-02 2010-01-07 Samsung Electronics Co., Ltd. Method and apparatus for encoding and decoding image using image separation based on bit location
RU2420021C2 (en) * 2009-03-24 2011-05-27 Корпорация "САМСУНГ ЭЛЕКТРОНИКС Ко., Лтд." Method to compress images and video sequences
US20120057788A1 (en) * 2010-09-06 2012-03-08 Tokyo Metropolitan University Image processing apparatus and method
EP2541937A1 (en) * 2011-06-29 2013-01-02 Canon Kabushiki Kaisha Compression of high bit-depth images
WO2013072889A1 (en) * 2011-11-18 2013-05-23 Koninklijke Philips Electronics N.V. Encoding high quality (medical) images using standard lower quality (web) image formats

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018044441A1 (en) * 2016-08-29 2018-03-08 Apple Inc. Multidimensional quantization techniques for video coding/decoding systems
CN109644270A (en) * 2016-08-29 2019-04-16 苹果公司 Multidimensional quantification technique for encoding and decoding of video system
US11153594B2 (en) 2016-08-29 2021-10-19 Apple Inc. Multidimensional quantization techniques for video coding/decoding systems
US11539974B2 (en) 2016-08-29 2022-12-27 Apple Inc. Multidimensional quantization techniques for video coding/decoding systems

Also Published As

Publication number Publication date
GB201312144D0 (en) 2013-08-21

Similar Documents

Publication Publication Date Title
US10666938B2 (en) Deriving reference mode values and encoding and decoding information representing prediction modes
US20150010068A1 (en) Method, device, and computer program for pre-encoding and post-decoding high bit-depth content in video encoder and decoder
US9674531B2 (en) Data encoding and decoding
EP2868080B1 (en) Method and device for encoding or decoding an image
US10958938B2 (en) Data encoding and decoding
US11412235B2 (en) Color transform for video coding
CN114598890A (en) Method for encoding and decoding image and related device and system
KR102521034B1 (en) Video coding method and apparatus using palette mode
GB2516022A (en) Method, device, and computer program for processing high bit-depth content in video encoder and decoder
CN112020860B (en) Encoder, decoder and methods thereof for selective quantization parameter transmission
WO2015054816A1 (en) Encoder-side options for base color index map mode for video and image coding
US20240179304A1 (en) Systems and methods for signaling of downsampling filters for chroma from luma intra prediction mode
WO2024118114A1 (en) Systems and methods for signaling of downsampling filters for chroma from luma intra prediction mode
GB2521349A (en) Data encoding and decoding

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)