EP3854085A1 - Procédés et dispositifs de codage et de décodage d'un flux de données représentatif d'au moins une image - Google Patents
Procédés et dispositifs de codage et de décodage d'un flux de données représentatif d'au moins une image
Info
- Publication number
- EP3854085A1 (application EP19783583.8A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- current block
- coding mode
- pixel
- block
- decoded
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 88
- 238000003672 processing method Methods 0.000 claims abstract description 27
- 238000001914 filtration Methods 0.000 claims description 32
- 238000004590 computer program Methods 0.000 claims description 14
- 230000009849 deactivation Effects 0.000 claims description 4
- 238000012937 correction Methods 0.000 claims description 2
- 238000013139 quantization Methods 0.000 description 31
- 238000012805 post-processing Methods 0.000 description 19
- 238000012545 processing Methods 0.000 description 15
- 230000006870 function Effects 0.000 description 13
- 230000006835 compression Effects 0.000 description 9
- 238000007906 compression Methods 0.000 description 9
- 238000013459 approach Methods 0.000 description 5
- 230000000694 effects Effects 0.000 description 5
- 238000011002 quantification Methods 0.000 description 5
- 238000005457 optimization Methods 0.000 description 4
- 230000003044 adaptive effect Effects 0.000 description 3
- 230000005540 biological transmission Effects 0.000 description 3
- 238000005516 engineering process Methods 0.000 description 3
- 230000009466 transformation Effects 0.000 description 3
- 238000011282 treatment Methods 0.000 description 3
- 238000004891 communication Methods 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 230000008569 process Effects 0.000 description 2
- 230000003213 activating effect Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 238000002620 method output Methods 0.000 description 1
- 230000011664 signaling Effects 0.000 description 1
- 239000013598 vector Substances 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/11—Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/117—Filters, e.g. for pre-processing or post-processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N19/86—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
Definitions
- the field of the invention is that of coding and decoding of images or sequences of images, and in particular of video streams.
- the invention relates to the compression of images or sequences of images using a block representation of the images.
- the invention can in particular be applied to image or video coding implemented in current or future coders (JPEG, MPEG, H.264, HEVC, etc. and their amendments), and to the corresponding decoding.
- JPEG Joint Photographic Experts Group
- MPEG Moving Picture Experts Group
- H.264 Advanced Video Coding
- HEVC High Efficiency Video Coding
- Digital images and image sequences occupy a lot of memory space, which means that when transmitting these images, they must be compressed to avoid congestion problems on the network used for this transmission.
- the HEVC compression standard ("High Efficiency Video Coding, Coding Tools and Specification", Matthias Wien, Signals and Communication Technology) proposes to implement a pixel prediction of a current image compared to other pixels belonging to the same image (intra prediction) or to a previous or next image (inter prediction).
- intra prediction exploits spatial redundancies within an image.
- the images are cut into blocks of pixels.
- the pixel blocks are then predicted using information already reconstructed, corresponding to the blocks previously coded / decoded in the current image according to the order of traversal of the blocks in the image.
- the coding of a current block is carried out using a prediction of the current block, known as the predictor block, and of a prediction residue or "residual block", corresponding to a difference between the current block and the predictor block.
- the residual block obtained is then transformed, for example by using a transform of the DCT type (Discrete Cosine Transform).
- the coefficients of the transformed residual block are then quantized, then coded by an entropy coding and transmitted to the decoder, which can reconstruct the current block by adding this residual block to the predictor block.
- Decoding is done image by image, and for each image, block by block. For each block, the corresponding elements of the stream are read. The inverse quantization and the inverse transformation of the coefficients of the residual block are carried out. Then the prediction of the block is calculated to obtain the predictor block, and the current block is reconstructed by adding the prediction (i.e. the predictor block) to the decoded residual block.
- a DPCM (Differential Pulse Code Modulation) coding technique for coding blocks in Intra mode is inserted in an HEVC coder.
- One such technique consists in predicting a set of pixels of an intra block by another set of pixels of the same block which have been previously reconstructed.
- a set of pixels of the intra block to be coded corresponds to a row of the block, or a column, or a row and a column, and the intra prediction used to predict the set of pixels is one of the intra directional predictions defined in the HEVC standard.
- the reconstruction of a set of pixels of the intra block corresponds either to the addition of a prediction residue in the case of lossless coding, therefore offering a fairly low compression rate, or to the addition of a prediction residue after inverse transformation and/or inverse quantization of said other set of pixels serving as prediction.
- Such a technique therefore does not make it possible to predict each pixel of the intra block using a local prediction function and to reconstruct the predicted pixel before predicting a next pixel.
- this technique requires reconstructing a set of pixels (a row/column of the block for example) in order to predict another set of pixels. In other words, each time a part of the block is predicted and reconstructed, several pixels of the block are predicted and reconstructed.
- the invention improves the state of the art. To this end, it relates to a method for decoding a coded data stream representative of at least one image divided into blocks.
- Such a decoding method comprises, for at least one block of the image, called the current block:
- the application of processing operations to a reconstructed block is not carried out in the case of a block decoded according to a coding mode using a prediction of the pixels from previously reconstructed pixels of the same block.
- in this coding mode, the prediction residue associated with each pixel is not transformed.
- the processing methods aim to improve the quality of the blocks of reconstructed pixels, for example by reducing the effects of discontinuities between blocks due to the coding of the prediction residue with a transform ("deblocking" filter), or by correcting the value of each pixel (a method also known as SAO, for Sample Adaptive Offset).
- the second coding mode does not use a transform of the prediction residue, since the prediction residue associated with each pixel must be available immediately to reconstruct the pixel, so that it can be used to predict the following pixels of the current block.
- the value of each pixel is coded individually using a prediction residue associated with each pixel. It is therefore not necessary to correct the value of each pixel.
- the processing methods applied to the reconstructed blocks generally require the transmission of parameters at the block level. Disabling these processing methods for blocks coded according to the second coding mode thus allows a bit rate gain. In addition, the decoding process can be substantially accelerated since these processing methods are not applied for these blocks.
- the invention also relates to a method for coding a stream of coded data representative of at least one image divided into blocks.
- a coding method comprises, for at least one block of the image, called the current block: the coding of information indicating a coding mode of the current block among at least a first coding mode and a second coding mode, the second coding mode being a coding mode according to which the current block is coded via, for each pixel of the current block:
- the processing method is a deblocking filtering applied to the pixels of the reconstructed current block which are located at the border of the reconstructed current block with a neighboring block reconstructed in the image.
- the processing method corresponds to a "deblocking" filter conventionally applied at block boundaries to reduce the effects of discontinuities between blocks.
- the deblocking filtering is applied to a pixel of the reconstructed current block if said pixel is located on a border of said reconstructed current block with a neighboring reconstructed block in the image and if said neighboring block is decoded or coded according to a coding mode distinct from the second coding mode.
- the deblocking filtering is applied only to the pixels on the border of two blocks which are both coded or decoded according to coding modes distinct from the second coding mode.
- the deblocking filtering is deactivated for the pixels of the reconstructed current block which are located on the border with a neighboring block coded or decoded according to the second coding mode.
- the application of the deblocking filtering to the reconstructed current block is deactivated for a pixel of the reconstructed current block if said pixel is located on a border of said reconstructed current block with a neighboring block in the image and if said neighboring block is decoded or coded according to the second coding mode, and
- the deblocking filtering is applied to a pixel of the reconstructed current block, if said pixel is located on a border of said reconstructed current block with a neighboring block reconstructed in the image and if said neighboring block is decoded or coded according to a coding mode distinct from the second coding mode.
- the deblocking filtering is applied to the pixels situated at the border of two blocks of which at least one of the blocks is coded or decoded according to a coding mode distinct from the second coding mode.
- the deblocking filtering is deactivated for the pixels situated at the border of two blocks which are both coded or decoded according to the second coding mode.
- This particular embodiment of the invention makes it possible to smooth the block effects for the blocks coded or decoded according to the first coding mode or any other coding mode distinct from the second coding mode, even when these are close to a reconstructed block which has been coded or decoded according to the second coding mode.
- the processing method is a method of correcting at least one pixel of the reconstructed current block by adding to the reconstructed value of said pixel a value obtained from information encoded in the data stream or decoded from the data stream.
- the processing method corresponds to the so-called SAO method which has been integrated into the HEVC compression standard.
- the application of said correction method to the reconstructed current block is deactivated for all the pixels of the reconstructed current block.
- the invention also relates to a decoding device configured to implement the decoding method according to any one of the particular embodiments defined above.
- This decoding device could of course include the various characteristics relating to the decoding method according to the invention.
- the characteristics and advantages of this decoding device are the same as those of the decoding method, and are not described in more detail.
- the decoding device notably comprises a processor configured for, for at least one block of the image, called the current block:
- decoding information indicating a coding mode of the current block from at least a first coding mode and a second coding mode, the second coding mode being a coding mode according to which the current block is decoded via, for each pixel of the current block:
- such a decoding device is included in a terminal.
- the invention also relates to an encoding device configured to implement the encoding method according to any one of the particular embodiments defined above.
- This coding device could of course include the various characteristics relating to the coding method according to the invention. Thus, the characteristics and advantages of this coding device are the same as those of the coding method, and are not described in more detail.
- the coding device notably comprises a processor configured for, for at least one block of the image, known as the current block:
- coding information indicating a coding mode of the current block among at least a first coding mode and a second coding mode, the second coding mode being a coding mode according to which the current block is coded via, for each pixel of the current block:
- such a coding device is included in a terminal, or a server.
- the decoding method, respectively the coding method, according to the invention can be implemented in various ways, in particular in hardware form or in software form.
- the decoding method, respectively the coding method is implemented by a computer program.
- the invention also relates to a computer program comprising instructions for implementing the decoding method or the coding method according to any one of the particular embodiments described above, when said program is executed by a processor.
- Such a program can use any programming language. It can be downloaded from a communication network and / or saved on a computer-readable medium.
- This program can use any programming language, and be in the form of source code, object code, or intermediate code between source code and object code, such as in a partially compiled form, or in any other desirable form.
- the invention also relates to a recording medium or information medium readable by a computer, and comprising instructions of a computer program as mentioned above.
- the recording media mentioned above can be any entity or device capable of storing the program.
- the support may include a storage means such as a memory.
- the recording media can correspond to a transmissible medium such as an electrical or optical signal, which can be routed via an electrical or optical cable, by radio or by other means.
- the program according to the invention can in particular be downloaded from a network of the Internet type.
- the recording media can correspond to an integrated circuit in which the program is incorporated, the circuit being adapted to execute or to be used in the execution of the process in question.
- FIG. 1 presents steps of the coding method according to a particular embodiment of the invention
- FIG. 2 illustrates an example of the position of the neighboring blocks of a current block for determining an intra prediction mode according to a particular embodiment of the invention
- FIG. 3 illustrates an example of the position of the reference pixels used to predict pixels of a current block according to a particular embodiment of the invention
- FIG. 4 presents steps of the decoding method according to a particular embodiment of the invention
- FIGS. 5A and 5B illustrate blocks of reconstructed pixels on which post-processing is applied or not to the pixels according to the coding mode of the block to which the pixels belong, according to particular embodiments of the invention
- FIG. 6 shows the simplified structure of a coding device suitable for implementing the coding method according to any one of the particular embodiments of the invention
- Figure 7 shows the simplified structure of a decoding device adapted to implement the decoding method according to any one of the particular embodiments of the invention.
- Processing performed after decoding an image is integrated into video coding standards in order to improve the quality of the reconstructed images.
- post-processing can be the application of a so-called deblocking filter, or else a post-processing called SAO (for Sample Adaptive Offset).
- the deblocking filtering makes it possible to erase, after decoding of each block, the discontinuities which exist between blocks and to which the human eye is very sensitive.
- the SAO processing makes it possible to individually modify the value of each pixel of a decoded block.
- ILR coding (ILR for In-Loop Residual)
- SAO processing aims to correct the individual value of certain pixels after conventional coding.
- the ILR coding mode, which will be described below, already makes it possible to code the value of each pixel individually. So there is no need for SAO processing for these pixels.
- the general principle of the invention is therefore to activate or not the application of a post-processing method to a reconstructed block, depending on whether or not the block has been coded/decoded according to the ILR coding mode.
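- as a purely illustrative sketch of this principle (all names and data structures below are assumptions, not taken from the patent), the post-processing pass can be gated as follows:

```python
# Hypothetical sketch of the general principle: the post-processing pass
# (deblocking filter, SAO) is applied block by block but skipped for blocks
# reconstructed with the ILR coding mode. All names are illustrative.

MODE_CLASSIC = 1  # first coding mode M1 (transform-based prediction residue)
MODE_ILR = 2      # second coding mode M2 (In-Loop Residual)

def post_process_image(reconstructed_blocks, coding_modes, deblock, sao):
    """Second loop over the reconstructed blocks of an image."""
    for block, mode in zip(reconstructed_blocks, coding_modes):
        if mode == MODE_ILR:
            # pixels of ILR blocks are coded individually: no correction applied
            continue
        deblock(block)  # smooth block-boundary discontinuities
        sao(block)      # per-pixel offset correction
```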
- FIG. 1 presents steps of the coding method according to a particular embodiment of the invention.
- the coding of a sequence of images I1, I2, ..., INb in the form of a coded data stream STR according to a particular embodiment of the invention is implemented by a coding device as described below with reference to FIG. 6.
- a sequence of images I1, I2, ..., INb, Nb being the number of images of the sequence to be coded, is supplied at the input of the coding method.
- the coding method outputs a stream of STR coded data representative of the sequence of images supplied as input.
- the coding of the sequence of images I1, I2, ..., INb is done image by image, according to a coding order previously established and known to the coder.
- the images can be coded in temporal order I1, I2, ..., INb or in another order, for example I1, I3, I2, ..., INb.
- an image I to be coded of the sequence of images I1, I2, ..., INb is cut into blocks, for example into blocks of size 32×32 or 64×64 pixels or more.
- such a block can be subdivided into square or rectangular sub-blocks, for example of size 16×16, 8×8, 4×4, 16×8, 8×16, etc.
- a first block or sub-block X b to be coded of the image I is selected according to a predetermined scanning order of the image I. For example, it can be the first block in the lexicographic order of the image.
- the encoder will choose the coding mode for coding the current block X b .
- the encoder selects the coding mode for coding the current block X b from a first coding mode M1 and a second coding mode M2. Additional coding modes (not described here) can be used.
- the first coding mode M1 corresponds to the coding of the current block by intra classical prediction, for example as defined according to the HEVC standard and the second coding mode M2 corresponds to the coding by In Loop Residual prediction (ILR).
- the principle of the invention can be extended to other types of coding modes for the first coding mode M1.
- the first coding mode can correspond to any type of coding mode using a transformation of the prediction residue (coding by inter-image prediction, coding by spatial prediction with template matching, etc.).
- the coder can perform a bit rate / distortion optimization to determine the best coding mode for coding the current block.
- additional coding modes distinct from the first and second coding mode can be tested, for example a coding mode in inter mode.
- the coder simulates the coding of the current block X b according to the different coding modes available in order to determine the bit rate and the distortion associated with each coding mode and selects the coding mode offering the best compromise.
- the rate/distortion compromise is for example evaluated according to the function D + λ.R, where R represents the rate necessary to code the current block according to the evaluated coding mode, D the distortion measured between the decoded block and the original current block, and λ a Lagrangian multiplier, for example entered by the user or defined at the coder.
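- as an illustration, such a rate-distortion mode selection can be sketched as follows (simulate_coding and the candidate-mode list are hypothetical helpers standing in for the coder's internal routines):

```python
# Illustrative rate-distortion mode selection. simulate_coding(block, mode) is a
# hypothetical helper returning (rate in bits, distortion) for a candidate mode;
# lam is the Lagrangian multiplier of the cost D + lam * R.

def choose_coding_mode(block, candidate_modes, lam, simulate_coding):
    best_mode, best_cost = None, float("inf")
    for mode in candidate_modes:              # e.g. [MODE_CLASSIC, MODE_ILR, ...]
        rate, distortion = simulate_coding(block, mode)
        cost = distortion + lam * rate        # Lagrangian rate-distortion cost
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode
```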
- step E20 information indicating the coding mode selected for the current block is coded in the data stream STR.
- if the current block X b is coded according to the first coding mode M1, the method goes to step E21 of coding the block according to M1. If the current block X b is coded according to the second coding mode M2, the method goes to step E22 of coding the block according to M2.
- the first coding mode corresponds to an intra classical prediction, such as that defined in the HEVC standard.
- a quantization step d 1 is determined.
- the quantization step d 1 can be set by the user, or calculated using a quantization parameter setting a compromise between compression and quality and entered by the user or defined by the coder.
- a quantization parameter can be the parameter λ, used in the rate-distortion cost function D + λ.R, where D represents the distortion introduced by the coding and R the bit rate used to code. This function is used to make coding choices. Conventionally, the way of coding the image which minimizes this function is sought.
- the quantization parameter can be QP, corresponding to the quantization parameter conventionally used in AVC or HEVC standards.
- a prediction of the current block is determined using an intra-classical prediction mode. According to this intra classical prediction, each predicted pixel is calculated only from decoded pixels from neighboring blocks (reference pixels) located above the current block, and to the left of the current block. The way in which the pixels are predicted from the reference pixels depends on a prediction mode which is transmitted to the decoder, and which is chosen by the coder from a predetermined set of modes known to the coder and the decoder.
- in HEVC, there are 35 possible prediction modes: 33 modes which interpolate the reference pixels in 33 different angular directions, and 2 other modes: the DC mode, in which each pixel of the predicted block is produced from the average of the reference pixels, and the PLANAR mode, which performs plane and non-directional interpolation.
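- as an illustration of one of these modes, a simplified DC prediction (omitting the boundary smoothing that HEVC additionally applies) can be sketched as follows:

```python
import numpy as np

def predict_dc(ref_top, ref_left, block_size):
    """Simplified DC prediction: every pixel of the predicted block takes the
    average of the reference pixels above and to the left of the current block
    (the additional boundary smoothing of HEVC is omitted here)."""
    dc = int(round((np.sum(ref_top[:block_size]) + np.sum(ref_left[:block_size]))
                   / (2 * block_size)))
    return np.full((block_size, block_size), dc, dtype=np.int32)
```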
- This so-called "intra classical prediction" approach is well known and is also used in the ITU-T H.264 standard (where there are only 9 different modes) as well as in the experimental JEM software available at the internet address https://jvet.hhi.fraunhofer.de/, where there are 67 different prediction modes.
- the intra classical prediction respects the two aspects mentioned above (pixel prediction from neighboring blocks and transmission to the decoder of an optimal prediction mode).
- the coder therefore chooses one of the prediction modes available from the predetermined list of prediction modes.
- One way of choosing is, for example, to evaluate all the prediction modes and to keep the prediction mode which minimizes a cost function such as, conventionally, the bit rate-distortion cost.
- the prediction mode chosen for the current block is coded from the neighboring blocks of the current block.
- FIG. 2 illustrates an example of the position of the neighboring blocks A b and B b of the current block X b for coding the prediction mode of the current block X b .
- the intra prediction mode chosen for the current block is coded using the intra prediction modes associated with the neighboring blocks.
- such an approach consists in identifying the intra prediction mode m A associated with the block A b located above the current block, and the intra prediction mode m B associated with the block B b located just to the left of the current block.
- a list of most probable modes, called the MPM list (MPM for Most Probable Mode), is built from the modes m A and m B , together with a non-MPM list containing the 32 other prediction modes.
- syntax elements are transmitted: a binary indicator signalling whether or not the prediction mode of the current block belongs to the MPM list;
- if it does not, an index in the non-MPM list corresponding to the prediction mode of the current block is coded.
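- the following sketch illustrates this MPM-style signalling under simplifying assumptions (a 3-entry MPM list and hypothetical write_flag / write_index bitstream helpers); it does not reproduce the exact derivation rules of HEVC:

```python
# Simplified MPM-style signalling of the intra prediction mode. The 3-entry MPM
# list construction and the write_flag / write_index bitstream helpers are
# assumptions; HEVC's exact derivation rules are not reproduced.

def build_mpm_list(m_a, m_b):
    mpm = []
    for m in (m_a, m_b):
        if m is not None and m not in mpm:
            mpm.append(m)
    filler = 0                      # pad with arbitrary distinct modes
    while len(mpm) < 3:
        if filler not in mpm:
            mpm.append(filler)
        filler += 1
    return mpm

def encode_intra_mode(mode, m_a, m_b, write_flag, write_index):
    mpm = build_mpm_list(m_a, m_b)
    if mode in mpm:
        write_flag(1)                         # binary indicator: mode is in the MPM list
        write_index(mpm.index(mode), 3)       # index within the MPM list
    else:
        write_flag(0)
        non_mpm = [m for m in range(35) if m not in mpm]
        write_index(non_mpm.index(mode), 32)  # index in the non-MPM list (32 modes)
```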
- the prediction residue R for the current block is constructed.
- a predicted block P is constructed as a function of the prediction mode chosen in step E21 1. Then the prediction residue R is obtained by calculating the difference for each pixel, between the predicted block P and the original current block.
- the prediction residue R is transformed into R T.
- a frequency transform is applied to the block of residue R so as to produce the block R T comprising transformed coefficients.
- the transform could be a DCT type transform for example. It is possible to choose the transform to be used in a predetermined set of transforms E T and to signal the transform used to the decoder.
- the transformed residue block R T is quantized, for example using a scalar quantization of quantization step d 1 . This produces the quantized transformed prediction residue block R TQ .
- the coefficients of the quantized block R TQ are coded by an entropy coder.
- an entropy coder One can for example use the entropy coding specified in the HEVC standard.
- the current block is decoded by de-quantizing the coefficients of the quantized block R TQ , then by applying the inverse transform to the de-quantized coefficients to obtain the decoded prediction residue.
- the prediction is then added to the decoded prediction residue in order to reconstruct the current block and obtain its decoded version.
- the decoded version of the current block can then be used later to spatially predict other neighboring blocks of the image or else to predict blocks of other images by inter-image prediction.
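- the chain prediction residue, transform, quantization and reconstruction used by this first coding mode can be sketched as follows (a simplified illustration for a square block, with an orthonormal DCT and a single scalar quantization step):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def code_block_classic(block, predicted, step):
    """Residue -> 2-D DCT -> scalar quantization -> reconstruction (square block assumed)."""
    residue = block.astype(np.float64) - predicted            # prediction residue R
    d = dct_matrix(block.shape[0])
    coeffs = d @ residue @ d.T                                # transformed residue R_T
    quantized = np.round(coeffs / step).astype(np.int32)      # quantized block R_TQ
    decoded_residue = d.T @ (quantized * step) @ d            # de-quantize + inverse DCT
    reconstructed = predicted + decoded_residue               # decoded version of the block
    return quantized, reconstructed
```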
- step E22 of coding the block according to the second coding mode M2 is described below, according to a particular embodiment of the invention.
- the second coding mode corresponds to coding by ILR prediction.
- a local predictor PL for the current block is determined.
- the pixels of the current block are predicted by pixels previously reconstructed from a neighboring block of the current block or of the current block itself.
- the first coding mode uses a first group of intra prediction modes, for example the intra prediction modes defined by the HEVC standard, and the second coding mode, here the ILR mode, uses a second group of prediction modes distinct from the first group of intra prediction modes.
- the local predictor PL can be unique or it can be selected from a set of predetermined local predictors (second group of prediction modes).
- 4 local predictors are defined.
- X denotes a current pixel to be predicted in the current block, A the pixel located immediately to the left of X, B the pixel located immediately above and to the left of X, and C the pixel located immediately above X, as illustrated in FIG. 3 showing a current block X b .
- 4 local predictors PL1, PL2, PL3, PL4 can be defined as follows:
- min (A, B) corresponds to the function returning the smallest value between the value of A and the value of B and max (A, B) corresponds to the function returning the largest value between the value of A and the value of B.
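- since the exact formulas of PL1 to PL4 are not reproduced here, the following functions are only hypothetical examples of pixel-wise local predictors built from the neighbors A, B and C with the min/max operations mentioned above:

```python
# The exact formulas of the local predictors PL1..PL4 are not reproduced in this
# text; the functions below are only hypothetical examples of pixel-wise predictors
# built from the neighbours A (left), B (above-left) and C (above), using the
# min/max operations mentioned above.

def pl1(a, b, c):
    return a                      # copy of the left neighbour

def pl2(a, b, c):
    return c                      # copy of the top neighbour

def pl3(a, b, c):
    return (a + c) // 2           # average of left and top neighbours

def pl4(a, b, c):
    # gradient-style (MED-like) predictor using min/max
    if b >= max(a, c):
        return min(a, c)
    if b <= min(a, c):
        return max(a, c)
    return a + c - b
```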
- step E220 it is determined which local predictor PL to use for the current block.
- the same local predictor will be used for all the pixels of the current block, i.e. the same prediction function.
- the coding of the current block with each of the predictors can be simulated (similar to an optimization for choosing a coding mode for the current block), and the local predictor which optimizes a cost function (for example, which minimizes the function D + λ.R, where R is the bit rate used to code the block, D is the distortion of the decoded block compared to the original block, and λ is a parameter set by the user) is selected.
- an orientation of the texture of the previously coded pixels is analyzed. For example, the pixels previously coded in the block which are located above or to the left of the current block are analyzed using a Sobel operator. If it is determined that:
- the local predictor PL2 is selected
- the local predictor PL3 is selected
- the local predictor PL4 is selected
- the local predictor PL1 is selected.
- a syntax element is coded in the STR data stream to indicate to the decoder which local predictor was used to predict the current block.
- a quantization step d 2 is determined.
- the quantization step d 2 depends on the same quantization parameter as the quantization step d 1 which would be determined in step E210 if the current block was coded according to the first coding mode.
- a prediction residue R1 is calculated for the current block. To do this, once the local predictor has been chosen, for each current pixel of the current block:
- the current pixel X of the current block is predicted by the selected local predictor PL, using either pixels outside the block and already reconstructed (and therefore available with their decoded value), or pixels previously reconstructed in the current block, or both, in order to obtain a predicted value PRED.
- the predictor PL uses previously reconstructed pixels.
- FIG. 3 it can be seen that the pixels of the current block situated on the first line and / or the first column of the current block will use as reference pixels (to construct the predicted value PRED) pixels external to the block and already reconstructed (pixels in gray in FIG. 3) and possibly already reconstructed pixels of the current block.
- for the other pixels of the current block, the reference pixels used to construct the predicted value PRED are located inside the current block;
- Q (X) is the quantized residue associated with X. It is calculated in the spatial domain, ie calculated directly from the difference between the predicted PRED value of the pixel X and the original value of X. Such a quantized residue Q (X ) for the pixel X is stored in a quantized prediction residue block R1 Q , which will be coded later;
- the decoded predicted value P1 (X) of X is calculated by adding to the predicted value PRED the de-quantized value of the quantized residue Q (X).
- the scalar de-quantization is for example of the form ScalarDequant(d, x) = d × x, where d is the quantization step.
- the decoded predicted value P1 (X) thus makes it possible to predict possible pixels which remain to be processed in the current block.
- the block P1 comprising the decoded / reconstructed values of the pixels of the current block constitutes the predictor ILR of the current block (as opposed to the intra-classical predictor).
- the sub-steps described above are performed for all the pixels of the current block, in a traversing order which ensures that the pixels used for the prediction chosen from PL1, ..., PL4 are available.
- the order of traversal of the current block is the lexicographic order, i.e. from left to right, and from top to bottom.
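- this pixel-by-pixel ILR coding loop can be sketched as follows (the predictor and get_reference helpers are assumptions used to keep the example self-contained):

```python
import numpy as np

def code_block_ilr(block, predictor, step, get_reference):
    """Pixel-by-pixel ILR coding in lexicographic order. predictor(a, b, c) is the
    selected local predictor PL; get_reference(rec, y, x) returns the neighbours
    A, B, C of pixel (y, x), taken from already reconstructed pixels of the block
    or from neighbouring blocks. Both helpers are assumptions of this sketch."""
    h, w = block.shape
    reconstructed = np.zeros((h, w), dtype=np.float64)
    quantized_residue = np.zeros((h, w), dtype=np.int64)     # block R1_Q, coded afterwards
    for y in range(h):
        for x in range(w):
            a, b, c = get_reference(reconstructed, y, x)
            pred = predictor(a, b, c)                        # predicted value PRED
            q = int(round((block[y, x] - pred) / step))      # spatial-domain residue Q(X)
            quantized_residue[y, x] = q
            reconstructed[y, x] = pred + q * step            # decoded predicted value P1(X)
    return quantized_residue, reconstructed
```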
- step E222 the quantized residue block R1 Q has been determined. This quantized residue block R1 Q must be coded to be transmitted to the decoder. The predictor P1 of the current block was also determined.
- the quantized residue block R1 Q is coded in order to transmit it to the decoder. It is possible to use any known approach, such as the method described in HEVC to code the quantized coefficients of a classical prediction residue.
- the values of the quantized residue block R1 Q are coded in the data stream STR using an entropy coder.
- according to a particular embodiment, an additional prediction residue R2 is determined and coded from the ILR predictor obtained for the current block.
- the coding of an additional prediction residue R2 is however optional. It is indeed possible to simply code the current block by its predicted version P1 and the quantized residue R1 Q.
- the following steps correspond to the conventional steps of coding this residue R2.
- step E225 the residue R2 is transformed using a frequency transform so as to produce the block of coefficients R2 T.
- the transform can be a DCT type transform for example. It is possible to choose the transform to be used in a predetermined set of transforms E T2 and to signal the transform used to the decoder. In this case, the set E T2 can be different from the set E T , in order to adapt to the particular statistics of the residue R2.
- the block of coefficients R2 T is quantized, for example using a scalar quantization of quantization step d. This produces the R2 TQ block.
- the quantization step d can be set by the user. It can also be calculated using another parameter λ fixing the compromise between compression and quality and entered by the user or the encoder. For example, the quantization step d may correspond to the quantization step d 1 or be determined in a similar manner to it.
- the coefficients of the quantized block R2 TQ are then transmitted in a coded manner.
- the coding specified in the HEVC standard can be used.
- the current block is decoded by de-quantizing the coefficients of the quantized block R2 TQ , then by applying the inverse transform to the de-quantized coefficients to obtain the decoded prediction residue.
- the prediction P1 is then added to the decoded prediction residue in order to reconstruct the current block and to obtain its decoded version X rec .
- the decoded version X rec of the current block can then be used later to spatially predict other neighboring blocks of the image or else to predict blocks of other images by inter-image prediction.
- in step E23, it is checked whether the current block is the last block of the image to be processed by the coding method, taking into account the scanning order defined above. If the current block is not the last block of the image to be processed, during a step E24, the next block of the image to be processed is selected according to the scanning order of the image defined above and the coding method goes to step E2, where the selected block becomes the current block to be processed.
- the method proceeds to the application of post-processing methods to be applied to the reconstructed image during a step E231.
- these post-processing methods can be a deblocking filtering and/or an SAO method.
- the application of the post-processing operations being carried out in a similar manner to the coder and the decoder, step E231 will be described later.
- FIG. 4 presents steps of the method of decoding a stream STR of coded data representative of a sequence of images I1, I2, ..., INb to be decoded according to a particular embodiment of the invention.
- the STR data stream was generated via the coding method presented in relation to FIG. 1.
- the STR data stream is supplied at the input of a DEC decoding device, as described in relation to FIG. 7 .
- the decoding method decodes the image-by-image stream and each image is decoded block by block.
- an image I to be decoded is subdivided into blocks.
- Each block will undergo a decoding operation consisting of a series of steps which are detailed below.
- the blocks can be the same size or different sizes.
- a first block or sub-block X b to be decoded from the image I is selected as the current block according to a predetermined scanning order of the image I. For example, it can be the first block in the lexicographic order of the image.
- step E42 information indicating an encoding mode for the current block is read from the data stream STR.
- this information indicates whether the current block is coded according to a first coding mode M1 or according to a second coding mode M2.
- the first coding mode M1 corresponds to the coding of the current block by intra classical prediction, for example as defined according to the HEVC standard
- the second coding mode M2 corresponds to the coding by In-Loop Residual (ILR) prediction.
- the information read from the stream STR can also indicate the use of other coding modes for coding the current block (not described here).
- step E43 of decoding the current block is described when the current block is coded according to the first coding mode M1.
- a quantization step d 1 is determined.
- the quantization step d 1 is determined from the quantization parameter QP read during step E401, or in a similar manner to what was done at the coder.
- the quantization step d 1 can be calculated using the quantization parameter QP read during step E401.
- the quantization parameter QP can be the quantization parameter conventionally used in AVC or HEVC standards.
- the prediction mode used to code the current block is decoded from the neighboring blocks. For this, like what was done at the coder, the intra prediction mode chosen for the current block is decoded, using the intra prediction modes associated with the neighboring blocks of the current block.
- the binary indicator and the prediction mode index are therefore read for the current block from the STR data stream, to decode the intra prediction mode of the current block.
- the decoder constructs a predicted block P for the current block from the decoded prediction mode.
- the decoder decodes the coefficients of the quantized block R TQ from the data stream STR, for example using the decoding specified in the HEVC standard.
- the decoded block R TQ is de-quantized, for example using a scalar de-quantization of quantization step d 1 . This produces the block of de-quantized coefficients R TQD .
- an inverse frequency transform is applied to the block of de-quantized coefficients R TQD so as to produce the block of decoded prediction residue R TQDI .
- the transform could be a reverse DCT type transform for example. It is possible to choose the transform to be used in a predetermined set of transforms by decoding an indicator from the data stream STR.
- step E44 describes the decoding of the current block when the current block is coded according to the second coding mode M2.
- the local predictor PL used to predict the pixels of the current block is determined. If only one predictor is available, the local predictor is for example defined by default at the decoder level and no syntax element needs to be read in the STR stream to determine it.
- a syntax element is decoded from the data stream STR to identify which local predictor was used to predict the current block.
- the local predictor is therefore determined from this decoded syntax element.
- the quantization step d 2 is determined, in a similar manner to what has been done at the coder.
- the quantized residue R1 Q is decoded from the data stream STR. It is possible to use any known approach, such as the method described in HEVC to decode the quantized coefficients of the classical prediction residue.
- the quantized residue block R1 Q is de-quantized using the quantization step d 2 , so as to produce the de-quantized residue block R1 QD .
- step E444 when the de-quantized residue block R1 QD is obtained, the predicted block P1 is constructed using the local predictor PL determined during step E440.
- each pixel of the current block is predicted and reconstructed as follows:
- the current pixel X of the current block is predicted by the predictor PL selected, using either the pixels outside the block and already decoded, or previously reconstructed pixels of the current block, or both, in order to obtain a predicted value PRED.
- the predictor PL uses previously decoded pixels;
- the scanning order is the lexicographic order (from left to right, then the rows from top to bottom).
- the predicted block P1 comprising the decoded predicted values P1 (X) of each pixel of the current block here constitutes the decoded current block X rec .
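- the corresponding decoder-side loop, when no additional prediction residue is coded, can be sketched as follows (with the same hypothetical helpers as on the coder side):

```python
import numpy as np

def decode_block_ilr(quantized_residue, predictor, step, get_reference):
    """Decoder counterpart of the ILR mode: de-quantize each residue and reconstruct
    the pixels in the same lexicographic order as the coder. Helper names are the
    same illustrative assumptions as on the coder side."""
    h, w = quantized_residue.shape
    reconstructed = np.zeros((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            a, b, c = get_reference(reconstructed, y, x)
            pred = predictor(a, b, c)                                    # predicted value PRED
            reconstructed[y, x] = pred + quantized_residue[y, x] * step  # P1(X)
    return reconstructed  # equals X_rec when no additional residue R2 is coded
```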
- an additional prediction residue has been coded for the current block. It is therefore necessary to decode this additional prediction residue in order to reconstruct the decoded version of the current block X rec .
- this other particular embodiment can be activated or not by default at the level of the coder and the decoder.
- an indicator can be encoded in the data stream with the block level information to indicate for each block encoded according to the ILR encoding mode whether an additional prediction residue is encoded.
- an indicator can be coded in the data stream with the image level or image sequence information to indicate for all the blocks of the image or of the image sequence coded according to the ILR coding mode if a additional prediction residue is coded.
- the coefficients of the quantized prediction residue R2 TQ are decoded from the data stream STR, using means adapted to those implemented at the coder, for example the means implemented in an HEVC decoder.
- the block of quantized coefficients R2 TQ is de-quantized, for example using a scalar de-quantization of quantization step d. This produces the block of de-quantized coefficients R2 TQD .
- an inverse frequency transform is applied to the block R2 TQD so as to produce the block of decoded prediction residue R2 TQDI
- the reverse transform could be a reverse DCT type transform for example.
- it is possible to choose the transform to be used from a predetermined set of transforms E T2 and to decode the information signaling the transform used to the decoder.
- the set E T2 is different from the set E T , in order to adapt to the particular statistics of the residue R2.
- the current block is reconstructed by adding the predicted block P1 obtained during step E444 to the decoded prediction residue R2 TQDI .
- in step E45, it is checked whether the current block is the last block of the image to be processed by the decoding method, taking into account the scanning order defined above. If the current block is not the last block of the image to be processed, during a step E46, the next block of the image to be processed is selected according to the scanning order of the image defined previously and the decoding method goes to step E42, the selected block becoming the current block to be processed.
- the method proceeds to the application of at least one post-processing method to be applied to the image reconstructed during a step E451.
- these post-processing methods can be a deblocking filtering and/or an SAO method.
- the method proceeds to decoding (step E47) of the next image of the video if necessary.
- the steps E231 and E451 of applying at least one post-processing method respectively to the coder and to the decoder according to the invention are described below.
- Post-processing generally requires access to data contained in the neighboring blocks of a current block to be processed, including the "future" blocks or those not yet reconstructed according to the order of traversal of the blocks in the image used at the encoder and the decoder. Post-processing is therefore generally carried out by making a second complete loop on all the reconstructed blocks of the image. Thus, at the coder and the decoder, a first loop on all the blocks of the image builds a reconstructed version of the blocks from the information coded for the blocks, then a post-processing loop again traverses the reconstructed blocks in order to improve their reconstruction. Two examples of improvement are given above, the general principle of the invention applying of course to other post-processing methods.
- a filter called “deblocking” is applied to reconstructed blocks of the image.
- This filtering generally consists in applying a low-pass filter to the pixels which are at the border of a reconstructed block.
- Such a filter is generally described in the article
- the deblocking filtering is only applied at the border of two reconstructed blocks which have been previously coded by a conventional coding mode, i.e. other than ILR.
- This particular embodiment of the invention is for example illustrated in FIG. 5A, showing:
- the hatched pixels correspond to the pixels for which the application of deblocking filtering is deactivated
- the pixels filled with dots are pixels which, because of their location in the reconstructed block, are not affected by deblocking filtering
- the white pixels are the pixels to which deblocking filtering is applied.
- the application of the deblocking filtering to the reconstructed current block is deactivated for all the pixels of the current block. This is illustrated in FIG. 5A, in which all the pixels on the border of block 80 are hatched.
- the deblocking filtering is applied to a pixel of the reconstructed current block if the pixel is located on a border of the current block reconstructed with a neighboring block and if the neighboring block has been decoded or coded according to a conventional coding mode, ie not ILR.
- the deblocking filtering is only applied at the border of two blocks of which at least one of the two blocks is a block coded / decoded according to a conventional coding mode (for example M1 in the example described in relation to Figures 2 and 4).
- This particular embodiment of the invention is for example illustrated in FIG. 5B, showing:
- the hatched pixels correspond to the pixels for which the application of deblocking filtering is deactivated
- the pixels filled with dots are pixels which, because of their location in the block, are not affected by deblocking filtering
- the white pixels are the pixels to which deblocking filtering is applied.
- the application of deblocking filtering is deactivated for a pixel of the reconstructed current block 84 if the pixel is located on a border of the reconstructed current block 84 with a neighboring block and if said neighboring block has been decoded or coded according to the M2 coding mode (ILR).
- the deblocking filtering is applied to a pixel of the reconstructed current block (84), if the pixel is located on a border of the reconstructed current block with a neighboring block and if the neighboring block has been decoded or coded according to a coding mode distinct from the coding mode M2. This is illustrated in FIG. 5B in which all the pixels of block 84 situated at the border with block 83 are white.
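- the filtering decision at a block border, for the two variants described above, can be sketched as follows (the smoothing used is illustrative and does not reproduce the actual HEVC deblocking filter):

```python
# Filtering decision at the border between two reconstructed blocks, covering the
# two variants described above. The smoothing applied is purely illustrative and
# does not reproduce the HEVC deblocking filter.

def should_deblock_edge(left_is_ilr, right_is_ilr, variant):
    if variant == 1:
        # first variant: both blocks must use a mode other than ILR
        return not left_is_ilr and not right_is_ilr
    # second variant: at least one of the two blocks uses a mode other than ILR
    return not left_is_ilr or not right_is_ilr

def deblock_vertical_edge(image, edge_x, y0, y1, left_is_ilr, right_is_ilr, variant):
    if not should_deblock_edge(left_is_ilr, right_is_ilr, variant):
        return  # filtering deactivated on this border
    for y in range(y0, y1):
        p, q = image[y][edge_x - 1], image[y][edge_x]
        # simple low-pass smoothing of the two border pixels
        image[y][edge_x - 1] = (3 * p + q) // 4
        image[y][edge_x] = (p + 3 * q) // 4
```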
- the SAO processing applies to all the pixels of a reconstructed block.
- Such SAO processing consists in shifting the decoded value of each pixel of the block by a value explicitly transmitted to the decoder, according to the environment of said pixel.
- SAO processing is described in Chih-Ming Fu, Elena Alshina, Alexander Alshin, Yu-Wen Huang, Ching-Yeh Chen, Chia-Yang Tsai, Chih-Wei Hsu, Shaw-Min Lei, Jeong-Hoon Park, and Woo-Jin Han, "Sample Adaptive Offset in the HEVC Standard", IEEE Transactions on Circuits and Systems for Video Technology, vol. 22, no. 12, December 2012, p. 1755.
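As a rough illustration of this offset mechanism (loosely modelled on the edge-offset classification of the cited article, with category names and the offset table invented for the example), the sketch below adds a signalled offset to each pixel according to how it compares with its two horizontal neighbours.

```python
def sao_edge_offset_row(row, offsets):
    """Shift each decoded pixel by the offset signalled for its edge category.
    The category depends on the pixel's environment: here, a comparison with
    its left and right neighbours along the horizontal direction."""
    out = list(row)
    for i in range(1, len(row) - 1):
        a, c, b = row[i - 1], row[i], row[i + 1]
        if c < a and c < b:
            cat = "local_min"
        elif c > a and c > b:
            cat = "local_max"
        elif (c < a and c == b) or (c == a and c < b):
            cat = "concave_corner"
        elif (c > a and c == b) or (c == a and c > b):
            cat = "convex_corner"
        else:
            cat = None  # flat or monotone area: no offset applied
        if cat is not None:
            out[i] = c + offsets.get(cat, 0)
    return out

# Offsets are chosen by the encoder and explicitly transmitted in the stream, e.g.:
# sao_edge_offset_row([10, 8, 10, 12, 12], {"local_min": 2, "convex_corner": -1})
# -> [10, 10, 10, 11, 12]
```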
- the SAO processing is applied only to the reconstructed blocks which have been coded by a conventional coding mode, i.e. not ILR.
- the application of the SAO method to the reconstructed current block is deactivated for all the pixels of the reconstructed current block.
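Reusing `sao_edge_offset_row` from the previous sketch, the block-level deactivation of these embodiments could look as follows; the `ilr_mode` label is taken from the description (M2), everything else is an assumption of the example.

```python
def apply_sao_to_block(block, coding_mode, offsets, ilr_mode="M2"):
    """Skip SAO entirely for ILR-coded blocks; otherwise apply it row by row."""
    if coding_mode == ilr_mode:
        return block  # SAO deactivated for every pixel of an ILR block
    return [sao_edge_offset_row(row, offsets) for row in block]
```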
- FIG. 6 shows the simplified structure of a COD coding device suitable for implementing the coding method according to any one of the particular embodiments of the invention.
- the steps of the coding method are implemented by computer program instructions.
- the coding device COD has the conventional architecture of a computer and notably comprises a memory MEM, a processing unit UT, equipped for example with a processor PROC, and controlled by the computer program PG stored in MEM memory.
- the computer program PG includes instructions for implementing the steps of the coding method as described above, when the program is executed by the processor PROC.
- the code instructions of the computer program PG are for example loaded into a memory RAM (not shown) before being executed by the processor PROC.
- the processor PROC of the processing unit UT implements in particular the steps of the coding method described above, according to the instructions of the computer program PG.
- FIG. 7 shows the simplified structure of a DEC decoding device suitable for implementing the decoding method according to any one of the particular embodiments of the invention.
- the decoding device DEC has the conventional architecture of a computer and notably comprises a memory MEMO, a processing unit UTO, equipped for example with a processor PROCO, and controlled by the computer program PGO stored in MEMO memory.
- the computer program PGO includes instructions for implementing the steps of the decoding method as described above, when the program is executed by the processor PROCO.
- the code instructions of the computer program PGO are for example loaded into a RAM memory (not shown) before being executed by the processor PROCO.
- the processor PROCO of the processing unit UTO implements in particular the steps of the decoding method described above, according to the instructions of the computer program PGO.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR1858573A FR3086487A1 (fr) | 2018-09-21 | 2018-09-21 | Procedes et dispositifs de codage et de decodage d'un flux de donnees representatif d'au moins une image. |
PCT/FR2019/052029 WO2020058595A1 (fr) | 2018-09-21 | 2019-09-03 | Procédés et dispositifs de codage et de décodage d'un flux de données représentatif d'au moins une image |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3854085A1 (fr) | 2021-07-28 |
Family
ID=65494291
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP19783583.8A Pending EP3854085A1 (fr) | 2018-09-21 | 2019-09-03 | Procédés et dispositifs de codage et de décodage d'un flux de données représentatif d'au moins une image |
Country Status (8)
Country | Link |
---|---|
US (2) | US11516465B2 (fr) |
EP (1) | EP3854085A1 (fr) |
JP (2) | JP7487185B2 (fr) |
KR (1) | KR20210062048A (fr) |
CN (2) | CN112740690B (fr) |
BR (1) | BR112021003486A2 (fr) |
FR (1) | FR3086487A1 (fr) |
WO (1) | WO2020058595A1 (fr) |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012134046A2 (fr) * | 2011-04-01 | 2012-10-04 | 주식회사 아이벡스피티홀딩스 | Procédé de codage vidéo |
US9338476B2 (en) | 2011-05-12 | 2016-05-10 | Qualcomm Incorporated | Filtering blockiness artifacts for video coding |
US9510020B2 (en) * | 2011-10-20 | 2016-11-29 | Qualcomm Incorporated | Intra pulse code modulation (IPCM) and lossless coding mode deblocking for video coding |
US9253508B2 (en) | 2011-11-04 | 2016-02-02 | Futurewei Technologies, Inc. | Differential pulse code modulation intra prediction for high efficiency video coding |
WO2015054811A1 (fr) * | 2013-10-14 | 2015-04-23 | Microsoft Corporation | Fonctions de mode de prédiction de copie intrabloc pour codage et décodage vidéo et d'image |
FR3012714A1 (fr) * | 2013-10-25 | 2015-05-01 | Orange | Procede de codage et de decodage d'images, dispositif de codage et de decodage d'images et programmes d'ordinateur correspondants |
AU2014202921B2 (en) * | 2014-05-29 | 2017-02-02 | Canon Kabushiki Kaisha | Method, apparatus and system for de-blocking a block of video samples |
US9924175B2 (en) * | 2014-06-11 | 2018-03-20 | Qualcomm Incorporated | Determining application of deblocking filtering to palette coded blocks in video coding |
US10924744B2 (en) * | 2017-11-17 | 2021-02-16 | Intel Corporation | Selective coding |
US11470329B2 (en) | 2018-12-26 | 2022-10-11 | Tencent America LLC | Method and apparatus for video coding |
2018
- 2018-09-21 FR FR1858573A patent/FR3086487A1/fr not_active Withdrawn

2019
- 2019-09-03 BR BR112021003486-2A patent/BR112021003486A2/pt unknown
- 2019-09-03 WO PCT/FR2019/052029 patent/WO2020058595A1/fr active Application Filing
- 2019-09-03 KR KR1020217011404A patent/KR20210062048A/ko active Search and Examination
- 2019-09-03 EP EP19783583.8A patent/EP3854085A1/fr active Pending
- 2019-09-03 US US17/277,945 patent/US11516465B2/en active Active
- 2019-09-03 CN CN201980061930.9A patent/CN112740690B/zh active Active
- 2019-09-03 JP JP2021515568A patent/JP7487185B2/ja active Active
- 2019-09-03 CN CN202410401679.9A patent/CN118175323A/zh active Pending

2022
- 2022-10-26 US US17/973,966 patent/US11962761B2/en active Active

2024
- 2024-05-07 JP JP2024075259A patent/JP2024092045A/ja active Pending
Also Published As
Publication number | Publication date |
---|---|
CN112740690A (zh) | 2021-04-30 |
BR112021003486A2 (pt) | 2021-05-18 |
US11516465B2 (en) | 2022-11-29 |
JP7487185B2 (ja) | 2024-05-20 |
KR20210062048A (ko) | 2021-05-28 |
US20230050410A1 (en) | 2023-02-16 |
JP2022501910A (ja) | 2022-01-06 |
US11962761B2 (en) | 2024-04-16 |
WO2020058595A1 (fr) | 2020-03-26 |
FR3086487A1 (fr) | 2020-03-27 |
JP2024092045A (ja) | 2024-07-05 |
US20210352272A1 (en) | 2021-11-11 |
CN118175323A (zh) | 2024-06-11 |
CN112740690B (zh) | 2024-04-09 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
EP2991351B1 (fr) | Procédé de décodage d'images | |
EP2777269B1 (fr) | Procédé de codage et décodage d'images, dispositif de codage et décodage et programmes d'ordinateur correspondants | |
FR2947134A1 (fr) | Procedes de codage et de decodages d'images, dispositifs de codage et de decodage, flux de donnees et programme d'ordinateur correspondants. | |
WO2015059400A1 (fr) | Procédé de codage et de décodage d'images, dispositif de codage et de décodage d'images et programmes d'ordinateur correspondants | |
EP3075155B1 (fr) | Procédé de codage et de décodage d'images, dispositif de codage et de décodage d'images et programmes d'ordinateur correspondants | |
CN110290384A (zh) | 图像滤波方法、装置及视频编解码器 | |
EP3180914B1 (fr) | Procédé de codage et de décodage d'images, dispositif de codage et de décodage d'images et programmes d'ordinateur correspondants | |
EP3854090A1 (fr) | Procédés et dispositifs de codage et de décodage d'un flux de données représentatif d'au moins une image | |
EP3815369B1 (fr) | Procédés et dispositifs de codage et de décodage d'un flux de données représentatif d'au moins une image | |
EP3854085A1 (fr) | Procédés et dispositifs de codage et de décodage d'un flux de données représentatif d'au moins une image | |
WO2020002795A1 (fr) | Procédés et dispositifs de codage et de décodage d'un flux de données représentatif d'au moins une image | |
EP3922017A1 (fr) | Procédés et dispositifs de codage et de décodage d'un flux de données représentatif d'au moins une image | |
WO2020058593A1 (fr) | Procédés et dispositifs de codage et de décodage d'un flux de données représentatif d'au moins une image | |
FR2957744A1 (fr) | Procede de traitement d'une sequence video et dispositif associe | |
EP3596923A1 (fr) | Procédé de codage et décodage d'images, dispositif de codage et décodage et programmes d'ordinateur correspondants | |
FR3098070A1 (fr) | Procédé d’encodage et de décodage vidéo par signalisation d’un sous-ensemble de candidat | |
FR2956552A1 (fr) | Procede de codage ou de decodage d'une sequence video, dispositifs associes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| 17P | Request for examination filed | Effective date: 20210216 |
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| DAV | Request for validation of the european patent (deleted) | |
| DAX | Request for extension of the european patent (deleted) | |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS |
| 17Q | First examination report despatched | Effective date: 20240626 |