CN104521232A - Method and apparatus for coding/decoding image - Google Patents


Info

Publication number
CN104521232A
Authority
CN
China
Prior art keywords
current block
scaling factor
transform
block
factor
Prior art date
Legal status
Pending
Application number
CN201380042182.2A
Other languages
Chinese (zh)
Inventor
金晖容
林成昶
李镇浩
崔振秀
金镇雄
朴光勋
金耿龙
Current Assignee
Electronics and Telecommunications Research Institute ETRI
Industry Academic Cooperation Foundation of Kyung Hee University
Original Assignee
Electronics and Telecommunications Research Institute ETRI
Industry Academic Cooperation Foundation of Kyung Hee University
Priority date
Filing date
Publication date
Priority to CN202210014961.2A (CN115052155A)
Priority to CN202210015295.4A (CN115052159A)
Priority to CN202010544830.6A (CN111629208B)
Priority to CN202210015293.5A (CN115065823A)
Priority to CN202210024647.2A (CN114786016A)
Priority to CN202210015290.1A (CN115052158A)
Priority to CN202210014962.7A (CN115052156A)
Priority to CN201910417316.3A (CN110392257A)
Application filed by Electronics and Telecommunications Research Institute ETRI and Industry Academic Cooperation Foundation of Kyung Hee University
Priority to CN202011526334.4A (CN112969073A)
Priority to CN202210015288.4A (CN115052157A)
Priority claimed from KR1020130077047A
Priority claimed from PCT/KR2013/005864 (WO2014007520A1)
Publication of CN104521232A
Legal status: Pending

Classifications

    • H04N19/167 — Adaptive coding characterised by the position within a video image, e.g. region of interest [ROI]
    • H04N19/126 — Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
    • H04N19/18 — Adaptive coding where the coding unit is a set of transform coefficients
    • H04N19/46 — Embedding additional information in the video signal during the compression process


Abstract

Disclosed are a method and an apparatus for encoding/decoding an image. The method for decoding an image comprises the steps of: deriving a scaling factor for a current block depending on whether the current block is a transform skip block; and scaling the current block based on the scaling factor, wherein the scaling factor of the current block is derived based on the position of a transform coefficient within the current block, and wherein the transform skip block is a current block to which no transform is applied and is specified based on information indicating whether an inverse transform is applied to the current block.

Description

Method and apparatus for encoding/decoding an image
Technical Field
The present invention relates to the encoding/decoding of images and, more particularly, to a method and apparatus for scaling transform coefficients.
Background Art
Broadcast services with high-definition (HD) resolution (1280x1024 or 1920x1080) are expanding nationwide and worldwide, and many users have become accustomed to video with high resolution and high picture quality. Accordingly, many organizations are driving the development of next-generation imaging devices. Furthermore, as interest grows in ultra-high definition (UHD), which has four times the resolution of HDTV, there is rising demand for compression techniques for images with even higher resolution and picture quality. There is also a pressing need for a new standard that maintains the same picture quality while offering higher compression efficiency than H.264/AVC, now used in HDTV, mobile phones, and Blu-ray players, along with many advantages in bandwidth and storage.
The Moving Picture Experts Group (MPEG) and the Video Coding Experts Group (VCEG) are now jointly standardizing High Efficiency Video Coding (HEVC), that is, the next-generation video codec, whose objective is to encode images, including UHD images, with twice the compression efficiency of H.264/AVC. This can provide images with a lower bitrate and higher picture quality than present images, for both HD and UHD images, in 3D broadcasting and mobile communication networks.
Summary of the invention
[Technical Problem]
The present invention provides a method and apparatus for encoding and decoding an image that can improve encoding/decoding efficiency.
The present invention provides a method and apparatus for scaling transform coefficients (or residual signals) that can improve encoding/decoding efficiency.
The present invention provides a method and apparatus for quantizing/dequantizing a transform skip block that can improve encoding/decoding efficiency.
[Technical Solution]
In accordance with an aspect of the present invention, an image decoding method is provided. The image decoding method includes deriving a scaling factor for a current block depending on whether the current block is a transform skip block, and performing scaling on the current block based on the scaling factor.
The scaling factor for the current block is derived based on the position of a transform coefficient within the current block. The transform skip block is a block to which no transform has been applied, and it is specified based on information indicating whether an inverse transform is applied to the current block.
In the deriving of the scaling factor for the current block, if the current block is a transform skip block, a base scaling factor may be derived regardless of the position of the transform coefficient within the current block.
The base scaling factor may have a specific scaling factor value, and the specific scaling factor value may be 16.
The base scaling factor may have a different scaling factor value depending on whether the current block uses a quantization matrix.
The base scaling factor may have a different scaling factor value depending on whether the current block is a luma block or a chroma block.
A flag indicating whether a transform skip algorithm is used in the picture including the current block may be signaled through a picture parameter set (PPS).
The base scaling factor may include information about the scaling factors for a luma signal and a chroma signal.
In the deriving of the scaling factor for the current block, if the current block is a transform skip block or the current block does not use a quantization matrix, the base scaling factor may be derived regardless of the position of the transform coefficient within the current block.
In the deriving of the scaling factor for the current block, if the current block is not a transform skip block, the scaling factor for the current block may be derived based on the position of the transform coefficient within the current block using a quantization matrix.
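The derivation described above can be sketched as follows. This is an illustrative Python sketch, not the normative HEVC derivation: the function name, argument names, and the 4x4 matrix values are assumptions chosen for the example; only the base factor of 16 and the transform-skip/position logic come from the text.

```python
FLAT_FACTOR = 16  # base scaling factor from the text (a flat value)

def derive_scale_factor(x, y, transform_skip, use_quant_matrix, quant_matrix=None):
    """Return the scaling factor for the coefficient at position (x, y)."""
    if transform_skip or not use_quant_matrix:
        # Transform skip blocks (and blocks without a quantization matrix)
        # receive the base factor regardless of coefficient position.
        return FLAT_FACTOR
    # Otherwise the factor is read from the quantization matrix entry
    # at the coefficient's position.
    return quant_matrix[y][x]

# A 4x4 quantization matrix with illustrative (non-normative) values.
qm = [[16, 16, 17, 18],
      [16, 17, 18, 20],
      [17, 18, 20, 24],
      [18, 20, 24, 29]]

assert derive_scale_factor(3, 3, True, True, qm) == 16   # skip: position ignored
assert derive_scale_factor(3, 3, False, True, qm) == 29  # transform: matrix entry
```

Used this way, a transform skip block is scaled uniformly while a transformed block is scaled per coefficient position, which is exactly the distinction the method turns on.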
In accordance with another aspect of the present invention, an image decoding apparatus is provided. The image decoding apparatus includes a dequantization unit for deriving a scaling factor for a current block depending on whether the current block is a transform skip block, and performing scaling on the current block based on the scaling factor.
The scaling factor for the current block may be derived based on the position of a transform coefficient within the current block. The transform skip block may be a block to which no transform has been applied, specified based on information indicating whether an inverse transform is applied to the current block.
In accordance with yet another aspect of the present invention, an image encoding method is provided. The image encoding method includes the steps of deriving a scaling factor for a current block depending on whether the current block is a transform skip block, and performing scaling on the current block based on the scaling factor.
The scaling factor for the current block may be derived based on the position of a transform coefficient within the current block. The transform skip block may be a block to which no transform has been applied, specified based on information indicating whether an inverse transform is applied to the current block.
In the deriving of the scaling factor for the current block, if the current block is a transform skip block, a base scaling factor may be derived regardless of the position of the transform coefficient within the current block.
The base scaling factor may have a specific scaling factor value, and the specific scaling factor value may be 16.
The base scaling factor may have a different scaling factor value depending on whether the current block uses a quantization matrix.
The base scaling factor may have a different scaling factor value depending on whether the current block is a luma block or a chroma block.
A flag indicating whether a transform skip algorithm is used in the picture including the current block may be signaled through a picture parameter set (PPS).
The base scaling factor may include information about the scaling factors for a luma signal and a chroma signal.
In the deriving of the scaling factor for the current block, if the current block is a transform skip block or the current block does not use a quantization matrix, the base scaling factor may be derived regardless of the position of the transform coefficient within the current block.
In the deriving of the scaling factor for the current block, if the current block is not a transform skip block, the scaling factor for the current block may be derived based on the position of the transform coefficient within the current block using a quantization matrix.
In accordance with still another aspect of the present invention, an image encoding apparatus is provided. The image encoding apparatus includes a quantization unit for deriving a scaling factor for a current block depending on whether the current block is a transform skip block, and performing scaling on the current block based on the scaling factor.
The scaling factor for the current block may be derived based on the position of a transform coefficient within the current block. The transform skip block may be a block to which no transform has been applied, specified based on information indicating whether an inverse transform is applied to the current block.
[Advantageous Effects]
A block to which the transform skip algorithm is applied has transform coefficient characteristics different from those of an existing block on which the transform/inverse transform processes are performed, because the transform/inverse transform processes are not performed on the block to which the transform skip algorithm is applied. That is, if a scan method applied to existing blocks on which the transform/inverse transform processes are performed is applied to a transform skip block, encoding/decoding efficiency can be reduced. Accordingly, encoding and decoding efficiency can be improved by applying the scaling factor to a transform skip block equally, regardless of the position of the transform coefficient within the block.
Description of Drawings
Fig. 1 is a block diagram showing the structure of an image encoding apparatus to which an embodiment of the present invention is applied;
Fig. 2 is a block diagram showing the structure of an image decoding apparatus to which an embodiment of the present invention is applied;
Fig. 3 is a diagram schematically illustrating the partition structure of an image when the image is encoded;
Fig. 4 is a diagram showing the forms of PUs that can be included in a CU;
Fig. 5 is a diagram showing the forms of TUs that can be included in a CU;
Fig. 6 is a flowchart illustrating a scaling method for a residual signal (or transform coefficient) according to an embodiment of the present invention; and
Fig. 7 is a flowchart illustrating a scaling method for a residual signal (or transform coefficient) according to another embodiment of the present invention.
Embodiments
Hereinafter, some example embodiments of the present invention are described in detail with reference to the accompanying drawings. In describing the embodiments of this specification, a detailed description of known functions and constructions is omitted if it is deemed to make the gist of the present invention unnecessarily vague.
In this specification, when it is said that one element is connected or coupled to another element, it may mean that the one element is directly connected or coupled to the other element, or that a third element may be connected or coupled between the two elements. Furthermore, in this specification, when it is said that a specific element is included, it may mean that elements other than the specific element are not excluded, and that additional elements may be included in the embodiments of the present invention or the scope of the technical spirit of the present invention.
Terms such as first and second may be used to describe various elements, but the elements are not restricted by the terms. The terms are used only to distinguish one element from another. For example, a first element may be named a second element without departing from the scope of the present invention. Likewise, a second element may be named a first element.
Furthermore, the elements described in the embodiments of the present invention are shown independently in order to indicate different and characteristic functions, and it does not mean that each element is formed of a separate piece of hardware or software. That is, the elements are arranged and included separately for convenience of description; at least two of the elements may be combined to form one element, or one element may be divided into a plurality of elements and the plurality of divided elements may perform functions. An embodiment in which the elements are combined, or an embodiment from which some elements are separated, is also included in the scope of the present invention, unless it departs from the essence of the present invention.
Furthermore, in the present invention, some elements are not essential elements for performing essential functions, but may be optional elements for improving performance only. The present invention may be implemented using only the essential elements for realizing the essence of the present invention, other than the elements used to improve performance only, and a structure including only the essential elements, excluding the optional elements used only to improve performance, is also included in the scope of the present invention.
First, in order to improve convenience of description and to help understanding of the present invention, the terms used in this specification are briefly described.
A unit means an image encoding or decoding unit. In other words, when an image is encoded or decoded, the encoding or decoding unit denotes a subdivided unit of the image when the image is subdivided and encoded or decoded. A unit may also be called a block, a macroblock (MB), a coding unit (CU), a prediction unit (PU), a transform unit (TU), a coding block (CB), a prediction block (PB), or a transform block (TB). One unit can be divided into smaller sub-units.
A block denotes an MxN array of samples, where M and N have positive integer values. A block can commonly mean an array in two-dimensional form.
A transform unit (TU) is a basic unit when encoding/decoding is performed on a residual signal, covering processes such as transform, inverse transform, quantization, dequantization, and transform coefficient encoding/decoding. One TU can be partitioned into multiple smaller TUs. Here, when the residual signal has the form of a block, the residual signal may be called a residual block.
A quantization matrix means a matrix used in a quantization or dequantization process in order to improve the subjective or objective picture quality of an image. A quantization matrix is also called a scaling list.
A quantization matrix can be divided into a default matrix, a non-default matrix, and a flat matrix. A default matrix means a specific quantization matrix predetermined in the encoder and the decoder. A non-default matrix is not predetermined in the encoder and the decoder, but means a quantization matrix transmitted or received by a user. A flat matrix means a matrix in which all elements have the same value.
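As a minimal illustration of the flat matrix described above (a sketch; the helper name and the chosen size are assumptions, and the value 16 mirrors the base scaling factor used elsewhere in the text):

```python
def flat_matrix(size, value=16):
    """A flat quantization matrix: every element has the same value."""
    return [[value] * size for _ in range(size)]

m = flat_matrix(4)
assert all(v == 16 for row in m for v in row)  # all entries identical
```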
Scaling denotes a process of multiplying a transform coefficient level by a factor. A transform coefficient is generated as a result of this process. Scaling is also called dequantization.
A transform coefficient denotes a coefficient value generated after a transform is performed. In this specification, a quantized transform coefficient level, obtained by applying quantization to a transform coefficient, is also called a transform coefficient.
A quantization parameter denotes a value used to scale a transform coefficient level in quantization and dequantization. Here, the quantization parameter can be a value mapped to a quantization step size.
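The mapping between the quantization parameter and the step size can be illustrated as follows. This is a hedged sketch: the relation Qstep ≈ 2^((QP-4)/6), under which the step size doubles every 6 QP units, is the commonly cited HEVC design; the function name is an assumption.

```python
import math

def qstep(qp):
    """Approximate quantization step size for a given quantization parameter."""
    # The step size roughly doubles every 6 QP units.
    return 2 ** ((qp - 4) / 6)

assert qstep(4) == 1.0                            # reference point
assert math.isclose(qstep(28) / qstep(22), 2.0)   # +6 QP -> double the step
```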
A parameter set corresponds to header information in a structure within a bit stream. A parameter set commonly means a sequence parameter set, a picture parameter set, or an adaptation parameter set.
Fig. 1 is a block diagram showing the structure of an image encoding apparatus to which an embodiment of the present invention is applied.
Referring to Fig. 1, the image encoding apparatus 100 includes a motion estimation module 111, a motion compensation module 112, an intra prediction module 120, a switch 115, a subtractor 125, a transform module 130, a quantization module 140, an entropy encoding module 150, a dequantization module 160, an inverse transform module 170, an adder 175, a filter module 180, and a reference picture buffer 190.
The image encoding apparatus 100 can perform encoding on an input picture in intra mode or inter mode and output a bit stream. In intra mode, the switch 115 can switch to intra, and in inter mode, the switch 115 can switch to inter. Intra prediction means intra-frame prediction, and inter prediction means inter-frame prediction. The image encoding apparatus 100 can generate a prediction block for an input block of the input picture and then encode the difference between the input block and the prediction block. Here, the input picture can mean the original picture.
In intra mode, the intra prediction module 120 can generate the prediction block by performing spatial prediction using the pixel values of already encoded blocks neighboring the current block.
In inter mode, the motion estimation module 111 can obtain a motion vector by searching a reference picture, stored in the reference picture buffer 190, for the region that best matches the input block in a motion prediction process. The motion compensation module 112 can generate the prediction block by performing motion compensation using the motion vector and the reference picture stored in the reference picture buffer 190. Here, the motion vector is a two-dimensional (2-D) vector used in inter prediction, and it can indicate the offset between the picture to be encoded/decoded and the reference picture.
The subtractor 125 can generate a residual block based on the difference between the input block and the generated prediction block.
The transform module 130 can perform a transform on the residual block and output transform coefficients for the transformed block. Furthermore, the quantization module 140 can output quantized coefficients by quantizing the received transform coefficients according to a quantization parameter.
The entropy encoding module 150 can perform entropy encoding on symbols according to a probability distribution, based on the values calculated by the quantization module 140, coding parameter values calculated in the encoding process, and so on, and can output a bit stream according to the entropy-encoded symbols. When entropy encoding is applied, the size of the bit stream for the symbols to be encoded can be reduced because the symbols are represented by allocating a small number of bits to symbols with a high occurrence frequency and a large number of bits to symbols with a low occurrence frequency. Accordingly, the compression performance of image encoding can be improved through entropy encoding. For entropy encoding, the entropy encoding module 150 can use encoding methods such as exponential Golomb, context-adaptive variable length coding (CAVLC), and context-adaptive binary arithmetic coding (CABAC).
The image encoding apparatus 100 according to the embodiment of Fig. 1 performs inter prediction encoding (that is, inter-frame prediction encoding), and thus the currently encoded picture needs to be decoded and stored in order to be used as a reference picture. Accordingly, the quantized coefficients are dequantized by the dequantization module 160 and inversely transformed by the inverse transform module 170. The dequantized and inversely transformed coefficients are added to the prediction block through the adder 175, and thereby a reconstructed block is generated.
The reconstructed block passes through the filter module 180. The filter module 180 can apply one or more of a deblocking filter, a sample adaptive offset (SAO), and an adaptive loop filter (ALF) to the reconstructed block or the reconstructed picture. The filter module 180 may also be called an adaptive in-loop filter. The deblocking filter can remove block distortion generated at the boundaries between blocks. The SAO can add a proper offset value to pixel values in order to compensate for coding errors. The ALF can perform filtering based on values obtained by comparing the reconstructed picture with the original picture. The reconstructed block that has passed through the filter module 180 can be stored in the reference picture buffer 190.
Fig. 2 is a block diagram showing the structure of an image decoding apparatus to which an embodiment of the present invention is applied.
Referring to Fig. 2, the image decoding apparatus 200 includes an entropy decoding module 210, a dequantization module 220, an inverse transform module 230, an intra prediction module 240, a motion compensation module 250, a filter module 260, and a reference picture buffer 270.
The image decoding apparatus 200 can receive a bit stream output from an encoder, perform decoding on the bit stream in intra mode or inter mode, and output a reconstructed image (that is, a restored image). In intra mode, a switch can switch to intra, and in inter mode, the switch can switch to inter.
The image decoding apparatus 200 can obtain a reconstructed residual block from the received bit stream, generate a prediction block, and then generate a reconstructed block (that is, a restored block) by adding the reconstructed residual block to the prediction block.
The entropy decoding module 210 can generate symbols, including symbols having the form of quantized coefficients, by performing entropy decoding on the received bit stream according to a probability distribution.
When an entropy decoding method is applied, the size of the bit stream for each symbol can be reduced because the symbols are represented by allocating a small number of bits to symbols with a high occurrence frequency and a large number of bits to symbols with a low occurrence frequency.
The quantized coefficients are dequantized by the dequantization module 220 and inversely transformed by the inverse transform module 230. As a result of the dequantization/inverse transform of the quantized coefficients, a reconstructed residual block can be generated.
In intra mode, the intra prediction module 240 can generate the prediction block by performing spatial prediction using the pixel values of already decoded blocks neighboring the current block. In inter mode, the motion compensation module 250 can generate the prediction block by performing motion compensation using the motion vector and the reference picture stored in the reference picture buffer 270.
The reconstructed residual block and the prediction block are added together through the adder 255, and the added block passes through the filter module 260. The filter module 260 can apply at least one of a deblocking filter, an SAO, and an ALF to the reconstructed block or the reconstructed picture. The filter module 260 outputs the reconstructed image (that is, the restored image). The reconstructed image can be stored in the reference picture buffer 270 and used for inter prediction.
Fig. 3 is a diagram schematically illustrating the partition structure of an image when the image is encoded.
In High Efficiency Video Coding (HEVC), encoding is performed in coding units in order to partition an image efficiently.
Referring to Fig. 3, in HEVC, an image 300 is sequentially partitioned into largest coding units (hereinafter called LCUs), and a partition structure is determined based on the LCUs. The partition structure means a distribution of coding units (hereinafter called CUs) for efficiently encoding the image within the LCU 310. Such a distribution can be determined based on whether one CU will be partitioned into four CUs, each with its width and height reduced by half from the one CU. Likewise, each partitioned CU can be recursively partitioned into four CUs whose width and height are reduced by half from the partitioned CU.
Here, the partitioning of a CU can be performed recursively up to a predetermined depth. Information about the depth indicates the size of a CU, and depth information is stored for each CU. For example, the depth of an LCU can be 0, and the depth of a smallest coding unit (SCU) can be a predetermined maximum depth. Here, the LCU is a CU having the maximum CU size described above, and the SCU is a CU having the minimum CU size.
Whenever partitioning that halves the width and height is performed starting from the LCU 310, the depth of a CU increases by 1. A CU on which partitioning has not been performed has a 2Nx2N size for each depth, and a CU on which partitioning is performed is partitioned from the CU having the 2Nx2N size into four CUs, each having an NxN size. The size of N is halved whenever the depth increases by 1.
Referring to Fig. 3, the size of an LCU having a minimum depth of 0 can be 64x64 pixels, and the size of an SCU having a maximum depth of 3 can be 8x8 pixels. Here, the LCU having 64x64 pixels can be represented by depth 0, a CU having 32x32 pixels by depth 1, a CU having 16x16 pixels by depth 2, and the SCU having 8x8 pixels by depth 3.
Furthermore, information about whether a specific CU is partitioned can be represented through 1-bit partition information for each CU. The partition information can be included in all CUs other than the SCU. For example, if a CU is not partitioned, partition information 0 can be stored; if a CU is partitioned, partition information 1 can be stored.
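The depth/size relation described above can be sketched as follows (an illustrative helper, with the 64-pixel LCU and maximum depth 3 taken from the example in the text):

```python
def cu_size(lcu_size, depth):
    """CU width/height at a given quadtree depth: each level halves both."""
    return lcu_size >> depth

assert cu_size(64, 0) == 64  # LCU, depth 0
assert cu_size(64, 1) == 32
assert cu_size(64, 2) == 16
assert cu_size(64, 3) == 8   # SCU, depth 3
```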
Meanwhile, a CU partitioned from the LCU can include a prediction unit (PU) (or prediction block (PB)), that is, a basic unit for prediction, and a transform unit (TU) (or transform block (TB)), that is, a basic unit for transform.
Fig. 4 is a diagram showing the forms of PUs that can be included in a CU.
From among the CUs partitioned from the LCU, a CU that is no longer partitioned is partitioned into one or more PUs. This act itself is also called partitioning. A prediction unit (hereinafter called a PU) is a basic unit on which prediction is performed, and it is encoded in any one of skip mode, inter mode, and intra mode. The PU can be partitioned in various forms depending on the mode.
Referring to Fig. 4, in skip mode, a 2Nx2N mode 410 having the same size as the CU can be supported without partitioning within the CU.
In inter mode, 8 partitioned forms can be supported within a CU, for example, the 2Nx2N mode 410, a 2NxN mode 415, an Nx2N mode 420, an NxN mode 425, a 2NxnU mode 430, a 2NxnD mode 435, an nLx2N mode 440, and an nRx2N mode 445.
In intra mode, the 2Nx2N mode 410 and the NxN mode 425 can be supported within a CU.
Fig. 5 is a diagram showing the forms of TUs that can be included in a CU.
A transform unit (hereinafter called a TU) is a basic unit used for the spatial transform and quantization/dequantization (scaling) processes within a CU. A TU can have a rectangular or square form. From among the CUs partitioned from the LCU, a CU that is no longer partitioned can be partitioned into one or more TUs.
Here, the partition structure of the TU can be a quadtree structure. For example, as shown in Fig. 5, one CU 510 can be partitioned once or more depending on the quadtree structure, thereby forming TUs of various sizes.
Meanwhile, in HEVC, as in H.264/AVC, intra-picture prediction (hereafter referred to as intra prediction) encoding may be performed. Here, encoding is performed by deriving an intra-prediction mode (or prediction direction) for the current block from neighboring blocks located around the current block.
As described above, the predicted image of the signal obtained by performing prediction based on the intra-prediction mode may differ from the original image. The residual image having the difference between the predicted image and the original image is subjected to entropy coding after undergoing frequency-domain transform and quantization. Here, to increase the coding efficiency of the frequency-domain transform, an integer transform, a discrete cosine transform (DCT), a discrete sine transform (DST), or DCT/DST may be applied selectively and adaptively depending on the size of the block and the intra-prediction mode.
In addition, to increase coding efficiency for screen content, such as document images or presentation images (e.g., PowerPoint slides), a transform skip algorithm may be used.
If the transform skip algorithm is used, the encoder directly quantizes the residual image (or residual block) having the difference between the predicted image and the original image, without the frequency transform process, and performs entropy coding on the residual block. The decoder, in turn, performs entropy decoding on the residual block and generates a reconstructed residual block by performing inverse quantization (scaling) on the entropy-decoded block. Accordingly, the frequency transform/inverse-transform process is skipped for a block to which the transform skip algorithm is applied.
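The skipped-transform path described above can be illustrated with a minimal sketch. Note that the scalar step qstep and the rounding quantizer below are hypothetical simplifications for illustration, not the HEVC quantization derivation itself; the point is only that quantization and inverse quantization act directly on the residual, with no transform in between.

```python
# Minimal sketch of the transform-skip idea: the residual block is quantized
# and inverse-quantized (scaled) directly, with no frequency transform step.
# qstep is a hypothetical scalar quantization step, not the HEVC derivation.

def quantize_skip(residual, qstep):
    """Quantize a residual block directly (transform skipped)."""
    return [[round(v / qstep) for v in row] for row in residual]

def dequantize_skip(levels, qstep):
    """Inverse quantization (scaling) of the entropy-decoded levels."""
    return [[lv * qstep for lv in row] for row in levels]

residual = [[7, -3], [0, 12]]
levels = quantize_skip(residual, qstep=4)    # levels == [[2, -1], [0, 3]]
recon = dequantize_skip(levels, qstep=4)     # recon  == [[8, -4], [0, 12]]
```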
In the quantization/inverse-quantization process, the scaling factor may be applied differently depending on the position of the transform coefficient within the block, in order to improve the subjective picture quality of the image. Conversely, there is also a method of applying the scaling factor identically when performing quantization/inverse quantization, regardless of the position of the transform coefficient within the block. Whether this method is applied is signaled through the sequence parameter set (SPS) or picture parameter set (PPS) of the bitstream.
As an embodiment of this process, the scaling process for transform coefficients may be performed as follows.
Scaling process for transform coefficients
In this case, the inputs are as follows.
- the width of the current transform block: nW
- the height of the current transform block: nH
- an array of transform coefficients with elements c_ij: (nW×nH) array c
- an index for the luma and chroma signals of the current block: cIdx
If cIdx is 0, this means the luma signal. If cIdx is 1 or cIdx is 2, this means a chroma signal. Furthermore, if cIdx is 1, this means Cb among the chroma signals; if cIdx is 2, this means Cr.
- a quantization parameter: qP
In this case, the output is as follows.
- the array of scaled transform coefficients: (nW×nH) array d with elements d_ij
The parameter log2TrSize is derived by log2TrSize = (Log2(nW) + Log2(nH)) >> 1. The parameter shift is derived differently depending on cIdx. If cIdx is 0 (the luma signal case), the parameter shift is derived as shift = BitDepth_Y + log2TrSize - 5. If cIdx is not 0 (the chroma signal case), it is derived as shift = BitDepth_C + log2TrSize - 5. Here, BitDepth_Y and BitDepth_C mean the number of bits of the samples of the current image (e.g., 8 bits).
The array levelScale[] for the scaling parameter is given by Equation 1 below.
[equation 1]
levelScale[k] = {40, 45, 51, 57, 64, 72} with k = 0..5
The scaled transform coefficients are calculated by the following process.
First, the scaling factor m_ij is derived by the following process.
- If scaling_list_enable_flag is 0, the scaling factor m_ij is derived as in Equation 2 below.
[equation 2]
m_ij = 16
- If scaling_list_enable_flag is not 0, the scaling factor m_ij is derived as in Equation 3 below.
[equation 3]
m_ij = ScalingFactor[SizeID][RefMatrixID][trafoType][i*nW+j]
In Equation 3, SizeID is derived from the size of the transform block according to Table 1 below, and RefMatrixID and trafoType are derived from Equation 4 and Equation 5 below, respectively. Furthermore, in Equation 4, scaling_list_pred_matrix_id_delta is signaled through the sequence parameter set (SPS) or picture parameter set (PPS) of the bitstream.
[equation 4]
RefMatrixID=MatrixID-scaling_list_pred_matrix_id_delta
[equation 5]
trafoType=((nW==nH)?0:((nW>nH)?1:2))
Table 1 shows an example of SizeID values according to the size of the transform block.
[Table 1]
Size of quantization matrix      SizeID
4×4                              0
8×8 (16×4, 4×16)                 1
16×16 (32×8, 8×32)               2
32×32                            3
Next, the scaled transform coefficient d_ij is derived from Equation 6 below.
[equation 6]
d_ij = Clip3(-32768, 32767, ((c_ij * m_ij * levelScale[qP%6] << (qP/6)) + (1 << (shift-1))) >> shift)
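As a sketch of the scaling process above for the flat case (scaling_list_enable_flag equal to 0, so m_ij = 16 everywhere, per Equation 2), under an assumed 8-bit luma sample depth:

```python
# Sketch of the scaling process (Equations 1-6) for the flat case m_ij = 16.
# Follows the derivations in the text; not a conformance implementation.

levelScale = [40, 45, 51, 57, 64, 72]  # Equation 1, k = 0..5

def clip3(lo, hi, x):
    return max(lo, min(hi, x))

def scale_coeffs(c, nW, nH, qP, bit_depth=8):
    # log2TrSize = (Log2(nW) + Log2(nH)) >> 1, for power-of-two sizes
    log2TrSize = ((nW.bit_length() - 1) + (nH.bit_length() - 1)) >> 1
    shift = bit_depth + log2TrSize - 5
    m = 16  # flat scaling factor (Equation 2)
    d = [[0] * nW for _ in range(nH)]
    for i in range(nH):
        for j in range(nW):
            # Equation 6: scale, add rounding offset, down-shift, clip
            d[i][j] = clip3(-32768, 32767,
                            ((c[i][j] * m * levelScale[qP % 6] << (qP // 6))
                             + (1 << (shift - 1))) >> shift)
    return d

c = [[0] * 4 for _ in range(4)]
c[0][0] = 1
d = scale_coeffs(c, 4, 4, qP=0)  # shift = 8 + 2 - 5 = 5
```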
Meanwhile, the frequency transform process is not performed on a block to which the transform skip algorithm is applied as described above (hereafter referred to as a transform skip block). Therefore, a transform skip block may have transform coefficient characteristics different from those of an ordinary block on which the frequency transform process is performed. That is, if the scaling method applied to ordinary blocks on which the frequency transform process is performed is applied to transform skip blocks without change, coding efficiency may be reduced.
Accordingly, the present invention provides a method of performing scaling in consideration of the case where a block is a transform skip block.
If a quantization matrix (a default matrix or a non-default matrix) is used in the encoder and the decoder to improve the subjective picture quality of an image, the scaling factor derived from the quantization matrix may be applied differently depending on the position of the transform coefficient within the block. This method uses the characteristic that, when the block is transformed, the energy of the residual block is compacted toward the top-left of the block (i.e., the low-frequency region): quantization with a relatively larger quantization step size is performed on the high-frequency region, to which the human eye is less sensitive, rather than on the low-frequency region, to which the human eye is sensitive. According to this method, when an image is coded, the subjective picture quality of the regions to which the human eye is sensitive can be improved.
However, if the transform skip algorithm is applied, the energy of the residual block is not compacted toward the low-frequency region of the residual block, because the frequency-domain transform/inverse transform is not performed on the residual block. In this case, if the quantization/inverse-quantization method used in the existing frequency domain is applied, there is a drawback that distortion in the image or block becomes severe. Therefore, if a quantization matrix is used in an image, there is a need for a scaling (quantization/inverse-quantization) method that reduces distortion in the image or in blocks on which the frequency-domain transform/inverse transform is not performed (i.e., transform skip blocks). For example, there is a method of not applying the quantization matrix to transform skip blocks. In this method, an identical basic scaling factor may be applied regardless of the position of the transform coefficient within the block.
[Embodiment 1] A method and apparatus for applying the scaling factor to a transform skip block identically, regardless of the position of the transform coefficient within the block
FIG. 6 is a flowchart illustrating a scaling method for a residual signal (or transform coefficient) according to an embodiment of the present invention.
The scaling method of FIG. 6 may be performed in the encoding apparatus of FIG. 1 or the decoding apparatus of FIG. 2. More specifically, it may be performed in the quantization unit or inverse-quantization unit of FIG. 1 or 2. In the embodiment of FIG. 6, although the scaling method is illustrated as being performed in the encoding apparatus for convenience of description, it may be applied identically in the decoding apparatus.
Referring to FIG. 6, the scaling factor m_ij applied when performing scaling (quantization or inverse quantization) on the residual signal (or transform coefficient) of the current block may be derived depending on whether the current block is a transform skip block.
In step S600, the encoding apparatus determines whether the current block is a transform skip block.
Whether the current block is a transform skip block may be determined based on information indicating whether the current block is a transform skip block. For example, this information may be a flag, transSkipFlag. The value of the flag transSkipFlag is derived by performing entropy decoding on the information about the transform skip block in the bitstream. If the current block is a transform skip block, the value of the flag transSkipFlag may be 1. If the current block is not a transform skip block, the value may be 0.
If, as a result of the determination, the current block is a transform skip block (e.g., the value of the flag transSkipFlag is 1), the encoding apparatus derives the scaling factor m_ij in step S610, regardless of the position of the residual signal (or transform coefficient) within the current block.
Here, as shown in FIG. 6, the scaling factor m_ij may be set to a specific basic scaling factor value T. For example, this specific basic scaling factor value T may be 16.
If, as a result of the determination, the current block is not a transform skip block (e.g., the value of the flag transSkipFlag is 0), the encoding apparatus derives the scaling factor m_ij in step S620 based on the position of the residual signal (or transform coefficient) within the current block.
Here, the quantization matrix may be used to set the scaling factor m_ij differently for the residual signals (or transform coefficients) within the current block. As shown in FIG. 6, the scaling factor m_ij may be derived as in Equation 7.
[equation 7]
m_ij = ScalingFactor[SizeID][RefMatrixID][trafoType][i*nW+j]
In Equation 7, ScalingFactor is the array in which the scaling factors are stored. SizeID may be a value indicating the size of the current block (transform block or quantization matrix), and the value of SizeID may be derived according to the size of the current block (transform block) as in Table 1 above. RefMatrixID and trafoType may be derived from Equation 8 and Equation 9, respectively. nW is the width of the current block.
[equation 8]
RefMatrixID=MatrixID-scaling_list_pred_matrix_id_delta
In Equation 8, the value of MatrixID may mean the type of the quantization matrix according to the prediction mode and color component. For example, the value of MatrixID may be derived as in Table 2 below. scaling_list_pred_matrix_id_delta is signaled through the sequence parameter set (SPS) or picture parameter set (PPS) in the bitstream.
[equation 9]
trafoType=((nW==nH)?0:((nW>nH)?1:2))
In Equation 9, nW means the width of the current block, and nH means the height of the current block.
Table 2 shows MatrixID values according to the prediction mode and color component.
[Table 2]
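The derivation of FIG. 6 can be sketched as follows. The quantization-matrix values and the flattened single-matrix lookup here are illustrative simplifications of the full ScalingFactor[SizeID][RefMatrixID][trafoType] indexing of Equation 7:

```python
# Sketch of the FIG. 6 scaling-factor derivation: a transform skip block
# gets the flat basic value T at every position (step S610), while other
# blocks read a position-dependent value derived from the quantization
# matrix (step S620). The 4x4 matrix values below are illustrative only.

T = 16  # basic scaling factor value for transform skip blocks

def derive_m(i, j, nW, trans_skip_flag, scaling_factor):
    if trans_skip_flag:
        return T                       # S610: position-independent
    # S620: position-dependent lookup (Equation 7, reduced here to a
    # flattened [i*nW + j] access of a single illustrative matrix)
    return scaling_factor[i * nW + j]

example_matrix = [16, 16, 20, 24,
                  16, 20, 24, 28,
                  20, 24, 28, 33,
                  24, 28, 33, 41]  # illustrative 4x4 quantization matrix

m_skip = derive_m(3, 3, 4, True, example_matrix)    # flat value 16
m_norm = derive_m(3, 3, 4, False, example_matrix)   # high-frequency corner
```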
FIG. 7 is a flowchart illustrating a scaling method for a residual signal (or transform coefficient) according to another embodiment of the present invention.
The scaling method of FIG. 7 may be performed in the encoding apparatus of FIG. 1 or the decoding apparatus of FIG. 2. More specifically, it may be performed in the quantization unit or inverse-quantization unit of FIG. 1 or 2. In the embodiment of FIG. 7, although the scaling method is illustrated as being performed in the encoding apparatus for convenience of description, it may be applied identically in the decoding apparatus.
Referring to FIG. 7, the scaling factor m_ij applied when performing scaling (quantization or inverse quantization) on the residual signal (or transform coefficient) of the current block may be derived depending on whether the current block is a transform skip block and whether a quantization matrix is used.
In step S700, the encoding apparatus determines whether the current block uses a quantization matrix and whether the current block is a transform skip block.
Whether the current block uses a quantization matrix may be determined based on information indicating whether the current block uses a quantization matrix. For example, this information may be a flag, scaling_list_enable_flag. The value of the flag scaling_list_enable_flag is derived by performing entropy decoding on the information about the use of the quantization matrix in the bitstream. If the current block uses a quantization matrix, the value of the flag may be 1. If the current block does not use a quantization matrix, the value may be 0.
In addition, whether the current block is a transform skip block may be determined based on information indicating whether the current block is a transform skip block. For example, this information may be a flag, transSkipFlag. The value of the flag transSkipFlag is derived by performing entropy decoding on the information about the transform skip block in the bitstream. If the current block is a transform skip block, the value of the flag transSkipFlag may be 1. If the current block is not a transform skip block, the value may be 0.
If, as a result of the determination, the current block is a transform skip block or the current block does not use a quantization matrix (e.g., transSkipFlag == 1 or scaling_list_enable_flag == 0), the encoding apparatus derives the scaling factor m_ij in step S710, regardless of the position of the residual signal (or transform coefficient) within the current block.
As shown in FIG. 7, the scaling factor m_ij may be set to a specific basic scaling factor value T. For example, this specific basic scaling factor value T may be 16.
If, as a result of the determination, the current block is not a transform skip block and the current block uses a quantization matrix, the encoding apparatus derives the scaling factor m_ij in step S720 based on the position of the residual signal (or transform coefficient) within the current block.
The scaling factor m_ij may be set differently, using the quantization matrix, depending on the position of the residual signal (or transform coefficient) within the current block, and may be derived as in the equation shown in step S720 of FIG. 7. The scaling factor m_ij derived by the equation shown in step S720 has been described with reference to step S620 of FIG. 6, and its description is therefore omitted.
As described with reference to FIGS. 6 and 7, if the current block (i.e., the target block to be encoded or decoded) is a transform skip block, the scaling factor having the specific value T is applied to the current block (i.e., the transform skip block), regardless of the position of the coefficient (or signal) within the current block. Here, the value of the scaling factor according to an embodiment of the present invention may be set differently depending on each coding parameter applied to the corresponding block.
For example, the value of the scaling factor to be applied to the corresponding block may be set as follows, depending on the value of the parameter indicating whether a quantization matrix is used (e.g., scaling_list_enable_flag).
- If a quantization matrix is used (e.g., scaling_list_enable_flag == 1), the basic scaling factor value is set to T1 (m_ij = T1).
- If a quantization matrix is not used (e.g., scaling_list_enable_flag == 0), the basic scaling factor value is set to T2 (m_ij = T2).
The T1 and/or T2 values may be determined by the encoder and signaled, or predetermined T1 and/or T2 values may be used. If the T1 and/or T2 values are signaled through the bitstream, the decoder obtains the T1 and/or T2 values by parsing the bitstream.
As another example, the value of the scaling factor to be applied to the corresponding block may be set as follows, based on the value of information about the color characteristic of the signal that can be derived for the corresponding block (e.g., the color component index cIdx). The color component index cIdx indicates, depending on its value, a luma signal (i.e., a Y signal) or a chroma signal (i.e., a Cb signal or a Cr signal).
- Example 1: The basic scaling factor value is set to Ty or Tc depending on whether the signal of the corresponding block is a luma signal. For example, if the signal of the corresponding block is a luma signal, the basic scaling factor value is set to Ty. If the signal of the corresponding block is not a luma signal (i.e., is a chroma signal), the basic scaling factor value is set to Tc.
- Example 2: The basic scaling factor value is set according to each color component of the corresponding block. For example, if the color component of the corresponding block is a luma signal (i.e., a Y signal), the basic scaling factor value is set to Ty. If the chroma signal is a Cb signal, the basic scaling factor value is set to Tcb. If the chroma signal is a Cr signal, the basic scaling factor value is set to Tcr.
Here, the Ty, Tc, Tcb, and/or Tcr values may be determined by the encoder and signaled, or predetermined Ty, Tc, Tcb, and/or Tcr values may be used. If the Ty, Tc, Tcb, and/or Tcr values are signaled through the bitstream, the decoder obtains them by parsing the bitstream.
The methods according to embodiments of the present invention for determining the basic scaling factor depending on coding parameters may be applied independently or in combination, but the same scaling factor value must always be applied to the same transform skip block, regardless of the position of the coefficient (or signal) within that block (i.e., the target block to be encoded or decoded).
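As a minimal sketch of Example 2 above: all concrete values for Ty, Tcb, and Tcr below are hypothetical placeholders, since the text leaves them to be either signaled in the bitstream or predetermined.

```python
# Sketch of Example 2: one basic scaling factor per color component.
# Defaults of 16 are placeholders; the text allows these to be signaled
# by the encoder or predetermined.

def basic_factor(cIdx, Ty=16, Tcb=16, Tcr=16):
    # cIdx == 0: luma (Y), cIdx == 1: chroma Cb, cIdx == 2: chroma Cr
    return {0: Ty, 1: Tcb, 2: Tcr}[cIdx]

y_factor = basic_factor(0)            # luma uses Ty
cb_factor = basic_factor(1, Tcb=15)   # Cb uses a (hypothetical) signaled Tcb
cr_factor = basic_factor(2, Tcr=18)   # Cr uses a (hypothetical) signaled Tcr
```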
The scaling process for transform coefficients into which an embodiment of the present invention has been incorporated may be performed as follows.
Scaling process for transform coefficients
In this case, the inputs are as follows.
- the width of the current transform block: nW
- the height of the current transform block: nH
- an array of transform coefficients with elements c_ij: (nW×nH) array c
- information indicating whether the transform skip algorithm is applied to the current transform block
- an index for the luma and chroma signals of the current transform block: cIdx
If cIdx is 0, this means the luma signal. If cIdx is 1 or cIdx is 2, this means a chroma signal. Furthermore, if cIdx is 1, this means Cb among the chroma signals; if cIdx is 2, this means Cr.
- a quantization parameter: qP
In this case, the output is as follows.
- the array of scaled transform coefficients: (nW×nH) array d with elements d_ij
The parameter log2TrSize is derived by log2TrSize = (Log2(nW) + Log2(nH)) >> 1. The parameter shift is derived differently according to cIdx. If cIdx is 0 (i.e., the luma signal case), the parameter shift is derived as shift = BitDepth_Y + log2TrSize - 5. If cIdx is not 0 (i.e., the chroma signal case), it is derived as shift = BitDepth_C + log2TrSize - 5. Here, BitDepth_Y and BitDepth_C mean the number of bits of the samples of the current image (e.g., 8 bits).
Array " levelScale [] " for zooming parameter is identical with equation 10.
[equation 10]
levelScale[k] = {40, 45, 51, 57, 64, 72} with k = 0..5
The scaled transform coefficients are calculated by the following process.
First, the scaling factor m_ij is derived by the following process.
- If scaling_list_enable_flag is 0 or the current transform block is a transform skip block, the scaling factor m_ij is derived as in Equation 11 below.
[equation 11]
m_ij = 16
- Otherwise, the scaling factor m_ij is derived as in Equation 12 below.
[equation 12]
m_ij = ScalingFactor[SizeID][RefMatrixID][trafoType][i*nW+j]
In Equation 12, SizeID is derived from the size of the transform block according to Table 1 above, and RefMatrixID and trafoType are derived from Equation 13 and Equation 14 below, respectively. Furthermore, in Equation 13, scaling_list_pred_matrix_id_delta is signaled through the sequence parameter set (SPS) of the bitstream.
[equation 13]
RefMatrixID=MatrixID-scaling_list_pred_matrix_id_delta
[equation 14]
trafoType=((nW==nH)?0:((nW>nH)?1:2))
Next, the scaled transform coefficient d_ij is derived from Equation 15 below.
[equation 15]
d_ij = Clip3(-32768, 32767, ((c_ij * m_ij * levelScale[qP%6] << (qP/6)) + (1 << (shift-1))) >> shift)
Meanwhile, the inverse-transform process is performed on the transform coefficients scaled by the scaling process as described above. Here, for a current transform block to which the transform skip algorithm has been applied, the inverse-transform process is not performed; only the following down-shift operation process is performed.
1. If cIdx of the current block is 0 (the luma signal case), shift = 13 - BitDepth_Y. If cIdx of the current block is not 0 (the chroma signal case), shift = 13 - BitDepth_C.
2. The array r_ij (i = 0..(nW)-1, j = 0..(nH)-1) for the residual block is set as follows.
If shift is greater than 0, r_ij = (d_ij + (1 << (shift-1))) >> shift. If shift is not greater than 0, r_ij = d_ij << (-shift).
Here, d_ij is the array of scaled transform coefficients, and r_ij means the array of the residual block obtained by performing the inverse transform on the scaled transform coefficients.
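The down-shift-only path above can be sketched as follows, taking the scaled coefficients d_ij as input and assuming an 8-bit sample depth:

```python
# Sketch of the down-shift step applied instead of the inverse transform
# for a transform skip block: shift = 13 - BitDepth, with a rounded right
# shift when shift > 0 and a left shift otherwise, as in the text.

def skip_inverse(d, bit_depth):
    shift = 13 - bit_depth
    if shift > 0:
        rnd = 1 << (shift - 1)
        return [[(v + rnd) >> shift for v in row] for row in d]
    return [[v << -shift for v in row] for row in d]

# 8-bit samples give shift = 5; note the arithmetic (floor) right shift.
r = skip_inverse([[20, -20], [0, 33]], bit_depth=8)
```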
As an embodiment into which the inverse-transform process for the scaled transform coefficients has been incorporated, the transform process for scaled transform coefficients may be performed as follows.
Transform process for scaled transform coefficients
In this case, the inputs are as follows.
- the width of the current transform block: nW
- the height of the current transform block: nH
- an array of transform coefficients with elements d_ij: (nW×nH) array d
- information indicating whether the transform skip algorithm is applied to the current transform block
- an index for the luma and chroma signals of the current transform block: cIdx
If cIdx is 0, this means the luma signal. If cIdx is 1 or cIdx is 2, this means a chroma signal. Furthermore, if cIdx is 1, this means Cb among the chroma signals; if cIdx is 2, this means Cr.
- a quantization parameter: qP
In this case, the output is as follows.
- the array of the residual block obtained by performing the inverse transform on the scaled transform coefficients: (nW×nH) array r
If the coding mode PredMode for the current block is the intra-prediction mode, the value of Log2(nW*nH) is 4, and the value of cIdx is 0, the parameters horizTrType and vertTrType are obtained from Table 3 below depending on the intra-prediction mode of the luma signal. Otherwise, the parameters horizTrType and vertTrType are set to 0.
Table 3 shows an example of the values of the parameters horizTrType and vertTrType according to the intra-prediction mode.
[Table 3]
IntraPredMode  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17
vertTrType     1  0  0  0  0  0  0  0  0  0  0  1  1  1  1  1  1  1
horizTrType    1  0  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1
IntraPredMode 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34
vertTrType     1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1
horizTrType    1  1  1  1  1  1  1  1  0  0  0  0  0  0  0  0  0
The residual signal for the current block is obtained in the following order.
First, if the transform skip algorithm has been applied to the current block, the following process is performed.
1. If cIdx is 0, shift = 13 - BitDepth_Y. If cIdx is not 0, shift = 13 - BitDepth_C.
2. The array r_ij (i = 0..(nW)-1, j = 0..(nH)-1) for the residual block is set as follows.
- If shift is greater than 0, r_ij = (d_ij + (1 << (shift-1))) >> shift. If shift is not greater than 0, r_ij = d_ij << (-shift).
If the transform skip algorithm has not been applied to the current block, the following process is performed.
1. The inverse-transform process is performed on the scaled transform coefficients using the values of the parameters horizTrType and vertTrType. First, the size of the current block (nW, nH), the array of scaled transform coefficients ((nW×nH) array d), and the parameter horizTrType are received, and a 1-dimensional inverse transform is performed horizontally to output an (nW×nH) array e.
2. Next, the (nW×nH) array e is received, and an (nW×nH) array g is derived as in Equation 16.
[equation 16]
g_ij = Clip3(-32768, 32767, (e_ij + 64) >> 7)
3. Next, the size of the current block (nW, nH), the (nW×nH) array g, and the parameter vertTrType are received, and a 1-dimensional inverse transform is performed vertically.
4. Next, the (nW×nH) array r for the residual block is set depending on cIdx as in Equation 17.
[equation 17]
r_ij = (f_ij + (1 << (shift-1))) >> shift
In Equation 17, shift = 20 - BitDepth_Y when cIdx is 0; otherwise, shift = 20 - BitDepth_C. BitDepth means the number of bits of the samples of the current image (e.g., 8 bits).
By performing the above-described scaling process on the transform coefficients and performing the above-described transform process on the scaled transform coefficients, a reconstructed residual block can be generated. In addition, a reconstructed block is generated by adding the prediction block, generated by intra prediction or inter prediction, to the reconstructed residual block. Here, the reconstructed block may be a block to which a loop filter has been applied, or a block to which a loop filter has not yet been applied.
Hereafter, the present invention provides a method of signaling the basic scaling factor that is derived depending on whether the current block is a transform skip block.
According to an embodiment of the present invention, the basic scaling factor derived depending on whether the current block is a transform skip block is signaled through the sequence parameter set (SPS).
Table 4 shows an example of SPS syntax for signaling information about the basic scaling factor according to an embodiment of the present invention.
[Table 4]
Referring to Table 4, transform_skip_enabled_flag indicates whether the transform skip algorithm is used in the current sequence.
If the transform skip algorithm is used in the current sequence, flat_scale_factor_y_minus16, flat_scale_factor_cb_minus16, and flat_scale_factor_cr_minus16 are signaled. Each value may be coded in a form having a plus or minus sign (se(v)), or in a form having 0 or a plus sign (ue(v)).
flat_scale_factor_y_minus16 means the scaling factor for the luma signal. For example, if the value of flat_scale_factor_y_minus16 is 0, the scaling factor for the luma signal has the value 16, obtained by adding 16 to 0.
flat_scale_factor_cb_minus16 means the scaling factor for the chroma signal Cb. flat_scale_factor_cr_minus16 means the scaling factor for the chroma signal Cr.
The scaling factors for the luma and chroma signals may be derived as in Equations 18 to 20.
Here, the basic scaling factor FlatScalingFactor[cIdx] stores the scaling factors for the luma and chroma signals. For example, if the color component index cIdx is 0, the basic scaling factor may indicate the luma (Y) signal. If cIdx is 1, it may indicate the Cb chroma signal. If cIdx is 2, it may indicate the Cr chroma signal. In addition, the value of the basic scaling factor FlatScalingFactor[cIdx] may have a specific range of values. For example, for an 8-bit signal it may have values from -15 to 255 - 16.
The basic scaling factor for the luma signal may be derived as in Equation 18.
[equation 18]
FlatScalingFactor[0] = 16 + ((transform_skip_enabled_flag == 1) ? flat_scale_factor_y_minus16 : 0)
The basic scaling factor for the Cb chroma signal may be derived as in Equation 19.
[equation 19]
FlatScalingFactor[1] = 16 + ((transform_skip_enabled_flag == 1) ? flat_scale_factor_cb_minus16 : 0)
The basic scaling factor for the Cr chroma signal may be derived as in Equation 20.
[equation 20]
FlatScalingFactor[2] = 16 + ((transform_skip_enabled_flag == 1) ? flat_scale_factor_cr_minus16 : 0)
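Equations 18 to 20 can be sketched as a single derivation; the syntax-element values passed in below are illustrative:

```python
# Sketch of Equations 18-20: each of the three basic scaling factors
# defaults to 16, with the signaled *_minus16 offset added only when the
# transform skip algorithm is enabled for the sequence.

def derive_flat_scaling_factors(transform_skip_enabled_flag,
                                y_minus16=0, cb_minus16=0, cr_minus16=0):
    offsets = [y_minus16, cb_minus16, cr_minus16]   # cIdx = 0, 1, 2
    return [16 + (off if transform_skip_enabled_flag == 1 else 0)
            for off in offsets]

disabled = derive_flat_scaling_factors(0, 5, 5, 5)   # offsets ignored
enabled = derive_flat_scaling_factors(1, 2, -1, 0)   # offsets applied
```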
By incorporating into the scaling process the method, according to an embodiment of the present invention, of signaling the basic scaling factor derived depending on whether the current block is a transform skip block, the scaling process for transform coefficients may be performed as follows.
Scaling process for transform coefficients
In this case, the inputs are as follows.
- the width of the current transform block: nW
- the height of the current transform block: nH
- an array of transform coefficients with elements c_ij: (nW×nH) array c
- information indicating whether the transform skip algorithm is applied to the current transform block: transSkipFlag
If the value of transSkipFlag is 1, it indicates that the transform skip algorithm is applied to the current block. If the value of transSkipFlag is 0, it indicates that the transform skip algorithm is not applied to the current block.
- an index for the luma and chroma signals of the current block: cIdx
If cIdx is 0, this means the luma signal. If cIdx is 1 or cIdx is 2, this means a chroma signal. Furthermore, if cIdx is 1, this means Cb among the chroma signals; if cIdx is 2, this means Cr.
- a quantization parameter: qP
In this case, the output is as follows.
- the array of scaled transform coefficients: (nW×nH) array d with elements d_ij
The parameter log2TrSize is derived by log2TrSize = (Log2(nW) + Log2(nH)) >> 1. The parameter shift is derived differently depending on cIdx. If cIdx is 0 (the luma signal case), the parameter shift is derived as shift = BitDepth_Y + log2TrSize - 5. If cIdx is not 0 (i.e., the chroma signal case), it is derived as shift = BitDepth_C + log2TrSize - 5. Here, BitDepth_Y and BitDepth_C mean the number of bits of the samples of the current image (e.g., 8 bits).
The array levelScale[] for the scaling parameter is given by Equation 21 below.
[equation 21]
levelScale[k] = {40, 45, 51, 57, 64, 72} with k = 0..5
The scaled transform coefficients are calculated by the following process.
First, the scaling factor m_ij is derived by the following process.
- If scaling_list_enable_flag is 0, the scaling factor m_ij is derived as in Equation 22 below.
[equation 22]
m_ij = (transSkipFlag == 1) ? FlatScalingFactor[cIdx] : 16
- Otherwise (i.e., if scaling_list_enable_flag is 1), the scaling factor m_ij is derived as in Equation 23 below.
[equation 23]
m_ij = (transSkipFlag == 1) ? FlatScalingFactor[cIdx] : ScalingFactor[SizeID][RefMatrixID][trafoType][i*nW+j]
In Equation 23, SizeID is derived from the size of the block according to Table 1 above. RefMatrixID and trafoType are derived from Equation 24 and Equation 25 below, respectively. In Equation 24, scaling_list_pred_matrix_id_delta is signaled through the sequence parameter set (SPS) of the bitstream.
[equation 24]
RefMatrixID=MatrixID-scaling_list_pred_matrix_id_delta
[equation 25]
trafoType=((nW==nH)?0:((nW>nH)?1:2))
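Equations 22 to 25 can be combined into one illustrative sketch. The function below is a non-authoritative rendering of the derivation; the FlatScaleFactor and ScalingFactor lookup tables and the argument layout are assumptions made for the example.

```python
def derive_m(i, j, nW, nH, cIdx, transSkipFlag, scaling_list_enable_flag,
             FlatScaleFactor, ScalingFactor, SizeID, MatrixID,
             scaling_list_pred_matrix_id_delta):
    """Sketch of equations 22-25 for the scaling factor m_ij (illustrative)."""
    if transSkipFlag == 1:
        # Equations 22/23: transform skip blocks always use the flat factor.
        return FlatScaleFactor[cIdx]
    if scaling_list_enable_flag == 0:
        return 16                      # equation 22, non-skip branch
    # Equation 24: reference matrix id, signaled as a delta in the SPS.
    RefMatrixID = MatrixID - scaling_list_pred_matrix_id_delta
    # Equation 25: 0 = square, 1 = wider than high, 2 = higher than wide.
    trafoType = 0 if nW == nH else (1 if nW > nH else 2)
    # Equation 23: position-dependent entry of the quantization matrix.
    return ScalingFactor[SizeID][RefMatrixID][trafoType][i * nW + j]
```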
Next, the scaled transform coefficients d_ij are derived from equation 26.
[equation 26]
d_ij = Clip3(-32768, 32767, ((c_ij * m_ij * levelScale[qP%6] << (qP/6)) + (1 << (shift - 1))) >> shift)
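Equations 21 and 26 together can be sketched as follows, assuming qP, m_ij and shift have already been derived as described above (an illustrative sketch, not the normative text).

```python
levelScale = [40, 45, 51, 57, 64, 72]          # equation 21, k = 0..5

def clip3(lo, hi, x):
    # Clip3(lo, hi, x) as used in equation 26.
    return max(lo, min(hi, x))

def scale_coeff(c_ij, m_ij, qP, shift):
    """Sketch of equation 26: dequantize one coefficient c_ij into d_ij."""
    # Scale, add the rounding offset, shift down, then clip to 16 bits.
    v = (c_ij * m_ij * (levelScale[qP % 6] << (qP // 6)) + (1 << (shift - 1))) >> shift
    return clip3(-32768, 32767, v)
```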
Meanwhile, in addition to the SPS described above, the basic scaling factor according to the embodiment of the present invention, derived depending on whether the current block is a transform skip block, may be signaled in the picture parameter set (PPS) or the slice header ("SliceHeader"). The basic scaling factor may also be signaled per CU or per TU.
The values flat_scale_factor_y_minus16, flat_scale_factor_cb_minus16 and flat_scale_factor_cr_minus16 signaled in the SPS described above may be updated and used in the PPS (or the slice header, CU or TU).
Table 5 shows an example of PPS syntax that signals information about the basic scaling factor according to another embodiment of the present invention.
[table 5]
Referring to table 5, transform_skip_enabled_flag indicates whether the transform skip algorithm is used in the current picture. If the transform skip algorithm is used, pps_flat_scaling_factor_present_flag is signaled.
For example, if the value of pps_flat_scaling_factor_present_flag is 0, the values flat_scale_factor_y_minus16, flat_scale_factor_cb_minus16 and flat_scale_factor_cr_minus16 applied in the SPS described above are used as the scaling factor for transform skip blocks. If the value of pps_flat_scaling_factor_present_flag is 1, the respective values are signaled to update the values of flat_scale_factor_y_minus16, flat_scale_factor_cb_minus16 and flat_scale_factor_cr_minus16 applied in the SPS described above.
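A minimal sketch of this override rule, assuming the three *_minus16 values are carried as simple (y, cb, cr) tuples; the function name and argument layout are illustrative, not taken from the syntax tables.

```python
def effective_flat_factors(sps_minus16, pps_minus16,
                           pps_flat_scaling_factor_present_flag):
    """Return the effective (y, cb, cr) flat scaling factors, each value + 16.

    When the PPS carries its own values (flag == 1), they replace the SPS
    values; otherwise the SPS values remain in effect.
    """
    src = pps_minus16 if pps_flat_scaling_factor_present_flag else sps_minus16
    return tuple(v + 16 for v in src)
```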
The signaled values flat_scale_factor_y_minus16, flat_scale_factor_cb_minus16 and flat_scale_factor_cr_minus16 are used as the scaling factor for transform skip blocks of the current picture. These values continue to be used until they are changed again. Alternatively, these values may be applied only to the current picture, and the scaling factor values used in the SPS may be applied to the next picture.
Here, each of the values flat_scale_factor_y_minus16, flat_scale_factor_cb_minus16 and flat_scale_factor_cr_minus16 may be coded in a form having a plus or minus sign (se(v)). Alternatively, each of these values may be coded in a form having zero and a plus sign only (ue(v)).
The values flat_scale_factor_y_minus16, flat_scale_factor_cb_minus16 and flat_scale_factor_cr_minus16 may be signaled with different values for the luma signal and each chroma signal. For example, flat_scale_factor_y_minus16 may be used to signal the scaling factor for the luma signal, flat_scale_factor_cb_minus16 may be used to signal the scaling factor for the Cb chroma signal, and flat_scale_factor_cr_minus16 may be used to signal the scaling factor for the Cr chroma signal. Alternatively, flat_scale_factor_y_minus16 may be used to signal the scaling factor for the luma signal, and flat_scale_factor_cb_cr_minus16 may be used to signal the scaling factor for the chroma signals. Alternatively, a single value flat_scale_factor_y_cb_cr_minus16 may be used to signal the scaling factor for both the luma signal and the chroma signals.
As mentioned above, the values flat_scale_factor_y_minus16, flat_scale_factor_cb_minus16 and flat_scale_factor_cr_minus16 signaled in the SPS or PPS may be updated and used in the slice header (or the CU or TU).
Table 6 shows an example of slice header ("SliceHeader") syntax that signals information about the basic scaling factor according to another embodiment of the present invention.
[table 6]
Referring to table 6, transform_skip_enabled_flag indicates whether the transform skip algorithm is used in the current slice. If the transform skip algorithm is to be used, the value of flat_scaling_factor_override_flag is signaled.
For example, if the value of flat_scaling_factor_override_flag is 0, the values flat_scale_factor_y_minus16, flat_scale_factor_cb_minus16 and flat_scale_factor_cr_minus16 applied in the SPS or PPS described above are used as the scaling factor for transform skip blocks. If the value of flat_scaling_factor_override_flag is 1, the respective values are signaled to update the values of flat_scale_factor_y_minus16, flat_scale_factor_cb_minus16 and flat_scale_factor_cr_minus16 applied in the SPS or PPS described above.
The values flat_scale_factor_y_delta, flat_scale_factor_cb_delta and flat_scale_factor_cr_delta are used as the scaling factor for transform skip blocks of the current slice.
Here, each of the values flat_scale_factor_y_delta, flat_scale_factor_cb_delta and flat_scale_factor_cr_delta may be coded in a form having a plus or minus sign (se(v)). Alternatively, each value may be coded in a form having zero and a plus sign only (ue(v)).
The values flat_scale_factor_y_delta, flat_scale_factor_cb_delta and flat_scale_factor_cr_delta may be signaled with different values for the luma signal and each chroma signal. For example, flat_scale_factor_y_delta may be used to signal the scaling factor for the luma signal, flat_scale_factor_cb_delta may be used to signal the scaling factor for the Cb chroma signal, and flat_scale_factor_cr_delta may be used to signal the scaling factor for the Cr chroma signal. Alternatively, flat_scale_factor_y_delta may be used to signal the scaling factor for the luma signal, and flat_scale_factor_cb_cr_delta may be used to signal the scaling factor for the chroma signals. Alternatively, a single value flat_scale_factor_y_cb_cr_delta may be used to signal the scaling factor for both the luma signal and the chroma signals.
The basic scaling factor may be derived as in equations 27 to 29 using the values flat_scale_factor_y_delta, flat_scale_factor_cb_delta and flat_scale_factor_cr_delta signaled as described above.
Here, the basic scaling factor FlatScalingFactor[cIdx] stores the scaling factors for the luma signal and the chroma signals. For example, if the color component index cIdx is 0, the basic scaling factor may indicate the luma (Y) signal. If cIdx is 1, it may indicate the Cb chroma signal. If cIdx is 2, it may indicate the Cr chroma signal. In addition, the value of the basic scaling factor FlatScalingFactor[cIdx] may be limited to a particular range of values. For example, an 8-bit signal may have a range from -15 to 255-16.
The basic scaling factor for the luma signal may be derived using flat_scale_factor_y_delta as in equation 27.
[equation 27]
FlatScalingFactor[0] = 16 + ((transform_skip_enabled_flag == 1) ? (flat_scale_factor_y_minus16 + flat_scale_factor_y_delta) : 0)
The basic scaling factor for the Cb chroma signal may be derived using flat_scale_factor_cb_delta as in equation 28.
[equation 28]
FlatScalingFactor[1] = 16 + ((transform_skip_enabled_flag == 1) ? (flat_scale_factor_cb_minus16 + flat_scale_factor_cb_delta) : 0)
The basic scaling factor for the Cr chroma signal may be derived using flat_scale_factor_cr_delta as in equation 29.
[equation 29]
FlatScalingFactor[2] = 16 + ((transform_skip_enabled_flag == 1) ? (flat_scale_factor_cr_minus16 + flat_scale_factor_cr_delta) : 0)
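Equations 27 to 29 share one pattern, which can be sketched with a single helper (an illustrative rendering, not normative text): 16 plus the signaled offsets when transform skip is enabled, and 16 otherwise.

```python
def flat_scaling_factor(minus16, delta, transform_skip_enabled_flag):
    """Sketch of equations 27-29 for one color component (illustrative)."""
    if transform_skip_enabled_flag == 1:
        # 16 plus the SPS/PPS *_minus16 value plus the slice-level delta.
        return 16 + minus16 + delta
    # When transform skip is disabled the ternary selects 0, leaving 16.
    return 16
```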
Meanwhile, the above-described embodiments may have different application ranges depending on the size of the block or the depth of the CU or TU. The parameter determining the application range (for example, information about the size or depth of the block) may be set by the encoder and the decoder so as to have a predetermined value, or may have a predetermined value according to a profile or a level. When the encoder writes the parameter value in the bitstream, the decoder may obtain the value from the bitstream and use it.
If the application range differs depending on the depth of the CU, the following three methods may be applied to the above-described embodiments, as illustrated in table 7. Method A is applied only to a given depth or a greater depth, method B is applied only to the given depth or a lesser depth, and method C is applied only to the given depth.
Table 7 shows an example of methods for determining the range in which a method of the present invention is applied depending on the depth of the CU (or TU). In table 7, the mark "O" means that the corresponding method is applied to a CU (or TU) of that depth, and the mark "X" means that it is not.
[table 7]
Depth of CU (or TU) indicating the application range | Method A | Method B | Method C
0           | X | O | X
1           | X | O | X
2           | O | O | O
3           | O | X | X
4 or higher | O | X | X
Referring to table 7, if the depth of the CU (or TU) is 2, then method A, method B and method C can all be applied to the embodiments of the present invention.
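The three application-range rules can be sketched as a small predicate. The threshold depth of 2 follows the example of table 7, and the function name is an assumption made for the sketch.

```python
def applies(method, depth, threshold=2):
    """Sketch of the table-7 rules: does `method` apply at this CU/TU depth?"""
    if method == 'A':            # applied at the threshold depth or higher
        return depth >= threshold
    if method == 'B':            # applied at the threshold depth or lower
        return depth <= threshold
    if method == 'C':            # applied only at the threshold depth
        return depth == threshold
    raise ValueError('unknown method: %s' % method)
```

At depth 2 all three methods apply, matching the row of table 7 where every column is marked "O".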
If the embodiments of the present invention are not applied to any depth of CU (or TU), this may be indicated with a specific indicator (for example, a flag), or may be expressed by signaling, as the value of the CU depth indicating the application range, a value greater by 1 than the maximum depth of the CU.
In addition, the above-described method of determining the application range depending on the depth of the CU (or TU) may be applied differently depending on the sizes of the luma block and the chroma block, and may be applied differently to the luma picture and the chroma picture.
Table 8 schematically shows an example of combinations of methods for determining the application range depending on the sizes of the luma block and the chroma block.
[table 8]
In the case of method "G 1" among the methods listed in table 8, if the size of the luma block is 8 (8x8, 8x4, 2x8, etc.) and the size of the chroma block is 4 (4x4, 4x2, 2x4), embodiment 1 of the present invention (G1-embodiment 1) may be applied to the luma signal and the chroma signal as well as to the horizontal signal and the vertical signal.
In the above exemplary systems, although the methods have been described based on flowcharts as a series of steps or blocks, the present invention is not limited to the order of the steps, and some steps may be performed in a different order from other steps or simultaneously with other steps. In addition, those skilled in the art will understand that the steps shown in the flowcharts are not exclusive, that additional steps may be included, and that one or more steps of a flowchart may be deleted without affecting the scope of the present invention.
The above description is merely an example of the technical spirit of the present invention, and those skilled in the art may change and modify the present invention in various ways without departing from its intrinsic characteristics. Accordingly, the disclosed embodiments should not be interpreted as limiting the technical spirit of the present invention but as illustrating it. The scope of the technical spirit of the present invention is not limited by these embodiments, and the scope of the present invention should be interpreted based on the appended claims. Accordingly, the present invention should be interpreted as covering all modifications and variations derived from the meaning and scope of the appended claims and their equivalents.

Claims (20)

1. A picture decoding method, comprising the steps of:
deriving a scaling factor for a current block depending on whether the current block is a transform skip block; and
performing scaling on the current block based on the scaling factor,
wherein the scaling factor for the current block is derived based on the position of a transform coefficient in the current block, and
wherein the transform skip block is a block to which a transform has not been applied and is specified based on information indicating whether an inverse transform is applied to the current block.
2. The picture decoding method according to claim 1, wherein the step of deriving the scaling factor for the current block comprises: if the current block is a transform skip block, deriving a basic scaling factor regardless of the position of the transform coefficient in the current block.
3. The picture decoding method according to claim 2, wherein:
the basic scaling factor has a specific scaling factor value, and
the specific scaling factor value is 16.
4. The picture decoding method according to claim 2, wherein the basic scaling factor has a different scaling factor value depending on whether the current block uses a quantization matrix.
5. The picture decoding method according to claim 2, wherein the basic scaling factor has a different scaling factor value depending on whether the current block is a luma block or a chroma block.
6. The picture decoding method according to claim 1, wherein a flag indicating whether a transform skip algorithm is used in the picture including the current block is signaled in a picture parameter set (PPS).
7. The picture decoding method according to claim 6, wherein the basic scaling factor includes information about the scaling factors for the luma signal and the chroma signals.
8. The picture decoding method according to claim 1, wherein the step of deriving the scaling factor for the current block comprises: if the current block is a transform skip block or the current block does not use a quantization matrix, deriving a basic scaling factor regardless of the position of the transform coefficient in the current block.
9. The picture decoding method according to claim 1, wherein the step of deriving the scaling factor for the current block comprises: if the current block is not a transform skip block, deriving the scaling factor for the current block based on the position of the transform coefficient in the current block using a quantization matrix.
10. An image decoding apparatus, comprising:
an inverse quantization unit for deriving a scaling factor for a current block depending on whether the current block is a transform skip block, and performing scaling on the current block based on the scaling factor,
wherein the scaling factor for the current block is derived based on the position of a transform coefficient in the current block, and
wherein the transform skip block is a block to which a transform has not been applied and is specified based on information indicating whether an inverse transform is applied to the current block.
11. An image encoding method, comprising the steps of:
deriving a scaling factor for a current block depending on whether the current block is a transform skip block; and
performing scaling on the current block based on the scaling factor,
wherein the scaling factor for the current block is derived based on the position of a transform coefficient in the current block, and
wherein the transform skip block is a block to which a transform has not been applied and is specified based on information indicating whether an inverse transform is applied to the current block.
12. The image encoding method according to claim 11, wherein the step of deriving the scaling factor for the current block comprises: if the current block is a transform skip block, deriving a basic scaling factor regardless of the position of the transform coefficient in the current block.
13. The image encoding method according to claim 12, wherein:
the basic scaling factor has a specific scaling factor value, and
the specific scaling factor value is 16.
14. The image encoding method according to claim 12, wherein the basic scaling factor has a different scaling factor value depending on whether the current block uses a quantization matrix.
15. The image encoding method according to claim 12, wherein the basic scaling factor has a different scaling factor value depending on whether the current block is a luma block or a chroma block.
16. The image encoding method according to claim 11, wherein a flag indicating whether a transform skip algorithm is used in the picture including the current block is signaled in a picture parameter set (PPS).
17. The image encoding method according to claim 16, wherein the basic scaling factor includes information about the scaling factors for the luma signal and the chroma signals.
18. The image encoding method according to claim 11, wherein the step of deriving the scaling factor for the current block comprises: if the current block is a transform skip block or the current block does not use a quantization matrix, deriving a basic scaling factor regardless of the position of the transform coefficient in the current block.
19. The image encoding method according to claim 11, wherein the step of deriving the scaling factor for the current block comprises: if the current block is not a transform skip block, deriving the scaling factor for the current block based on the position of the transform coefficient in the current block using a quantization matrix.
20. An image encoding apparatus, comprising:
a quantization unit for deriving a scaling factor for a current block depending on whether the current block is a transform skip block, and performing scaling on the current block based on the scaling factor,
wherein the scaling factor for the current block is derived based on the position of a transform coefficient in the current block, and
wherein the transform skip block is a block to which a transform has not been applied and is specified based on information indicating whether an inverse transform is applied to the current block.
CN201380042182.2A 2012-07-02 2013-07-02 Method and apparatus for coding/decoding image Pending CN104521232A (en)

Priority Applications (10)

Application Number Priority Date Filing Date Title
CN202210014962.7A CN115052156A (en) 2012-07-02 2013-07-02 Video encoding/decoding method and non-transitory computer-readable recording medium
CN202010544830.6A CN111629208B (en) 2012-07-02 2013-07-02 Image encoding/decoding method and non-transitory computer-readable recording medium
CN202210015293.5A CN115065823A (en) 2012-07-02 2013-07-02 Video encoding/decoding method and non-transitory computer-readable recording medium
CN202210024647.2A CN114786016A (en) 2012-07-02 2013-07-02 Image encoding/decoding method and non-transitory computer-readable recording medium
CN202210015290.1A CN115052158A (en) 2012-07-02 2013-07-02 Video encoding/decoding method and non-transitory computer-readable recording medium
CN202210014961.2A CN115052155A (en) 2012-07-02 2013-07-02 Image encoding/decoding method and non-transitory computer-readable recording medium
CN201910417316.3A CN110392257A (en) 2012-07-02 2013-07-02 Video decoding/coding method and computer readable recording medium
CN202210015295.4A CN115052159A (en) 2012-07-02 2013-07-02 Video encoding/decoding method and non-transitory computer-readable recording medium
CN202011526334.4A CN112969073A (en) 2012-07-02 2013-07-02 Image encoding/decoding method and non-transitory computer-readable recording medium
CN202210015288.4A CN115052157A (en) 2012-07-02 2013-07-02 Image encoding/decoding method and non-transitory computer-readable recording medium

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
KR10-2012-0071938 2012-07-02
KR20120071938 2012-07-02
KR10-2013-0077047 2013-07-02
PCT/KR2013/005864 WO2014007520A1 (en) 2012-07-02 2013-07-02 Method and apparatus for coding/decoding image
KR1020130077047A KR102399795B1 (en) 2012-07-02 2013-07-02 Method and apparatus for image encoding/decoding

Related Child Applications (10)

Application Number Title Priority Date Filing Date
CN202210015295.4A Division CN115052159A (en) 2012-07-02 2013-07-02 Video encoding/decoding method and non-transitory computer-readable recording medium
CN202210014962.7A Division CN115052156A (en) 2012-07-02 2013-07-02 Video encoding/decoding method and non-transitory computer-readable recording medium
CN202210024647.2A Division CN114786016A (en) 2012-07-02 2013-07-02 Image encoding/decoding method and non-transitory computer-readable recording medium
CN202210015293.5A Division CN115065823A (en) 2012-07-02 2013-07-02 Video encoding/decoding method and non-transitory computer-readable recording medium
CN202210014961.2A Division CN115052155A (en) 2012-07-02 2013-07-02 Image encoding/decoding method and non-transitory computer-readable recording medium
CN202210015290.1A Division CN115052158A (en) 2012-07-02 2013-07-02 Video encoding/decoding method and non-transitory computer-readable recording medium
CN201910417316.3A Division CN110392257A (en) 2012-07-02 2013-07-02 Video decoding/coding method and computer readable recording medium
CN202210015288.4A Division CN115052157A (en) 2012-07-02 2013-07-02 Image encoding/decoding method and non-transitory computer-readable recording medium
CN202010544830.6A Division CN111629208B (en) 2012-07-02 2013-07-02 Image encoding/decoding method and non-transitory computer-readable recording medium
CN202011526334.4A Division CN112969073A (en) 2012-07-02 2013-07-02 Image encoding/decoding method and non-transitory computer-readable recording medium

Publications (1)

Publication Number Publication Date
CN104521232A true CN104521232A (en) 2015-04-15

Family

ID=52794274

Family Applications (11)

Application Number Title Priority Date Filing Date
CN202210024647.2A Pending CN114786016A (en) 2012-07-02 2013-07-02 Image encoding/decoding method and non-transitory computer-readable recording medium
CN202210015288.4A Pending CN115052157A (en) 2012-07-02 2013-07-02 Image encoding/decoding method and non-transitory computer-readable recording medium
CN202210015295.4A Pending CN115052159A (en) 2012-07-02 2013-07-02 Video encoding/decoding method and non-transitory computer-readable recording medium
CN202210014962.7A Pending CN115052156A (en) 2012-07-02 2013-07-02 Video encoding/decoding method and non-transitory computer-readable recording medium
CN202011526334.4A Pending CN112969073A (en) 2012-07-02 2013-07-02 Image encoding/decoding method and non-transitory computer-readable recording medium
CN202010544830.6A Active CN111629208B (en) 2012-07-02 2013-07-02 Image encoding/decoding method and non-transitory computer-readable recording medium
CN202210015290.1A Pending CN115052158A (en) 2012-07-02 2013-07-02 Video encoding/decoding method and non-transitory computer-readable recording medium
CN201910417316.3A Pending CN110392257A (en) 2012-07-02 2013-07-02 Video decoding/coding method and computer readable recording medium
CN202210014961.2A Pending CN115052155A (en) 2012-07-02 2013-07-02 Image encoding/decoding method and non-transitory computer-readable recording medium
CN201380042182.2A Pending CN104521232A (en) 2012-07-02 2013-07-02 Method and apparatus for coding/decoding image
CN202210015293.5A Pending CN115065823A (en) 2012-07-02 2013-07-02 Video encoding/decoding method and non-transitory computer-readable recording medium

Family Applications Before (9)

Application Number Title Priority Date Filing Date
CN202210024647.2A Pending CN114786016A (en) 2012-07-02 2013-07-02 Image encoding/decoding method and non-transitory computer-readable recording medium
CN202210015288.4A Pending CN115052157A (en) 2012-07-02 2013-07-02 Image encoding/decoding method and non-transitory computer-readable recording medium
CN202210015295.4A Pending CN115052159A (en) 2012-07-02 2013-07-02 Video encoding/decoding method and non-transitory computer-readable recording medium
CN202210014962.7A Pending CN115052156A (en) 2012-07-02 2013-07-02 Video encoding/decoding method and non-transitory computer-readable recording medium
CN202011526334.4A Pending CN112969073A (en) 2012-07-02 2013-07-02 Image encoding/decoding method and non-transitory computer-readable recording medium
CN202010544830.6A Active CN111629208B (en) 2012-07-02 2013-07-02 Image encoding/decoding method and non-transitory computer-readable recording medium
CN202210015290.1A Pending CN115052158A (en) 2012-07-02 2013-07-02 Video encoding/decoding method and non-transitory computer-readable recording medium
CN201910417316.3A Pending CN110392257A (en) 2012-07-02 2013-07-02 Video decoding/coding method and computer readable recording medium
CN202210014961.2A Pending CN115052155A (en) 2012-07-02 2013-07-02 Image encoding/decoding method and non-transitory computer-readable recording medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202210015293.5A Pending CN115065823A (en) 2012-07-02 2013-07-02 Video encoding/decoding method and non-transitory computer-readable recording medium

Country Status (3)

Country Link
US (8) US9843809B2 (en)
JP (6) JP2015526013A (en)
CN (11) CN114786016A (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105900424A (en) * 2013-10-11 2016-08-24 索尼公司 Decoding device, decoding method, encoding device, and encoding method
CN110677655A (en) * 2019-06-21 2020-01-10 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and storage medium
CN111316641A (en) * 2018-05-03 2020-06-19 Lg电子株式会社 Method and apparatus for decoding image using transform according to block size in image encoding system
WO2020211869A1 (en) * 2019-04-18 2020-10-22 Beijing Bytedance Network Technology Co., Ltd. Parameter derivation in cross component mode
US11082713B2 (en) 2015-11-20 2021-08-03 Mediatek Inc. Method and apparatus for global motion compensation in video coding system
CN113228651A (en) * 2018-12-26 2021-08-06 韩国电子通信研究院 Quantization matrix encoding/decoding method and apparatus, and recording medium storing bit stream
US20210321140A1 (en) 2019-03-08 2021-10-14 Beijing Bytedance Network Technology Co., Ltd. Signaling of reshaping information in video processing
CN113812161A (en) * 2019-05-14 2021-12-17 北京字节跳动网络技术有限公司 Scaling method in video coding and decoding
CN114128273A (en) * 2019-06-20 2022-03-01 Lg电子株式会社 Video or image coding based on luminance mapping
CN114342381A (en) * 2019-07-05 2022-04-12 Lg电子株式会社 Video or image coding based on mapping of luma samples and scaling of chroma samples
CN114342398A (en) * 2019-08-20 2022-04-12 北京字节跳动网络技术有限公司 Use of default scaling matrices and user-defined scaling matrices
CN114521326A (en) * 2019-09-19 2022-05-20 韦勒斯标准与技术协会公司 Video signal processing method and apparatus using scaling
US11463713B2 (en) 2019-05-08 2022-10-04 Beijing Bytedance Network Technology Co., Ltd. Conditions for applicability of cross-component coding
US11533487B2 (en) 2019-07-07 2022-12-20 Beijing Bytedance Network Technology Co., Ltd. Signaling of chroma residual scaling
US11659164B1 (en) 2019-04-23 2023-05-23 Beijing Bytedance Network Technology Co., Ltd. Methods for cross component dependency reduction
US11924472B2 (en) 2019-06-22 2024-03-05 Beijing Bytedance Network Technology Co., Ltd. Syntax element for chroma residual scaling

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014050676A1 (en) * 2012-09-28 2014-04-03 ソニー株式会社 Image processing device and method
GB2518823A (en) * 2013-09-25 2015-04-08 Sony Corp Data encoding and decoding
MY183347A (en) * 2013-09-30 2021-02-18 Japan Broadcasting Corp Image encoding device, image decoding device, and the programs thereof
CN106165417A (en) * 2014-04-23 2016-11-23 索尼公司 Image processing equipment and image processing method
KR102465914B1 (en) 2016-03-04 2022-11-14 한국전자통신연구원 Encoding method of image encoding device
KR102424419B1 (en) 2016-08-31 2022-07-22 주식회사 케이티 Method and apparatus for processing a video signal
KR102401851B1 (en) * 2017-06-14 2022-05-26 삼성디스플레이 주식회사 Method of compressing image and display apparatus for performing the same
US10567772B2 (en) * 2017-07-11 2020-02-18 Google Llc Sub8×8 block processing
CN117834920A (en) 2018-01-17 2024-04-05 英迪股份有限公司 Method of decoding or encoding video and method for transmitting bit stream
CN112740689B (en) * 2018-09-18 2024-04-12 华为技术有限公司 Video encoder, video decoder and corresponding methods
WO2020111749A1 (en) * 2018-11-27 2020-06-04 엘지전자 주식회사 Method and device for coding transform skip flag
WO2020130577A1 (en) 2018-12-18 2020-06-25 엘지전자 주식회사 Image coding method based on secondary transform, and device therefor
JP7522137B2 (en) 2019-06-14 2024-07-24 フラウンホファー ゲセルシャフト ツール フェールデルンク ダー アンゲヴァンテン フォルシュンク エー.ファオ. Encoder, decoder, method, and computer program using improved transform-based scaling
MX2021016156A (en) * 2019-06-19 2022-02-22 Lg Electronics Inc Image encoding and decoding method and device for limiting partition condition of chroma block, and method for transmitting bitstream.
KR20220019257A (en) * 2019-07-10 2022-02-16 엘지전자 주식회사 Video decoding method and apparatus for residual coding
CN114731405A (en) * 2019-09-23 2022-07-08 Lg电子株式会社 Image encoding/decoding method and apparatus using quantization matrix and method of transmitting bitstream
US20220368912A1 (en) * 2019-10-02 2022-11-17 Interdigital Vc Holdings France, Sas Derivation of quantization matrices for joint cb-br coding
JP7360984B2 (en) * 2020-03-31 2023-10-13 Kddi株式会社 Image decoding device, image decoding method and program
CN111314703B (en) * 2020-03-31 2022-03-08 电子科技大学 Time domain rate distortion optimization method based on distortion type propagation analysis

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102256126A (en) * 2011-07-14 2011-11-23 北京工业大学 Method for coding mixed image

Family Cites Families (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100386639B1 (en) * 2000-12-04 2003-06-02 주식회사 오픈비주얼 Method for decompression of images and video using regularized dequantizer
US20020163964A1 (en) * 2001-05-02 2002-11-07 Nichols James B. Apparatus and method for compressing video
JP3866538B2 (en) * 2001-06-29 2007-01-10 株式会社東芝 Video coding method and apparatus
US7760950B2 (en) * 2002-09-26 2010-07-20 Ntt Docomo, Inc. Low complexity and unified transforms for video coding
US8014450B2 (en) * 2003-09-07 2011-09-06 Microsoft Corporation Flexible range reduction
KR20050026318A (en) * 2003-09-09 2005-03-15 삼성전자주식회사 Video encoding and decoding device comprising intra skip mode
JP2005184042A (en) 2003-12-15 2005-07-07 Sony Corp Image decoding apparatus, image decoding method, and image decoding program
KR100703770B1 (en) * 2005-03-25 2007-04-06 삼성전자주식회사 Video coding and decoding using weighted prediction, and apparatus for the same
US20090028239A1 (en) * 2005-05-03 2009-01-29 Bernhard Schuur Moving picture encoding method, moving picture decoding method and apparatuses using the methods
EP1761069A1 (en) * 2005-09-01 2007-03-07 Thomson Licensing Method and apparatus for encoding video data using block skip mode
CN100466745C (en) * 2005-10-11 2009-03-04 华为技术有限公司 Predicting coding method and its system in frame
US8848789B2 (en) 2006-03-27 2014-09-30 Qualcomm Incorporated Method and system for coding and decoding information associated with video compression
KR100927733B1 (en) * 2006-09-20 2009-11-18 한국전자통신연구원 An apparatus and method for encoding / decoding selectively using a transformer according to correlation of residual coefficients
US8279946B2 (en) * 2007-11-23 2012-10-02 Research In Motion Limited System and method for providing a variable frame rate and adaptive frame skipping on a mobile device
US8175158B2 (en) * 2008-01-04 2012-05-08 Sharp Laboratories Of America, Inc. Methods and systems for inter-layer image prediction parameter determination
KR101431545B1 (en) 2008-03-17 2014-08-20 Samsung Electronics Co., Ltd. Method and apparatus for video encoding and decoding
CN102014190A (en) * 2009-08-07 2011-04-13 Shenzhen Futaihong Precision Industry Co., Ltd. Scheduled task management system and method
KR101474756B1 (en) * 2009-08-13 2014-12-19 Samsung Electronics Co., Ltd. Method and apparatus for encoding and decoding image using large transform unit
WO2011074919A2 (en) * 2009-12-17 2011-06-23 SK Telecom Co., Ltd. Image encoding/decoding method and device
TWI503735B (en) * 2009-12-28 2015-10-11 Chiun Mai Comm Systems Inc System and method of application jump prediction
JP5377395B2 (en) * 2010-04-02 2013-12-25 Japan Broadcasting Corporation (NHK) Encoding device, decoding device, and program
CN102223525B (en) * 2010-04-13 2014-02-19 Fujitsu Ltd. Video decoding method and system
WO2011129672A2 (en) * 2010-04-16 2011-10-20 SK Telecom Co., Ltd. Video encoding/decoding apparatus and method
KR101813189B1 (en) * 2010-04-16 2018-01-31 SK Telecom Co., Ltd. Video coding/decoding apparatus and method
KR101791078B1 (en) * 2010-04-16 2017-10-30 SK Telecom Co., Ltd. Video coding and decoding method and apparatus
WO2012008925A1 (en) * 2010-07-15 2012-01-19 Agency For Science, Technology And Research Method, apparatus and computer program product for encoding video data
KR20120010097A (en) * 2010-07-20 2012-02-02 SK Telecom Co., Ltd. Deblocking filtering method and apparatus, method and apparatus for encoding and decoding using deblocking filtering
KR20120033218A (en) 2010-09-29 2012-04-06 Electronics and Telecommunications Research Institute Method and apparatus for adaptive encoding and decoding target region determination
KR101269116B1 (en) 2010-12-14 2013-05-29 M&K Holdings Inc. Decoding method of inter coded moving picture
US9854275B2 (en) * 2011-06-25 2017-12-26 Qualcomm Incorporated Quantization in video coding
GB2492333B (en) * 2011-06-27 2018-12-12 British Broadcasting Corp Video encoding and decoding using transforms
EP2726912A2 (en) 2011-06-29 2014-05-07 L-3 Communications Security and Detection Systems, Inc. Vehicle-mounted cargo inspection system
US9894386B2 (en) * 2012-04-12 2018-02-13 Goldpeak Innovations Inc. Transform method based on block information, and apparatus using said method
US20130294524A1 (en) * 2012-05-04 2013-11-07 Qualcomm Incorporated Transform skipping and lossless coding unification
CN109905710B (en) 2012-06-12 2021-12-21 Sun Patent Trust Moving picture encoding method and apparatus, and moving picture decoding method and apparatus
US10257520B2 (en) * 2012-06-26 2019-04-09 Velos Media, Llc Modified coding for transform skipping
TWI535222B (en) 2012-06-29 2016-05-21 Sony Corp Image processing apparatus and method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102256126A (en) * 2011-07-14 2011-11-23 Beijing University of Technology Method for coding mixed image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BENJAMIN BROSS et al.: "High efficiency video coding (HEVC) text specification draft 6", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 7th Meeting: Geneva, CH, 21-30 November 2011, JCTVC-H1003 *
CUILING LAN et al.: "CE5.f: Residual Scalar Quantization for HEVC", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 8th Meeting: San José, CA, USA, JCTVC-H0361 *
CUILING LAN et al.: "Intra transform skipping", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 9th Meeting: Geneva, CH, JCTVC-I0408 *

Cited By (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105900424A (en) * 2013-10-11 2016-08-24 Sony Corporation Decoding device, decoding method, encoding device, and encoding method
CN105900424B (en) * 2013-10-11 2019-05-28 Sony Corporation Decoding device, decoding method, encoding device, and encoding method
US11546594B2 (en) 2013-10-11 2023-01-03 Sony Corporation Decoding device, decoding method, encoding device, and encoding method
US10687060B2 (en) 2013-10-11 2020-06-16 Sony Corporation Decoding device, decoding method, encoding device, and encoding method
US11082713B2 (en) 2015-11-20 2021-08-03 Mediatek Inc. Method and apparatus for global motion compensation in video coding system
CN111316641A (en) * 2018-05-03 2020-06-19 Lg电子株式会社 Method and apparatus for decoding image using transform according to block size in image encoding system
US11647200B2 (en) 2018-05-03 2023-05-09 Lg Electronics Inc. Method and apparatus for decoding image by using transform according to block size in image coding system
US11206403B2 (en) 2018-05-03 2021-12-21 Lg Electronics Inc. Method and apparatus for decoding image by using transform according to block size in image coding system
CN115243041A (en) * 2018-05-03 2022-10-25 LG Electronics Inc. Image encoding method, image decoding apparatus, storage medium, and image transmission method
CN115243041B (en) * 2018-05-03 2024-06-04 LG Electronics Inc. Image encoding and decoding method, decoding device, storage medium, and transmission method
CN113228651A (en) * 2018-12-26 2021-08-06 Electronics and Telecommunications Research Institute Quantization matrix encoding/decoding method and apparatus, and recording medium storing bit stream
US12034929B2 (en) 2018-12-26 2024-07-09 Electronics And Telecommunications Research Institute Quantization matrix encoding/decoding method and device, and recording medium in which bitstream is stored
US20210321140A1 (en) 2019-03-08 2021-10-14 Beijing Bytedance Network Technology Co., Ltd. Signaling of reshaping information in video processing
US11910020B2 (en) 2019-03-08 2024-02-20 Beijing Bytedance Network Technology Co., Ltd Signaling of reshaping information in video processing
WO2020211869A1 (en) * 2019-04-18 2020-10-22 Beijing Bytedance Network Technology Co., Ltd. Parameter derivation in cross component mode
US11463714B2 (en) 2019-04-18 2022-10-04 Beijing Bytedance Network Technology Co., Ltd. Selective use of cross component mode in video coding
US11616965B2 (en) 2019-04-18 2023-03-28 Beijing Bytedance Network Technology Co., Ltd. Restriction on applicability of cross component mode
US11553194B2 (en) 2019-04-18 2023-01-10 Beijing Bytedance Network Technology Co., Ltd. Parameter derivation in cross component mode
US11659164B1 (en) 2019-04-23 2023-05-23 Beijing Bytedance Network Technology Co., Ltd. Methods for cross component dependency reduction
US11750799B2 (en) 2019-04-23 2023-09-05 Beijing Bytedance Network Technology Co., Ltd Methods for cross component dependency reduction
US11463713B2 (en) 2019-05-08 2022-10-04 Beijing Bytedance Network Technology Co., Ltd. Conditions for applicability of cross-component coding
US12034942B2 (en) 2019-05-08 2024-07-09 Beijing Bytedance Network Technology Co., Ltd. Conditions for applicability of cross-component coding
CN113812161A (en) * 2019-05-14 2021-12-17 Beijing Bytedance Network Technology Co., Ltd. Scaling method in video coding and decoding
CN113812161B (en) * 2019-05-14 2024-02-06 Beijing Bytedance Network Technology Co., Ltd. Scaling method in video encoding and decoding
CN114128273B (en) * 2019-06-20 2023-11-17 LG Electronics Inc. Image decoding and encoding method and data transmission method for image
CN114128273A (en) * 2019-06-20 2022-03-01 LG Electronics Inc. Video or image coding based on luminance mapping
US11924448B2 (en) 2019-06-20 2024-03-05 LG Electronics Inc. Luma-mapping-based video or image coding
CN110677655B (en) * 2019-06-21 2022-08-16 Hangzhou Hikvision Digital Technology Co., Ltd. Encoding and decoding method, device and storage medium
CN110677655A (en) * 2019-06-21 2020-01-10 Hangzhou Hikvision Digital Technology Co., Ltd. Encoding and decoding method, device and storage medium
US11924472B2 (en) 2019-06-22 2024-03-05 Beijing Bytedance Network Technology Co., Ltd. Syntax element for chroma residual scaling
CN114342381A (en) * 2019-07-05 2022-04-12 LG Electronics Inc. Video or image coding based on mapping of luma samples and scaling of chroma samples
CN114342381B (en) * 2019-07-05 2023-11-17 LG Electronics Inc. Image decoding and encoding method and data transmission method for image
US11956439B2 (en) 2019-07-07 2024-04-09 Beijing Bytedance Network Technology Co., Ltd. Signaling of chroma residual scaling
US11533487B2 (en) 2019-07-07 2022-12-20 Beijing Bytedance Network Technology Co., Ltd. Signaling of chroma residual scaling
CN114342398A (en) * 2019-08-20 2022-04-12 Beijing Bytedance Network Technology Co., Ltd. Use of default scaling matrices and user-defined scaling matrices
CN114521326A (en) * 2019-09-19 2022-05-20 WILUS Institute of Standards and Technology Inc. Video signal processing method and apparatus using scaling
US12034945B2 (en) 2019-09-19 2024-07-09 Humax Co., Ltd. Video signal processing method and apparatus using scaling process

Also Published As

Publication number Publication date
US20150189289A1 (en) 2015-07-02
JP7266515B2 (en) 2023-04-28
JP2022116271A (en) 2022-08-09
US20180310006A1 (en) 2018-10-25
CN115065823A (en) 2022-09-16
JP2015526013A (en) 2015-09-07
CN111629208B (en) 2021-12-21
JP2020048211A (en) 2020-03-26
US10187643B2 (en) 2019-01-22
US20190356924A1 (en) 2019-11-21
US10187644B2 (en) 2019-01-22
JP2018137761A (en) 2018-08-30
JP2024038230A (en) 2024-03-19
CN115052157A (en) 2022-09-13
US20180310003A1 (en) 2018-10-25
CN115052159A (en) 2022-09-13
US20180054621A1 (en) 2018-02-22
CN112969073A (en) 2021-06-15
US20180310004A1 (en) 2018-10-25
US10554982B2 (en) 2020-02-04
CN111629208A (en) 2020-09-04
US9843809B2 (en) 2017-12-12
CN115052155A (en) 2022-09-13
CN115052158A (en) 2022-09-13
JP6660970B2 (en) 2020-03-11
JP2020108155A (en) 2020-07-09
US10419765B2 (en) 2019-09-17
CN114786016A (en) 2022-07-22
US10045031B2 (en) 2018-08-07
CN110392257A (en) 2019-10-29
US10554983B2 (en) 2020-02-04
US20180310005A1 (en) 2018-10-25
US20180310007A1 (en) 2018-10-25
CN115052156A (en) 2022-09-13

Similar Documents

Publication Publication Date Title
CN104521232A (en) Method and apparatus for coding/decoding image
KR101538704B1 (en) Method and apparatus for coding and decoding using adaptive interpolation filters
CN104170382B (en) Method for coding and decoding quantization matrix and use its equipment
CN104488270A (en) Method and device for encoding/decoding images
KR20180085526A (en) A method for encoding and decoding video using a processing of an efficent transform
AU2016253621A1 (en) Method and apparatus for encoding image and method and apparatus for decoding image
KR20180019092A (en) Block prediction method and apparatus based on illumination compensation in video coding system
CN104221373A (en) Devices and methods for sample adaptive offset coding and/or signaling
CN105007497A (en) Method for video encoding
CN104488273A (en) Method and device for encoding/decoding image
KR20140129607A (en) Method and apparatus for processing moving image
KR102577480B1 (en) Method and apparatus for image encoding/decoding
WO2014100111A1 (en) Devices and methods for using base layer intra prediction mode for enhancement layer intra mode prediction
KR101659343B1 (en) Method and apparatus for processing moving image
KR101914667B1 (en) Method and apparatus for processing moving image
KR20130083405A (en) Method for deblocking filtering and apparatus thereof
KR101609427B1 (en) Method and apparatus for encoding/decoding video
KR20140073430A (en) Method and apparatus for image encoding/decoding
KR20140120396A (en) Fast Video coding method
KR20140130571A (en) Method and apparatus for processing moving image
KR20140130269A (en) Method and apparatus for processing moving image
KR20140130266A (en) Method and apparatus for processing moving image
KR20140130572A (en) Method and apparatus for processing moving image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20150415

Assignee: Neo Lab Convergence Inc.

Assignor: Electronics and Telecommunications Research Institute | Industry Academic Cooperation Foundation of Kyung Hee University

Contract record no.: 2016990000255

Denomination of invention: Method and apparatus for encoding/decoding image for performing intra-prediction using pixel value filtered according to prediction mode

License type: Exclusive License

Record date: 20160630

LICC Enforcement, change and cancellation of record of contracts on the licence for exploitation of a patent or utility model
RJ01 Rejection of invention patent application after publication
