WO2020184229A1 - Image coding device, image coding method, and program - Google Patents
Image coding device, image coding method, and program
- Publication number
- WO2020184229A1 (PCT/JP2020/008439)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- quantization
- quantization matrix
- orthogonal conversion
- matrix
- image
- Prior art date
Classifications
- All classifications fall under H04N19/00 (Methods or arrangements for coding, decoding, compressing or decompressing digital video signals):
- H04N19/126: Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
- H04N19/61: Transform coding in combination with predictive coding
- H04N19/119: Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
- H04N19/12: Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
- H04N19/124: Quantisation
- H04N19/14: Coding unit complexity, e.g. amount of activity or edge presence estimation
- H04N19/157: Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/159: Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
- H04N19/176: Coding unit being an image region, the region being a block, e.g. a macroblock
- H04N19/177: Coding unit being a group of pictures [GOP]
- H04N19/18: Coding unit being a set of transform coefficients
- H04N19/44: Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
- H04N19/46: Embedding additional information in the video signal during the compression process
- H04N19/60: Transform coding
Definitions
- the present invention relates to an image coding technique.
- HEVC (High Efficiency Video Coding) is known as a coding method for compressing moving images.
- In HEVC, a basic block having a size larger than that of the conventional macroblock (16 × 16 pixels) has been adopted.
- This large size basic block is called a CTU (Coding Tree Unit), and its size is a maximum of 64 x 64 pixels.
- the CTU is further divided into sub-blocks that serve as units for prediction and conversion.
- a quantization matrix is used to weight the coefficient after orthogonal conversion (hereinafter referred to as the orthogonal conversion coefficient) according to the frequency component.
- By using the quantization matrix, it is possible to improve the compression efficiency while maintaining image quality, because the data of high frequency components, which are less noticeable to human vision, can be reduced more than the data of low frequency components.
- VVC (Versatile Video Coding) is being standardized by the JVET (Joint Video Experts Team) as a successor coding method to HEVC.
- An image coding device capable of generating a bit stream by encoding an image in units of a plurality of blocks including a block of P × Q pixels (P and Q are integers) has an orthogonal conversion means that generates N × M orthogonal conversion coefficients (N is an integer satisfying N < P, and M is an integer satisfying M < Q) by executing orthogonal conversion on the prediction error of the block of P × Q pixels, and a quantization means that generates N × M quantization coefficients by quantizing the N × M orthogonal conversion coefficients using at least a quantization matrix having N × M elements.
- FIG. 1 is a block diagram showing the configuration of the image coding apparatus according to Embodiment 1.
- FIG. 2 is a block diagram showing the configuration of the image decoding apparatus according to Embodiment 2.
- FIG. 3 is a flowchart showing the image coding process in the image coding apparatus according to Embodiment 1.
- FIGS. 6A and 6B are diagrams showing examples of the bit stream output in Embodiment 1.
- In the following description, a basic block or a sub-block may be referred to as a basic unit or a sub-unit, or may simply be referred to as a block or a unit.
- a rectangle is a quadrangle in which four internal angles are right angles and two diagonal lines have the same length, as is generally defined.
- a square is a rectangle in which all four corners are equal and all four sides are equal. That is, a square is a kind of rectangle.
- the zero-out is a process of forcibly setting a part of the orthogonal conversion coefficients of the block to be encoded to 0.
- For example, assume that a block of 64 × 64 pixels in an input image (picture) is the block to be encoded. In this case, the orthogonal conversion coefficients also have a size of 64 × 64.
- The zero-out means, for example, a process of encoding a part of the 64 × 64 orthogonal conversion coefficients by regarding them as 0 even if the result of the orthogonal conversion has values other than 0.
- For example, the low frequency components corresponding to a predetermined range at the upper left, including the DC component, of the two-dimensional orthogonal conversion coefficients are not forcibly set to 0, whereas the orthogonal conversion coefficients corresponding to frequency components higher than those low frequency components are always set to 0.
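- As a minimal sketch of the zero-out process just described (illustrative only; numpy is assumed, and the 64 × 64 block size and the retained 32 × 32 upper-left region follow the example in this description):
```python
import numpy as np

def zero_out(coeff: np.ndarray, keep_h: int = 32, keep_w: int = 32) -> np.ndarray:
    """Force all orthogonal-transform coefficients outside the top-left
    keep_h x keep_w low-frequency region to 0 (zero-out)."""
    out = np.zeros_like(coeff)
    out[:keep_h, :keep_w] = coeff[:keep_h, :keep_w]
    return out

# Example: a 64x64 coefficient block; only the 32x32 upper-left region survives.
coeff_64 = np.random.randn(64, 64)
zeroed = zero_out(coeff_64)
assert np.count_nonzero(zeroed[32:, :]) == 0 and np.count_nonzero(zeroed[:, 32:]) == 0
```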
- FIG. 1 is a block diagram showing an image coding apparatus of this embodiment.
- Reference numeral 101 denotes a terminal for inputting image data.
- Reference numeral 102 denotes a block division unit, which divides the input image into a plurality of basic blocks and outputs an image in units of basic blocks to the subsequent stage.
- Reference numeral 103 denotes a quantization matrix holding unit that generates and stores a quantization matrix.
- the quantization matrix is for weighting the quantization process with respect to the orthogonal conversion coefficient according to the frequency component.
- The quantization step for each orthogonal conversion coefficient is weighted, for example, by multiplying a scale value (quantization scale) based on a reference parameter value (quantization parameter) by the value of each element in the quantization matrix.
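- A hedged sketch of this weighting is shown below. The mapping from the quantization parameter to a quantization scale and the normalization by 16 are illustrative assumptions, not the normative derivation; only the structure (per-coefficient step = quantization scale weighted by the quantization matrix element) follows the description above.
```python
import numpy as np

def quantize(coeff, qmatrix, qp):
    """Quantize transform coefficients: the per-coefficient step is a scale
    derived from the quantization parameter qp, weighted by each element of
    the quantization matrix (illustrative scale formula, not the normative one)."""
    qscale = 2.0 ** ((qp - 4) / 6.0)      # assumed QP-to-scale mapping
    step = qscale * (qmatrix / 16.0)      # per-frequency weighting by the matrix
    return np.round(coeff / step).astype(np.int32)

def dequantize(qcoeff, qmatrix, qp):
    """Inverse quantization under the same illustrative assumptions."""
    qscale = 2.0 ** ((qp - 4) / 6.0)
    step = qscale * (qmatrix / 16.0)
    return qcoeff * step
```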
- The method of generating the quantization matrix stored by the quantization matrix holding unit 103 is not particularly limited.
- For example, the user may input information indicating the quantization matrix, or the image coding apparatus may calculate it from the characteristics of the input image. A quantization matrix specified in advance may also be used as an initial value.
- In the present embodiment, it is assumed that two types of two-dimensional 32 × 32 quantization matrices, shown in FIGS. 8B and 8C, are generated by enlarging the base quantization matrix and are stored.
- The quantization matrix of FIG. 8B is a 32 × 32 quantization matrix obtained by enlarging the 8 × 8 base quantization matrix of FIG. 8A by a factor of four, repeating each of its elements four times in the vertical and horizontal directions.
- The quantization matrix of FIG. 8C is a 32 × 32 quantization matrix obtained by repeating each element of the upper left 4 × 4 portion of the base quantization matrix of FIG. 8A eight times in the vertical and horizontal directions.
- The base quantization matrix is a quantization matrix used not only for quantization in a subblock of 8 × 8 pixels but also for creating quantization matrices having a size larger than that of the base quantization matrix.
- the size of the base quantization matrix is assumed to be 8 ⁇ 8, but is not limited to this size.
- another base quantization matrix may be used depending on the size of the subblock. For example, when three types of subblocks of 8 ⁇ 8, 16 ⁇ 16, and 32 ⁇ 32 are used, three types of base quantization matrices corresponding to each can be used.
- Reference numeral 104 denotes a prediction unit, which determines sub-block division for the image data in basic block units. That is, it determines whether or not to divide the basic block into sub-blocks, and if so, how to divide it. If the basic block is not divided into sub-blocks, the sub-block has the same size as the basic block.
- the subblock may be a square or a rectangle (non-square) other than a square.
- The prediction unit 104 performs intra prediction, which is intra-frame prediction, inter prediction, which is inter-frame prediction, and the like, and generates predicted image data.
- the prediction unit 104 selects a prediction method for one subblock from intra-prediction and inter-prediction, performs the selected prediction, and generates prediction image data for the sub-block.
- the prediction method used is not limited to these, and a prediction that combines an intra prediction and an inter prediction may be used.
- The prediction unit 104 calculates a prediction error from the input image data and the predicted image data and outputs it. For example, the prediction unit 104 calculates the difference between each pixel value of the sub-block and each pixel value of the predicted image data generated by the prediction for the sub-block, and uses the difference as the prediction error.
- the prediction unit 104 also outputs information necessary for prediction, for example, information indicating the division state of the subblock, the prediction mode indicating the prediction method of the subblock, and information such as a motion vector together with the prediction error.
- information necessary for this prediction is collectively referred to as prediction information.
- Reference numeral 105 denotes a conversion / quantization unit.
- The conversion / quantization unit 105 orthogonally converts the prediction error calculated by the prediction unit 104 in subblock units to obtain orthogonal conversion coefficients representing each frequency component of the prediction error. Then, the conversion / quantization unit 105 further performs quantization using the quantization matrix stored in the quantization matrix holding unit 103 and the quantization parameter to obtain quantization coefficients, which are the quantized orthogonal conversion coefficients.
- the function of performing orthogonal conversion and the function of performing quantization may be configured separately.
- The inverse quantization / inverse conversion unit 106 inversely quantizes the quantization coefficients output from the conversion / quantization unit 105 using the quantization matrix stored in the quantization matrix holding unit 103 and the quantization parameter to reproduce the orthogonal conversion coefficients. Then, the inverse quantization / inverse conversion unit 106 further performs inverse orthogonal conversion to reproduce the prediction error. The process of reproducing (deriving) the orthogonal conversion coefficients using the quantization matrix and the quantization parameter in this way is referred to as inverse quantization.
- The function of performing inverse quantization and the function of performing inverse orthogonal conversion may be configured separately.
- the information for the image decoding apparatus to derive the quantization parameter is also encoded in the bit stream by the coding unit 110.
- Reference numeral 108 denotes a frame memory for storing the reproduced image data.
- Reference numeral 107 denotes an image reproduction unit, which appropriately refers to the frame memory 108 to generate predicted image data, generates reproduced image data from the predicted image data and the input prediction error, and outputs the reproduced image data.
- Reference numeral 109 denotes an in-loop filter unit, which performs in-loop filter processing such as a deblocking filter and sample adaptive offset on the reproduced image and outputs the filtered image.
- Reference numeral 110 denotes a coding unit, which encodes the quantization coefficients output from the conversion / quantization unit 105 and the prediction information output from the prediction unit 104 to generate and output code data.
- Reference numeral 113 denotes a quantization matrix coding unit.
- The quantization matrix coding unit 113 encodes the base quantization matrix output from the quantization matrix holding unit 103, and generates and outputs quantization matrix code data used by the image decoding apparatus to derive the base quantization matrix.
- Reference numeral 111 denotes an integrated coding unit, which generates header code data using the quantization matrix code data output from the quantization matrix coding unit 113. Further, it forms a bit stream together with the code data output from the coding unit 110 and outputs the bit stream.
- Reference numeral 112 denotes a terminal, which outputs a bit stream generated by the integrated coding unit 111 to the outside.
- moving image data is input in frame units.
- In the present embodiment, the block division unit 102 will be described as dividing the image into basic blocks of 64 × 64 pixels, but the present invention is not limited to this.
- a block having 128 ⁇ 128 pixels may be used as a basic block, or a block having 32 ⁇ 32 pixels may be used as a basic block.
- the image coding device generates and encodes a quantization matrix prior to image coding.
- In the quantization matrix 800 and each block, the horizontal direction is the x coordinate and the vertical direction is the y coordinate, with the rightward direction being positive in x and the downward direction being positive in y.
- The coordinates of the upper leftmost element in the quantization matrix 800 are (0, 0). That is, the coordinates of the lower right element of the 8 × 8 base quantization matrix are (7, 7).
- the coordinates of the lower right element of the 32 ⁇ 32 quantization matrix are (31, 31).
- the quantization matrix holding unit 103 generates a quantization matrix.
- a quantization matrix is generated according to the size of the subblock, the size of the orthogonal conversion coefficient to be quantized, and the type of prediction method.
- First, the 8 × 8 base quantization matrix shown in FIG. 8A, which is used for generating the quantization matrices described later, is generated.
- this base quantization matrix is expanded to generate two types of 32 ⁇ 32 quantization matrices shown in FIGS. 8B and 8C.
- The quantization matrix of FIG. 8B is a 32 × 32 quantization matrix obtained by enlarging the 8 × 8 base quantization matrix of FIG. 8A by a factor of four, repeating each of its elements four times in the vertical and horizontal directions.
- That is, the value 1 of the upper left element of the base quantization matrix is assigned to each element in the range of x coordinates 0 to 3 and y coordinates 0 to 3 of the 32 × 32 quantization matrix. Further, the value 15 of the lower right element of the base quantization matrix is assigned to each element in the range of x coordinates 28 to 31 and y coordinates 28 to 31 of the 32 × 32 quantization matrix. In the example of FIG. 8B, all the values of the elements in the base quantization matrix are assigned to some element of the 32 × 32 quantization matrix.
- On the other hand, the quantization matrix of FIG. 8C is a 32 × 32 quantization matrix obtained by repeating each element of the upper left 4 × 4 portion of the base quantization matrix of FIG. 8A eight times in the vertical and horizontal directions.
- That is, the value 1 of the upper left element of the upper left 4 × 4 portion of the base quantization matrix is assigned to each element in the range of x coordinates 0 to 7 and y coordinates 0 to 7 of the 32 × 32 quantization matrix. Further, the value 7 of the lower right element of the upper left 4 × 4 portion of the base quantization matrix is assigned to each element in the range of x coordinates 24 to 31 and y coordinates 24 to 31 of the 32 × 32 quantization matrix. In the example of FIG. 8C, only the values of the elements in the upper left 4 × 4 portion of the base quantization matrix are assigned to the elements of the 32 × 32 quantization matrix.
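- The two enlargements just described can be sketched with simple element repetition; the base matrix values below are placeholders rather than the values of FIG. 8A:
```python
import numpy as np

def expand_full(base8: np.ndarray) -> np.ndarray:
    """FIG. 8B-style expansion: repeat every element of the 8x8 base matrix
    4 times vertically and horizontally -> 32x32."""
    return np.repeat(np.repeat(base8, 4, axis=0), 4, axis=1)

def expand_low_freq(base8: np.ndarray) -> np.ndarray:
    """FIG. 8C-style expansion: repeat every element of the top-left 4x4 part
    of the base matrix 8 times vertically and horizontally -> 32x32."""
    return np.repeat(np.repeat(base8[:4, :4], 8, axis=0), 8, axis=1)

base = np.arange(1, 65).reshape(8, 8)   # placeholder base quantization matrix
qm_32_full = expand_full(base)          # used for 32x32 sub-blocks (no zero-out)
qm_32_low = expand_low_freq(base)       # used for the zero-out 64x64 sub-block
assert qm_32_full.shape == qm_32_low.shape == (32, 32)
assert qm_32_full[0, 0] == base[0, 0] and qm_32_full[31, 31] == base[7, 7]
assert qm_32_low[31, 31] == base[3, 3]
```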
- The generated quantization matrices are not limited to these; when the size of the orthogonal conversion coefficients to be quantized is other than 32 × 32, quantization matrices corresponding to that size, such as 16 × 16, 8 × 8, or 4 × 4, may be generated.
- the method for determining the base quantization matrix and each element constituting the quantization matrix is not particularly limited. For example, a predetermined initial value may be used, or may be set individually. Further, it may be generated according to the characteristics of the image.
- the quantization matrix holding unit 103 holds the base quantization matrix and the quantization matrix generated in this way.
- FIG. 8B is an example of a quantization matrix used for quantization of the orthogonal conversion coefficients corresponding to the 32 × 32 subblock described later, and FIG. 8C is an example of a quantization matrix used for the 64 × 64 subblock.
- The thick frame 800 represents the quantization matrix. For simplicity, a configuration of 1024 elements (32 × 32) is assumed, and each square inside the thick frame represents an element constituting the quantization matrix.
- In the present embodiment, the quantization matrices shown in FIGS. 8B and 8C are held in a two-dimensional form, but the form in which each element in the quantization matrix is held is of course not limited to this.
- Subsequently, the quantization matrix coding unit 113 reads out, in order, each element of the base quantization matrix stored in two-dimensional form from the quantization matrix holding unit 103, scans the elements, calculates the differences, and arranges the differences in a one-dimensional matrix.
- In the present embodiment, for the base quantization matrix shown in FIG. 8A, the difference from the immediately preceding element in the scanning order is calculated for each element using the scanning method shown in FIG. 9. For example, when the 8 × 8 base quantization matrix shown in FIG. 8A is scanned by the scanning method shown in FIG. 9, the first element 1 located at the upper left is followed by the element 2 located immediately below it, and the difference +1 is calculated.
- For the first element, the difference from a predetermined initial value (for example, 8) is calculated; however, this is of course not limiting, and the difference from an arbitrary value, or the value of the first element itself, may be used.
- In this way, in the present embodiment, the scanning method of FIG. 9 is used for the base quantization matrix of FIG. 8A, and the difference matrix shown in FIG. 10 is generated.
- the quantization matrix coding unit 113 further encodes the difference matrix to generate the quantization matrix code data.
- coding is performed using the coding table shown in FIG. 11A, but the coding table is not limited to this, and for example, the coding table shown in FIG. 11B may be used.
- the quantization matrix code data generated in this way is output to the integrated coding unit 111 in the subsequent stage.
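- A sketch of this difference-matrix generation is shown below. The raster scan stands in for the scanning order of FIG. 9, and the signed-to-code-number mapping stands in for the coding tables of FIGS. 11A/11B; both are assumptions for illustration.
```python
import numpy as np

def to_difference_list(base8: np.ndarray, scan_order, init_value: int = 8):
    """DPCM the base quantization matrix along a scan order: each element is
    represented by its difference from the immediately preceding element, and
    the first element by its difference from a predetermined initial value."""
    prev = init_value
    diffs = []
    for y, x in scan_order:
        diffs.append(int(base8[y, x]) - prev)
        prev = int(base8[y, x])
    return diffs

def signed_to_code_number(d: int) -> int:
    """Map a signed difference to a non-negative code number (assumed mapping),
    which would then be looked up in a variable-length coding table."""
    return 2 * d - 1 if d > 0 else -2 * d

raster = [(y, x) for y in range(8) for x in range(8)]  # placeholder for the FIG. 9 scan
base = np.arange(1, 65).reshape(8, 8)                  # placeholder base matrix
codes = [signed_to_code_number(d) for d in to_difference_list(base, raster)]
```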
- the integrated coding unit 111 encodes the header information necessary for encoding the image data and integrates the code data of the quantization matrix.
- the image data for one frame input from the terminal 101 is input to the block dividing unit 102.
- the block division unit 102 divides the input image data into a plurality of basic blocks, and outputs an image in basic block units to the prediction unit 104. In this embodiment, it is assumed that an image of a basic block unit of 64 ⁇ 64 pixels is output.
- the prediction unit 104 executes prediction processing on the image data of the basic block unit input from the block division unit 102. Specifically, the subblock division that divides the basic block into smaller subblocks is determined, and the prediction mode such as intra prediction or inter prediction is determined for each subblock.
- FIG. 7 shows an example of the subblock division method.
- The thick frame 700 represents a basic block; for simplicity, a configuration of 64 × 64 pixels is assumed, and each quadrangle inside the thick frame represents a sub-block.
- FIG. 7B shows an example of square sub-block division based on a quadtree, in which the basic block of 64 × 64 pixels is divided into sub-blocks of 32 × 32 pixels.
- FIGS. 7C to 7F show examples of rectangular sub-block division. In FIG. 7C, the basic block is divided into vertically long rectangular sub-blocks of 32 × 64 pixels, and in FIG. 7D it is divided into horizontally long rectangular sub-blocks of 64 × 32 pixels.
- In FIGS. 7E and 7F, the basic block is divided into rectangular sub-blocks at a ratio of 1:2:1. In this way, not only square sub-blocks but also rectangular (non-square) sub-blocks are used for the encoding process. Further, the basic block may first be divided into a plurality of square blocks, and sub-block division may be performed based on those divided square blocks. In other words, the size of the basic block is not limited to 64 × 64 pixels, and basic blocks of a plurality of sizes may be used.
- In the present embodiment, only the configuration of FIG. 7A, in which the basic block of 64 × 64 pixels is not divided, and the quadtree division of FIG. 7B are used, but the sub-block division method is not limited to these. A ternary tree division as shown in FIGS. 7E and 7F or a binary tree division as shown in FIGS. 7C and 7D may also be used.
- When such sub-block divisions are used, a quantization matrix corresponding to the sub-block to be used is generated in the quantization matrix holding unit 103.
- In that case, the new base quantization matrix is also encoded by the quantization matrix coding unit 113.
- Intra prediction generates prediction pixels of the block to be encoded using coded pixels located spatially around the block to be encoded, and also generates intra prediction mode information indicating which intra prediction method, such as horizontal prediction, vertical prediction, or DC prediction, is used.
- Inter prediction generates prediction pixels of the block to be encoded using coded pixels of frames that are temporally different from the block to be encoded, and also generates motion information indicating the frame to be referred to, the motion vector, and the like.
- As described above, the prediction unit 104 may use a prediction method that combines intra prediction and inter prediction.
- Predicted image data is generated from the determined prediction mode and encoded pixels, and a prediction error is generated from the input image data and the predicted image data, and is output to the conversion / quantization unit 105. Further, information such as subblock division and prediction mode is output as prediction information to the coding unit 110 and the image reproduction unit 107.
- the conversion / quantization unit 105 performs orthogonal conversion / quantization on the input prediction error and generates a quantization coefficient. First, the orthogonal conversion process corresponding to the size of the subblock is performed to generate the orthogonal conversion coefficient, and then the orthogonal conversion coefficient is calculated by using the quantization matrix stored in the quantization matrix holding unit 103 according to the prediction mode. Quantize and generate quantization coefficients. A more specific orthogonal conversion / quantization process will be described below.
- First, when the 32 × 32 subblock division of FIG. 7B is selected, the 32 × 32 prediction error is subjected to orthogonal transformation using a 32 × 32 orthogonal transformation matrix to generate 32 × 32 orthogonal transformation coefficients. Specifically, the 32 × 32 orthogonal transformation matrix, represented by the discrete cosine transform (DCT), is multiplied by the 32 × 32 prediction error to calculate an intermediate coefficient in the form of a 32 × 32 matrix. The 32 × 32 intermediate coefficient is further multiplied by the transposed matrix of the 32 × 32 orthogonal transformation matrix to generate 32 × 32 orthogonal transformation coefficients.
- The 32 × 32 orthogonal conversion coefficients thus generated are quantized using the 32 × 32 quantization matrix shown in FIG. 8B and the quantization parameter to generate 32 × 32 quantization coefficients. Since there are four 32 × 32 subblocks in the 64 × 64 basic block, the above process is repeated four times.
- On the other hand, when the 64 × 64 division state (no division) shown in FIG. 7A is selected, an orthogonal transformation matrix generated by thinning out the odd-numbered rows (hereinafter, odd rows) of the 64 × 64 orthogonal transformation matrix is used for the 64 × 64 prediction error. That is, 32 × 32 orthogonal transformation coefficients are generated by performing the orthogonal transformation using the orthogonal transformation matrix generated by thinning out the odd rows.
- Specifically, the odd rows are first thinned out from the 64 × 64 orthogonal transformation matrix to generate a 64 × 32 orthogonal transformation matrix.
- This 64 × 32 orthogonal transformation matrix is multiplied by the 64 × 64 prediction error to generate an intermediate coefficient in the form of a 64 × 32 matrix.
- The 64 × 32 intermediate coefficient is multiplied by the 32 × 64 transposed matrix obtained by transposing the above 64 × 32 orthogonal transformation matrix to generate 32 × 32 orthogonal transformation coefficients.
- In the present embodiment, the conversion / quantization unit 105 treats the generated 32 × 32 orthogonal conversion coefficients as the coefficients of the upper left portion (x coordinates 0 to 31, y coordinates 0 to 31) of the 64 × 64 orthogonal conversion coefficients and sets the others to 0, thereby executing zero-out.
- In this way, in the present embodiment, zero-out is executed by generating 32 × 32 orthogonal conversion coefficients using the 64 × 32 orthogonal transformation matrix and the 32 × 64 transposed matrix obtained by transposing it with respect to the 64 × 64 prediction error.
- As a result, compared with a method of performing the 64 × 64 orthogonal conversion and then forcibly setting part of the generated 64 × 64 orthogonal conversion coefficients to 0 even if their values are not 0, the 32 × 32 orthogonal conversion coefficients can be generated with a smaller amount of calculation.
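- The computational shortcut described above, obtaining 32 × 32 coefficients directly from a 64 × 64 prediction error with a reduced transformation matrix and its transpose, can be sketched as follows. The DCT construction and the choice of retaining the 32 low-frequency basis vectors are assumptions for illustration and do not reproduce the exact matrices of this description:
```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II matrix; row k is the k-th frequency basis vector."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] /= np.sqrt(2.0)
    return m

A = dct_matrix(64)            # full 64x64 transform matrix
T = A[:32, :].T               # 64x32 reduced matrix (low-frequency basis assumed)

err = np.random.randn(64, 64)                 # 64x64 prediction error
intermediate = err @ T                        # 64x32 intermediate coefficients
coeff_32 = T.T @ intermediate                 # 32x32 orthogonal transform coefficients

# Same result as the full 64x64 transform followed by keeping the top-left 32x32:
full = A @ err @ A.T
assert np.allclose(coeff_32, full[:32, :32])
```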
- Alternatively, the orthogonal transformation may be performed using the 64 × 64 orthogonal transformation matrix, and the resulting orthogonal transformation coefficients targeted for zero-out may be regarded as 0 and encoded regardless of whether or not they are actually 0; however, using the thinned-out matrices reduces the amount of calculation in the orthogonal transformation.
- In this way, the amount of calculation can be reduced by using a method of calculating 32 × 32 orthogonal conversion coefficients directly from the 64 × 64 prediction error, but the zero-out method is not limited to this method, and various other methods can be used.
- For example, information indicating that the orthogonal conversion coefficients in the range targeted for zero-out are 0 may be encoded, or simply information indicating that zero-out has been performed (a flag or the like) may be encoded.
- By decoding such information, the image decoding device can decode each block by regarding the coefficients targeted for zero-out as 0.
- Then, the conversion / quantization unit 105 quantizes the generated 32 × 32 orthogonal conversion coefficients using the 32 × 32 quantization matrix shown in FIG. 8C and the quantization parameter to generate 32 × 32 quantization coefficients.
- In the present embodiment, the quantization matrix of FIG. 8B is used for the 32 × 32 orthogonal conversion coefficients corresponding to the 32 × 32 subblocks, and the quantization matrix of FIG. 8C is used for the 32 × 32 orthogonal conversion coefficients corresponding to the 64 × 64 subblock.
- That is, the quantization matrix of FIG. 8B is used for the 32 × 32 orthogonal conversion coefficients on which zero-out is not executed, and the quantization matrix of FIG. 8C is used for the 32 × 32 orthogonal conversion coefficients corresponding to the 64 × 64 subblock on which zero-out is executed. However, the quantization matrix to be used is not limited to this.
- the generated quantization coefficient is output to the coding unit 110 and the inverse quantization / inverse conversion unit 106.
- The inverse quantization / inverse conversion unit 106 inversely quantizes the input quantization coefficients using the quantization matrix stored in the quantization matrix holding unit 103 and the quantization parameter, and reproduces the orthogonal conversion coefficients. Then, the inverse quantization / inverse conversion unit 106 further performs inverse orthogonal conversion on the reproduced orthogonal conversion coefficients to reproduce the prediction error.
- a quantization matrix corresponding to the size of the subblock to be encoded is used as in the conversion / quantization unit 105.
- When the 32 × 32 subblock division of FIG. 7B is selected, the inverse quantization / inverse conversion unit 106 inversely quantizes the 32 × 32 quantization coefficients generated by the conversion / quantization unit 105 using the quantization matrix of FIG. 8B to reproduce 32 × 32 orthogonal conversion coefficients. Then, the inverse quantization / inverse transformation unit 106 multiplies the 32 × 32 orthogonal transformation coefficients by the above-mentioned 32 × 32 transposed matrix to calculate an intermediate coefficient in the form of a 32 × 32 matrix.
- Then, the inverse quantization / inverse transformation unit 106 multiplies the 32 × 32 intermediate coefficient by the 32 × 32 orthogonal transformation matrix described above to reproduce the 32 × 32 prediction error.
- the same processing is performed for each 32 ⁇ 32 subblock.
- On the other hand, when the 64 × 64 subblock of FIG. 7A is selected, the 32 × 32 quantization coefficients generated by the conversion / quantization unit 105 are inversely quantized using the quantization matrix of FIG. 8C, and the 32 × 32 orthogonal conversion coefficients are reproduced.
- Then, the above-mentioned 32 × 64 transposed matrix is multiplied by the 32 × 32 orthogonal transformation coefficients to calculate an intermediate coefficient in the form of a 32 × 64 matrix.
- The 32 × 64 intermediate coefficient is multiplied by the 64 × 32 orthogonal transformation matrix described above to reproduce the 64 × 64 prediction error.
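- Correspondingly, a sketch of the inverse side (reproducing the 64 × 64 prediction error from 32 × 32 coefficients, with the zeroed-out coefficients implicitly treated as 0) under the same assumptions as the encoder-side sketch:
```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II matrix; row k is the k-th frequency basis vector."""
    k = np.arange(n).reshape(-1, 1); i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] /= np.sqrt(2.0)
    return m

A = dct_matrix(64)
T = A[:32, :].T                       # 64x32 reduced matrix (low-frequency basis assumed)
coeff_32 = np.random.randn(32, 32)    # reproduced 32x32 orthogonal transform coefficients

intermediate = coeff_32 @ T.T         # 32x64 intermediate coefficients
err_64 = T @ intermediate             # 64x64 reproduced prediction error

# Equivalent to zero-padding the coefficients to 64x64 and applying the full
# 64x64 inverse transform (the zeroed-out coefficients are treated as 0).
padded = np.zeros((64, 64)); padded[:32, :32] = coeff_32
assert np.allclose(err_64, A.T @ padded @ A)
```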
- the same quantization matrix used in the conversion / quantization unit 105 is used according to the size of the subblock, and the inverse quantization process is executed.
- the reproduced prediction error is output to the image reproduction unit 107.
- the image reproduction unit 107 reproduces the predicted image by appropriately referring to the data necessary for reproducing the predicted image stored in the frame memory 108 based on the prediction information input from the prediction unit 104. Then, the image data is reproduced from the reproduced predicted image and the reproduced prediction error input from the inverse quantization / inverse conversion unit 106, input to the frame memory 108, and stored.
- the in-loop filter unit 109 reads the reproduced image from the frame memory 108 and performs in-loop filter processing such as a deblocking filter. Then, the filtered image is input to the frame memory 108 again and stored again.
- the coding unit 110 entropy-encodes the quantization coefficient generated by the conversion / quantization unit 105 and the prediction information input from the prediction unit 104 in block units to generate code data.
- the method of entropy coding is not particularly specified, but Golomb coding, arithmetic coding, Huffman coding and the like can be used.
- the generated code data is output to the integrated coding unit 111.
- the integrated coding unit 111 forms a bit stream by multiplexing the code data input from the coding unit 110 together with the code data of the header described above. Eventually, the bitstream is output from terminal 112 to the outside.
- FIG. 6A is an example of the bit stream output in the first embodiment.
- the sequence header contains the code data of the base quantization matrix and is composed of the coded results of each element.
- the position where the code data of the base quantization matrix is encoded is not limited to this, and of course, a configuration in which the code data is encoded in the picture header portion or other header portions may be adopted.
- When the quantization matrix is changed within one sequence, it can be updated by newly encoding the base quantization matrix. At this time, all the quantization matrices may be rewritten, or only a part of them may be changed by specifying the subblock size of the quantization matrix corresponding to the quantization matrix to be rewritten.
- FIG. 3 is a flowchart showing a coding process in the image coding apparatus according to the first embodiment.
- Prior to image coding, in step S301, the quantization matrix holding unit 103 generates and holds two-dimensional quantization matrices.
- the base quantization matrix shown in FIG. 8A and the quantization matrix shown in FIGS. 8B and 8C generated from the base quantization matrix are generated and held.
- In step S302, the quantization matrix coding unit 113 scans the base quantization matrix used for generating the quantization matrices in step S301, calculates the difference between successive elements in the scanning order, and generates a one-dimensional difference matrix.
- In the present embodiment, the scanning method of FIG. 9 is used for the base quantization matrix shown in FIG. 8A, and the difference matrix shown in FIG. 10 is generated.
- the quantization matrix coding unit 113 further encodes the generated difference matrix to generate the quantization matrix code data.
- step S303 the integrated coding unit 111 encodes and outputs the header information necessary for coding the image data together with the generated quantization matrix code data.
- step S304 the block division unit 102 divides the input image in frame units into basic block units of 64 ⁇ 64 pixels.
- In step S305, the prediction unit 104 executes prediction processing on the image data in basic block units generated in step S304 using the prediction method described above, generates prediction information such as sub-block division information and the prediction mode, and generates predicted image data.
- two types of subblock sizes are used: the 32 ⁇ 32 pixel subblock division shown in FIG. 7B and the 64 ⁇ 64 pixel subblock shown in FIG. 7A.
- the prediction error is calculated from the input image data and the predicted image data.
- In step S306, the conversion / quantization unit 105 orthogonally transforms the prediction error calculated in step S305 to generate orthogonal conversion coefficients. Then, the conversion / quantization unit 105 further performs quantization using the quantization matrix generated and held in step S301 and the quantization parameter to generate quantization coefficients. Specifically, the prediction error of the 32 × 32 pixel subblocks of FIG. 7B is multiplied by the 32 × 32 orthogonal transformation matrix and its transposed matrix to generate 32 × 32 orthogonal transformation coefficients. On the other hand, the prediction error of the 64 × 64 pixel subblock of FIG. 7A is multiplied by the 64 × 32 orthogonal transformation matrix, in which the odd rows are thinned out, and its transposed matrix, thereby generating 32 × 32 orthogonal transformation coefficients.
- In the present embodiment, it is assumed that the 32 × 32 orthogonal conversion coefficients of the 32 × 32 subblocks of FIG. 7B are quantized using the quantization matrix of FIG. 8B, and the 32 × 32 orthogonal conversion coefficients of the 64 × 64 subblock of FIG. 7A are quantized using the quantization matrix of FIG. 8C.
- In step S307, the inverse quantization / inverse conversion unit 106 inversely quantizes the quantization coefficients generated in step S306 using the quantization matrix generated and held in step S301 and the quantization parameter, and reproduces the orthogonal conversion coefficients. Further, inverse orthogonal conversion is performed on the orthogonal conversion coefficients to reproduce the prediction error.
- In this step, the same quantization matrix as that used in step S306 is used and the inverse quantization process is performed. Specifically, the 32 × 32 quantization coefficients corresponding to the 32 × 32 pixel subblocks of FIG. 7B are subjected to inverse quantization processing using the quantization matrix of FIG. 8B to reproduce the 32 × 32 orthogonal conversion coefficients.
- the 32 ⁇ 32 orthogonal transformation coefficient is multiplied by using the 32 ⁇ 32 orthogonal transformation matrix and its transposed matrix, and the prediction error of 32 ⁇ 32 pixels is reproduced.
- the 32 ⁇ 32 quantization coefficient corresponding to the 64 ⁇ 64 pixel subblock of FIG. 7A is subjected to the inverse quantization process using the quantization matrix of FIG. 8C, and the orthogonal conversion coefficient of 32 ⁇ 32 is obtained.
- the 32 ⁇ 32 orthogonal transformation coefficient is multiplied by using the 64 ⁇ 32 orthogonal transformation matrix and its transposed matrix to reproduce the prediction error of 64 ⁇ 64 pixels.
- step S308 the image reproduction unit 107 reproduces the predicted image based on the prediction information generated in step S305. Further, the image data is reproduced from the reproduced predicted image and the predicted error generated in step S307.
- step S309 the coding unit 110 encodes the prediction information generated in step S305 and the quantization coefficient generated in step S306 to generate code data. It also generates a bitstream including other code data.
- In step S310, the image coding apparatus determines whether or not the coding of all the basic blocks in the frame has been completed. If so, the process proceeds to step S311; if not, the process returns to step S304 for the next basic block.
- step S311 the in-loop filter unit 109 performs in-loop filter processing on the image data reproduced in step S308, generates a filtered image, and ends the processing.
- With the above configuration and operation, by reducing the number of orthogonal conversion coefficients and performing quantization processing using a quantization matrix corresponding to the reduced orthogonal conversion coefficients, quantization can be controlled for each frequency component while reducing the amount of calculation, and the subjective image quality can be improved.
- In particular, for the 64 × 64 subblock on which zero-out is executed, a quantization matrix in which only the low-frequency portion of the base quantization matrix is enlarged, as shown in FIG. 8C, is used, so that optimal quantization control can be realized for the low-frequency portion to be quantized.
- the low frequency portion referred to here is in the range of 0 to 3 in the x coordinate and 0 to 3 in the y coordinate.
- In the present embodiment, the base quantization matrix of FIG. 8A, which is commonly used for generating the quantization matrices of FIGS. 8B and 8C, is encoded.
- Instead, the quantization matrix of FIG. 8C itself may be encoded. In that case, finer quantization control can be realized for each frequency component.
- Further, in the present embodiment, the quantization matrix corresponding to the orthogonal conversion coefficients of the 64 × 64 subblock is generated by enlarging the upper left 4 × 4 portion of the 8 × 8 base quantization matrix by a factor of eight, but instead the entire 8 × 8 base quantization matrix may be enlarged by a factor of four. In this way, finer quantization control can also be realized for the orthogonal conversion coefficients of the 64 × 64 subblock.
- the quantization matrix for the 64 ⁇ 64 subblock using zero out is uniquely determined, but it may be configured to be selectable by introducing an identifier.
- FIG. 6B shows an example of a bit stream in which a quantization matrix coding method information code is newly introduced so that the quantization matrix for the 64 × 64 subblock using zero-out can be selectively encoded.
- For example, when the quantization matrix coding method information code indicates 0, FIG. 8C, which is an independent quantization matrix, is used for the orthogonal conversion coefficients corresponding to the 64 × 64 pixel subblock using zero-out.
- On the other hand, when the coding method information code indicates 1, FIG. 8B, which is the quantization matrix for a normal subblock without zero-out, is used for the 64 × 64 pixel subblock using zero-out.
- Further, when the coding method information code indicates 2, all the elements of the quantization matrix used for the 64 × 64 pixel subblock using zero-out are encoded, instead of the 8 × 8 base quantization matrix. This makes it possible to selectively realize either a reduction in the quantization matrix code amount or original quantization control for the subblock using zero-out.
- In the present embodiment, only the 64 × 64 subblock is processed using zero-out, but the subblock processed using zero-out is not limited to this.
- For example, among the orthogonal conversion coefficients of a subblock, the 32 × 32 orthogonal conversion coefficients in the lower half or the right half may be forcibly set to 0.
- In that case, only the 32 × 32 orthogonal conversion coefficients of the upper half or the left half become the target of quantization and encoding, and the quantization process is performed on them using a quantization matrix different from that of FIG. 8B.
- Further, the value of the quantization matrix element corresponding to the DC coefficient located at the upper left end, which has the greatest effect on image quality, may be set and encoded separately from the values of the elements of the 8 × 8 base matrix. FIGS. 12B and 12C show examples in which the value of the element located at the upper left end, corresponding to the DC component, is changed as compared with FIGS. 8B and 8C.
- In this case, the quantization matrices shown in FIGS. 12B and 12C can be set by separately encoding the information indicating "2" located at the DC portion, in addition to the information of the base quantization matrix of FIG. 8A. As a result, finer quantization control can be applied to the DC component of the orthogonal conversion coefficients, which has the greatest effect on image quality.
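- A minimal sketch of this DC-element override (the helper name and the expanded matrix passed to it are illustrative; the value 2 follows the example in the text):
```python
import numpy as np

def apply_dc_override(qm_32: np.ndarray, dc_value: int) -> np.ndarray:
    """Replace the element at the upper-left end (corresponding to the DC
    coefficient) with a separately signalled value, as in FIGS. 12B and 12C."""
    out = qm_32.copy()
    out[0, 0] = dc_value
    return out

qm = np.full((32, 32), 7)            # placeholder 32x32 quantization matrix
qm_with_dc = apply_dc_override(qm, 2)  # DC element set from the separately coded value
assert qm_with_dc[0, 0] == 2 and qm_with_dc[0, 1] == 7
```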
- FIG. 2 is a block diagram showing a configuration of an image decoding device according to a second embodiment of the present invention.
- an image decoding device that decodes the coded data generated in the first embodiment will be described as an example.
- Reference numeral 201 denotes a terminal to which an encoded bit stream is input.
- Reference numeral 202 denotes a separation / decoding unit, which separates the bitstream into code data related to information related to decoding processing and coefficients, and decodes the code data existing in the header part of the bitstream.
- the quantization matrix code is separated and output to the subsequent stage.
- the separation / decoding unit 202 performs the reverse operation of the integrated coding unit 111 of FIG.
- Reference numeral 209 denotes a quantization matrix decoding unit, which decodes the quantization matrix code from the bit stream, reproduces the base quantization matrix, and further executes a process of generating each quantization matrix from the base quantization matrix.
- Reference numeral 203 denotes a decoding unit, which decodes the code data output from the separation decoding unit 202 and reproduces (derives) the quantization coefficient and the prediction information.
- Reference numeral 204 denotes an inverse quantization / inverse conversion unit which, similarly to the inverse quantization / inverse conversion unit 106 in FIG. 1, inversely quantizes the quantization coefficients using the reproduced quantization matrix and the quantization parameter to obtain orthogonal conversion coefficients, and further performs inverse orthogonal conversion to reproduce the prediction error.
- The information for deriving the quantization parameter is also decoded from the bit stream by the decoding unit 203. Further, the function of performing inverse quantization and the function of performing inverse orthogonal conversion may be configured separately.
- Reference numeral 206 denotes a frame memory, which stores the image data of reproduced pictures.
- Reference numeral 205 denotes an image reproduction unit, which appropriately refers to the frame memory 206 to generate predicted image data, and then generates and outputs reproduced image data from the predicted image data and the prediction error reproduced by the inverse quantization / inverse conversion unit 204.
- Reference numeral 207 denotes an in-loop filter unit. Similar to the in-loop filter unit 109 in FIG. 1, it performs in-loop filter processing such as a deblocking filter on the reproduced image and outputs the filtered image.
- 208 is a terminal and outputs the reproduced image data to the outside.
- In the present embodiment, the bit stream generated in the first embodiment is input in frame units (picture units).
- the bit stream for one frame input from the terminal 201 is input to the separation / decoding unit 202.
- the separation / decoding unit 202 separates the bitstream into code data related to information related to decoding processing and coefficients, and decodes the code data existing in the header part of the bitstream. More specifically, the quantization matrix code data is reproduced.
- the quantization matrix code data is extracted from the sequence header of the bitstream shown in FIG. 6A, and is output to the quantization matrix decoding unit 209.
- the quantization matrix code data corresponding to the base quantization matrix shown in FIG. 8A is extracted and output.
- the code data of the basic block unit of the picture data is reproduced and output to the decoding unit 203.
- The quantization matrix decoding unit 209 first decodes the input quantization matrix code data and reproduces the one-dimensional difference matrix shown in FIG. 10. In the present embodiment, as in the first embodiment, decoding is performed using the coding table shown in FIG. 11A; however, the coding table is not limited to this, and another coding table may be used as long as it is the same as that used in the first embodiment. Further, the quantization matrix decoding unit 209 reproduces the two-dimensional quantization matrix from the reproduced one-dimensional difference matrix. Here, the operation opposite to that of the quantization matrix coding unit 113 of the first embodiment is performed. That is, in the present embodiment, the base quantization matrix shown in FIG. 8A is regenerated and held from the difference matrix shown in FIG. 10 by using the scanning method shown in FIG. 9.
- the quantization matrix decoding unit 209 reproduces each element in the quantization matrix by sequentially adding each difference value in the difference matrix from the above-mentioned initial value. Then, the quantization matrix decoding unit 209 sequentially associates each of the reproduced one-dimensional elements with each element of the two-dimensional quantization matrix according to the scanning method shown in FIG. 9, thereby associating the two-dimensional quantization matrix with each element. To play.
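- The decoder-side reconstruction is the inverse of the encoder-side difference sketch given earlier; the raster scan and the initial value 8 are again illustrative assumptions:
```python
import numpy as np

def from_difference_list(diffs, scan_order, shape=(8, 8), init_value: int = 8):
    """Rebuild the base quantization matrix by cumulatively adding each decoded
    difference to the previous element value along the scan order."""
    base = np.zeros(shape, dtype=np.int32)
    prev = init_value
    for d, (y, x) in zip(diffs, scan_order):
        prev += d
        base[y, x] = prev
    return base

raster = [(y, x) for y in range(8) for x in range(8)]  # placeholder for the FIG. 9 scan
# A round trip with the encoder-side sketch above would return the original matrix.
```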
- The quantization matrix decoding unit 209 expands the regenerated base quantization matrix in the same manner as in the first embodiment to generate the two types of 32 × 32 quantization matrices shown in FIGS. 8B and 8C.
- the quantization matrix of FIG. 8B is a 32 ⁇ 32 quantization matrix in which each element of the 8 ⁇ 8 base quantization matrix of FIG. 8A is expanded four times by repeating it four times in the vertical and horizontal directions.
- the quantization matrix of FIG. 8C is a 32 ⁇ 32 quantization matrix expanded by repeating each element of the upper left 4 ⁇ 4 portion of the base quantization matrix of FIG. 8A eight times in the vertical and horizontal directions.
- the generated quantization matrix is not limited to this, and if the size of the quantization coefficient to be dequantized in the subsequent stage exists other than 32 ⁇ 32, 16 ⁇ 16, 8 ⁇ 8, 4 ⁇ 4 A quantization matrix corresponding to the size of the quantization coefficient to be dequantized may be generated. These generated quantization matrices are retained and used for the subsequent inverse quantization process.
- the decoding unit 203 decodes the code data from the bit stream and reproduces the quantization coefficient and the prediction information.
- the size of the sub-block to be decoded is determined based on the decoded prediction information, the regenerated quantization coefficients are output to the inverse quantization / inverse conversion unit 204, and the reproduced prediction information is output to the image reproduction unit 205.
- in the present embodiment, 32 × 32 quantization coefficients are regenerated for each sub-block, regardless of whether the size of the sub-block to be decoded is 64 × 64 as in FIG. 7A or 32 × 32 as in FIG. 7B.
- the inverse quantization / inverse conversion unit 204 inversely quantizes the input quantization coefficients using the quantization matrix reproduced by the quantization matrix decoding unit 209 and the quantization parameter to generate the orthogonal conversion coefficients, and reproduces the prediction error by performing inverse orthogonal transformation. The inverse quantization / inverse orthogonal conversion processing is described in more detail below.
- for a 32 × 32 sub-block as in FIG. 7B, the 32 × 32 quantization coefficients reproduced by the decoding unit 203 are inversely quantized using the quantization matrix of FIG. 8B to reproduce the 32 × 32 orthogonal conversion coefficients. Then, the above-mentioned 32 × 32 transposed matrix is multiplied by the 32 × 32 orthogonal conversion coefficients to calculate 32 × 32 intermediate coefficients. The 32 × 32 intermediate coefficients are multiplied by the above-mentioned 32 × 32 orthogonal transformation matrix to reproduce the 32 × 32 prediction error. The same processing is performed for each 32 × 32 sub-block.
- for a 64 × 64 sub-block as in FIG. 7A, the 32 × 32 quantization coefficients reproduced by the decoding unit 203 are inversely quantized using the quantization matrix of FIG. 8C to reproduce the 32 × 32 orthogonal conversion coefficients. Then, the above-mentioned 32 × 64 transposed matrix is multiplied by the 32 × 32 orthogonal conversion coefficients to calculate 32 × 64 intermediate coefficients. The 32 × 64 intermediate coefficients are multiplied by the above-mentioned 64 × 32 orthogonal transformation matrix to reproduce the 64 × 64 prediction error.
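- the two cases can be sketched as follows. The element-wise dequantization, the DCT-II basis used as the orthogonal transform, and the handling of the quantization step are simplifications assumed for illustration; the actual orthogonal transformation matrices and quantization-parameter scaling follow the first embodiment.

```python
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II basis (rows are basis vectors), used here as a
    stand-in for the orthogonal transformation matrix."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    basis = np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    basis[0] *= 1.0 / np.sqrt(2.0)
    return basis * np.sqrt(2.0 / n)

def dequantize(coeff32, qmatrix32, qstep=1.0):
    """Simple multiplicative inverse quantization (illustrative only)."""
    return coeff32 * qmatrix32 * qstep

def inverse_transform_32(coeff32):
    """32x32 sub-block of FIG. 7B: (32x32 transposed matrix) x coefficients
    x (32x32 matrix) reproduces a 32x32 prediction error."""
    t = dct_basis(32)
    return t.T @ coeff32 @ t

def inverse_transform_64_zero_out(coeff32):
    """64x64 sub-block of FIG. 7A: only the low-frequency 32x32 coefficients
    exist; the 32x64 / 64x32 matrices of the text correspond here to the
    first 32 basis vectors of a 64-point transform."""
    t_low = dct_basis(64)[:32, :]        # 32x64
    intermediate = coeff32 @ t_low       # 32x32 @ 32x64 -> 32x64
    return t_low.T @ intermediate        # 64x32 @ 32x64 -> 64x64 prediction error
```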
- the reproduced prediction error is output to the image reproduction unit 205.
- the quantization matrix used in the inverse quantization process is determined according to the size of the sub-block to be decoded, which is determined from the prediction information reproduced by the decoding unit 203. That is, the quantization matrix of FIG. 8B is used for the inverse quantization process in each of the 32 × 32 sub-blocks of FIG. 7B, and the quantization matrix of FIG. 8C is used for the 64 × 64 sub-block of FIG. 7A.
- the quantization matrix used is not limited to this, and may be the same as the quantization matrix used in the conversion / quantization unit 105 and the inverse quantization / inverse conversion unit 106 of the first embodiment.
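- a minimal sketch of this selection, assuming only the two sub-block sizes of FIGS. 7A and 7B appear, might look as follows.

```python
def select_quantization_matrix(subblock_size, qm_fig8b, qm_fig8c):
    """Pick the 32x32 quantization matrix from the decoded sub-block size."""
    if subblock_size == 64:   # 64x64 sub-block of FIG. 7A (zero-out case)
        return qm_fig8c
    if subblock_size == 32:   # 32x32 sub-block of FIG. 7B
        return qm_fig8b
    raise ValueError("sub-block size not covered by this sketch")
```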
- the image reproduction unit 205 appropriately refers to the frame memory 206 based on the prediction information input from the decoding unit 203, acquires the data necessary for reproducing the prediction image, and reproduces the prediction image.
- as in the prediction unit 104 of the first embodiment, two types of prediction methods, intra prediction and inter prediction, are used. Further, as described above, a prediction method that combines intra prediction and inter prediction may also be used. As in the first embodiment, the prediction process is performed in sub-block units.
- the image reproduction unit 205 reproduces image data from the prediction image generated by the prediction processing and the prediction error input from the inverse quantization / inverse conversion unit 204. Specifically, the image reproduction unit 205 reproduces the image data by adding the predicted image and the prediction error.
- the reproduced image data is appropriately stored in the frame memory 206.
- the stored image data is appropriately referred to when predicting other sub-blocks.
- the in-loop filter unit 207 reads the reproduced image from the frame memory 206 and performs in-loop filter processing such as a deblocking filter. Then, the filtered image is input to the frame memory 206 again.
- the reproduced image stored in the frame memory 206 is finally output from the terminal 208 to the outside.
- the reproduced image is output to, for example, an external display device or the like.
- FIG. 4 is a flowchart showing an image decoding process in the image decoding apparatus according to the second embodiment.
- in step S401, the separation / decoding unit 202 separates the bit stream into code data related to the decoding process and code data of the coefficients, and decodes the code data of the header portion. More specifically, it reproduces the quantization matrix code data.
- in step S402, the quantization matrix decoding unit 209 first decodes the quantization matrix code data reproduced in step S401 and reproduces the one-dimensional difference matrix shown in FIG. 10. Next, the quantization matrix decoding unit 209 reproduces the two-dimensional base quantization matrix from the reproduced one-dimensional difference matrix. Further, the quantization matrix decoding unit 209 expands the regenerated two-dimensional base quantization matrix to generate quantization matrices.
- the quantization matrix decoding unit 209 reproduces the difference matrix shown in FIG. 10 and the base quantization matrix shown in FIG. 8A by using the scanning method shown in FIG. 9. Further, the quantization matrix decoding unit 209 expands the regenerated base quantization matrix to generate and hold the quantization matrices shown in FIGS. 8B and 8C.
- in step S403, the decoding unit 203 decodes the code data separated in step S401 and reproduces the quantization coefficients and the prediction information. Further, the size of the sub-block to be decoded is determined based on the decoded prediction information. In the present embodiment, 32 × 32 quantization coefficients are reproduced for each sub-block, regardless of whether the size of the sub-block to be decoded is 64 × 64 as in FIG. 7A or 32 × 32 as in FIG. 7B.
- in step S404, the inverse quantization / inverse conversion unit 204 inversely quantizes the quantization coefficients using the quantization matrix reproduced in step S402 to obtain the orthogonal conversion coefficients, and further performs inverse orthogonal conversion to reproduce the prediction error.
- the quantization matrix used in the inverse quantization process is determined according to the size of the sub-block to be decoded, which is determined from the prediction information reproduced in step S403. That is, the quantization matrix of FIG. 8B is used for the inverse quantization process in each of the 32 × 32 sub-blocks of FIG. 7B, and the quantization matrix of FIG. 8C is used for the 64 × 64 sub-block of FIG. 7A.
- the quantization matrix used is not limited to this, and may be the same as the quantization matrix used in steps S306 and S307 of the first embodiment.
- step S405 the image reproduction unit 205 reproduces the predicted image from the predicted information generated in step S403.
- here, two types of prediction methods, intra prediction and inter prediction, are used. Further, the image data is reproduced from the reproduced predicted image and the prediction error generated in step S404.
- in step S406, the image decoding apparatus determines whether or not all the basic blocks in the frame have been decoded; if so, it proceeds to step S407, and if not, it returns to step S403 with the next basic block as the target.
- step S407 the in-loop filter unit 207 performs in-loop filter processing on the image data reproduced in step S405, generates a filtered image, and ends the processing.
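- taken together, steps S401 to S407 can be summarized by the following illustrative driver loop; the decoder object and its method names are hypothetical stand-ins for the units of FIG. 2, not an actual API.

```python
def decode_frame(bitstream, decoder):
    """Hypothetical per-frame driver mirroring steps S401-S407."""
    header, block_data = decoder.separate(bitstream)                # S401
    qmatrices = decoder.decode_quantization_matrices(header)        # S402
    reproduced_blocks = []
    for block in block_data:                                        # loop closed by S406
        coeff, pred_info = decoder.decode_block(block)              # S403
        error = decoder.inverse_quantize_and_transform(
            coeff, pred_info, qmatrices)                            # S404
        reproduced_blocks.append(decoder.reconstruct(pred_info, error))  # S405
    return decoder.in_loop_filter(reproduced_blocks)                # S407
```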
- as described above, in the present embodiment it is possible to decode a bit stream with improved subjective image quality, in which the quantization is controlled for each frequency component using a quantization matrix even for the sub-blocks generated in the first embodiment in which only the low-frequency orthogonal conversion coefficients are quantized and encoded. Further, for a sub-block in which only the low-frequency orthogonal conversion coefficients are quantized and encoded, a quantization matrix obtained by enlarging only the low-frequency part of the base quantization matrix, as shown in FIG. 8C, is used, so that it is possible to decode a bit stream in which quantization control optimal for the low-frequency part has been performed.
- in the present embodiment, the base quantization matrix of FIG. 8A, which is commonly used for generating the quantization matrices of FIGS. 8B and 8C, is decoded; however, it may be configured to decode the quantization matrices of FIGS. 8B and 8C themselves. In that case, since a unique value can be set for each frequency component of each quantization matrix, it is possible to decode a bit stream that realizes finer quantization control for each frequency component.
- further, the quantization matrix corresponding to the 64 × 64 orthogonal conversion coefficients may be generated by enlarging the whole 8 × 8 base quantization matrix four times, instead of enlarging the upper-left 4 × 4 portion of the 8 × 8 base quantization matrix eight times. In this way, finer quantization control can be realized even for the 64 × 64 orthogonal conversion coefficients.
- the quantization matrix for the 64 × 64 sub-block using zero-out is uniquely determined, but it may be configured to be selectable by introducing an identifier.
- for example, a quantization matrix coding method information code may be newly introduced, as in FIG. 6B, to selectively encode the quantization matrix for a 64 × 64 sub-block using zero-out. For example, either the independent quantization matrix of FIG. 8C, or the quantization matrix of FIG. 8B, which is the quantization matrix for a normal sub-block not using zero-out, may be used for a 64 × 64 sub-block using zero-out.
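- a sketch of such a selectable configuration is shown below; the meaning assigned to each identifier value is an assumption for illustration.

```python
import numpy as np

def zero_out_block_matrix(coding_method_id, base8):
    """Choose how the quantization matrix for a zero-out 64x64 sub-block is
    obtained, according to a (hypothetical) quantization matrix coding
    method information code."""
    base8 = np.asarray(base8)
    if coding_method_id == 0:
        # independent low-frequency matrix, as in FIG. 8C
        return np.kron(base8[:4, :4], np.ones((8, 8), dtype=int))
    if coding_method_id == 1:
        # reuse the matrix for normal, non-zero-out sub-blocks, as in FIG. 8B
        return np.kron(base8, np.ones((4, 4), dtype=int))
    raise ValueError("identifier value not covered by this sketch")
```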
- in the present embodiment, the sub-blocks processed using zero-out are only the 64 × 64 sub-blocks, but the sub-blocks processed using zero-out are not limited to this.
- for example, it may be configured so that the lower half and the right half of the 32 × 32 orthogonal conversion coefficients are not decoded and only the quantization coefficients of the upper half and the left half are decoded. In that case, the quantization process for the decoded upper-half and left-half portion of the 32 × 32 orthogonal conversion coefficients will be performed using a quantization matrix different from that of FIG. 8B.
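- the coefficient restriction described above can be sketched as follows, assuming that the decoded region is the upper-left 16 × 16 portion of the 32 × 32 coefficients.

```python
import numpy as np

def keep_upper_left_16(coeff32):
    """Zero-out variant for a 32x32 sub-block: only the upper-left 16x16
    quantization coefficients are decoded; the rest is treated as zero."""
    coeff32 = np.asarray(coeff32)
    kept = np.zeros((32, 32), dtype=coeff32.dtype)
    kept[:16, :16] = coeff32[:16, :16]
    return kept
```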
- further, the value of the quantization matrix element corresponding to the DC coefficient located at the upper-left end may be configured to be decoded and set separately from the values of the elements of the 8 × 8 base matrix. FIGS. 12B and 12C show an example in which the value of the element located at the upper-left end, corresponding to the DC component, is changed compared with FIGS. 8B and 8C.
- in this case, the quantization matrices shown in FIGS. 12B and 12C can be set by separately decoding the information indicating "2" located in the DC portion, in addition to the information of the base quantization matrix of FIG. 8A. As a result, it is possible to decode a bit stream in which finer quantization control is applied to the DC component of the orthogonal conversion coefficients, which has the greatest effect on image quality.
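- the separate handling of the DC element can be sketched as follows; the value 2 is the example of FIGS. 12B and 12C.

```python
import numpy as np

def apply_dc_override(qmatrix32, dc_value=2):
    """Replace the element for the DC coefficient (upper-left end) with a
    separately decoded value, leaving the rest of the matrix unchanged."""
    out = np.array(qmatrix32, copy=True)
    out[0, 0] = dc_value
    return out
```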
- Each processing unit shown in FIGS. 1 and 2 has been described in the above embodiments as being configured by hardware. However, the processing performed by each processing unit shown in these figures may be configured by a computer program.
- FIG. 5 is a block diagram showing a configuration example of computer hardware applicable to the image coding device and the image decoding device according to each of the above embodiments.
- the CPU 501 controls the entire computer by using the computer programs and data stored in the RAM 502 and the ROM 503, and also executes each of the above-described processes as performed by the image processing apparatus according to each of the above embodiments. That is, the CPU 501 functions as each processing unit shown in FIGS. 1 and 2.
- the RAM 502 has an area for temporarily storing computer programs and data loaded from the external storage device 506, data acquired from the outside via the I / F (interface) 507, and the like. Further, the RAM 502 has a work area used by the CPU 501 when executing various processes. That is, the RAM 502 can be allocated as a frame memory, for example, or can provide various other areas as appropriate.
- the ROM 503 stores the setting data of this computer, the boot program, and the like.
- the operation unit 504 is composed of a keyboard, a mouse, and the like, and can be operated by a user of the computer to input various instructions to the CPU 501.
- the output unit 505 outputs the processing result by the CPU 501.
- the output unit 505 is composed of, for example, a liquid crystal display.
- the external storage device 506 is a large-capacity information storage device represented by a hard disk drive device.
- the external storage device 506 stores an OS (operating system) and a computer program for realizing the functions of the respective parts shown in FIGS. 1 and 2 in the CPU 501. Further, each image data as a processing target may be stored in the external storage device 506.
- the computer programs and data stored in the external storage device 506 are appropriately loaded into the RAM 502 according to the control by the CPU 501, and are processed by the CPU 501.
- a network such as a LAN or the Internet, or another device such as a projection device or a display device, can be connected to the I / F 507, and the computer can acquire and send various information via the I / F 507.
- Reference numeral 508 is a bus connecting the above-mentioned parts.
- in the apparatus having the above configuration, the CPU 501 plays a central role in controlling the operations described in the above flowcharts.
- Each embodiment can also be achieved by supplying a storage medium on which the code of a computer program realizing the above-mentioned functions is recorded to a system, and having the system read and execute the code of the computer program.
- in this case, the computer program code itself read from the storage medium realizes the functions of the above-described embodiments, and the storage medium storing the computer program code constitutes the present invention. This also includes cases where the operating system (OS) running on the computer performs part or all of the actual processing based on the instructions of the program code, and the above-mentioned functions are realized by that processing.
- the computer program code read from the storage medium may also be written to a memory provided in a function expansion card inserted into the computer or in a function expansion unit connected to the computer. In that case, based on the instructions of the computer program code, a CPU or the like provided in the function expansion card or the function expansion unit performs part or all of the actual processing to realize the above-mentioned functions.
- the method of forcibly setting a part of the orthogonal conversion coefficients to 0 can be executed more efficiently.
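- the efficiency gain can be seen in the following sketch: because only the low-frequency 32 × 32 coefficients are ever computed, the coefficients that would be forced to 0 are simply never produced. The 32 × 64 matrix t_low is assumed to hold the low-frequency basis vectors, standing in for the 64 × 32 / 32 × 64 matrices of the first embodiment.

```python
import numpy as np

def forward_transform_64_with_zero_out(pred_error64, t_low):
    """Forward transform of a 64x64 block under zero-out: only a 32x32 block
    of low-frequency coefficients is computed.
    pred_error64 : 64x64 prediction error block
    t_low        : 32x64 matrix of low-frequency basis vectors (assumption)."""
    return t_low @ np.asarray(pred_error64) @ t_low.T   # (32x64)(64x64)(64x32) -> 32x32
```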
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Discrete Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
Embodiments of the present invention will be described below with reference to the drawings.
FIG. 2 is a block diagram showing the configuration of an image decoding apparatus according to the second embodiment of the present invention. In the present embodiment, an image decoding apparatus that decodes the coded data generated in the first embodiment will be described as an example.
Each processing unit shown in FIGS. 1 and 2 has been described in the above embodiments as being configured by hardware. However, the processing performed by each processing unit shown in these figures may be configured by a computer program.
Each embodiment can also be achieved by supplying a storage medium on which the code of a computer program realizing the above-mentioned functions is recorded to a system, and having the system read and execute the code of the computer program. In this case, the computer program code itself read from the storage medium realizes the functions of the above-described embodiments, and the storage medium storing the computer program code constitutes the present invention. This also includes cases where the operating system (OS) running on the computer performs part or all of the actual processing based on the instructions of the program code, and the above-mentioned functions are realized by that processing.
Claims (21)
- 1. An image coding apparatus capable of encoding an image in units of a plurality of blocks including a block of P×Q pixels (P and Q being integers) to generate a bitstream, the image coding apparatus comprising: orthogonal transform means for generating N×M orthogonal transform coefficients (N being an integer satisfying N<P, and M being an integer satisfying M<Q) by performing an orthogonal transform on prediction errors of the block of P×Q pixels; and quantization means for quantizing the N×M orthogonal transform coefficients using at least a quantization matrix having N×M elements to generate N×M quantized coefficients.
- 2. The image coding apparatus according to claim 1, wherein the block is a square block.
- 3. The image coding apparatus according to claim 2, wherein P and Q are 64, and N and M are 32.
- 4. The image coding apparatus according to claim 2, wherein P and Q are 128, and N and M are 32.
- 5. The image coding apparatus according to claim 3, wherein the orthogonal transform means generates the N×M orthogonal transform coefficients by performing the orthogonal transform using a 64×32 matrix and a 32×64 matrix.
- 6. The image coding apparatus according to claim 1, further comprising generation means for generating the bitstream indicating that quantized coefficients, other than the N×M quantized coefficients, corresponding to the block of P×Q pixels are zero.
- 7. The image coding apparatus according to claim 1, wherein the N×M quantized coefficients correspond to orthogonal transform coefficients in a predetermined range including a DC component among the quantized coefficients corresponding to the block of P×Q pixels.
- 8. The image coding apparatus according to claim 7, wherein quantized coefficients, other than the N×M quantized coefficients, corresponding to the block of P×Q pixels correspond to frequency components higher than the predetermined range.
- 9. The image coding apparatus according to claim 7, wherein the predetermined range corresponds to N×M orthogonal transform coefficients including a DC component among the P×Q orthogonal transform coefficients.
- 10. The image coding apparatus according to claim 1, wherein the block is a non-square block.
- 11. An image coding method capable of encoding an image in units of a plurality of blocks including a block of P×Q pixels (P and Q being integers) to generate a bitstream, the image coding method comprising: an orthogonal transform step of generating N×M orthogonal transform coefficients (N being an integer satisfying N<P, and M being an integer satisfying M<Q) by performing an orthogonal transform on prediction errors of the block of P×Q pixels; and a quantization step of quantizing the N×M orthogonal transform coefficients using at least a quantization matrix having N×M elements to generate N×M quantized coefficients.
- 12. The image coding method according to claim 11, wherein the block is a square block.
- 13. The image coding method according to claim 12, wherein P and Q are 64, and N and M are 32.
- 14. The image coding method according to claim 12, wherein P and Q are 128, and N and M are 32.
- 15. The image coding method according to claim 13, wherein, in the orthogonal transform step, the N×M orthogonal transform coefficients are generated by performing the orthogonal transform using a 64×32 matrix and a 32×64 matrix.
- 16. The image coding method according to claim 11, further comprising a generation step of generating the bitstream indicating that quantized coefficients, other than the N×M quantized coefficients, corresponding to the block of P×Q pixels are zero.
- 17. The image coding method according to claim 11, wherein the N×M quantized coefficients correspond to orthogonal transform coefficients in a predetermined range including a DC component among the quantized coefficients corresponding to the block of P×Q pixels.
- 18. The image coding method according to claim 17, wherein quantized coefficients, other than the N×M quantized coefficients, corresponding to the block of P×Q pixels correspond to frequency components higher than the predetermined range.
- 19. The image coding method according to claim 17, wherein the predetermined range corresponds to N×M orthogonal transform coefficients including a DC component among the P×Q orthogonal transform coefficients.
- 20. The image coding method according to claim 11, wherein the block is a non-square block.
- 21. A program for causing a computer to function as each means of the image coding apparatus according to claim 1.
Priority Applications (14)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410260038.6A CN118158409A (zh) | 2019-03-11 | 2020-02-28 | 图像解码设备、图像解码方法和存储介质 |
CN202410260040.3A CN118158410A (zh) | 2019-03-11 | 2020-02-28 | 图像解码设备、图像解码方法和存储介质 |
CN202410260036.7A CN118158408A (zh) | 2019-03-11 | 2020-02-28 | 图像解码设备、图像解码方法和存储介质 |
KR1020217031424A KR20210125095A (ko) | 2019-03-11 | 2020-02-28 | 화상 부호화 장치, 화상 부호화 방법 및 저장 매체 |
CN202410260046.0A CN118158411A (zh) | 2019-03-11 | 2020-02-28 | 图像解码设备、图像解码方法和存储介质 |
CN202410260049.4A CN118158412A (zh) | 2019-03-11 | 2020-02-28 | 图像解码设备、图像解码方法和存储介质 |
CN202080020387.0A CN113574881B (zh) | 2019-03-11 | 2020-02-28 | 图像编码设备和方法、图像解码设备和方法以及存储介质 |
EP20769936.4A EP3941053A4 (en) | 2019-03-11 | 2020-02-28 | IMAGE CODING DEVICE, IMAGE CODING METHOD, AND PROGRAM |
US17/468,371 US11985352B2 (en) | 2019-03-11 | 2021-09-07 | Image coding apparatus, image coding method, and storage media |
US18/635,725 US20240259600A1 (en) | 2019-03-11 | 2024-04-15 | Image coding apparatus, image coding method, and storage media |
US18/635,703 US20240259599A1 (en) | 2019-03-11 | 2024-04-15 | Image coding apparatus, image coding method, and storage media |
US18/635,778 US20240259602A1 (en) | 2019-03-11 | 2024-04-15 | Image coding apparatus, image coding method, and storage media |
US18/635,763 US20240267562A1 (en) | 2019-03-11 | 2024-04-15 | Image coding apparatus, image coding method, and storage media |
US18/635,745 US20240259601A1 (en) | 2019-03-11 | 2024-04-15 | Image coding apparatus, image coding method, and storage media |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019-044276 | 2019-03-11 | ||
JP2019044276A JP2020150340A (ja) | 2019-03-11 | 2019-03-11 | 画像符号化装置、画像符号化方法、及びプログラム |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/468,371 Continuation US11985352B2 (en) | 2019-03-11 | 2021-09-07 | Image coding apparatus, image coding method, and storage media |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020184229A1 true WO2020184229A1 (ja) | 2020-09-17 |
Family
ID=72427977
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2020/008439 WO2020184229A1 (ja) | 2019-03-11 | 2020-02-28 | 画像符号化装置、画像符号化方法、及びプログラム |
Country Status (7)
Country | Link |
---|---|
US (6) | US11985352B2 (ja) |
EP (1) | EP3941053A4 (ja) |
JP (2) | JP2020150340A (ja) |
KR (1) | KR20210125095A (ja) |
CN (6) | CN118158411A (ja) |
TW (2) | TWI800712B (ja) |
WO (1) | WO2020184229A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP4124036A4 (en) * | 2020-09-27 | 2023-11-22 | Tencent Technology (Shenzhen) Company Limited | METHOD, APPARATUS AND DEVICE FOR VIDEO ENCODING/DECODING |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7267785B2 (ja) | 2019-03-11 | 2023-05-02 | キヤノン株式会社 | 画像復号装置、画像復号方法、及びプログラム |
JP2020150338A (ja) * | 2019-03-11 | 2020-09-17 | キヤノン株式会社 | 画像復号装置、画像復号方法、及びプログラム |
CN117519887B (zh) * | 2023-12-13 | 2024-03-12 | 南京云玑信息科技有限公司 | 一种提升云电脑远程操作体验的方法及系统 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013038758A (ja) | 2011-07-13 | 2013-02-21 | Canon Inc | 画像符号化装置、画像符号化方法及びプログラム、画像復号装置、画像復号方法及びプログラム |
JP2017513342A (ja) * | 2014-03-17 | 2017-05-25 | クゥアルコム・インコーポレイテッドQualcomm Incorporated | ゼロアウトされた係数を使用した低複雑な順変換のためのシステムおよび方法 |
WO2018008387A1 (ja) * | 2016-07-04 | 2018-01-11 | ソニー株式会社 | 画像処理装置および方法 |
WO2019188097A1 (ja) * | 2018-03-28 | 2019-10-03 | ソニー株式会社 | 画像処理装置及び画像処理方法 |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS61294585A (ja) * | 1985-06-21 | 1986-12-25 | Nec Corp | 画像信号の変換符号化方式 |
JP3403724B2 (ja) * | 1990-08-20 | 2003-05-06 | 株式会社東芝 | 画像再生装置及び方法 |
JP2001136527A (ja) * | 1999-11-08 | 2001-05-18 | Matsushita Electric Ind Co Ltd | 直交変換画像の解像度変換装置及び方法 |
EP1597909A4 (en) * | 2003-02-21 | 2007-06-06 | Matsushita Electric Ind Co Ltd | IMAGE ENCODING METHOD AND IMAGE DECODING METHOD |
BRPI0413988A (pt) * | 2003-08-26 | 2006-11-07 | Thomson Licensing | método e aparelho para decodificar blocos intra-inter codificador hìbridos |
JP2009021802A (ja) * | 2007-07-11 | 2009-01-29 | Toshiba Corp | 動画像符号化装置及び方法 |
JP4902553B2 (ja) * | 2008-01-08 | 2012-03-21 | 日本電信電話株式会社 | 画像符号化方法、画像復号方法、画像符号化装置、画像復号装置、画像符号化プログラム、画像復号プログラムおよびそれらのプログラムを記録したコンピュータ読み取り可能な記録媒体 |
US8913666B2 (en) * | 2010-10-01 | 2014-12-16 | Qualcomm Incorporated | Entropy coding coefficients using a joint context model |
US9641846B2 (en) * | 2010-10-22 | 2017-05-02 | Qualcomm Incorporated | Adaptive scanning of transform coefficients for video coding |
US9288496B2 (en) * | 2010-12-03 | 2016-03-15 | Qualcomm Incorporated | Video coding using function-based scan order for transform coefficients |
JP6120490B2 (ja) * | 2011-11-07 | 2017-04-26 | キヤノン株式会社 | 画像符号化装置、画像符号化方法及びプログラム、画像復号装置、画像復号方法及びプログラム |
CN107071436A (zh) * | 2011-11-08 | 2017-08-18 | 株式会社Kt | 对视频信号进行解码的方法 |
AU2012355057B2 (en) * | 2011-12-19 | 2016-08-18 | Sony Corporation | Image processing device and method |
JP2015035660A (ja) * | 2013-08-07 | 2015-02-19 | 株式会社Jvcケンウッド | 画像符号化装置、画像符号化方法、及び画像符号化プログラム |
WO2018125972A1 (en) * | 2016-12-28 | 2018-07-05 | Arris Enterprises Llc | Adaptive unequal weight planar prediction |
US10855997B2 (en) * | 2017-04-14 | 2020-12-01 | Mediatek Inc. | Secondary transform kernel size selection |
CN117812271A (zh) * | 2019-02-01 | 2024-04-02 | Lg电子株式会社 | 图像解码方法、图像编码方法、存储介质和发送方法 |
JP2020150338A (ja) * | 2019-03-11 | 2020-09-17 | キヤノン株式会社 | 画像復号装置、画像復号方法、及びプログラム |
-
2019
- 2019-03-11 JP JP2019044276A patent/JP2020150340A/ja active Pending
-
2020
- 2020-02-28 CN CN202410260046.0A patent/CN118158411A/zh active Pending
- 2020-02-28 EP EP20769936.4A patent/EP3941053A4/en active Pending
- 2020-02-28 CN CN202410260040.3A patent/CN118158410A/zh active Pending
- 2020-02-28 KR KR1020217031424A patent/KR20210125095A/ko not_active Application Discontinuation
- 2020-02-28 CN CN202410260038.6A patent/CN118158409A/zh active Pending
- 2020-02-28 CN CN202080020387.0A patent/CN113574881B/zh active Active
- 2020-02-28 CN CN202410260036.7A patent/CN118158408A/zh active Pending
- 2020-02-28 CN CN202410260049.4A patent/CN118158412A/zh active Pending
- 2020-02-28 WO PCT/JP2020/008439 patent/WO2020184229A1/ja active Application Filing
- 2020-03-09 TW TW109107590A patent/TWI800712B/zh active
- 2020-03-09 TW TW112111515A patent/TWI849820B/zh active
-
2021
- 2021-09-07 US US17/468,371 patent/US11985352B2/en active Active
-
2023
- 2023-06-07 JP JP2023093895A patent/JP2023105156A/ja active Pending
-
2024
- 2024-04-15 US US18/635,703 patent/US20240259599A1/en active Pending
- 2024-04-15 US US18/635,725 patent/US20240259600A1/en active Pending
- 2024-04-15 US US18/635,778 patent/US20240259602A1/en active Pending
- 2024-04-15 US US18/635,745 patent/US20240259601A1/en active Pending
- 2024-04-15 US US18/635,763 patent/US20240267562A1/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013038758A (ja) | 2011-07-13 | 2013-02-21 | Canon Inc | 画像符号化装置、画像符号化方法及びプログラム、画像復号装置、画像復号方法及びプログラム |
JP2017513342A (ja) * | 2014-03-17 | 2017-05-25 | クゥアルコム・インコーポレイテッドQualcomm Incorporated | ゼロアウトされた係数を使用した低複雑な順変換のためのシステムおよび方法 |
WO2018008387A1 (ja) * | 2016-07-04 | 2018-01-11 | ソニー株式会社 | 画像処理装置および方法 |
WO2019188097A1 (ja) * | 2018-03-28 | 2019-10-03 | ソニー株式会社 | 画像処理装置及び画像処理方法 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP4124036A4 (en) * | 2020-09-27 | 2023-11-22 | Tencent Technology (Shenzhen) Company Limited | METHOD, APPARATUS AND DEVICE FOR VIDEO ENCODING/DECODING |
Also Published As
Publication number | Publication date |
---|---|
JP2023105156A (ja) | 2023-07-28 |
CN118158409A (zh) | 2024-06-07 |
CN118158410A (zh) | 2024-06-07 |
CN113574881B (zh) | 2024-03-19 |
US20240259602A1 (en) | 2024-08-01 |
US20240259600A1 (en) | 2024-08-01 |
TW202332273A (zh) | 2023-08-01 |
CN118158412A (zh) | 2024-06-07 |
JP2020150340A (ja) | 2020-09-17 |
TW202034691A (zh) | 2020-09-16 |
CN113574881A (zh) | 2021-10-29 |
EP3941053A1 (en) | 2022-01-19 |
KR20210125095A (ko) | 2021-10-15 |
TWI849820B (zh) | 2024-07-21 |
US20240259599A1 (en) | 2024-08-01 |
US20210409772A1 (en) | 2021-12-30 |
CN118158411A (zh) | 2024-06-07 |
EP3941053A4 (en) | 2022-12-21 |
CN118158408A (zh) | 2024-06-07 |
US20240267562A1 (en) | 2024-08-08 |
TWI800712B (zh) | 2023-05-01 |
US11985352B2 (en) | 2024-05-14 |
US20240259601A1 (en) | 2024-08-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020184229A1 (ja) | 画像符号化装置、画像符号化方法、及びプログラム | |
JP7497486B2 (ja) | 画像復号装置、画像復号方法、及びプログラム | |
JP2024015184A (ja) | 画像復号装置及び方法及びプログラム | |
JP7545556B2 (ja) | 画像符号化装置及び画像復号装置及びそれらの制御方法及びプログラム | |
JP2024023793A (ja) | 画像符号化装置及び画像復号装置及び画像符号化方法及び画像復号方法及びプログラム | |
JP2023113858A (ja) | 画像復号装置、画像復号方法、及びプログラム | |
WO2020003740A1 (ja) | 画像符号化装置及び画像復号装置及びそれらの制御方法及びプログラム | |
WO2020183859A1 (ja) | 画像符号化装置、画像復号装置、画像符号化方法、画像復号方法、及びプログラム | |
WO2020129498A1 (ja) | 画像符号化装置、画像符号化方法、画像復号装置、画像復号方法 | |
JP2021150723A (ja) | 画像符号化装置、画像符号化方法、及びプログラム、画像復号装置、画像復号方法、及びプログラム | |
JP2024149754A (ja) | 画像符号化装置及び画像復号装置及びそれらの制御方法及びプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20769936 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 20217031424 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2021128440 Country of ref document: RU |
|
ENP | Entry into the national phase |
Ref document number: 2020769936 Country of ref document: EP Effective date: 20211011 |