WO2018061588A1 - Image encoding device, image encoding method, image encoding program, image decoding device, image decoding method, and image decoding program - Google Patents

Image encoding device, image encoding method, image encoding program, image decoding device, image decoding method, and image decoding program

Info

Publication number
WO2018061588A1
Authority
WO
WIPO (PCT)
Prior art keywords
value
color component
prediction
unit
quantization
Prior art date
Application number
PCT/JP2017/031137
Other languages
English (en)
Japanese (ja)
Inventor
佐藤 数史
Original Assignee
株式会社ドワンゴ
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社ドワンゴ filed Critical 株式会社ドワンゴ
Publication of WO2018061588A1 publication Critical patent/WO2018061588A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124Quantisation
    • H04N19/126Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • the present disclosure relates to an image encoding device, an image encoding method, an image encoding program, an image decoding device, an image decoding method, and an image decoding program.
  • H.265/MPEG-H HEVC (High Efficiency Video Coding)
  • AVC (MPEG-4 Advanced Video Coding)
  • intra prediction that generates a prediction value by performing spatial prediction within a frame and inter prediction that generates a prediction value by performing motion compensation prediction between frames are used.
  • In Non-Patent Documents 1 and 2, as an improvement of intra prediction in the development of a successor to HEVC, cross-component linear model prediction, which uses decoded pixel values of the luminance signal to generate prediction values of the color difference signals, has been proposed.
  • Non-Patent Document 1 describes that the prediction value of one color difference signal is corrected using the prediction residual of the other color difference signal.
  • Cross component linear model prediction is abbreviated as CCLM prediction.
  • CCLM prediction differs from other intra predictions in that there is a dependency between the components of the video signal (the luminance signal and the color difference signals).
  • Embodiments provide an image encoding device, an image encoding method, and an image encoding program, as well as an image decoding device, an image decoding method, and an image decoding program, capable of reducing the code amount by using the dependency between the components of the video signal in CCLM prediction and adjusting the quantization values of the reference source and the reference destination.
  • According to one aspect, an image encoding device includes: a prediction value generation unit that generates a prediction value of a color component in each block in a picture using either a first prediction mode, which generates the prediction value of the color component by a linear model using decoded pixel values of the luminance component, or a second prediction mode, which generates the prediction value of the color component without using decoded pixel values of the luminance component; a subtractor that subtracts the prediction value from the color component of the original image to generate a prediction residual; an orthogonal transform unit that orthogonally transforms the prediction residual to generate orthogonal transform coefficients; and a quantization unit that, when the prediction value generation unit selects the first prediction mode, quantizes the orthogonal transform coefficients by setting the quantization value of the reference-destination color component higher than that of the reference-source color component.
  • According to another aspect, there is provided an image encoding method including: generating a prediction value of a color component using either the first prediction mode, which generates the prediction value of the color component by a linear model using decoded pixel values of the luminance component, or the second prediction mode, which generates the prediction value of the color component without using decoded pixel values of the luminance component; subtracting the prediction value from the color component of the original image to generate a prediction residual; orthogonally transforming the prediction residual to generate orthogonal transform coefficients; and, when the first prediction mode is selected to generate the prediction value of the color component, quantizing the orthogonal transform coefficients by setting the quantization value of the reference-destination color component higher than that of the reference-source color component.
  • According to another aspect, there is provided an image encoding program that causes a computer to execute the steps of: generating a prediction value of a color component using either the first prediction mode, which generates the prediction value of the color component by a linear model using decoded pixel values of the luminance component, or the second prediction mode, which generates the prediction value of the color component without using decoded pixel values of the luminance component; subtracting the prediction value from the color component of the original image to generate a prediction residual; orthogonally transforming the prediction residual to generate orthogonal transform coefficients; and quantizing the orthogonal transform coefficients by setting the quantization value of the reference-destination color component higher than that of the reference-source color component.
  • According to another aspect, an image decoding device includes: an entropy decoding unit that entropy-decodes a bitstream in which the prediction value of the color component in each block in a picture has been generated and encoded using either the first prediction mode, which generates the prediction value of the color component by a linear model using decoded pixel values of the luminance component, or the second prediction mode, which generates the prediction value of the color component without using decoded pixel values of the luminance component; and an inverse quantization unit that, for a block in which the prediction value of the color component was generated and encoded using the first prediction mode, inversely quantizes the entropy-decoded orthogonal transform coefficients by setting the quantization value of the reference-destination color component higher than that of the reference-source color component.
  • According to another aspect, there is provided an image decoding method including: entropy-decoding a bitstream in which the prediction value of the color component in each block in a picture has been generated and encoded using either the first prediction mode, which generates the prediction value of the color component by a linear model using decoded pixel values of the luminance component, or the second prediction mode, which generates the prediction value of the color component without using decoded pixel values of the luminance component; and, for a block in which the prediction value of the color component was generated and encoded using the first prediction mode, inversely quantizing the entropy-decoded orthogonal transform coefficients by setting the quantization value of the reference-destination color component higher than that of the reference-source color component.
  • According to still another aspect, there is provided an image decoding program that causes a computer to execute the steps of: entropy-decoding a bitstream in which the prediction value of the color component in each block in a picture has been generated and encoded using either the first prediction mode, which generates the prediction value of the color component by a linear model using decoded pixel values of the luminance component, or the second prediction mode, which generates the prediction value without using them; and, for blocks encoded using the first prediction mode, inversely quantizing the entropy-decoded orthogonal transform coefficients by setting the quantization value of the reference-destination color component higher than that of the reference-source color component.
  • According to the embodiments, the code amount can be reduced by using the dependency between the components of the video signal in CCLM prediction and adjusting the quantization values of the reference source and the reference destination.
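  • As a rough illustration of this idea (not the actual HEVC quantizer), the following Python sketch models the QP-to-step-size relationship and shows that adding a QP offset to the reference-destination component yields smaller coefficient levels, and hence fewer bits to entropy-code. The coefficient values and the offset of 6 are assumptions for illustration only.

```python
def qp_to_step(qp):
    # Approximate HEVC-style relationship: the quantization step size
    # doubles for every increase of 6 in QP.
    return 2 ** ((qp - 4) / 6)

def quantize(coeffs, qp):
    step = qp_to_step(qp)
    return [round(c / step) for c in coeffs]

def dequantize(levels, qp):
    step = qp_to_step(qp)
    return [v * step for v in levels]

# Hypothetical transform coefficients of a reference-destination block.
coeffs = [100, -52, 13, 7, -3, 1]
base_qp = 30   # QP of the reference-source component (assumed)
offset = 6     # extra QP applied to the reference destination (assumed)
fine = quantize(coeffs, base_qp)
coarse = quantize(coeffs, base_qp + offset)
# The coarser levels have smaller magnitudes, so they entropy-code into
# fewer bits, at the cost of a larger reconstruction error.
```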
  • FIG. 1 is a block diagram illustrating an overall configuration example of an image encoding / decoding system including an image encoding device and an image decoding device.
  • FIG. 2 is a block diagram illustrating an image encoding device according to an embodiment.
  • FIG. 3 is a diagram illustrating the structure of the coding tree unit.
  • FIG. 4 is a diagram illustrating the structure of the prediction unit.
  • FIG. 5 is a diagram illustrating the structure of the conversion unit.
  • FIG. 6 is a diagram illustrating a configuration example of a frame divided into a plurality of slices.
  • FIG. 7 is a diagram illustrating an example of a prediction mode of intra prediction.
  • FIG. 8 is a diagram showing the relationship between the blocks of the Cb and Cr signals and the block of the Y signal in the case of 4:2:0 format Y, Cb, Cr video signals.
  • FIG. 9 is a block diagram illustrating a specific configuration example of the quantization unit 4 and the inverse quantization unit 8 in FIG.
  • FIG. 10 is a diagram illustrating an example of syntax transmitted by the image encoding device according to the embodiment.
  • FIG. 11 is a flowchart illustrating the operation of the image encoding device according to the embodiment, the processing by the image encoding method according to the embodiment, and the processing executed by the image encoding program according to the embodiment.
  • FIG. 12A is a block diagram illustrating a schematic configuration of a computer including a storage unit that stores an image encoding program according to an embodiment.
  • FIG. 12B is a block diagram illustrating a schematic configuration of a computer including a storage unit that stores an image decoding program according to an embodiment.
  • FIG. 13 is a block diagram illustrating an image decoding apparatus according to an embodiment.
  • FIG. 14 is a block diagram illustrating a specific configuration example of the inverse quantization unit 23 in FIG. 13.
  • FIG. 15 is a flowchart illustrating an operation of the image decoding apparatus according to the embodiment, processing by the image decoding method according to the embodiment, and processing executed by the image decoding program according to the embodiment.
  • the preprocessing device 50 sets various conditions for encoding image information by the image encoding device 100 according to an embodiment in accordance with a user operation.
  • the image encoding device 100 encodes the image information according to the setting information from the preprocessing device 50 and outputs a bit stream.
  • the bit stream includes encoded image information and syntax.
  • the transmission device 110 transmits a bit stream to a predetermined transmission path 120.
  • the transmission path 120 is wired or wireless, and may be a communication line such as the Internet, a telephone line, or a radio wave for terrestrial broadcasting or satellite broadcasting that transmits a television signal.
  • the receiving device 150 receives the bit stream transmitted through the transmission path 120.
  • the image decoding apparatus 200 decodes a bit stream and outputs decoded image information.
  • FIG. 2 shows a specific configuration example of the image encoding device 100.
  • Pixels of image information as a digital signal are sequentially input to a rearrangement buffer 1. If the image information is an analog signal, it may be converted into a digital signal by an A/D converter preceding the rearrangement buffer 1.
  • the image information is, for example, a luminance signal Y (hereinafter referred to as Y signal) and color difference signals Cb and Cr (hereinafter referred to as Cb and Cr signals), and Y, Cb, Cr video signals in 4: 2: 0 format are taken as an example.
  • the rearrangement buffer 1 stores the input pixels for a plurality of frames, rearranges and reads out the frames (pictures) as necessary.
  • Each frame of image information is set to one of an I picture, which is encoded using only pixels within the frame, a P picture, which is predictively encoded using pixels of a past frame, and a B picture, which is predictively encoded using pixels of past and future frames.
  • the I picture, P picture, and B picture may be set by the preprocessing device 50, or may be selected by the image encoding device 100 according to a predetermined rule.
  • The rearrangement buffer 1 reads out the frames by rearranging their order for encoding when a group of pictures (GOP), which is a constituent unit of the sequence constituting the bitstream described later, includes B pictures.
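  • The reordering for B pictures can be sketched as follows; `reorder_for_coding` is a hypothetical helper that emits each B picture after the reference (I or P) picture that follows it in display order, since a B picture predicts from both past and future frames.

```python
def reorder_for_coding(frames):
    """frames: list of (display_index, picture_type) in display order.
    Returns the frames in coding order: B pictures are held back until
    the reference picture they depend on has been emitted."""
    out, pending_b = [], []
    for f in frames:
        if f[1] == 'B':
            pending_b.append(f)       # wait for the next I/P picture
        else:
            out.append(f)             # emit the reference first
            out.extend(pending_b)     # then the buffered B pictures
            pending_b.clear()
    return out + pending_b

display = [(0, 'I'), (1, 'B'), (2, 'B'), (3, 'P'), (4, 'B'), (5, 'B'), (6, 'P')]
coding = reorder_for_coding(display)
```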
  • The frame output from the rearrangement buffer 1 is divided into units composed of a plurality of pixels as follows. As shown in FIG. 3, the pixels constituting the frame are divided into coding tree units (CTU: Coding Tree Unit) of, for example, 64 horizontal pixels by 64 vertical pixels. The CTU corresponds to CU hierarchy 0. Each CTU may be divided into variable-sized coding units (CU: Coding Unit) based on recursive quadtree block division.
  • the CTU may be divided into CUs of CU hierarchy 1 with 32 horizontal pixels and 32 vertical pixels.
  • the CU of CU layer 1 may be divided into CUs of CU layer 2 of 16 pixels horizontally and 16 pixels vertically.
  • the CU of CU layer 2 may be divided into CUs of CU layer 3 of 8 horizontal pixels and 8 vertical pixels.
  • If the CTU is not divided, it becomes a CU as it is.
  • The maximum-size CU is called the largest coding unit (LCU: Largest Coding Unit), and the minimum-size CU is called the smallest coding unit (SCU: Smallest Coding Unit).
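  • The recursive quadtree division described above can be sketched as follows; `should_split` is a hypothetical stand-in for the encoder's cost-based split decision.

```python
def split_quadtree(x, y, size, min_size, should_split):
    """Recursively divide a CTU into CUs.

    should_split(x, y, size) is a caller-supplied predicate standing in
    for the cost-function-based decision made by a real encoder.
    Returns a list of (x, y, size) leaf CUs.
    """
    if size > min_size and should_split(x, y, size):
        half = size // 2
        cus = []
        for dy in (0, half):          # visit the four quadrants
            for dx in (0, half):
                cus += split_quadtree(x + dx, y + dy, half,
                                      min_size, should_split)
        return cus
    return [(x, y, size)]             # undivided: the block is a CU

# Example: split a 64x64 CTU all the way down to 16x16 CUs.
leaves = split_quadtree(0, 0, 64, 16, lambda x, y, s: True)
```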
  • The CU includes a coding block (CB) of the Y signal having the same size as the CU, and CBs of the Cb and Cr signals each having 1/4 the size of the CU.
  • the CU is divided into prediction units (PU: Prediction Unit) for prediction processing in the intra prediction unit 14 and the inter prediction unit 15 in FIG.
  • In a CU encoded based on intra-frame prediction by the intra prediction unit 14 (an intra CU), a 2N×2N or N×N PU is selected. The N×N PU can be used only in the SCU. In a CU encoded based on inter-frame prediction by the inter prediction unit 15 (an inter CU), N×N can be used only in SCUs where N is 16 or more, and 2N×nU, 2N×nD, nL×2N, and nR×2N can be used only when asymmetric motion partitioning is enabled. 2N×nU, 2N×nD, nL×2N, and nR×2N are PUs that divide the CU 1:3 vertically, 3:1 vertically, 1:3 horizontally, and 3:1 horizontally, respectively.
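  • The partition geometries named above can be tabulated as follows; this is an illustrative sketch, with `n` standing for N (half the CU side), and the mode names taken from the text.

```python
def pu_partitions(mode, n):
    """Return the (width, height) list of PUs for a 2Nx2N CU."""
    s = 2 * n      # full CU side (2N)
    q = s // 4     # one quarter of the CU side, used by asymmetric modes
    return {
        "2Nx2N": [(s, s)],
        "NxN":   [(n, n)] * 4,
        "2NxN":  [(s, n)] * 2,
        "Nx2N":  [(n, s)] * 2,
        "2NxnU": [(s, q), (s, s - q)],   # 1:3 vertical division
        "2NxnD": [(s, s - q), (s, q)],   # 3:1 vertical division
        "nLx2N": [(q, s), (s - q, s)],   # 1:3 horizontal division
        "nRx2N": [(s - q, s), (q, s)],   # 3:1 horizontal division
    }[mode]
```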
  • The PU includes a prediction block (PB) of the Y signal having the same size as the PU, and PBs of the Cb and Cr signals each having 1/4 the size of the PU.
  • For the orthogonal transform processing in the orthogonal transform unit 3 and the quantization processing in the quantization unit 4 in FIG. 2, the CU is divided into variable-sized transform units (TU: Transform Unit) based on recursive quadtree block division.
  • The maximum size of a TU is 32 horizontal pixels by 32 vertical pixels. If a TU of 32 horizontal pixels by 32 vertical pixels is TU hierarchy 0, TU hierarchy 0 may be divided into TUs of TU hierarchy 1 of 16 horizontal pixels by 16 vertical pixels. A TU of TU hierarchy 1 may be divided into TUs of TU hierarchy 2 of 8 horizontal pixels by 8 vertical pixels. A TU of TU hierarchy 2 may be divided into TUs of TU hierarchy 3 of 4 horizontal pixels by 4 vertical pixels.
  • In an intra CU, the parent node of the quadtree is the PU; in an inter CU, it is the CU.
  • The TU includes a transform block (TB) of the Y signal having the same size as the TU, and TBs of the Cb and Cr signals each having 1/4 the size of the TU.
  • CUs, PUs, and TUs of the above variable sizes are selected so that the cost function value described later is minimized. Since the CU, PU, and TU sizes that minimize the cost function value differ according to the pattern of the image information, the pixels of each frame are divided with CUs, PUs, and TUs of different sizes mixed.
  • FIGS. 3 to 5 exemplify the structures of CU, PU, and TU employed in HEVC, but the structures of CU, PU, and TU are not particularly limited. Instead of CU, PU, and TU, a macroblock structure like that employed in MPEG-2 may be used. In this embodiment, the case where the CU, PU, and TU structure is adopted is taken as an example.
  • a frame composed of a plurality of CTUs includes at least one slice.
  • In FIG. 6, a thick solid line indicates a boundary between slices, and the frame is divided into three slices SL1 to SL3. How the frame is divided into a plurality of slices is set by the preprocessing device 50.
  • Each of the slices SL1 to SL3 includes at least one continuous CTU.
  • the image encoding device 100 encodes image information in units of slices
  • the image decoding device 200 decodes image information in units of slices.
  • arrows indicated by solid lines indicate the order of encoding and decoding.
  • The subtracter 2 subtracts the prediction value (prediction image) generated by the intra prediction unit 14 or the inter prediction unit 15, described later, from the CU of the image information that is the original image output from the rearrangement buffer 1, to produce a prediction residual. The subtracter 2 supplies the prediction residual to the orthogonal transform unit 3.
  • the orthogonal transform unit 3 performs orthogonal transform on the prediction residual in units of TU and converts the prediction residual into a frequency domain signal.
  • The orthogonal transform unit 3 applies the discrete sine transform (DST) to the prediction residual only when intra prediction is selected and the TU is 4 horizontal pixels by 4 vertical pixels, and applies the discrete cosine transform (DCT) in all other cases.
  • the orthogonal transform unit 3 supplies the orthogonal transform coefficient to the quantization unit 4.
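  • The transform-selection rule and the two transform bases can be sketched as follows. The matrices are the orthonormal textbook DCT-II and DST-VII bases, not the integer approximations an actual codec uses, and `pick_transform` simply restates the rule from the text.

```python
import math

def dct2_matrix(n):
    # Orthonormal DCT-II basis, the core transform used for most TUs.
    m = []
    for k in range(n):
        scale = math.sqrt((1 if k == 0 else 2) / n)
        m.append([scale * math.cos(math.pi * (2 * j + 1) * k / (2 * n))
                  for j in range(n)])
    return m

def dst7_matrix(n):
    # Orthonormal DST-VII basis, used for 4x4 intra prediction residuals.
    return [[math.sqrt(4 / (2 * n + 1))
             * math.sin(math.pi * (2 * j + 1) * (k + 1) / (2 * n + 1))
             for j in range(n)]
            for k in range(n)]

def pick_transform(is_intra, tu_w, tu_h):
    # Selection rule from the text: DST only for intra 4x4, DCT otherwise.
    return "DST" if is_intra and tu_w == 4 and tu_h == 4 else "DCT"
```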
  • the quantization unit 4 quantizes the orthogonal transform coefficient and supplies it to the entropy encoding unit 5 and the inverse quantization unit 8. A specific configuration and operation of the quantization unit 4 will be described later.
  • the entropy encoding unit 5 assigns codes having different lengths to the quantized orthogonal transform coefficients based on the occurrence probabilities, and entropy codes the orthogonal transform coefficients.
  • the entropy encoding unit 5 also entropy encodes the syntax for encoding image information.
  • the syntax includes various syntax elements such as information for specifying a prediction mode, a motion vector, and a reference pixel selected by the intra prediction unit 14 or the inter prediction unit 15.
  • The entropy encoding unit 5 can entropy-encode the orthogonal transform coefficients and the syntax using context-based adaptive binary arithmetic coding (CABAC).
  • the rate control unit 7 controls the rate of the quantization operation in the quantization unit 4 so that the encoded data output from the entropy encoding unit 5 does not overflow or underflow.
  • An HRD (Hypothetical Reference Decoder) buffer 6 temporarily accumulates and outputs a bit stream composed of encoded data output from the entropy encoding unit 5.
  • the inverse quantization unit 8 inversely quantizes the quantized orthogonal transform coefficient in units of TUs and supplies the quantized orthogonal transform coefficient to the inverse orthogonal transform unit 9.
  • the inverse quantization operation in the inverse quantization unit 8 is an operation opposite to the quantization operation in the quantization unit 4.
  • the inverse orthogonal transform unit 9 performs inverse orthogonal transform on the input orthogonal transform coefficient in units of TU, and supplies the prediction residual to the adder 10.
  • the adder 10 adds the input prediction residual and the prediction value generated by the intra prediction unit 14 or the inter prediction unit 15 selected by the prediction value selection unit 16 to generate a decoded signal.
  • the decoded signal is supplied to the loop filter 11 and the frame memory 12.
  • the loop filter 11 reduces the coding noise of the decoded signal.
  • The loop filter 11 includes a deblocking filter that reduces distortion generated at block boundaries and a sample adaptive offset that reduces ringing distortion.
  • the decoded signal filtered by the loop filter 11 is supplied to the frame memory 12.
  • the frame memory 12 stores the decoded signal output from the adder 10 that has not been subjected to filter processing by the loop filter 11 and the decoded signal that has been subjected to filter processing by the loop filter 11.
  • The switch 13 supplies the decoded signal stored in the frame memory 12 that has not been filtered to the intra prediction unit 14, and supplies the filtered decoded signal stored in the frame memory 12 to the inter prediction unit 15.
  • For the Y signal, the intra prediction unit 14 generates prediction values in a total of 35 prediction modes: directional prediction modes in 33 directions, a DC (direct current) prediction mode, and a planar prediction mode, described later. The prediction mode is selected in units of PUs.
  • For the Cb and Cr signals, when the same prediction mode as that of the Y signal is not used in units of TUs, the intra prediction unit 14 generates prediction values in the vertical prediction mode, the horizontal prediction mode, the DC prediction mode, the planar prediction mode, and the CCLM prediction mode shown in FIGS. 7A to 7E, respectively. The prediction mode is selected in units of PUs.
  • In FIGS. 7A to 7D, H denotes the reference pixels located above the TU for which prediction values are generated, and V denotes the reference pixels located to the left of the TU.
  • the vertical prediction mode illustrated in FIG. 7A is a mode in which the prediction pixel in the TU is generated in the vertical direction using the reference pixel H.
  • the horizontal prediction mode illustrated in FIG. 7B is a mode in which the prediction pixel in the TU is generated in the horizontal direction using the reference pixel V.
  • the DC prediction mode shown in FIG. 7C is a mode for generating a prediction pixel in the TU using an average value of the reference pixels H and V.
  • the planar prediction mode illustrated in FIG. 7D is a mode in which prediction pixels in the TU are generated by interpolation prediction using four reference pixels of the reference pixels H and V.
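  • The vertical, horizontal, and DC modes can be sketched as follows; this is a minimal illustration using plain lists for the references H and V (the planar mode's four-pixel interpolation is omitted here).

```python
def intra_predict(mode, top, left):
    """Sketch of three chroma prediction modes for an NxN TU.

    top  = reference pixels H (the row above the TU)
    left = reference pixels V (the column to the left of the TU)
    """
    n = len(top)
    if mode == "vertical":    # each column copies the reference pixel above it
        return [[top[x] for x in range(n)] for _ in range(n)]
    if mode == "horizontal":  # each row copies the reference pixel to its left
        return [[left[y]] * n for y in range(n)]
    if mode == "dc":          # every pixel takes the mean of all references
        dc = (sum(top) + sum(left) + n) // (2 * n)  # +n rounds to nearest
        return [[dc] * n for _ in range(n)]
    raise ValueError(mode)
```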
  • the CCLM prediction mode shown in FIG. 7E is a mode for generating prediction pixels in the TU as follows.
  • the TB of the Cb and Cr signals and the TB of the Y signal have the relationship shown in FIGS. 8 (a) and 8 (b).
  • An example where N = 8 is shown. The peripheral pixels P1 to P16 located above and to the left of the TB of the Cb and Cr signals shown in FIG. 8A correspond to the peripheral interpolation pixels Pi1 to Pi16 at the illustrated positions, obtained by interpolating the peripheral pixels located above and to the left of the TB of the Y signal shown in FIG. 8B.
  • The prediction value pred_C(i, j) of the color difference signal is expressed by the linear model of equation (1), using the pixel value rec_L'(i, j) obtained by shifting the decoded pixel value of the Y signal to the phase of the color difference signal: pred_C(i, j) = α × rec_L'(i, j) + β ... (1). Here, (i, j) indicates a pixel position.
  • the intra prediction unit 14 generates a prediction value pred C (i, j) of the Cb and Cr signals using Expression (1).
  • α and β in equation (1) are obtained by, for example, the linear least squares method using the peripheral pixels P1 to P16 and the peripheral interpolation pixels Pi1 to Pi16. Therefore, α and β need not be transmitted in the bitstream.
  • The intra prediction unit 14 corrects the prediction value pred_Cr(i, j) of the Cr signal using the prediction residual resi_Cb'(i, j) of the Cb signal based on equation (2), generating the corrected prediction value pred*_Cr(i, j) of the Cr signal: pred*_Cr(i, j) = pred_Cr(i, j) + α × resi_Cb'(i, j) ... (2).
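  • Equations (1) and (2) can be sketched as follows. The least-squares fit of α and β mirrors the description above; the α passed to the Cr correction is derived separately per the text, so here it is simply supplied by the caller.

```python
def cclm_params(luma_ref, chroma_ref):
    """Least-squares fit of the linear model in equation (1).

    luma_ref:   interpolated luma neighbours (Pi1..Pi16)
    chroma_ref: chroma neighbours (P1..P16)
    Returns (alpha, beta) minimising sum((c - (alpha*l + beta))**2).
    """
    n = len(luma_ref)
    sl, sc = sum(luma_ref), sum(chroma_ref)
    sll = sum(l * l for l in luma_ref)
    slc = sum(l * c for l, c in zip(luma_ref, chroma_ref))
    denom = n * sll - sl * sl
    alpha = (n * slc - sl * sc) / denom if denom else 0.0
    beta = (sc - alpha * sl) / n
    return alpha, beta

def cclm_predict(rec_luma, alpha, beta):
    # Equation (1): pred_C(i, j) = alpha * rec_L'(i, j) + beta
    return [[alpha * v + beta for v in row] for row in rec_luma]

def correct_cr(pred_cr, resi_cb, alpha):
    # Equation (2): the Cr prediction is corrected with the Cb residual.
    return [[p + alpha * r for p, r in zip(pr, rr)]
            for pr, rr in zip(pred_cr, resi_cb)]
```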
  • the intra prediction unit 14 generates a prediction value in a plurality of prediction modes including the CCLM prediction mode.
  • the prediction modes that do not use the decoded pixel value of the Y signal other than the CCLM prediction mode are not limited to the prediction modes illustrated in FIGS. 7A to 7D. Even when the intra prediction unit 14 generates the prediction values of the Cb and Cr signals using the same prediction mode as the Y signal prediction mode, the prediction mode is not particularly limited.
  • The intra prediction unit 14 calculates a cost function value in one of two determination modes, High Complexity Mode and Low Complexity Mode, selects the CU, PU, and TU sizes that minimize the cost function value, and selects the prediction value of the prediction mode that minimizes the cost function value.
  • High Complexity Mode is a determination mode that obtains the cost function value Cost_Func based on equation (3): Cost_Func = D + λ × R ... (3), where D is the difference between the original image and the decoded image, λ is a Lagrange multiplier, and R is the total generated code amount including the orthogonal transform coefficients. In High Complexity Mode, it is necessary to actually encode once in every candidate prediction mode in order to calculate the generated code amount.
  • Low Complexity Mode is a determination mode for obtaining the cost function value Cost_Func based on Expression (4).
  • SA(T)D is a value obtained by applying a Hadamard transform to the difference between the original image and the decoded image
  • QP0(QP) is a function of the quantization parameter (QP)
  • Header_Bit is the code amount of header information, not including the orthogonal transform coefficients
  • Cost_Func = SA(T)D + QP0(QP) × Header_Bit (4)
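The two determination modes of Equations (3) and (4) can be sketched as follows. This is a minimal illustration with assumed function names; the distortion, rate, and SA(T)D values are taken as already computed by the caller.

```python
def high_complexity_cost(d, lam, r):
    # Equation (3): Cost_Func = D + lambda * R, where D is the distortion
    # between the original and decoded image, lam is the Lagrange
    # multiplier, and r is the total generated code amount.
    return d + lam * r

def low_complexity_cost(satd, qp0_of_qp, header_bit):
    # Equation (4): Cost_Func = SA(T)D + QP0(QP) * Header_Bit; no full
    # encoding pass is needed, only the prediction and header bits.
    return satd + qp0_of_qp * header_bit

def select_mode(candidates, cost_fn):
    # Pick the candidate (prediction mode plus CU/PU/TU sizes) that
    # minimizes the cost function value.
    return min(candidates, key=cost_fn)
```

The same selection applies to the inter prediction unit 15 and to the final choice between intra and inter prediction values.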
  • the inter prediction unit 15 detects motion in the image and generates a prediction value by performing inter-frame motion compensation prediction, with PU sizes ranging from 64 × 64 pixels down to a minimum of 8 × 4 pixels or 4 × 8 pixels, with the CU size as the upper limit.
  • the inter prediction unit 15 generates a predicted value (predicted image) with reference to a past frame, a future frame, or a past and future frame. Each of the past and future frames may be a plurality of frames.
  • the inter prediction unit 15 likewise calculates the cost function value in one of the determination modes, High Complexity Mode or Low Complexity Mode, selects the CU, PU, and TU sizes that minimize the cost function value, and selects the prediction value of the prediction mode that minimizes the cost function value.
  • the prediction value selection unit 16 selects, as the final prediction value, whichever of the prediction value selected by the intra prediction unit 14 and the prediction value selected by the inter prediction unit 15 has the smaller cost function value, and supplies it to the subtractor 2 and the adder 10.
  • the intra prediction unit 14 is used when encoding an I picture.
  • when encoding pictures other than I pictures, whichever of the prediction value selected by the intra prediction unit 14 and the prediction value selected by the inter prediction unit 15 has the smaller cost function value is selected.
  • the CCLM prediction mode detection unit 17 is supplied with information indicating the prediction mode selected by the intra prediction unit 14.
  • when the CCLM prediction mode detection unit 17 detects that the CCLM prediction mode has been selected, it supplies detection information indicating that fact to the quantization unit 4 and the inverse quantization unit 8.
  • the quantization unit 4 and the inverse quantization unit 8 that have received the detection information are configured to correct the quantization parameter.
  • the quantization unit 4 includes a default quantization parameter holding unit 41, a quantization parameter correction unit 42, and a quantization processing unit 43.
  • the inverse quantization unit 8 includes a default quantization parameter holding unit 81, a quantization parameter correction unit 82, and an inverse quantization processing unit 83.
  • Quantization parameters for the Y signal and the Cb and Cr signals obtained by the normal decoding process are QP Y , QP Cb and QP Cr , respectively.
  • the default quantization parameter holding unit 41 holds the quantization parameters QP Cb and QP Cr as default quantization parameters.
  • the quantization parameter correction unit 42 holds the first offset value cclm_1st_QP_offset and the second offset value cclm_2nd_QP_offset according to the setting by the preprocessing device 50.
  • the first offset value cclm_1st_QP_offset indicates an offset value of the quantization parameters QP Cb and QP Cr with respect to the quantization parameter QP Y.
  • the second offset value cclm_2nd_QP_offset indicates the offset value of the quantization parameter QP Cr with respect to the quantization parameter QP Cb .
  • the quantization parameter correction unit 42 corrects the quantization parameters for the Cb and Cr signals in the CCLM prediction mode, as shown in Equations (5) and (6).
  • the corrected quantization parameters for the Cb and Cr signals in the CCLM prediction mode are QP Cb_cclm and QP Cr_cclm .
  • QP Cb_cclm = Clip (QP Cb + cclm_1st_QP_offset, 0, Max_QP_value) (5)
  • QP Cr_cclm = Clip (QP Cr + cclm_1st_QP_offset + cclm_2nd_QP_offset, 0, Max_QP_value) (6)
  • Equation (5) shows that the first offset value cclm_1st_QP_offset is added to the quantization parameter QP Cb, the sum is limited to between the minimum value (here, 0) and the maximum value (Max_QP_value) of the quantization parameter standard, and the resulting value is set as the corrected quantization parameter QP Cb_cclm .
  • Equation (6) shows that the first offset value cclm_1st_QP_offset and the second offset value cclm_2nd_QP_offset are added to the quantization parameter QP Cr, the sum is limited to between the minimum and maximum values of the quantization parameter standard, and the resulting value is set as the corrected quantization parameter QP Cr_cclm .
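Equations (5) and (6) amount to an offset-and-clip of the default chroma quantization parameters. A minimal sketch follows, assuming an HEVC-style maximum of 51 for Max_QP_value (the patent only names Max_QP_value without fixing it):

```python
def clip(value, lo, hi):
    # Clip(x, min, max) as used in Equations (5) and (6).
    return max(lo, min(hi, value))

MAX_QP_VALUE = 51  # assumed HEVC-style upper bound for Max_QP_value

def corrected_chroma_qps(qp_cb, qp_cr, cclm_1st_qp_offset, cclm_2nd_qp_offset):
    # Equation (5): QP_Cb_cclm = Clip(QP_Cb + cclm_1st_QP_offset, 0, Max_QP_value)
    qp_cb_cclm = clip(qp_cb + cclm_1st_qp_offset, 0, MAX_QP_VALUE)
    # Equation (6): QP_Cr_cclm = Clip(QP_Cr + cclm_1st_QP_offset
    #                                 + cclm_2nd_QP_offset, 0, Max_QP_value)
    qp_cr_cclm = clip(qp_cr + cclm_1st_qp_offset + cclm_2nd_qp_offset,
                      0, MAX_QP_VALUE)
    return qp_cb_cclm, qp_cr_cclm
```

Positive offsets raise the chroma QPs, which coarsens the quantization of the reference destination Cb and Cr signals and reduces the generated code amount.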
  • setting the first offset value cclm_1st_QP_offset and the second offset value cclm_2nd_QP_offset has the following advantages.
  • the second offset value cclm_2nd_QP_offset may be zero. Therefore, if the first offset value cclm_1st_QP_offset and the second offset value cclm_2nd_QP_offset are set as described above, the generated code amount can be reduced.
  • the quantization parameter correction unit 42 supplies the corrected quantization parameters QP Cb_cclm and QP Cr_cclm to the quantization processing unit 43 when the CCLM prediction mode is selected, and supplies the default quantization parameters QP Cb and QP Cr to the quantization processing unit 43 when a prediction mode other than the CCLM prediction mode is selected.
  • the quantization processing unit 43 quantizes the input orthogonal transform coefficient with the predetermined quantization parameters QP Cb and QP Cr or the corrected quantization parameters QP Cb_cclm and QP Cr_cclm and outputs a quantized orthogonal transform coefficient.
  • the operations of the default quantization parameter holding unit 81 and the quantization parameter correction unit 82 in the inverse quantization unit 8 are the same as the operations of the default quantization parameter holding unit 41 and the quantization parameter correction unit 42, respectively.
  • the inverse quantization processing unit 83 inversely quantizes the quantized orthogonal transform coefficient with the predetermined quantization parameters QP Cb and QP Cr or the corrected quantization parameters QP Cb_cclm and QP Cr_cclm , and outputs an inverse quantized orthogonal transform coefficient.
  • the first offset value cclm_1st_QP_offset and the second offset value cclm_2nd_QP_offset may be transmitted with a syntax such as a slice header of a bit stream or a picture parameter set (Picture Parameter Set).
  • the first offset value cclm_1st_QP_offset and the second offset value cclm_2nd_QP_offset need only be transmitted when the CCLM prediction mode is enabled in each slice.
  • FIG. 10 shows an example of syntax generated and transmitted by the entropy encoding unit 5.
  • when the CCLM prediction mode enable flag is 1, syntax elements indicating the first offset value cclm_1st_QP_offset and the second offset value cclm_2nd_QP_offset are included in the syntax and transmitted.
  • cclm_1st_qp_offset and cclm_2nd_qp_offset in FIG. 10 are syntax elements indicating the first offset value cclm_1st_QP_offset and the second offset value cclm_2nd_QP_offset, respectively.
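The conditional signaling of FIG. 10 can be sketched as follows. The writer object and field names are hypothetical stand-ins for the entropy encoding unit 5; only the gating on the enable flag comes from the source.

```python
def write_slice_header_cclm_fields(writer, cclm_enabled_flag,
                                   cclm_1st_qp_offset, cclm_2nd_qp_offset):
    # The offsets are signaled only when the CCLM prediction mode enable
    # flag is 1, mirroring the syntax example of FIG. 10.
    writer.append(("cclm_enabled_flag", cclm_enabled_flag))
    if cclm_enabled_flag == 1:
        writer.append(("cclm_1st_qp_offset", cclm_1st_qp_offset))
        writer.append(("cclm_2nd_qp_offset", cclm_2nd_qp_offset))
    return writer
```

When the mode is disabled in a slice, no offset bits are spent, which matches the statement that the offsets need only be transmitted when the CCLM prediction mode is enabled.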
  • because the image encoding apparatus 100 has the configuration shown in FIG. 9, when the CCLM prediction mode is selected, the quantization step of the reference destination Cb and Cr signals is increased relative to that of the reference source Cb and Cr signals, that is, their quantization value is set high. Therefore, the image encoding apparatus 100 can reduce the code amount necessary to compress the image information and thereby improve the compression rate.
  • FIG. 11 shows processing executed by the quantization unit 4 and the inverse quantization unit 8.
  • the quantization unit 4 receives the orthogonal transform coefficient of the block in the slice of the frame to be encoded in step S11.
  • the inverse quantization unit 8 receives the quantized orthogonal transform coefficient of the block in the slice of the frame to be encoded.
  • in step S12, the quantization unit 4 and the inverse quantization unit 8 determine whether the CCLM prediction mode is enabled in the slice of the frame.
  • in step S13, the quantization unit 4 and the inverse quantization unit 8 determine whether the CCLM prediction mode is selected in the block, based on whether the detection information is supplied from the CCLM prediction mode detection unit 17.
  • if it is selected (YES), the quantization unit 4 and the inverse quantization unit 8 add the first offset value cclm_1st_QP_offset and the second offset value cclm_2nd_QP_offset to the predetermined quantization parameters QP Cb and QP Cr in step S14 to generate the corrected quantization parameters QP Cb_cclm and QP Cr_cclm .
  • in step S15, the quantization unit 4 quantizes the orthogonal transform coefficient using the corrected quantization parameters QP Cb_cclm and QP Cr_cclm , and shifts the processing to step S17.
  • the inverse quantization unit 8 inversely quantizes the quantized orthogonal transform coefficient using the corrected quantization parameters QP Cb_cclm and QP Cr_cclm , and shifts the processing to step S17.
  • if the CCLM prediction mode is not enabled in step S12 (NO), the quantization unit 4 and the inverse quantization unit 8 shift the processing to step S16. Also, if the CCLM prediction mode is not selected in step S13 (NO), the quantization unit 4 and the inverse quantization unit 8 shift the processing to step S16.
  • in step S16, the quantization unit 4 quantizes the orthogonal transform coefficient using the predetermined quantization parameters QP Cb and QP Cr , and shifts the processing to step S17.
  • the inverse quantization unit 8 inversely quantizes the quantized orthogonal transform coefficient using the predetermined quantization parameters QP Cb and QP Cr , and shifts the processing to step S17.
  • in step S17, the quantization unit 4 and the inverse quantization unit 8 determine whether the quantization process and the inverse quantization process have been completed for all the blocks of the slice. If the quantization process and the inverse quantization process have not been completed for all the blocks (NO), the quantization unit 4 and the inverse quantization unit 8 return the process to step S13 and repeat the same process in the next block.
  • if the processing has been completed for all the blocks (YES), in step S18 the quantization unit 4 and the inverse quantization unit 8 determine whether the quantization process and the inverse quantization process have been completed for all the slices of the frame. If the quantization process and the inverse quantization process have not been completed for all slices (NO), the quantization unit 4 and the inverse quantization unit 8 return the process to step S12 and repeat the same process in the next slice.
  • if the processing has been completed for all the slices (YES), in step S19 the quantization unit 4 and the inverse quantization unit 8 determine whether the quantization process and the inverse quantization process have been completed for all frames. If the quantization process and the inverse quantization process for all the frames have not been completed (NO), the quantization unit 4 and the inverse quantization unit 8 return the process to step S11 and repeat the same process in the next frame.
  • if the processing has been completed for all the frames (YES), the quantization unit 4 and the inverse quantization unit 8 end the process.
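The flow of steps S11 to S19 reduces to nested loops over slices and blocks. A minimal sketch with assumed data-structure names (the `quantize` callable stands in for the actual quantization of orthogonal transform coefficients):

```python
def quantize_frame(frame, default_qps, corrected_qps, quantize):
    # Steps S11-S19 as nested loops: for each slice, check the CCLM enable
    # flag (S12); for each block, use the corrected QPs only when the block
    # was predicted in the CCLM mode (S13-S16).
    out = []
    for sl in frame["slices"]:                       # S18: loop over slices
        cclm_enabled = sl["cclm_enabled"]            # S12
        for blk in sl["blocks"]:                     # S17: loop over blocks
            use_corrected = cclm_enabled and blk["cclm_selected"]  # S13
            qps = corrected_qps if use_corrected else default_qps  # S14/S16
            out.append(quantize(blk["coeff"], qps))  # S15/S16
        # S17 -> S18 once all blocks are processed
    return out
```

The inverse quantization unit 8 follows the same control flow with inverse quantization in place of quantization.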
  • Image encoding program: the operation of the image encoding device 100 shown in FIG. 2 can be executed by a computer using a computer program (image encoding program). Each process shown in FIG. 11 can likewise be executed by a computer using the image encoding program.
  • a computer 300 includes a central processing unit (CPU) 301 and a storage unit 302.
  • An operation unit 310 is connected to the computer 300.
  • the storage unit 302 stores an image encoding program.
  • the operation unit 310 can function as the pre-processing device 50 shown in FIG.
  • the computer 300 can function as the image encoding device 100 that encodes the input image information.
  • the storage unit 302 is an arbitrary non-transitory storage medium such as a semiconductor memory, a hard disk drive, or an optical disc.
  • the image encoding program may be provided to the computer 300 via a communication line such as the Internet.
  • FIG. 13 shows a specific configuration example of the image decoding device 200.
  • the HRD buffer 21 temporarily accumulates the bit stream and supplies it to the entropy decoding unit 22.
  • the entropy decoding unit 22 entropy decodes the orthogonal transform coefficients and syntax included in the bitstream.
  • the orthogonal transform coefficient is supplied to the inverse quantization unit 23.
  • among the syntax elements, information indicating which of intra prediction and inter prediction is adopted is supplied to the switch 32.
  • information related to intra prediction is supplied to the intra prediction unit 30 and the CCLM prediction mode detection unit 33.
  • Information related to inter prediction among the syntax elements is supplied to the inter prediction unit 31.
  • the inverse quantization unit 23 inversely quantizes the orthogonal transform coefficient in units of TUs and supplies the inversely quantized orthogonal transform coefficient to the inverse orthogonal transform unit 24.
  • the inverse orthogonal transform unit 24 performs inverse orthogonal transform on the inversely quantized orthogonal transform coefficient in units of TU, and supplies the prediction residual to the adder 25.
  • the adder 25 adds the input prediction residual and the prediction value generated by the intra prediction unit 30 or the inter prediction unit 31 supplied from the switch 32 to generate a decoded signal.
  • the decoded signal is supplied to the loop filter 26, the rearrangement buffer 27, and the frame memory 28.
  • the loop filter 26 has the same configuration as the loop filter 11 and reduces the coding noise of the decoded signal.
  • the rearrangement buffer 27 accumulates the pixels supplied from the loop filter 26 over a plurality of frames. If the frame order has been rearranged, the rearrangement buffer 27 restores the frames to the order of the original image information and outputs them as decoded image information.
  • the decoded image information is converted into an analog signal by a D / A converter as necessary.
  • the frame memory 28 stores the decoded signal output from the adder 25 and not subjected to the filter processing by the loop filter 26 and the decoded signal subjected to the filter processing by the loop filter 26.
  • the switch 29 supplies the decoded signal accumulated in the frame memory 28 that has not been subjected to the filter processing to the intra prediction unit 30, and supplies the decoded signal accumulated in the frame memory 28 that has been subjected to the filter processing to the inter prediction unit 31.
  • the intra prediction unit 30 performs intra-frame prediction in accordance with information indicating the prediction mode of intra prediction, and generates respective prediction values for the Y signal, the Cb signal, and the Cr signal.
  • the inter prediction unit 31 performs inter-frame prediction in accordance with information related to inter prediction, and generates respective prediction values for the Y signal and the Cb and Cr signals.
  • the switch 32 supplies the adder 25 with the prediction value generated by the intra prediction unit 30 or the inter prediction unit 31 according to information indicating which of intra prediction and inter prediction is adopted.
  • when the CCLM prediction mode detection unit 33 detects from the input information indicating the prediction mode that the CCLM prediction mode is indicated, that is, that the intra prediction unit 14 selected the CCLM prediction mode, it supplies detection information indicating that fact to the inverse quantization unit 23.
  • the inverse quantization unit 23 that has received the detection information is configured to correct the quantization parameter.
  • the inverse quantization unit 23 includes a default quantization parameter holding unit 231, a quantization parameter correction unit 232, and an inverse quantization processing unit 233.
  • the operation of the inverse quantization unit 23 is the same as the operation of the inverse quantization unit 8 in FIG.
  • the default quantization parameter holding unit 231 holds default quantization parameters QP Cb and QP Cr .
  • when the CCLM prediction mode detection unit 33 detects that encoding was performed in the CCLM prediction mode, the quantization parameter correction unit 232 generates the corrected quantization parameters QP Cb_cclm and QP Cr_cclm according to Equations (5) and (6).
  • when the CCLM prediction mode is selected, the quantization parameter correction unit 232 supplies the corrected quantization parameters QP Cb_cclm and QP Cr_cclm to the inverse quantization processing unit 233; when a prediction mode other than the CCLM prediction mode is selected, it supplies the predetermined quantization parameters QP Cb and QP Cr to the inverse quantization processing unit 233.
  • the inverse quantization processing unit 233 inversely quantizes the input orthogonal transform coefficient with the predetermined quantization parameters QP Cb and QP Cr or the corrected quantization parameters QP Cb_cclm and QP Cr_cclm , and outputs an inverse quantization orthogonal transform coefficient.
  • because the image decoding apparatus 200 has the configuration shown in FIG. 14, it can decode a bit stream whose compression rate was improved by encoding the image in the CCLM prediction mode with the quantization value of the reference destination Cb and Cr signals set higher than that of the reference source Cb and Cr signals.
  • FIG. 15 shows processing executed by the inverse quantization unit 23.
  • the inverse quantization unit 23 receives the orthogonal transform coefficient of the block in the slice of the frame to be decoded in step S21. In step S22, the inverse quantization unit 23 determines whether the CCLM prediction mode is used in the slice based on whether the enable flag of the CCLM prediction mode is 1.
  • if it is used (YES), the inverse quantization unit 23 receives the first offset value cclm_1st_QP_offset and the second offset value cclm_2nd_QP_offset decoded by the entropy decoding unit 22 in step S23. In step S24, the inverse quantization unit 23 determines whether the block was encoded in the CCLM prediction mode, based on whether the detection information is supplied from the CCLM prediction mode detection unit 33.
  • if it was (YES), the inverse quantization unit 23 adds the first offset value cclm_1st_QP_offset and the second offset value cclm_2nd_QP_offset to the predetermined quantization parameters QP Cb and QP Cr in step S25 to generate the corrected quantization parameters QP Cb_cclm and QP Cr_cclm .
  • in step S26, the inverse quantization unit 23 inversely quantizes the orthogonal transform coefficient using the corrected quantization parameters QP Cb_cclm and QP Cr_cclm , and shifts the processing to step S28.
  • if the CCLM prediction mode is not used in the slice in step S22 (NO), the inverse quantization unit 23 shifts the process to step S27. Also, if the block is not encoded in the CCLM prediction mode in step S24 (NO), the inverse quantization unit 23 shifts the process to step S27. In step S27, the inverse quantization unit 23 inversely quantizes the orthogonal transform coefficient using the predetermined quantization parameters QP Cb and QP Cr , and shifts the processing to step S28.
  • in step S28, the inverse quantization unit 23 determines whether or not the inverse quantization process has been completed for all the blocks of the slice. If the inverse quantization process has not been completed for all the blocks (NO), the inverse quantization unit 23 returns the process to step S24 and repeats the same process in the next block.
  • the inverse quantization unit 23 determines whether or not the inverse quantization process for all slices of the frame has been completed in step S29. If the inverse quantization process has not been completed for all slices (NO), the inverse quantization unit 23 returns the process to step S22 and repeats the same process in the next slice.
  • the inverse quantization unit 23 determines in step S30 whether the inverse quantization process for all frames has been completed. If the inverse quantization process has not been completed for all the frames (NO), the inverse quantization unit 23 returns the process to step S21 and repeats the same process in the next frame. If the inverse quantization process for all the frames has been completed (YES), the inverse quantization unit 23 ends the process.
  • Image decoding program: the operation of the image decoding apparatus 200 shown in FIG. 13 can be executed by a computer using a computer program (image decoding program). Each process shown in FIG. 15 can likewise be executed by a computer using the image decoding program.
  • the computer has the same configuration as in FIG. 12A, and the description of the parts common to FIG. 12A is omitted.
  • the storage unit 302 stores an image decoding program instead of the image encoding program.
  • the computer 300 can function as the image decoding device 200 that decodes the encoded bit stream.
  • the first offset value cclm_1st_QP_offset and the second offset value cclm_2nd_QP_offset may be set as a predetermined fixed value without being transmitted in the bit stream.
  • Different values of the first offset value cclm_1st_QP_offset and the second offset value cclm_2nd_QP_offset may be used according to the value of the quantization parameter for the sub-LCU (subLCU) to which the block to be quantized belongs.
  • a sub-LCU is a coding unit having a size smaller than that of the CTU (maximum coding unit). For example, when the bit rate is high (the quantization parameter is low), the offset values need not be added to the default quantization parameters QP Cb and QP Cr ; when the bit rate is low (the quantization parameter is high), the offset values may be added.
  • a threshold of a quantization parameter for selecting whether or not to add an offset value to the predetermined quantization parameters QP Cb and QP Cr may be transmitted as a bit stream.
  • the threshold value of the quantization parameter may be transmitted using a slice header or a picture parameter set.
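The bit-rate-dependent switching described above reduces to a threshold test on the block's quantization parameter. A minimal sketch with assumed function and parameter names; `qp_threshold` corresponds to the threshold the patent suggests may be transmitted in a slice header or picture parameter set.

```python
def effective_offsets(block_qp, qp_threshold,
                      cclm_1st_qp_offset, cclm_2nd_qp_offset):
    # High bit rate (low QP): keep the default chroma QPs, no offsets.
    # Low bit rate (high QP): apply the CCLM offsets.
    if block_qp < qp_threshold:
        return 0, 0
    return cclm_1st_qp_offset, cclm_2nd_qp_offset
```

The returned pair would then feed the offset-and-clip of Equations (5) and (6).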
  • Y, Cb, and Cr video signals are taken as examples of image information, but the color space is not limited to Y, Cb, and Cr, and may be Y, Co, and Cg video signals.
  • the image information may be a video signal in an arbitrary color space including a luminance component and a color component composed of two color difference signals.
  • the Y, Cb, Cr video signal in the 4:2:0 format is taken as an example, but the Y, Cb, Cr video signal may also be in the 4:2:2 format or the 4:4:4 format.
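For reference, the chroma (Cb/Cr) plane dimensions implied by the three sampling formats can be computed as follows; this small helper with assumed names is an illustration, not part of the patent.

```python
def chroma_plane_size(luma_w, luma_h, fmt):
    # 4:2:0 halves both dimensions of the chroma planes,
    # 4:2:2 halves only the width, and 4:4:4 keeps full resolution.
    if fmt == "4:2:0":
        return luma_w // 2, luma_h // 2
    if fmt == "4:2:2":
        return luma_w // 2, luma_h
    if fmt == "4:4:4":
        return luma_w, luma_h
    raise ValueError(f"unknown chroma format: {fmt}")
```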
  • in the present embodiment, the CCLM prediction mode is used in intra prediction, but the CCLM prediction mode may be used in inter prediction. That is, the image encoding apparatus 100 need only include a prediction value generation unit that generates the prediction value of the color component in each block in a picture using either a first prediction mode, in which the prediction value of a color component is generated by a linear model using the decoded pixel value of the luminance component, or a second prediction mode, in which the prediction value of the color component is generated without using the decoded pixel value of the luminance signal. The same applies to the image decoding apparatus 200.
  • the intra prediction unit 14 may be a prediction value generation unit
  • the inter prediction unit 15 may be a prediction value generation unit
  • the intra prediction unit 14 and the inter prediction unit 15 may be prediction value generation units.
  • the intra prediction unit 30 may be a prediction value generation unit
  • the inter prediction unit 31 may be a prediction value generation unit
  • the intra prediction unit 30 and the inter prediction unit 31 may be prediction value generation units.
  • the CCLM prediction mode that generates the prediction value of the color difference signal using the decoded pixel value of the luminance signal is taken as an example, but the configuration is also applicable to a CCLM prediction mode that generates the prediction value of the Cr signal using the decoded pixel value of the Cb signal. That is, the configuration of the present embodiment can be applied whenever there is a dependency relationship between components of the video signal (the luminance signal and the color difference signals).
  • the image encoding device 100 and the image decoding device 200 may be configured by hardware such as an integrated circuit, may be configured by software, or both may be mixed.
  • the image encoding method and the image decoding method may be executed by any hardware resource such as an integrated circuit or a computer.

Abstract

A prediction value generation unit (intra prediction unit 14 or inter prediction unit 15) according to the invention generates a prediction value of a color component in each block within a picture using either: a first prediction mode for generating the prediction value of the color component by a linear model using a decoded pixel value of a luminance component; or a second prediction mode for generating the prediction value of the color component without using a decoded pixel value of a luminance signal. A subtractor (2) generates a prediction residual by subtracting the prediction value of the color component from an original image. An orthogonal transform unit (3) generates an orthogonal transform coefficient by orthogonally transforming the prediction residual. When the prediction value generation unit selects the first prediction mode to generate the prediction value of the color component, a quantization unit (4) quantizes the orthogonal transform coefficient by setting the quantization value of the color component in a reference destination higher than that of the color component in a reference source.
PCT/JP2017/031137 2016-09-27 2017-08-30 Dispositif de codage d'image, procédé de codage d'image, programme de codage d'image, dispositif de décodage d'image, procédé de décodage d'image et programme de décodage d'image WO2018061588A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016188120A JP2018056685A (ja) 2016-09-27 2016-09-27 画像符号化装置、画像符号化方法、及び画像符号化プログラム、並びに、画像復号装置、画像復号方法、及び画像復号プログラム
JP2016-188120 2016-09-27

Publications (1)

Publication Number Publication Date
WO2018061588A1 true WO2018061588A1 (fr) 2018-04-05

Family

ID=61759556

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/031137 WO2018061588A1 (fr) 2016-09-27 2017-08-30 Dispositif de codage d'image, procédé de codage d'image, programme de codage d'image, dispositif de décodage d'image, procédé de décodage d'image et programme de décodage d'image

Country Status (2)

Country Link
JP (1) JP2018056685A (fr)
WO (1) WO2018061588A1 (fr)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020073990A1 (fr) * 2018-10-12 2020-04-16 Oppo广东移动通信有限公司 Procédé et appareil de prédiction de composante d'image vidéo, et support d'informations informatique
CN111698501A (zh) * 2019-03-11 2020-09-22 杭州海康威视数字技术股份有限公司 解码方法及装置
CN111886860A (zh) * 2018-06-06 2020-11-03 Kddi 株式会社 图像解码装置、图像编码装置、图像处理系统、图像解码方法和程序
CN112567743A (zh) * 2018-08-15 2021-03-26 日本放送协会 图像编码装置、图像解码装置及程序
CN112585967A (zh) * 2018-08-15 2021-03-30 日本放送协会 帧内预测装置、图像编码装置、图像解码装置及程序
CN113170169A (zh) * 2018-12-07 2021-07-23 夏普株式会社 预测图像生成装置、运动图像解码装置、运动图像编码装置以及预测图像生成方法
CN113196776A (zh) * 2018-12-20 2021-07-30 夏普株式会社 预测图像生成装置、运动图像解码装置、运动图像编码装置以及预测图像生成方法
CN114258678A (zh) * 2019-12-26 2022-03-29 Kddi 株式会社 图像解码装置、图像解码方法和程序
JP2022531216A (ja) * 2019-05-08 2022-07-06 北京字節跳動網絡技術有限公司 クロスコンポーネントコーディングの適用条件
US11463686B2 (en) * 2018-09-22 2022-10-04 Lg Electronics Inc. Method and device for decoding images using CCLM prediction in image coding system
WO2023116716A1 (fr) * 2021-12-21 2023-06-29 Mediatek Inc. Procédé et appareil pour modèle linéaire de composante transversale pour une prédiction inter dans un système de codage vidéo
RU2800683C2 (ru) * 2018-10-12 2023-07-26 Гуандун Оппо Мобайл Телекоммьюникейшнс Корп., Лтд. Способ и устройство предсказывания компонента видеоизображения и компьютерный носитель данных
US11750799B2 (en) 2019-04-23 2023-09-05 Beijing Bytedance Network Technology Co., Ltd Methods for cross component dependency reduction
US11910020B2 (en) 2019-03-08 2024-02-20 Beijing Bytedance Network Technology Co., Ltd Signaling of reshaping information in video processing
US11924472B2 (en) 2019-06-22 2024-03-05 Beijing Bytedance Network Technology Co., Ltd. Syntax element for chroma residual scaling
US11956439B2 (en) 2019-07-07 2024-04-09 Beijing Bytedance Network Technology Co., Ltd. Signaling of chroma residual scaling
WO2024074125A1 (fr) * 2022-10-07 2024-04-11 Mediatek Inc. Procédé et appareil de dérivation de modèle linéaire implicite à l'aide de multiples lignes de référence pour une prédiction inter-composantes

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112425166B (zh) 2018-07-12 2023-03-10 华为技术有限公司 视频译码中使用交叉分量线性模型进行帧内预测
WO2020073864A1 (fr) * 2018-10-08 2020-04-16 Huawei Technologies Co., Ltd. Procédé et dispositif de prédiction intra
KR102653562B1 (ko) 2018-11-06 2024-04-02 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 위치에 따른 인트라 예측
WO2020108591A1 (fr) 2018-12-01 2020-06-04 Beijing Bytedance Network Technology Co., Ltd. Dérivation de paramètre simplifiée destinée à une prédiction intra
BR112021013110A2 (pt) * 2019-01-02 2021-09-21 Sharp Kabushiki Kaisha Dispositivo de geração de imagens de predição, dispositivo de decodificação de imagens em movimento, dispositivo de codificação de imagens em movimento e método de geração de imagens de predição
BR112021016560A2 (pt) * 2019-02-22 2021-10-26 Huawei Technologies Co., Ltd. Método e aparelho para predição intra usando modelo linear
WO2020171644A1 (fr) * 2019-02-22 2020-08-27 엘지전자 주식회사 Procédé et appareil de décodage d'image sur la base de prédiction cclm dans un système de codage d'image
CN113491115A (zh) * 2019-03-06 2021-10-08 Lg 电子株式会社 基于cclm预测的图像解码方法及其装置
CN113508584A (zh) * 2019-03-25 2021-10-15 Oppo广东移动通信有限公司 图像分量预测方法、编码器、解码器以及存储介质
EP3942811A4 (fr) * 2019-04-24 2022-06-15 ByteDance Inc. Contraintes sur la représentation d'une modulation différentielle par impulsions codées de résidu quantifié pour une vidéo codée

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005039842A (ja) * 2003-07-16 2005-02-10 Samsung Electronics Co Ltd カラー映像のためのビデオ符号化/復号化装置およびその方法
JP2012142844A (ja) * 2011-01-05 2012-07-26 Seikei Gakuen カラー動画像符号化方法及びカラー動画像符号化装置
WO2014166965A1 (fr) * 2013-04-08 2014-10-16 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Prédiction inter-composante
WO2016040865A1 (fr) * 2014-09-12 2016-03-17 Vid Scale, Inc. Dé-corrélation inter-composante pour codage vidéo


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: "High efficiency video coding", Recommendation ITU-T H.265 (04/2013), ITU-T, April 2013, pages 34-35, 39-41, 68-71, 76-80, 140-141, XP055232953 *
JIANLE CHEN ET AL.: "CE6.a.4: Chroma intra prediction by reconstructed luma samples", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, document JCTVC-E266_r1.1, 5th Meeting: Geneva, March 2011, pages 1-10, XP030008772 *
JUNGSUN KIM ET AL.: "New intra chroma prediction using inter-channel correlation", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, document JCTVC-B021, 2nd Meeting: Geneva, CH, 21-28 July 2010, pages 1-9, XP030007601 *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111886860A (zh) * 2018-06-06 2020-11-03 KDDI Corporation Image decoding device, image encoding device, image processing system, image decoding method, and program
CN112567743A (zh) * 2018-08-15 2021-03-26 Japan Broadcasting Corporation Image encoding device, image decoding device, and program
CN112585967A (zh) * 2018-08-15 2021-03-30 Japan Broadcasting Corporation Intra prediction device, image encoding device, image decoding device, and program
US11463686B2 (en) * 2018-09-22 2022-10-04 Lg Electronics Inc. Method and device for decoding images using CCLM prediction in image coding system
US11683480B2 (en) 2018-09-22 2023-06-20 Lg Electronics Inc. Method and device for decoding images using CCLM prediction in image coding system
US11876958B2 (en) 2018-10-12 2024-01-16 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Video picture component prediction method and apparatus, and computer storage medium
RU2800683C2 (ru) * 2018-10-12 2023-07-26 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Video picture component prediction method and apparatus, and computer storage medium
WO2020073990A1 (fr) * 2018-10-12 2020-04-16 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Video picture component prediction method and apparatus, and computer storage medium
US11388397B2 (en) 2018-10-12 2022-07-12 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Video picture component prediction method and apparatus, and computer storage medium
CN113170169A (zh) * 2018-12-07 2021-07-23 Sharp Corporation Predicted image generation device, moving image decoding device, moving image encoding device, and predicted image generation method
CN113170169B (zh) * 2018-12-07 2024-01-30 Sharp Corporation Predicted image generation device, moving image decoding device, moving image encoding device, and predicted image generation method
CN113196776A (zh) * 2018-12-20 2021-07-30 Sharp Corporation Predicted image generation device, moving image decoding device, moving image encoding device, and predicted image generation method
CN113196776B (zh) * 2018-12-20 2023-12-19 Sharp Corporation Predicted image generation device, moving image decoding device, moving image encoding device, and predicted image generation method
US11910020B2 (en) 2019-03-08 2024-02-20 Beijing Bytedance Network Technology Co., Ltd Signaling of reshaping information in video processing
CN111698501B (zh) * 2019-03-11 2022-03-01 Hangzhou Hikvision Digital Technology Co., Ltd. Decoding method and device
CN111698501A (zh) * 2019-03-11 2020-09-22 Hangzhou Hikvision Digital Technology Co., Ltd. Decoding method and device
US11750799B2 (en) 2019-04-23 2023-09-05 Beijing Bytedance Network Technology Co., Ltd Methods for cross component dependency reduction
JP7407206B2 (ja) 2019-05-08 2023-12-28 Beijing Bytedance Network Technology Co., Ltd. Applicability conditions for cross-component coding
JP2022531216A (ja) * 2019-05-08 2022-07-06 Beijing Bytedance Network Technology Co., Ltd. Applicability conditions for cross-component coding
US11924472B2 (en) 2019-06-22 2024-03-05 Beijing Bytedance Network Technology Co., Ltd. Syntax element for chroma residual scaling
US11956439B2 (en) 2019-07-07 2024-04-09 Beijing Bytedance Network Technology Co., Ltd. Signaling of chroma residual scaling
CN114258678A (zh) * 2019-12-26 2022-03-29 KDDI Corporation Image decoding device, image decoding method, and program
WO2023116716A1 (fr) * 2021-12-21 2023-06-29 Mediatek Inc. Method and apparatus for cross-component linear model for inter prediction in a video coding system
WO2024074125A1 (fr) * 2022-10-07 2024-04-11 Mediatek Inc. Method and apparatus of implicit linear model derivation using multiple reference lines for cross-component prediction

Also Published As

Publication number Publication date
JP2018056685A (ja) 2018-04-05

Similar Documents

Publication Publication Date Title
WO2018061588A1 (fr) Image encoding device, image encoding method, image encoding program, image decoding device, image decoding method, and image decoding program
US11356665B2 (en) Method for determining color difference component quantization parameter and device using the method
KR102313195B1 (ko) Method for encoding and decoding a quantization matrix, and apparatus using same
CN105144718B (zh) Intra prediction modes for lossy coding when transform skip is applied
EP2829064B1 (fr) Determination of parameters for exponential-Golomb binarization of residuals for lossless intra HEVC coding
JP4617644B2 (ja) Encoding apparatus and method
CN104811715B (zh) Enhanced intra prediction coding using planar representations
JP2018137761A (ja) Video encoding method and apparatus, and video decoding method and apparatus
US20110317757A1 (en) Intra prediction mode signaling for finer spatial prediction directions
US9277211B2 (en) Binarization scheme for intra prediction residuals and improved intra prediction in lossless coding in HEVC
KR20100133006A (ko) Moving picture encoding/decoding method and apparatus
RU2760234C2 (ru) Data encoding and decoding
US11039166B2 (en) Devices and methods for using base layer intra prediction mode for enhancement layer intra mode prediction
JP7357736B2 (ja) Encoding device, decoding device, and program
JP2012080571A (ja) Encoding apparatus and method
KR102448226B1 (ko) Deblocking filter control device and program
JP6528635B2 (ja) Moving image encoding device, moving image encoding method, and computer program for moving image encoding
JP7441638B2 (ja) Encoding device, decoding device, and program
JP2010104026A (ja) Decoding apparatus and method
JP6402520B2 (ja) Encoding device, method, program, and apparatus
KR102550503B1 (ko) Encoding device, decoding device, and program
JP7396883B2 (ja) Encoding device, decoding device, and program

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application
    Ref document number: 17855546
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: PCT application non-entry in European phase
    Ref document number: 17855546
    Country of ref document: EP
    Kind code of ref document: A1