WO2020137904A1 - Information processing device, information processing method, and information processing program - Google Patents


Info

Publication number
WO2020137904A1
Authority
WO
WIPO (PCT)
Prior art keywords
points
block
extracted
intra prediction
information processing
Application number
PCT/JP2019/050163
Other languages
French (fr)
Japanese (ja)
Inventor
純代 江嶋
優 池田
勇司 藤本
Original Assignee
Sony Corporation
Application filed by Sony Corporation
Publication of WO2020137904A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/186 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N19/189 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N19/196 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques

Definitions

  • the present disclosure relates to an information processing device, an information processing method, and an information processing program. Specifically, it relates to a moving image encoding process.
  • In H.265/HEVC (High Efficiency Video Coding), intra prediction, which performs spatial prediction within an image to generate a prediction value, and inter prediction, which performs motion-compensated prediction between images to generate a prediction value, are used. For predicting color information in intra prediction, CCLM prediction (Cross-Component Linear Model prediction) has been proposed.
  • In addition to Non-Patent Document 1, a method of reducing the calculation cost of CCLM prediction has also been proposed (for example, Non-Patent Document 2).
  • However, CCLM prediction has a high calculation cost, and may therefore reduce the throughput of the entire encoding and decoding process.
  • In the method proposed in Non-Patent Document 2, all the reference pixels must still be looped over in order to obtain the maximum value and the minimum value of the luminance information and the color information, so it is difficult to significantly reduce the calculation cost. Further, because the proposed method of Non-Patent Document 2 calculates a regression line (the linear model used for CCLM prediction) by using a single maximum value and a single minimum value of the luminance information and the color information, there is a concern that it is easily affected by noise.
  • Therefore, the present disclosure proposes an information processing device, an information processing method, and an information processing program capable of reducing the calculation cost of encoding and decoding while increasing robustness against noise in CCLM prediction.
  • To solve the above problem, an information processing device according to the present disclosure includes an intra prediction unit that extracts a plurality of points from reference pixels adjacent to a block defined in an image and, based on the extracted luminance information and color information of the plurality of points, calculates a linear model that predicts the color information of the pixels existing in the block.
  • FIG. 1 is a block diagram showing a configuration example of an image encoding device, which is an example of an information processing device according to the present disclosure.
  • FIG. 2 is a block diagram illustrating a configuration example of an intra prediction unit according to the present disclosure. FIG. 3 is a conceptual diagram showing the processing blocks according to the first embodiment. FIG. 4 is a flowchart showing the procedure of information processing according to the first embodiment. FIG. 5 is a conceptual diagram showing the processing blocks according to the second embodiment. Further figures (1) and (2) explain the intra prediction processing according to the second embodiment, and a flowchart shows the procedure of information processing according to the second embodiment.
  • FIG. 16 is a hardware configuration diagram illustrating an example of a computer that realizes a function of the image encoding device according to the present disclosure.
  • 1. First embodiment
    1-1. Configuration example of the information processing apparatus according to the first embodiment
    1-2. Details of the information processing according to the first embodiment
    1-3. Information processing procedure according to the first embodiment
    2. Second embodiment
    2-1. Details of the information processing according to the second embodiment
    2-2. Information processing procedure according to the second embodiment
    3. Third embodiment
    4. Modification
    5. Other embodiments
    6. Effects of the information processing apparatus according to the present disclosure
    7. Hardware configuration
  • FIG. 1 is a block diagram showing a configuration example of an image encoding device 100 which is an example of an information processing device according to the present disclosure.
  • The image encoding device 100 communicates, via a wired or wireless network, with external devices such as the source of the original image data to be encoded and the transmission destination of the encoded bitstream obtained by encoding the image.
  • The image encoding device 100 is a device that encodes the prediction error between an image and its predicted image, based on the configuration shown in FIG. 1.
  • Although the configuration of the image encoding device 100 is shown in FIG. 1, the configuration of FIG. 1 is merely an example, and the image encoding device 100 is not necessarily limited to it. That is, as long as the image encoding device 100 has a configuration for performing the compression encoding standardized by the above-mentioned H.265/HEVC, or a configuration for realizing VVC (Versatile Video Coding), which has been proposed as the next-generation compression encoding technology, its configuration does not necessarily have to be that shown in FIG. 1.
  • the image encoding device 100 performs intra prediction processing using CCLM prediction in the image compression encoding technique.
  • First, a problem relating to CCLM prediction will be described.
  • CCLM prediction predicts the information of the pixels in a block to be processed in one frame by using the information of the pixels adjacent to that block (hereinafter referred to as "reference pixels"), where a block is a group of pixels composed of a plurality of pixels in the horizontal and vertical directions.
  • the block indicates a processing unit to be encoded, and is generally described as CU (Coding Unit).
  • the CU is a block having a variable size, which is formed by recursively dividing an LCU (Largest Coding Unit) which is the maximum coding unit.
  • Note that different processing may be performed depending on the size and shape of the CU or PU (Prediction Unit), but the processing proposed in the present disclosure does not limit the size of the CU or PU.
  • Hereinafter, CUs and the like are collectively referred to as "blocks".
  • In CCLM prediction, the pixel information in the block is predicted by using the relationship between the luminance information of the reference pixels (the luminance signal or luminance value; hereinafter, the luminance information may be expressed as "Luma") and their color information (the color difference signal or color difference value; hereinafter, the color information may be expressed as "Chroma").
  • Specifically, in CCLM prediction, the Luma of a block is predicted, and a linear model (specifically, a regression line) is calculated by using the reconstructed (regressed) Luma information of the same block; the Chroma in the block is then predicted with that linear model.
  • However, CCLM prediction has a large calculation cost, and the calculation may affect the throughput of the entire encoding process.
  • In CCLM prediction, the linear model is obtained by the following equation (1):

    Pred_C(i, j) = α × Rec′_L(i, j) + β   (1)
  • Here, Pred_C(i, j) indicates the predicted value of Chroma in the block ((i, j) indicates the pixel position).
  • Rec′_L(i, j) indicates information obtained by down-sampling the reconstructed Luma.
  • α and β are coefficients that specify the linear model. For example, α and β are given by the following equation (2), a least-squares fit over the reference pixels (reconstructed here from the standard CCLM formulation, with L(n) and C(n) denoting the Luma and Chroma of the n-th reference pixel):

    α = ( N × Σ L(n)·C(n) − Σ L(n) × Σ C(n) ) / ( N × Σ L(n)² − (Σ L(n))² )
    β = ( Σ C(n) − α × Σ L(n) ) / N   (2)
  • N represents a unit of block (horizontal or vertical number of pixels).
  • In equation (2), α and β must be calculated by looping over all the values of the reference pixels of the block, and since the calculation includes division, the calculation cost becomes very large. Therefore, as disclosed in Non-Patent Document 2 described above, the following equation (3), which simplifies the calculation of α and β, has been proposed:

    α = (y_B − y_A) / (x_B − x_A)
    β = y_A − α × x_A   (3)
  • Here, (x_A, y_A) and (x_B, y_B) respectively indicate the points at which Luma and Chroma take the minimum value and the maximum value among the reference pixels (x denoting Luma and y denoting Chroma). According to equation (3), α and β can be calculated easily.
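  • As a hedged illustration (not code from the patent; the function and variable names are invented here), the two-point derivation of equation (3) and the prediction of equation (1) can be sketched in Python as:

```python
def cclm_two_point(ref_luma, ref_chroma):
    """Equation (3): derive (alpha, beta) from the two reference points
    where the luminance takes its minimum and maximum values."""
    i_min = min(range(len(ref_luma)), key=lambda i: ref_luma[i])
    i_max = max(range(len(ref_luma)), key=lambda i: ref_luma[i])
    x_a, y_a = ref_luma[i_min], ref_chroma[i_min]
    x_b, y_b = ref_luma[i_max], ref_chroma[i_max]
    if x_b == x_a:                       # degenerate case: flat luminance
        return 0.0, float(y_a)
    alpha = (y_b - y_a) / (x_b - x_a)    # slope of the regression line
    beta = y_a - alpha * x_a             # intercept
    return alpha, beta

def predict_chroma(rec_luma_ds, alpha, beta):
    """Equation (1): Pred_C = alpha * Rec'_L + beta for each pixel."""
    return [alpha * l + beta for l in rec_luma_ds]
```

  • Note that this sketch still scans all reference pixels to find the extremes, which is exactly the residual cost (and noise sensitivity) the disclosure is concerned with.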
  • However, the method of Non-Patent Document 2, which uses equation (3), raises the concern that it is easily affected by noise, because it calculates the linear model from a single maximum value and a single minimum value of the luminance information and the color information. Specifically, a pixel that takes the minimum or maximum value is likely to be noise, and if the linear model is calculated from such values, the prediction accuracy may decrease.
  • Note that FIGS. 1 and 2 show the main components, such as processing units and data flows, and do not necessarily show all components. That is, in the image encoding device 100, there may be a processing unit that is not shown as a block in FIG. 1, or a process or data flow that is not shown by an arrow or the like in FIG. 1.
  • The image encoding device 100 includes an A/D conversion unit 101, a rearrangement buffer 102, a first calculation unit 103, an orthogonal transformation unit 104, a quantization unit 105, an encoding unit 106, an accumulation buffer 107, an inverse quantization unit 108, an inverse orthogonal transformation unit 109, a second calculation unit 110, a filter 111, a frame memory 112, a switch 113, an intra prediction unit 114, an inter prediction unit 115, a predicted image selection unit 116, and a rate control unit 117.
  • the image encoding apparatus 100 encodes original image data (image) that is an input moving image in frame units for each block (for example, CU).
  • the A/D conversion unit 101 of the image encoding device 100 acquires original image data, and performs A/D conversion on an image included in the original image data in units of frames to be encoded. Then, the A/D conversion unit 101 outputs the A/D converted data to the rearrangement buffer 102.
  • The rearrangement buffer 102 rearranges the image data output in frame units from the A/D conversion unit 101 from display order into encoding order, for example according to a GOP (Group of Pictures) structure. Then, the rearrangement buffer 102 outputs the rearranged data to the first calculation unit 103 and the inter prediction unit 115.
  • the first calculation unit 103 sequentially sets an input image as an image to be encoded, and sets a block to be encoded for the image to be encoded.
  • The first calculation unit 103 subtracts the predicted image (prediction block) of the block output from the predicted image selection unit 116 from the image of the block to be encoded (the current block) to obtain the prediction error, and outputs it to the orthogonal transformation unit 104.
  • the orthogonal transformation unit 104 performs orthogonal transformation or the like on the prediction error output from the first calculation unit 103, and derives a transformation coefficient.
  • the orthogonal transformation unit 104 outputs the transformation coefficient to the quantization unit 105.
  • the quantization unit 105 scales (quantizes) the transform coefficient output from the orthogonal transform unit 104 and derives a quantized transform coefficient.
  • the quantizer 105 outputs the quantized transform coefficient to the encoder 106 and the inverse quantizer 108.
  • the encoding unit 106 encodes the quantized transform coefficient output from the quantization unit 105 by a predetermined method. For example, the encoding unit 106 converts the encoding parameter and the quantized transform coefficient output from the quantization unit 105 into the syntax value of each syntax element according to the definition of the syntax table. Then, the encoding unit 106 encodes each syntax value (for example, arithmetic encoding such as CABAC (Context-based Adaptive Binary Arithmetic Coding)).
  • the encoding unit 106 multiplexes encoded data, which is a bit string of each syntax element obtained as a result of encoding, and outputs the multiplexed data to the accumulation buffer 107 as an encoded bit stream.
  • As will be described later, the encoding unit 106 encodes the block processed by the intra prediction unit 114 based on the color information predicted by the intra prediction unit 114. More specifically, the encoding unit 106 encodes the prediction error between the image corresponding to the original image data and the predicted image, which includes the block containing the color information predicted by the intra prediction unit 114.
  • The accumulation buffer 107 temporarily stores the encoded bitstream output from the encoding unit 106. After that, the accumulation buffer 107 transmits the accumulated encoded bitstream to a decoding device or the like via a transmission path.
  • The inverse quantization unit 108 scales (inverse-quantizes) the value of the quantized transform coefficient output from the quantization unit 105, and derives the transform coefficient after inverse quantization.
  • the inverse quantization unit 108 outputs the transform coefficient to the inverse orthogonal transform unit 109. That is, the inverse quantization performed by the inverse quantization unit 108 is an inverse process of the quantization performed by the quantization unit 105, and is the same process as the inverse quantization performed by the image decoding device (decoder).
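  • This inverse relationship can be sketched with a simple uniform quantizer (an illustration only, not the patent's scheme; real codecs derive the step size from a quantization parameter and use integer scaling):

```python
def quantize(coeffs, step):
    """Scale transform coefficients down by a quantization step.
    A plain uniform quantizer with rounding, for illustration."""
    return [round(c / step) for c in coeffs]

def dequantize(qcoeffs, step):
    """Inverse process of quantize(): scale the quantized levels back.
    The round-trip loses the remainder, which is the quantization error."""
    return [q * step for q in qcoeffs]
```

  • The decoder performs only the dequantize() side, which is why the encoder runs the same inverse process internally to keep its reference images identical to the decoder's.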
  • the inverse orthogonal transform unit 109 performs inverse orthogonal transform or the like on the transform coefficient output from the inverse quantization unit 108 and derives a prediction error.
  • the inverse orthogonal transform unit 109 outputs the prediction error to the second calculation unit 110. That is, the inverse orthogonal transform performed by the inverse orthogonal transform unit 109 is an inverse process of the orthogonal transform performed by the orthogonal transform unit 104, and is the same process as the inverse orthogonal transform performed in the image decoding device.
  • The second calculation unit 110 adds the prediction error output from the inverse orthogonal transform unit 109 and the predicted image corresponding to that prediction error output from the predicted image selection unit 116 to obtain a locally decoded image (decoded image).
  • the second calculation unit 110 outputs the locally decoded image to the filter 111.
  • the filter 111 removes block distortion by filtering the decoded image output from the second calculation unit 110.
  • the filter 111 outputs the filtered image to the frame memory 112.
  • the frame memory 112 reconstructs a decoded image in image units using the locally decoded image output from the filter 111 and stores it in a buffer in the frame memory 112.
  • the frame memory 112 reads the decoded image designated by the intra prediction unit 114 or the inter prediction unit 115 as a reference image from the buffer, and outputs the decoded image via the switch 113.
  • The intra prediction unit 114 acquires, as a reference image, a decoded image of the same time (frame) as the block to be encoded, which is stored in the frame memory 112. Then, the intra prediction unit 114 uses the reference image to perform intra prediction processing in a predetermined intra prediction mode on the block to be encoded. Although a plurality of intra prediction modes are specified in H.265/HEVC and VVC, only the process using CCLM prediction will be described in the present disclosure, as described later.
  • The inter prediction unit 115 includes a motion prediction/compensation unit, a motion vector detection unit, and the like, and executes inter prediction processing between images.
  • The predicted image generated as a result of the intra prediction process or the inter prediction process is output to the first calculation unit 103 and the second calculation unit 110.
  • FIG. 2 is a block diagram showing a configuration example of the intra prediction unit 114 according to the present disclosure.
  • The intra prediction unit 114 of the image encoding device 100 includes a luminance/color difference separation unit 1141, a luminance signal intra prediction unit 1142, a luminance signal downsampling unit 1143, and a color difference signal intra prediction unit 1144.
  • the luminance/color difference separation unit 1141 acquires the encoded block output from the frame memory 112 and separates it into a luminance signal and a color difference signal.
  • the luminance/color difference separation unit 1141 outputs a luminance signal of the separated information to the luminance signal intra prediction unit 1142.
  • Such a luminance signal corresponds to the regressed luminance signal, and will be referred to as "Rec_L" (L means Luma) in the following description.
  • the luminance/color difference separating unit 1141 also outputs the separated luminance signal to the luminance signal down-sampling unit 1143.
  • the luminance/color difference separation unit 1141 also outputs the separated color difference signals to the color difference signal intra prediction unit 1144.
  • Such a color difference signal corresponds to the regressed color difference signal, and will be referred to as "Rec_C" (C means Chroma) in the following description.
  • The luminance signal intra prediction unit 1142 predicts the luminance signal of the block based on Rec_L output from the luminance/color difference separation unit 1141 and the parameters for intra prediction processing (for example, the size of the block to be processed).
  • The predicted luminance signal is referred to as "Pred_L" (L means Luma).
  • Further, the luminance signal downsampling unit 1143 down-samples Rec_L output from the luminance/color difference separation unit 1141.
  • This down-sampling is performed to adjust for the difference in resolution between the luminance signal and the color difference signal (in the 4:2:0 format, for example, the color difference signal has half the number of horizontal and vertical pixels).
  • The down-sampled luminance signal will be referred to as "Rec′_L" (L means Luma).
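  • As a hedged sketch of such down-sampling (the actual filter taps are not given in this description; a plain 2×2 average is assumed here for the 4:2:0 case):

```python
def downsample_luma(rec_l, width, height):
    """Down-sample a reconstructed luma plane (row-major list) by 2x2
    averaging, matching the half-resolution chroma grid of 4:2:0.
    A simple average is used for illustration; actual codecs use
    specified down-sampling filters."""
    out = []
    for y in range(0, height, 2):
        for x in range(0, width, 2):
            s = (rec_l[y * width + x] + rec_l[y * width + x + 1]
                 + rec_l[(y + 1) * width + x] + rec_l[(y + 1) * width + x + 1])
            out.append(s // 4)   # integer average of the 2x2 neighborhood
    return out
```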
  • The color difference signal intra prediction unit 1144 predicts the color difference signal in the block to be processed by using the CCLM prediction method. Specifically, the color difference signal intra prediction unit 1144 calculates the predicted value of the color difference signal by using the parameters for intra prediction processing, the down-sampled luminance signal "Rec′_L", and the regressed color difference signal "Rec_C". That is, the color difference signal intra prediction unit 1144 calculates the predicted value "Pred_C" of the color difference signal using equation (1) above.
  • Then, the intra prediction unit 114 outputs the predicted image based on the calculated "Pred_L" and "Pred_C" to the predicted image selection unit 116.
  • As described above, the intra prediction unit 114 extracts a plurality of points from the reference pixels adjacent to the block and, based on the extracted luminance information and color information of the plurality of points, calculates a linear model that predicts the color information of the pixels existing in the block.
  • the calculation of the linear model means calculation of the above ⁇ and ⁇ .
  • Here, "a plurality of points" means not all of the reference pixels adjacent to the block but a plurality (two or more) of points among them.
  • Specifically, the intra prediction unit 114 extracts three consecutive points from the reference pixels vertically adjacent to the block and three consecutive points from the reference pixels horizontally adjacent to the block, specifies the midpoints of the luminance information and the color information of each set of three points, and calculates a linear model based on the specified midpoints.
  • FIG. 3 is a conceptual diagram showing processing blocks according to the first embodiment.
  • FIG. 3 shows an example in which the relationship between the luminance information and the color information in the image is the 4:2:0 format and the block size ("N" in equation (2) above) is "8".
  • the block 40 corresponding to the color information has half the number of vertical and horizontal pixels as compared with the block 50 corresponding to the luminance information. Pixels adjacent to the blocks 40 and 50 indicate reference pixels.
  • As shown in FIG. 3, the intra prediction unit 114 does not perform processing using the information of all the reference pixels of the block 40, but instead extracts three points from each of the vertical and horizontal reference pixels. Then, the intra prediction unit 114 specifies the midpoints of the luminance information and the color information of each set of three points and calculates the linear model based on the specified midpoints.
  • Thus, compared with the conventional method of searching all the horizontal and vertical points and calculating the maximum value and the minimum value, the intra prediction unit 114 can reduce the number of coordinates for which the luminance information and the like are calculated, so high-speed calculation can be performed and the overall processing speed can be improved. Further, the intra prediction unit 114 can improve robustness against noise by taking an intermediate value of a plurality of points instead of the maximum value and the minimum value, which are estimated to be easily influenced by noise.
  • In the example of FIG. 3, the intra prediction unit 114 extracts, from the reference pixels vertically adjacent to the block 40, three consecutive points located at the end farthest from the base point of the block 40. Similarly, the intra prediction unit 114 extracts, from the reference pixels horizontally adjacent to the block 40, three consecutive points located at the end farthest from the base point of the block 40. Then, the intra prediction unit 114 specifies the midpoints of the luminance information and the color information of each set of three points and calculates the linear model based on the specified midpoints.
  • the intra prediction unit 114 processes each of the three points farthest from the base point (the top end and the left end of the block) in the space of the block 40. In this way, the intra prediction unit 114 can calculate the linear model using the points that are estimated to most reflect the characteristics of the space, and thus the prediction accuracy can be improved.
  • For example, in FIG. 3, the intra prediction unit 114 extracts, as processing targets, the points A0, A1, and A2, which are the three points farthest from the base point in the horizontal direction, and the points L0, L1, and L2, which are the three points farthest from the base point in the vertical direction. Note that, in FIG. 3, the same reference numerals are given to corresponding points in the block 40 and the block 50.
  • the intra prediction unit 114 calculates the intermediate points of the extracted three points.
  • Here, the point A0 is represented as (XA0, YA0), where XA0 indicates the luminance information at the point A0 and YA0 indicates the color information at the point A0. Similarly, the point A1 is represented as (XA1, YA1), the point A2 as (XA2, YA2), the point L0 as (XL0, YL0), the point L1 as (XL1, YL1), and the point L2 as (XL2, YL2).
  • Then, the intra prediction unit 114 obtains the intermediate points YA_m, YL_m, XA_m, and XL_m of the luminance information and the color information of the three horizontal points and the three vertical points.
  • For example, when YA1 ≤ YA0 ≤ YA2 or YA2 ≤ YA0 ≤ YA1 is satisfied, YA_m is YA0 and XA_m is XA0. When YA1 ≤ YA2 and YA0 ≤ YA1 are satisfied, YA_m is YA1 and XA_m is XA1. When YA2 ≤ YA0 and YA1 ≤ YA2 are satisfied, YA_m is YA2 and XA_m is XA2. In other words, the point whose color information is the median of the three values is selected, and the luminance information of that same point is used together with it. The vertical points are handled in the same way: for example, when YL1 ≤ YL0 ≤ YL2 or YL2 ≤ YL0 ≤ YL1 is satisfied, YL_m is YL0 and XL_m is XL0, and when YL1 ≤ YL2 and YL0 ≤ YL1 are satisfied, YL_m is YL1 and XL_m is XL1.
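  • The selection above amounts to taking the point whose color value is the median of the three, together with that same point's luminance value. A minimal sketch (function and variable names are illustrative, not from the patent):

```python
def median_point(points):
    """Given three (luma, chroma) points, return the (luma, chroma)
    pair of the point whose chroma value is the median of the three.
    Keeping the pair together avoids mixing the luminance of one pixel
    with the color of another in the linear model."""
    return sorted(points, key=lambda p: p[1])[1]
```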
  • the intra prediction unit 114 may calculate the linear model using information other than the intermediate points. For example, the intra prediction unit 114 may calculate the linear model based on the average values of the extracted luminance information and color information of a plurality of points.
  • FIG. 4 is a flowchart showing the flow of information processing according to the first embodiment.
  • First, the image encoding device 100 calculates the predicted value (Pred_L) of the luminance signal in the block to be processed (step S101).
  • the image encoding device 100 determines whether or not to perform CCLM prediction (step S102).
  • When CCLM prediction is not performed (step S102; No), the image encoding device 100 skips the subsequent processing relating to CCLM prediction.
  • On the other hand, when CCLM prediction is performed (step S102; Yes), the image encoding device 100 specifies three horizontal points and three vertical points among the reference pixels of the block to be processed (step S103).
  • Then, the image encoding device 100 calculates the intermediate value of the upper three points (that is, the horizontal side) (step S104). Similarly, the image encoding device 100 calculates the intermediate value of the left three points (that is, the vertical side) (step S105).
  • Next, the image encoding device 100 calculates the slope α of the linear equation based on the calculated intermediate values (step S106). Further, the image encoding device 100 calculates the intercept β of the linear equation based on the calculated α (step S107).
  • Then, the image encoding device 100 calculates the predicted value of the color difference signal in the block to be processed based on the calculated values of α and β (step S108).
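  • Steps S103 to S108 can be sketched end to end as follows (an illustration under assumptions: the three extracted (luminance, color) points per side are given as input, and floating-point arithmetic is used where a codec would use integer approximations):

```python
def cclm_first_embodiment(top_pts, left_pts, rec_luma_ds):
    """First-embodiment sketch: from three (luma, chroma) points on the
    top (horizontal) side and three on the left (vertical) side, pick
    the chroma-median point of each side, fit the line through the two
    medians (slope alpha, intercept beta), and predict chroma."""
    def median(points):
        # point whose chroma is the median of the three, pair kept intact
        return sorted(points, key=lambda p: p[1])[1]

    xa, ya = median(top_pts)    # (XA_m, YA_m), steps S103-S104
    xl, yl = median(left_pts)   # (XL_m, YL_m), step S105
    if xa == xl:                # degenerate case: flat luminance
        alpha = 0.0
    else:
        alpha = (ya - yl) / (xa - xl)                 # step S106
    beta = yl - alpha * xl                            # step S107
    pred_c = [alpha * l + beta for l in rec_luma_ds]  # step S108
    return alpha, beta, pred_c
```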
  • FIG. 5 is a conceptual diagram showing processing blocks according to the second embodiment. Similar to FIG. 3, FIG. 5 shows a block 40 corresponding to color information and a block 50 corresponding to luminance information.
  • In the second embodiment, the intra prediction unit 114 likewise does not perform processing using all the reference pixels, but extracts a plurality of points from them. Then, the intra prediction unit 114 calculates linear equations, each determined from one of the points extracted from the vertically adjacent reference pixels of the block 40 and one of the points extracted from the horizontally adjacent reference pixels of the block 40. For example, when three points are extracted in the vertical direction and three in the horizontal direction, the intra prediction unit 114 calculates three linear equations, one for each combination of points. Then, the intra prediction unit 114 calculates the linear model based on a comparison of the slopes of the calculated linear equations.
  • the intra prediction unit 114 extracts, from the reference pixels vertically adjacent to the block 40, three consecutive points located at an end different from the base point of the block (points L0, L1, and L2 shown in FIG. 5).
  • the intra prediction unit 114 extracts, from the reference pixels horizontally adjacent to the block 40, three consecutive points located at an end different from the base point of the block 40 (points A0, A1, and A2 shown in FIG. 5). The intra prediction unit 114 then calculates three linear equations, each pairing one of the extracted points, and calculates the linear model based on a comparison of the slopes of the calculated linear equations.
  • FIG. 5 conceptually shows the straight lines obtained from the points A0 and L0, from the points A1 and L1, and from the points A2 and L2. That is, while the first embodiment obtains an intermediate point from the three extracted points and derives a linear equation from that intermediate point, the second embodiment differs in that a plurality of linear equations are obtained from the extracted points and the linear model is calculated from the plurality of linear equations thus obtained.
  • the intra prediction unit 114 calculates a linear model based on the intermediate value of the slope of the linear equation calculated for the number of combinations of a plurality of points.
  • the intra prediction unit 114 extracts the point A0 (XA0, YA0), the point A1 (XA1, YA1), the point A2 (XA2, YA2), the point L0 (XL0, YL0), the point L1 (XL1, YL1), and the point L2 (XL2, YL2).
  • the intra prediction unit 114 generates combinations of two points. For example, the intra prediction unit 114 pairs the points A0 and L0, the points A1 and L1, and the points A2 and L2. The coordinates of each pair are then substituted into the above equation (3) to obtain the slope of each linear equation.
  • FIG. 6 shows the straight lines obtained from the three combinations.
  • FIG. 6 is a diagram (1) for explaining the intra prediction process according to the second embodiment.
  • a straight line 52 obtained from the combination of points A0 and L0
  • a straight line 54 obtained from the combination of points A1 and L1
  • a straight line 56 obtained from the combination of points A2 and L2.
  • the intra prediction unit 114 determines the slope α00 as the intermediate value αm.
  • the intra prediction unit 114 extracts the three furthest points in the space (block), calculates the linear equations in parallel, and obtains the intermediate value of their solutions. The intra prediction unit 114 then calculates (determines) the linear model for CCLM prediction based on that intermediate value.
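The pairing-and-median procedure above can be sketched minimally as follows, assuming each extracted point is a (luma, chroma) coordinate; all sample values and names are hypothetical. The pair producing the deviating slope is discarded by the median, which is the robustness property the text describes with point A2′.

```python
import statistics

# Hypothetical (luma, chroma) coordinates for the extracted points.
a_points = [(100, 50), (110, 60), (120, 55)]   # A0, A1, A2 (horizontal side)
l_points = [(140, 70), (150, 100), (160, 75)]  # L0, L1, L2 (vertical side)

def median_slope_model(a_points, l_points):
    """Pair A0-L0, A1-L1, A2-L2, compute each line's slope, keep the median."""
    slopes = [(yl - ya) / (xl - xa)
              for (xa, ya), (xl, yl) in zip(a_points, l_points)]
    alpha_m = statistics.median(slopes)
    # Fix the intercept using a point on the line whose slope was selected.
    xa, ya = a_points[slopes.index(alpha_m)]
    beta = ya - alpha_m * xa
    return slopes, alpha_m, beta

slopes, alpha_m, beta = median_slope_model(a_points, l_points)
```

Here the middle pair yields an outlier slope of 1.0, and taking the median excludes it. The nine-combination variant described later in the text corresponds to replacing `zip` with `itertools.product` over the two point lists.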
  • the intra prediction unit 114 can improve robustness against noise by taking the intermediate value of the solutions of the plurality of linear equations. This point will be described with reference to FIG. 7.
  • FIG. 7 is a diagram (2) for explaining the intra prediction process according to the second embodiment.
  • FIG. 7 shows a straight line 58 based on the combination of the points L2 and A2′, instead of the straight line 56 based on the combination of the points L2 and A2 shown in FIG.
  • the point A2′ is an outlier, so-called noise, that degrades the prediction accuracy.
  • in the processing according to the second embodiment, since the intermediate value is selected from the solutions of the three straight lines, the solution corresponding to the straight line 58 is excluded.
  • noise can thus be excluded in the course of the calculation, improving robustness against noise.
  • the intra prediction unit 114 may calculate the linear model using information other than the midpoint. For example, the intra prediction unit 114 may calculate the linear model based on the average value of the slopes of the linear equation calculated for the number of combinations of a plurality of points.
  • the intra prediction unit 114 may obtain more or less linear equations than the numbers shown in FIGS. 6 and 7.
  • the intra prediction unit 114 may obtain the solutions of nine linear equations by combining the points A0 and L0, A0 and L1, A0 and L2, A1 and L0, A1 and L1, A1 and L2, A2 and L0, A2 and L1, and A2 and L2. This allows the intra prediction unit 114 to calculate a linear model that further improves the prediction accuracy of CCLM prediction.
  • conversely, the intra prediction unit 114 can perform faster calculation by reducing the number of linear equations to be solved.
  • FIG. 8 is a flowchart showing the flow of information processing according to the second embodiment.
  • the image encoding device 100 calculates a prediction value (Pred L ) of the luminance signal in the processing target block (step S201).
  • the image encoding device 100 determines whether or not to perform CCLM prediction (step S202).
  • the image coding apparatus 100 skips the subsequent process regarding the CCLM prediction.
  • the image coding apparatus 100 specifies three combinations, each connecting a horizontal point and a vertical point among the reference pixels of the processing target block (step S203).
  • the image coding apparatus 100 calculates the slope α of the linear equation for each combination (step S204). Further, the image coding apparatus 100 calculates the intermediate value of the three values of α obtained in step S204 (step S205).
  • the image coding apparatus 100 calculates the intercept β of the linear equation based on the calculated intermediate value (step S206). Then, the image coding apparatus 100 calculates the predicted value of the color difference signal in the processing target block based on the calculated values of α and β (step S207).
  • in the first and second embodiments, the intra prediction unit 114 obtains the intermediate value of the extracted points or the intermediate value of the slopes of the linear equations.
  • the intra prediction unit 114 uses the look-up table to perform the calculation process described in the first and second embodiments.
  • the functions and configuration of the image encoding device 100 are the same as those in the first and second embodiments, and description thereof is omitted.
  • the intra prediction unit 114 calculates the variables that specify the linear model (specifically, α and β) using a process that refers to a lookup table.
  • the intra prediction unit 114 performs processing such as "selecting the three furthest points in the space (block) and computing the linear equations in parallel to obtain the midpoint of the solutions".
  • the intra prediction unit 114 repeats the processing of substituting the coordinates of three points into the above equation (3) and calculating α and β for each.
  • the intra prediction unit 114 performs the calculation by referring to the look-up table as shown in the following expression (4).
  • "Table" indicates a lookup table. By replacing the calculation of α and β, the variables of the linear equation, with a lookup table reference in this way, the intra prediction unit 114 can realize high-speed calculation of α and β with a relatively small circuit scale.
  • the intra prediction unit 114 can adopt a method of performing quantization.
  • the intra prediction unit 114 can perform the calculation process shown in the first and second embodiments at a higher speed by adopting the method of referring to the lookup table.
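One common way to realize such table-based calculation is a precomputed reciprocal table that turns the slope division into a multiplication. The sketch below illustrates that general technique under stated assumptions; it does not reproduce the document's actual expression (4) or its table contents.

```python
SHIFT = 16
# Precomputed reciprocal table: TABLE[d] approximates (1 << SHIFT) / d for d = 1..512.
TABLE = [0] + [(1 << SHIFT) // d for d in range(1, 513)]

def alpha_fixed_point(dy, dx):
    """Approximate the slope dy/dx with SHIFT fractional bits, without dividing.

    Assumes 0 < dx <= 512 for simplicity of the sketch.
    """
    assert 0 < dx < len(TABLE)
    return dy * TABLE[dx]

a = alpha_fixed_point(20, 40)     # fixed-point representation of ~0.5
approx = a / (1 << SHIFT)         # back to a float, for illustration only
```

A hardware or fixed-point software implementation would keep `a` in integer form and fold the shift into subsequent operations; quantizing the table index, as the text mentions, further reduces the table size.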
  • the information processing device according to the present disclosure has been described as the image encoding device 100, but the information processing according to the present disclosure may be performed by the decoding device. That is, the information processing device according to the present disclosure is not limited to a device that performs image encoding, and may be a device that performs image decoding.
  • the image encoding device 100 is described as an integrated device, but the image encoding device 100 may be realized by a plurality of devices.
  • each component of each device shown in the drawings is functionally conceptual and does not necessarily have to be physically configured as shown. That is, the specific form of distribution and integration of the devices is not limited to that shown in the drawings, and all or part of each device may be functionally or physically distributed or integrated in arbitrary units according to various loads and usage conditions.
  • the information processing device includes the intra prediction unit (intra prediction unit 114 in the embodiment).
  • the intra prediction unit extracts a plurality of points from reference pixels adjacent to a block defined in an image, and calculates a linear model for predicting the color information of pixels in the block based on the luminance information and color information of the extracted points.
  • compared with the conventional method of searching all horizontal and vertical points and calculating the maximum and minimum values, the information processing apparatus reduces the number of coordinates for which luminance information and the like are calculated. Therefore, high-speed calculation is possible and the overall processing speed can be improved.
  • the intra prediction unit extracts three points from the reference pixels vertically adjacent to the block and three points from the reference pixels horizontally adjacent to the block, specifies the intermediate points in the luminance information and color information of the respective three points, and calculates the linear model based on the specified intermediate points.
  • the intra prediction unit extracts three consecutive points from the reference pixels vertically adjacent to the block and three consecutive points from the reference pixels horizontally adjacent to the block, specifies the intermediate points in the luminance information and color information, and calculates the linear model based on the specified intermediate points.
  • the intra prediction unit extracts, from the reference pixels vertically adjacent to the block, three consecutive points located at an end different from the base point of the block, and extracts, from the reference pixels horizontally adjacent to the block, three consecutive points located at an end different from the base point of the block; it then specifies the intermediate points in the luminance information and color information of these points and calculates the linear model based on the specified intermediate points. In this way, the information processing apparatus generates the linear model from the three points estimated to best represent the features of the space (block), so the prediction accuracy can be further improved.
  • the intra prediction unit may also calculate the linear model based on the average values of the luminance information and color information of the extracted points. As a result, the information processing device can improve the processing speed while increasing robustness against noise.
  • the intra prediction unit calculates linear equations, as many as the number of combinations of one of the points extracted from the vertically adjacent reference pixels of the block and one of the points extracted from the horizontally adjacent reference pixels of the block, and calculates the linear model based on a comparison of the slopes of the calculated linear equations.
  • the intra prediction unit calculates the linear model based on the intermediate value of the slopes of the linear equations calculated for the number of combinations of the points. As a result, the information processing device can perform the prediction process with improved robustness against noise.
  • the intra prediction unit calculates the linear model based on the average value of the slopes of the linear equations calculated for the number of combinations of the points. As a result, the information processing device can perform the prediction process with improved robustness against noise.
  • the intra prediction unit extracts, from the reference pixels vertically adjacent to the block, three consecutive points located at an end different from the base point of the block, and extracts, from the reference pixels horizontally adjacent to the block, three consecutive points located at an end different from the base point of the block; it then calculates three linear equations, each combining two of the extracted points, and calculates the linear model based on a comparison of the slopes of the calculated linear equations. As described above, the information processing apparatus generates the linear model from combinations of the three points estimated to best represent the features of the space (block), so the prediction accuracy can be further improved.
  • in the process of calculating the variables for specifying the linear model, the intra prediction unit calculates the variables using a process that refers to a lookup table (Lookup Table).
  • as a result, the information processing apparatus can significantly speed up arithmetic processing, for example through parallel computation.
  • FIG. 9 is a hardware configuration diagram illustrating an example of a computer 1000 that realizes the functions of the image encoding device 100 according to the present disclosure.
  • the computer 1000 has a CPU 1100, a RAM 1200, a ROM (Read Only Memory) 1300, an HDD (Hard Disk Drive) 1400, a communication interface 1500, and an input/output interface 1600.
  • the respective units of the computer 1000 are connected by a bus 1050.
  • the CPU 1100 operates based on a program stored in the ROM 1300 or the HDD 1400, and controls each part. For example, the CPU 1100 expands a program stored in the ROM 1300 or the HDD 1400 into the RAM 1200 and executes processing corresponding to various programs.
  • the ROM 1300 stores a boot program such as a BIOS (Basic Input Output System) executed by the CPU 1100 when the computer 1000 starts up, a program dependent on the hardware of the computer 1000, and the like.
  • the HDD 1400 is a computer-readable recording medium that non-transitorily records programs executed by the CPU 1100, data used by the programs, and the like. Specifically, the HDD 1400 is a recording medium that records the information processing program according to the present disclosure, which is an example of the program data 1450.
  • the communication interface 1500 is an interface for connecting the computer 1000 to an external network 1550 (for example, the Internet).
  • the CPU 1100 receives data from another device or transmits the data generated by the CPU 1100 to another device via the communication interface 1500.
  • the input/output interface 1600 is an interface for connecting the input/output device 1650 and the computer 1000.
  • the CPU 1100 receives data from an input device such as a keyboard or a mouse via the input/output interface 1600.
  • the CPU 1100 also transmits data to an output device such as a display, a speaker, a printer, etc. via the input/output interface 1600.
  • the input/output interface 1600 may function as a media interface for reading a program or the like recorded in a predetermined recording medium (medium).
  • examples of the media include optical recording media such as a DVD (Digital Versatile Disc) and a PD (Phase change rewritable Disk), magneto-optical recording media such as an MO (Magneto-Optical disk), tape media, magnetic recording media, and semiconductor memories.
  • the CPU 1100 of the computer 1000 executes the information processing program loaded on the RAM 1200 to realize the functions of the intra prediction unit 114 and the like.
  • the HDD 1400 stores the information processing program according to the present disclosure and various data used for information processing. Note that the CPU 1100 reads the program data 1450 from the HDD 1400 and executes it, but as another example, these programs may be acquired from another device via the external network 1550.
  • (1) An information processing device comprising an intra prediction unit that extracts a plurality of points from reference pixels adjacent to a block defined in an image, and calculates a linear model for predicting color information of pixels in the block based on the luminance information and color information of the extracted points.
  • (2) The information processing device according to (1), wherein the intra prediction unit extracts three points from the reference pixels vertically adjacent to the block and three points from the reference pixels horizontally adjacent to the block, specifies the intermediate points in the luminance information and color information of the respective three points, and calculates the linear model based on the specified intermediate points.
  • (3) The information processing device according to (1) or (2), wherein the intra prediction unit extracts three consecutive points from the reference pixels vertically adjacent to the block and three consecutive points from the reference pixels horizontally adjacent to the block, specifies the intermediate points in the luminance information and color information of the respective three points, and calculates the linear model based on the specified intermediate points.
  • (4) The information processing device in which the intra prediction unit extracts, from the reference pixels vertically adjacent to the block, three consecutive points located at an end different from the base point of the block, extracts, from the reference pixels horizontally adjacent to the block, three consecutive points located at an end different from the base point of the block, specifies the intermediate points in the luminance information and color information of the three points, and calculates the linear model based on the specified intermediate points.
  • (5) The information processing device according to any one of (1) to (4), wherein the intra prediction unit calculates the linear model based on the average values of the luminance information and color information of the extracted points.
  • (6) The information processing device according to (1), wherein the intra prediction unit calculates linear equations, as many as the number of combinations of one of a plurality of points extracted from the vertically adjacent reference pixels of the block and one of a plurality of points extracted from the horizontally adjacent reference pixels of the block, and calculates the linear model based on a comparison of the slopes of the calculated linear equations.
  • (7) The information processing device according to (6), wherein the intra prediction unit calculates the linear model based on the intermediate value of the slopes of the linear equations calculated for the number of combinations of the points.
  • (8) The information processing device according to (6) or (7), wherein the intra prediction unit calculates the linear model based on the average value of the slopes of the linear equations calculated for the number of combinations of the points.
  • (9) The information processing device according to any one of (6) to (8), wherein the intra prediction unit extracts, from the reference pixels vertically adjacent to the block, three consecutive points located at an end different from the base point of the block, extracts, from the reference pixels horizontally adjacent to the block, three consecutive points located at an end different from the base point of the block, calculates three linear equations, each combining two of the extracted points, and calculates the linear model based on a comparison of the slopes of the calculated linear equations.
  • (10) The information processing device according to any one of (1) to (9), wherein, in the process of calculating the variables for specifying the linear model, the intra prediction unit calculates the variables using a process that refers to a lookup table (Lookup Table).
  • (11) An information processing method in which a computer extracts a plurality of points from reference pixels adjacent to a block defined in an image, and calculates a linear model for predicting color information of pixels in the block based on the luminance information and color information of the extracted points.
  • (12) An information processing program for causing a computer to function as an intra prediction unit that extracts a plurality of points from reference pixels adjacent to a block defined in an image, and calculates a linear model for predicting color information of pixels in the block based on the luminance information and color information of the extracted points.
  • 100 image coding device, 101 A/D conversion unit, 102 sorting buffer, 103 first operation unit, 104 orthogonal transformation unit, 105 quantization unit, 106 encoding unit, 107 storage buffer, 108 inverse quantization unit, 109 inverse orthogonal transformation unit, 110 second operation unit, 111 filter, 112 frame memory, 113 switch, 114 intra prediction unit, 1141 luminance/color-difference separation unit, 1142 luminance signal intra prediction unit, 1143 luminance signal downsampling unit, 1144 color difference signal intra prediction unit, 115 inter prediction unit, 116 predicted image selection unit, 117 rate control unit

Abstract

This information processing device is provided with an intra-frame prediction unit which extracts a plurality of points from a reference pixel adjoining a block prescribed within an image, and calculates a linear model which predicts color information of pixels present in said block, on the basis of color information and brightness information of the plurality of extracted points.

Description

Information processing apparatus, information processing method, and information processing program
 The present disclosure relates to an information processing apparatus, an information processing method, and an information processing program, and more particularly to encoding processing of moving images.
 H.265/HEVC (High Efficiency Video Coding) has been standardized as a compression coding scheme for moving images. H.265/HEVC uses intra prediction, which performs spatial prediction within an image to generate predicted values, and inter prediction, which performs motion-compensated prediction between images to generate predicted values.
 As one method for improving current intra prediction, CCLM prediction (Cross-Component Linear Model Prediction), which generates predicted values of color information using decoded pixel values of luminance information, has been proposed. CCLM prediction can reduce the redundancy between the luminance and color components (for example, Non-Patent Document 1). Furthermore, a method for reducing the calculation cost of CCLM prediction has also been proposed (for example, Non-Patent Document 2).
 According to these conventional technologies, the redundancy of pixels within a picture (frame) can be reduced, so compression efficiency can be improved.
 However, the above conventional technologies leave room for improvement. For example, CCLM prediction has a high calculation cost and may therefore reduce the throughput of the entire encoding and decoding process. Even with the method proposed in Non-Patent Document 2, obtaining the maximum and minimum values of the luminance and color information requires looping over all the reference pixels, so it is difficult to significantly reduce the calculation cost. In addition, the method proposed in Non-Patent Document 2 calculates the regression line (the linear model used for CCLM prediction) from only one maximum-value point and one minimum-value point of the luminance and color information, raising the concern that it is easily affected by noise.
 Therefore, the present disclosure proposes an information processing device, an information processing method, and an information processing program that can reduce the calculation cost of encoding and decoding while increasing the robustness of CCLM prediction against noise.
 To solve the above problems, an information processing device according to one aspect of the present disclosure includes an intra prediction unit that extracts a plurality of points from reference pixels adjacent to a block defined in an image, and calculates a linear model for predicting the color information of pixels in the block based on the luminance information and color information of the extracted points.
FIG. 1 is a block diagram showing a configuration example of an image encoding device which is an example of an information processing device according to the present disclosure.
FIG. 2 is a block diagram illustrating a configuration example of an intra prediction unit according to the present disclosure.
FIG. 3 is a conceptual diagram showing processing blocks according to the first embodiment.
FIG. 4 is a flowchart showing a procedure of information processing according to the first embodiment.
FIG. 5 is a conceptual diagram showing processing blocks according to the second embodiment.
FIG. 6 is a diagram (1) for explaining the intra prediction process according to the second embodiment.
FIG. 7 is a diagram (2) for explaining the intra prediction process according to the second embodiment.
FIG. 8 is a flowchart showing a procedure of information processing according to the second embodiment.
FIG. 9 is a hardware configuration diagram illustrating an example of a computer that realizes the functions of the image encoding device according to the present disclosure.
 Embodiments of the present disclosure will be described in detail below with reference to the drawings. In each of the following embodiments, the same parts are denoted by the same reference numerals, and duplicate description is omitted.
 The present disclosure will be described in the following order.
  1. First embodiment
   1-1. Configuration example of the information processing device according to the first embodiment
   1-2. Details of information processing according to the first embodiment
   1-3. Procedure of information processing according to the first embodiment
  2. Second embodiment
   2-1. Details of information processing according to the second embodiment
   2-2. Procedure of information processing according to the second embodiment
  3. Third embodiment
  4. Modifications
  5. Other embodiments
  6. Effects of the information processing device according to the present disclosure
  7. Hardware configuration
(1. First embodiment)
[1-1. Configuration example of the information processing device according to the first embodiment]
 First, the configuration of the information processing device according to the present disclosure will be described with reference to FIG. 1. FIG. 1 is a block diagram showing a configuration example of an image encoding device 100, which is an example of the information processing device according to the present disclosure. Although not shown in FIG. 1, the image encoding device 100 may communicate, over a wired or wireless network, with external devices such as the source of the original image data to be encoded and the destination of the encoded bitstream.
 The image encoding device 100 is a device that encodes the prediction error between an image and its predicted image based on the configuration shown in FIG. 1. Although FIG. 1 shows the configuration of the image encoding device 100, this configuration is merely an example, and the image encoding device 100 is not necessarily limited to it. That is, the image encoding device 100 need not have the configuration shown in FIG. 1 as long as it performs compression coding standardized in the above-mentioned H.265/HEVC or realizes VVC (Versatile Video Coding), which is proposed as a next-generation compression coding technology.
 The image encoding device 100 performs intra prediction processing using CCLM prediction as part of image compression coding. Here, the problems related to CCLM prediction will be described.
 CCLM prediction predicts the information of the pixels in a block (a group of pixels composed of a plurality of pixels in the horizontal and vertical directions) to be processed in one frame, using the information of the pixels adjacent to the block (hereinafter referred to as "reference pixels"). A block indicates a processing unit to be encoded and is generally referred to as a CU (Coding Unit). A CU is a block of variable size formed by recursively dividing an LCU (Largest Coding Unit), which is the largest coding unit. In intra prediction, different processing may be performed depending on the size and shape of the CU or PU (Prediction Unit), but the processing proposed in the present disclosure does not limit the size of the CU or PU. In the following description, CUs and the like are collectively referred to as "blocks".
 CCLM prediction predicts the pixel information inside a block using the relationship between the luminance information of the reference pixels (which may be read as a luminance signal or luminance value; in the following description, luminance information may be written as "Luma") and their color information (which may be read as a color difference signal or color difference value; in the following description, color information may be written as "Chroma"). As disclosed in the above Non-Patent Document 1, CCLM prediction predicts the Luma of a block, uses the reconstruction (reconstruct) of that information in the same block to calculate a linear model (specifically, a regression line) that reduces the regression error, and predicts the Chroma inside the block. CCLM prediction can reduce the redundancy among pixels within a frame and can therefore improve compression efficiency.
 However, CCLM prediction has a large calculation cost, and this calculation may affect the throughput of the entire encoding process. For example, the linear model is obtained by the following equation (1).
 pred_C(i, j) = α · rec_L'(i, j) + β    ...(1)
 In the above equation (1), pred_C(i,j) denotes the predicted Chroma value within the block ((i,j) denotes a pixel position), and rec_L'(i,j) denotes the down-sampled, reconstructed Luma. α and β are the coefficients that specify the linear model. For example, α and β are given by the following equation (2).
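 As an illustrative sketch (the function and variable names below are invented for this example and are not taken from any codec implementation), applying equation (1) to a block amounts to one multiply-add per down-sampled luma sample:

```python
def predict_chroma(rec_l_prime, alpha, beta):
    """Equation (1): pred_C(i, j) = alpha * rec_L'(i, j) + beta,
    applied to every sample of the down-sampled reconstructed luma block."""
    return [[alpha * sample + beta for sample in row] for row in rec_l_prime]

# 2x2 block of down-sampled reconstructed luma values
pred_c = predict_chroma([[100, 110], [120, 130]], 0.5, 8)
# pred_c[0][0] is 0.5 * 100 + 8 = 58.0
```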
 α = ( N·Σ L(n)·C(n) − Σ L(n) · Σ C(n) ) / ( N·Σ L(n)² − (Σ L(n))² )
 β = ( Σ C(n) − α·Σ L(n) ) / N    ...(2)
 (where L(n) and C(n) denote the luminance and color information of the n-th reference pixel, and the sums run over the reference pixels)
 In the above equation (2), N denotes the block unit (the number of horizontal or vertical pixels). As equation (2) shows, computing α and β requires looping over all the values of the block's reference pixels, and since the formula includes division, the calculation cost becomes very large. For this reason, as disclosed in the above Non-Patent Document 2, the following equation (3), which simplifies the calculation of α and β, has been proposed.
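 To make that cost concrete, a direct transcription of equation (2) might look as follows (a sketch with illustrative names; a hardware encoder would of course not use floating point). Note the loops over every reference sample and the divisions at the end:

```python
def lm_least_squares(luma, chroma):
    """Fit alpha and beta of equation (2) over all N reference samples."""
    n = len(luma)
    sum_l = sum(luma)                                    # sum of L(n)
    sum_c = sum(chroma)                                  # sum of C(n)
    sum_ll = sum(l * l for l in luma)                    # sum of L(n)^2
    sum_lc = sum(l * c for l, c in zip(luma, chroma))    # sum of L(n)*C(n)
    alpha = (n * sum_lc - sum_l * sum_c) / (n * sum_ll - sum_l * sum_l)
    beta = (sum_c - alpha * sum_l) / n
    return alpha, beta

# Reference samples lying exactly on C = 2L + 5 recover alpha = 2, beta = 5
alpha, beta = lm_least_squares([10, 20, 30, 40], [25, 45, 65, 85])
```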
 α = (y_A - y_B) / (x_A - x_B)
 β = y_B − α·x_B    ...(3)
 In the above equation (3), (x_A, y_A) and (x_B, y_B) denote the maximum value or the minimum value of Luma or Chroma among the reference pixels, respectively. According to equation (3), α and β can be calculated easily.
 However, in order to use equation (3), all the reference pixels must still be looped over in the process of finding the maximum and minimum values of the luminance information and so on, so it is difficult to significantly reduce the calculation cost. In addition, since the method proposed in Non-Patent Document 2, which uses equation (3), calculates the linear model from just one point each for the maximum and minimum of the luminance and color information, there is a concern that it is easily affected by noise. Specifically, the pixels that take the minimum or maximum value are likely to be noise, and calculating the linear model from those values may lower the prediction accuracy.
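 The two-point model of equation (3) can be sketched as follows (illustrative code, not a reference implementation). The search for the extreme points still visits every reference sample, and a single noisy extreme directly determines the line:

```python
def lm_min_max(luma, chroma):
    """Equation (3): line through the samples with maximum and minimum luma."""
    i_max = max(range(len(luma)), key=lambda i: luma[i])  # still a full scan
    i_min = min(range(len(luma)), key=lambda i: luma[i])  # still a full scan
    x_a, y_a = luma[i_max], chroma[i_max]
    x_b, y_b = luma[i_min], chroma[i_min]
    alpha = (y_a - y_b) / (x_a - x_b)
    beta = y_b - alpha * x_b
    return alpha, beta

alpha, beta = lm_min_max([10, 30, 20], [25, 65, 45])  # alpha = 2.0, beta = 5.0
```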
 Therefore, as described below, the present disclosure proposes information processing that can reduce the calculation cost of encoding and decoding while increasing the robustness of CCLM prediction against noise.
 First, the overall flow of the encoding process is described with reference to FIG. 1, and then the details of the information processing (intra prediction processing) according to the present disclosure are described with reference to FIG. 2 and subsequent figures. Note that FIGS. 1 and 2 show the main processing units, data flows, and the like, and what is shown in FIGS. 1 and 2 is not necessarily everything. That is, in the image encoding device 100, there may be processing units not shown as blocks in FIG. 1, and there may be processing or data flows not shown as arrows or the like in FIG. 1.
 As shown in FIG. 1, the image encoding device 100 includes an A/D conversion unit 101, a rearrangement buffer 102, a first calculation unit 103, an orthogonal transform unit 104, a quantization unit 105, an encoding unit 106, an accumulation buffer 107, an inverse quantization unit 108, an inverse orthogonal transform unit 109, a second calculation unit 110, a filter 111, a frame memory 112, a switch 113, an intra prediction unit 114, an inter prediction unit 115, a predicted image selection unit 116, and a rate control unit 117. The image encoding device 100 encodes input original picture data (images), which is a moving image in frame units, block by block (for example, per CU).
 For example, the A/D conversion unit 101 of the image encoding device 100 acquires the original picture data and performs A/D conversion on the images included in the original picture data, frame by frame, for encoding. The A/D conversion unit 101 then outputs the A/D-converted data to the rearrangement buffer 102.
 The rearrangement buffer 102 rearranges the frame-by-frame image data output from the A/D conversion unit 101 from display order into encoding order, for example according to the GOP (Group of Pictures) structure. The rearrangement buffer 102 then outputs the rearranged data to the first calculation unit 103 and the inter prediction unit 115.
 The first calculation unit 103 takes the input images in order as images to be encoded and sets blocks to be encoded in each image to be encoded. The first calculation unit 103 subtracts the predicted image of the block (predicted block) output from the predicted image selection unit 116 from the image of the block to be encoded (current block) to obtain the prediction error, and outputs it to the orthogonal transform unit 104.
 The orthogonal transform unit 104 performs an orthogonal transform or the like on the prediction error output from the first calculation unit 103 and derives transform coefficients. The orthogonal transform unit 104 outputs the transform coefficients to the quantization unit 105.
 The quantization unit 105 scales (quantizes) the transform coefficients output from the orthogonal transform unit 104 and derives quantized transform coefficients. The quantization unit 105 outputs the quantized transform coefficients to the encoding unit 106 and the inverse quantization unit 108.
 The encoding unit 106 encodes the quantized transform coefficients and the like output from the quantization unit 105 using a predetermined method. For example, the encoding unit 106 converts the encoding parameters and the quantized transform coefficients output from the quantization unit 105 into syntax values of the respective syntax elements in accordance with the definitions in a syntax table. The encoding unit 106 then encodes each syntax value (for example, by arithmetic coding such as CABAC (Context-based Adaptive Binary Arithmetic Coding)).
 The encoding unit 106, for example, multiplexes the encoded data, which is the bit string of each syntax element obtained as a result of the encoding, and outputs it to the accumulation buffer 107 as an encoded bit stream. For example, the encoding unit 106 encodes the block processed by the intra prediction unit 114, described later, based on the color information predicted by the intra prediction unit 114. More specifically, the encoding unit 106 encodes the prediction error between the image corresponding to the original picture data and a predicted image containing the block made up of the color information predicted by the intra prediction unit 114.
 The accumulation buffer 107 temporarily stores the encoded bit stream output from the encoding unit 106. The accumulation buffer 107 then transmits the accumulated encoded bit stream to a decoding device or the like via a transmission path.
 The inverse quantization unit 108 scales (inversely quantizes) the values of the quantized transform coefficients output from the quantization unit 105 and derives the transform coefficients after inverse quantization. The inverse quantization unit 108 outputs those transform coefficients to the inverse orthogonal transform unit 109. That is, the inverse quantization performed by the inverse quantization unit 108 is the inverse of the quantization performed by the quantization unit 105, and is the same processing as the inverse quantization performed in an image decoding device (decoder).
 The inverse orthogonal transform unit 109 performs an inverse orthogonal transform or the like on the transform coefficients output from the inverse quantization unit 108 and derives the prediction error. The inverse orthogonal transform unit 109 outputs the prediction error to the second calculation unit 110. That is, the inverse orthogonal transform performed by the inverse orthogonal transform unit 109 is the inverse of the orthogonal transform performed by the orthogonal transform unit 104, and is the same processing as the inverse orthogonal transform performed in an image decoding device.
 The second calculation unit 110 adds the prediction error output from the inverse orthogonal transform unit 109 and the predicted image corresponding to that prediction error output from the predicted image selection unit 116 to derive a locally decoded image (decoded image). The second calculation unit 110 outputs the locally decoded image to the filter 111.
 The filter 111 removes block distortion by filtering the decoded image output from the second calculation unit 110. The filter 111 outputs the filtered image to the frame memory 112.
 The frame memory 112 reconstructs a decoded image in picture units using the locally decoded images output from the filter 111 and stores it in a buffer within the frame memory 112. The frame memory 112 reads the decoded image designated by the intra prediction unit 114 or the inter prediction unit 115 from the buffer as a reference image and outputs the decoded image via the switch 113.
 The intra prediction unit 114 acquires, as a reference image, the decoded image stored in the frame memory 112 at the same time as the block to be encoded. The intra prediction unit 114 then uses the reference image to perform intra prediction processing in a predetermined intra prediction mode on the block to be encoded. Although H.265/HEVC and VVC define a plurality of intra prediction modes, as described later, the present disclosure describes only processing using CCLM prediction.
 The inter prediction unit 115 includes a motion prediction/compensation unit, a motion vector detection unit, and the like, and executes prediction processing between images (inter prediction).
 The predicted image selection unit 116 outputs the predicted image generated as a result of the intra prediction processing or the inter prediction processing to the first calculation unit 103 and the second calculation unit 110.
 The overview of the overall configuration of the image encoding device 100 according to the present disclosure has been described above. The details of the intra prediction processing, which is the information processing according to the present disclosure, are described below with reference to FIG. 2 and subsequent figures.
[1-2. Details of information processing according to the first embodiment]
 The details of the information processing according to the present disclosure are described below with reference to FIGS. 2 and 3. First, the configuration of the intra prediction unit 114 according to the present disclosure is described with reference to FIG. 2. FIG. 2 is a block diagram showing a configuration example of the intra prediction unit 114 according to the present disclosure.
 As shown in FIG. 2, the intra prediction unit 114 of the image encoding device 100 according to the present disclosure includes a luminance/color difference separation unit 1141, a luminance signal intra prediction unit 1142, a luminance signal down-sampling unit 1143, and a color difference signal intra prediction unit 1144.
 The luminance/color difference separation unit 1141 acquires the encoded block output from the frame memory 112 and separates it into a luminance signal and a color difference signal. Of the separated information, the luminance/color difference separation unit 1141 outputs the luminance signal to the luminance signal intra prediction unit 1142. This luminance signal corresponds to the reconstructed luminance signal and is written "Rec_L" (L stands for Luma) in the following description. The luminance/color difference separation unit 1141 also outputs the separated luminance signal to the luminance signal down-sampling unit 1143.
 The luminance/color difference separation unit 1141 also outputs the separated color difference signal to the color difference signal intra prediction unit 1144. This color difference signal corresponds to the reconstructed color difference signal and is written "Rec_C" (C stands for Chroma) in the following description.
 The luminance signal intra prediction unit 1142 predicts the luminance signal inside the block to be processed based on Rec_L output from the luminance/color difference separation unit 1141 and parameters for the intra prediction processing (for example, the size of the block to be processed). In the following description, the predicted luminance signal is written "Pred_L" (L stands for Luma).
 The luminance signal down-sampling unit 1143 down-samples Rec_L output from the luminance/color difference separation unit 1141. This processing is normally performed to adjust for the difference in the amount of information between the luminance signal and the color difference signal (for example, the luminance signal Y and the color difference signals Cb and Cr are represented in a 4:2:0 format). In the following description, the down-sampled luminance signal is written "Rec'_L" (L stands for Luma).
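 For the 4:2:0 case, one simple way to align one luma sample with each chroma sample is to average each 2×2 luma neighbourhood. This is only a sketch with an assumed averaging filter; actual codecs specify particular down-sampling filter taps:

```python
def downsample_luma(rec_l):
    """Reduce Rec_L to Rec'_L by averaging each 2x2 neighbourhood
    (simplified stand-in for the codec-defined down-sampling filter)."""
    height, width = len(rec_l), len(rec_l[0])
    return [[(rec_l[y][x] + rec_l[y][x + 1]
              + rec_l[y + 1][x] + rec_l[y + 1][x + 1]) // 4
             for x in range(0, width, 2)]
            for y in range(0, height, 2)]

# A 2x2 luma block collapses to a single sample: (1 + 3 + 5 + 7) // 4 = 4
rec_l_prime = downsample_luma([[1, 3], [5, 7]])
```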
 The color difference signal intra prediction unit 1144 predicts the color difference signal inside the block to be processed using the CCLM prediction method. Specifically, the color difference signal intra prediction unit 1144 calculates the predicted value of the color difference signal using the parameters for the intra prediction processing, the down-sampled luminance signal "Rec'_L", and the reconstructed color difference signal "Rec_C". More specifically, the color difference signal intra prediction unit 1144 calculates the predicted value "Pred_C" of the color difference signal using the above equation (1).
 The intra prediction unit 114 then outputs a predicted image based on the calculated "Pred_L" and "Pred_C" to the predicted image selection unit 116.
 Next, the method of calculating α and β in the above equation (1) is described. In the present disclosure, the intra prediction unit 114 extracts a plurality of points from the reference pixels adjacent to the block and, based on the luminance information and color information of the extracted points, calculates a linear model that predicts the color information of the pixels in the block. Calculating the linear model means calculating the above α and β. A "plurality of points" means two or more, but not all, of the reference pixels adjacent to the block.
 For example, in the first embodiment, the intra prediction unit 114 extracts three points from the reference pixels vertically adjacent to the block and three points from the reference pixels horizontally adjacent to the block, determines the median point of the luminance and color information within each set of three points, and calculates the linear model based on the determined median points.
 More specifically, the intra prediction unit 114 extracts three consecutive points from the reference pixels vertically adjacent to the block and three consecutive points from the reference pixels horizontally adjacent to the block, determines the median point of the luminance and color information within each set of three points, and calculates the linear model based on the determined median points.
 This point is explained with reference to FIG. 3. FIG. 3 is a conceptual diagram showing processing blocks according to the first embodiment. FIG. 3 shows an example in which the relationship between the luminance information and the color information of the image is the 4:2:0 format and the block size ("N" in the above equation (2)) is "8". In this case, as shown in FIG. 3, the block 40 corresponding to the color information has half as many pixels vertically and horizontally as the block 50 corresponding to the luminance information. The pixels adjacent to the periphery of the blocks 40 and 50 are the reference pixels.
 As described above, the intra prediction unit 114 according to the first embodiment does not process the block 40 using the information of all the reference pixels, but extracts three points each from the vertical and horizontal reference pixels. The intra prediction unit 114 then determines the median point of the luminance and color information within each set of three points and calculates the linear model based on the determined median points.
 This allows the intra prediction unit 114 to reduce the number of coordinates for which luminance information and the like are computed, compared with the conventional method of searching all the horizontal and vertical points to find the maximum and minimum values; the computation is therefore faster, and the overall processing speed improves. Furthermore, by taking the median of multiple points rather than the maximum or minimum values, which are presumed to be susceptible to noise, the intra prediction unit 114 can improve robustness against noise.
 More preferably, the intra prediction unit 114 according to the first embodiment extracts, from the reference pixels vertically adjacent to the block 40, three consecutive points located at the end opposite the base point of the block 40. Likewise, the intra prediction unit 114 extracts, from the reference pixels horizontally adjacent to the block 40, three consecutive points located at the end opposite the base point of the block 40. The intra prediction unit 114 then determines the median point of the luminance and color information within each set of three points and calculates the linear model based on the determined median points.
 In this way, the intra prediction unit 114 processes, within the space of the block 40, the three points on each side that are farthest from the base point (the topmost, leftmost corner of the block). This allows the intra prediction unit 114 to calculate the linear model from the points presumed to best reflect the characteristics of that space, thereby improving prediction accuracy.
 Specifically, as shown in FIG. 3, the intra prediction unit 114 extracts as processing targets the three points farthest from the base point in the horizontal direction, namely points A0, A1, and A2, and the three points farthest from the base point in the vertical direction, namely points L0, L1, and L2. In FIG. 3, corresponding points in the block 40 and the block 50 carry the same reference signs.
 The intra prediction unit 114 then calculates the median of each set of three extracted points. For example, if each point is denoted by coordinates (luminance information, color information), point A0 is written (XA0, YA0), where XA0 denotes the luminance information at point A0 and YA0 denotes the color information at point A0. Similarly, point A1 is written (XA1, YA1), point A2 (XA2, YA2), point L0 (XL0, YL0), point L1 (XL1, YL1), and point L2 (XL2, YL2).
 The intra prediction unit 114 then obtains, from the three horizontal points and the three vertical points, the median color information values YAm and YLm and the corresponding luminance values XAm and XLm.
 For example, for the three horizontal points, when YA0 ≧ YA1 and YA2 ≧ YA0 hold, YAm is YA0 and XAm is XA0. Similarly, when YA1 ≧ YA2 and YA0 ≧ YA1 hold, YAm is YA1 and XAm is XA1, and when YA2 ≧ YA0 and YA1 ≧ YA2 hold, YAm is YA2 and XAm is XA2.
 Likewise, for the three vertical points, when YL0 ≧ YL1 and YL2 ≧ YL0 hold, YLm is YL0 and XLm is XL0. Similarly, when YL1 ≧ YL2 and YL0 ≧ YL1 hold, YLm is YL1 and XLm is XL1, and when YL2 ≧ YL0 and YL1 ≧ YL2 hold, YLm is YL2 and XLm is XL2.
 The intra prediction unit 114 then calculates α and β of the linear equation through the two points (XAm, YAm) and (XLm, YLm), which are the horizontal and vertical median points of the color information. Specifically, the intra prediction unit 114 calculates α and β from the two equations YAm = αXAm + β and YLm = αXLm + β. This is equivalent to substituting the median points for (x_A, y_A) and (x_B, y_B) in the above equation (3). In this way, the intra prediction unit 114 can greatly reduce the calculation cost compared with the conventional method.
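 The computation above can be sketched as follows (hypothetical helper names for illustration): select the median point of each three-point set by its chroma value, then solve the two equations YAm = αXAm + β and YLm = αXLm + β:

```python
def median_point(points):
    """Of three (luma, chroma) points, return the one with the median chroma."""
    return sorted(points, key=lambda p: p[1])[1]

def lm_from_medians(above_points, left_points):
    """Line through the horizontal and vertical median points
    (XAm, YAm) and (XLm, YLm)."""
    xa_m, ya_m = median_point(above_points)
    xl_m, yl_m = median_point(left_points)
    alpha = (ya_m - yl_m) / (xa_m - xl_m)
    beta = ya_m - alpha * xa_m
    return alpha, beta

# Three (luma, chroma) points each from the above and left reference pixels;
# only six samples are inspected regardless of the block size
alpha, beta = lm_from_medians([(10, 25), (12, 29), (14, 33)],
                              [(20, 45), (22, 49), (24, 53)])
```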
 Although the first embodiment shows an example of taking the median of the three extracted points, the intra prediction unit 114 may calculate the linear model using information other than the median. For example, the intra prediction unit 114 may calculate the linear model based on the average values of the luminance and color information of the extracted points.
[1-3. Information processing procedure according to the first embodiment]
 Next, the procedure of the information processing according to the first embodiment is described with reference to FIG. 4. FIG. 4 is a flowchart showing the flow of the information processing according to the first embodiment.
 As shown in FIG. 4, the image encoding device 100 calculates the predicted value (Pred_L) of the luminance signal for the block to be processed (step S101).
 Next, the image encoding device 100 determines whether to perform CCLM prediction (step S102). If CCLM prediction is not performed (step S102; No), the image encoding device 100 skips the subsequent processing related to CCLM prediction. If CCLM prediction is performed (step S102; Yes), the image encoding device 100 identifies three horizontal and three vertical points among the reference pixels of the block to be processed (step S103).
 The image encoding device 100 then calculates the median of the upper three points (that is, the horizontal side) (step S104). Similarly, the image encoding device 100 calculates the median of the left three points (that is, the vertical side) (step S105).
 Based on the calculated medians, the image encoding device 100 calculates the slope α of the linear equation (step S106). Based on the calculated α, the image encoding device 100 then calculates the intercept β of the linear equation (step S107).
 Thereafter, the image encoding device 100 calculates the predicted value of the color difference signal for the block to be processed based on the calculated values of α and β (step S108).
(2.第2の実施形態)
[2-1.第2の実施形態に係る情報処理の詳細]
 次に、第2の実施形態について説明する。第1の実施形態では、イントラ予測部114が、抽出した複数点の中間値を求め、求めた中間値から直線方程式の傾きと切片、すなわちα及びβを算出する例を示した。第2の実施形態では、イントラ予測部114は、抽出した複数点の組み合わせの各々について直線方程式を求め、求めた直線方程式の解に基づいて、CCLM予測における線形モデルを算出する。なお、第2の実施形態では、画像符号化装置100の機能や構成については第1の実施形態と共通するため、説明を省略する。
(2. Second embodiment)
[2-1. Details of information processing according to the second embodiment]
Next, a second embodiment will be described. In the first embodiment, an example has been shown in which the intra prediction unit 114 calculates the median value of the extracted plural points and calculates the slope and intercept of the linear equation, that is, α and β from the calculated median value. In the second embodiment, the intra prediction unit 114 obtains a linear equation for each of the extracted combinations of a plurality of points, and calculates a linear model in CCLM prediction based on the obtained solution of the linear equation. Note that, in the second embodiment, the functions and configuration of the image encoding device 100 are the same as those in the first embodiment, so description will be omitted.
 図5を用いて、第2の実施形態に係る情報処理の概要について説明する。図5は、第2の実施形態に係る処理ブロックを示した概念図である。図5は、図3と同様、色情報に対応するブロック40と、輝度情報に対応するブロック50とを示す。 An outline of information processing according to the second embodiment will be described with reference to FIG. FIG. 5 is a conceptual diagram showing processing blocks according to the second embodiment. Similar to FIG. 3, FIG. 5 shows a block 40 corresponding to color information and a block 50 corresponding to luminance information.
 イントラ予測部114は、第1の実施形態と同様、参照画素の全てを用いて処理を行うのではなく、参照画素のうち複数点を抽出する。そして、イントラ予測部114は、ブロック40のうち垂直方向に隣接する参照画素から抽出した複数点のいずれか1点と、ブロック40のうち水平方向に隣接する参照画素から抽出した複数点のいずれか1点とから求められる直線方程式を算出する。例えば、イントラ予測部114は、垂直方向から3点、水平方向から3点を抽出した場合、各々の点の組み合わせの数である3つの直線方程式を算出する。そして、イントラ予測部114は、算出した直線方程式の傾きの比較に基づいて、線形モデルを算出する。 As in the first embodiment, the intra prediction unit 114 does not perform the processing using all of the reference pixels, but extracts a plurality of points from the reference pixels. Then, the intra prediction unit 114 calculates a linear equation determined from one of the points extracted from the reference pixels vertically adjacent to the block 40 and one of the points extracted from the reference pixels horizontally adjacent to the block 40. For example, when three points are extracted from the vertical direction and three points from the horizontal direction, the intra prediction unit 114 calculates three linear equations, corresponding to the number of combinations of the points. Then, the intra prediction unit 114 calculates the linear model based on a comparison of the slopes of the calculated linear equations.
 具体的には、イントラ予測部114は、第1の実施形態と同様、ブロック40のうち垂直方向に隣接する参照画素から、連続した3点であってブロックの基点とは異なる一方の端部に位置する3点(図5に示す点L0、点L1、点L2)を抽出する。また、イントラ予測部114は、ブロック40のうち水平方向に隣接する参照画素から、連続した3点であってブロック40の基点とは異なる一方の端部に位置する3点を抽出する(図5に示す点A0、点A1、点A2)。そして、イントラ予測部114は、各々3点を組み合わせた3つの直線方程式を算出し、算出した直線方程式の傾きの比較に基づいて、線形モデルを算出する。 Specifically, as in the first embodiment, the intra prediction unit 114 extracts, from the reference pixels vertically adjacent to the block 40, three consecutive points located at the end opposite the base point of the block (points L0, L1, and L2 shown in FIG. 5). The intra prediction unit 114 also extracts, from the reference pixels horizontally adjacent to the block 40, three consecutive points located at the end opposite the base point of the block 40 (points A0, A1, and A2 shown in FIG. 5). Then, the intra prediction unit 114 calculates three linear equations, each combining one point from each set of three, and calculates the linear model based on a comparison of the slopes of the calculated linear equations.
 図5には、概念的に、点A0と点L0、点A1と点L1、点A2と点L2とから求められる直線を示している。すなわち、第1の実施形態では、抽出した3点から中間点を求め、中間点に基づいて直線方程式を求めたのに対して、第2の実施形態では、抽出した点から複数の直線方程式を求め、複数の直線方程式から、線形モデルを算出するという点で相違する。 FIG. 5 conceptually shows the straight lines obtained from the points A0 and L0, the points A1 and L1, and the points A2 and L2. That is, whereas the first embodiment obtains an intermediate point from the three extracted points and derives a linear equation based on that intermediate point, the second embodiment differs in that it obtains a plurality of linear equations from the extracted points and calculates the linear model from the plurality of linear equations.
 具体的には、イントラ予測部114は、複数点の組み合わせの数だけ算出された直線方程式の傾きの中間値に基づいて、線形モデルを算出する。 Specifically, the intra prediction unit 114 calculates a linear model based on the intermediate value of the slope of the linear equation calculated for the number of combinations of a plurality of points.
 具体的な算出手法について説明する。第1の実施形態と同様、イントラ予測部114は、点A0(XA0,YA0)、点A1(XA1,YA1)、点A2(XA2,YA2)、点L0(XL0,YL0)、点L1(XL1,YL1)、点L2(XL2,YL2)を抽出する。 A specific calculation method will now be described. As in the first embodiment, the intra prediction unit 114 extracts the point A0 (XA0, YA0), the point A1 (XA1, YA1), the point A2 (XA2, YA2), the point L0 (XL0, YL0), the point L1 (XL1, YL1), and the point L2 (XL2, YL2).
 続けて、イントラ予測部114は、2点の組み合わせを生成する。例えば、イントラ予測部114は、点A0と点L0、点A1と点L1、点A2と点L2とをそれぞれ組み合わせる。そして、各々の点の情報を上記式(3)に代入し、各々の直線方程式の傾きを求める。 Subsequently, the intra prediction unit 114 generates a combination of two points. For example, the intra prediction unit 114 combines the points A0 and L0, the points A1 and L1, and the points A2 and L2, respectively. Then, the information of each point is substituted into the above equation (3) to obtain the slope of each linear equation.
 図6に、3つの組み合わせから求められた直線を示す。図6は、第2の実施形態に係るイントラ予測処理を説明するための図(1)である。図6には、点A0と点L0との組み合わせから求められた直線52と、点A1と点L1との組み合わせから求められた直線54と、点A2と点L2との組み合わせから求められた直線56とを示す。 FIG. 6 shows the straight lines obtained from the three combinations. FIG. 6 is a diagram (1) for explaining the intra prediction process according to the second embodiment. FIG. 6 shows a straight line 52 obtained from the combination of the points A0 and L0, a straight line 54 obtained from the combination of the points A1 and L1, and a straight line 56 obtained from the combination of the points A2 and L2.
 ここで、直線52の傾きをα00、直線54の傾きをα11、直線56の傾きをα22とおき、傾きの中間値αmを求める。中間値を求める手法は、第1の実施形態と共通するため省略する。 Here, let the slope of the straight line 52 be α00, the slope of the straight line 54 be α11, and the slope of the straight line 56 be α22, and obtain the median slope αm. The method of obtaining the median is the same as in the first embodiment and is therefore omitted.
 図6の例では、イントラ予測部114は、傾きα00を中間値αmとして求めたとする。この場合、イントラ予測部114は、中間値αmとして決定した傾きα00の値、及び、そのときの色情報Ym(Chroma)と輝度情報Xm(Luma)を用いて、切片であるβmを求める。具体的には、βmは、Ym-αm×Xmで求められるため、αm=α00であるとき、βm=YA0-α00×XA0となる。 In the example of FIG. 6, assume that the intra prediction unit 114 obtains the slope α00 as the median αm. In this case, the intra prediction unit 114 obtains the intercept βm using the value of the slope α00 determined as the median αm, together with the color information Ym (Chroma) and the luminance information Xm (Luma) of the corresponding point. Specifically, since βm is given by Ym − αm × Xm, when αm = α00, βm = YA0 − α00 × XA0.
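A minimal sketch of this pair-wise variant, under the same assumptions as before (hypothetical function names, (luma, chroma) points, line Chroma = α × Luma + β): the three slopes are computed, the median slope is selected, and the intercept is derived from the pair that produced it.

```python
def cclm_params_pairwise(a_points, l_points):
    """Pair A0-L0, A1-L1, A2-L2, compute each line's slope (step S204),
    select the median slope alpha_m (step S205), and derive the intercept
    beta_m from the pair that yielded the median (step S206)."""
    candidates = []
    for (xa, ya), (xl, yl) in zip(a_points, l_points):
        if xa == xl:
            continue  # degenerate pair: no finite slope, skip it
        slope = (ya - yl) / (xa - xl)
        candidates.append((slope, xa, ya))
    candidates.sort(key=lambda c: c[0])
    alpha_m, x_m, y_m = candidates[len(candidates) // 2]  # median slope
    beta_m = y_m - alpha_m * x_m  # intercept of the selected line
    return alpha_m, beta_m
```

For example, with A points (10, 25), (12, 24), (14, 50) and L points (2, 5), (4, 8), (6, 12), the three slopes are 2.5, 2.0, and 4.75; the median 2.5 is selected and the outlying 4.75 is discarded, which is the noise-robustness property discussed with reference to FIG. 7.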
 上記のように、第2の実施形態に係るイントラ予測部114は、空間(ブロック)における最も遠い3点を抽出するとともに、それぞれの直線方程式を並列に演算して、解の中間点を求める。そして、イントラ予測部114は、解の中間点に基づいて、CCLM予測における線形モデルを算出(決定)する。 As described above, the intra prediction unit 114 according to the second embodiment extracts the three furthest points in the space (block) and calculates each linear equation in parallel to obtain the intermediate point of the solution. Then, the intra prediction unit 114 calculates (determines) a linear model in CCLM prediction based on the midpoint of the solution.
 このように、第2の実施形態に係るイントラ予測部114は、複数の直線方程式の解の中間点を求めることにより、ノイズへの耐久性を上げることができる。この点について、図7を用いて説明する。 As described above, the intra prediction unit 114 according to the second embodiment can improve the durability against noise by obtaining the intermediate points of the solutions of the plurality of linear equations. This point will be described with reference to FIG. 7.
 図7は、第2の実施形態に係るイントラ予測処理を説明するための図(2)である。図7には、図6で示した点L2と点A2との組み合わせに基づく直線56に替えて、点L2と点A2´との組み合わせに基づく直線58を示す。 FIG. 7 is a diagram (2) for explaining the intra prediction process according to the second embodiment. FIG. 7 shows a straight line 58 based on the combination of the points L2 and A2′, instead of the straight line 56 based on the combination of the points L2 and A2 shown in FIG.
 図7において、点A2´は予測精度を低下させる、いわゆるノイズとなりうる点である。しかし、第2の実施形態に係る処理によれば、3つの直線の解のうち中間値が選択されるため、直線58に対応する解は除外される。このように、第2の実施形態に係る処理によれば、算出処理の過程においてノイズを除外することができるため、ノイズへの耐久性を上げることができる。 In FIG. 7, a point A2′ is a point that can become so-called noise that deteriorates the prediction accuracy. However, according to the processing according to the second embodiment, since the intermediate value is selected from the solutions of the three straight lines, the solution corresponding to the straight line 58 is excluded. As described above, according to the processing according to the second embodiment, noise can be excluded in the process of the calculation processing, and thus durability against noise can be improved.
 なお、第2の実施形態では、直線方程式の解の中間点をとる例を示したが、イントラ予測部114は、中間点以外の情報を用いて線形モデルを算出してもよい。例えば、イントラ予測部114は、複数点の組み合わせの数だけ算出された直線方程式の傾きの平均値に基づいて、線形モデルを算出してもよい。 In addition, in the second embodiment, an example in which the midpoint of the solution of the linear equation is taken has been shown, but the intra prediction unit 114 may calculate the linear model using information other than the midpoint. For example, the intra prediction unit 114 may calculate the linear model based on the average value of the slopes of the linear equation calculated for the number of combinations of a plurality of points.
 また、イントラ予測部114は、図6や図7で示した数よりも多い、もしくは少ない直線方程式を求めてもよい。 Also, the intra prediction unit 114 may obtain more or less linear equations than the numbers shown in FIGS. 6 and 7.
 例えば、イントラ予測部114は、点A0と点L0、点A0と点L1、点A0と点L2、点A1と点L0、点A1と点L1、点A1と点L2、点A2と点L0、点A2と点L1、点A2と点L2とをそれぞれ組み合わせて、9つの直線方程式の解を求めてもよい。これにより、イントラ予測部114は、CCLM予測において予測精度をさらに向上させるような線形モデルを算出することができる。 For example, the intra prediction unit 114 may combine the points A0 and L0, A0 and L1, A0 and L2, A1 and L0, A1 and L1, A1 and L2, A2 and L0, A2 and L1, and A2 and L2, respectively, to obtain the solutions of nine linear equations. This allows the intra prediction unit 114 to calculate a linear model that further improves the prediction accuracy of CCLM prediction.
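The nine-combination extension described above can be sketched with itertools.product; again the helper name and the (luma, chroma) point convention are illustrative assumptions rather than the patented implementation.

```python
from itertools import product

def all_pair_slopes(a_points, l_points):
    """Slope of the line through every (A_i, L_j) pair: 3 x 3 = 9 slopes,
    from which a median (or average) can then be taken as alpha."""
    slopes = []
    for (xa, ya), (xl, yl) in product(a_points, l_points):
        if xa != xl:  # skip pairs with no finite slope
            slopes.append((ya - yl) / (xa - xl))
    return slopes
```

Fewer pairs than the full nine can equally be passed in, matching the remark that reducing the number of solved equations speeds up the calculation.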
 あるいは、イントラ予測部114は、算出する直線方程式の解を減らすことで、より高速な算出を行うことができる。 Alternatively, the intra prediction unit 114 can perform the calculation faster by reducing the number of linear equations to be solved.
[2-2.第2の実施形態に係る情報処理の手順]
 次に、図8を用いて、第2の実施形態に係る情報処理の手順について説明する。図8は、第2の実施形態に係る情報処理の流れを示すフローチャートである。
[2-2. Information Processing Procedure According to Second Embodiment]
Next, a procedure of information processing according to the second embodiment will be described with reference to FIG. FIG. 8 is a flowchart showing the flow of information processing according to the second embodiment.
 図8に示すように、画像符号化装置100は、処理対象ブロックにおいて、輝度信号の予測値(PredL)を算出する(ステップS201)。 As illustrated in FIG. 8, the image encoding device 100 calculates a prediction value (PredL) of the luminance signal in the processing target block (step S201).
 続けて、画像符号化装置100は、CCLM予測を行うか否かを判定する(ステップS202)。CCLM予測を行わない場合(ステップS202;No)、画像符号化装置100は、その後のCCLM予測に関する処理をスキップする。一方、CCLM予測を行う場合(ステップS202;Yes)、画像符号化装置100は、処理対象ブロックの参照画素において、水平、垂直の各1点を結ぶ3つの組み合わせを特定する(ステップS203)。 Subsequently, the image encoding device 100 determines whether or not to perform CCLM prediction (step S202). When CCLM prediction is not performed (step S202; No), the image encoding device 100 skips the subsequent processing related to CCLM prediction. On the other hand, when CCLM prediction is performed (step S202; Yes), the image encoding device 100 identifies, among the reference pixels of the processing target block, three combinations each connecting one horizontal point and one vertical point (step S203).
 そして、画像符号化装置100は、それぞれの組み合わせにおける直線方程式の傾きαを算出する(ステップS204)。さらに、画像符号化装置100は、ステップS204で求めたα3点の中間値を算出する(ステップS205)。 Then, the image encoding device 100 calculates the slope α of the linear equation for each combination (step S204). Further, the image encoding device 100 calculates the median of the three α values obtained in step S204 (step S205).
 算出された中間値に基づいて、画像符号化装置100は、直線方程式における切片βを算出する(ステップS206)。その後、画像符号化装置100は、算出したα及びβの値に基づいて、処理対象ブロックにおける色差信号の予測値を算出する(ステップS207)。 Based on the calculated median, the image encoding device 100 calculates the intercept β of the linear equation (step S206). Then, based on the calculated values of α and β, the image encoding device 100 calculates the predicted value of the color difference signal for the processing target block (step S207).
(3.第3の実施形態)
 次に、第3の実施形態について説明する。第1の実施形態及び第2の実施形態では、イントラ予測部114が、抽出した複数点の中間値を求めたり、直線方程式の傾きの中間値を求めたりする例を示した。ここで、中間値の算出等の並列演算においては、ルックアップテーブル(Lookup Table)を利用することで、高速な演算が可能となる。そこで、第3の実施形態に係るイントラ予測部114は、ルックアップテーブルを用いて第1の実施形態及び第2の実施形態で説明した算出処理を行う。なお、第3の実施形態では、画像符号化装置100の機能や構成については第1の実施形態及び第2の実施形態と共通するため、説明を省略する。
(3. Third embodiment)
Next, a third embodiment will be described. In the first and second embodiments, examples have been shown in which the intra prediction unit 114 obtains the intermediate value of the extracted plural points or the intermediate value of the slope of the linear equation. Here, in parallel calculation such as calculation of an intermediate value, a high-speed calculation can be performed by using a Lookup Table. Therefore, the intra prediction unit 114 according to the third embodiment uses the look-up table to perform the calculation process described in the first and second embodiments. Note that, in the third embodiment, the functions and configuration of the image encoding device 100 are the same as those in the first and second embodiments, so description will be omitted.
 例えば、イントラ予測部114は、線形モデルを特定するための変数(具体的にはα及びβ)の算出過程において、ルックアップテーブルを参照する処理を用いて変数を算出する。例えば、第2の実施形態では、イントラ予測部114は、「空間(ブロック)において、最も遠い3点を選択して、直線方程式を並列に演算して解の中間点をとる」といった処理を行う。この場合、イントラ予測部114は、上記式(3)に3点の座標を代入して、各々にα及びβを算出する処理を繰り返す。ここで、第3の実施形態では、イントラ予測部114は、下記式(4)に示すように、ルックアップテーブルを参照して算出を行う。 For example, in the process of calculating the variables that specify the linear model (specifically, α and β), the intra prediction unit 114 calculates the variables by using a process that refers to a lookup table. For example, in the second embodiment, the intra prediction unit 114 performs processing such as "selecting the three farthest points in the space (block) and computing the linear equations in parallel to take the midpoint of the solutions". In this case, the intra prediction unit 114 repeats the process of substituting the coordinates of the three points into the above equation (3) and calculating α and β for each. In the third embodiment, by contrast, the intra prediction unit 114 performs the calculation by referring to a lookup table, as shown in the following equation (4).
Figure JPOXMLDOC01-appb-M000004
 上記式(4)において、「Table」はルックアップテーブルを示す。このように、直線方程式の変数であるα及びβの計算をルックアップテーブル参照に置き換えることにより、イントラ予測部114は、α及びβの高速な演算を比較的小さな回路規模で実現することができる。 In the above equation (4), "Table" denotes a lookup table. By replacing the calculation of the variables α and β of the linear equation with a lookup-table reference in this way, the intra prediction unit 114 can realize high-speed computation of α and β with a relatively small circuit scale.
 ここで、上記式(4)において、「yB>yA」に限定したとき、Tableは、8bitのとき「2^(8+1)×2^8=512×256」の領域が必要となる。そこで、さらに規模を小さくするため、イントラ予測部114は、量子化を行う手法を採りうる。例えば、上記式(4)を量子化すると、以下のように表せる。 Here, when equation (4) is restricted to "yB > yA", the table requires an area of 2^(8+1) × 2^8 = 512 × 256 for 8-bit values. Therefore, in order to further reduce the scale, the intra prediction unit 114 can adopt a quantization technique. For example, quantizing the above equation (4) yields the following expression.
Figure JPOXMLDOC01-appb-M000005
 上記式(5)において、「n」は量子化量に対応する整数を示す。例えば、n=16のとき、Tableは「32×16」のサイズとなり、サイズが縮小される。 In the above formula (5), “n” indicates an integer corresponding to the amount of quantization. For example, when n=16, the table has a size of “32×16”, and the size is reduced.
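Equations (4) and (5) are not reproduced here, but the table-lookup idea they describe can be illustrated with a quantized division table: the full 512 × 256 table for 8-bit differences is shrunk to 32 × 16 by quantizing both indices with n = 16. The indexing scheme below is an assumption for illustration, not the patented table layout.

```python
N = 16  # quantization amount, the "n" of equation (5)

# TABLE[dy // N][dx // N] approximates the division dy / dx.
# Unquantized, this would need 2^(8+1) x 2^8 = 512 x 256 entries;
# with n = 16 it shrinks to 32 x 16.
TABLE = [[(qy * N) / (qx * N) if qx else 0.0 for qx in range(256 // N)]
         for qy in range(512 // N)]

def alpha_lookup(dy, dx):
    """Slope dy/dx obtained by table reference instead of a divider."""
    return TABLE[dy // N][dx // N]
```

For instance, alpha_lookup(160, 80) returns exactly 2.0, while alpha_lookup(100, 40) returns 3.0 rather than the exact 2.5: the quantization trades precision for the smaller table, as the text notes.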
 上記のように、イントラ予測部114は、ルックアップテーブルを参照する手法を採用することにより、第1の実施形態及び第2の実施形態で示した算出処理をより高速に行うことができる。 As described above, the intra prediction unit 114 can perform the calculation process shown in the first and second embodiments at a higher speed by adopting the method of referring to the lookup table.
(4.変形例)
 上述した本開示に係る情報処理は、上記各実施形態以外にも種々の異なる形態にて実施されてよい。そこで、以下では、本開示に係る情報処理の変形例について説明する。
(4. Modified example)
The information processing according to the present disclosure described above may be implemented in various different forms other than the above-described embodiments. Therefore, below, a modification of the information processing according to the present disclosure will be described.
 上記各実施形態では、本開示に係る情報処理装置を画像符号化装置100として説明しているが、本開示に係る情報処理は、復号化装置で実施されてもよい。すなわち、本開示に係る情報処理装置とは、画像符号化を行う装置に限らず、画像復号化を行う装置であってもよい。 In each of the above embodiments, the information processing device according to the present disclosure has been described as the image encoding device 100, but the information processing according to the present disclosure may be performed by the decoding device. That is, the information processing device according to the present disclosure is not limited to a device that performs image encoding, and may be a device that performs image decoding.
 上記各実施形態では、抽出した複数点の中間値や平均値を算出する例を示しているが、算出する数値はこれらの例に限らず、何らかの参照となる値であれば、いずれの値を採用してもよい。 In each of the above embodiments, an example of calculating the median or average of the extracted points is shown, but the numerical value to be calculated is not limited to these examples, and any value that serves as some kind of reference may be adopted.
 実施形態では、画像符号化装置100を一体の装置として説明しているが、画像符号化装置100は複数の装置によって実現されてもよい。 In the embodiment, the image encoding device 100 is described as an integrated device, but the image encoding device 100 may be realized by a plurality of devices.
(5.その他の実施形態)
 上述した各実施形態に係る処理は、上記各実施形態以外にも種々の異なる形態にて実施されてよい。
(5. Other embodiments)
The processing according to each of the above-described embodiments may be performed in various different forms other than each of the above-described embodiments.
 また、上記各実施形態において説明した各処理のうち、自動的に行われるものとして説明した処理の全部または一部を手動的に行うこともでき、あるいは、手動的に行われるものとして説明した処理の全部または一部を公知の方法で自動的に行うこともできる。この他、上記文書中や図面中で示した処理手順、具体的名称、各種のデータやパラメータを含む情報については、特記する場合を除いて任意に変更することができる。例えば、各図に示した各種情報は、図示した情報に限られない。 Further, among the processes described in the above embodiments, all or part of the processes described as being performed automatically can also be performed manually, and all or part of the processes described as being performed manually can also be performed automatically by a known method. In addition, the processing procedures, specific names, and information including various data and parameters shown in the above description and drawings can be arbitrarily changed unless otherwise specified. For example, the various pieces of information shown in each drawing are not limited to the illustrated information.
 また、図示した各装置の各構成要素は機能概念的なものであり、必ずしも物理的に図示の如く構成されていることを要しない。すなわち、各装置の分散・統合の具体的形態は図示のものに限られず、その全部または一部を、各種の負荷や使用状況などに応じて、任意の単位で機能的または物理的に分散・統合して構成することができる。 Further, each component of each illustrated device is functionally conceptual and does not necessarily have to be physically configured as illustrated. That is, the specific form of distribution and integration of each device is not limited to that illustrated, and all or part of each device can be functionally or physically distributed or integrated in arbitrary units according to various loads, usage conditions, and the like.
 また、上述してきた各実施形態及び変形例は、処理内容を矛盾させない範囲で適宜組み合わせることが可能である。 Also, the above-described respective embodiments and modified examples can be appropriately combined within a range in which the processing content is not inconsistent.
 また、本明細書に記載された効果はあくまで例示であって限定されるものでは無く、他の効果があってもよい。 Also, the effects described in this specification are merely examples and are not limited, and there may be other effects.
(6.本開示に係る情報処理装置の効果)
 上述してきたように、本開示に係る情報処理装置(実施形態では画像符号化装置100)は、イントラ予測部(実施形態ではイントラ予測部114)を備える。イントラ予測部は、画像内で規定されたブロックに隣接する参照画素から複数点を抽出し、抽出した複数点の輝度情報及び色情報に基づいて、ブロックに存在する画素の色情報を予測する線形モデルを算出する。
(6. Effect of information processing apparatus according to the present disclosure)
As described above, the information processing device according to the present disclosure (the image encoding device 100 in the embodiment) includes the intra prediction unit (intra prediction unit 114 in the embodiment). The intra-prediction unit extracts a plurality of points from reference pixels adjacent to the block defined in the image, and linearly predicts color information of pixels existing in the block based on the brightness information and the color information of the extracted points. Calculate the model.
 このように、本開示に係る情報処理装置は、水平、垂直の全点をサーチして最大値と最小値を算出する従来手法と比較して、輝度情報等を計算する座標数を削減することができるので、高速な計算を行うことができ、全体の処理速度を向上させることができる。 As described above, the information processing apparatus according to the present disclosure reduces the number of coordinates for calculating brightness information and the like as compared with the conventional method of searching all horizontal and vertical points and calculating the maximum value and the minimum value. Therefore, high-speed calculation can be performed and the overall processing speed can be improved.
 また、イントラ予測部は、ブロックのうち垂直方向に隣接する参照画素から3点を抽出するとともに、ブロックのうち水平方向に隣接する参照画素から3点を抽出し、各々3点の輝度情報及び色情報における中間点を特定し、特定した中間点に基づいて、線形モデルを算出する。これにより、情報処理装置は、ノイズへの耐久性を上げつつ、処理速度を向上させることができる。 In addition, the intra prediction unit extracts three points from the reference pixels vertically adjacent to the block and three points from the reference pixels horizontally adjacent to the block, identifies the intermediate point in the luminance information and color information of each set of three points, and calculates the linear model based on the identified intermediate points. As a result, the information processing device can improve the processing speed while increasing the durability against noise.
 また、イントラ予測部は、ブロックのうち垂直方向に隣接する参照画素から連続した3点を抽出するとともに、ブロックのうち水平方向に隣接する参照画素から連続した3点を抽出し、各々3点の輝度情報及び色情報における中間点を特定し、特定した中間点に基づいて、線形モデルを算出する。これにより、情報処理装置は、ノイズへの耐久性を上げつつ、処理速度を向上させることができる。 In addition, the intra prediction unit extracts three consecutive points from the reference pixels that are vertically adjacent to each other in the block, and also extracts three consecutive points from the reference pixels that are horizontally adjacent to each other from the block. An intermediate point in the luminance information and color information is specified, and a linear model is calculated based on the specified intermediate point. As a result, the information processing apparatus can improve the processing speed while increasing the durability against noise.
 イントラ予測部は、ブロックのうち垂直方向に隣接する参照画素から、連続した3点であってブロックの基点とは異なる一方の端部に位置する3点を抽出するとともに、ブロックのうち水平方向に隣接する参照画素から、連続した3点であってブロックの基点とは異なる一方の端部に位置する3点を抽出し、各々3点の輝度情報及び色情報における中間点を特定し、特定した中間点に基づいて、線形モデルを算出する。このように、情報処理装置は、空間(ブロック)の特徴を最も表すと推定される3点から線形モデルを生成するので、予測精度をより向上させることができる。 The intra prediction unit extracts, from the reference pixels vertically adjacent to the block, three consecutive points located at the end opposite the base point of the block, and extracts, from the reference pixels horizontally adjacent to the block, three consecutive points located at the end opposite the base point of the block; it then identifies the intermediate point in the luminance information and color information of each set of three points and calculates the linear model based on the identified intermediate points. In this way, the information processing device generates the linear model from the three points estimated to best represent the characteristics of the space (block), so that the prediction accuracy can be further improved.
 また、イントラ予測部は、抽出した複数点の輝度情報及び色情報の平均値に基づいて、線形モデルを算出する。これにより、情報処理装置は、ノイズへの耐久性を上げつつ、処理速度を向上させることができる。 The intra prediction unit also calculates a linear model based on the average values of the extracted luminance information and color information of multiple points. As a result, the information processing device can improve the processing speed while increasing the durability against noise.
 また、イントラ予測部は、ブロックのうち垂直方向に隣接する参照画素から抽出した複数点のいずれか1点と、ブロックのうち水平方向に隣接する参照画素から抽出した複数点のいずれか1点とから求められる直線方程式を抽出した複数点の組み合わせの数だけ算出し、算出した直線方程式の傾きの比較に基づいて、線形モデルを算出する。これにより、情報処理装置は、よりノイズへの耐久性を向上させた予測処理を行うことができる。 In addition, the intra prediction unit calculates, for each combination of the extracted points, a linear equation determined from one of the points extracted from the reference pixels vertically adjacent to the block and one of the points extracted from the reference pixels horizontally adjacent to the block, and calculates the linear model based on a comparison of the slopes of the calculated linear equations. As a result, the information processing device can perform prediction processing with further improved durability against noise.
 また、イントラ予測部は、複数点の組み合わせの数だけ算出された直線方程式の傾きの中間値に基づいて、線形モデルを算出する。これにより、情報処理装置は、よりノイズへの耐久性を向上させた予測処理を行うことができる。 In addition, the intra prediction unit calculates a linear model based on the intermediate value of the slope of the linear equation calculated for the number of combinations of multiple points. As a result, the information processing device can perform the prediction process with improved durability against noise.
 また、イントラ予測部は、複数点の組み合わせの数だけ算出された直線方程式の傾きの平均値に基づいて、線形モデルを算出する。これにより、情報処理装置は、よりノイズへの耐久性を向上させた予測処理を行うことができる。 Also, the intra prediction unit calculates a linear model based on the average value of the slopes of the linear equation calculated for the number of combinations of multiple points. As a result, the information processing device can perform the prediction process with improved durability against noise.
 また、イントラ予測部は、ブロックのうち垂直方向に隣接する参照画素から、連続した3点であってブロックの基点とは異なる一方の端部に位置する3点を抽出するとともに、ブロックのうち水平方向に隣接する参照画素から、連続した3点であってブロックの基点とは異なる一方の端部に位置する3点を抽出し、各々3点を組み合わせた3つの直線方程式を算出し、算出した直線方程式の傾きの比較に基づいて、線形モデルを算出する。このように、情報処理装置は、空間(ブロック)の特徴を最も表すと推定される3点の組み合わせから線形モデルを生成するので、予測精度をより向上させることができる。 In addition, the intra prediction unit extracts, from the reference pixels vertically adjacent to the block, three consecutive points located at the end opposite the base point of the block, and extracts, from the reference pixels horizontally adjacent to the block, three consecutive points located at the end opposite the base point of the block; it then calculates three linear equations, each combining one point from each set of three, and calculates the linear model based on a comparison of the slopes of the calculated linear equations. In this way, the information processing device generates the linear model from combinations of the three points estimated to best represent the characteristics of the space (block), so that the prediction accuracy can be further improved.
 また、イントラ予測部は、線形モデルを特定するための変数の算出過程において、ルックアップテーブル(Lookup Table)を参照する処理を用いて変数を算出する。これにより、情報処理装置は、並列演算等の演算処理を大幅に向上させることができる。 Also, the intra-prediction unit calculates variables using a process that refers to a lookup table (Lookup Table) in the variable calculation process for specifying the linear model. As a result, the information processing apparatus can significantly improve arithmetic processing such as parallel arithmetic.
(7.ハードウェア構成)
 上述してきた各実施形態に係る画像符号化装置100は、例えば図9に示すような構成のコンピュータ1000によって実現される。図9は、本開示に係る画像符号化装置100の機能を実現するコンピュータ1000の一例を示すハードウェア構成図である。コンピュータ1000は、CPU1100、RAM1200、ROM(Read Only Memory)1300、HDD(Hard Disk Drive)1400、通信インターフェイス1500、及び入出力インターフェイス1600を有する。コンピュータ1000の各部は、バス1050によって接続される。
(7. Hardware configuration)
The image coding apparatus 100 according to each of the embodiments described above is realized by, for example, a computer 1000 having a configuration illustrated in FIG. 9. FIG. 9 is a hardware configuration diagram illustrating an example of a computer 1000 that realizes the functions of the image encoding device 100 according to the present disclosure. The computer 1000 has a CPU 1100, a RAM 1200, a ROM (Read Only Memory) 1300, an HDD (Hard Disk Drive) 1400, a communication interface 1500, and an input/output interface 1600. The respective units of the computer 1000 are connected by a bus 1050.
 CPU1100は、ROM1300又はHDD1400に格納されたプログラムに基づいて動作し、各部の制御を行う。例えば、CPU1100は、ROM1300又はHDD1400に格納されたプログラムをRAM1200に展開し、各種プログラムに対応した処理を実行する。 The CPU 1100 operates based on a program stored in the ROM 1300 or the HDD 1400, and controls each part. For example, the CPU 1100 expands a program stored in the ROM 1300 or the HDD 1400 into the RAM 1200 and executes processing corresponding to various programs.
 ROM1300は、コンピュータ1000の起動時にCPU1100によって実行されるBIOS(Basic Input Output System)等のブートプログラムや、コンピュータ1000のハードウェアに依存するプログラム等を格納する。 The ROM 1300 stores a boot program such as a BIOS (Basic Input Output System) executed by the CPU 1100 when the computer 1000 starts up, a program dependent on the hardware of the computer 1000, and the like.
 HDD1400は、CPU1100によって実行されるプログラム、及び、かかるプログラムによって使用されるデータ等を非一時的に記録する、コンピュータが読み取り可能な記録媒体である。具体的には、HDD1400は、プログラムデータ1450の一例である本開示に係る情報処理プログラムを記録する記録媒体である。 The HDD 1400 is a computer-readable recording medium that non-temporarily records a program executed by the CPU 1100, data used by the program, and the like. Specifically, the HDD 1400 is a recording medium that records an information processing program according to the present disclosure, which is an example of the program data 1450.
 通信インターフェイス1500は、コンピュータ1000が外部ネットワーク1550(例えばインターネット)と接続するためのインターフェイスである。例えば、CPU1100は、通信インターフェイス1500を介して、他の機器からデータを受信したり、CPU1100が生成したデータを他の機器へ送信したりする。 The communication interface 1500 is an interface for connecting the computer 1000 to an external network 1550 (for example, the Internet). For example, the CPU 1100 receives data from another device or transmits the data generated by the CPU 1100 to another device via the communication interface 1500.
 入出力インターフェイス1600は、入出力デバイス1650とコンピュータ1000とを接続するためのインターフェイスである。例えば、CPU1100は、入出力インターフェイス1600を介して、キーボードやマウス等の入力デバイスからデータを受信する。また、CPU1100は、入出力インターフェイス1600を介して、ディスプレイやスピーカーやプリンタ等の出力デバイスにデータを送信する。また、入出力インターフェイス1600は、所定の記録媒体(メディア)に記録されたプログラム等を読み取るメディアインターフェイスとして機能してもよい。メディアとは、例えばDVD(Digital Versatile Disc)、PD(Phase change rewritable Disk)等の光学記録媒体、MO(Magneto-Optical disk)等の光磁気記録媒体、テープ媒体、磁気記録媒体、または半導体メモリ等である。 The input/output interface 1600 is an interface for connecting the input/output device 1650 and the computer 1000. For example, the CPU 1100 receives data from an input device such as a keyboard or a mouse via the input/output interface 1600. The CPU 1100 also transmits data to an output device such as a display, a speaker, or a printer via the input/output interface 1600. The input/output interface 1600 may also function as a media interface that reads a program or the like recorded on a predetermined recording medium. Examples of the media include optical recording media such as a DVD (Digital Versatile Disc) and a PD (Phase change rewritable Disk), magneto-optical recording media such as an MO (Magneto-Optical disk), tape media, magnetic recording media, and semiconductor memories.
 例えば、コンピュータ1000が実施形態に係る画像符号化装置100として機能する場合、コンピュータ1000のCPU1100は、RAM1200上にロードされた情報処理プログラムを実行することにより、イントラ予測部114等の機能を実現する。また、HDD1400には、本開示に係る情報処理プログラムや、情報処理に用いる各種データが格納される。なお、CPU1100は、プログラムデータ1450をHDD1400から読み取って実行するが、他の例として、外部ネットワーク1550を介して、他の装置からこれらのプログラムを取得してもよい。 For example, when the computer 1000 functions as the image encoding device 100 according to the embodiments, the CPU 1100 of the computer 1000 realizes the functions of the intra prediction unit 114 and the like by executing the information processing program loaded on the RAM 1200. The HDD 1400 also stores the information processing program according to the present disclosure and various data used for the information processing. Note that the CPU 1100 reads the program data 1450 from the HDD 1400 and executes it, but as another example, these programs may be acquired from another device via the external network 1550.
 Note that the present technology may also be configured as below.
(1)
 An information processing device comprising:
 an intra prediction unit that extracts a plurality of points from reference pixels adjacent to a block defined in an image and, based on luminance information and color information of the extracted points, calculates a linear model for predicting color information of pixels present in the block.
(2)
 The information processing device according to (1), wherein the intra prediction unit
 extracts three points from the reference pixels vertically adjacent to the block and three points from the reference pixels horizontally adjacent to the block, identifies an intermediate point in the luminance information and the color information of each set of three points, and calculates the linear model based on the identified intermediate points.
(3)
 The information processing device according to (1) or (2), wherein the intra prediction unit
 extracts three consecutive points from the reference pixels vertically adjacent to the block and three consecutive points from the reference pixels horizontally adjacent to the block, identifies an intermediate point in the luminance information and the color information of each set of three points, and calculates the linear model based on the identified intermediate points.
(4)
 The information processing device according to any one of (1) to (3), wherein the intra prediction unit
 extracts, from the reference pixels vertically adjacent to the block, three consecutive points located at the end opposite the base point of the block, and extracts, from the reference pixels horizontally adjacent to the block, three consecutive points located at the end opposite the base point of the block, identifies an intermediate point in the luminance information and the color information of each set of three points, and calculates the linear model based on the identified intermediate points.
(5)
 The information processing device according to any one of (1) to (4), wherein the intra prediction unit
 calculates the linear model based on average values of the luminance information and the color information of the extracted points.
(6)
 The information processing device according to (1), wherein the intra prediction unit
 calculates, for each combination of the extracted points, a straight-line equation determined by one of the points extracted from the reference pixels vertically adjacent to the block and one of the points extracted from the reference pixels horizontally adjacent to the block, and calculates the linear model based on a comparison of the slopes of the calculated straight-line equations.
(7)
 The information processing device according to (6), wherein the intra prediction unit
 calculates the linear model based on the median value of the slopes of the straight-line equations calculated for the combinations of the points.
(8)
 The information processing device according to (6) or (7), wherein the intra prediction unit
 calculates the linear model based on the average value of the slopes of the straight-line equations calculated for the combinations of the points.
(9)
 The information processing device according to any one of (6) to (8), wherein the intra prediction unit
 extracts, from the reference pixels vertically adjacent to the block, three consecutive points located at the end opposite the base point of the block, and extracts, from the reference pixels horizontally adjacent to the block, three consecutive points located at the end opposite the base point of the block, calculates three of the straight-line equations by combining the respective sets of three points, and calculates the linear model based on a comparison of the slopes of the calculated straight-line equations.
(10)
 The information processing device according to any one of (1) to (9), wherein the intra prediction unit,
 in the process of calculating a variable for specifying the linear model, calculates the variable by using a process that refers to a lookup table.
(11)
 An information processing method in which a computer
 extracts a plurality of points from reference pixels adjacent to a block defined in an image and, based on luminance information and color information of the extracted points, calculates a linear model for predicting color information of pixels present in the block.
(12)
 An information processing program for causing a computer to function as
 an intra prediction unit that extracts a plurality of points from reference pixels adjacent to a block defined in an image and, based on luminance information and color information of the extracted points, calculates a linear model for predicting color information of pixels present in the block.
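As an illustration only (not part of the publication), the derivation described in configurations (1) to (5) can be sketched in code. The function names, the use of the median as the "intermediate point", the positions of the sampled reference points, the floating-point arithmetic, and the fallback for flat luma are all assumptions made for this sketch.

```python
# Hypothetical sketch of configurations (2)-(5): derive a luma-to-chroma
# linear model C = a * L + b from a few reference points next to a block.
from statistics import median, mean

def linear_model_from_neighbors(top_luma, top_chroma, left_luma, left_chroma,
                                use_average=False):
    """top_*/left_*: reference samples above and to the left of the block."""
    # Three consecutive points at the far end of each neighboring line
    # (the end away from the block's base point), as in configuration (4).
    tl, tc = top_luma[-3:], top_chroma[-3:]
    ll, lc = left_luma[-3:], left_chroma[-3:]
    pick = mean if use_average else median       # (5) vs (2)-(4)
    l0, c0 = pick(tl), pick(tc)                  # intermediate point above
    l1, c1 = pick(ll), pick(lc)                  # intermediate point left
    if l0 == l1:                                 # degenerate: flat luma
        return 0.0, float(pick([c0, c1]))
    a = (c0 - c1) / (l0 - l1)                    # slope of the linear model
    b = c0 - a * l0                              # offset
    return a, b

def predict_chroma(luma_block, a, b):
    # Apply C = a * L + b to every (downsampled) luma sample of the block.
    return [[a * l + b for l in row] for row in luma_block]
```

With collinear reference samples the sketch recovers the underlying line exactly; in practice a codec would work in clipped fixed-point arithmetic.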
 100 image encoding device
 101 A/D conversion unit
 102 reordering buffer
 103 first arithmetic unit
 104 orthogonal transform unit
 105 quantization unit
 106 encoding unit
 107 accumulation buffer
 108 inverse quantization unit
 109 inverse orthogonal transform unit
 110 second arithmetic unit
 111 filter
 112 frame memory
 113 switch
 114 intra prediction unit
 1141 luminance/chrominance separation unit
 1142 luminance signal intra prediction unit
 1143 luminance signal downsampling unit
 1144 chrominance signal intra prediction unit
 115 inter prediction unit
 116 predicted image selection unit
 117 rate control unit
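The slope-comparison variants of the derivation (configurations (6) to (9) above) can likewise be sketched. This is an illustration only: the all-pairs combination of reference points, the median/mean choice, and the way the offset is anchored through the median reference sample are assumptions, not details fixed by the publication.

```python
# Hypothetical sketch of configurations (6)-(9): build one straight line per
# (vertical point, horizontal point) combination and combine their slopes.
from statistics import median

def model_from_slopes(v_pts, h_pts, use_mean=False):
    """v_pts / h_pts: (luma, chroma) reference points from the vertically and
    horizontally adjacent reference pixels (e.g. three from each side)."""
    slopes = [(cv - ch) / (lv - lh)
              for lv, cv in v_pts
              for lh, ch in h_pts
              if lv != lh]                      # skip degenerate pairs
    if not slopes:                              # all luma values equal
        return 0.0, float(median([c for _, c in v_pts + h_pts]))
    # Compare/combine the slopes: median per (7), mean per (8).
    a = sum(slopes) / len(slopes) if use_mean else median(slopes)
    # Assumed anchoring: pass the line through the median reference sample.
    l_mid = median([l for l, _ in v_pts + h_pts])
    c_mid = median([c for _, c in v_pts + h_pts])
    return a, c_mid - a * l_mid
```

Configuration (9) restricts this to three equations built from three points per side; the sketch above follows the more general per-combination wording of (6).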

Claims (12)

  1.  An information processing device comprising:
      an intra prediction unit that extracts a plurality of points from reference pixels adjacent to a block defined in an image and, based on luminance information and color information of the extracted points, calculates a linear model for predicting color information of pixels present in the block.
  2.  The information processing device according to claim 1, wherein the intra prediction unit
      extracts three points from the reference pixels vertically adjacent to the block and three points from the reference pixels horizontally adjacent to the block, identifies an intermediate point in the luminance information and the color information of each set of three points, and calculates the linear model based on the identified intermediate points.
  3.  The information processing device according to claim 1, wherein the intra prediction unit
      extracts three consecutive points from the reference pixels vertically adjacent to the block and three consecutive points from the reference pixels horizontally adjacent to the block, identifies an intermediate point in the luminance information and the color information of each set of three points, and calculates the linear model based on the identified intermediate points.
  4.  The information processing device according to claim 1, wherein the intra prediction unit
      extracts, from the reference pixels vertically adjacent to the block, three consecutive points located at the end opposite the base point of the block, and extracts, from the reference pixels horizontally adjacent to the block, three consecutive points located at the end opposite the base point of the block, identifies an intermediate point in the luminance information and the color information of each set of three points, and calculates the linear model based on the identified intermediate points.
  5.  The information processing device according to claim 1, wherein the intra prediction unit
      calculates the linear model based on average values of the luminance information and the color information of the extracted points.
  6.  The information processing device according to claim 1, wherein the intra prediction unit
      calculates, for each combination of the extracted points, a straight-line equation determined by one of the points extracted from the reference pixels vertically adjacent to the block and one of the points extracted from the reference pixels horizontally adjacent to the block, and calculates the linear model based on a comparison of the slopes of the calculated straight-line equations.
  7.  The information processing device according to claim 6, wherein the intra prediction unit
      calculates the linear model based on the median value of the slopes of the straight-line equations calculated for the combinations of the points.
  8.  The information processing device according to claim 6, wherein the intra prediction unit
      calculates the linear model based on the average value of the slopes of the straight-line equations calculated for the combinations of the points.
  9.  The information processing device according to claim 6, wherein the intra prediction unit
      extracts, from the reference pixels vertically adjacent to the block, three consecutive points located at the end opposite the base point of the block, and extracts, from the reference pixels horizontally adjacent to the block, three consecutive points located at the end opposite the base point of the block, calculates three of the straight-line equations by combining the respective sets of three points, and calculates the linear model based on a comparison of the slopes of the calculated straight-line equations.
  10.  The information processing device according to claim 1, wherein the intra prediction unit,
      in the process of calculating a variable for specifying the linear model, calculates the variable by using a process that refers to a lookup table.
  11.  An information processing method in which a computer
      extracts a plurality of points from reference pixels adjacent to a block defined in an image and, based on luminance information and color information of the extracted points, calculates a linear model for predicting color information of pixels present in the block.
  12.  An information processing program for causing a computer to function as
      an intra prediction unit that extracts a plurality of points from reference pixels adjacent to a block defined in an image and, based on luminance information and color information of the extracted points, calculates a linear model for predicting color information of pixels present in the block.
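For illustration only: replacing the division needed for the model slope with a table of scaled reciprocals is one common way to realize the lookup-table-based variable calculation of claim 10, as done in fixed-point CCLM-style implementations. The table size, the fixed-point precision, and the rounding below are assumptions, not details taken from the publication.

```python
# Hypothetical sketch of claim 10: compute the slope a = dc / dl without a
# division at run time by looking up a scaled reciprocal of dl in a table.
SHIFT = 16                                   # fixed-point precision (assumed)
MAX_DIFF = 512                               # largest luma spread handled (assumed)
# RECIP[d] ~ (1 << SHIFT) / d, precomputed once; RECIP[0] is a placeholder.
RECIP = [0] + [(1 << SHIFT) // d for d in range(1, MAX_DIFF + 1)]

def slope_via_lut(dc, dl):
    """Approximate dc / dl using the reciprocal lookup table (|dl| <= MAX_DIFF)."""
    if dl == 0:
        return 0                             # degenerate: flat luma, slope 0
    sign = -1 if (dc < 0) != (dl < 0) else 1
    dc, dl = abs(dc), abs(dl)
    a_fixed = dc * RECIP[dl]                 # slope in Q16 fixed point
    return sign * a_fixed / (1 << SHIFT)     # back to a float for comparison
```

The lookup result tracks the exact quotient to within the table's precision, which is why such tables are attractive in hardware: the per-block cost becomes one multiply and one shift instead of a division.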
PCT/JP2019/050163 2018-12-28 2019-12-20 Information processing device, information processing method, and information processing program WO2020137904A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018247928 2018-12-28
JP2018-247928 2018-12-28

Publications (1)

Publication Number Publication Date
WO2020137904A1 true WO2020137904A1 (en) 2020-07-02

Family

ID=71127695

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/050163 WO2020137904A1 (en) 2018-12-28 2019-12-20 Information processing device, information processing method, and information processing program

Country Status (1)

Country Link
WO (1) WO2020137904A1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018053293A1 (en) * 2016-09-15 2018-03-22 Qualcomm Incorporated Linear model chroma intra prediction for video coding

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018053293A1 (en) * 2016-09-15 2018-03-22 Qualcomm Incorporated Linear model chroma intra prediction for video coding

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Guillaume Laroche, Jonathan Taquet, Christophe Gisquet, Patrice Onno: "CE3-5.1: On cross-component linear model simplification", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, no. L0191, 12 October 2018 (2018-10-12), Macao, CN, pages 1-4, XP030193702 *
Jangwon Choi, Jim Heo, Sunmi Yoo, Ling Li, Jungah Choi, Jaehyun Lim, Seung Hwan Kim: "CE3-related: Reduced number of reference samples for CCLM parameter calculation", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, no. L0138, 3 October 2018 (2018-10-03), Macao, CN, pages 1-5, XP030193668 *
Juliana Hsu, Mei Guo, Xun Guo, Shawmin Lei: "Simplified Parameter Calculation for Chroma LM Mode", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, no. I0166, 7 May 2012 (2012-05-07), Geneva, CH, pages 1-6, XP030233808 *
Sheng-Po Wang, Po-Han Lin, Chang-Hao Yau, Chung-Lung Lin, Ching-Chief Lin: "CE3: Adaptive multiple cross-component linear model (Test 5.9.1)", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting, no. L0419, 12 October 2018 (2018-10-12), Macao, CN, pages 1-3, XP030191154 *
Xiang Ma, Fan Mu, Haitao Yang, Jianle Chen: "CE3-related: Classification-based mean value for CCLM coefficients derivation", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, no. L0342_r2, 12 October 2018 (2018-10-12), Macao, CN, pages 1-5, XP030194320 *

Similar Documents

Publication Publication Date Title
US11889107B2 (en) Image encoding method and image decoding method
JP7422841B2 (en) Intra prediction-based image coding method and device using MPM list
US12075083B2 (en) Image encoding and decoding method with merge flag and motion vectors
KR102283407B1 (en) Innovations in block vector prediction and estimation of reconstructed sample values within an overlap area
JP6285020B2 (en) Inter-component filtering
US9930356B2 (en) Optimized image decoding device and method for a predictive encoded bit stream
JP5289440B2 (en) Image encoding device, image decoding device, image encoding method, and image decoding method
KR101581100B1 (en) Method for managing a reference picture list, and apparatus using same
KR102010160B1 (en) Image processing device, image processing method and recording medium
TWI617180B (en) Method and apparatus for scalable video encoding based on coding units of tree structure, method and apparatus for scalable video decoding based on coding units of tree structure
WO2012087034A2 (en) Intra prediction method and apparatus using the method
KR20170021337A (en) Encoder decisions based on results of hash-based block matching
EP1833256B1 (en) Selection of encoded data, setting of encoded data, creation of recoded data, and recoding method and device
KR20170063895A (en) Hash-based encoder decisions for video coding
JPWO2008120577A1 (en) Image encoding and decoding method and apparatus
JP2010011075A (en) Method and apparatus for encoding and decoding moving image
CN114270836A (en) Color conversion for video encoding and decoding
JP4383240B2 (en) Intra-screen predictive coding apparatus, method thereof and program thereof
KR101601854B1 (en) Spatial prediction apparatus and method video encoding apparatus and method and video decoding apparatus and method
JP2024508303A (en) Method and electronic device for decoding inter-predicted video blocks of a video stream
JP7030246B2 (en) Intra predictor, image decoder, and program
WO2020137904A1 (en) Information processing device, information processing method, and information processing program
JP2018037936A (en) Image encoder and image decoder
JP3770466B2 (en) Image coding rate conversion apparatus and image coding rate conversion method
KR102038818B1 (en) Intra prediction method and apparatus using the method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19905008

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19905008

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP