US20220070447A1 - Image processing device and method - Google Patents

Image processing device and method

Info

Publication number
US20220070447A1
Authority
US
United States
Prior art keywords
prediction, unit, bio, processing, size
Prior art date
Legal status (assumption, not a legal conclusion)
Abandoned
Application number
US17/312,405
Other languages
English (en)
Inventor
Sinsuke HISHINUMA
Kenji Kondo
Current Assignee
Sony Group Corp
Original Assignee
Sony Group Corp
Priority date
Filing date
Publication date
Application filed by Sony Group Corp filed Critical Sony Group Corp
Assigned to Sony Group Corporation. Assignment of assignors' interest (see document for details). Assignors: HISHINUMA, SINSUKE; KONDO, KENJI
Publication of US20220070447A1 publication Critical patent/US20220070447A1/en

Classifications

    • H04N19/109: Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
    • H04N19/433: Hardware specially adapted for motion estimation or compensation characterised by techniques for memory access
    • H04N19/119: Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N19/176: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/567: Motion estimation based on rate distortion criteria
    • H04N19/577: Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
    • H04N19/436: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, using parallelised computational arrangements

Definitions

  • The present technology relates to an image processing device and method, and in particular to an image processing device and method that achieve a reduction in buffer size.
  • The VVC standard, a next-generation codec, has been developed as a successor to AVC/H.264 and HEVC/H.265.
  • To structure pipelines efficiently, VPDUs (Virtual Pipeline Data Units) have been introduced.
  • the VPDU size is a buffer size that allows smooth processing on each pipeline stage.
  • the VPDU size is often set to the maximum size of TUs (Transform Units).
  • In VVC, the maximum TU size is 64×64, and the same is assumed to hold true for VPDUs.
  • one CU corresponds to one PU, and hence inter prediction processing is required to be performed on PUs larger than VPDUs. Even in this case, the PU can be partitioned into virtual vPUs (virtual PUs) to be processed.
  • Until BIO (bi-directional optical flow), described later, was employed, VVC was consistent with VPDUs and could be implemented with reasonable HW resources.
  • the optical flow method is an image processing method for detecting the motion of an object in a moving image, to thereby estimate a direction in which the object is to move in a certain period of time.
  • Codec inter prediction employing the optical flow method as an option enhances the encoding efficiency.
  • the term “BIO” is based on the fact that the optical flow method is used in Bi prediction (bidirectional prediction) in which temporally continuous frames are referred to in units of frames (see NPL 1).
  • In normal Bi prediction, difference MVs are encoded since there are differences between the optimal MVs and the predicted MVs (PMVs).
  • In BIO, a gradient (G) and a velocity (V) are obtained by the optical flow method for the prediction blocks generated with the predicted MVs (PMVs), so that a result equivalent to that of normal Bi prediction is obtained.
  • As a result, the encoding of difference MVs (MVDs) can be rendered unnecessary or eliminated, so that the encoding efficiency is enhanced (see NPL 2).
  • In BIO, a threshold determination using the SAD (Sum of Absolute Difference) is introduced: the SAD of the L0 and L1 prediction blocks is calculated for the entire PU and compared with the threshold, thereby determining whether or not to apply BIO processing, and the processing then branches.
  • Due to this BIO processing, it is difficult, in a case where inter prediction is performed on a PU larger than the VPDU, to virtually partition the PU into a plurality of vPUs.
  • the present technology has been made in view of such circumstances, and achieves a reduction in a buffer size.
  • an image processing device including a control unit configured to partition a unit of processing into partitioned processing units each of which corresponds to a VPDU size or is equal to or smaller than the VPDU size, the unit of processing being used for calculation of a cost that is used for determining whether or not to perform bidirectional prediction; and a determination unit configured to make the determination by using the cost calculated based on the partitioned processing units.
  • a unit of processing is partitioned into partitioned processing units each of which corresponds to a VPDU size or is equal to or smaller than the VPDU size, the unit of processing being used for calculation of a cost that is used for determining whether or not to perform bidirectional prediction, and the determination is made by using the cost calculated based on the partitioned processing units.
  • FIG. 1 is a diagram illustrating an example in which a pipeline is structured without the introduction of VPDUs.
  • FIG. 2 is a flowchart illustrating Bi prediction that is one of inter PU processing in the case of FIG. 1 .
  • FIG. 3 is a diagram illustrating an example in which a pipeline is efficiently structured with the introduction of VPDUs.
  • FIG. 4 is a flowchart illustrating Bi prediction that is one of inter PU processing in the case of FIG. 3 .
  • FIG. 5 is a diagram illustrating exemplary normal Bi prediction.
  • FIG. 6 is a diagram illustrating exemplary Bi prediction employing BIO.
  • FIG. 7 is a diagram illustrating exemplary 2-block partition in normal Bi prediction.
  • FIG. 8 is a diagram illustrating exemplary 2-block partition in Bi prediction employing BIO.
  • FIG. 9 is a block diagram illustrating a configuration example of an encoding device according to an embodiment of the present technology.
  • FIG. 10 is a flowchart illustrating details of encoding processing by the encoding device.
  • FIG. 11 is a flowchart illustrating the details of the encoding processing by the encoding device, which is a continuation of FIG. 10 .
  • FIG. 12 is a block diagram illustrating a configuration example of an embodiment of a decoding device to which the present disclosure is applied.
  • FIG. 13 is a flowchart illustrating details of decoding processing by the decoding device.
  • FIG. 14 is a block diagram illustrating a configuration example of an inter prediction unit.
  • FIG. 15 is a flowchart illustrating related-art BIO-included Bi prediction.
  • FIG. 16 is a flowchart illustrating the related-art BIO-included Bi prediction, which is a continuation of FIG. 15 .
  • FIG. 17 is a flowchart illustrating BIO-included Bi prediction according to a first embodiment of the present technology.
  • FIG. 18 is a flowchart illustrating the BIO-included Bi prediction according to the first embodiment of the present technology, which is a continuation of FIG. 17 .
  • FIG. 19 is a diagram illustrating correspondences between PU size, vPU number, and processing position and size.
  • FIG. 20 is a diagram illustrating comparisons between related-art operation and operation according to the first embodiment of the present technology.
  • FIG. 21 is a diagram illustrating comparisons between the related-art operation and the operation according to the first embodiment of the present technology.
  • FIG. 22 is a diagram illustrating an example in which in a case where PUs are larger than VPDUs, a BIO determination result for a vPU number of 0 is also used for another vPU.
  • FIG. 23 is a diagram illustrating an example in which in a case where PUs are larger than VPDUs, a BIO determination result for the vPU number of 0 is also used for the other vPU.
  • FIG. 24 is a flowchart illustrating BIO-included Bi prediction in the cases of FIG. 22 and FIG. 23 .
  • FIG. 25 is a flowchart illustrating the BIO-included Bi prediction in the cases of FIG. 22 and FIG. 23 , which is a continuation of FIG. 24 .
  • FIG. 26 is a diagram illustrating an example in which whether to apply BIO is determined with a partial SAD value in each vPU.
  • FIG. 27 is another diagram illustrating an example in which whether to apply BIO is determined with a partial SAD value in each vPU.
  • FIG. 28 is a flowchart illustrating the processing of determining a partial SAD calculation region for determining BIO_vPU_ON in each vPU.
  • FIG. 29 is a flowchart illustrating the processing of determining a partial SAD calculation region for determining BIO_vPU_ON in each vPU, which is a continuation of FIG. 28 .
  • FIG. 30 is a flowchart illustrating, as an operation example according to a second embodiment of the present technology, BIO-included Bi prediction that is performed by an inter prediction unit 51 .
  • FIG. 31 is a flowchart illustrating, as the operation example according to the second embodiment of the present technology, the BIO-included Bi prediction that is performed by the inter prediction unit 51 , which is a continuation of FIG. 30 .
  • FIG. 32 is a diagram illustrating correspondence between BIO_MAX_SAD_BLOCK_SIZE and sPU.
  • FIG. 33 is a flowchart illustrating, as an operation example according to a third embodiment of the present technology, BIO-included Bi prediction that is performed by the inter prediction unit 51 .
  • FIG. 34 is a flowchart illustrating, as the operation example according to the third embodiment of the present technology, the BIO-included Bi prediction that is performed by the inter prediction unit 51 , which is a continuation of FIG. 33 .
  • FIG. 35 is a diagram illustrating exemplary regions for calculating SADs in each PU in a case where BIO_MAX_SAD_BLOCK_SIZE is 2.
  • FIG. 36 is another diagram illustrating exemplary regions for calculating SADs in each PU in the case where BIO_MAX_SAD_BLOCK_SIZE is 2.
  • FIG. 37 is a flowchart illustrating, as an operation example according to a fourth embodiment of the present technology, BIO-included Bi prediction that is performed by the inter prediction unit 51 .
  • FIG. 38 is a flowchart illustrating, as the operation example according to the fourth embodiment of the present technology, the BIO-included Bi prediction that is performed by the inter prediction unit 51 , which is a continuation of FIG. 37 .
  • FIG. 39 is a flowchart illustrating, as an operation example according to a fifth embodiment of the present technology, BIO-included Bi prediction that is performed by the inter prediction unit 51 .
  • FIG. 40 is a flowchart illustrating, as the operation example according to the fifth embodiment of the present technology, the BIO-included Bi prediction that is performed by the inter prediction unit 51 , which is a continuation of FIG. 39 .
  • FIG. 41 is a block diagram illustrating a configuration example of a computer.
  • The VVC standard, a next-generation codec, has been developed as a successor to AVC/H.264 and HEVC/H.265.
  • To structure pipelines efficiently, VPDUs (Virtual Pipeline Data Units) have been introduced.
  • the VPDU size is a buffer size that allows smooth processing on each pipeline stage.
  • the VPDU size is often set to the maximum size of TUs (Transform Units).
  • In VVC, the maximum TU size is 64×64, and the same is assumed to hold true for VPDUs.
  • one CU corresponds to one PU, and hence inter prediction processing is required to be performed on PUs larger than VPDUs. Even in this case, the PU can be partitioned into virtual vPUs (virtual PUs) to be processed.
  • Until BIO (bi-directional optical flow), described later, was employed, VVC was consistent with VPDUs and could be implemented with reasonable HW resources, since only small buffers are used as illustrated in FIG. 1 to FIG. 4.
  • FIG. 1 is a diagram illustrating an example in which a pipeline is structured without the introduction of VPDUs.
  • In the upper part of FIG. 1, the blocks of a CU, an inter PU, and a TU are illustrated.
  • the maximum CU size is 128×128.
  • the maximum inter PU size is 128×128.
  • one CU corresponds to one PU.
  • the TU includes TU0 to TU3, and the maximum size of each TU is 64×64.
  • the TU size is the VPDU size.
  • the CU is obtained by adding the inter PU generated by inter PU processing and the TU obtained by TU processing together.
  • the pipeline including inter PU processing, TU processing, and local decoding processing is illustrated.
  • the inter PU processing and the processing on TU0 to TU3 are performed in parallel, and the local decoding processing on the CU starts when both processes are complete.
  • the inter PU processing requires a buffer of 128×128, and the TU processing also requires a buffer of 128×128 to match the PU.
  • FIG. 2 is a flowchart illustrating Bi prediction (bidirectional prediction) that is one of the inter PU processing in the case of FIG. 1 .
  • In Step S1, inter prediction parameters are acquired.
  • In Step S2, an L0 prediction block is generated.
  • In Step S3, an L1 prediction block is generated.
  • In Step S4, a Bi prediction block for the PU is generated from the L0 prediction block and the L1 prediction block.
  • In Steps S2 to S4, the PU size is required as the maximum buffer size.
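  • As a rough sketch of this whole-PU flow (Steps S1 to S4), assuming hypothetical helper stubs in place of the real L0/L1 motion compensation, the PU-sized intermediate buffers appear as follows:

```cpp
// Rough sketch of the whole-PU Bi prediction flow of FIG. 2 (Steps S1 to S4).
// Names and types are illustrative only, not taken from the patent.
#include <cstdint>
#include <vector>

struct Block { int w, h; std::vector<int16_t> samples; };

// Stubs standing in for real L0/L1 motion compensation from the reference pictures.
static Block generateL0Prediction(int w, int h) { return {w, h, std::vector<int16_t>(w * h, 0)}; }
static Block generateL1Prediction(int w, int h) { return {w, h, std::vector<int16_t>(w * h, 0)}; }

// Steps S2 to S4: both intermediate buffers have the full PU size (up to 128x128).
Block biPredictWholePu(int puW, int puH) {
  Block l0 = generateL0Prediction(puW, puH);
  Block l1 = generateL1Prediction(puW, puH);
  Block bi{puW, puH, std::vector<int16_t>(puW * puH)};
  for (int i = 0; i < puW * puH; ++i)
    bi.samples[i] = static_cast<int16_t>((l0.samples[i] + l1.samples[i] + 1) >> 1);  // (L0 + L1) / 2
  return bi;
}
```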
  • FIG. 3 is a diagram illustrating an example in which a pipeline is efficiently structured with the introduction of VPDUs.
  • the CU includes, unlike FIG. 1, divisions CU(0) to CU(3), since the PU is virtually partitioned into vPUs to be processed.
  • the PU includes virtual vPU(0) to vPU(3).
  • the pipeline including inter PU processing, TU processing, and local decoding processing is illustrated.
  • the processing on vPU(0) to vPU(3) in the inter PU and the processing on TU0 to TU3 are performed in parallel.
  • the local decoding processing on CU(0) to CU(3) starts in turn as the processing on the corresponding vPU and TU is completed.
  • a buffer of 64×64 is enough for the inter PU processing, and in the TU processing a buffer of 64×64 is enough to match the vPU.
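  • As a rough measure of the saving (not stated numerically in the text), a VPDU-sized buffer holds 64×64 = 4,096 samples per prediction list, whereas a PU-sized buffer holds 128×128 = 16,384 samples, i.e. four times as many, before accounting for interpolation margins and bit depth.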
  • FIG. 4 is a flowchart illustrating Bi prediction that is one of the inter PU processing in the case of FIG. 3 .
  • In Step S11, inter prediction parameters are acquired.
  • In Step S12, the number of vPUs included in the PU is acquired.
  • In Step S13, the vPU number is set to 0.
  • In Step S14, it is determined whether or not the vPU number is smaller than the number of vPUs. In a case where it is determined in Step S14 that the vPU number is smaller than the number of vPUs, the processing proceeds to Step S15.
  • In Step S15, the position and size of the vPU in the PU are acquired from the vPU number.
  • In Step S16, an L0 prediction block in the vPU region is generated.
  • In Step S17, an L1 prediction block in the vPU region is generated.
  • In Step S18, a Bi prediction block for the vPU is generated from the L0 prediction block and the L1 prediction block.
  • In Step S19, the vPU number is incremented. After that, the processing returns to Step S14, and the subsequent processing is repeated.
  • In a case where it is determined in Step S14 that the vPU number is equal to or larger than the number of vPUs, the Bi prediction ends.
  • In Steps S16 to S17, the VPDU size, which is smaller than the PU size, is enough for the maximum buffer size.
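  • A rough sketch of this vPU loop follows (the position/size derivation and all names are illustrative assumptions); only VPDU-sized buffers are needed inside the loop:

```cpp
// Rough sketch of the vPU loop of FIG. 4 (Steps S11 to S19). kVpduSize follows
// the 64x64 maximum TU size mentioned above; everything else is illustrative.
#include <algorithm>

constexpr int kVpduSize = 64;

struct VpuRegion { int x, y, w, h; };

void biPredictPerVpu(int puX, int puY, int puW, int puH) {
  const int vpusX = std::max(1, puW / kVpduSize);
  const int vpusY = std::max(1, puH / kVpduSize);
  const int numVpus = vpusX * vpusY;                           // Step S12

  for (int vpuNumber = 0; vpuNumber < numVpus; ++vpuNumber) {  // Steps S13, S14, S19
    VpuRegion r;                                               // Step S15: position/size from the vPU number
    r.w = std::min(puW, kVpduSize);
    r.h = std::min(puH, kVpduSize);
    r.x = puX + (vpuNumber % vpusX) * kVpduSize;
    r.y = puY + (vpuNumber / vpusX) * kVpduSize;

    // Steps S16 to S18: L0/L1 prediction and averaging restricted to this vPU,
    // so a VPDU-sized (64x64) buffer per prediction list is sufficient.
    // generateBiPredictionBlock(r);  // hypothetical per-region helper
    (void)r;
  }
}
```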
  • the optical flow method is an image processing method for detecting the motion of an object in a moving image, to thereby estimate a direction in which the object is to move in a certain period of time.
  • Codec inter prediction employing the optical flow method as an option enhances the encoding efficiency.
  • the term “BIO” is based on the fact that the optical flow method is used in Bi prediction in which temporally continuous frames are referred to in units of frames.
  • FIG. 5 is a diagram illustrating exemplary normal Bi prediction.
  • FIG. 5 illustrates an example in which optimal MVs on a reference plane 0 in the L0 direction and a reference plane 1 in the L1 direction are obtained for the Bi prediction value of a Bi prediction block on a picture B. The same holds true for the following figures.
  • the Bi prediction value corresponds to a pixel L0 of an L0 prediction block on the reference plane 0 and a pixel L1 of an L1 prediction block on the reference plane 1, and the Bi prediction value is thus obtained from (L0+L1)/2.
  • optimal MVs (MV_L0 and MV_L1) are different from predicted MVs (MVP_L0 and MVP_L1), and hence the encoding of difference MVs (MVD_L0 and MVD_L1) is necessary.
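  • In other words, the difference MVs to be encoded are simply the offsets between the optimal and the predicted motion vectors: MVD_L0 = MV_L0 - MVP_L0 and MVD_L1 = MV_L1 - MVP_L1.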
  • FIG. 6 is a diagram illustrating exemplary Bi prediction employing BIO.
  • FIG. 6 illustrates, as the Bi prediction employing BIO, an example in which a gradient (G) and a velocity (V) are obtained by the optical flow method for prediction blocks generated with the predicted MVs (MVP_L0 and MVP_L1).
  • the gradient (G) and the velocity (V) are obtained by the optical flow method for the prediction blocks so that a result equivalent to that in the normal Bi prediction is obtained.
  • the predicted MVs (MVP_L0 and MVP_L1) are directly used as the MVs (MV_L0 and MV_L1), and hence the encoding of the difference MVs (MVD_L0 and MVD_L1) is unnecessary, which means that the encoding efficiency is enhanced.
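  • For reference, the per-pixel BIO correction commonly described for VVC has roughly the form pred(x, y) ≈ (I0(x, y) + I1(x, y) + b(x, y))/2, where b(x, y) is proportional to vx·(∂I1/∂x − ∂I0/∂x) + vy·(∂I1/∂y − ∂I0/∂y); here I0 and I1 are the L0 and L1 prediction samples, (vx, vy) is the velocity (V), and the partial derivatives are the gradients (G). Exact rounding, shifts, and scaling are omitted, and this simplified form is not taken from the patent text.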
  • FIG. 7 is a diagram illustrating exemplary two-block partition in the normal Bi prediction.
  • FIG. 8 is a diagram illustrating exemplary 2-block partition in the Bi prediction employing BIO.
  • the gradient (G) and the velocity (V) are obtained by the optical flow method without partitioning the blocks so that a result equivalent to that in the normal Bi prediction is obtained.
  • In the Bi prediction employing BIO of FIG. 8, the encoding of block partition information and the encoding of difference MVs, both of which are necessary in the Bi prediction of FIG. 7, can be rendered unnecessary or eliminated, with the result that the encoding efficiency can be enhanced.
  • In BIO, a threshold determination using the SAD (Sum of Absolute Difference) is introduced: the SAD of the L0 and L1 prediction blocks is calculated for the entire PU and compared with the threshold, thereby determining whether or not to apply BIO processing, and the processing then branches.
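  • A minimal sketch of this related-art whole-PU check follows; names and types are illustrative, and the comparison direction shown (skip BIO when the SAD is small) follows the common VVC-style early termination rather than anything stated above, which only says that the processing branches:

```cpp
// Related-art style check (sketch): SAD of the L0 and L1 prediction blocks
// over the entire PU, compared with a threshold to decide whether to apply BIO.
#include <cstdint>
#include <cstdlib>
#include <vector>

bool decideBioForWholePu(const std::vector<int16_t>& l0,
                         const std::vector<int16_t>& l1,
                         int64_t threshold) {
  int64_t sad = 0;  // both buffers cover the full PU (up to 128x128 samples)
  for (std::size_t i = 0; i < l0.size(); ++i)
    sad += std::abs(static_cast<int>(l0[i]) - static_cast<int>(l1[i]));
  return sad >= threshold;  // true: apply BIO processing; false: normal Bi prediction
}
```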
  • Due to this BIO processing, it is difficult, in a case where inter prediction is performed on a PU larger than the VPDU, to virtually partition the PU into a plurality of vPUs.
  • In a case where BIO is implemented by HW, the buffer-size reduction achieved by partitioning into vPUs is impaired, and due to a large difference between the pipeline delay of BIO-included inter prediction and the pipeline delay of TU processing, HW implementation that maintains throughput is difficult to achieve.
  • Accordingly, in the present technology, a unit of processing used in the calculation of a cost for determining whether or not to perform bidirectional prediction such as BIO (for example, a PU) is partitioned into partitioned processing units each of which corresponds to the VPDU size (for example, a vPU) or is equal to or smaller than the VPDU size (for example, an sPU described later), and the determination is made by using the cost calculated on the basis of the partitioned processing units.
  • the size corresponding to the VPDU size means a size slightly larger than the VPDU size.
  • Here, “A is larger than B” means “the horizontal size of A is larger than the horizontal size of B” or “the vertical size of A is larger than the vertical size of B.”
  • “A is equal to or smaller than B” means “the horizontal size of A is equal to or smaller than the horizontal size of B and the vertical size of A is equal to or smaller than the vertical size of B.”
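  • The following is a minimal sketch of this partitioned determination, assuming illustrative helper names and a threshold simply scaled to the partition area (the text above does not define these details): the cost is computed region by region with VPDU-sized buffers only, and the determination is made per partitioned processing unit.

```cpp
// Sketch: per-partition SAD cost and BIO determination using only VPDU-sized
// buffers. Helpers are stubs; the threshold scaling by area is an assumption.
#include <cstdint>
#include <cstdlib>
#include <vector>

struct Region { int x, y, w, h; };

// Stubs standing in for motion compensation limited to one region.
static std::vector<int16_t> mcL0(const Region& r) { return std::vector<int16_t>(r.w * r.h, 0); }
static std::vector<int16_t> mcL1(const Region& r) { return std::vector<int16_t>(r.w * r.h, 0); }

static int64_t sad(const std::vector<int16_t>& a, const std::vector<int16_t>& b) {
  int64_t s = 0;
  for (std::size_t i = 0; i < a.size(); ++i)
    s += std::abs(static_cast<int>(a[i]) - static_cast<int>(b[i]));
  return s;
}

void biPredictWithPartitionedBioDecision(int puX, int puY, int puW, int puH,
                                         int vpduSize, int64_t puThreshold) {
  for (int y = 0; y < puH; y += vpduSize) {
    for (int x = 0; x < puW; x += vpduSize) {
      const Region r{puX + x, puY + y,
                     (puW - x < vpduSize) ? puW - x : vpduSize,
                     (puH - y < vpduSize) ? puH - y : vpduSize};
      const std::vector<int16_t> l0 = mcL0(r);  // VPDU-sized buffers only
      const std::vector<int16_t> l1 = mcL1(r);

      // Cost calculated on the basis of the partitioned processing unit; the
      // PU-level threshold is scaled to the partition area (assumption).
      const int64_t regionThreshold = puThreshold * (r.w * r.h) / (int64_t(puW) * puH);
      const bool applyBio = sad(l0, l1) >= regionThreshold;

      // if (applyBio) { BIO-processed Bi prediction for this region }
      // else          { normal Bi prediction for this region }
      (void)applyBio;
    }
  }
}
```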
  • FIG. 9 is a block diagram illustrating a configuration example of an encoding device according to an embodiment of the present technology.
  • An encoding device 1 of FIG. 9 includes an A/D conversion unit 31 , a screen rearrangement buffer 32 , a calculation unit 33 , an orthogonal transform unit 34 , a quantization unit 35 , a lossless encoding unit 36 , an accumulation buffer 37 , an inverse quantization unit 38 , an inverse orthogonal transform unit 39 , and an addition unit 40 . Further, the encoding device 1 includes a deblocking filter 41 , an adaptive offset filter 42 , an adaptive loop filter 43 , a frame memory 44 , a switch 45 , an intra prediction unit 46 , a motion prediction/compensation unit 47 , a predicted image selection unit 48 , and a rate control unit 49 .
  • the A/D conversion unit 31 performs A/D conversion on images in units of frames input to be encoded.
  • the A/D conversion unit 31 outputs the converted images, which are now digital signals, to the screen rearrangement buffer 32 to be stored therein.
  • the screen rearrangement buffer 32 rearranges images in units of frames stored in a display order into an encoding order on the basis of the GOP structure.
  • the screen rearrangement buffer 32 outputs the rearranged images to the calculation unit 33 , the intra prediction unit 46 , and the motion prediction/compensation unit 47 .
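  • For example, with a simple GOP whose display order is I0, B1, B2, P3, the encoding order typically becomes I0, P3, B1, B2 so that the P picture referenced by the B pictures is encoded first (an illustrative example; the actual order depends on the GOP structure used).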
  • the calculation unit 33 subtracts predicted images supplied from the predicted image selection unit 48 from images supplied from the screen rearrangement buffer 32 , to thereby perform encoding.
  • the calculation unit 33 outputs the images obtained as a result of the subtraction as residual information (difference) to the orthogonal transform unit 34 . Note that, in a case where no predicted image is supplied from the predicted image selection unit 48 , the calculation unit 33 directly outputs images read out from the screen rearrangement buffer 32 as residual information to the orthogonal transform unit 34 .
  • the orthogonal transform unit 34 performs orthogonal transform processing on residual information from the calculation unit 33 .
  • the orthogonal transform unit 34 outputs the images obtained as a result of the orthogonal transform processing to the quantization unit 35 .
  • the quantization unit 35 quantizes images obtained as a result of orthogonal transform processing supplied from the orthogonal transform unit 34 .
  • the quantization unit 35 outputs the quantized values obtained as a result of the quantization to the lossless encoding unit 36 .
  • the lossless encoding unit 36 acquires intra prediction mode information that is information indicating an optimal intra prediction mode from the intra prediction unit 46 . Further, the lossless encoding unit 36 acquires inter prediction mode information that is information indicating an optimal inter prediction mode and inter prediction parameters such as motion information and reference image information from the motion prediction/compensation unit 47 .
  • the lossless encoding unit 36 acquires offset filter information associated with an offset filter from the adaptive offset filter 42 and acquires filter coefficients from the adaptive loop filter 43 .
  • the lossless encoding unit 36 performs, on quantized values supplied from the quantization unit 35 , lossless encoding such as variable-length coding (for example, CAVLC (Context-Adaptive Variable Length Coding)) or arithmetic coding (for example, CABAC (Context-Adaptive Binary Arithmetic Coding)).
  • lossless encoding such as variable-length coding (for example, CAVLC (Context-Adaptive Variable Length Coding)) or arithmetic coding (for example, CABAC (Context-Adaptive Binary Arithmetic Coding)).
  • the lossless encoding unit 36 losslessly encodes, as encoding information associated with encoding, the intra prediction mode information or the inter prediction mode information, the inter prediction parameters, the offset filter information, or the filter coefficients.
  • the lossless encoding unit 36 outputs the lossless-encoded encoding information and quantized values as encoded data to the accumulation buffer 37 and accumulates the information and the quantized values therein.
  • the accumulation buffer 37 temporarily stores encoded data supplied from the lossless encoding unit 36 . Further, the accumulation buffer 37 outputs the stored encoded data as encoded streams to the subsequent stage.
  • the quantized values output from the quantization unit 35 are also input to the inverse quantization unit 38 .
  • the inverse quantization unit 38 inversely quantizes the quantized values, and outputs the orthogonal transform processing results obtained as a result of the inverse quantization to the inverse orthogonal transform unit 39 .
  • the inverse orthogonal transform unit 39 performs inverse orthogonal transform processing on orthogonal transform processing results supplied from the inverse quantization unit 38 .
  • Examples of the inverse orthogonal transform include IDCT (inverse discrete cosine transform) and IDST (inverse discrete sine transform).
  • the inverse orthogonal transform unit 39 outputs the residual information obtained as a result of the inverse orthogonal transform processing to the addition unit 40 .
  • the addition unit 40 adds residual information supplied from the inverse orthogonal transform unit 39 and predicted images supplied from the predicted image selection unit 48 together, to thereby perform decoding.
  • the addition unit 40 outputs the decoded images to the deblocking filter 41 and the frame memory 44 .
  • the deblocking filter 41 performs deblocking filter processing of eliminating block distortion on decoded images supplied from the addition unit 40.
  • the deblocking filter 41 outputs the images obtained as a result of the deblocking filter processing to the adaptive offset filter 42 .
  • the adaptive offset filter 42 performs adaptive offset filter (SAO (Sample adaptive offset)) processing of mainly eliminating ringing on images obtained as a result of deblocking filter processing by the deblocking filter 41 .
  • the adaptive offset filter 42 outputs the images obtained as a result of the adaptive offset filter processing to the adaptive loop filter 43 . Further, the adaptive offset filter 42 supplies, as offset filter information, information indicating the types of the adaptive offset filter processing and the offsets to the lossless encoding unit 36 .
  • the adaptive loop filter 43 includes a two-dimensional Wiener filter, for example.
  • the adaptive loop filter 43 performs adaptive loop filter (ALF) processing on images obtained as a result of adaptive offset filter processing.
  • the adaptive loop filter 43 outputs the images obtained as a result of the adaptive loop filter processing to the frame memory 44 . Further, the adaptive loop filter 43 outputs the filter coefficients used in the adaptive loop filter processing to the lossless encoding unit 36 .
  • the frame memory 44 accumulates images supplied from the adaptive loop filter 43 and images supplied from the addition unit 40 .
  • images neighboring the CUs are output as peripheral images to the intra prediction unit 46 through the switch 45 .
  • the images subjected to the filter processing to be accumulated in the frame memory 44 are output as reference images to the motion prediction/compensation unit 47 through the switch 45 .
  • the intra prediction unit 46 performs intra prediction processing in all candidate intra prediction modes in units of PUs by using peripheral images read out from the frame memory 44 through the switch 45 .
  • the intra prediction unit 46 calculates RD costs in all the candidate intra prediction modes on the basis of images read out from the screen rearrangement buffer 32 and predicted images generated by the intra prediction processing.
  • the intra prediction unit 46 determines the intra prediction mode having the minimum calculated RD cost as the optimal intra prediction mode.
  • the intra prediction unit 46 outputs the predicted image generated in the optimal intra prediction mode to the predicted image selection unit 48 .
  • the intra prediction unit 46 outputs, when being notified that the predicted image generated in the optimal intra prediction mode has been selected, the intra prediction mode information to the lossless encoding unit 36 .
  • the intra prediction mode is a mode indicating PU sizes, prediction directions, and the like.
  • the motion prediction/compensation unit 47 performs motion prediction/compensation processing in all candidate inter prediction modes.
  • the motion prediction/compensation unit 47 includes an inter prediction unit 51 configured to compensate for predicted motions to generate predicted images.
  • the motion prediction/compensation unit 47 detects motion information (motion vectors) in all the candidate inter prediction modes on the basis of images supplied from the screen rearrangement buffer 32 and reference images read out from the frame memory 44 through the switch 45 .
  • the motion prediction/compensation unit 47 supplies, to the inter prediction unit 51 , PU positions in frames, PU sizes, prediction directions, reference image information, motion information, and the like that correspond to the detected motion information as inter prediction parameters.
  • the inter prediction unit 51 generates predicted images by BIO processing-included Bi prediction, for example, by using inter prediction parameters supplied from the motion prediction/compensation unit 47 .
  • the motion prediction/compensation unit 47 calculates RD costs in all the candidate inter prediction modes on the basis of images supplied from the screen rearrangement buffer 32 and predicted images generated by the inter prediction unit 51 .
  • the motion prediction/compensation unit 47 determines an inter prediction mode having the minimum RD cost as an optimal inter prediction mode.
  • the RD cost and the predicted image in the determined optimal inter prediction mode are output to the predicted image selection unit 48 .
  • the inter prediction parameters in the determined optimal inter prediction mode are output to the lossless encoding unit 36 .
  • the predicted image selection unit 48 determines, as an optimal prediction mode, one of an optimal intra prediction mode supplied from the intra prediction unit 46 and an optimal inter prediction mode supplied from the motion prediction/compensation unit 47 that has a smaller RD cost than the other. Then, the predicted image selection unit 48 outputs the predicted image in the optimal prediction mode to the calculation unit 33 and the addition unit 40 .
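  • For illustration, selection by minimum RD cost can be sketched with the usual J = D + lambda * R cost model (the exact cost function is not specified in the text above; all names are illustrative):

```cpp
// Sketch of minimum-RD-cost mode selection, as performed around the intra
// prediction unit 46, the motion prediction/compensation unit 47, and the
// predicted image selection unit 48. The cost model is illustrative only.
#include <limits>
#include <vector>

struct ModeCandidate { int modeId; double distortion; double rateBits; };

// J = D + lambda * R : lower is better.
double rdCost(const ModeCandidate& m, double lambda) {
  return m.distortion + lambda * m.rateBits;
}

int pickBestMode(const std::vector<ModeCandidate>& candidates, double lambda) {
  double best = std::numeric_limits<double>::max();
  int bestId = -1;
  for (const ModeCandidate& m : candidates) {
    const double j = rdCost(m, lambda);
    if (j < best) { best = j; bestId = m.modeId; }
  }
  return bestId;  // the optimal (intra or inter) prediction mode
}
```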
  • the rate control unit 49 controls the rate of the quantization operation by the quantization unit 35 on the basis of encoded data accumulated in the accumulation buffer 37 so that neither overflow nor underflow occurs.
  • FIG. 10 and FIG. 11 are flowcharts illustrating the details of encoding processing by the encoding device.
  • In Step S31 of FIG. 10, the A/D conversion unit 31 performs A/D conversion on images in units of frames input to be encoded.
  • The A/D conversion unit 31 outputs the converted images, which are now digital signals, to the screen rearrangement buffer 32 to be stored therein.
  • Step S 32 the screen rearrangement buffer 32 rearranges the frame images stored in a display order into an encoding order on the basis of the GOP structure.
  • the screen rearrangement buffer 32 outputs the rearranged images in units of frames to the calculation unit 33 , the intra prediction unit 46 , and the motion prediction/compensation unit 47 .
  • Step S 33 the intra prediction unit 46 performs intra prediction processing in all candidate intra prediction modes. Further, the intra prediction unit 46 calculates RD costs in all the candidate intra prediction modes on the basis of the image read out from the screen rearrangement buffer 32 and predicted images generated by the intra prediction processing. The intra prediction unit 46 determines an intra prediction mode having the minimum RD cost as an optimal intra prediction mode. The intra prediction unit 46 outputs the predicted image generated in the optimal intra prediction mode to the predicted image selection unit 48 .
  • Step S 34 the motion prediction/compensation unit 47 performs motion prediction/compensation processing in all candidate inter prediction modes.
  • the motion prediction/compensation unit 47 detects motion information (motion vectors) in all the candidate inter prediction modes on the basis of the image supplied from the screen rearrangement buffer 32 and reference images read out from the frame memory 44 through the switch 45 .
  • the inter prediction unit 51 generates predicted images by BIO processing-included Bi prediction, for example, by using inter prediction parameters supplied from the motion prediction/compensation unit 47 .
  • the motion prediction/compensation unit 47 calculates RD costs in all the candidate inter prediction modes on the basis of the image supplied from the screen rearrangement buffer 32 and the predicted images generated by the inter prediction unit 51 .
  • the motion prediction/compensation unit 47 determines an inter prediction mode having the minimum RD cost as an optimal inter prediction mode.
  • the RD cost and the predicted image in the determined optimal inter prediction mode are output to the predicted image selection unit 48 .
  • the inter prediction parameters in the determined optimal inter prediction mode are output to the lossless encoding unit 36 .
  • Step S 35 the predicted image selection unit 48 determines, as an optimal prediction mode, one of the optimal intra prediction mode and the optimal inter prediction mode that has a smaller RD cost than the other. Then, the predicted image selection unit 48 outputs the predicted image in the optimal prediction mode to the calculation unit 33 and the addition unit 40 .
  • In Step S36, the predicted image selection unit 48 determines whether the optimal prediction mode is the optimal inter prediction mode. In a case where it is determined in Step S36 that the optimal prediction mode is the optimal inter prediction mode, the predicted image selection unit 48 notifies the motion prediction/compensation unit 47 that the predicted image generated in the optimal inter prediction mode has been selected.
  • Then, in Step S37, the motion prediction/compensation unit 47 outputs the inter prediction mode information and the inter prediction parameters to the lossless encoding unit 36. After that, the processing proceeds to Step S39.
  • Meanwhile, in a case where it is determined in Step S36 that the optimal prediction mode is not the optimal inter prediction mode, the predicted image selection unit 48 notifies the intra prediction unit 46 that the predicted image generated in the optimal intra prediction mode has been selected. Then, in Step S38, the intra prediction unit 46 outputs the intra prediction mode information to the lossless encoding unit 36. After that, the processing proceeds to Step S39.
  • Step S 39 the calculation unit 33 subtracts the predicted image supplied from the predicted image selection unit 48 from the image supplied from the screen rearrangement buffer 32 , to thereby perform encoding.
  • the calculation unit 33 outputs the image obtained as a result of the subtraction as residual information to the orthogonal transform unit 34 .
  • Step S 40 the orthogonal transform unit 34 performs orthogonal transform processing on the residual information.
  • the orthogonal transform unit 34 outputs the orthogonal transform processing result obtained as a result of the orthogonal transform processing to the quantization unit 35 .
  • Step S 41 the quantization unit 35 quantizes the orthogonal transform processing result supplied from the orthogonal transform unit 34 .
  • the quantization unit 35 outputs the quantized value obtained as a result of the quantization to the lossless encoding unit 36 and the inverse quantization unit 38 .
  • In Step S42 of FIG. 11, the inverse quantization unit 38 inversely quantizes the quantized value from the quantization unit 35.
  • the inverse quantization unit 38 outputs the orthogonal transform processing result obtained as a result of the inverse quantization to the inverse orthogonal transform unit 39 .
  • Step S 43 the inverse orthogonal transform unit 39 performs inverse orthogonal transform processing on the orthogonal transform processing result.
  • the inverse orthogonal transform unit 39 outputs the residual information obtained as a result of the inverse orthogonal transform processing to the addition unit 40 .
  • Step S 44 the addition unit 40 adds the residual information supplied from the inverse orthogonal transform unit 39 and the predicted image supplied from the predicted image selection unit 48 together, to thereby perform decoding.
  • the addition unit 40 outputs the decoded image to the deblocking filter 41 and the frame memory 44 .
  • Step S 45 the deblocking filter 41 performs deblocking filter processing on the image supplied from the addition unit 40 .
  • the deblocking filter 41 outputs the image obtained as a result of the deblocking filter processing to the adaptive offset filter 42 .
  • Step S 46 the adaptive offset filter 42 performs adaptive offset filter processing on the image obtained as a result of the deblocking filter processing.
  • the adaptive offset filter 42 outputs the image obtained as a result of the adaptive offset filter processing to the adaptive loop filter 43 . Further, the adaptive offset filter 42 outputs the offset filter information to the lossless encoding unit 36 .
  • Step S 47 the adaptive loop filter 43 performs adaptive loop filter processing on the image obtained as a result of the adaptive offset filter processing.
  • the adaptive loop filter 43 outputs the image obtained as a result of the adaptive loop filter processing to the frame memory 44 . Further, the adaptive loop filter 43 outputs the filter coefficients used in the adaptive loop filter processing to the lossless encoding unit 36 .
  • Step S 48 the frame memory 44 accumulates the image supplied from the adaptive loop filter 43 and the image supplied from the addition unit 40 .
  • images neighboring the CUs are output as peripheral images to the intra prediction unit 46 through the switch 45 .
  • the images subjected to the filter processing to be accumulated in the frame memory 44 are output as reference images to the motion prediction/compensation unit 47 through the switch 45 .
  • Step S 49 the lossless encoding unit 36 losslessly encodes, as encoding information, the intra prediction mode information or the inter prediction mode information, the inter prediction parameters, the offset filter information, or the filter coefficients.
  • Step S 50 the lossless encoding unit 36 losslessly encodes the quantized value supplied from the quantization unit 35 . Then, the lossless encoding unit 36 generates encoded data from the encoding information losslessly encoded by the processing in Step S 49 and the lossless-encoded quantized value and outputs the encoded data to the accumulation buffer 37 .
  • Step S 51 the accumulation buffer 37 temporarily accumulates the encoded data supplied from the lossless encoding unit 36 .
  • Step S 52 the rate control unit 49 controls the rate of the quantization operation by the quantization unit 35 on the basis of the encoded data accumulated in the accumulation buffer 37 so that neither overflow nor underflow occurs. After that, the encoding processing ends.
  • In the above description, the intra prediction processing and the motion prediction/compensation processing are always performed; in reality, however, only one of them may be performed depending on the picture type or the like.
  • FIG. 12 is a block diagram illustrating a configuration example of an embodiment of a decoding device to which the present disclosure is applied, which decodes encoded streams transmitted from the encoding device of FIG. 9 .
  • a decoding device 101 of FIG. 12 includes an accumulation buffer 131 , a lossless decoding unit 132 , an inverse quantization unit 133 , an inverse orthogonal transform unit 134 , an addition unit 135 , a deblocking filter 136 , an adaptive offset filter 137 , an adaptive loop filter 138 , and a screen rearrangement buffer 139 . Further, the decoding device 101 includes a D/A conversion unit 140 , a frame memory 141 , a switch 142 , an intra prediction unit 143 , the inter prediction unit 51 , and a switch 144 .
  • the accumulation buffer 131 of the decoding device 101 receives encoded data transmitted as encoded streams from the encoding device 1 of FIG. 9 and accumulates the encoded data.
  • the accumulation buffer 131 outputs the accumulated encoded data to the lossless decoding unit 132 .
  • the lossless decoding unit 132 performs lossless decoding such as variable length decoding or arithmetic decoding on encoded data from the accumulation buffer 131 , to thereby obtain quantized values and encoding information.
  • the lossless decoding unit 132 outputs the quantized values to the inverse quantization unit 133 .
  • the encoding information includes intra prediction mode information, inter prediction mode information, inter prediction parameters, offset filter information, filter coefficients, or the like.
  • the lossless decoding unit 132 outputs the intra prediction mode information and the like to the intra prediction unit 143 .
  • the lossless decoding unit 132 outputs the inter prediction parameters, the inter prediction mode information, and the like to the inter prediction unit 51 .
  • the lossless decoding unit 132 outputs the intra prediction mode information or the inter prediction mode information to the switch 144 .
  • the lossless decoding unit 132 outputs the offset filter information to the adaptive offset filter 137 .
  • the lossless decoding unit 132 outputs the filter coefficients to the adaptive loop filter 138 .
  • the inverse quantization unit 133 , the inverse orthogonal transform unit 134 , the addition unit 135 , the deblocking filter 136 , the adaptive offset filter 137 , the adaptive loop filter 138 , the frame memory 141 , the switch 142 , the intra prediction unit 143 , and the inter prediction unit 51 perform processing processes similar to those of the inverse quantization unit 38 , the inverse orthogonal transform unit 39 , the addition unit 40 , the deblocking filter 41 , the adaptive offset filter 42 , the adaptive loop filter 43 , the frame memory 44 , the switch 45 , the intra prediction unit 46 , and the motion prediction/compensation unit 47 of FIG. 9 . With this, images are decoded.
  • the inverse quantization unit 133 is configured like the inverse quantization unit 38 of FIG. 9 .
  • the inverse quantization unit 133 inversely quantizes quantized values from the lossless decoding unit 132 .
  • the inverse quantization unit 133 outputs the orthogonal transform processing results obtained as a result of the inverse quantization to the inverse orthogonal transform unit 134 .
  • the inverse orthogonal transform unit 134 is configured like the inverse orthogonal transform unit 39 of FIG. 9 .
  • the inverse orthogonal transform unit 134 performs inverse orthogonal transform processing on orthogonal transform processing results supplied from the inverse quantization unit 133 .
  • the inverse orthogonal transform unit 134 outputs the residual information obtained as a result of the inverse orthogonal transform processing to the addition unit 135 .
  • the addition unit 135 adds residual information supplied from the inverse orthogonal transform unit 134 and predicted images supplied from the switch 144 together, to thereby perform decoding.
  • the addition unit 135 outputs the decoded images to the deblocking filter 136 and the frame memory 141 .
  • the deblocking filter 136 performs deblocking filter processing on images supplied from the addition unit 135 and outputs the images obtained as a result of the deblocking filter processing to the adaptive offset filter 137 .
  • the adaptive offset filter 137 performs, by using offsets indicated by offset filter information from the lossless decoding unit 132 , adaptive offset filter processing of types indicated by the offset filter information on images obtained as a result of deblocking filter processing.
  • the adaptive offset filter 137 outputs the images obtained as a result of the adaptive offset filter processing to the adaptive loop filter 138 .
  • the adaptive loop filter 138 performs adaptive loop filter processing on images supplied from the adaptive offset filter 137 by using filter coefficients supplied from the lossless decoding unit 132 .
  • the adaptive loop filter 138 outputs the images obtained as a result of the adaptive loop filter processing to the frame memory 141 and the screen rearrangement buffer 139 .
  • the screen rearrangement buffer 139 stores images obtained as a result of adaptive loop filter processing in units of frames.
  • the screen rearrangement buffer 139 rearranges the images in units of frames in the encoding order into the original display order and outputs the resultant to the D/A conversion unit 140 .
  • the D/A conversion unit 140 performs D/A conversion on images in units of frames supplied from the screen rearrangement buffer 139 and outputs the resultant.
  • the frame memory 141 accumulates images obtained as a result of adaptive loop filter processing and images supplied from the addition unit 135 .
  • images neighboring the CUs are supplied as peripheral images to the intra prediction unit 143 through the switch 142 .
  • the images subjected to the filter processing to be accumulated in the frame memory 141 are output as reference images to the inter prediction unit 51 through the switch 142 .
  • the intra prediction unit 143 performs, by using peripheral images read out from the frame memory 141 through the switch 142 , intra prediction processing in an optimal intra prediction mode indicated by intra prediction mode information supplied from the lossless decoding unit 132 .
  • the intra prediction unit 143 outputs the thus generated predicted images to the switch 144 .
  • the inter prediction unit 51 is configured like the one in FIG. 9 .
  • the inter prediction unit 51 performs, by using inter prediction parameters supplied from the lossless decoding unit 132 , inter prediction in an optimal inter prediction mode indicated by inter prediction mode information, to thereby generate a predicted image.
  • the inter prediction unit 51 reads out, from the frame memory 141 through the switch 142 , reference images specified by reference image information that is an inter prediction parameter supplied from the lossless decoding unit 132 .
  • the inter prediction unit 51 generates predicted images with BIO processing-included Bi prediction, for example, by using motion information that is an inter prediction parameter supplied from the lossless decoding unit 132 and the read-out reference images.
  • the generated predicted images are output to the switch 144 .
  • the switch 144 outputs, in a case where intra prediction mode information has been supplied from the lossless decoding unit 132 , predicted images supplied from the intra prediction unit 143 to the addition unit 135 . Meanwhile, the switch 144 outputs, in a case where inter prediction mode information has been supplied from the lossless decoding unit 132 , predicted images supplied from the inter prediction unit 51 to the addition unit 135 .
  • FIG. 13 is a flowchart illustrating the details of decoding processing by the decoding device.
  • In Step S131 of FIG. 13, the accumulation buffer 131 of the decoding device 101 receives encoded data in units of frames supplied from the preceding stage, which is not illustrated, and accumulates the encoded data.
  • the accumulation buffer 131 outputs the accumulated encoded data to the lossless decoding unit 132 .
  • Step S 132 the lossless decoding unit 132 losslessly decodes the encoded data from the accumulation buffer 131 to obtain a quantized value and encoding information.
  • the lossless decoding unit 132 outputs the quantized value to the inverse quantization unit 133 .
  • the lossless decoding unit 132 outputs intra prediction mode information and the like to the intra prediction unit 143 .
  • the lossless decoding unit 132 outputs inter prediction parameters, inter prediction mode information, and the like to the inter prediction unit 51 .
  • the lossless decoding unit 132 outputs the intra prediction mode information or the inter prediction mode information to the switch 144 .
  • the lossless decoding unit 132 supplies offset filter information to the adaptive offset filter 137 and outputs filter coefficients to the adaptive loop filter 138 .
  • Step S 133 the inverse quantization unit 133 inversely quantizes the quantized value supplied from the lossless decoding unit 132 .
  • the inverse quantization unit 133 outputs the orthogonal transform processing result obtained as a result of the inverse quantization to the inverse orthogonal transform unit 134 .
  • In Step S134, the inverse orthogonal transform unit 134 performs inverse orthogonal transform processing on the orthogonal transform processing result supplied from the inverse quantization unit 133.
  • Step S 135 the inter prediction unit 51 determines whether the inter prediction mode information has been supplied from the lossless decoding unit 132 . In a case where it is determined in Step S 135 that the inter prediction mode information has been supplied, the processing proceeds to Step S 136 .
  • Step S 136 the inter prediction unit 51 reads out reference images on the basis of reference image specification information supplied from the lossless decoding unit 132 , and performs, by using motion information and the reference images, motion compensation processing in an optimal inter prediction mode indicated by the inter prediction mode information.
  • the inter prediction unit 51 generates a predicted image with BIO processing-included Bi prediction.
  • the inter prediction unit 51 outputs the generated predicted image to the addition unit 135 through the switch 144 . After that, the processing proceeds to Step S 138 .
  • Meanwhile, in a case where it is determined in Step S135 that the inter prediction mode information has not been supplied, the processing proceeds to Step S137.
  • Step S 137 the intra prediction unit 143 performs, by using peripheral images read out from the frame memory 141 through the switch 142 , intra prediction processing in an intra prediction mode indicated by the intra prediction mode information.
  • the intra prediction unit 143 outputs the predicted image generated as a result of the intra prediction processing to the addition unit 135 through the switch 144 . After that, the processing proceeds to Step S 138 .
  • Step S 138 the addition unit 135 adds residual information supplied from the inverse orthogonal transform unit 134 and the predicted image supplied from the switch 144 together, to thereby perform decoding.
  • the addition unit 135 outputs the decoded image to the deblocking filter 136 and the frame memory 141 .
  • In Step S139, the deblocking filter 136 performs deblocking filter processing on the image supplied from the addition unit 135 to remove block distortion.
  • the deblocking filter 136 outputs the image obtained as a result of the deblocking filter processing to the adaptive offset filter 137 .
  • In Step S140, the adaptive offset filter 137 performs, on the basis of the offset filter information supplied from the lossless decoding unit 132, adaptive offset filter processing on the image obtained as a result of the deblocking filter processing.
  • the adaptive offset filter 137 outputs the image obtained as a result of the adaptive offset filter processing to the adaptive loop filter 138 .
  • In Step S141, the adaptive loop filter 138 performs, by using the filter coefficients supplied from the lossless decoding unit 132, adaptive loop filter processing on the image supplied from the adaptive offset filter 137.
  • the adaptive loop filter 138 supplies the image obtained as a result of the adaptive loop filter processing to the frame memory 141 and the screen rearrangement buffer 139 .
  • In Step S142, the frame memory 141 accumulates the image supplied from the addition unit 135 and the image supplied from the adaptive loop filter 138.
  • images neighboring the CUs are supplied as peripheral images to the intra prediction unit 143 through the switch 142 .
  • the images subjected to the filter processing to be accumulated in the frame memory 141 are supplied as reference images to the inter prediction unit 51 through the switch 142 .
  • In Step S143, the screen rearrangement buffer 139 stores the images supplied from the adaptive loop filter 138 in units of frames.
  • the screen rearrangement buffer 139 rearranges the stored images, which are in encoding order, into the original display order and outputs the resultant to the D/A conversion unit 140.
  • In Step S144, the D/A conversion unit 140 performs D/A conversion on the image obtained as a result of the adaptive loop filter processing and outputs the resultant.
  • FIG. 14 is a block diagram illustrating a configuration example of the inter prediction unit.
  • the inter prediction unit 51 includes an inter prediction control unit 201 , an L 0 prediction block generation unit 202 , an L 1 prediction block generation unit 203 , a BIO cost calculation unit 204 , a BIO application determination unit 205 , a Bi prediction block generation unit 206 , a BIO processing-included Bi prediction block generation unit 207 , a Bi prediction block selection unit 208 , and a prediction block selection unit 209 .
  • the inter prediction control unit 201 receives, in the case of the encoding device 1 , inter prediction parameters from the motion prediction/compensation unit 47 (from the lossless decoding unit 132 in the case of the decoding device 101 ).
  • the inter prediction parameters include a PU position in a frame, a PU size, a prediction direction (any one of L 0 , L 1 , and Bi is set), reference image information, motion information, and the like.
  • the inter prediction control unit 201 includes, for example, a CPU (Central Processing Unit) or a microprocessor.
  • the CPU of the inter prediction control unit 201 executes a predetermined program to control the units on the basis of the contents of the inter prediction parameters.
  • the inter prediction control unit 201 supplies L 0 prediction parameters to the L 0 prediction block generation unit 202 , thereby controlling the L 0 prediction block generation unit 202 .
  • the L 0 prediction parameters include PU positions, PU sizes, reference image information REFIDX_L 0 , and motion information MV_L 0 .
  • the inter prediction control unit 201 supplies L 1 prediction parameters to the L 1 prediction block generation unit 203 , thereby controlling the L 1 prediction block generation unit 203 .
  • the L1 prediction parameters include PU positions, PU sizes, reference image information REFIDX_L1, and motion information MV_L1.
  • the inter prediction control unit 201 supplies Bi prediction parameters to the BIO cost calculation unit 204 , the Bi prediction block generation unit 206 , and the BIO processing-included Bi prediction block generation unit 207 , thereby controlling the BIO cost calculation unit 204 , the Bi prediction block generation unit 206 , and the BIO processing-included Bi prediction block generation unit 207 .
  • the Bi prediction parameters include PU sizes and the like.
  • the inter prediction control unit 201 supplies a BIO threshold to the BIO application determination unit 205 , thereby controlling the BIO application determination unit 205 .
  • the inter prediction control unit 201 supplies a prediction direction to the prediction block selection unit 209 , thereby controlling the prediction block selection unit 209 .
  • the L 0 prediction block generation unit 202 operates when the prediction direction is L 0 or Bi.
  • the L 0 prediction block generation unit 202 accesses the frame memory 44 on the basis of L 0 prediction parameters supplied from the inter prediction control unit 201 , to thereby generate L 0 prediction images from reference images.
  • the generated L 0 prediction images are supplied from the L 0 prediction block generation unit 202 to the BIO cost calculation unit 204 , the BIO application determination unit 205 , the Bi prediction block generation unit 206 , the BIO processing-included Bi prediction block generation unit 207 , and the prediction block selection unit 209 .
  • the L 1 prediction block generation unit 203 operates when the prediction direction is L 1 or Bi.
  • the L 1 prediction block generation unit 203 accesses the frame memory 44 on the basis of L 1 prediction parameters supplied from the inter prediction control unit 201 , to thereby generate L 1 prediction images from reference images.
  • the generated L 1 prediction images are supplied from the L 1 prediction block generation unit 203 to the BIO cost calculation unit 204 , the BIO application determination unit 205 , the Bi prediction block generation unit 206 , the BIO processing-included Bi prediction block generation unit 207 , and the prediction block selection unit 209 .
  • the BIO cost calculation unit 204 operates when the prediction direction is Bi.
  • the BIO cost calculation unit 204 calculates, on the basis of Bi prediction parameters supplied from the inter prediction control unit 201 , the SAD of an L 0 prediction image supplied from the L 0 prediction block generation unit 202 and an L 1 prediction image supplied from the L 1 prediction block generation unit 203 .
  • the calculated SAD is supplied from the BIO cost calculation unit 204 to the BIO application determination unit 205 .
  • the BIO application determination unit 205 operates when the prediction direction is Bi.
  • the BIO application determination unit 205 compares the BIO threshold supplied from the inter prediction control unit 201 to a SAD supplied from the BIO cost calculation unit 204 , thereby determining a BIO_ON flag.
  • the determined BIO_ON flag is supplied from the BIO application determination unit 205 to the Bi prediction block generation unit 206 , the BIO processing-included Bi prediction block generation unit 207 , and the Bi prediction block selection unit 208 .
  • the Bi prediction block generation unit 206 generates, on the basis of Bi prediction parameters supplied from the inter prediction control unit 201 , Bi prediction images from L 0 prediction images supplied from the L 0 prediction block generation unit 202 and L 1 prediction images supplied from the L 1 prediction block generation unit 203 .
  • the generated Bi prediction images are supplied from the Bi prediction block generation unit 206 to the Bi prediction block selection unit 208 .
  • the BIO processing-included Bi prediction block generation unit 207 generates, on the basis of Bi prediction parameters supplied from the inter prediction control unit 201, BIO processing-included Bi prediction images from L0 prediction images supplied from the L0 prediction block generation unit 202 and L1 prediction images supplied from the L1 prediction block generation unit 203.
  • the generated BIO processing-included Bi prediction images are supplied from the BIO processing-included Bi prediction block generation unit 207 to the Bi prediction block selection unit 208 .
  • the Bi prediction block selection unit 208 selects Bi prediction images on the basis of the BIO_ON flag supplied from the BIO application determination unit 205 .
  • the selected Bi prediction images are supplied from the Bi prediction block selection unit 208 to the prediction block selection unit 209 .
  • the prediction block selection unit 209 selects predicted images on the basis of a prediction direction supplied from the inter prediction control unit 201 and outputs the selected predicted images as the predicted images of inter prediction to the predicted image selection unit 48 of FIG. 9 (or the switch 144 of FIG. 12 ) on the subsequent stage.
  • the prediction block selection unit 209 selects L 0 prediction images supplied from the L 0 prediction block generation unit 202 in a case where the prediction direction is L 0 , and selects L 1 prediction images supplied from the L 1 prediction block generation unit 203 in a case where the prediction direction is L 1 .
  • the prediction block selection unit 209 selects Bi prediction images supplied from the Bi prediction block selection unit 208 in a case where the prediction direction is Bi.
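  • The selection performed by the Bi prediction block selection unit 208 and the prediction block selection unit 209 can be summarized in the following minimal C++ sketch. The container type, the function signature, and the meaning of bio_on (true meaning that the BIO processing-included Bi prediction block is selected) are assumptions made only for illustration; only the selection rules themselves follow the description above.

      #include <cstdint>
      #include <vector>

      enum class PredDir { L0, L1, Bi };

      using Block = std::vector<int16_t>;  // one prediction block, row-major samples

      // Selection by units 208 and 209: L0 or L1 prediction for unidirectional
      // prediction, and either the plain Bi block or the BIO processing-included
      // Bi block for bidirectional prediction, depending on the BIO_ON flag.
      Block select_prediction(PredDir dir, bool bio_on,
                              const Block& l0, const Block& l1,
                              const Block& bi, const Block& bi_with_bio) {
        if (dir == PredDir::L0) return l0;
        if (dir == PredDir::L1) return l1;
        return bio_on ? bi_with_bio : bi;  // Bi direction: unit 208 picks first
      }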
  • FIG. 15 and FIG. 16 are flowcharts illustrating BIO-included Bi prediction that is performed by the inter prediction unit 51 .
  • Note that this processing is related-art BIO-included Bi prediction processing, which is to be compared with the BIO-included Bi prediction processing of the present technology described later. Further, this BIO-included Bi prediction processing is processing that is performed on both the encoding side and the decoding side, is part of the motion prediction/compensation processing in Step S34 of FIG. 10, and is part of the inter prediction processing in Step S136 of FIG. 13.
  • In Step S301 of FIG. 15, the inter prediction control unit 201 acquires inter prediction parameters supplied from the motion prediction/compensation unit 47.
  • In the case of the decoding device 101, the inter prediction parameters are supplied from the lossless decoding unit 132.
  • the inter prediction parameters include a PU position in a frame, a PU size, a prediction direction (any one of L 0 , L 1 , and Bi is set), reference image information, motion information, and the like.
  • the inter prediction control unit 201 supplies L 0 prediction parameters to the L 0 prediction block generation unit 202 .
  • the L 0 prediction parameters include a PU position, a PU size, reference image information REFIDX_L 0 , and motion information MV_L 0 .
  • the inter prediction control unit 201 supplies L 1 prediction parameters to the L 1 prediction block generation unit 203 .
  • the L 1 prediction parameters include a PU position, a PU size, reference image information REFIDX_L 1 , and motion information MV_L 1 .
  • the inter prediction control unit 201 supplies Bi prediction parameters to the BIO cost calculation unit 204 , the Bi prediction block generation unit 206 , and the BIO processing-included Bi prediction block generation unit 207 .
  • the Bi prediction parameters are information indicating PU sizes.
  • the inter prediction control unit 201 supplies the BIO threshold to the BIO application determination unit 205 .
  • the inter prediction control unit 201 supplies a prediction direction to the prediction block selection unit 209 , thereby controlling the prediction block selection unit 209 .
  • In Step S302, the L0 prediction block generation unit 202 accesses the frame memory 44 on the basis of the L0 prediction parameters supplied from the inter prediction control unit 201, to thereby generate an L0 prediction image from a reference image.
  • In the case of the decoding device 101, the reference image is referred to through an access to the frame memory 141.
  • In Step S303, the L1 prediction block generation unit 203 accesses the frame memory 44 on the basis of the L1 prediction parameters supplied from the inter prediction control unit 201, to thereby generate an L1 prediction image from a reference image.
  • In Step S304, the BIO cost calculation unit 204 calculates, in units of 4×4, the SAD of the L0 prediction image supplied from the L0 prediction block generation unit 202 and the L1 prediction image supplied from the L1 prediction block generation unit 203.
  • The SADs calculated in units of 4×4 are accumulated so that SAD_4×4 block, which is the sum of the SADs, is acquired.
  • In Step S305, the BIO cost calculation unit 204 calculates, in units of PUs, the SAD of the L0 prediction image supplied from the L0 prediction block generation unit 202 and the L1 prediction image supplied from the L1 prediction block generation unit 203.
  • The SADs calculated in units of PUs are accumulated so that SAD_PU, which is the sum of the SADs, is acquired.
  • The acquired SAD_PU is supplied from the BIO cost calculation unit 204 to the BIO application determination unit 205.
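  • A minimal C++ sketch of the SAD calculation in Steps S304 and S305 follows: per-pixel absolute differences between the L0 and L1 prediction images are accumulated into one SAD per 4×4 block, and SAD_PU is their sum over the PU. The row-major sample layout, the stride parameter, and the function name are assumptions for illustration only.

      #include <cstdint>
      #include <cstdlib>
      #include <vector>

      // Per-4x4 SADs between the L0 and L1 prediction images and their sum over
      // the PU (Steps S304 and S305). pu_w and pu_h are assumed to be multiples of 4.
      void sad_4x4_and_pu(const int16_t* l0, const int16_t* l1, int stride,
                          int pu_w, int pu_h,
                          std::vector<int64_t>& sad_4x4, int64_t& sad_pu) {
        sad_4x4.assign((pu_w / 4) * (pu_h / 4), 0);
        sad_pu = 0;
        for (int by = 0; by < pu_h / 4; ++by) {
          for (int bx = 0; bx < pu_w / 4; ++bx) {
            int64_t s = 0;
            for (int y = 0; y < 4; ++y)
              for (int x = 0; x < 4; ++x) {
                const int idx = (by * 4 + y) * stride + bx * 4 + x;
                s += std::abs(l0[idx] - l1[idx]);
              }
            sad_4x4[by * (pu_w / 4) + bx] = s;  // SAD_4x4 for this block
            sad_pu += s;                        // SAD_PU is the sum over the PU
          }
        }
      }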
  • In Step S306, the BIO application determination unit 205 determines the BIO_PU_ON flag by comparing SAD_PU supplied from the BIO cost calculation unit 204 with BIO threshold_PU supplied from the inter prediction control unit 201.
  • The determined BIO_PU_ON flag is supplied from the BIO application determination unit 205 to the Bi prediction block generation unit 206, the BIO processing-included Bi prediction block generation unit 207, and the Bi prediction block selection unit 208.
  • In Step S307, the Bi prediction block generation unit 206 and the BIO processing-included Bi prediction block generation unit 207 determine whether or not the BIO_PU_ON flag is 1.
  • In a case where it is determined in Step S307 that the BIO_PU_ON flag is not 1, the processing proceeds to Step S308.
  • In Step S308, the Bi prediction block generation unit 206 generates a Bi prediction block PU from the L0 prediction image supplied from the L0 prediction block generation unit 202 and the L1 prediction image supplied from the L1 prediction block generation unit 203.
  • The generated Bi prediction block PU is supplied from the Bi prediction block generation unit 206 to the Bi prediction block selection unit 208. After that, the BIO-included Bi prediction processing ends.
  • The maximum buffer size in the processing in Step S308 is the PU size.
  • Meanwhile, in a case where it is determined in Step S307 that the BIO_PU_ON flag is 1, the processing proceeds to Step S309.
  • In Steps S309 to S320, the BIO processing-included Bi prediction block generation unit 207 performs the processing of generating a BIO processing-included Bi prediction image.
  • In Step S309, the BIO processing-included Bi prediction block generation unit 207 calculates a plurality of gradients from the L0 prediction image supplied from the L0 prediction block generation unit 202 and the L1 prediction image supplied from the L1 prediction block generation unit 203.
  • The maximum buffer size in the processing in Step S309 is the total size of nine PU′s (that is, nine buffers each having the PU′ size).
  • In Step S310, the BIO processing-included Bi prediction block generation unit 207 acquires the number of 4×4 blocks included in the PU.
  • In Step S311, the BIO processing-included Bi prediction block generation unit 207 sets 0 as the 4×4 block number.
  • In Step S312 of FIG. 16, the BIO processing-included Bi prediction block generation unit 207 determines whether or not the 4×4 block number is smaller than the number of 4×4 blocks.
  • In a case where it is determined in Step S312 that the 4×4 block number is smaller than the number of 4×4 blocks, the processing proceeds to Step S313.
  • In Step S313, the BIO processing-included Bi prediction block generation unit 207 acquires the position in the PU and SAD_4×4 from the 4×4 block number.
  • In Step S315, the BIO processing-included Bi prediction block generation unit 207 determines whether or not the BIO_4×4_ON flag is 1.
  • In a case where it is determined in Step S315 that the BIO_4×4_ON flag is not 1, the processing proceeds to Step S316.
  • In Step S316, the BIO processing-included Bi prediction block generation unit 207 generates a Bi prediction value from the L0 prediction image and the L1 prediction image in the region of the 4×4 block number.
  • Meanwhile, in a case where it is determined in Step S315 that the BIO_4×4_ON flag is 1, the processing proceeds to Step S317. In Step S317, the BIO processing-included Bi prediction block generation unit 207 calculates a velocity from the plurality of gradients in the region of the 4×4 block number.
  • In Step S318, the BIO processing-included Bi prediction block generation unit 207 generates a BIO prediction value from the L0 prediction image, the L1 prediction image, the gradients, and the velocity in the region of the 4×4 block number.
  • After Step S316 or S318, the processing proceeds to Step S319.
  • In Step S319, the BIO processing-included Bi prediction block generation unit 207 stores the prediction value at the position of the 4×4 block number in the buffer.
  • The maximum buffer size in the processing in Step S319 is the PU size.
  • In Step S320, the BIO processing-included Bi prediction block generation unit 207 increments the 4×4 block number. After that, the processing returns to Step S312, and the later processing is repeated.
  • After Step S308, or in a case where it is determined in Step S312 that the 4×4 block number is not smaller than the number of 4×4 blocks, the BIO-included Bi prediction ends.
  • As described above, in the related-art processing, the SAD of the L0 prediction block and the L1 prediction block is calculated for the entire PU in Step S305, the SAD is compared to the threshold to determine whether or not to apply BIO processing in Step S306, and the processing branches in Step S307.
  • The PU′ size, which is slightly larger than the PU size, is required for the buffers used in Steps S302, S303, and S309 in order to perform the gradient calculation in Step S309 and the velocity calculation in Step S317.
  • The maximum PU′ size is a size of 130×130 obtained by adding 2 to each of the PU horizontal size and the PU vertical size.
  • In Step S308, a buffer having the PU size is required.
  • In a case where the inter prediction unit 51 that requires these buffers is implemented by HW (hardware), buffers of the maximum PU size need to be prepared in advance.
  • In the present technology, therefore, a unit of processing used in calculation of a cost for determining whether or not to perform bidirectional prediction such as BIO is partitioned into partitioned processing units, each of which corresponds to the VPDU size or is equal to or smaller than the VPDU size, and the determination is made by using the cost calculated on the basis of the partitioned processing units.
  • the size corresponding to the VPDU size means the VPDU′ size slightly larger than the VPDU size.
  • FIG. 17 and FIG. 18 are flowcharts illustrating, as an operation example according to the first embodiment of the present technology, BIO-included Bi prediction that is performed by the inter prediction unit 51 .
  • the case of the encoding device 1 is illustrated in FIG. 17 and FIG. 18 , and since similar processing is performed in the case of the decoding device 101 , the description thereof is omitted.
  • In Step S401, the inter prediction control unit 201 acquires inter prediction parameters supplied from the motion prediction/compensation unit 47.
  • In Step S402, the inter prediction control unit 201 acquires the number of vPUs included in the PU. That is, in a case where PUs are larger than VPDUs, the PU is virtually partitioned into a plurality of vPUs. In a case where the PU is 128×128, the number of vPUs is set to 4. In a case where the PU is 128×64 or 64×128, the number of vPUs is set to 2. In a case where the PU is 64×64 or less, the number of vPUs is set to 1. In the case where the number of vPUs is 1, the PU is not virtually partitioned, and processing substantially similar to that of FIG. 15 and FIG. 16 is performed.
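  • The number of vPUs acquired in Step S402 can be sketched as follows in C++, assuming a 64×64 VPDU as in the description; the function name is illustrative.

      // Number of vPUs a PU is virtually partitioned into (Step S402),
      // assuming a 64x64 VPDU; widths and heights are in luma samples.
      int num_vpus(int pu_w, int pu_h) {
        if (pu_w == 128 && pu_h == 128) return 4;
        if ((pu_w == 128 && pu_h == 64) || (pu_w == 64 && pu_h == 128)) return 2;
        return 1;  // a PU of 64x64 or less is not virtually partitioned
      }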
  • In Step S403, the inter prediction control unit 201 sets 0 as a vPU number that is processed first.
  • In Step S404, the inter prediction control unit 201 determines whether or not the vPU number is smaller than the number of vPUs.
  • In a case where it is determined in Step S404 that the vPU number is smaller than the number of vPUs, the processing proceeds to Step S405.
  • In Step S405, the inter prediction control unit 201 acquires, from the PU size and the vPU number, the position and size of the vPU indicating a region in the PU to be processed.
  • FIG. 19 is a diagram illustrating the correspondences between PU size, vPU number, and processing position and size.
  • In a case where the PU is 128×128, the processing positions are the upper-left, upper-right, lower-left, and lower-right regions of the PU, one for each vPU number, and the size is 64×64 in each case.
  • In a case where the PU is 128×64, the processing positions are the left and right regions of the PU, one for each vPU number, and the size is 64×64.
  • In a case where the PU is 64×128, the processing positions are the top and bottom regions of the PU, one for each vPU number, and the size is 64×64.
  • In a case where the PU is 64×64 or less, the processing position is the PU itself and the size is the PU size.
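  • A C++ sketch of the lookup in Step S405 and FIG. 19 follows. Coordinates are relative to the upper-left corner of the PU, and the assignment of vPU numbers to positions in raster order (upper left, upper right, lower left, lower right; left before right; top before bottom) is an assumption, since the description above only lists the positions themselves.

      // Position and size of the vPU to be processed, derived from the PU size
      // and the vPU number (Step S405, FIG. 19). The struct layout is assumed.
      struct VpuRegion { int x, y, w, h; };

      VpuRegion vpu_region(int pu_w, int pu_h, int vpu_number) {
        if (pu_w == 128 && pu_h == 128) {   // four 64x64 vPUs in raster order
          return { (vpu_number % 2) * 64, (vpu_number / 2) * 64, 64, 64 };
        }
        if (pu_w == 128 && pu_h == 64) {    // left and right 64x64 vPUs
          return { vpu_number * 64, 0, 64, 64 };
        }
        if (pu_w == 64 && pu_h == 128) {    // top and bottom 64x64 vPUs
          return { 0, vpu_number * 64, 64, 64 };
        }
        return { 0, 0, pu_w, pu_h };        // the vPU is the PU itself
      }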
  • the position and size of the vPU acquired in Step S 405 are supplied to the L 0 prediction block generation unit 202 and the L 1 prediction block generation unit 203 .
  • In Step S406, the L0 prediction block generation unit 202 generates an L0 prediction block in the region of the vPU number.
  • In Step S407, the L1 prediction block generation unit 203 generates an L1 prediction block in the region of the vPU number.
  • The maximum buffer size in the processing in Steps S406 and S407 is, for example, the VPDU′ size including the slightly larger region that is required for the gradient calculation in Step S413 and the velocity calculation in Step S421.
  • the VPDU′ size represents the above-mentioned size corresponding to the VPDU size, which is the size slightly larger than the VPDU size.
  • the VPDU′ size is 66 ⁇ 66 obtained by adding 2 to the horizontal and vertical sizes, for example.
  • In Step S408, the BIO cost calculation unit 204 calculates, in units of 4×4 in the vPU, the SAD of the L0 prediction image supplied from the L0 prediction block generation unit 202 and the L1 prediction image supplied from the L1 prediction block generation unit 203.
  • The SADs calculated in units of 4×4 are accumulated so that SAD_4×4 block, which is the sum of the SADs, is acquired.
  • This SAD_4×4 block is required to be stored.
  • The buffer size for storing SAD_4×4 block can be reduced to 1/4 of the size in Step S304 of FIG. 15.
  • In Step S409, the BIO cost calculation unit 204 calculates, in units of vPUs, the SAD of the L0 prediction image supplied from the L0 prediction block generation unit 202 and the L1 prediction image supplied from the L1 prediction block generation unit 203.
  • The SADs calculated in units of vPUs are accumulated so that SAD_vPU, which is the sum of the SADs, is acquired.
  • The acquired SAD_vPU is supplied from the BIO cost calculation unit 204 to the BIO application determination unit 205.
  • In Step S410, the BIO application determination unit 205 determines the BIO_vPU_ON flag by comparing SAD_vPU supplied from the BIO cost calculation unit 204 with BIO threshold_vPU supplied from the inter prediction control unit 201.
  • BIO threshold_vPU is a value obtained by scaling BIO threshold_PU to a value based on the vPU size obtained in Step S405.
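  • The exact scaling rule for BIO threshold_vPU is not reproduced above; one plausible rule, shown in the C++ sketch below as an assumption, scales the PU-level threshold in proportion to the number of samples in the vPU so that the per-sample criterion stays the same.

      // Scale the PU-level BIO threshold to the vPU level (Step S410).
      // Proportional-to-area scaling is an assumption for illustration only.
      long long scale_bio_threshold(long long threshold_pu,
                                    int pu_w, int pu_h, int vpu_w, int vpu_h) {
        return threshold_pu * (static_cast<long long>(vpu_w) * vpu_h)
                            / (static_cast<long long>(pu_w) * pu_h);
      }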
  • the determined BIO_vPU_ON flag is supplied from the BIO application determination unit 205 to the Bi prediction block generation unit 206 , the BIO processing-included Bi prediction block generation unit 207 , and the Bi prediction block selection unit 208 .
  • In Step S411, the Bi prediction block generation unit 206 and the BIO processing-included Bi prediction block generation unit 207 determine whether or not the BIO_vPU_ON flag is 1.
  • In a case where it is determined in Step S411 that the BIO_vPU_ON flag is not 1, the processing proceeds to Step S412 since BIO is not effective for the entire vPU.
  • In Step S412, the Bi prediction block generation unit 206 generates a Bi prediction block vPU from the L0 prediction image supplied from the L0 prediction block generation unit 202 and the L1 prediction image supplied from the L1 prediction block generation unit 203.
  • The generated Bi prediction block vPU is stored in the buffer and supplied from the Bi prediction block generation unit 206 to the Bi prediction block selection unit 208.
  • Meanwhile, in a case where it is determined in Step S411 that the BIO_vPU_ON flag is 1, the processing proceeds to Step S413.
  • In Step S413, the BIO processing-included Bi prediction block generation unit 207 calculates a plurality of gradients from the L0 prediction block supplied from the L0 prediction block generation unit 202 and the L1 prediction block supplied from the L1 prediction block generation unit 203.
  • In Step S413, nine types of intermediate parameters are calculated from the L0 prediction block and the L1 prediction block.
  • Specifically, the amount of change between the L0 prediction block and the L1 prediction block and the amounts of horizontal and vertical change in pixel value in each prediction block are calculated. These are collectively referred to as "gradients."
  • The gradients need to be calculated only for as many pixels as are in the prediction blocks, and hence it is enough for the buffer required here to have the total size of nine VPDU′s at most.
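  • The following C++ sketch illustrates the per-pixel quantities referred to as gradients: the difference between the L0 and L1 prediction blocks and the horizontal and vertical change of each prediction block, here approximated with central differences. The exact BIO formulas and the nine intermediate parameters are not reproduced above, so this is only an assumption-level illustration of why a border of extra samples (the VPDU′ size) is needed around each block.

      #include <cstdint>

      // Gradient-like quantities at one pixel of the L0/L1 prediction blocks.
      // The central-difference form is an assumption for illustration.
      inline void gradients_at(const int16_t* l0, const int16_t* l1, int stride,
                               int x, int y,
                               int& diff, int& gx0, int& gy0, int& gx1, int& gy1) {
        const int idx = y * stride + x;
        diff = l0[idx] - l1[idx];                          // change between L0 and L1
        gx0 = (l0[idx + 1] - l0[idx - 1]) >> 1;            // needs one extra column
        gy0 = (l0[idx + stride] - l0[idx - stride]) >> 1;  // needs one extra row
        gx1 = (l1[idx + 1] - l1[idx - 1]) >> 1;
        gy1 = (l1[idx + stride] - l1[idx - stride]) >> 1;
      }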
  • In Step S414 of FIG. 18, the BIO processing-included Bi prediction block generation unit 207 acquires the number of 4×4 blocks included in the vPU.
  • In a case where the vPU is 64×64, the number of 4×4 blocks is 256.
  • The highest prediction accuracy is achieved when velocities are obtained in units of pixels to calculate prediction values. This, however, requires large-scale calculation.
  • BIO velocities are therefore calculated in units of 4×4 blocks in view of the trade-off between performance and cost.
  • In Step S415, the BIO processing-included Bi prediction block generation unit 207 sets 0 as a 4×4 block number that is processed first.
  • In Step S416, the BIO processing-included Bi prediction block generation unit 207 determines whether or not the 4×4 block number is smaller than the number of 4×4 blocks.
  • In a case where it is determined in Step S416 that the 4×4 block number is smaller than the number of 4×4 blocks, the processing proceeds to Step S417.
  • In Step S417, the BIO processing-included Bi prediction block generation unit 207 acquires the position in the vPU and SAD_4×4 from the 4×4 block number.
  • The 4×4 blocks are processed in the raster scan order.
  • In Step S419, the BIO processing-included Bi prediction block generation unit 207 determines whether or not the BIO_4×4_ON flag is 1.
  • In a case where it is determined in Step S419 that the BIO_4×4_ON flag is not 1, the processing proceeds to Step S420 since BIO is not expected to be effective for the 4×4 block.
  • In Step S420, the BIO processing-included Bi prediction block generation unit 207 calculates the average of the L0 prediction image and the L1 prediction image in the region of the 4×4 block number, to thereby generate a Bi prediction value.
  • In a case where it is determined in Step S419 that the BIO_4×4_ON flag is 1, the processing proceeds to Step S421.
  • In Step S421, the BIO processing-included Bi prediction block generation unit 207 calculates a velocity from the plurality of gradients in the region of the 4×4 block number.
  • In Step S422, the BIO processing-included Bi prediction block generation unit 207 generates a BIO prediction value from the L0 prediction image, the L1 prediction image, the gradients, and the velocity in the region of the 4×4 block number.
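  • A per-sample C++ sketch of Steps S420 and S422 follows. The plain Bi prediction value is the average of the L0 and L1 samples as stated above; the BIO prediction value adds a correction derived from the gradients and the velocity. The particular correction term vx*(gx0-gx1)+vy*(gy0-gy1) is a commonly used optical-flow form and is an assumption here, since the exact expression is not reproduced above.

      #include <cstdint>

      // Plain Bi prediction value: average of L0 and L1 (Step S420).
      inline int16_t bi_pred(int16_t l0, int16_t l1) {
        return static_cast<int16_t>((l0 + l1 + 1) >> 1);
      }

      // BIO prediction value: average plus a gradient/velocity correction
      // (Step S422); the correction form is an assumption.
      inline int16_t bio_pred(int16_t l0, int16_t l1,
                              int vx, int vy, int gx0, int gx1, int gy0, int gy1) {
        const int correction = vx * (gx0 - gx1) + vy * (gy0 - gy1);
        return static_cast<int16_t>((l0 + l1 + correction + 1) >> 1);
      }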
  • After Step S420 or S422, the processing proceeds to Step S423.
  • In Step S423, the BIO processing-included Bi prediction block generation unit 207 stores the prediction value generated in Step S420 or Step S422 at the position of the 4×4 block number in the buffer.
  • The maximum buffer size in the processing in Step S423 is the VPDU size.
  • This buffer may be the same buffer as that used in the processing in Step S412.
  • In Step S424, the BIO processing-included Bi prediction block generation unit 207 increments the 4×4 block number. After that, the processing returns to Step S416, and the later processing is repeated.
  • After Step S412, or in a case where it is determined in Step S416 that the 4×4 block number is equal to or larger than the number of 4×4 blocks, the processing proceeds to Step S425.
  • In Step S425, the inter prediction control unit 201 increments the vPU number.
  • The processing then returns to Step S404, and the later processing is repeated.
  • In a case where it is determined in Step S404 that the vPU number is equal to or larger than the number of vPUs, the BIO processing-included Bi prediction ends.
  • FIG. 20 and FIG. 21 are diagrams illustrating comparisons between related-art operation and operation according to the first embodiment of the present technology.
  • In a case where the CU (PU) is 128×128, the CU (PU) is partitioned into four vPUs that are SAD calculation regions for BIO_vPU_ON determination.
  • In a case where the CU (PU) is 128×64, the CU (PU) is partitioned into two left and right vPUs that are SAD calculation regions for BIO_vPU_ON determination.
  • In a case where the CU (PU) is 64×128, the CU (PU) is partitioned into two top and bottom vPUs that are SAD calculation regions for BIO_vPU_ON determination.
  • In a case where the CU (PU) is 64×64 or less, the CU (PU) is not partitioned and includes a single vPU that is a SAD calculation region for BIO_vPU_ON determination.
  • In the related-art operation, the SAD for the entire PU is required, and hence the large L0 prediction block and the large L1 prediction block need to be prepared and stored in advance.
  • In the operation according to the present technology, even for a PU larger than the VPDU, whether to apply BIO is determined for each vPU obtained by virtually partitioning the PU, and the buffers for the L0 prediction block and the L1 prediction block prepared and stored in advance can therefore be reduced in size.
  • Specifically, the buffers that are used in Steps S412, S413, and S423 of FIG. 17 and FIG. 18 can be reduced to 1/4 of the buffers that are used in Steps S308, S309, and S319 of FIG. 15 and FIG. 16.
  • Also in FRUC (Frame Rate Up-Conversion) and DMVR (Decoder-side Motion Vector Refinement), in a case where PUs are larger than VPDUs, processing similar to that in the present technology is required.
  • That is, a case where PUs are larger than VPDUs can be handled as follows: the PU is virtually partitioned into a plurality of vPUs, and MV correction is performed for each vPU.
  • The SAD calculation and BIO application determination for an entire PU in the related-art operation and the SAD calculation and BIO application determination for each vPU in the present technology, which are described above, are mainly intended to achieve early termination, and hence a further reduction can be achieved.
  • In the above, the example in which the PU is virtually partitioned into a plurality of vPUs and a SAD is calculated to determine whether to apply BIO for each vPU has been described.
  • However, the vPUs are originally included in the same PU, and hence it is conceivable that the tendency of one portion is similar to the tendencies of the other portions.
  • FIG. 22 and FIG. 23 are diagrams illustrating, as a first modified example, an example in which in a case where PUs are larger than VPDUs, a BIO determination result for a vPU number of 0 is also used for other vPUs on the premise of the tendency described above.
  • In a case where the CU (PU) is 64×64 or less, the CU (PU) is not partitioned and includes a single vPU as a SAD calculation region for BIO_vPU_ON determination.
  • FIG. 24 and FIG. 25 are flowcharts illustrating BIO-included Bi prediction in the case of FIG. 23 .
  • In Steps S501 to S508 and Steps S510 to S526 of FIG. 24 and FIG. 25, processing basically similar to that in Steps S401 to S425 of FIG. 17 and FIG. 18 is performed, and hence the redundant description thereof is appropriately omitted.
  • In Step S508 of FIG. 25, the BIO cost calculation unit 204 calculates, in units of 4×4 in the vPU, the SAD of the L0 prediction image supplied from the L0 prediction block generation unit 202 and the L1 prediction image supplied from the L1 prediction block generation unit 203.
  • The SADs calculated in units of 4×4 are accumulated so that SAD_4×4 block, which is the sum of the SADs, is acquired.
  • In Step S509, the BIO cost calculation unit 204 determines whether or not the vPU number is 0.
  • In a case where it is determined in Step S509 that the vPU number is 0, the processing proceeds to Step S510.
  • In Step S510, the BIO cost calculation unit 204 calculates, in units of vPUs, the SAD of the L0 prediction image supplied from the L0 prediction block generation unit 202 and the L1 prediction image supplied from the L1 prediction block generation unit 203.
  • The SADs calculated in units of vPUs are accumulated so that SAD_vPU, which is the sum of the SADs, is acquired.
  • The acquired SAD_vPU is supplied from the BIO cost calculation unit 204 to the BIO application determination unit 205.
  • In Step S511, the BIO application determination unit 205 determines the BIO_vPU_ON flag by comparing SAD_vPU supplied from the BIO cost calculation unit 204 with BIO threshold_vPU supplied from the inter prediction control unit 201.
  • After that, the processing proceeds to Step S512.
  • Meanwhile, in a case where it is determined in Step S509 that the vPU number is not 0, the processing skips Steps S510 and S511 and proceeds to Step S512.
  • In this manner, the SAD accumulation and the BIO determination are performed only for the first vPU, with the result that the processing related to early termination and the time taken for the processing can be reduced.
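  • The first modified example can be sketched as follows in C++: the SAD accumulation and threshold comparison are paid for only by vPU number 0, and the resulting flag is reused for the remaining vPUs of the PU. The function name and the assumption that BIO is enabled when the SAD is at or above the threshold (the usual early-termination convention) are illustrative only.

      #include <vector>

      // Decide BIO application once, from the SAD of vPU number 0 only, and
      // reuse the decision for every vPU of the same PU (first modified example).
      std::vector<bool> bio_flags_first_vpu_only(int num_vpus,
                                                 long long sad_of_vpu0,
                                                 long long bio_threshold_vpu) {
        const bool bio_on = sad_of_vpu0 >= bio_threshold_vpu;  // decided once
        return std::vector<bool>(num_vpus, bio_on);            // reused for all vPUs
      }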
  • FIG. 26 and FIG. 27 are diagrams illustrating, as a second modified example, an example in which whether to apply BIO is determined with a partial SAD value in each vPU.
  • In a case where the CU (PU) is larger than the VPDU, a SAD is calculated for the upper-left partial region (32×32) of each vPU obtained by partitioning the CU (PU), as the SAD calculation regions for BIO_vPU_ON determination.
  • In a case where the CU (PU) is 64×64 or less, a SAD is calculated for the upper-left partial region (32×32) of the CU (PU), which is not partitioned and includes a single vPU, as the SAD calculation region for BIO_vPU_ON determination.
  • FIG. 26 and FIG. 27 illustrate the examples in which whether to apply BIO is determined in the upper-left 1/4 region of each vPU.
  • The upper-left 1/4 regions are used in consideration of compatibility with a case where the pipeline is structured with HW. This is because BIO application determination becomes possible as soon as the L0 prediction blocks and the L1 prediction blocks in the upper-left 1/4 regions are prepared.
  • Whether to apply BIO is determined only for the partial region of each vPU, so that the buffers that are prepared on the pipeline stages can be reduced to be smaller than the VPDU size.
  • The partial region may have any size, and the cost (SAD) calculation can be performed for a partial region having a size of 0×0, for example. That is, 0 means that the cost is not calculated and early termination is skipped.
  • the region for calculating a SAD necessary for determining BIO_vPU_ON in each vPU can be dynamically changed.
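  • A C++ sketch of the partial SAD of the second modified example follows: the cost is accumulated only over a configurable partial region of the vPU, for example the upper-left 32×32 quarter, and a 0×0 region means that no cost is calculated and early termination is skipped. The sample layout and the function name are assumptions for illustration.

      #include <cstdint>
      #include <cstdlib>

      // SAD between the L0 and L1 prediction images over a partial region of the
      // vPU (second modified example). A region of 0x0 yields 0, i.e. no cost.
      int64_t partial_sad(const int16_t* l0, const int16_t* l1, int stride,
                          int region_x, int region_y, int region_w, int region_h) {
        int64_t sad = 0;
        for (int y = region_y; y < region_y + region_h; ++y)
          for (int x = region_x; x < region_x + region_w; ++x)
            sad += std::abs(l0[y * stride + x] - l1[y * stride + x]);
        return sad;
      }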
  • FIG. 28 and FIG. 29 are flowcharts illustrating the processing of determining a partial SAD calculation region for BIO_PU_ON determination in each vPU.
  • In this case, in Step S509, it is determined whether or not the vPU number corresponds to the set region, and the processing in Steps S510 and S511 is performed only on the set region.
  • In Step S601, the inter prediction control unit 201 acquires MVL0x and MVL0y of L0 prediction and MVL1x and MVL1y of L1 prediction.
  • In Step S602, the inter prediction control unit 201 selects, from among the four MV components, the one that has the maximum absolute value and substitutes it into MV_MAX.
  • In Step S603, the inter prediction control unit 201 determines whether or not MV_MAX satisfies a predetermined condition (for example, whether or not its absolute value is sufficiently small).
  • In a case where it is determined in Step S603 that the condition is satisfied, the processing proceeds to Step S604.
  • In Step S604, the inter prediction control unit 201 sets the central part of the vPU as a SAD calculation region.
  • In Step S605, the inter prediction control unit 201 determines whether or not the PU size differs from the vPU size.
  • In a case where it is determined in Step S605 that the PU size differs from the vPU size, the processing proceeds to Step S606.
  • In a case where it is determined in Step S605 that the PU size does not differ from the vPU size, the processing proceeds to Step S607.
  • Meanwhile, in a case where it is determined in Step S603 that the condition is not satisfied, the inter prediction control unit 201 determines whether MV_MAX is a horizontal or a vertical MV component; the processing proceeds to Step S609 for a horizontal component and to Step S615 for a vertical component.
  • In Step S609, the inter prediction control unit 201 determines whether or not MV_MAX is smaller than 0.
  • In a case where it is determined in Step S609 that MV_MAX is smaller than 0, the processing proceeds to Step S610.
  • In Step S610, the inter prediction control unit 201 sets the left part of the vPU as the SAD calculation region.
  • In a case where it is determined in Step S609 that MV_MAX is equal to or larger than 0, the processing proceeds to Step S611.
  • In Step S611, the inter prediction control unit 201 sets the right part of the vPU as the SAD calculation region.
  • After Step S610 or S611, the processing proceeds to Step S612.
  • In Step S612, the inter prediction control unit 201 determines whether or not the PU size differs from the vPU size.
  • In a case where it is determined in Step S612 that the PU size differs from the vPU size, the processing proceeds to Step S613.
  • In a case where it is determined in Step S612 that the PU size does not differ from the vPU size, the processing proceeds to Step S614.
  • In Step S615, the inter prediction control unit 201 determines whether or not MV_MAX is smaller than 0.
  • In a case where it is determined in Step S615 that MV_MAX is smaller than 0, the processing proceeds to Step S616.
  • In Step S616, the inter prediction control unit 201 sets the upper part of the vPU as the SAD calculation region.
  • In a case where it is determined in Step S615 that MV_MAX is equal to or larger than 0, the processing proceeds to Step S617.
  • In Step S617, the inter prediction control unit 201 sets the lower part of the vPU as the SAD calculation region.
  • After Step S616 or S617, the processing proceeds to Step S618.
  • In Step S618, the inter prediction control unit 201 determines whether or not the PU size differs from the vPU size.
  • In a case where it is determined in Step S618 that the PU size differs from the vPU size, the processing proceeds to Step S619.
  • In a case where it is determined in Step S618 that the PU size does not differ from the vPU size, the processing proceeds to Step S620.
  • After Step S606, Step S607, Step S613, Step S614, Step S619, or Step S620, the processing proceeds to Step S621 of FIG. 29.
  • In Step S621, the inter prediction control unit 201 compares the horizontal size of the SAD calculation region with 4.
  • In a case where the condition in Step S621 is satisfied, the processing proceeds to Step S622.
  • In a case where the condition in Step S621 is not satisfied, the processing skips Step S622 and proceeds to Step S623.
  • In Step S623, the inter prediction control unit 201 compares the vertical size of the SAD calculation region with 4.
  • In a case where the condition in Step S623 is satisfied, the processing proceeds to Step S624.
  • In a case where the condition in Step S623 is not satisfied, the processing skips Step S624, and the processing of determining a partial SAD calculation region for BIO_vPU_ON determination ends.
  • The processing of calculating SADs for partial regions to determine whether to apply BIO as described above can also be considered for FRUC and DMVR.
  • In FRUC and DMVR, however, the calculation of SADs or similar costs and the determination thereafter, which are used for early termination in BIO, are directly reflected in the inter prediction accuracy.
  • The price paid for the omission of cost calculation is therefore high, and it can be said that the processing of calculating SADs for partial regions to determine whether to apply BIO is processing unique to BIO.
  • In the second embodiment, as in the first embodiment, in a case where PUs are larger than VPDUs, the PU is virtually partitioned into vPUs, and the processing is performed in units of vPUs.
  • In addition, 1 bit of the BIO_PU_ON flag is included in the bitstreams that are transmitted/received between the encoding device 1 and the decoding device 101 so that the operation of the encoding device 1 and the operation of the decoding device 101 can be shared.
  • FIG. 30 and FIG. 31 are flowcharts illustrating, as an operation example according to the second embodiment of the present technology, BIO-included Bi prediction that is performed by the inter prediction unit 51 .
  • In Steps S701 to S708 and Steps S715 to S728 of FIG. 30 and FIG. 31, processing basically similar to that in Steps S401 to S408 and Steps S412 to S425 of FIG. 17 and FIG. 18 is performed, and hence the redundant description thereof is appropriately omitted.
  • In Step S708 of FIG. 30, the BIO cost calculation unit 204 calculates, in units of 4×4 in the vPU, the SAD of the L0 prediction image supplied from the L0 prediction block generation unit 202 and the L1 prediction image supplied from the L1 prediction block generation unit 203.
  • The SADs calculated in units of 4×4 are accumulated so that SAD_4×4 block, which is the sum of the SADs, is acquired.
  • In Step S709, the inter prediction control unit 201 determines whether or not the number of vPUs is 1.
  • In a case where it is determined in Step S709 that the number of vPUs is 1, the processing proceeds to Step S710.
  • In Steps S710 and S711, processing similar to the processing that is performed in units of PUs is performed.
  • In Step S710, the BIO cost calculation unit 204 calculates, in units of vPUs, the SAD of the L0 prediction image supplied from the L0 prediction block generation unit 202 and the L1 prediction image supplied from the L1 prediction block generation unit 203.
  • The SADs calculated in units of vPUs are accumulated so that SAD_PU, which is the sum of the SADs, is acquired.
  • The acquired SAD_PU is supplied from the BIO cost calculation unit 204 to the BIO application determination unit 205.
  • In Step S711, the BIO application determination unit 205 determines the BIO_PU_ON flag by comparing SAD_PU supplied from the BIO cost calculation unit 204 with BIO threshold_PU supplied from the inter prediction control unit 201. After that, the processing proceeds to Step S714.
  • In a case where it is determined in Step S709 that the number of vPUs is not 1, the processing proceeds to Step S712.
  • In Step S712, the inter prediction control unit 201 determines whether or not the vPU number is 0.
  • In a case where it is determined in Step S712 that the vPU number is 0, the processing proceeds to Step S713.
  • In Step S713, the inter prediction control unit 201 sets the BIO_PU_ON flag.
  • In the case of the encoding device 1, BIO_PU_ON determined from a motion estimation (ME) result, for example, is set.
  • In the case of the decoding device 101, BIO_PU_ON acquired from the bitstream is set.
  • Meanwhile, in a case where it is determined in Step S712 that the vPU number is not 0, the processing skips Step S713 and proceeds to Step S714 of FIG. 31.
  • In Step S714, it is determined whether or not the BIO_PU_ON flag is 1.
  • In a case where it is determined in Step S714 that the BIO_PU_ON flag is not 1, the processing proceeds to Step S715 since BIO is not effective for the entire PU.
  • In Step S715, the Bi prediction block generation unit 206 generates a Bi prediction block vPU from the L0 prediction image supplied from the L0 prediction block generation unit 202 and the L1 prediction image supplied from the L1 prediction block generation unit 203.
  • The generated Bi prediction block vPU is stored in the buffer and supplied from the Bi prediction block generation unit 206 to the Bi prediction block selection unit 208.
  • Meanwhile, in a case where it is determined in Step S714 that the BIO_PU_ON flag is 1, the processing proceeds to Step S716.
  • In Step S716, the BIO processing-included Bi prediction block generation unit 207 calculates a plurality of gradients from the L0 prediction block supplied from the L0 prediction block generation unit 202 and the L1 prediction block supplied from the L1 prediction block generation unit 203.
  • With this, the operation of the encoding device 1 and the operation of the decoding device 101 can be shared.
  • The BIO_PU_ON flag is not included in all the layers but is included only in a case where PUs are larger than VPDUs, so that the overhead of this 1 bit is relatively small.
  • SAD values are calculated in units of PUs and whether to apply BIO is determined as in the first embodiment.
  • the encoding device 1 may freely set 0 or 1 to the BIO_PU_ON flag.
  • For example, a determination method may be employed in which motion compensation is performed with the BIO_PU_ON flag set to each of 0 and 1, and the value that provides the more favorable result is selected. Further, a determination method may be employed in which the BIO_PU_ON flag is set to 0 when the PU size is 128×128 and is otherwise set to 1.
  • In the decoding device 101, the BIO_PU_ON flag is decoded on the PU layer of a CU in the Bi prediction mode whose PU is larger than the VPDU, so that, when the vPU number is 0, the BIO_PU_ON flag is acquired in Step S713 and the processing proceeds.
  • When the vPU number is not 0, the processing skips Step S713 and proceeds from Step S712 to Step S714.
  • A method similar to the second embodiment described above is applicable to FRUC and DMVR, but the application of the second embodiment to FRUC or DMVR is mostly pointless. This is because including data for MV correction in bitstreams substantially means that difference MVs (MVDs) are encoded.
  • In the third embodiment, the virtual partition size is different from that of the first embodiment.
  • Specifically, the PU is virtually partitioned into sPUs, and the processing is performed in units of sPUs.
  • A unit of the processing of calculating SADs to determine whether to apply BIO may be any unit that does not cross over VPDU boundaries and is equal to or smaller than the VPDU size.
  • Accordingly, a PU is virtually partitioned into a plurality of sPUs with separately given information, and whether to apply BIO is determined for each sPU.
  • In the third embodiment, BIO_MAX_SAD_BLOCK_SIZE is added to and included in the bitstreams to be shared by the encoding device 1 and the decoding device 101.
  • FIG. 32 is a diagram illustrating the correspondence between BIO_MAX_SAD_BLOCK_SIZE and sPU.
  • In a case where BIO_MAX_SAD_BLOCK_SIZE is 1, the sPU size is 8×8. In a case where BIO_MAX_SAD_BLOCK_SIZE is 2, the sPU size is 16×16. In a case where BIO_MAX_SAD_BLOCK_SIZE is 3, the sPU size is 32×32. In a case where BIO_MAX_SAD_BLOCK_SIZE is 4, the sPU size is 64×64.
  • BIO_MAX_SAD_BLOCK_SIZE may be set to any value based on the performance of the encoding device 1, or may be determined in advance as a profile/level constraint serving as a standard. For example, a level constraint may set BIO_MAX_SAD_BLOCK_SIZE depending on the picture sizes to be handled, that is, to 0 for SD or less, 1 for HD, 2 for 4K, and 3 for 8K.
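  • The correspondence of FIG. 32 can be written as the C++ sketch below: values 1 to 4 give sPU sides of 8, 16, 32, and 64, that is, 4 shifted left by the value. Extending the same pattern to the value 0 (mentioned above for SD pictures) gives 4×4, which is an assumption since that row is not reproduced in the text.

      // Side length of the sPU corresponding to BIO_MAX_SAD_BLOCK_SIZE (FIG. 32).
      // 0->4 is an extrapolation; 1->8, 2->16, 3->32, 4->64 follow the description.
      int spu_side_length(int bio_max_sad_block_size) {
        return 4 << bio_max_sad_block_size;
      }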
  • FIG. 33 and FIG. 34 are flowcharts illustrating, as an operation example according to the third embodiment of the present technology, BIO-included Bi prediction that is performed by the inter prediction unit 51 .
  • In Steps S801 to S825 of FIG. 33 and FIG. 34, processing basically similar to that in Steps S401 to S425 of FIG. 17 and FIG. 18 is performed, except that the vPU is replaced by the sPU, which differs from the vPU in size; hence the redundant description thereof is appropriately omitted.
  • FIG. 35 and FIG. 36 are diagrams illustrating exemplary regions for calculating SADs in each PU in a case where BIO_MAX_SAD_BLOCK_SIZE is 2.
  • the PU is partitioned into 16 sPUs that do not cross over the VPDU boundaries.
  • the PU is partitioned into eight sPUs that do not cross over the VPDU boundaries.
  • the PU is partitioned into eight sPUs that do not cross over the VPDU boundaries.
  • the PU is partitioned into four sPUs that do not cross over the VPDU boundaries.
  • a PU is virtually partitioned into a plurality of sPUs with separately given information, and whether to apply BIO is determined for each sPU.
  • In this manner, the buffer size can be further reduced as compared to the buffer size in the case of using vPUs.
  • In the fourth embodiment, in a case where PUs are larger than VPDUs, the use of BIO is constrained. With this, the buffer size can be reduced.
  • FIG. 37 and FIG. 38 are flowcharts illustrating, as an operation example according to the fourth embodiment of the present technology, BIO-included Bi prediction that is performed by the inter prediction unit 51 .
  • In Steps S901 to S907 and S926 of FIG. 37 and FIG. 38, processing basically similar to that in Steps S401 to S407 and S425 of FIG. 17 and FIG. 18 is performed, and hence the redundant description thereof is appropriately omitted. Further, in Steps S909 to S925 of FIG. 37 and FIG. 38, processing basically similar to that in Steps S304 to S320 of FIG. 15 and FIG. 16 is performed, and hence the redundant description thereof is appropriately omitted.
  • In Step S907, the L1 prediction block generation unit 203 generates an L1 prediction block in the region of the vPU number.
  • In Step S908, the inter prediction control unit 201 determines whether or not 1 < the number of vPUs holds, that is, whether or not there are a plurality of vPUs.
  • In a case where it is determined in Step S908 that 1 < the number of vPUs does not hold, the processing proceeds to Step S909.
  • the processing subsequent to Step S 909 is similar to the processing subsequent to Step S 309 of FIG. 15 .
  • Meanwhile, in a case where it is determined in Step S908 that 1 < the number of vPUs holds, the processing proceeds to Step S913 of FIG. 38.
  • Also in a case where it is determined in Step S912 that the BIO_vPU_ON flag is not 1, the processing proceeds to Step S913 since BIO is not effective for the entire vPU.
  • In Step S913, the Bi prediction block generation unit 206 generates a Bi prediction block vPU from the L0 prediction image supplied from the L0 prediction block generation unit 202 and the L1 prediction image supplied from the L1 prediction block generation unit 203.
  • The generated Bi prediction block vPU is stored in the buffer and supplied from the Bi prediction block generation unit 206 to the Bi prediction block selection unit 208.
  • In the fourth embodiment, Step S908 is added as the conditional branch step for determining whether or not there are a plurality of vPUs, that is, whether or not a PU is larger than a VPDU.
  • In a case where there are a plurality of vPUs, the processing proceeds from Step S908 to the normal Bi prediction in Step S913, in which BIO is not used and SAD value calculation for the entire PU is unnecessary; hence, as in FIG. 4, the PU can be partitioned into virtual vPUs to be processed.
  • The processing in Steps S909 to S925, which come after the branch in Step S908, is similar to that in the related-art BIO-included Bi prediction (Steps S304 to S320 of FIG. 15 and FIG. 16). However, the processing proceeds to Step S909 only in a case where the PU is equal to or smaller than the VPDU, and hence SAD calculation for the entire PU only uses a resource equal to or smaller than the VPDU.
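  • The branch added in the fourth embodiment can be summarized by the small C++ sketch below: with more than one vPU, BIO is not used and a plain Bi prediction block is generated per vPU; otherwise the related-art per-PU SAD calculation and BIO path is taken, which then never needs a resource larger than one VPDU. The enum and function names are illustrative.

      enum class BiPredPath { PlainBiPerVpu, RelatedArtBioPath };

      // Fourth embodiment (the branch added in Step S908): constrain BIO when
      // the PU is larger than the VPDU, i.e. when it spans more than one vPU.
      BiPredPath choose_path_fourth_embodiment(int num_vpus) {
        return (num_vpus > 1) ? BiPredPath::PlainBiPerVpu
                              : BiPredPath::RelatedArtBioPath;
      }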
  • In the fifth embodiment, in a case where PUs are larger than VPDUs, BIO is always applied so that the buffer size is reduced.
  • FIG. 39 and FIG. 40 are flowcharts illustrating, as an operation example according to the fifth embodiment of the present technology, BIO-included Bi prediction that is performed by the inter prediction unit 51 .
  • In Steps S1001 to S1008 and S1026 of FIG. 39 and FIG. 40, processing basically similar to that in Steps S401 to S408 and S425 of FIG. 17 and FIG. 18 is performed, and hence the redundant description thereof is appropriately omitted.
  • In Steps S1014 to S1025 of FIG. 39 and FIG. 40, processing basically similar to that in Steps S309 to S320 of FIG. 15 and FIG. 16 is performed, and hence the redundant description thereof is appropriately omitted.
  • In Step S1008, the BIO cost calculation unit 204 calculates, in units of 4×4 in the vPU, the SAD of the L0 prediction image supplied from the L0 prediction block generation unit 202 and the L1 prediction image supplied from the L1 prediction block generation unit 203.
  • The SADs calculated in units of 4×4 are accumulated so that SAD_4×4 block, which is the sum of the SADs, is acquired.
  • In Step S1009, the inter prediction control unit 201 determines whether or not 1 < the number of vPUs holds.
  • In a case where it is determined in Step S1009 that 1 < the number of vPUs does not hold, the processing proceeds to Step S1010.
  • In Step S1010, the BIO cost calculation unit 204 calculates, in units of PUs, the SAD of the L0 prediction image supplied from the L0 prediction block generation unit 202 and the L1 prediction image supplied from the L1 prediction block generation unit 203.
  • The SADs calculated in units of PUs are accumulated so that SAD_PU, which is the sum of the SADs, is acquired.
  • The acquired SAD_PU is supplied from the BIO cost calculation unit 204 to the BIO application determination unit 205.
  • In Step S1011, the BIO application determination unit 205 determines the BIO_PU_ON flag by comparing SAD_PU supplied from the BIO cost calculation unit 204 with BIO threshold_PU supplied from the inter prediction control unit 201.
  • In Step S1012, it is determined whether or not the BIO_PU_ON flag is 1.
  • In a case where it is determined in Step S1012 that the BIO_PU_ON flag is not 1, the processing proceeds to Step S1013 of FIG. 40 since BIO is not effective for the entire vPU.
  • In Step S1013, the Bi prediction block generation unit 206 generates a Bi prediction block vPU from the L0 prediction image supplied from the L0 prediction block generation unit 202 and the L1 prediction image supplied from the L1 prediction block generation unit 203.
  • The generated Bi prediction block vPU is stored in the buffer and supplied from the Bi prediction block generation unit 206 to the Bi prediction block selection unit 208.
  • Meanwhile, in a case where it is determined in Step S1012 that the BIO_PU_ON flag is 1, the processing proceeds to Step S1014 of FIG. 40.
  • Further, in a case where it is determined in Step S1009 that 1 < the number of vPUs holds, the processing also proceeds to Step S1014.
  • In Step S1014 and the later steps, BIO processing similar to that in Steps S309 to S320 of FIG. 15 and FIG. 16 is performed.
  • In the fifth embodiment, Step S1009 is added as the conditional branch for determining whether or not there are a plurality of vPUs, that is, whether or not a PU is larger than a VPDU.
  • In a case where there are a plurality of vPUs, the processing bypasses the SAD calculation and the threshold determination in Steps S1010 to S1012 and proceeds to the BIO application processing in Step S1014 and the later steps, so that SAD calculation for the entire PU is not necessary; hence, as in FIG. 4, the PU can be partitioned into virtual vPUs to be processed.
  • The processing proceeds to Steps S1010 to S1012 only in a case where the PU is equal to or smaller than the VPDU, and hence SAD calculation for the entire PU only uses a resource equal to or smaller than the VPDU.
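  • The corresponding branch of the fifth embodiment is sketched below in C++: with more than one vPU, BIO is always applied and the SAD calculation and threshold comparison are bypassed; otherwise the related-art early-termination check is kept. The assumption that BIO is applied when the SAD is at or above the threshold follows the usual early-termination convention and is illustrative.

      // Fifth embodiment (the branch added in Step S1009): always apply BIO for
      // a PU larger than the VPDU; keep the SAD-based check otherwise.
      bool apply_bio_fifth_embodiment(int num_vpus, long long sad_pu,
                                      long long bio_threshold_pu) {
        if (num_vpus > 1) return true;       // no per-PU SAD needed in this case
        return sad_pu >= bio_threshold_pu;   // related-art early termination
      }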
  • the fifth embodiment is not applicable to FRUC and DMVR because of the following reason. Since SAD calculation in BIO is for the purpose of early termination, the cost calculation can be avoided with another criterion such as the PU size as in the fifth embodiment. Cost calculation in FRUC and DMVR is, however, key processing in MV correction, and is difficult to avoid.
  • As described above, in the present technology, a unit of processing in calculation of a cost that is used for determining whether or not to perform bidirectional prediction such as BIO is partitioned into partitioned processing units, each of which corresponds to the VPDU size (for example, the vPU) or is equal to or smaller than the VPDU size (for example, the sPU), and the determination is made by using the cost calculated on the basis of the partitioned processing units.
  • With this, the buffer size can be reduced.
  • Specifically, VVC can be implemented with BIO in such a manner that the necessary sizes of the various buffers are reduced to 1/4 of the related-art buffer sizes.
  • Further, the HW configuration can be optimized so that BIO can be implemented with buffers some of which are far smaller than 1/4 of the related-art sizes.
  • the series of processing processes described above can be executed by hardware or software.
  • a program configuring the software is installed from a program recording medium onto a computer incorporated in dedicated hardware or onto a general-purpose personal computer.
  • FIG. 41 is a block diagram illustrating a configuration example of the hardware of a computer configured to execute the above-mentioned series of processing processes with the program.
  • a CPU (Central Processing Unit) 301 , a ROM (Read Only Memory) 302 , and a RAM (Random Access Memory) 303 are connected to each other through a bus 304 .
  • An input/output interface 305 is further connected to the bus 304 .
  • the input/output interface 305 is connected to an input unit 306 including a keyboard, a mouse, or the like, and an output unit 307 including a display, a speaker, or the like.
  • the input/output interface 305 is connected to a storage unit 308 including a hard disk, a non-volatile memory, or the like, a communication unit 309 including a network interface or the like, and a drive 310 configured to drive a removable medium 311 .
  • the CPU 301 loads the program stored in the storage unit 308 into the RAM 303 through the input/output interface 305 and the bus 304 and executes the program to perform the series of processing processes described above.
  • the program that is executed by the CPU 301 can be recorded on the removable medium 311 to be installed on the storage unit 308 , for example.
  • the program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting to be installed on the storage unit 308 .
  • processing processes of the program may be performed chronologically in the order described herein or in parallel. Alternatively, the processing processes may be performed at appropriate timings, for example, when the program is called.
  • a system herein means a set of plural components (devices, modules (parts), or the like), and it does not matter whether or not all the components are in the same housing.
  • plural devices that are accommodated in separate housings and connected to each other via a network, and a single device in which plural modules are accommodated in a single housing are both systems.
  • the present technology can be implemented as cloud computing in which a single function is shared and processed by plural devices via a network.
  • in a case where a single step includes plural processing processes, the plural processing processes included in the single step can be executed by a single device or shared and executed by plural devices.
  • the present technology can also take the following configurations.
  • An image processing device including:
  • a control unit configured to partition a unit of processing into partitioned processing units, each of which corresponds to a VPDU size or is equal to or smaller than the VPDU size, the unit of processing being used for calculation of a cost that is used for determining whether or not to perform bidirectional prediction; and
  • a determination unit configured to make the determination by using the cost calculated based on the partitioned processing units.
  • the determination unit makes the determination by using the cost calculated for each of the partitioned processing units.
  • the determination unit makes, by using the cost calculated for a first one of the partitioned processing units, the determination on the first one of the partitioned processing units, and makes the determination on another of the partitioned processing units by using a result of the determination on the first one of the partitioned processing units (a sketch of this variant is also given after this list).
  • the determination unit makes the determination for each of the partitioned processing units by using the cost calculated for each of the partial regions in the partitioned processing units.
  • the determination unit makes the determination by each of the partitioned processing units based on a flag set to each of the partitioned processing units, the flag indicating whether or not to perform the bidirectional prediction.
  • the bidirectional prediction includes the bidirectional prediction employing BIO.
  • the bidirectional prediction includes the bidirectional prediction employing FRUC or DMVR.
  • An image processing method for causing an image processing device to:
  • 205 BIO application determination unit
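
A minimal sketch of the early-termination flow of Steps S1009 to S1014, under stated assumptions, is given below. All names (Block, computeSad, processPu, generateBiPredictionBlock, applyBioPerVpdu, kVpduSize) are hypothetical and do not appear in this document; the VPDU is assumed to be 64×64 luma samples, and the BIO_PU_ON flag is assumed to be set when SAD_PU is at least the supplied threshold.

```cpp
#include <cstdint>
#include <cstdlib>

// Hypothetical names, for illustration only.
constexpr int kVpduSize = 64;  // assumed VPDU size (64x64 luma samples)

struct Block {
  int width;
  int height;
  const int16_t* samples;
  int stride;
};

// Placeholder for Step S1013: ordinary Bi prediction without BIO.
static void generateBiPredictionBlock(const Block& /*l0*/, const Block& /*l1*/) { /* stub */ }

// Placeholder for Step S1014 and later: BIO applied vPU by vPU, so every
// intermediate buffer is bounded by the VPDU size.
static void applyBioPerVpdu(const Block& /*l0*/, const Block& /*l1*/) { /* stub */ }

// Sum of absolute differences between the L0 and L1 prediction blocks.
static int64_t computeSad(const Block& l0, const Block& l1) {
  int64_t sad = 0;
  for (int y = 0; y < l0.height; ++y)
    for (int x = 0; x < l0.width; ++x)
      sad += std::abs(l0.samples[y * l0.stride + x] - l1.samples[y * l1.stride + x]);
  return sad;
}

// Sketch of the early-termination flow (Steps S1009 to S1014) for one PU.
void processPu(const Block& l0Pu, const Block& l1Pu, int64_t bioThresholdPu) {
  // Step S1009: does the PU span more than one vPU (is it larger than the VPDU)?
  const bool puLargerThanVpdu =
      l0Pu.width > kVpduSize || l0Pu.height > kVpduSize;

  if (!puLargerThanVpdu) {
    // Steps S1010/S1011: SAD over the whole PU, compared with the per-PU threshold
    // (the flag polarity is an assumption of this sketch).
    const int64_t sadPu = computeSad(l0Pu, l1Pu);
    const bool bioPuOn = (sadPu >= bioThresholdPu);  // BIO_PU_ON

    // Steps S1012/S1013: if the flag is 0, generate a plain Bi prediction block.
    if (!bioPuOn) {
      generateBiPredictionBlock(l0Pu, l1Pu);
      return;
    }
  }
  // Step S1014 and later: BIO processing, performed per vPU.
  applyBioPerVpdu(l0Pu, l1Pu);
}
```

The point of the sketch is that the SAD accumulation over the whole PU is reached only when the PU fits within one VPDU, so the working buffers never need to exceed the VPDU size.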
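
The 1/4 figure for the buffer reduction follows from simple arithmetic. The sizes below are an assumption of this sketch (a 128×128 maximum PU and a 64×64 VPDU) rather than values restated at this point in the document.

```cpp
// Assumed sizes for illustration: 128x128 maximum PU, 64x64 VPDU.
constexpr int kMaxPuSide   = 128;
constexpr int kVpduSide    = 64;
constexpr int kPuSamples   = kMaxPuSide * kMaxPuSide;    // 16384 samples per PU
constexpr int kVpduSamples = kVpduSide * kVpduSide;      //  4096 samples per vPU

// A buffer dimensioned per vPU instead of per PU therefore needs 1/4 of the entries.
static_assert(kVpduSamples * 4 == kPuSamples, "per-vPU buffer is 1/4 of the per-PU buffer");
```

This is only the sample-count argument; the further claim that some buffers can be made much smaller than 1/4 of the related-art sizes depends on HW optimizations not reproduced here.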
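
One of the configurations listed above makes the threshold determination only for the first partitioned processing unit and reuses that result for the remaining units. A minimal sketch of that variant follows; the types and names (PartitionedUnit, decideBioPerPartitionedUnit, calculateCost) are hypothetical, and the cost is assumed to be compared against a caller-supplied threshold.

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// Hypothetical type: one partitioned processing unit (for example, one vPU) of a PU.
struct PartitionedUnit {
  // Prediction samples, geometry, and so on would live here in a real implementation.
};

// Decides, for every partitioned processing unit of one PU, whether BIO is applied.
// calculateCost is assumed to return the cost (for example, the SAD between the
// L0 and L1 predictions) of a single partitioned processing unit.
std::vector<bool> decideBioPerPartitionedUnit(
    const std::vector<PartitionedUnit>& units,
    int64_t threshold,
    const std::function<int64_t(const PartitionedUnit&)>& calculateCost) {
  if (units.empty()) return {};

  // The cost is evaluated only once, for the first partitioned processing unit.
  const bool firstResult = (calculateCost(units.front()) >= threshold);

  // The result of that single determination is reused for all partitioned units.
  return std::vector<bool>(units.size(), firstResult);
}
```

Evaluating the cost once per PU keeps the determination overhead independent of how many partitioned processing units the PU is split into, at the price of using the first unit's statistics for the whole PU.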

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
US17/312,405 2018-12-28 2019-12-16 Image processing device and method Abandoned US20220070447A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2018248147 2018-12-28
JP2018-248147 2018-12-28
PCT/JP2019/049090 WO2020137643A1 (ja) 2018-12-28 2019-12-16 画像処理装置および方法

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/049090 A-371-Of-International WO2020137643A1 (ja) 2018-12-28 2019-12-16 画像処理装置および方法

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/398,418 Continuation US20240129459A1 (en) 2018-12-28 2023-12-28 Image processing device and method for partitioning a coding unit into partitioned processing units

Publications (1)

Publication Number Publication Date
US20220070447A1 true US20220070447A1 (en) 2022-03-03

Family

ID=71129761

Family Applications (2)

Application Number Title Priority Date Filing Date
US17/312,405 Abandoned US20220070447A1 (en) 2018-12-28 2019-12-16 Image processing device and method
US18/398,418 Pending US20240129459A1 (en) 2018-12-28 2023-12-28 Image processing device and method for partitioning a coding unit into partitioned processing units

Family Applications After (1)

Application Number Title Priority Date Filing Date
US18/398,418 Pending US20240129459A1 (en) 2018-12-28 2023-12-28 Image processing device and method for partitioning a coding unit into partitioned processing units

Country Status (10)

Country Link
US (2) US20220070447A1 (es)
EP (1) EP3905676A4 (es)
JP (3) JP7414008B2 (es)
CN (3) CN118337991A (es)
AU (1) AU2019417255A1 (es)
BR (1) BR112021012260A2 (es)
CA (1) CA3120750A1 (es)
MX (1) MX2021007180A (es)
SG (1) SG11202103292TA (es)
WO (1) WO2020137643A1 (es)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
PT3909247T (pt) 2019-02-08 2024-05-13 Beijing Dajia Internet Information Tech Co Ltd Método e dispositivo para aplicação seletiva de refinamento de vetor de movimento do lado do descodificador para codificação de vídeo


Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7369707B2 (en) * 2003-10-28 2008-05-06 Matsushita Electric Industrial Co., Ltd. Intra-picture prediction coding method
JP5215951B2 (ja) * 2009-07-01 2013-06-19 キヤノン株式会社 符号化装置及びその制御方法、コンピュータプログラム
JPWO2012046435A1 (ja) 2010-10-04 2014-02-24 パナソニック株式会社 画像処理装置、画像符号化方法および画像処理方法
JP2013085096A (ja) * 2011-10-07 2013-05-09 Sony Corp 画像処理装置および方法
US10757431B2 (en) * 2014-02-12 2020-08-25 Chips & Media, Inc Method and apparatus for processing video
WO2017133661A1 (en) * 2016-02-05 2017-08-10 Mediatek Inc. Method and apparatus of motion compensation based on bi-directional optical flow techniques for video coding
CA3025488A1 (en) * 2016-05-25 2017-11-30 Arris Enterprises Llc Weighted angular prediction for intra coding
WO2018173895A1 (ja) 2017-03-21 2018-09-27 シャープ株式会社 予測画像生成装置、動画像復号装置、および動画像符号化装置
WO2018212110A1 (ja) * 2017-05-19 2018-11-22 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ 符号化装置、復号装置、符号化方法及び復号方法
US10904565B2 (en) * 2017-06-23 2021-01-26 Qualcomm Incorporated Memory-bandwidth-efficient design for bi-directional optical flow (BIO)
JP6508553B2 (ja) * 2017-09-19 2019-05-08 ソニー株式会社 画像処理装置および方法
US11677940B2 (en) * 2017-09-20 2023-06-13 Electronics And Telecommunications Research Institute Method and device for encoding/decoding image, and recording medium having stored bitstream
RU2020135518A (ru) * 2018-04-06 2022-04-29 Вид Скейл, Инк. Метод двунаправленного оптического потока с упрощенным выведением градиента
CN113170093B (zh) * 2018-11-20 2023-05-02 北京字节跳动网络技术有限公司 视频处理中的细化帧间预测

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120263231A1 (en) * 2011-04-18 2012-10-18 Minhua Zhou Temporal Motion Data Candidate Derivation in Video Coding
US20170127061A1 (en) * 2014-03-28 2017-05-04 Sony Corporation Image decoding device and method
US20190141333A1 (en) * 2016-04-25 2019-05-09 Lg Electronics Inc. Inter-prediction method and apparatus in image coding system
US20200195923A1 (en) * 2017-06-23 2020-06-18 Sony Corporation Image processing apparatus and image processing method
US20200221122A1 (en) * 2017-07-03 2020-07-09 Vid Scale, Inc. Motion-compensation prediction based on bi-directional optical flow
US20200021841A1 (en) * 2018-07-11 2020-01-16 Apple Inc. Global motion vector video encoding systems and methods
US20210243458A1 (en) * 2018-10-22 2021-08-05 Panasonic Intellectual Property Corporation Of America Encoder, decoder, encoding method, and decoding method
US20200382795A1 (en) * 2018-11-05 2020-12-03 Beijing Bytedance Network Technology Co., Ltd. Inter prediction with refinement in video processing
US20220060744A1 (en) * 2018-12-21 2022-02-24 Electronics And Telecommunications Research Institute Image encoding/decoding method and device, and recording medium in which bitstream is stored

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11909993B1 (en) * 2021-07-30 2024-02-20 Meta Platforms, Inc. Fractional motion estimation engine with parallel code unit pipelines

Also Published As

Publication number Publication date
CN118337991A (zh) 2024-07-12
CN113424530A (zh) 2021-09-21
US20240129459A1 (en) 2024-04-18
CN113424530B (zh) 2024-05-24
EP3905676A4 (en) 2022-10-26
EP3905676A1 (en) 2021-11-03
SG11202103292TA (en) 2021-04-29
JP7563563B2 (ja) 2024-10-08
JP7563562B2 (ja) 2024-10-08
JPWO2020137643A1 (ja) 2021-11-11
JP2024038146A (ja) 2024-03-19
CA3120750A1 (en) 2020-07-02
MX2021007180A (es) 2021-08-05
AU2019417255A1 (en) 2021-06-10
BR112021012260A2 (pt) 2021-08-31
CN118354070A (zh) 2024-07-16
JP7414008B2 (ja) 2024-01-16
WO2020137643A1 (ja) 2020-07-02
JP2024023955A (ja) 2024-02-21

Similar Documents

Publication Publication Date Title
AU2015213341B2 (en) Video decoder, video encoder, video decoding method, and video encoding method
US20240129459A1 (en) Image processing device and method for partitioning a coding unit into partitioned processing units
US10341679B2 (en) Encoding system using motion estimation and encoding method using motion estimation
JP2013115583A (ja) 動画像符号化装置及びその制御方法並びにプログラム
USRE47004E1 (en) Moving image coding device and method
Radicke et al. Highly-parallel HVEC motion estimation with CUDA [title missing from article PDF]
WO2023193769A1 (en) Implicit multi-pass decoder-side motion vector refinement

Legal Events

Date Code Title Description
AS Assignment

Owner name: SONY GROUP CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HISHINUMA, SINSUKE;KONDO, KENJI;SIGNING DATES FROM 20210518 TO 20210520;REEL/FRAME:056496/0394

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION