WO2013047325A1 - Image processing device and method - Google Patents


Info

Publication number
WO2013047325A1
Authority
WO
WIPO (PCT)
Prior art keywords
filter
image
unit
processing
deblocking
Prior art date
Application number
PCT/JP2012/074092
Other languages
English (en)
Japanese (ja)
Inventor
池田 優 (Masaru Ikeda)
小川 一哉 (Kazuya Ogawa)
Original Assignee
Sony Corporation (ソニー株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corporation (ソニー株式会社)
Publication of WO2013047325A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/86: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117: Filters, e.g. for pre-processing or post-processing
    • H04N19/42: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/436: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements

Definitions

  • the present disclosure relates to an image processing apparatus and method, and more particularly, to an image processing apparatus and method capable of reducing line memory with a simple processing structure.
  • a deblocking filter, an adaptive loop filter, and an adaptive offset filter are employed as in-loop filters.
  • processing is performed in the order of a deblocking filter, an adaptive offset filter, and an adaptive loop filter.
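The sequential order described above can be sketched as a simple chain in which each stage consumes the previous stage's output. The filter bodies below are hypothetical placeholders (identity functions), not the actual HEVC filters; only the sequential structure is the point.

```python
def deblocking_filter(pixels):
    return pixels  # placeholder: would smooth block-boundary pixels

def adaptive_offset_filter(pixels):
    return pixels  # placeholder: would add per-category offsets (SAO)

def adaptive_loop_filter(pixels):
    return pixels  # placeholder: would apply Wiener-style filtering (ALF)

def conventional_in_loop_filter(reconstructed):
    """Conventional chain: each stage reads the previous stage's output."""
    out = deblocking_filter(reconstructed)
    out = adaptive_offset_filter(out)
    out = adaptive_loop_filter(out)
    return out

print(conventional_in_loop_filter([1, 2, 3]))
```

Because each stage waits for the previous one, lines of pixels must be buffered between stages at LCU boundaries, which is the line-memory cost the disclosure targets.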
  • On the horizontal boundary of the LCU, which is the largest coding unit (hereinafter also simply referred to as an LCU boundary), line memory must be held for each filter, so a large amount of line memory is required in total.
  • In Non-Patent Document 1, the adaptive offset filter processing (tap reference pixels) for the lines held for deblocking uses reconstructed pixels (that is, pixels before deblocking).
  • Non-Patent Document 1 is specialized for LCU-based processing. Therefore, when the deblocking filter, the adaptive offset filter, and the adaptive loop filter each perform frame-based processing in software or the like, the control becomes quite complicated.
  • the present disclosure has been made in view of such a situation, and can reduce the line memory with a simple processing structure.
  • An image processing device includes: a decoding unit that decodes an encoded stream to generate an image; a first filter that performs a first filter process on a reconstructed image of the image generated by the decoding unit; a second filter that performs a second filter process, different from the first filter process, on the reconstructed image of the image generated by the decoding unit; and an arithmetic unit that performs arithmetic processing using the image on which the first filter process has been performed and the image on which the second filter process has been performed.
  • a control unit that controls the first filter and the second filter may be further provided so that the first filter processing and the second filter processing are performed in parallel.
  • the control unit can control to match the output phases of the first filter and the second filter.
  • the image processing apparatus further includes a memory that holds a reconstructed image of the image generated by the decoding unit, and the first filter and the second filter can acquire the reconstructed image from the memory.
  • the first filter is a filter that removes block boundary noise.
  • the first filter is a deblocking filter.
  • the deblocking filter may include a filter applied to the left and right pixels of the vertical boundary and a filter applied to the pixels above and below the horizontal boundary.
  • the control unit can perform control so that the filter processing applied to the left and right pixels of the vertical boundary and the filter processing applied to the pixels above and below the horizontal boundary are performed in parallel.
  • the second filter may include at least one of a third filter that removes ringing and a fourth filter that performs class classification on a block basis.
  • the third filter is an adaptive offset filter
  • the fourth filter is an adaptive loop filter
  • The calculation unit can perform arithmetic processing so that the image on which the first filter process has been performed and the image on which the second filter process has been performed are added as a linear sum, using a first calculation coefficient corresponding to the first filter process and a second calculation coefficient corresponding to the second filter process.
  • the first calculation coefficient and the second calculation coefficient are set according to the distance from the vertical boundary and the horizontal boundary.
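The linear-sum combination described in the two points above can be sketched as follows. The weight table is illustrative only (the patent does not disclose these values here): the assumption is that pixels near a block boundary weight the deblocked output more heavily.

```python
def blend(deblocked, second_filtered, dist_from_boundary):
    """Weighted linear sum of two filter outputs for one pixel.
    Weights depend on distance from the vertical/horizontal boundary;
    the table values are invented for illustration."""
    table = {0: 0.75, 1: 0.625, 2: 0.5625, 3: 0.5}
    w1 = table.get(dist_from_boundary, 0.5)  # first calculation coefficient
    w2 = 1.0 - w1                            # second calculation coefficient
    return w1 * deblocked + w2 * second_filtered

# A pixel right at the boundary leans toward the deblocking result:
print(blend(100.0, 120.0, 0))  # 0.75*100 + 0.25*120 = 105.0
```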
  • In an image processing method, an image processing apparatus generates an image by decoding an encoded stream, performs a first filter process on a reconstructed image of the generated image, performs a second filter process different from the first filter process on the reconstructed image of the generated image, and performs arithmetic processing using the image on which the first filter process has been performed and the image on which the second filter process has been performed.
  • An image processing device includes: a first filter that performs a first filter process on a reconstructed image of an image that has been subjected to local decoding processing when the image is encoded; a second filter that performs a second filter process, different from the first filter process, on the reconstructed image of the locally decoded image; an arithmetic unit that performs arithmetic processing using the image on which the first filter process has been performed and the image on which the second filter process has been performed; and an encoding unit that encodes the image using the image that is the result of the arithmetic processing performed by the arithmetic unit.
  • a control unit that controls the first filter and the second filter may be further provided so that the first filter processing and the second filter processing are performed in parallel.
  • the control unit can control to match the output phases of the first filter and the second filter.
  • the image processing apparatus further includes a memory that holds a reconstructed image of the image generated by the decoding unit, and the first filter and the second filter can acquire the reconstructed image from the memory.
  • In an image processing method, the image processing apparatus performs a first filter process on a reconstructed image of an image that has been locally decoded when the image is encoded, performs a second filter process different from the first filter process on the reconstructed image of the locally decoded image, performs arithmetic processing using the image on which the first filter process has been performed and the image on which the second filter process has been performed, and encodes the image using the image that is the result of the arithmetic processing.
  • An image processing device includes: a decoding unit that decodes an encoded stream to generate an image; a first filter that performs a first filter process on a reconstructed image of the image generated by the decoding unit; a second filter that performs a second filter process, different from the first filter process, on the image on which the first filter process has been performed by the first filter; and an arithmetic unit that performs arithmetic processing using the image on which the first filter process has been performed and the image on which the second filter process has been performed.
  • In another image processing method, an image processing apparatus generates an image by decoding an encoded stream, performs a first filter process on a reconstructed image of the generated image, performs a second filter process different from the first filter process on the image on which the first filter process has been performed, and performs arithmetic processing using the image on which the first filter process has been performed and the image on which the second filter process has been performed.
  • an image is generated by decoding an encoded stream, and a first filter process is performed on a reconstructed image of the generated image.
  • a second filter process different from the first filter process is performed on the reconstructed image of the generated image. Then, arithmetic processing is performed using the image on which the first filter processing has been performed and the image on which the second filter processing has been performed.
  • the first filter processing is performed on the reconstructed image of the image that has been locally decoded when the image is encoded.
  • a second filter process different from the first filter process is performed on the reconstructed image of the image subjected to the local decoding process.
  • Arithmetic processing is performed using the image that has been subjected to the first filter process and the image that has been subjected to the second filter process, and the image is encoded using the image that is the result of the arithmetic processing.
  • an image is generated by decoding an encoded stream, and a first filter process is performed on a reconstructed image of the generated image.
  • a second filter process different from the first filter process is performed on the image on which the first filter process has been performed.
  • arithmetic processing is performed using the image on which the first filter processing has been performed and the image on which the second filter processing has been performed.
  • the above-described image processing apparatus may be an independent apparatus, or may be an internal block constituting one image encoding apparatus or image decoding apparatus.
  • an image can be decoded.
  • the line memory can be reduced with a simple processing structure.
  • an image can be encoded.
  • the line memory can be reduced with a simple processing structure.
  • FIG. 1 is a block diagram showing a main configuration example of an image encoding device. Also provided are: a flowchart explaining an example of the flow of an encoding process; a block diagram showing a main configuration example of an image decoding device; a flowchart explaining an example of the flow of a decoding process; a diagram explaining the number of lines of line memory required at the LCU boundary of a conventional in-loop filter; a block diagram showing a configuration example of a conventional in-loop filter; a block diagram showing a configuration example of an in-loop filter to which the present disclosure is applied; and a diagram showing an example of pixels at an LCU boundary.
  • FIG. 8 is a block diagram illustrating a more detailed configuration example of the in-loop filter of FIG. 7. Also provided are: a flowchart explaining the processing of the in-loop filter of FIG.; a diagram explaining the determination of weights for a weighted average; a diagram showing an example of weights for a weighted average; a diagram showing output pixel values from the calculation unit according to the determination result of whether filtering is necessary at a vertical boundary and a horizontal boundary; a diagram showing patterns of parallel processing with which an in-loop filter can be configured; and a block diagram showing another configuration example of an in-loop filter to which the present disclosure is applied.
  • FIG. 26 is a block diagram illustrating a main configuration example of a personal computer. Also provided are block diagrams each showing an example of a schematic configuration of a television apparatus, a mobile telephone, a recording / reproducing apparatus, and an imaging device.
  • FIG. 1 illustrates a configuration of an embodiment of an image encoding device as an image processing device to which the present disclosure is applied.
  • the image encoding device 11 shown in FIG. 1 encodes image data using a prediction process.
  • As the prediction process, for example, the HEVC (High Efficiency Video Coding) method or the like is used.
  • The image encoding device 11 includes an A / D (Analog / Digital) conversion unit 21, a screen rearrangement buffer 22, a calculation unit 23, an orthogonal transform unit 24, a quantization unit 25, a lossless encoding unit 26, and an accumulation buffer 27.
  • the image encoding device 11 includes an inverse quantization unit 28, an inverse orthogonal transform unit 29, a calculation unit 30, an in-loop filter 31a, a frame memory 32, a selection unit 33, an intra prediction unit 34, a motion prediction / compensation unit 35, A predicted image selection unit 36 and a rate control unit 37 are included.
  • the A / D converter 21 A / D converts the input image data, outputs it to the screen rearrangement buffer 22, and stores it.
  • The screen rearrangement buffer 22 rearranges the stored frame images, which are in display order, into the order of frames for encoding according to the GOP (Group of Pictures) structure.
  • the screen rearrangement buffer 22 supplies the image with the rearranged frame order to the arithmetic unit 23.
  • the screen rearrangement buffer 22 also supplies the image in which the frame order is rearranged to the intra prediction unit 34 and the motion prediction / compensation unit 35.
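The rearrangement performed by the screen rearrangement buffer 22 can be illustrated for one small IBBP-style group of pictures. The specific GOP structure below is a common textbook example, not necessarily the one used by the device: B frames reference both neighbours, so the P frame must be coded before the B frames that depend on it.

```python
def display_to_coding_order(frames):
    """frames: one 4-frame GOP in display order, e.g. [I, B, B, P].
    The P frame is moved ahead of the two B frames that reference it."""
    if len(frames) != 4:
        raise ValueError("sketch handles one 4-frame IBBP GOP")
    i, b1, b2, p = frames
    return [i, p, b1, b2]

print(display_to_coding_order(["I0", "B1", "B2", "P3"]))
# -> ['I0', 'P3', 'B1', 'B2']
```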
  • The calculation unit 23 subtracts, from the image read from the screen rearrangement buffer 22, the predicted image supplied from the intra prediction unit 34 or the motion prediction / compensation unit 35 via the predicted image selection unit 36, and outputs the difference information to the orthogonal transform unit 24.
  • For an image to be intra-coded, the calculation unit 23 subtracts the predicted image supplied from the intra prediction unit 34 from the image read from the screen rearrangement buffer 22.
  • For an image to be inter-coded, the calculation unit 23 subtracts the predicted image supplied from the motion prediction / compensation unit 35 from the image read from the screen rearrangement buffer 22.
  • the orthogonal transform unit 24 performs orthogonal transform such as discrete cosine transform and Karhunen-Loeve transform on the difference information supplied from the computation unit 23 and supplies the transform coefficient to the quantization unit 25.
  • the quantization unit 25 quantizes the transform coefficient output from the orthogonal transform unit 24.
  • the quantization unit 25 supplies the quantized transform coefficient to the lossless encoding unit 26.
  • the lossless encoding unit 26 performs lossless encoding such as variable length encoding and arithmetic encoding on the quantized transform coefficient.
  • the lossless encoding unit 26 acquires parameters such as information indicating the intra prediction mode from the intra prediction unit 34, and acquires parameters such as information indicating the inter prediction mode and motion vector information from the motion prediction / compensation unit 35.
  • the lossless encoding unit 26 encodes the quantized transform coefficient, encodes each acquired parameter (syntax element), and uses it as part of the header information of the encoded data (multiplexes).
  • the lossless encoding unit 26 supplies the encoded data obtained by encoding to the accumulation buffer 27 for accumulation.
  • lossless encoding processing such as variable length encoding or arithmetic encoding is performed.
  • Examples of variable length coding include CAVLC (Context-Adaptive Variable Length Coding).
  • Examples of arithmetic coding include CABAC (Context-Adaptive Binary Arithmetic Coding).
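CAVLC and CABAC are context-adaptive and too involved to sketch here. As a minimal, self-contained illustration of variable-length coding of the kind used for many HEVC syntax elements, the following encodes unsigned integers with zeroth-order Exp-Golomb codes (this is standard coding theory, not text taken from the patent):

```python
def exp_golomb_encode(value):
    """Unsigned Exp-Golomb: 0 -> '1', 1 -> '010', 2 -> '011', 3 -> '00100'.
    Small (frequent) values get short codewords."""
    code_num = value + 1
    bits = bin(code_num)[2:]           # binary representation without '0b'
    prefix = "0" * (len(bits) - 1)     # leading zeros signal the bit length
    return prefix + bits

for v in range(4):
    print(v, exp_golomb_encode(v))
```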
  • The accumulation buffer 27 temporarily stores the encoded data supplied from the lossless encoding unit 26, and outputs it at a predetermined timing, as an encoded image, to, for example, a recording device or a transmission path (not shown) in the subsequent stage.
  • the transform coefficient quantized by the quantization unit 25 is also supplied to the inverse quantization unit 28.
  • the inverse quantization unit 28 inversely quantizes the quantized transform coefficient by a method corresponding to the quantization by the quantization unit 25.
  • the inverse quantization unit 28 supplies the obtained transform coefficient to the inverse orthogonal transform unit 29.
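The quantization / inverse-quantization pair just described can be reduced to a minimal sketch: the inverse step applies the same step size as the forward step, so the reconstruction error is bounded by half the step size. The single scalar step stands in for the real QP-dependent scaling matrices, which are not detailed here.

```python
def quantize(coeff, step):
    """Forward quantization: map a transform coefficient to a level."""
    return round(coeff / step)

def dequantize(level, step):
    """Inverse quantization with the corresponding step size."""
    return level * step

step = 8
coeff = 61
level = quantize(coeff, step)        # 61 / 8 rounds to 8
restored = dequantize(level, step)   # 8 * 8 = 64
print(level, restored, abs(restored - coeff))
```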
  • the inverse orthogonal transform unit 29 performs inverse orthogonal transform on the supplied transform coefficient by a method corresponding to the orthogonal transform processing by the orthogonal transform unit 24.
  • the inversely orthogonally transformed output (restored difference information) is supplied to the arithmetic unit 30.
  • The calculation unit 30 adds the predicted image, supplied from the intra prediction unit 34 or the motion prediction / compensation unit 35 via the predicted image selection unit 36, to the inverse orthogonal transform result supplied from the inverse orthogonal transform unit 29, that is, the restored difference information, and obtains a locally decoded image (decoded image).
  • For an image to be intra-coded, the calculation unit 30 adds the predicted image supplied from the intra prediction unit 34 to the difference information.
  • For an image to be inter-coded, the calculation unit 30 adds the predicted image supplied from the motion prediction / compensation unit 35 to the difference information.
  • the decoded image as the addition result is supplied to the in-loop filter 31a and the frame memory 32.
  • The in-loop filter 31a is configured to include a deblocking filter, an adaptive offset filter, and an adaptive loop filter.
  • The in-loop filter 31a applies deblocking filter, adaptive offset filter, and adaptive loop filter processing to decoded image pixels (that is, reconstructed pixels), and supplies an image obtained by adding the filter processing results to the frame memory 32.
  • In the in-loop filter 31a, at least two processes among the vertical and horizontal deblocking filters, the adaptive offset filter, and the adaptive loop filter are performed in parallel. Details of the configuration and operation of the in-loop filter 31a will be described later with reference to FIG.
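The parallel structure just described can be sketched as follows: instead of chaining, the deblocking filter and the adaptive offset filter both read the same reconstructed pixels from memory, run concurrently, and an arithmetic unit merges their outputs as a weighted sum. The filter bodies are placeholders and the equal weights are an assumption for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def deblock(recon):
    return [p + 1 for p in recon]      # placeholder deblocking filter

def adaptive_offset(recon):
    return [p + 3 for p in recon]      # placeholder adaptive offset filter

def in_loop_filter_parallel(recon, w1=0.5, w2=0.5):
    # Both filters take the SAME reconstructed input (no chaining),
    # so neither has to wait for the other's lines.
    with ThreadPoolExecutor(max_workers=2) as pool:
        f1 = pool.submit(deblock, recon)
        f2 = pool.submit(adaptive_offset, recon)
        a, b = f1.result(), f2.result()
    # Arithmetic unit: linear sum of the two filter outputs.
    return [w1 * x + w2 * y for x, y in zip(a, b)]

print(in_loop_filter_parallel([10, 20]))  # [12.0, 22.0]
```

Because both filters consume the pre-filter reconstructed pixels, only one copy of the boundary lines needs to be buffered, which is how the disclosure reduces line memory.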
  • the frame memory 32 outputs the stored reference image to the intra prediction unit 34 or the motion prediction / compensation unit 35 via the selection unit 33 at a predetermined timing.
  • the frame memory 32 supplies the reference image to the intra prediction unit 34 via the selection unit 33.
  • the frame memory 32 supplies the reference image to the motion prediction / compensation unit 35 via the selection unit 33.
  • the selection unit 33 supplies the reference image to the intra prediction unit 34 when the reference image supplied from the frame memory 32 is an image to be subjected to intra coding.
  • the selection unit 33 also supplies the reference image to the motion prediction / compensation unit 35 when the reference image supplied from the frame memory 32 is an image to be inter-encoded.
  • the intra prediction unit 34 performs intra prediction (intra-screen prediction) for generating a predicted image using pixel values in the screen.
  • the intra prediction unit 34 performs intra prediction in a plurality of modes (intra prediction modes).
  • the intra prediction unit 34 generates prediction images in all intra prediction modes, evaluates each prediction image, and selects an optimal mode. When the optimal intra prediction mode is selected, the intra prediction unit 34 supplies the prediction image generated in the optimal mode to the calculation unit 23 and the calculation unit 30 via the predicted image selection unit 36.
  • the intra prediction unit 34 supplies parameters such as intra prediction mode information indicating the adopted intra prediction mode to the lossless encoding unit 26 as appropriate.
  • For an image to be inter-coded, the motion prediction / compensation unit 35 performs motion prediction using the input image supplied from the screen rearrangement buffer 22 and the reference image supplied from the frame memory 32 via the selection unit 33, performs motion compensation processing according to the detected motion vector, and generates a predicted image (inter predicted image information).
  • the motion prediction / compensation unit 35 performs inter prediction processing in all candidate inter prediction modes, and generates a prediction image.
  • the motion prediction / compensation unit 35 supplies the generated predicted image to the calculation unit 23 and the calculation unit 30 via the predicted image selection unit 36.
  • the motion prediction / compensation unit 35 supplies parameters such as inter prediction mode information indicating the employed inter prediction mode and motion vector information indicating the calculated motion vector to the lossless encoding unit 26.
  • The predicted image selection unit 36 supplies the output of the intra prediction unit 34 to the calculation unit 23 and the calculation unit 30 in the case of an image to be intra-coded, and supplies the output of the motion prediction / compensation unit 35 to the calculation unit 23 and the calculation unit 30 in the case of an image to be inter-coded.
  • The rate control unit 37 controls the rate of the quantization operation of the quantization unit 25, based on the compressed images stored in the accumulation buffer 27, so that overflow or underflow does not occur.
  • In step S11, the A / D converter 21 A / D converts the input image.
  • In step S12, the screen rearrangement buffer 22 stores the A / D converted images and rearranges them from the display order of the pictures into the encoding order.
  • When the processing target image is an image to be intra-processed, a decoded image to be referred to is read from the frame memory 32 and supplied to the intra prediction unit 34 via the selection unit 33.
  • Based on this, the intra prediction unit 34 performs intra prediction on the pixels of the block to be processed in all candidate intra prediction modes. Note that pixels not filtered by the in-loop filter 31a are used as the decoded pixels that are referred to.
  • intra prediction is performed in all candidate intra prediction modes, and cost function values are calculated for all candidate intra prediction modes. Then, based on the calculated cost function value, the optimal intra prediction mode is selected, and the predicted image generated by the intra prediction of the optimal intra prediction mode and its cost function value are supplied to the predicted image selection unit 36.
  • When the processing target image supplied from the screen rearrangement buffer 22 is an image to be inter-processed, the image to be referred to is read from the frame memory 32 and supplied to the motion prediction / compensation unit 35 via the selection unit 33.
  • the motion prediction / compensation unit 35 performs motion prediction / compensation processing.
  • motion prediction processing is performed in all candidate inter prediction modes, cost function values are calculated for all candidate inter prediction modes, and optimal inter prediction is performed based on the calculated cost function values. The mode is determined. Then, the predicted image generated in the optimal inter prediction mode and its cost function value are supplied to the predicted image selection unit 36.
  • In step S15, the predicted image selection unit 36 determines the optimal prediction mode from the optimal intra prediction mode and the optimal inter prediction mode, based on the cost function values output from the intra prediction unit 34 and the motion prediction / compensation unit 35. The predicted image selection unit 36 then selects the predicted image of the determined optimal prediction mode and supplies it to the calculation units 23 and 30. This predicted image is used for the calculations in steps S16 and S21 described later.
  • the prediction image selection information is supplied to the intra prediction unit 34 or the motion prediction / compensation unit 35.
  • the intra prediction unit 34 supplies information indicating the optimal intra prediction mode (that is, a parameter related to intra prediction) to the lossless encoding unit 26.
  • When the predicted image of the optimal inter prediction mode is selected, the motion prediction / compensation unit 35 supplies information indicating the optimal inter prediction mode and information corresponding to the optimal inter prediction mode (that is, parameters relating to motion prediction) to the lossless encoding unit 26.
  • Information according to the optimal inter prediction mode includes motion vector information and reference frame information.
  • In step S16, the calculation unit 23 calculates the difference between the images rearranged in step S12 and the predicted image selected in step S15.
  • the predicted image is supplied from the motion prediction / compensation unit 35 in the case of inter prediction, and from the intra prediction unit 34 in the case of intra prediction, to the calculation unit 23 via the predicted image selection unit 36, respectively.
  • The difference data has a smaller data amount than the original image data. Therefore, the data amount can be compressed compared with the case where the image is encoded as it is.
  • In step S17, the orthogonal transform unit 24 orthogonally transforms the difference information supplied from the calculation unit 23. Specifically, orthogonal transform such as discrete cosine transform or Karhunen-Loeve transform is performed, and transform coefficients are output.
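The discrete cosine transform named above can be illustrated self-contained: here is a 1-D DCT-II without library dependencies (the device would use a 2-D, integer-approximated transform; this sketch only shows the energy-compaction property that motivates the choice).

```python
import math

def dct_1d(x):
    """Orthonormal 1-D DCT-II of a list of samples."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n)
                for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

# A constant (flat) signal compacts all energy into the DC coefficient:
print(dct_1d([5, 5, 5, 5])[0])  # DC coefficient: 10.0
```

Concentrating energy in few coefficients is what makes the subsequent quantization step an effective compressor of the difference information.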
  • In step S18, the quantization unit 25 quantizes the transform coefficients.
  • the rate is controlled as described in the process of step S26 described later.
  • In step S19, the inverse quantization unit 28 inversely quantizes the transform coefficients quantized by the quantization unit 25, with characteristics corresponding to the characteristics of the quantization unit 25.
  • In step S20, the inverse orthogonal transform unit 29 performs inverse orthogonal transform on the transform coefficients inversely quantized by the inverse quantization unit 28, with characteristics corresponding to the characteristics of the orthogonal transform unit 24.
  • In step S21, the calculation unit 30 adds the predicted image input via the predicted image selection unit 36 to the locally decoded difference information, and generates a locally decoded image (the image corresponding to the input to the calculation unit 23).
  • In step S22, the in-loop filter 31a performs filter processing including a deblocking filter, an adaptive offset filter, and an adaptive loop filter on the image output from the calculation unit 30. At this time, at least two processes among the vertical and horizontal deblocking filters, the adaptive offset filter, and the adaptive loop filter are performed in parallel. Details of the in-loop filter processing will be described later with reference to FIG.
  • the decoded image from the in-loop filter 31a is output to the frame memory 32.
  • In step S23, the frame memory 32 stores the filtered image.
  • Note that an image not filtered by the in-loop filter 31a is also supplied from the calculation unit 30 and stored in the frame memory 32.
  • the transform coefficient quantized in step S18 described above is also supplied to the lossless encoding unit 26.
  • the lossless encoding unit 26 encodes the quantized transform coefficient output from the quantization unit 25 and each supplied parameter. That is, the difference image is subjected to lossless encoding such as variable length encoding and arithmetic encoding, and is compressed.
  • In step S25, the accumulation buffer 27 accumulates the encoded difference image (that is, the encoded stream) as a compressed image.
  • The compressed image stored in the accumulation buffer 27 is appropriately read out and transmitted to the decoding side via the transmission path.
  • In step S26, the rate control unit 37 controls the rate of the quantization operation of the quantization unit 25, based on the compressed images stored in the accumulation buffer 27, so that overflow or underflow does not occur.
  • When the process of step S26 ends, the encoding process ends.
  • FIG. 3 illustrates a configuration of an embodiment of an image decoding device as an image processing device to which the present disclosure is applied.
  • An image decoding device 51 shown in FIG. 3 is a decoding device corresponding to the image encoding device 11 of FIG.
  • encoded data encoded by the image encoding device 11 is transmitted to an image decoding device 51 corresponding to the image encoding device 11 via a predetermined transmission path and decoded.
  • the image decoding device 51 includes a storage buffer 61, a lossless decoding unit 62, an inverse quantization unit 63, an inverse orthogonal transform unit 64, a calculation unit 65, an in-loop filter 31b, a screen rearrangement buffer 67, And a D / A converter 68.
  • the image decoding device 51 includes a frame memory 69, a selection unit 70, an intra prediction unit 71, a motion prediction / compensation unit 72, and a selection unit 73.
  • the accumulation buffer 61 accumulates the transmitted encoded data. This encoded data is encoded by the image encoding device 11.
  • the lossless decoding unit 62 decodes the encoded data read from the accumulation buffer 61 at a predetermined timing by a method corresponding to the encoding method of the lossless encoding unit 26 in FIG.
  • the lossless decoding unit 62 supplies parameters such as information indicating the decoded intra prediction mode to the intra prediction unit 71, and supplies parameters such as information indicating the inter prediction mode and motion vector information to the motion prediction / compensation unit 72.
  • the inverse quantization unit 63 inversely quantizes the coefficient data (quantization coefficients) obtained by decoding by the lossless decoding unit 62, by a method corresponding to the quantization method of the quantization unit 25 in FIG. 1. That is, the inverse quantization unit 63 uses the quantization parameter supplied from the image encoding device 11 to inversely quantize the quantization coefficients by the same method as the inverse quantization unit 28 in FIG. 1.
  • the inverse quantization unit 63 supplies the inversely quantized coefficient data, that is, the orthogonal transform coefficient, to the inverse orthogonal transform unit 64.
  • the inverse orthogonal transform unit 64 performs inverse orthogonal transform on the orthogonal transform coefficients by a method corresponding to the orthogonal transform method of the orthogonal transform unit 24 in FIG. 1, and obtains decoded residual data corresponding to the residual data before the orthogonal transform in the image encoding device 11.
  • the decoded residual data obtained by the inverse orthogonal transform is supplied to the arithmetic unit 65. Further, a prediction image is supplied to the calculation unit 65 from the intra prediction unit 71 or the motion prediction / compensation unit 72 via the selection unit 73.
  • the calculating unit 65 adds the decoded residual data and the predicted image, and obtains decoded image data corresponding to the image data before the predicted image is subtracted by the calculating unit 23 of the image encoding device 11.
  • the arithmetic unit 65 supplies the decoded image data to the in-loop filter 31b.
  • the in-loop filter 31b is configured to include a deblocking filter, an adaptive offset filter, and an adaptive loop filter, similarly to the in-loop filter 31a of the image encoding device 11.
  • the in-loop filter 31b performs deblocking filter, adaptive offset filter, and adaptive loop filter processing on the pixels (that is, the reconstructed pixels) of the decoded image, and supplies the image obtained by adding the filter processing results to the screen rearrangement buffer 67.
  • In the in-loop filter 31b, at least two processes among the vertical and horizontal deblocking filters, the adaptive offset filter, and the adaptive loop filter are performed in parallel. Details of the configuration and operation of the in-loop filter 31b will be described later with reference to FIG. 7.
  • the screen rearrangement buffer 67 rearranges images. That is, the order of frames rearranged for the encoding order by the screen rearrangement buffer 22 in FIG. 1 is rearranged in the original display order.
  • the D / A converter 68 performs D / A conversion on the image supplied from the screen rearrangement buffer 67, and outputs and displays the image on a display (not shown).
  • the output of the in-loop filter 31b is further supplied to the frame memory 69.
  • the frame memory 69, the selection unit 70, the intra prediction unit 71, the motion prediction / compensation unit 72, and the selection unit 73 correspond to the frame memory 32, the selection unit 33, the intra prediction unit 34, the motion prediction / compensation unit 35, and the predicted image selection unit 36 of the image encoding device 11, respectively.
  • the selection unit 70 reads out the inter-processed image and the referenced image from the frame memory 69 and supplies the image to the motion prediction / compensation unit 72.
  • the selection unit 70 reads an image used for intra prediction from the frame memory 69 and supplies the image to the intra prediction unit 71.
  • the intra prediction unit 71 is appropriately supplied with information indicating the intra prediction mode obtained by decoding the header information from the lossless decoding unit 62. Based on this information, the intra prediction unit 71 generates a prediction image from the reference image acquired from the frame memory 69 and supplies the generated prediction image to the selection unit 73.
  • the motion prediction / compensation unit 72 is supplied with information (prediction mode information, motion vector information, reference frame information, flags, various parameters, etc.) obtained by decoding the header information from the lossless decoding unit 62.
  • the motion prediction / compensation unit 72 generates a prediction image from the reference image acquired from the frame memory 69 based on the information supplied from the lossless decoding unit 62 and supplies the generated prediction image to the selection unit 73.
  • the selection unit 73 selects the prediction image generated by the motion prediction / compensation unit 72 or the intra prediction unit 71 and supplies the selected prediction image to the calculation unit 65.
  • In step S51, the accumulation buffer 61 accumulates the transmitted encoded data.
  • In step S52, the lossless decoding unit 62 decodes the encoded data supplied from the accumulation buffer 61.
  • the I picture, P picture, and B picture encoded by the lossless encoding unit 26 in FIG. 1 are decoded.
  • parameter information such as motion vector information, reference frame information, and prediction mode information (intra prediction mode or inter prediction mode) is also decoded.
  • When the prediction mode information is intra prediction mode information, the prediction mode information is supplied to the intra prediction unit 71.
  • When the prediction mode information is inter prediction mode information, the prediction mode information and the corresponding motion vector information are supplied to the motion prediction / compensation unit 72.
  • In step S53, the intra prediction unit 71 or the motion prediction / compensation unit 72 performs a prediction image generation process corresponding to the prediction mode information supplied from the lossless decoding unit 62.
  • That is, when the intra prediction mode information is supplied from the lossless decoding unit 62, the intra prediction unit 71 generates the Most Probable Mode and generates an intra prediction image of the intra prediction mode by parallel processing.
  • the motion prediction / compensation unit 72 performs an inter prediction mode motion prediction / compensation process to generate an inter prediction image.
  • the prediction image (intra prediction image) generated by the intra prediction unit 71 or the prediction image (inter prediction image) generated by the motion prediction / compensation unit 72 is supplied to the selection unit 73.
  • In step S54, the selection unit 73 selects a predicted image. That is, the prediction image generated by the intra prediction unit 71 or the prediction image generated by the motion prediction / compensation unit 72 is supplied. The supplied predicted image is therefore selected, supplied to the calculation unit 65, and added to the output of the inverse orthogonal transform unit 64 in step S57 described later.
  • In step S52, the transform coefficient decoded by the lossless decoding unit 62 is also supplied to the inverse quantization unit 63.
  • In step S55, the inverse quantization unit 63 inversely quantizes the transform coefficient decoded by the lossless decoding unit 62 with characteristics corresponding to the characteristics of the quantization unit 25 in FIG. 1.
  • In step S56, the inverse orthogonal transform unit 64 performs inverse orthogonal transform on the transform coefficient inversely quantized by the inverse quantization unit 63 with characteristics corresponding to the characteristics of the orthogonal transform unit 24 of FIG. 1. As a result, the difference information corresponding to the input of the orthogonal transform unit 24 of FIG. 1 (the output of the calculation unit 23) is decoded.
  • In step S57, the calculation unit 65 adds the prediction image, selected in the process of step S54 described above and input via the selection unit 73, to the difference information. As a result, the original image is decoded.
  • In step S58, the in-loop filter 31b performs filter processing including a deblocking filter, an adaptive offset filter, and an adaptive loop filter on the image output from the calculation unit 65. At this time, at least two processes among the vertical and horizontal deblocking filters, the adaptive offset filter, and the adaptive loop filter are performed in parallel. Details of the in-loop filter processing will be described later.
  • the decoded image from the in-loop filter 31 b is output to the frame memory 69 and the screen rearrangement buffer 67.
  • step S59 the frame memory 69 stores the filtered image.
  • step S60 the screen rearrangement buffer 67 rearranges the images after the in-loop filter 31b. That is, the order of frames rearranged for encoding by the screen rearrangement buffer 22 of the image encoding device 11 is rearranged to the original display order.
  • step S61 the D / A converter 68 D / A converts the image from the screen rearrangement buffer 67. This image is output to a display (not shown), and the image is displayed.
  • When step S61 ends, the decoding process ends.
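  • The arithmetic core of steps S55 to S57 can be sketched as follows. This is a deliberate simplification: a single scalar quantization step stands in for the real inverse quantization, and an identity function stands in for the inverse orthogonal transform, so only the data flow is shown, not the actual math of the units.

```python
# Simplified sketch of decoding steps S55-S57 (illustrative only).
def inverse_quantize(qcoefs, qstep):
    return [c * qstep for c in qcoefs]          # step S55 (scalar stand-in)

def inverse_transform(coefs):
    return list(coefs)                          # step S56 (identity placeholder)

def reconstruct(qcoefs, qstep, prediction):
    # step S57: add the predicted image to the decoded difference information
    residual = inverse_transform(inverse_quantize(qcoefs, qstep))
    return [r + p for r, p in zip(residual, prediction)]
```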
  • a conventional in-loop filter performs the deblocking filter, adaptive offset filter, and adaptive loop filter processes in series, so that each filter had to have its own line memory at the horizontal boundary of the LCU (Largest Coding Unit) (that is, at the bottom of the LCU).
  • In H.264 and MPEG (Moving Picture Experts Group) 4 Part 10 AVC (Advanced Video Coding), one macroblock could be divided into a plurality of motion compensation blocks, each of which could have different motion information. That is, in the H.264/AVC format, a hierarchical structure is defined by macroblocks and sub-macroblocks.
  • a coding unit (CU) is defined in the HEVC (High Efficiency Video Coding) method.
  • CU is also called Coding Tree Block (CTB).
  • This is an area (partial area of an image in picture units) serving as an encoding (decoding) processing unit that plays the same role as a macroblock in the H.264 / AVC format.
  • Whereas the latter is fixed to a size of 16×16 pixels, the size of the former is not fixed, and is specified in the image compression information in each sequence.
  • For example, the maximum size (LCU (Largest Coding Unit)) and the minimum size (SCU (Smallest Coding Unit)) of the CU are specified.
  • Although the CU division lines are not shown, an example is shown in which the size of the LCU is 16×16 pixels and four 8×8-pixel CUs are included therein.
  • In the case of an encoding method in which a CU is defined and various processes are performed in units of the CU, as in the HEVC method described above, it can be considered that a macroblock in the H.264/AVC format corresponds to an LCU and a block (sub-block) corresponds to a CU. However, since the CU has a hierarchical structure, the size of the LCU in the highest hierarchy, for example 128×128 pixels, is generally set larger than a macroblock of the H.264/AVC format.
  • Therefore, in the present specification, the LCU also includes macroblocks in the H.264/AVC format, and the CU also includes blocks (sub-blocks) in the H.264/AVC format.
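  • As a rough sketch of the size hierarchy just described, the CU sizes reachable by repeated quad-tree splitting can be enumerated as follows. The 128×128 LCU and 8×8 SCU defaults are example values only; the actual sizes are specified per sequence in the image compression information.

```python
# Enumerate CU side lengths from the LCU down to the SCU. Each split
# divides a CU into four CUs of half the width and height, so the
# side length halves at every hierarchy level.
def cu_sizes(lcu=128, scu=8):
    sizes = []
    size = lcu
    while size >= scu:
        sizes.append(size)
        size //= 2
    return sizes
```

  • For an LCU of 16×16 pixels, as in the example mentioned above, the available sizes would be just 16 and 8.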
  • In FIG. 5, an example of a luminance signal is shown.
  • In FIG. 5, the line at the bottom indicates the LCU boundary, and each circle indicates a pixel.
  • the circles in the first to third lines from the LCU boundary represent pixels at which deblocking V (vertical) filter processing at the horizontal boundary is started when the next LCU is input to the deblocking filter.
  • the circles indicated by hatching in the first to third lines represent pixels to which deblocking H (horizontal) filter processing has partially been applied at the vertical boundaries of the CUs included in the LCU.
  • white circles represent pixels that are not actually subjected to deblocking H filter processing at the vertical boundary of the CU.
  • the pixel on the fourth line from the LCU boundary is a pixel that has been subjected to deblocking V filter processing but has not been subjected to adaptive offset filter (SAO: Sample Adaptive Offset) processing.
  • the pixels on the fourth line are also referred to in the deblocking V filter processing on the first to third lines.
  • Pixels on the fifth line from the LCU boundary are pixels that have been subjected to deblocking V filter processing and adaptive offset filtering.
  • the pixel on the sixth line from the LCU boundary is a pixel that has been subjected to the adaptive offset filter processing but has not been subjected to adaptive loop filter (ALF) processing.
  • the circles on the 7th to 16th lines from the LCU boundary represent pixels after adaptive loop filter (ALF) processing.
  • When the deblocking filter is to process the pixels on the first to third lines from the LCU boundary, the pixels of the LCU adjacent below the LCU boundary (the pixels for the next four lines) have not yet been input, so the deblocking V (vertical) filter processing at the horizontal boundary cannot be started.
  • Therefore, as shown in FIG. 5, the line memory of the deblocking filter holds pixels for a total of four lines, that is, the pixels of the first to third lines to be processed next and the pixels of the fourth line as reference pixels, and the deblocking V filter processing is on standby (temporarily stopped).
  • the adaptive offset filter completes the filter processing of the pixel on the fifth line from the LCU boundary, which is held in the line memory of the adaptive offset filter, by referring to the pixel on the fourth line from the LCU boundary held in the line memory of the deblocking filter.
  • However, the adaptive offset filter cannot start processing the pixel on the fourth line from the LCU boundary, since its deblocking V filter processing has not been completed. Therefore, the adaptive offset filter processing also waits in the state of FIG. 5, in which the pixel of the fifth line from the LCU boundary is held in the line memory of the adaptive offset filter.
  • the adaptive loop filter completes the filter processing (for example, 5 taps) of the pixel on the seventh line from the LCU boundary by referring to the pixels on the fifth and sixth lines and the pixels on the eighth and ninth lines from the LCU boundary. At this time, the adaptive loop filter releases, from its line memory, the pixels of the ninth line, which are not necessary for the filter processing of the pixel on the sixth line from the LCU boundary, so that the pixels of the fourth to eighth lines are held.
  • However, the deblocking filter processing of the pixel on the fourth line, which is referred to for filtering the pixel on the sixth line from the LCU boundary, has not been completed, so that pixel cannot be released from the deblocking filter and is not input to the line memory of the adaptive loop filter. Therefore, the adaptive loop filter cannot start the next process, and the adaptive loop filter processing also waits in the state shown in FIG. 5, in which the pixels of the fourth and fifth lines from the LCU boundary are held in the line memory of the adaptive loop filter.
  • FIG. 6 is a block diagram illustrating a configuration example of a conventional in-loop filter.
  • the in-loop filter shown in FIG. 6 includes a deblocking filter unit 101, an adaptive offset filter unit 102, and an adaptive loop filter unit 103.
  • the deblocking filter unit 101 is configured to include an H (horizontal) filter 111, a V (vertical) filter 112, and a line memory 113, and performs deblocking filter processing that removes noise at block (CU (LCU)) boundaries from the input pixels.
  • the H filter 111 is a deblocking filter applied to the left and right (horizontal direction) pixels of the vertical boundary between the left and right adjacent CUs (LCUs) in the input image.
  • the V filter 112 is a deblocking filter that is applied to pixels above and below (vertically) a horizontal boundary between CUs (LCUs) adjacent to each other in the input image.
  • the line memory 113 temporarily holds a reconstructed pixel that is an input pixel input from the previous stage at the LCU boundary. As described above with reference to FIG. 5, the line memory 113 holds pixels for four lines for luminance (Y) and pixels for two lines for color difference (C) at the LCU boundary.
  • the deblocking filter unit 101 normally performs the filtering process by the H filter 111 and the filtering process by the V filter 112 on the reconstructed pixel that is the input pixel from the previous stage (except for the LCU boundary).
  • the deblocking filter unit 101 outputs the filtered pixel to the adaptive offset filter unit 102.
  • At the LCU boundary, the deblocking filter unit 101 temporarily holds the reconstructed pixels, which are the input pixels from the previous stage, in the line memory 113.
  • Then, the deblocking filter unit 101 performs the filter processing by the H filter 111 and the filter processing by the V filter 112 using the input pixels and the pixels held in the line memory 113.
  • the deblocking filter unit 101 outputs the filtered pixel to the adaptive offset filter unit 102.
  • the adaptive offset filter unit 102 is configured to include an offset filter 121 and a line memory 122, and performs an offset filter process that mainly removes ringing on the decoded image from the deblocking filter unit 101.
  • the line memory 122 holds pixels for one line for luminance (Y) and holds pixels for one line for color difference (C) at the LCU boundary.
  • the adaptive offset filter unit 102 normally performs the filter processing by the offset filter 121 on the pixels filtered by the deblocking filter unit 101, and outputs the filtered pixels to the adaptive loop filter unit 103.
  • At the LCU boundary, the adaptive offset filter unit 102 temporarily holds the reconstructed pixels, which are the input pixels from the deblocking filter unit 101, in the line memory 122.
  • Then, the adaptive offset filter unit 102 performs the filter processing by the offset filter 121 using the input pixels and the pixels held in the line memory 122.
  • the adaptive offset filter unit 102 outputs the filtered pixels to the adaptive loop filter unit 103.
  • the adaptive loop filter unit 103 is configured to include a loop filter 131 and a line memory 132, performs class classification on a block basis on the decoded image from the adaptive offset filter unit 102, and performs adaptive loop filter processing.
  • the loop filter 131 is composed of, for example, a two-dimensional Wiener filter. As described above with reference to FIG. 5, the line memory 132 holds pixels for four lines for luminance (Y) and pixels for four lines for color difference (C) at the LCU boundary.
  • the adaptive loop filter unit 103 normally performs the filter processing by the loop filter 131 on the pixels filtered by the adaptive offset filter unit 102, and outputs the filtered pixels to the subsequent frame memory or the like.
  • At the LCU boundary, the adaptive loop filter unit 103 temporarily holds the reconstructed pixels, which are the input pixels from the adaptive offset filter unit 102, in the line memory 132.
  • Then, the adaptive loop filter unit 103 performs the filter processing by the loop filter 131 using the input pixels and the pixels held in the line memory 132.
  • the adaptive loop filter unit 103 outputs the filtered pixel to a subsequent frame memory or the like.
  • As described above, for the luminance signal, the deblocking filter requires a line memory that holds pixels for four lines, and the adaptive offset filter requires a line memory that holds pixels for one line. Furthermore, the adaptive loop filter requires a line memory that holds pixels for four lines; although overlapping pixels are included, a line memory that holds pixels for nine lines in total is required.
  • Similarly, for the color difference signal, the deblocking filter requires a line memory that holds pixels for two lines, and the adaptive offset filter requires a line memory that holds pixels for one line. Furthermore, the adaptive loop filter requires a line memory that holds pixels for four lines; although overlapping pixels are included, a line memory that holds pixels for seven lines in total is required.
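  • The totals above follow from simple addition over the three serial filters. A small check, with the per-filter line counts taken from the text:

```python
# Line-memory lines held at the LCU boundary by each filter of the
# conventional serial in-loop filter, as described above.
CONVENTIONAL_LINES = {
    "luma":   {"deblocking": 4, "adaptive_offset": 1, "adaptive_loop": 4},
    "chroma": {"deblocking": 2, "adaptive_offset": 1, "adaptive_loop": 4},
}

def total_lines(component):
    # Serial processing forces each filter to keep its own buffer,
    # so the per-filter requirements add up.
    return sum(CONVENTIONAL_LINES[component].values())
```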
  • In Non-Patent Document 1, the adaptive offset filter processing (tap reference pixels) for the lines held for deblocking uses reconstructed pixels (that is, pixels before deblocking).
  • However, the method of Non-Patent Document 1 is specialized for LCU-based processing. Therefore, when the deblocking filter, the adaptive offset filter, and the adaptive loop filter each perform frame processing by software or the like, the control becomes quite complicated.
  • Therefore, at least two of the deblocking filter, the adaptive offset filter, and the adaptive loop filter are parallelized so as to share the line memory, thereby achieving a reduction in line memory with a simple processing structure.
  • FIG. 7 is a block diagram illustrating a configuration example of an in-loop filter to which the present disclosure is applied.
  • the configurations of the in-loop filter 31a of the image encoding device 11 shown in FIG. 1 and the in-loop filter 31b of the image decoding device 51 shown in FIG. 3 may be common. Therefore, in the following description, the in-loop filter 31a and the in-loop filter 31b are collectively referred to as the in-loop filter 31 when it is not necessary to distinguish them individually.
  • the in-loop filter 31 is configured to include a line memory 151, a deblocking H (horizontal) filter unit 152, a deblocking V (vertical) filter unit 153, an adaptive offset filter unit 154, an adaptive loop filter unit 155, and a calculation unit 156.
  • a configuration example of the in-loop filter 31 at the LCU boundary is shown.
  • the line memory 151 temporarily holds the reconstructed pixel input from the previous stage at the LCU boundary.
  • the preceding stage is the arithmetic unit 30 in the case of the image encoding device 11 in FIG. 1, and the arithmetic unit 65 in the case of the image decoding device 51 in FIG. 3.
  • the line memory 151 holds pixels for five lines for luminance (Y) and holds pixels for three lines for color difference (C). Note that the number of lines to be held is not limited because it depends on the architecture and the like.
  • the deblocking H filter unit 152, the deblocking V filter unit 153, the adaptive offset filter unit 154, and the adaptive loop filter unit 155 share the line memory 151 in which the reconstructed pixels are held.
  • the deblocking H filter unit 152, the deblocking V filter unit 153, the adaptive offset filter unit 154, and the adaptive loop filter unit 155 perform each filter process in parallel on the reconstructed pixels held in the line memory 151. Then, the deblocking H filter unit 152, the deblocking V filter unit 153, the adaptive offset filter unit 154, and the adaptive loop filter unit 155 output the filtered pixels to the calculation unit 156, respectively.
  • the deblocking H filter unit 152 is configured to include the H filter 111 of FIG. 6, and performs a deblocking H filter process that removes block noise at a vertical boundary between adjacent blocks on the left and right.
  • the H filter 111 is a deblocking filter that is applied to the left and right pixels at the vertical boundary between blocks adjacent to the left and right in the input image.
  • the deblocking H filter unit 152 reads out the reconstructed pixel held in the line memory 151 and performs the filtering process by the H filter 111 on the read out reconstructed pixel.
  • the deblocking H filter unit 152 outputs the pixel after the filter processing by the H filter 111 to the calculation unit 156.
  • the deblocking V filter unit 153 is configured to include the V filter 112 of FIG. 6, and performs a deblocking V filter process that removes block noise at the horizontal boundary between vertically adjacent blocks.
  • the V filter 112 is a deblocking filter that is applied to the upper and lower pixels of the horizontal boundary between the upper and lower adjacent blocks in the input image.
  • the deblocking V filter unit 153 reads out the reconstructed pixel held in the line memory 151, and performs a filtering process by the V filter 112 on the read out reconstructed pixel.
  • the deblocking V filter unit 153 outputs the pixel after the filtering process by the V filter 112 to the calculation unit 156.
  • the adaptive offset filter unit 154 is configured to include the offset filter 121 of FIG. 6, and performs an offset filter process that mainly removes ringing on the input image.
  • Specifically, the offset filter 121 is applied using a quad-tree structure, in which the type of the offset filter is determined for each divided region, and an offset value for each divided region.
  • The quad-tree structure and the offset values are calculated by the image encoding device 11 itself in the case of the image encoding device 11 of FIG. 1; in the case of the image decoding device 51 of FIG. 3, those calculated by the image encoding device 11 of FIG. 1 are decoded and used.
  • the adaptive offset filter unit 154 reads the reconstructed pixel held in the line memory 151, and performs the filtering process by the offset filter 121 on the read reconstructed pixel.
  • the adaptive offset filter unit 154 outputs the pixel after the filter processing by the offset filter 121 to the calculation unit 156.
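  • As background on the kind of decision an offset filter of this type makes, the sketch below shows a generic edge-offset step: a pixel is classified by comparison with its two neighbors along a direction, and the offset chosen for that category is added. The category numbering and the offset table here are illustrative only, not the normative tables of the HEVC specification or of this patent.

```python
def edge_offset(left, cur, right, offsets):
    """Classify `cur` against its two neighbors along one direction and
    add the offset assigned to the resulting category (for example,
    -2 = local minimum, 2 = local maximum, 0 = monotonic/flat)."""
    sign = lambda d: (d > 0) - (d < 0)
    category = sign(cur - left) + sign(cur - right)   # ranges over -2..2
    return cur + offsets.get(category, 0)
```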
  • the adaptive loop filter unit 155 is configured to include the loop filter 131 of FIG. 6, performs class classification on an input image on a block basis, and performs adaptive loop filter processing.
  • the loop filter 131 is configured by, for example, a two-dimensional Wiener filter. Specifically, the loop filter 131 is applied using an adaptive loop filter coefficient.
  • In the case of the image encoding device 11 of FIG. 1, the adaptive loop filter coefficients are calculated, for each class obtained by block-based class classification, so as to minimize the residual from the original image supplied from the screen rearrangement buffer 22, and the calculated values are used.
  • In the case of the image decoding device 51 of FIG. 3, the adaptive loop filter coefficients calculated by the image encoding device 11 of FIG. 1 are decoded and used.
  • the adaptive loop filter unit 155 reads out the reconstructed pixel held in the line memory 151, and performs the filtering process by the loop filter 131 on the read out reconstructed pixel.
  • the adaptive loop filter unit 155 outputs the pixel after the filter processing by the loop filter 131 to the calculation unit 156.
  • At the LCU boundary, the calculation unit 156 performs arithmetic processing, for example addition, on the pixels filtered by the deblocking H filter unit 152, the deblocking V filter unit 153, the adaptive offset filter unit 154, and the adaptive loop filter unit 155.
  • Note that this arithmetic processing may include not only addition but also processing such as subtraction and multiplication.
  • Then, the calculation unit 156 outputs the calculation result to the subsequent stage.
  • the subsequent stage is the frame memory 32 in the case of the image encoding device 11 of FIG. 1, and the screen rearrangement buffer 67 and the frame memory 69 in the case of the image decoding device 51 of FIG. 3.
  • FIG. 8 shows an example of pixels at the LCU boundary.
  • each circle represents a reconstructed pixel input to the in-loop filter 31, and the line at the bottom of the figure represents the LCU boundary.
  • In the case of the luminance signal, the deblocking V filter unit 153 enters a standby state in the processing of the pixels on the first to third lines from the LCU boundary until the pixels for four lines of the next LCU are input. That is, since the deblocking V filter unit 153 can process only up to the fourth line from the LCU boundary, the other filter units operating in parallel need to align their output phase with the deblocking V filter unit 153.
  • Accordingly, both the adaptive offset filter unit 154 and the adaptive loop filter unit 155 complete the processing up to the fourth line from the LCU boundary, and enter the standby state in the processing of the pixel on the third line from the LCU boundary.
  • Thereafter, when the pixels for four lines of the next LCU are input, the deblocking H filter unit 152, the deblocking V filter unit 153, the adaptive offset filter unit 154, and the adaptive loop filter unit 155 start processing so as to output from the pixel on the third line from the LCU boundary.
  • the deblocking H filter unit 152 needs to hold the pixels in the first to third lines from the LCU boundary in the line memory 151.
  • For the deblocking V filter unit 153, the pixels on the first to fourth lines from the LCU boundary need to be held in the line memory 151.
  • For the adaptive offset filter unit 154, the pixels on the first to fourth lines from the LCU boundary need to be held in the line memory 151.
  • the adaptive loop filter unit 155 needs to hold the pixels in the first to fifth lines from the LCU boundary in the line memory 151.
  • Therefore, the line memory 151 holds the pixels for five lines, from the first to fifth lines from the LCU boundary.
  • As a result, the number of lines of line memory can be reduced by four as compared with the conventional nine lines of pixels described above with reference to FIG. 6.
  • In the case of the color difference signal, the deblocking V filter unit 153 enters a standby state in the processing of the pixels on the first and second lines from the LCU boundary until the pixels for two lines of the next LCU are input. That is, since the deblocking V filter unit 153 can process only up to the third line from the LCU boundary, the other filter units need to align their output phase with the deblocking V filter unit 153.
  • Accordingly, both the adaptive offset filter unit 154 and the adaptive loop filter unit 155 complete the processing up to the third line from the LCU boundary, and enter the standby state in the processing of the pixel on the second line from the LCU boundary.
  • Thereafter, when the pixels for two lines of the next LCU are input, the deblocking H filter unit 152, the deblocking V filter unit 153, the adaptive offset filter unit 154, and the adaptive loop filter unit 155 start processing so as to output from the pixel on the second line from the LCU boundary.
  • the deblocking H filter unit 152 needs to hold the pixels on the first and second lines from the LCU boundary in the line memory 151.
  • For the deblocking V filter unit 153, the pixels on the first and second lines from the LCU boundary need to be held in the line memory 151.
  • For the adaptive offset filter unit 154, the pixels on the first to third lines from the LCU boundary need to be held in the line memory 151.
  • For the adaptive loop filter unit 155, for example when the color difference is processed with 5 taps, the pixels on the first to fourth lines from the LCU boundary need to be held in the line memory 151.
  • Therefore, the line memory 151 holds the pixels for four lines, from the first to fourth lines from the LCU boundary.
  • As a result, the number of lines of line memory can be reduced by three as compared with the conventional seven lines of pixels described above with reference to FIG. 6.
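  • Because the four filter units run in parallel on one shared line memory, the buffer only has to satisfy the largest single requirement rather than the sum of all of them. A small check, with per-filter line counts taken from the description above:

```python
# Lines each parallel filter unit needs held in the shared line
# memory 151 at the LCU boundary (counts from the text above).
SHARED_NEED = {
    "luma":   {"db_h": 3, "db_v": 4, "sao": 4, "alf": 5},
    "chroma": {"db_h": 2, "db_v": 2, "sao": 3, "alf": 4},
}

def shared_lines(component):
    # One shared buffer: the maximum requirement governs, not the sum.
    return max(SHARED_NEED[component].values())
```

  • This gives five lines for luminance (a saving of four versus the conventional nine) and four lines for color difference (a saving of three versus the conventional seven).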
  • FIG. 9 is a block diagram illustrating a more detailed configuration example of the in-loop filter of FIG.
  • Whereas the in-loop filter 31 in FIG. 7 shows the configuration in the case of the LCU boundary, the in-loop filter 31 in the example of FIG. 9 shows a detailed configuration including the case of the LCU boundary.
  • the in-loop filter 31 is configured to include a line memory 151, a deblocking H filter unit 152, and a deblocking V filter unit 153.
  • the in-loop filter 31 is configured to include an adaptive offset filter unit 154, an adaptive loop filter unit 155, a calculation unit 156, and a coefficient memory 171.
  • the in-loop filter 31 is different from the in-loop filter 31 of FIG. 7 only in that a coefficient memory 171 is added.
  • The reconstructed pixels, which are the input pixels from the previous stage, are input to the line memory 151, the deblocking H filter unit 152, the deblocking V filter unit 153, the adaptive offset filter unit 154, the adaptive loop filter unit 155, and the calculation unit 156.
  • The line memory 151 is configured to hold the reconstructed pixels for five lines from the LCU boundary for the luminance signal, and for three lines from the LCU boundary for the color difference signal.
  • At positions other than the LCU boundary, the deblocking H filter unit 152, the deblocking V filter unit 153, the adaptive offset filter unit 154, and the adaptive loop filter unit 155 each apply filter processing to the reconstructed pixels input from the previous stage, and output the filtered pixels to the calculation unit 156.
  • At the LCU boundary, the deblocking H filter unit 152, the deblocking V filter unit 153, the adaptive offset filter unit 154, and the adaptive loop filter unit 155 apply filter processing to the reconstructed pixels held in the line memory 151, and output the filtered pixels to the calculation unit 156.
  • In doing so, the deblocking H filter unit 152, the deblocking V filter unit 153, the adaptive offset filter unit 154, and the adaptive loop filter unit 155 resume processing with their output phases aligned.
  • The calculation unit 156 includes subtraction units 181-1 to 181-4, multiplication units 182-1 to 182-4, and an addition unit 183, and calculates the output P after each filter process as a linear sum. Note that, as the input pixel of the calculation unit 156, the reconstructed pixel from the previous stage is normally used, but at the LCU boundary the pixel held in the line memory 151 is read out and used.
  • The subtraction unit 181-1 subtracts the input pixel P_in from the pixel P_DB_H filtered by the deblocking H filter unit 152, and outputs the result to the multiplication unit 182-1.
  • The multiplication unit 182-1 multiplies the input (P_DB_H − P_in) from the subtraction unit 181-1 by the coefficient C_DB_H corresponding to the deblocking H filter unit 152 from the coefficient memory 171, and outputs the result to the addition unit 183.
  • The subtraction unit 181-2 subtracts the input pixel P_in from the pixel P_DB_V filtered by the deblocking V filter unit 153, and outputs the result to the multiplication unit 182-2.
  • The multiplication unit 182-2 multiplies the input (P_DB_V − P_in) from the subtraction unit 181-2 by the coefficient C_DB_V corresponding to the deblocking V filter unit 153 from the coefficient memory 171, and outputs the result to the addition unit 183.
  • The subtraction unit 181-3 subtracts the input pixel P_in from the pixel P_SAO filtered by the adaptive offset filter unit 154, and outputs the result to the multiplication unit 182-3.
  • The multiplication unit 182-3 multiplies the input (P_SAO − P_in) from the subtraction unit 181-3 by the coefficient C_SAO corresponding to the adaptive offset filter unit 154 from the coefficient memory 171, and outputs the result to the addition unit 183.
  • The subtraction unit 181-4 subtracts the input pixel P_in from the pixel P_ALF filtered by the adaptive loop filter unit 155, and outputs the result to the multiplication unit 182-4.
  • The multiplication unit 182-4 multiplies the input (P_ALF − P_in) from the subtraction unit 181-4 by the coefficient C_ALF corresponding to the adaptive loop filter unit 155 from the coefficient memory 171, and outputs the result to the addition unit 183.
  • The addition unit 183 adds the input pixel P_in and the multiplication results from the multiplication units 182-1 to 182-4 as in equation (1), and outputs the addition result P to the frame memory.
  • the coefficient memory 171 stores a coefficient corresponding to each filter.
  • Specifically, the coefficient memory 171 stores the coefficient C_DB_H corresponding to the deblocking H filter unit 152 and the coefficient C_DB_V corresponding to the deblocking V filter unit 153.
  • Further, the coefficient memory 171 stores the coefficient C_SAO corresponding to the adaptive offset filter unit 154 and the coefficient C_ALF corresponding to the adaptive loop filter unit 155.
  • These coefficients may be settable by the user via an operation input unit (not shown), and may be set according to the characteristics of the image.
  • For example, depending on the characteristics of the image, the coefficient C_DB_H corresponding to the deblocking H filter unit 152 and the coefficient C_DB_V corresponding to the deblocking V filter unit 153 are set larger than the other coefficients.
  • Alternatively, the coefficient C_SAO corresponding to the adaptive offset filter unit 154 is set larger than the other coefficients.
  • Alternatively, the coefficient C_ALF corresponding to the adaptive loop filter unit 155 is set larger than the other coefficients.
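  • The linear sum of equation (1) computed by the calculation unit 156 can be sketched as follows. This is a hedged illustration: the function and variable names are ours, not from the patent, and the per-filter coefficients C_i stand in for the values held in the coefficient memory 171.

```python
def blend_filter_outputs(p_in, filtered, coeffs):
    """Linear sum of equation (1): P = P_in + sum_i C_i * (P_i - P_in).

    p_in     -- reconstructed input pixel value
    filtered -- filter outputs, e.g. {"DB_H": ..., "DB_V": ..., "SAO": ..., "ALF": ...}
    coeffs   -- coefficients C_i from the coefficient memory, same keys
    """
    p = p_in
    for name, p_i in filtered.items():
        # subtraction unit -> multiplication unit -> addition unit
        p += coeffs[name] * (p_i - p_in)
    return p

# Example: input pixel 100, four filter outputs, equal coefficients of 0.25.
out = blend_filter_outputs(
    100.0,
    {"DB_H": 104.0, "DB_V": 96.0, "SAO": 102.0, "ALF": 98.0},
    {"DB_H": 0.25, "DB_V": 0.25, "SAO": 0.25, "ALF": 0.25},
)
print(out)  # 100.0 (the symmetric deviations cancel in this example)
```

  • Setting one coefficient larger than the others, as described above, simply biases P toward that filter's output.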
  • FIG. 10 is an example of the in-loop filter process in step S22 in FIG. 2 described above, and is an example of the in-loop filter process in step S58 in FIG.
  • This in-loop filter process starts from the upper left LCU in the screen.
  • At positions other than the LCU boundary, the reconstructed pixels constituting the LCU are input from the previous stage to each unit of the in-loop filter 31, and the processes of steps S111 to S114 are executed in parallel using the reconstructed pixels input from the previous stage.
  • At the LCU boundary, the reconstructed pixels constituting the LCU are input from the previous stage to the line memory 151, and the processes of steps S111 to S114 are executed in parallel using the reconstructed pixels held in the line memory 151.
  • In either case, the processing is started with the output phases aligned.
  • Note that the input to each unit is switched by a switch or the like in each unit of the in-loop filter 31.
  • In step S111, the deblocking H filter unit 152 applies the filter processing by the H filter 111 to the reconstructed pixels from the line memory 151 or from the previous stage, and outputs the filtered pixels to the calculation unit 156.
  • In step S112, the deblocking V filter unit 153 applies the filter processing by the V filter 112 to the reconstructed pixels from the line memory 151 or from the previous stage, and outputs the filtered pixels to the calculation unit 156.
  • In step S113, the adaptive offset filter unit 154 applies the filter processing by the offset filter 121 to the reconstructed pixels from the line memory 151 or from the previous stage, and outputs the filtered pixels to the calculation unit 156.
  • In step S114, the adaptive loop filter unit 155 applies the filter processing by the loop filter 131 to the reconstructed pixels from the line memory 151 or from the previous stage, and outputs the filtered pixels to the calculation unit 156.
  • In step S115, the calculation unit 156 combines the four results of the filter processing by the deblocking H filter unit 152, the deblocking V filter unit 153, the adaptive offset filter unit 154, and the adaptive loop filter unit 155, for example as the linear sum of equation (1) described above, and outputs the calculation result to the subsequent stage.
  • In step S116, the deblocking H filter unit 152, the deblocking V filter unit 153, the adaptive offset filter unit 154, and the adaptive loop filter unit 155 determine whether the current pixel is the last pixel in the LCU. If it is determined in step S116 that it is not the last pixel in the LCU, the process returns to step S111 and the subsequent processing is repeated.
  • If it is determined in step S116 that it is the last pixel in the LCU, the process proceeds to step S117.
  • In step S117, the deblocking H filter unit 152, the deblocking V filter unit 153, the adaptive offset filter unit 154, and the adaptive loop filter unit 155 determine whether the current pixel is the last pixel in the screen. If it is determined in step S117 that it is not the last pixel in the screen, the process proceeds to step S118.
  • In step S118, the deblocking H filter unit 152, the deblocking V filter unit 153, the adaptive offset filter unit 154, and the adaptive loop filter unit 155 select the next LCU, and the process returns to step S111. That is, the processing from step S111 onward is repeated for the LCU selected in step S118.
  • If it is determined in step S117 that it is the last pixel in the screen, the in-loop filter process is terminated.
  • As described above, the inputs to the four filter processes constituting the in-loop filter 31 are the reconstructed pixels processed in parallel, and the line memory is shared at the LCU boundary; therefore, the number of line memories can be reduced as described above with reference to FIG.
  • In addition, the in-loop filter 31 has a simple configuration in which switching between normal processing and LCU-boundary processing only selects whether the pixels from the previous stage or the pixels from the line memory are input. This makes it easy to control the case where the deblocking filter, the adaptive offset filter, and the adaptive loop filter each perform frame processing by software or the like.
  • Furthermore, since the coefficient corresponding to each filter is set in the calculation unit 156 according to the characteristics of the image, a better image suited to those characteristics can be obtained than by simply combining the outputs at a fixed ratio.
  • In the above description, the calculation unit 156 calculates and outputs the linear sum of the filter outputs.
  • Alternatively, a simple average of the filter outputs may be used, or a weighted average of the filter outputs may be calculated.
  • For example, the calculation unit 156 can determine the weights of the weighted average for each pixel according to that pixel's distance to the vertical boundary and distance to the horizontal boundary.
  • FIG. 11 is an explanatory diagram for explaining the determination of the weight for the weighted average by the calculation unit 156.
  • Here, the weighted average of the output of the deblocking H filter unit 152 for the vertical boundary Vz and the output of the deblocking V filter unit 153 for the horizontal boundary Hz is described as an example.
  • The distance Dv between the target pixel Pz and the nearest vertical boundary Vz is three pixels.
  • The distance Dh between the target pixel Pz and the nearest horizontal boundary Hz is two pixels.
  • The distance Dh is smaller than the distance Dv.
  • Accordingly, the calculation unit 156 can set the weight for the output of the deblocking V filter unit 153 for the horizontal boundary Hz larger than the weight for the output of the deblocking H filter unit 152 for the vertical boundary Vz.
  • In this example, the weight ratio between the H filter output P_DB_H for the vertical boundary Vz and the V filter output P_DB_V for the horizontal boundary Hz is determined to be 2:3.
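  • A minimal sketch of this distance-based weighting follows. The function name is ours, and the specific rule assumed here, that each filter's weight is proportional to the target pixel's distance to the other filter's boundary, is an assumption chosen to match the 2:3 example above.

```python
def weighted_deblock_average(p_db_h, p_db_v, dist_v, dist_h):
    """Weighted average of the H-filter and V-filter deblocking outputs.

    dist_v -- distance from the target pixel to the nearest vertical boundary
    dist_h -- distance from the target pixel to the nearest horizontal boundary
    The H-filter weight is taken proportional to dist_h and the V-filter
    weight proportional to dist_v, so the filter for the closer boundary
    dominates (assumption consistent with the 2:3 example in the text).
    """
    w_h, w_v = dist_h, dist_v
    return (w_h * p_db_h + w_v * p_db_v) / (w_h + w_v)

# Target pixel Pz with Dv = 3 and Dh = 2 -> weight ratio P_DB_H : P_DB_V = 2 : 3
print(weighted_deblock_average(110.0, 90.0, dist_v=3, dist_h=2))  # 98.0
```

  • With Dh = Dv the weights are equal, which matches the 1:1 case for pixels equidistant from both boundaries.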
  • Alternatively, the in-loop filter 31 could include a single two-dimensional filter that performs the V filtering, the H filtering, and the weighted averaging at the same time; however, such an implementation becomes extremely complicated.
  • In contrast, if the weighted average is calculated after the two one-dimensional filters are executed in parallel, as in the example of FIG. 11, processing substantially equivalent to a two-dimensional filter can be realized easily while reusing the existing deblocking filter mechanism.
  • FIG. 12 is an explanatory diagram for explaining an example of the weight for the weighted average determined according to the example of FIG.
  • In FIG. 12, the 36 (6 × 6) pixels located around one intersection of a vertical boundary and a horizontal boundary (the pixels at the above-described overlapping positions) are shown.
  • For pixels located at equal distances from the two boundaries, the weight ratio between the filter output P_DB_H and the filter output P_DB_V is 1:1 (or 2:2 or 3:3).
  • Instead, the calculation unit 156 may determine the weights of the weighted average for each pixel according to the edge strengths of the vertical boundary and the horizontal boundary corresponding to that pixel.
  • In that case, the filter-output weight for the boundary with the stronger edge may be set greater than the filter-output weight for the boundary with the weaker edge.
  • FIG. 13 shows the pixel values output from the calculation unit 156 according to the determination of whether filtering is needed at the vertical boundary and at the horizontal boundary.
  • For a pixel filtered by only one of the deblocking H filter unit 152 and the deblocking V filter unit 153, the calculation unit 156 selects the output of the filter unit that actually performed the filtering. For a pixel filtered by neither the deblocking H filter unit 152 nor the deblocking V filter unit 153, the calculation unit 156 outputs the pixel value input to the in-loop filter 31 as it is.
  • The weighted average described above can also be applied to the four filters: the deblocking H filter unit 152, the deblocking V filter unit 153, the adaptive offset filter unit 154, and the adaptive loop filter unit 155.
  • For example, the weights of the outputs of the adaptive offset filter unit 154 and the adaptive loop filter unit 155 are set to 1, and the weighted average described above is applied to the outputs of the deblocking H filter unit 152 and the deblocking V filter unit 153. In this way, an image from which block distortion has been optimally removed can be obtained.
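  • The selection behavior of FIG. 13 can be sketched as follows. This is a hedged illustration: the function name and the distance-proportional weights for the both-filtered case are our assumptions, not taken verbatim from the patent.

```python
def select_deblock_output(p_in, p_db_h, p_db_v, h_filtered, v_filtered,
                          dist_v=1, dist_h=1):
    """Choose the calculation unit's output from the H/V deblocking results.

    h_filtered / v_filtered -- whether filtering was judged necessary at the
    vertical boundary (H filter) / horizontal boundary (V filter).
    """
    if h_filtered and v_filtered:
        # Both filters applied: weighted average, weights proportional to the
        # distance to the other boundary (as in the FIG. 11 example).
        return (dist_h * p_db_h + dist_v * p_db_v) / (dist_h + dist_v)
    if h_filtered:
        return p_db_h          # only the H filter actually filtered
    if v_filtered:
        return p_db_v          # only the V filter actually filtered
    return p_in                # neither filtered: pass the input through

print(select_deblock_output(100.0, 104.0, 96.0, True, False))   # 104.0
print(select_deblock_output(100.0, 104.0, 96.0, False, False))  # 100.0
```

  • The default equal distances reduce the both-filtered case to a simple average.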
  • the configuration of the in-loop filter 31 is not limited to the four parallels described above, and may be configured as shown in FIG.
  • In FIG. 14, patterns of parallel processing with which the in-loop filter 31 can be configured are shown.
  • FIG. 14 will be described in order from the left.
  • In the first pattern from the left, the in-loop filter 31 is configured in four parallels: the deblocking H filter (D-H), the deblocking V filter (D-V), the adaptive offset filter (SAO), and the adaptive loop filter (ALF).
  • In the figure, D-H denotes the deblocking H filter, D-V the deblocking V filter, SAO the adaptive offset filter, and ALF the adaptive loop filter.
  • In this case, the deblocking H filter (D-H), the deblocking V filter (D-V), the adaptive offset filter (SAO), and the adaptive loop filter (ALF) each perform processing using the reconstructed pixels, and the four filter results are output to the calculation unit 156 with their phases aligned.
  • In the second pattern from the left, the in-loop filter 31 is configured in three parallels: the deblocking H filter (D-H), the deblocking V filter (D-V), and the series connection of the adaptive offset filter (SAO) followed by the adaptive loop filter (ALF).
  • In this case, the deblocking H filter (D-H), the deblocking V filter (D-V), and the adaptive offset filter (SAO) perform processing using the reconstructed pixels.
  • The adaptive loop filter (ALF) performs processing using the pixels filtered by the adaptive offset filter (SAO). The three filter results from the deblocking H filter (D-H), the deblocking V filter (D-V), and the adaptive loop filter (ALF) are then output to the calculation unit 156 with their phases aligned. Note that a configuration example of the in-loop filter 31 configured in this three-parallel arrangement is described later with reference to FIG.
  • In the third pattern from the left, the in-loop filter 31 is configured in three parallels: the series connection of the deblocking H filter (D-H) followed by the deblocking V filter (D-V), the adaptive offset filter (SAO), and the adaptive loop filter (ALF).
  • In this case, the deblocking H filter (D-H), the adaptive offset filter (SAO), and the adaptive loop filter (ALF) perform processing using the reconstructed pixels.
  • The deblocking V filter (D-V) performs processing using the pixels filtered by the deblocking H filter (D-H). The three filter results from the deblocking V filter (D-V), the adaptive offset filter (SAO), and the adaptive loop filter (ALF) are then output to the calculation unit 156 with their phases aligned.
  • In the fourth pattern from the left, the in-loop filter 31 is configured in two parallels: the series connection of the deblocking H filter (D-H) followed by the deblocking V filter (D-V), and the series connection of the adaptive offset filter (SAO) followed by the adaptive loop filter (ALF).
  • In this case, the deblocking H filter (D-H) and the adaptive offset filter (SAO) perform processing using the reconstructed pixels.
  • The deblocking V filter (D-V) performs processing using the pixels filtered by the deblocking H filter (D-H), and the adaptive loop filter (ALF) performs processing using the pixels filtered by the adaptive offset filter (SAO). The two filter results from the deblocking V filter (D-V) and the adaptive loop filter (ALF) are then output to the calculation unit 156 with their phases aligned.
  • In the fifth pattern from the left, the in-loop filter 31 is configured in two parallels: the series connection of the deblocking H filter (D-H), the deblocking V filter (D-V), and the adaptive offset filter (SAO); and the adaptive loop filter (ALF).
  • In this case, the deblocking H filter (D-H) and the adaptive loop filter (ALF) perform processing using the reconstructed pixels.
  • The deblocking V filter (D-V) performs processing using the pixels filtered by the deblocking H filter (D-H), and the adaptive offset filter (SAO) performs processing using the pixels filtered by the deblocking V filter (D-V). The two filter results from the adaptive offset filter (SAO) and the adaptive loop filter (ALF) are then output to the calculation unit 156 with their phases aligned.
  • In the sixth pattern from the left, the in-loop filter 31 is configured in two parallels, the adaptive offset filter (SAO) and the adaptive loop filter (ALF), following the series connection of the deblocking H filter (D-H) and the deblocking V filter (D-V).
  • In this case, the deblocking H filter (D-H) performs processing using the reconstructed pixels, and the deblocking V filter (D-V) performs processing using the pixels filtered by the deblocking H filter (D-H).
  • The adaptive offset filter (SAO) and the adaptive loop filter (ALF) perform processing using the pixels filtered by the deblocking V filter (D-V). The two filter results from the adaptive offset filter (SAO) and the adaptive loop filter (ALF) are then output to the calculation unit 156 with their phases aligned.
  • The difference between the fifth and sixth patterns from the left is that the input to the adaptive loop filter (ALF) in the fifth pattern is the reconstructed pixels, whereas the input to the adaptive loop filter (ALF) in the sixth pattern is the pixels after the deblocking filters.
  • Further, in the patterns in which the adaptive offset filter (SAO) and the adaptive loop filter (ALF) are processed in series, the adaptive loop filter (ALF) can instead be arranged after the calculation unit 156.
  • When the adaptive loop filter (ALF) is arranged after the calculation unit 156, the three filter results from the deblocking H filter (D-H), the deblocking V filter (D-V), and the adaptive offset filter (SAO) are output to the calculation unit 156 with their phases aligned, and the adaptive loop filter (ALF) performs processing using the pixels after the calculation by the calculation unit 156 and outputs them to the subsequent stage.
  • FIG. 15 is a block diagram illustrating a configuration example of the in-loop filter.
  • The in-loop filter 31 shown in FIG. 15 is a configuration example of the three-parallel pattern shown second from the left in FIG.
  • The in-loop filter 31 of FIG. 15 is common to the in-loop filter 31 of FIG. 9 in that it includes the deblocking H filter unit 152, the deblocking V filter unit 153, and the adaptive offset filter unit 154.
  • The in-loop filter 31 of FIG. 15 differs from the in-loop filter 31 of FIG. 9 in that a line memory 201, an adaptive loop filter unit 202, a calculation unit 203, and a coefficient memory 204 are provided instead of the line memory 151, the adaptive loop filter unit 155, the calculation unit 156, and the coefficient memory 171.
  • The reconstructed pixels, which are the input pixels from the previous stage, are input to the line memory 201, the deblocking H filter unit 152, the deblocking V filter unit 153, the adaptive offset filter unit 154, and the calculation unit 203.
  • The line memory 201 is configured to hold the reconstructed pixels for four lines from the LCU boundary for the luminance signal, and for three lines from the LCU boundary for the color difference signal.
  • the deblocking H filter unit 152, the deblocking V filter unit 153, and the adaptive offset filter unit 154 share the line memory 201 in which the reconstructed pixels are held.
  • At positions other than the LCU boundary, the deblocking H filter unit 152, the deblocking V filter unit 153, and the adaptive offset filter unit 154 perform their filter processing in parallel on the reconstructed pixels input from the previous stage. The deblocking H filter unit 152 and the deblocking V filter unit 153 then each output the filtered pixels to the calculation unit 203, and the adaptive offset filter unit 154 outputs the filtered pixels to the adaptive loop filter unit 202.
  • At the LCU boundary, the deblocking H filter unit 152, the deblocking V filter unit 153, and the adaptive offset filter unit 154 perform their filter processing in parallel on the reconstructed pixels held in the line memory 201. The deblocking H filter unit 152 and the deblocking V filter unit 153 then each output the filtered pixels to the calculation unit 203, and the adaptive offset filter unit 154 outputs the filtered pixels to the adaptive loop filter unit 202.
  • In doing so, the deblocking H filter unit 152, the deblocking V filter unit 153, and the adaptive loop filter unit 202 resume processing with their output phases aligned. For this reason, the adaptive offset filter unit 154 resumes processing at a timing that allows the adaptive loop filter unit 202 to align its output phase with the other filter units.
  • The adaptive loop filter unit 202 is provided between the adaptive offset filter unit 154 and the calculation unit 203, and is configured to include the loop filter 131 described above and a line memory 211.
  • The line memory 211 is configured to hold the pixels after the adaptive offset filter for four lines from the LCU boundary for the luminance signal, and for four lines from the LCU boundary for the color difference signal.
  • At positions other than the LCU boundary, the adaptive loop filter unit 202 classifies the pixels from the adaptive offset filter unit 154 on a block basis and performs the adaptive loop filter processing by the loop filter 131.
  • At the LCU boundary, the adaptive loop filter unit 202 temporarily holds the pixels from the adaptive offset filter unit 154 in the line memory 211, classifies the pixels in the line memory 211 on a block basis, and performs the adaptive loop filter processing by the loop filter 131.
  • The adaptive loop filter unit 202 outputs the pixels filtered by the loop filter 131 to the calculation unit 203.
  • The calculation unit 203 includes subtraction units 181-1 to 181-3, multiplication units 182-1 to 182-3, and an addition unit 183, and calculates the output P after each filter process as a linear sum. Note that, as the input pixel of the calculation unit 203, the reconstructed pixel from the previous stage is normally used, but at the LCU boundary the pixel held in the line memory 201 is read out and used.
  • The subtraction unit 181-1 subtracts the input pixel P_in from the pixel P_DB_H filtered by the deblocking H filter unit 152, and outputs the result to the multiplication unit 182-1.
  • The multiplication unit 182-1 multiplies the input (P_DB_H − P_in) from the subtraction unit 181-1 by the coefficient C_DB_H corresponding to the deblocking H filter unit 152 from the coefficient memory 204, and outputs the result to the addition unit 183.
  • The subtraction unit 181-2 subtracts the input pixel P_in from the pixel P_DB_V filtered by the deblocking V filter unit 153, and outputs the result to the multiplication unit 182-2.
  • The multiplication unit 182-2 multiplies the input (P_DB_V − P_in) from the subtraction unit 181-2 by the coefficient C_DB_V corresponding to the deblocking V filter unit 153 from the coefficient memory 204, and outputs the result to the addition unit 183.
  • The subtraction unit 181-3 subtracts the input pixel P_in from the pixel P_ALF filtered by the adaptive loop filter unit 202, and outputs the result to the multiplication unit 182-3.
  • The multiplication unit 182-3 multiplies the input (P_ALF − P_in) from the subtraction unit 181-3 by the coefficient C_SAO/ALF corresponding to the adaptive offset filter unit 154 and the adaptive loop filter unit 202 from the coefficient memory 204, and outputs the result to the addition unit 183.
  • The coefficient memory 204 stores a coefficient corresponding to each filter.
  • Specifically, the coefficient memory 204 stores the coefficient C_DB_H corresponding to the deblocking H filter unit 152 and the coefficient C_DB_V corresponding to the deblocking V filter unit 153.
  • Further, the coefficient memory 204 stores the coefficient C_SAO/ALF corresponding to the adaptive offset filter unit 154 and the adaptive loop filter unit 202. These coefficients can also be set by the user via an operation input unit (not shown), similarly to the coefficients in the coefficient memory 171.
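  • Analogously to equation (1), the linear sum computed by the calculation unit 203 can be sketched as follows; the names are illustrative, not verbatim from the patent, and the SAO and ALF stages run in series, so a single coefficient C_SAO/ALF scales their combined result P_ALF.

```python
def blend_three_parallel(p_in, p_db_h, p_db_v, p_alf, c_db_h, c_db_v, c_sao_alf):
    """Output of the calculation unit 203 in the three-parallel configuration."""
    return (p_in
            + c_db_h * (p_db_h - p_in)      # deblocking H branch
            + c_db_v * (p_db_v - p_in)      # deblocking V branch
            + c_sao_alf * (p_alf - p_in))   # SAO -> ALF series branch

# Symmetric deblocking deviations cancel; the SAO->ALF branch shifts the output.
print(blend_three_parallel(100.0, 106.0, 94.0, 103.0, 0.5, 0.5, 1.0))  # 103.0
```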
  • FIG. 16 shows an example of LCU boundary pixels in the case of a luminance signal.
  • In FIG. 16, the circles on the first to fourth lines from the LCU boundary represent pixels not yet processed by the deblocking V filter, which need to be stored in the line memory 201 at the LCU boundary.
  • The circles on the second to fifth lines from the LCU boundary represent pixels after the offset filter (SAO), which need to be stored in the line memory 211.
  • The circles above the fourth line from the LCU boundary represent pixels after the loop filter (ALF).
  • Until the pixels for four lines of the next LCU are input, the deblocking V filter unit 153 enters a standby state in the processing of the pixels on the first to third lines from the LCU boundary. That is, since the deblocking V filter unit 153 can process only up to the fourth line from the LCU boundary, the deblocking H filter unit 152 and the adaptive loop filter unit 202 need to align their output phases with the deblocking V filter unit 153.
  • In addition, the adaptive loop filter unit 202 processes the pixels after the filtering (SAO) by the adaptive offset filter unit 154. Therefore, the adaptive offset filter unit 154 completes the processing up to the second line from the LCU boundary so that the adaptive loop filter unit 202 can process from the third line. Then, as shown by the post-SAO pixels in FIG. 16, the adaptive offset filter unit 154 enters a standby state in the processing of the pixels on the first line from the next LCU boundary.
  • For these reasons, the deblocking H filter unit 152 and the deblocking V filter unit 153 start processing so as to output from the pixels on the third line from the LCU boundary, and the adaptive offset filter unit 154 starts processing so as to output from the pixels on the first line from the LCU boundary.
  • The deblocking H filter unit 152 needs to hold the pixels on the first to third lines from the LCU boundary in the line memory 201.
  • For the deblocking V filter unit 153, the pixels on the first to fourth lines from the LCU boundary need to be held in the line memory 201.
  • For the adaptive offset filter unit 154, the pixels on the first and second lines from the LCU boundary need to be held in the line memory 201.
  • Accordingly, the line memory 201 only needs to hold pixels for four lines, from the first to the fourth line from the LCU boundary.
  • Further, the adaptive offset filter unit 154 performs processing up to the second line from the LCU boundary. Therefore, the adaptive loop filter unit 202 completes the processing up to the fourth line from the LCU boundary, and enters a standby state in the processing of the pixels on the third line from the next LCU boundary.
  • Since the pixels on the first line from the LCU boundary after the filter processing by the adaptive offset filter unit 154 are input, the adaptive loop filter unit 202 starts processing so as to output from the pixels on the third line from the LCU boundary.
  • For this purpose, the adaptive loop filter unit 202 needs to hold the pixels for four lines, from the second to the fifth line from the LCU boundary, in the line memory 211.
  • As described above, for the luminance signal, the in-loop filter 31 of FIG. 15 requires the line memory 201 for four lines and the line memory 211 for four lines; compared with the conventional nine lines of pixels, the number of line memories can therefore be reduced by one line.
  • FIG. 17 shows an example of LCU boundary pixels in the case of a color difference signal.
  • In FIG. 17, the circles on the first to third lines from the LCU boundary represent pixels not yet processed by the deblocking V filter, which need to be stored in the line memory 201 at the LCU boundary.
  • The circles on the second to fifth lines from the LCU boundary represent pixels after the offset filter (SAO), which need to be stored in the line memory 211.
  • The circles above the fourth line from the LCU boundary represent pixels after the loop filter (ALF).
  • Until the pixels for two lines of the next LCU are input, the deblocking V filter unit 153 enters a standby state in the processing of the pixels on the first and second lines from the LCU boundary.
  • The adaptive offset filter unit 154 can only complete the processing up to the second line from the LCU boundary.
  • The adaptive loop filter unit 202 can only complete the processing up to the fourth line from the LCU boundary, and enters a standby state in the processing of the pixels on the third line from the LCU boundary.
  • Consequently, the deblocking H filter unit 152 and the deblocking V filter unit 153 can process from the pixels on the second line from the LCU boundary. However, since the adaptive loop filter unit 202 can process only from the third line from the LCU boundary, it is necessary to align the output phases with the adaptive loop filter unit 202.
  • For these reasons, the deblocking H filter unit 152 and the deblocking V filter unit 153 start processing so as to output from the pixels on the third line from the LCU boundary, and the adaptive offset filter unit 154 starts processing so as to output from the pixels on the first line from the LCU boundary.
  • The deblocking H filter unit 152 needs to hold the pixels on the first to third lines from the LCU boundary in the line memory 201.
  • For the deblocking V filter unit 153, the pixels on the first to third lines from the LCU boundary need to be held in the line memory 201.
  • For the adaptive offset filter unit 154, the pixels on the first and second lines from the LCU boundary need to be held in the line memory 201.
  • Accordingly, the line memory 201 only needs to hold pixels for three lines, from the first to the third line from the LCU boundary.
  • Since the pixels on the first line from the LCU boundary after the filter processing by the adaptive offset filter unit 154 are input, the adaptive loop filter unit 202 outputs from the pixels on the third line from the LCU boundary.
  • For this purpose, the adaptive loop filter unit 202 needs to hold the pixels for four lines, from the second to the fifth line from the LCU boundary, in the line memory 211.
  • As described above, for the color difference signal, the in-loop filter 31 of FIG. 15 requires the line memory 201 for three lines and the line memory 211 for four lines. This is equivalent to the conventional line memory for seven lines of pixels; however, since the number of line memories is reduced in the case of the luminance signal, the effect of the in-loop filter 31 of FIG. 15 can still be obtained.
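  • The line-memory accounting above can be summarized in a short check (line counts taken from the text; the baselines of nine and seven lines are the conventional figures cited above; the variable names are ours):

```python
# Line-memory requirements of the FIG. 15 in-loop filter, per signal type.
requirements = {
    # signal: (lines in line memory 201, lines in line memory 211, conventional lines)
    "luminance":        (4, 4, 9),
    "color difference": (3, 4, 7),
}

for signal, (mem201, mem211, conventional) in requirements.items():
    total = mem201 + mem211
    saved = conventional - total
    print(f"{signal}: {total} lines total, saving {saved} line(s) vs conventional")
```

  • This reproduces the conclusion in the text: one line saved for luminance, break-even for the color difference signal.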
  • Next, an example of the in-loop filter process performed by the in-loop filter 31 of FIG. 15 is described; this is an example of the in-loop filter process in step S22 of FIG. 2 described above and of the in-loop filter process in step S58 of FIG.
  • This in-loop filter process starts from the upper left LCU in the screen.
  • At positions other than the LCU boundary, the reconstructed pixels constituting the LCU are input from the previous stage to each unit of the in-loop filter 31, and the processes of steps S201 to S203 are executed in parallel using the reconstructed pixels input from the previous stage.
  • At the LCU boundary, the reconstructed pixels constituting the LCU are input from the previous stage to the line memory 201, and the processes of steps S201 to S203 are executed in parallel using the reconstructed pixels held in the line memory 201.
  • In steps S201, S202, and S204, processing is started with the output phases aligned, and in step S203, processing is started at a timing such that the output phases are aligned with steps S201, S202, and S204.
  • Note that the input to each unit is switched by a switch or the like in each unit of the in-loop filter 31.
  • step S201 the deblocking H filter unit 152 performs a filtering process by the H filter 111 on the line memory 201 or the reconstructed pixel from the previous stage.
  • the deblocking H filter unit 152 outputs the pixel after the filtering process by the H filter 111 to the calculation unit 203.
  • step S202 the deblocking V filter unit 153 performs a filtering process by the V filter 112 on the line memory 201 or the reconstructed pixel from the previous stage.
  • the deblocking V filter unit 153 outputs the pixel after the filter processing by the V filter 112 to the calculation unit 203.
  • step S203 the adaptive offset filter unit 154 performs filter processing by the offset filter 121 on the reconstructed pixels from the line memory 201 or the previous stage.
  • the adaptive offset filter unit 154 outputs the pixel after the filter processing by the offset filter 121 to the adaptive loop filter unit 202.
  • step S204 the adaptive loop filter unit 202 performs a filtering process by the loop filter 131 on the pixels that have already undergone the offset filter 121 from the line memory 211 or the adaptive offset filter unit 154.
  • the adaptive loop filter unit 202 outputs the pixels after the filter processing by the loop filter 131 to the calculation unit 203.
  • step S205 the calculation unit 203 calculates three results after each filter process by the deblocking H filter unit 152, the deblocking V filter unit 153, and the adaptive loop filter unit 202.
  • the calculation unit 203 calculates the three results obtained by the deblocking H filter unit 152, the deblocking V filter unit 153, and the adaptive loop filter unit 202 by, for example, a linear sum, and outputs the calculation result to the subsequent stage.
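The linear sum computed by the calculation unit 203 can be sketched as follows. This is a minimal illustration, not the patent's exact arithmetic: the coefficient names `c_h`, `c_v`, `c_alf` and the use of the reconstructed input pixel as the common reference are assumptions modeled on the subtraction/multiplication/addition structure described later for the calculation units of FIG. 19 and FIG. 21.

```python
# Hypothetical sketch of the linear sum in the calculation unit 203.
# Each parallel filter output is reduced to a correction term (its
# difference from the reconstructed input pixel), scaled by a per-filter
# coefficient, and added back onto the input.

def combine_three(p_in, p_h, p_v, p_alf, c_h, c_v, c_alf):
    """Linear sum of the deblocking H, deblocking V, and adaptive loop
    filter results around the reconstructed input pixel p_in."""
    return (p_in
            + c_h * (p_h - p_in)
            + c_v * (p_v - p_in)
            + c_alf * (p_alf - p_in))
```

With all coefficients zero the input passes through unchanged; with a single coefficient set to 1 the corresponding filter output is selected.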
  • step S206 the deblocking H filter unit 152, the deblocking V filter unit 153, and the adaptive loop filter unit 202 determine whether it is the last pixel in the LCU. If it is determined in step S206 that the pixel is not the last pixel in the LCU, the process returns to step S201, and the subsequent processes are repeated.
  • step S206 If it is determined in step S206 that the pixel is the last pixel in the LCU, the process proceeds to step S207.
  • step S207 the deblocking H filter unit 152, the deblocking V filter unit 153, and the adaptive loop filter unit 202 determine whether or not the pixel is the last pixel in the screen. If it is determined in step S207 that the pixel is not the last pixel in the screen, the process proceeds to step S208.
  • step S208 the deblocking H filter unit 152, the deblocking V filter unit 153, and the adaptive loop filter unit 202 select the next LCU, and the process returns to step S201. That is, the processing after step S201 is repeated for the LCU selected in step S208.
  • step S207 If it is determined in step S207 that the pixel is the last pixel in the screen, the in-loop filter process is terminated.
  • the inputs of the three filter processes constituting the in-loop filter 31 are processed in parallel as reconstructed pixels, and the line memory is shared at the LCU boundary. It is possible to reduce the number of line memories.
  • the in-loop filter 31 has a simple configuration in which switching between normal processing and LCU boundary processing is performed only by switching whether a pixel from the previous stage or a pixel from the line memory is input. Thereby, control when the deblocking filter, the adaptive offset filter, and the adaptive loop filter each perform frame processing by software or the like can be easily performed.
  • the example in which the adaptive loop filter unit 202 is disposed before the calculation unit 203 has been described above.
  • however, the in-loop filter can also be configured so that the adaptive loop filter unit is applied after the addition by the calculation unit 203.
  • An example of the in-loop filter 31 constituting the adaptive loop filter after the addition will be described with reference to FIG.
  • FIG. 19 is a block diagram illustrating a configuration example of the in-loop filter.
  • the in-loop filter 31 shown in FIG. 19 is an example when an adaptive loop filter is configured after addition.
  • the in-loop filter 31 in FIG. 19 is common to the in-loop filter 31 of FIG. 15 in that it includes a deblocking H filter unit 152, a deblocking V filter unit 153, and an adaptive offset filter unit 154.
  • the in-loop filter 31 in FIG. 19 is common to the in-loop filter 31 in FIG. 15 in that it includes a line memory 201, a calculation unit 203, and a coefficient memory 204.
  • the in-loop filter 31 of FIG. 19 is different from the in-loop filter 31 of FIG. 15 in that an adaptive loop filter unit 221 is provided instead of the adaptive loop filter unit 202.
  • the adaptive offset filter unit 154 outputs the pixel after the filter processing to the arithmetic unit 203, similarly to the deblocking H filter unit 152 and the deblocking V filter unit 153.
  • the subtraction unit 181-3 of the calculation unit 203 subtracts the input pixel P in from the pixel P SAO after filtering from the adaptive offset filter unit 154, and outputs the result to the multiplication unit 182-3.
  • the multiplication unit 182-3 multiplies the input (P SAO - P in ) from the subtraction unit 181-3 by the coefficient C SAO corresponding to the adaptive offset filter unit 154 from the coefficient memory 204, and outputs the result to the addition unit 183.
  • the coefficient memory 204 stores a coefficient C SAO corresponding to the adaptive offset filter unit 154.
  • the adaptive loop filter unit 221 is provided in the subsequent stage of the arithmetic unit 203 and is configured to include the loop filter 131 and the line memory 211 of FIG. 6, similarly to the adaptive loop filter unit 202 of FIG. 15.
  • the line memory 211 holds the pixels after the adaptive offset filter for four lines from the LCU boundary for the luminance signal, and holds the pixels after the adaptive offset filter for four lines from the LCU boundary for the color difference signal. It is configured.
  • the adaptive loop filter unit 221 performs block classification on the pixel from the adder 183 on a block basis and performs an adaptive loop filter process by the loop filter 131.
  • the adaptive loop filter unit 221 temporarily holds the pixels from the adder 183 in the line memory 211, classifies the pixels in the line memory 211 on a block basis, and performs an adaptive loop filter by the loop filter 131.
  • the adaptive loop filter unit 221 outputs the pixels after the filter processing by the loop filter 131 to a subsequent frame memory or the like.
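The dataflow of this post-addition configuration can be sketched under the same illustrative conventions as above: the calculation unit 203 forms the linear sum of the deblocking H, deblocking V, and adaptive offset results, and only then does the adaptive loop filter unit 221 apply the loop filter 131 to the summed pixel. The coefficient names and the `alf` callback are assumptions for illustration, not the patent's notation.

```python
def combine_then_alf(p_in, p_h, p_v, p_sao, c_h, c_v, c_sao, alf):
    """Post-addition configuration (FIG. 19): the linear sum of the
    three parallel filter results is formed first, and the adaptive
    loop filter (modeled here as the callback `alf`) runs afterwards
    on the summed pixel."""
    combined = (p_in
                + c_h * (p_h - p_in)
                + c_v * (p_v - p_in)
                + c_sao * (p_sao - p_in))
    return alf(combined)  # loop filter 131 applied after the addition
```

Contrast with the FIG. 15 arrangement, where the adaptive loop filter output is itself one of the terms entering the sum.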
  • at the LCU boundary, the deblocking V filter unit 153 enters a standby state in the processing of the pixels on the first to third lines from the LCU boundary until the four lines of pixels of the next LCU are input. That is, since the deblocking V filter unit 153 can process only up to the fourth line from the LCU boundary, the other filter units operating in parallel need to align the output phase with the deblocking V filter unit 153.
  • the adaptive offset filter unit 154 also completes the process from the LCU boundary to the fourth line, and enters a standby state in the process of the pixel on the third line from the next LCU boundary.
  • when the four lines of pixels of the next LCU are input, the deblocking H filter unit 152, the deblocking V filter unit 153, and the adaptive offset filter unit 154 start processing so as to output from the pixels on the third line from the LCU boundary.
  • the deblocking H filter unit 152 needs to hold the pixels in the first to third lines from the LCU boundary in the line memory 201.
  • for the deblocking V filter unit 153, the pixels in the first to fourth lines from the LCU boundary need to be held in the line memory 201.
  • the adaptive offset filter unit 154 needs to hold the pixels in the first to fourth lines from the LCU boundary in the line memory 201.
  • the line memory 201 only needs to hold pixels for four lines from the first to fourth lines from the LCU boundary.
  • since the adaptive loop filter unit 202 processes the pixels after the filter processing by the adaptive offset filter unit 154, it completes the processing up to the sixth line from the LCU boundary and enters a standby state in the processing of the pixels on the fifth line from the next LCU boundary.
  • since the pixels up to the third line from the LCU boundary after the filter processing by the adaptive offset filter unit 154 are input to the adaptive loop filter unit 202, output is performed from the pixel on the fifth line from the LCU boundary.
  • the adaptive loop filter unit 202 needs to hold the pixels for four lines of the fourth to seventh lines from the LCU boundary in the line memory 211.
  • in this way, the in-loop filter 31 of FIG. 15 requires the line memory 201 for four lines and the line memory 211 for four lines; compared with the conventional nine lines of pixels described above, the number of line memories can be reduced by one line.
  • at the LCU boundary, the deblocking V filter unit 153 enters a standby state in the processing of the pixels on the first line and the second line from the LCU boundary until the pixels for two lines of the next LCU are input. That is, since the deblocking V filter unit 153 can process only up to the third line from the LCU boundary, the other filter units operating in parallel need to align the output phase with the deblocking V filter unit 153.
  • the adaptive offset filter unit 154 completes the processing from the LCU boundary to the third line, and enters a standby state in the processing of the pixel on the second line from the next LCU boundary.
  • when the two lines of pixels of the next LCU are input, the deblocking H filter unit 152, the deblocking V filter unit 153, and the adaptive offset filter unit 154 start processing so as to output from the pixels on the second line from the LCU boundary.
  • the deblocking H filter unit 152 needs to hold the pixels on the first and second lines from the LCU boundary in the line memory 201.
  • for the deblocking V filter unit 153, the pixels in the first and second lines from the LCU boundary need to be held in the line memory 201.
  • for the adaptive offset filter unit 154, the pixels in the first and second lines from the LCU boundary need to be held in the line memory 201.
  • the line memory 201 only needs to hold pixels for two lines from the first to second lines from the LCU boundary.
  • since the adaptive loop filter unit 202 processes the pixels after the filter processing by the adaptive offset filter unit 154, it completes the processing up to the fifth line from the LCU boundary and enters a standby state in the processing of the pixels on the fourth line from the next LCU boundary.
  • since the pixels up to the second line from the LCU boundary after the filter processing by the adaptive offset filter unit 154 are input to the adaptive loop filter unit 202, output is performed from the pixel on the fourth line from the LCU boundary.
  • the adaptive loop filter unit 202 needs to hold the pixels for four lines from the third to sixth lines from the LCU boundary in the line memory 211.
  • in this way, the in-loop filter 31 of FIG. 15 requires the line memory 201 for two lines and the line memory 211 for four lines; compared with the conventional seven lines of pixels described above, the number of line memories can be reduced by one line.
  • FIG. 20 is an example of the in-loop filter process in step S22 of FIG. 2 described above, and is an example of the in-loop filter process in step S58 of FIG.
  • This in-loop filter process starts from the upper left LCU in the screen.
  • the reconstructed pixels constituting the LCU are input to each part of the in-loop filter 31 from the previous stage.
  • the processes in steps S231 to S233 are executed in parallel using the reconstructed pixels input from the previous stage.
  • the reconstructed pixels constituting the LCU are input to the line memory 201 from the previous stage.
  • the processes in steps S231 to S233 are executed in parallel using the reconstructed pixels held in the line memory 201.
  • steps S231 to S233 processing is started with the output phases aligned.
  • the input to each part is used by being switched by a switch or the like in each part of the in-loop filter 31.
  • step S231 the deblocking H filter unit 152 performs a filtering process by the H filter 111 on the line memory 201 or the reconstructed pixel from the previous stage.
  • the deblocking H filter unit 152 outputs the pixel after the filtering process by the H filter 111 to the calculation unit 203.
  • step S232 the deblocking V filter unit 153 performs a filtering process by the V filter 112 on the line memory 201 or the reconstructed pixel from the previous stage.
  • the deblocking V filter unit 153 outputs the pixel after the filter processing by the V filter 112 to the calculation unit 203.
  • step S233 the adaptive offset filter unit 154 performs filter processing by the offset filter 121 on the line memory 201 or the reconstructed pixel from the previous stage.
  • the adaptive offset filter unit 154 outputs the pixel after the filter processing by the offset filter 121 to the calculation unit 203.
  • step S234 the calculation unit 203 calculates three results after each filter process by the deblocking H filter unit 152, the deblocking V filter unit 153, and the adaptive offset filter unit 154.
  • the calculation unit 203 calculates the three results obtained by the deblocking H filter unit 152, the deblocking V filter unit 153, and the adaptive offset filter unit 154 by, for example, a linear sum, and outputs the calculation result to the adaptive loop filter unit 221.
  • step S235 the deblocking H filter unit 152, the deblocking V filter unit 153, and the adaptive offset filter unit 154 determine whether it is the last pixel in the LCU. If it is determined in step S235 that the pixel is not the last pixel in the LCU, the process returns to step S231, and the subsequent processes are repeated.
  • step S235 If it is determined in step S235 that the pixel is the last pixel in the LCU, the process proceeds to step S236.
  • step S236 the deblocking H filter unit 152, the deblocking V filter unit 153, and the adaptive offset filter unit 154 determine whether or not it is the last pixel in the screen. If it is determined in step S236 that the pixel is not the last pixel in the screen, the process proceeds to step S237.
  • step S237 the deblocking H filter unit 152, the deblocking V filter unit 153, and the adaptive offset filter unit 154 select the next LCU, and the process returns to step S231. That is, the processing from step S231 onward is repeated for the LCU selected in step S237.
  • step S236 If it is determined in step S236 that the pixel is the last pixel in the screen, the process proceeds to step S238.
  • step S238 the adaptive loop filter unit 221 performs a filtering process by the loop filter 131 on the pixel having undergone the offset filter 121 from the line memory 211 or the calculation unit 203 (adding unit 183).
  • the adaptive loop filter unit 221 outputs the pixels after the filter processing by the loop filter 131 to a subsequent frame memory or the like.
  • step S239 the adaptive loop filter unit 221 determines whether it is the last pixel in the LCU. If it is determined in step S239 that the pixel is not the last pixel in the LCU, the processing returns to step S238, and the subsequent processing is repeated.
  • step S239 If it is determined in step S239 that the pixel is the last pixel in the LCU, the process proceeds to step S240.
  • step S240 the adaptive loop filter unit 221 determines whether it is the last pixel in the screen. If it is determined in step S240 that the pixel is not the last pixel in the screen, the process proceeds to step S241.
  • step S241 the adaptive loop filter unit 221 selects the next LCU, and the process returns to step S238. That is, the processing from step S238 onward is repeated for the LCU selected in step S241.
  • step S240 If it is determined in step S240 that the pixel is the last pixel in the screen, the in-loop filter process in FIG. 20 is terminated.
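The control flow of the FIG. 20 flowchart (per-LCU parallel filtering and summation, followed by a separate adaptive-loop-filter pass) can be sketched as below. Line memories, standby states, and the per-pixel input switching are deliberately elided, and `combine` and `alf` are illustrative stand-ins for the linear sum of the calculation unit 203 and the loop filter 131 of the adaptive loop filter unit 221.

```python
def in_loop_filter_two_pass(lcus, parallel_filters, combine, alf):
    """Simplified control flow of FIG. 20: steps S231-S237 run the three
    parallel filters and the linear sum over every LCU in the screen;
    steps S238-S241 then run the adaptive loop filter over every LCU."""
    combined = []
    for lcu in lcus:                                        # S235-S237 loop
        combined.append([combine(px, [f(px) for f in parallel_filters])
                         for px in lcu])                    # S231-S234
    return [[alf(px) for px in lcu] for lcu in combined]    # S238-S241
```

The two separate loops mirror the flowchart: the adaptive loop filter pass only begins after the last pixel in the screen has passed through the first pass.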
  • the inputs of the three filter processes constituting the in-loop filter 31 are processed in parallel as reconstructed pixels, and the line memory is shared at the LCU boundary. It is possible to reduce the number of line memories.
  • the in-loop filter 31 has a simple configuration in which switching between normal processing and LCU boundary processing is performed only by switching whether a pixel from the previous stage or a pixel from the line memory is input. Thereby, control when the deblocking filter, the adaptive offset filter, and the adaptive loop filter each perform frame processing by software or the like can be easily performed.
  • FIG. 21 is a block diagram illustrating a configuration example of the in-loop filter.
  • the in-loop filter 31 shown in FIG. 21 is a configuration example in the case of two parallels shown sixth from the left in FIG.
  • the in-loop filter 31 in FIG. 21 is configured to include the deblocking filter unit 101 in FIG. 6, the line memory 251, the adaptive offset filter unit 252, the adaptive loop filter unit 253, the calculation unit 254, and the coefficient memory 255.
  • the deblocking filter unit 101 is configured to include an H (horizontal) filter 111, a V (vertical) filter 112, and a line memory 113.
  • the line memory 113 holds four lines of pixels for luminance (Y) and two lines of pixels for color difference (C) at the LCU boundary.
  • the deblocking filter unit 101 normally performs the filtering process by the H filter 111 and the filtering process by the V filter 112 on the reconstructed pixel that is the input pixel from the previous stage (except for the LCU boundary).
  • the deblocking filter unit 101 outputs the filtered pixels to the line memory 251 and the calculation unit 254.
  • the deblocking filter unit 101 once holds the reconstructed pixel, which is the input pixel from the previous stage, in the line memory 113.
  • the deblocking filter unit 101 then performs the filter processing by the H filter 111 and the filter processing by the V filter 112 using the input pixels and the pixels held in the line memory 113.
  • the deblocking filter unit 101 outputs the filtered pixels to the line memory 251 and the calculation unit 254.
  • the deblocking filter unit 101 performs filter processing using the H filter 111 and filter processing using the V filter 112 using the reconstructed pixels that are input pixels from the previous stage.
  • the deblocking filter unit 101 outputs the filtered pixels to the line memory 251 and the calculation unit 254.
  • the line memory 251 once holds the pixels after filtering by the deblocking filter unit 101.
  • the line memory 251 holds pixels for three lines for luminance (Y) and holds pixels for three lines for color difference (C). Note that the number of lines to be held is not limited because it depends on the architecture and the like.
  • the adaptive offset filter unit 252 and the adaptive loop filter unit 253 share the line memory 251 in which the pixels after filtering by the deblocking filter unit 101 are held.
  • the adaptive offset filter unit 252 is basically configured similarly to the adaptive offset filter unit 154 of FIG. 7 so as to include the offset filter 121 of FIG. 6.
  • the adaptive offset filter unit 252 reads out the pixels held in the line memory 251 and applies the filter processing by the offset filter 121 to the read out pixels.
  • the adaptive offset filter unit 252 outputs the pixel after the filter processing by the offset filter 121 to the calculation unit 254.
  • the adaptive offset filter unit 252 performs the filter process by the offset filter 121 on the pixels from the deblocking filter unit 101.
  • the adaptive offset filter unit 252 outputs the pixel after the filter processing by the offset filter 121 to the calculation unit 254.
  • the adaptive loop filter unit 253 is basically configured similarly to the adaptive loop filter unit 155 of FIG. 7 so as to include the loop filter 131 of FIG. 6.
  • the adaptive loop filter unit 253 reads out the pixels held in the line memory 251 and applies the filter processing by the loop filter 131 to the read out pixels.
  • the adaptive loop filter unit 253 outputs the pixel after the filter processing by the loop filter 131 to the calculation unit 254.
  • the adaptive loop filter unit 253 performs filter processing by the loop filter 131 on the pixels from the deblocking filter unit 101.
  • the adaptive loop filter unit 253 outputs the pixel after the filter processing by the loop filter 131 to the calculation unit 254.
  • the adaptive offset filter unit 252 and the adaptive loop filter unit 253 align the output phases and restart the processing.
  • the calculation unit 254 is composed of the subtraction units 181-1 and 181-2, the multiplication units 182-1 and 182-2, and the addition unit 183, and calculates the output P after each filter processing as a linear sum. Note that, as the input pixel to the calculation unit 254, the pixel after filtering from the deblocking filter unit 101 is normally used, but at the LCU boundary the pixel held in the line memory 251 is read and used.
  • the subtracting unit 181-1 subtracts the pixel P DB after the deblocking filter from the filtered pixel P SAO from the adaptive offset filter unit 252, and outputs the result to the multiplying unit 182-1.
  • the multiplication unit 182-1 multiplies the input (P SAO - P DB ) from the subtraction unit 181-1 by the coefficient C SAO corresponding to the adaptive offset filter unit 252 from the coefficient memory 255, and outputs the result to the addition unit 183.
  • the subtracting unit 181-2 subtracts the pixel P DB after the deblocking filter from the filtered pixel P ALF from the adaptive loop filter unit 253, and outputs the result to the multiplying unit 182-2.
  • the multiplication unit 182-2 multiplies the input (P ALF - P DB ) from the subtraction unit 181-2 by the coefficient C ALF corresponding to the adaptive loop filter unit 253 from the coefficient memory 255, and outputs the result to the addition unit 183.
  • the addition unit 183 adds the multiplication results from the multiplication units 182-1 and 182-2 to the pixel P DB after the deblocking filter, and outputs the addition result P to the frame memory.
  • the coefficient memory 255 stores a coefficient corresponding to each filter.
  • the coefficient memory 255 stores a coefficient C SAO corresponding to the adaptive offset filter unit 154 and a coefficient C ALF corresponding to the adaptive loop filter unit 155.
  • note that the coefficient C SAO and the coefficient C ALF in the case of FIG. 21 are coefficients multiplied by pixels obtained by filtering the output of the deblocking filter unit 101, and may therefore each be regarded as corresponding to the deblocking filter unit 101.
  • coefficients may also be set by the user via an operation input unit (not shown). These coefficients may also be set according to the characteristics of the image.
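The arithmetic of the subtraction units 181-1/181-2, the multiplication units 182-1/182-2, and the addition unit 183 amounts to the following linear sum, with the deblocked pixel P DB as the common reference. The function name and scalar pixel model are illustrative; only the formula itself is taken from the description above.

```python
def combine_two(p_db, p_sao, p_alf, c_sao, c_alf):
    """Calculation unit 254 of FIG. 21: the subtraction units form
    (p_sao - p_db) and (p_alf - p_db), the multiplication units scale
    them by C_SAO and C_ALF from the coefficient memory 255, and the
    addition unit 183 adds both products back onto p_db."""
    return p_db + c_sao * (p_sao - p_db) + c_alf * (p_alf - p_db)
```

Setting C_SAO = 1 and C_ALF = 0 passes the adaptive offset result through unchanged, so the coefficients blend between the deblocked pixel and the two parallel filter outputs.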
  • the deblocking filter unit 101 sequentially applies the filter processing by the H filter 111 and the V filter 112 to the reconstructed pixel that is the input pixel from the previous stage, and the filtered pixel is stored in the line memory 251. Output.
  • the pixels held in the line memory 251 are filtered in parallel by the adaptive offset filter unit 252 and the adaptive loop filter unit 253.
  • the adaptive offset filter unit 252 reads out the pixel after the deblocking filter held in the line memory 251, and performs a filter process by the offset filter 121 on the read out pixel.
  • the adaptive offset filter unit 252 performs filter processing by the offset filter 121 on the processing target pixel using eight pixels (SAO reference pixels in FIG. 22) around the processing target pixel.
  • the adaptive offset filter unit 252 outputs the pixel after the filter processing by the offset filter 121 to the calculation unit 254.
  • the adaptive loop filter unit 253 reads out the pixel after the deblocking filter held in the line memory 251, and performs a filtering process by the loop filter 131 on the read out pixel.
  • the adaptive loop filter unit 253 performs the filter processing by the loop filter 131 on the processing target pixel, using the 16 pixels of the 5-tap snowflake shape (the ALF taps shown in FIG. 22) centered on the processing target pixel.
  • the adaptive loop filter unit 253 outputs the pixel after the filter processing by the loop filter 131 to the calculation unit 254.
  • the calculation unit 254 calculates the pixel after the filter processing from the adaptive offset filter unit 252 and the pixel after the filter processing from the adaptive loop filter unit 253 as a linear sum, and outputs the result to the subsequent stage.
  • FIG. 23 shows an example of LCU boundary pixels in the case of a luminance signal.
  • the circles on the first to third lines from the LCU boundary represent pixels for which the deblocking V (vertical) filter processing at the horizontal boundary is started when the next LCU is input to the deblocking filter.
  • the circles indicated by hatching in the first to third lines represent pixels that have been partially deblocked H (horizontal) filtered at the vertical boundary of the CUs included in the LCU.
  • white circles represent pixels that are not actually subjected to deblocking H filter processing at the vertical boundary of the CU.
  • Pixels on the 4th to 7th lines from the LCU boundary are pixels that have been subjected to deblocking V filter processing.
  • the pixel on the fourth line from the LCU boundary is a pixel referred to in the deblocking V filter processing on the first line to the third line.
  • a circle above the sixth line from the LCU boundary represents a pixel after the loop filter (ALF).
  • at the LCU boundary, as described above with reference to FIG. 5, in the case of a luminance signal, the deblocking filter unit 101 enters a standby state in the processing of the pixels on the first to third lines from the LCU boundary until the four lines of pixels of the next LCU are input. That is, the deblocking filter unit 101 can process only up to the fourth line from the LCU boundary.
  • the adaptive offset filter unit 252 and the adaptive loop filter unit 253 process the pixels after the filter (DF) by the deblocking filter unit 101. Therefore, the adaptive loop filter unit 253 can process only up to the sixth line from the LCU boundary. For this reason, the adaptive offset filter unit 252 also enters a standby state in the processing of the pixel on the sixth line from the LCU boundary so that the process can be started from the sixth line from the LCU boundary.
  • the deblocking filter unit 101 starts processing to output from the pixels of the third line from the LCU boundary. Since the pixel on the third line from the LCU boundary is input from the deblocking filter unit 101, the adaptive loop filter unit 253 starts processing to output from the pixel on the fifth line from the LCU boundary. The adaptive offset filter unit 252 also starts processing so as to match the output phase with the adaptive loop filter unit 253 and output from the pixels on the fifth line from the LCU boundary.
  • the deblocking filter unit 101 needs to hold the pixels in the first to fourth lines from the LCU boundary in the line memory 113.
  • for the adaptive offset filter unit 252, the pixels in the fifth and sixth lines from the LCU boundary need to be held in the line memory 251.
  • for the adaptive loop filter unit 253, the pixels in the fifth to seventh lines from the LCU boundary need to be held in the line memory 251.
  • the line memory 251 only needs to hold pixels for three lines from the fifth to seventh lines from the LCU boundary.
  • in this way, the in-loop filter 31 of FIG. 21 requires the line memory 113 for four lines and the line memory 251 for three lines; compared with the conventional nine lines of pixels, the number of line memories can be reduced by two lines.
  • FIG. 24 shows an example of LCU boundary pixels in the case of a color difference signal.
  • the circle on the first line from the LCU boundary represents a pixel where the deblocking V (vertical) filter processing at the horizontal boundary is started when the next LCU is input to the deblocking filter.
  • a circle indicated by hatching in the first line represents a pixel that has been partially deblocked H (horizontal) filtered at the vertical boundary of the CU included in the LCU.
  • white circles represent pixels that are not actually subjected to deblocking H filter processing at the vertical boundary of the CU.
  • Pixels on the 2nd to 5th lines from the LCU boundary are pixels that have been subjected to deblocking V filter processing.
  • the pixel on the second line from the LCU boundary is a pixel referred to in the deblocking V filter process on the first line.
  • a circle above the fourth line from the LCU boundary represents a pixel after the loop filter (ALF).
  • in the case of a color difference signal, the deblocking filter unit 101 enters a standby state in the processing of the pixels on the first line from the LCU boundary until the pixels for two lines of the next LCU are input. That is, the deblocking filter unit 101 can process only up to the second line from the LCU boundary.
  • the adaptive offset filter unit 252 and the adaptive loop filter unit 253 process the pixels after the filter (DF) by the deblocking filter unit 101. Therefore, the adaptive loop filter unit 253 can process only up to the fourth line from the LCU boundary. For this reason, the adaptive offset filter unit 252 also enters a standby state in the processing of the pixel on the third line from the LCU boundary so that the process can be restarted from the third line from the LCU boundary.
  • the deblocking filter unit 101 starts processing so as to output from the pixels on the first line from the LCU boundary. Since the pixel on the first line from the LCU boundary is input from the deblocking filter unit 101, the adaptive loop filter unit 253 starts processing so as to output from the pixel on the third line from the LCU boundary. The adaptive offset filter unit 252 also starts processing so as to match the output phase with the adaptive loop filter unit 253 and output from the pixels on the third line from the LCU boundary.
  • the deblocking filter unit 101 needs to hold the pixels on the first and second lines from the LCU boundary in the line memory 113.
  • for the adaptive offset filter unit 252, the pixels in the third and fourth lines from the LCU boundary need to be held in the line memory 251.
  • for the adaptive loop filter unit 253, the pixels in the third to fifth lines from the LCU boundary need to be held in the line memory 251.
  • the line memory 251 only needs to hold pixels for three lines from the third to fifth lines from the LCU boundary.
  • in this way, the in-loop filter 31 of FIG. 21 requires the line memory 113 for two lines and the line memory 251 for three lines; compared with the conventional seven lines of pixels, the number of line memories can be reduced by two lines.
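Putting the luminance and color-difference cases of the FIG. 21 configuration side by side, the line-memory accounting works out as follows. This is a simple check of the line counts quoted above, nothing more; the dictionary keys are illustrative labels.

```python
# Line-memory accounting for the two-parallel configuration of FIG. 21,
# using the line counts quoted in the text.
conventional = {"luma": 9, "chroma": 7}   # conventional line memories
proposed = {"luma": 4 + 3,                # line memory 113 + line memory 251
            "chroma": 2 + 3}              # line memory 113 + line memory 251

# Two lines of line memory are saved for both signal types.
savings = {k: conventional[k] - proposed[k] for k in conventional}
```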
  • step S251 the deblocking filter unit 101 performs a deblocking filter process. That is, the deblocking filter unit 101 performs filter processing by the H filter 111 and filter processing by the V filter 112 on the reconstructed pixel.
  • the pixel after the filter processing is output to the line memory 251, the adaptive offset filter unit 252, the adaptive loop filter unit 253, and the calculation unit 254.
  • normally (other than at the LCU boundary), the deblocking filter unit 101 performs the filter processing by the H filter 111 directly on the reconstructed pixels, which are the input pixels from the previous stage.
  • at the LCU boundary, the deblocking filter unit 101 first holds the reconstructed pixels, which are the input pixels from the previous stage, in the line memory 113.
  • the deblocking filter unit 101 then performs the filter processing by the H filter 111 using the input pixels and the pixels held in the line memory 113, followed by the filter processing by the V filter 112.
  • step S252 the deblocking filter unit 101 determines whether it is the last pixel in the LCU. If it is determined in step S252 that the pixel is not the last pixel in the LCU, the process returns to step S251, and the subsequent processes are repeated.
  • step S252 If it is determined in step S252 that it is the last pixel in the LCU, the process proceeds to step S253.
  • step S253 the deblocking filter unit 101 determines whether it is the last pixel in the screen. If it is determined in step S253 that the pixel is not the last pixel in the screen, the process proceeds to step S254.
  • step S254 the deblocking filter unit 101 selects the next LCU, and the process returns to step S251. That is, the processing from step S251 onward is repeated for the LCU selected in step S254.
  • step S253 If it is determined in step S253 that the pixel is the last pixel in the screen, the process proceeds to steps S255 and S256.
  • steps S255 and S256 are executed in parallel using the pixels held in the line memory 251.
  • processing is started with the output phases aligned in steps S255 and S256.
  • the adaptive offset filter unit 252 performs a filtering process by the offset filter 121 on the pixels from the line memory 251 or the deblocking filter unit 101 in step S255.
  • the adaptive offset filter unit 252 outputs the pixel after the filter processing by the offset filter 121 to the calculation unit 254.
  • the adaptive loop filter unit 253 performs filter processing by the loop filter 131 on the pixels from the line memory 251 or the deblocking filter unit 101 in step S256.
  • the adaptive loop filter unit 253 outputs the pixel after the filter processing by the loop filter 131 to the calculation unit 254.
  • step S257 the calculation unit 254 performs a calculation using the two results of the filter processing by the adaptive offset filter unit 252 and the adaptive loop filter unit 253. The result of the calculation is output to the subsequent stage.
  • step S258 the adaptive offset filter unit 252 and the adaptive loop filter unit 253 determine whether it is the last pixel in the LCU. If it is determined in step S258 that the pixel is not the last pixel in the LCU, the processing returns to steps S255 and S256, and the subsequent processing is repeated.
  • step S258 If it is determined in step S258 that the pixel is the last pixel in the LCU, the process proceeds to step S259.
  • step S259 the adaptive offset filter unit 252 and the adaptive loop filter unit 253 determine whether or not it is the last pixel in the screen. If it is determined in step S259 that the pixel is not the last pixel in the screen, the process proceeds to step S260.
  • step S260 the adaptive offset filter unit 252 and the adaptive loop filter unit 253 select the next LCU, and the process returns to steps S255 and S256. That is, the processes after steps S255 and S256 are repeated for the LCU selected at step S260.
  • step S259 If it is determined in step S259 that the pixel is the last pixel in the screen, the in-loop filter process is terminated.
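As a rough sketch under stated assumptions, the flow of steps S251 through S260 can be modeled as two passes over the LCUs: deblocking first, then the two parallel filters and the calculation unit. The function and its arguments are hypothetical stand-ins for the units described above, not the actual implementation:

```python
def in_loop_filter(lcus, deblock, offset_filter, loop_filter, combine):
    """Sketch of the in-loop filter process (steps S251-S260).

    lcus is a list of LCUs, each a list of pixels; the four callables
    stand in for the deblocking filter unit 101, the adaptive offset
    filter unit 252, the adaptive loop filter unit 253, and the
    calculation unit 254.
    """
    # First pass: deblocking over every pixel of every LCU (S251-S254).
    deblocked = [[deblock(px) for px in lcu] for lcu in lcus]
    # Second pass: the offset filter and the loop filter run on the
    # deblocked pixels with aligned output phases, and their results
    # are combined by the calculation unit (S255-S260).
    out = []
    for lcu in deblocked:
        for px in lcu:
            a = offset_filter(px)      # S255: adaptive offset filter
            b = loop_filter(px)        # S256: adaptive loop filter
            out.append(combine(a, b))  # S257: calculation unit 254
    return out
```

With identity-like stand-in filters this simply pipes each pixel through both branches and merges the pair, which mirrors how the two filter outputs converge at the calculation unit.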
  • the inputs of the two filter processes constituting the in-loop filter 31 are the pixels after the deblocking filter and are processed in parallel, and the line memory is shared at the LCU boundary. Therefore, the number of line memories can be reduced.
  • the in-loop filter 31 has a simple configuration in which switching between normal processing and LCU boundary processing is performed only by switching whether the pixels from the previous stage or the pixels from the line memory are input. Accordingly, the control required when the adaptive offset filter and the adaptive loop filter each perform frame processing can easily be performed by software or the like.
  • the in-loop filter 31 can also be configured to include a control unit that performs control so that each filter unit performs processing in parallel.
  • the control unit also performs control such that the output phases from the filter units that perform processing in parallel are matched.
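One way such a control unit might run the two filter processes in parallel while keeping their output phases matched is sketched below; the use of threads and the function names are assumptions for illustration only:

```python
from concurrent.futures import ThreadPoolExecutor


def run_filters_in_parallel(pixels, offset_filter, loop_filter):
    """Run the two filter processes in parallel and pair up their
    outputs so that the phases match, as the control unit would."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        fa = pool.submit(lambda: [offset_filter(p) for p in pixels])
        fb = pool.submit(lambda: [loop_filter(p) for p in pixels])
        # Waiting on both futures aligns the output phase: a combined
        # result is only produced once both filters have processed
        # the same pixel position.
        return list(zip(fa.result(), fb.result()))
```

A calculation unit downstream could then consume each aligned pair (offset result, loop result) directly.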
  • the HEVC method is used as the encoding method.
  • the present disclosure is not limited to this, and other encoding/decoding schemes that include at least two of a deblocking filter, an adaptive offset filter, and an adaptive loop filter as the in-loop filter can be applied.
  • the present disclosure can be applied, for example, to image encoding devices and image decoding devices used when image information (bitstreams) compressed by orthogonal transformation such as discrete cosine transformation and by motion compensation, as in HEVC, is received via network media such as satellite broadcasting, cable television, the Internet, or mobile phones.
  • the present disclosure can be applied to an image encoding device and an image decoding device that are used when processing on a storage medium such as an optical disk, a magnetic disk, and a flash memory.
  • FIG. 26 is a block diagram showing a hardware configuration example of a personal computer that executes the above-described series of processing by a program.
  • In the computer, a CPU (Central Processing Unit) 501, a ROM (Read Only Memory) 502, and a RAM (Random Access Memory) 503 are connected to one another by a bus 504.
  • An input / output interface 505 is further connected to the bus 504.
  • the input/output interface 505 is connected to an input unit 506 including a keyboard, a mouse, a microphone, and the like, an output unit 507 including a display, a speaker, and the like, a storage unit 508 including a hard disk, a nonvolatile memory, and the like, and a communication unit 509 including a network interface and the like.
  • a drive 510 is also connected, which drives a removable medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
  • the CPU 501 loads a program stored in the storage unit 508 into the RAM 503 via the input/output interface 505 and the bus 504 and executes it, whereby the above-described series of processes is performed.
  • Programs executed by the computer (CPU 501) can be provided by being recorded on a removable medium 511 as packaged media, such as a magnetic disk (including a flexible disk), an optical disc (CD-ROM (Compact Disc-Read Only Memory), DVD (Digital Versatile Disc), or the like), a magneto-optical disk, or a semiconductor memory.
  • the program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
  • the program can be installed in the storage unit 508 via the input / output interface 505 by attaching the removable medium 511 to the drive 510. Further, the program can be received by the communication unit 509 via a wired or wireless transmission medium and installed in the storage unit 508. In addition, the program can be installed in the ROM 502 or the storage unit 508 in advance.
  • the program executed by the computer may be a program in which processing is performed in time series in the order described in this specification, or a program in which processing is performed in parallel or at a necessary timing, such as when a call is made.
  • the steps describing the program recorded on the recording medium include not only processing performed in time series according to the described order, but also processing executed in parallel or individually, not necessarily in time series.
  • system represents the entire apparatus composed of a plurality of devices (apparatuses).
  • the configuration described as one device (or processing unit) may be divided and configured as a plurality of devices (or processing units).
  • the configurations described above as a plurality of devices (or processing units) may be combined into a single device (or processing unit).
  • a configuration other than that described above may be added to the configuration of each device (or each processing unit).
  • a part of the configuration of a certain device (or processing unit) may be included in the configuration of another device (or another processing unit). That is, the present technology is not limited to the above-described embodiments, and various modifications can be made without departing from the gist of the present technology.
  • the image encoding device and image decoding device according to the above-described embodiments can be applied to various electronic devices: transmitters or receivers in satellite broadcasting, wired broadcasting such as cable TV, distribution on the Internet, distribution to terminals by cellular communication, and the like; recording devices that record images on media such as magnetic disks and flash memories; and playback devices that reproduce images from these storage media.
  • FIG. 27 illustrates an example of a schematic configuration of a television apparatus to which the above-described embodiment is applied.
  • the television apparatus 900 includes an antenna 901, a tuner 902, a demultiplexer 903, a decoder 904, a video signal processing unit 905, a display unit 906, an audio signal processing unit 907, a speaker 908, an external interface 909, a control unit 910, a user interface 911, And a bus 912.
  • the tuner 902 extracts a signal of a desired channel from a broadcast signal received via the antenna 901, and demodulates the extracted signal. Then, the tuner 902 outputs the encoded bit stream obtained by the demodulation to the demultiplexer 903. In other words, the tuner 902 serves as a transmission unit in the television apparatus 900 that receives an encoded stream in which an image is encoded.
  • the demultiplexer 903 separates the video stream and audio stream of the viewing target program from the encoded bit stream, and outputs each separated stream to the decoder 904. Further, the demultiplexer 903 extracts auxiliary data such as EPG (Electronic Program Guide) from the encoded bit stream, and supplies the extracted data to the control unit 910. Note that the demultiplexer 903 may perform descrambling when the encoded bit stream is scrambled.
  • the decoder 904 decodes the video stream and audio stream input from the demultiplexer 903. Then, the decoder 904 outputs the video data generated by the decoding process to the video signal processing unit 905. In addition, the decoder 904 outputs audio data generated by the decoding process to the audio signal processing unit 907.
  • the video signal processing unit 905 reproduces the video data input from the decoder 904 and causes the display unit 906 to display the video.
  • the video signal processing unit 905 may cause the display unit 906 to display an application screen supplied via a network.
  • the video signal processing unit 905 may perform additional processing such as noise removal on the video data according to the setting.
  • the video signal processing unit 905 may generate a GUI (Graphical User Interface) image such as a menu, a button, or a cursor, and superimpose the generated image on the output image.
  • the display unit 906 is driven by a drive signal supplied from the video signal processing unit 905, and displays video or images on the video screen of a display device (for example, a liquid crystal display, a plasma display, or an OELD (Organic ElectroLuminescence Display) (organic EL display)).
  • the audio signal processing unit 907 performs reproduction processing such as D / A conversion and amplification on the audio data input from the decoder 904, and outputs audio from the speaker 908.
  • the audio signal processing unit 907 may perform additional processing such as noise removal on the audio data.
  • the external interface 909 is an interface for connecting the television apparatus 900 to an external device or a network.
  • a video stream or an audio stream received via the external interface 909 may be decoded by the decoder 904. That is, the external interface 909 also has a role as a transmission unit in the television apparatus 900 that receives an encoded stream in which an image is encoded.
  • the control unit 910 includes a processor such as a CPU and memories such as a RAM and a ROM.
  • the memory stores a program executed by the CPU, program data, EPG data, data acquired via a network, and the like.
  • the program stored in the memory is read and executed by the CPU when the television apparatus 900 is activated.
  • the CPU executes the program to control the operation of the television device 900 according to an operation signal input from the user interface 911, for example.
  • the user interface 911 is connected to the control unit 910.
  • the user interface 911 includes, for example, buttons and switches for the user to operate the television device 900, a remote control signal receiving unit, and the like.
  • the user interface 911 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 910.
  • the bus 912 connects the tuner 902, the demultiplexer 903, the decoder 904, the video signal processing unit 905, the audio signal processing unit 907, the external interface 909, and the control unit 910 to each other.
  • the decoder 904 has the function of the image decoding apparatus according to the above-described embodiment. As a result, when decoding an image on the television device 900, the line memory can be reduced with a simple processing structure.
  • FIG. 28 shows an example of a schematic configuration of a mobile phone to which the above-described embodiment is applied.
  • the mobile phone 920 includes an antenna 921, a communication unit 922, an audio codec 923, a speaker 924, a microphone 925, a camera unit 926, an image processing unit 927, a demultiplexing unit 928, a recording/reproducing unit 929, a display unit 930, a control unit 931, an operation unit 932, and a bus 933.
  • the antenna 921 is connected to the communication unit 922.
  • the speaker 924 and the microphone 925 are connected to the audio codec 923.
  • the operation unit 932 is connected to the control unit 931.
  • the bus 933 connects the communication unit 922, the audio codec 923, the camera unit 926, the image processing unit 927, the demultiplexing unit 928, the recording / reproducing unit 929, the display unit 930, and the control unit 931 to each other.
  • the mobile phone 920 has various operation modes including a voice call mode, a data communication mode, a shooting mode, and a videophone mode, and performs operations such as transmitting and receiving audio signals, transmitting and receiving e-mail or image data, capturing images, and recording data.
  • the analog voice signal generated by the microphone 925 is supplied to the voice codec 923.
  • the audio codec 923 converts the analog audio signal into audio data, and A/D converts and compresses the converted audio data. Then, the audio codec 923 outputs the compressed audio data to the communication unit 922.
  • the communication unit 922 encodes and modulates the audio data and generates a transmission signal. Then, the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921. In addition, the communication unit 922 amplifies a radio signal received via the antenna 921 and performs frequency conversion to acquire a received signal.
  • the communication unit 922 demodulates and decodes the received signal to generate audio data, and outputs the generated audio data to the audio codec 923.
  • the audio codec 923 decompresses the audio data and performs D / A conversion to generate an analog audio signal. Then, the audio codec 923 supplies the generated audio signal to the speaker 924 to output audio.
  • the control unit 931 generates character data constituting the e-mail in response to an operation by the user via the operation unit 932.
  • the control unit 931 causes the display unit 930 to display characters.
  • the control unit 931 generates e-mail data in response to a transmission instruction from the user via the operation unit 932, and outputs the generated e-mail data to the communication unit 922.
  • the communication unit 922 encodes and modulates email data and generates a transmission signal. Then, the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921.
  • the communication unit 922 amplifies a radio signal received via the antenna 921 and performs frequency conversion to acquire a received signal.
  • the communication unit 922 demodulates and decodes the received signal to restore the email data, and outputs the restored email data to the control unit 931.
  • the control unit 931 displays the content of the electronic mail on the display unit 930 and stores the electronic mail data in the storage medium of the recording / reproducing unit 929.
  • the recording / reproducing unit 929 has an arbitrary readable / writable storage medium.
  • the storage medium may be a built-in storage medium such as a RAM or a flash memory, or an externally mounted storage medium such as a hard disk, a magnetic disk, a magneto-optical disk, an optical disk, a USB (Universal Serial Bus) memory, or a memory card.
  • the camera unit 926 images a subject to generate image data, and outputs the generated image data to the image processing unit 927.
  • the image processing unit 927 encodes the image data input from the camera unit 926 and stores the encoded stream in the storage medium of the recording/reproducing unit 929.
  • the demultiplexing unit 928 multiplexes the video stream encoded by the image processing unit 927 and the audio stream input from the audio codec 923, and outputs the multiplexed stream to the communication unit 922.
  • the communication unit 922 encodes and modulates the stream and generates a transmission signal. Then, the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921.
  • the communication unit 922 amplifies a radio signal received via the antenna 921 and performs frequency conversion to acquire a received signal.
  • these transmission and reception signals may include an encoded bit stream.
  • the communication unit 922 demodulates and decodes the received signal to restore the stream, and outputs the restored stream to the demultiplexing unit 928.
  • the demultiplexing unit 928 separates the video stream and the audio stream from the input stream, and outputs the video stream to the image processing unit 927 and the audio stream to the audio codec 923.
  • the image processing unit 927 decodes the video stream and generates video data.
  • the video data is supplied to the display unit 930, and a series of images is displayed on the display unit 930.
  • the audio codec 923 decompresses the audio stream and performs D / A conversion to generate an analog audio signal. Then, the audio codec 923 supplies the generated audio signal to the speaker 924 to output audio.
  • the image processing unit 927 has the functions of the image encoding device and the image decoding device according to the above-described embodiment. As a result, line memory can be reduced with a simple processing structure when encoding and decoding images with the mobile phone 920.
  • FIG. 29 shows an example of a schematic configuration of a recording / reproducing apparatus to which the above-described embodiment is applied.
  • the recording / reproducing device 940 encodes audio data and video data of a received broadcast program and records the encoded data on a recording medium.
  • the recording / reproducing device 940 may encode audio data and video data acquired from another device and record them on a recording medium, for example.
  • the recording / reproducing device 940 reproduces data recorded on the recording medium on a monitor and a speaker, for example, in accordance with a user instruction. At this time, the recording / reproducing device 940 decodes the audio data and the video data.
  • the recording/reproducing apparatus 940 includes a tuner 941, an external interface 942, an encoder 943, an HDD (Hard Disk Drive) 944, a disk drive 945, a selector 946, a decoder 947, an OSD (On-Screen Display) 948, a control unit 949, and a user interface 950.
  • the tuner 941 extracts a signal of a desired channel from a broadcast signal received via an antenna (not shown), and demodulates the extracted signal. Then, the tuner 941 outputs the encoded bit stream obtained by the demodulation to the selector 946. That is, the tuner 941 has a role as a transmission unit in the recording/reproducing apparatus 940.
  • the external interface 942 is an interface for connecting the recording / reproducing apparatus 940 to an external device or a network.
  • the external interface 942 may be, for example, an IEEE1394 interface, a network interface, a USB interface, or a flash memory interface.
  • video data and audio data received via the external interface 942 are input to the encoder 943. That is, the external interface 942 serves as a transmission unit in the recording / reproducing device 940.
  • the encoder 943 encodes video data and audio data when the video data and audio data input from the external interface 942 are not encoded. Then, the encoder 943 outputs the encoded bit stream to the selector 946.
  • the HDD 944 records an encoded bit stream in which content data such as video and audio is compressed, various programs, and other data on an internal hard disk. Further, the HDD 944 reads out these data from the hard disk when reproducing video and audio.
  • the disk drive 945 performs recording and reading of data to and from the mounted recording medium.
  • the recording medium mounted on the disk drive 945 may be, for example, a DVD disc (DVD-Video, DVD-RAM, DVD-R, DVD-RW, DVD+R, DVD+RW, or the like) or a Blu-ray (registered trademark) disc.
  • the selector 946 selects an encoded bit stream input from the tuner 941 or the encoder 943 when recording video and audio, and outputs the selected encoded bit stream to the HDD 944 or the disk drive 945. In addition, the selector 946 outputs the encoded bit stream input from the HDD 944 or the disk drive 945 to the decoder 947 during video and audio reproduction.
  • the decoder 947 decodes the encoded bit stream and generates video data and audio data. Then, the decoder 947 outputs the generated video data to the OSD 948, and outputs the generated audio data to an external speaker.
  • OSD 948 reproduces the video data input from the decoder 947 and displays the video. Further, the OSD 948 may superimpose a GUI image such as a menu, a button, or a cursor on the video to be displayed.
  • the control unit 949 includes a processor such as a CPU and memories such as a RAM and a ROM.
  • the memory stores a program executed by the CPU, program data, and the like.
  • the program stored in the memory is read and executed by the CPU when the recording / reproducing apparatus 940 is activated, for example.
  • the CPU controls the operation of the recording / reproducing apparatus 940 in accordance with an operation signal input from the user interface 950, for example, by executing the program.
  • the user interface 950 is connected to the control unit 949.
  • the user interface 950 includes, for example, buttons and switches for the user to operate the recording / reproducing device 940, a remote control signal receiving unit, and the like.
  • the user interface 950 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 949.
  • the encoder 943 has the function of the image encoding apparatus according to the above-described embodiment.
  • the decoder 947 has the function of the image decoding apparatus according to the above-described embodiment.
  • FIG. 30 illustrates an example of a schematic configuration of an imaging apparatus to which the above-described embodiment is applied.
  • the imaging device 960 images a subject to generate an image, encodes the image data, and records it on a recording medium.
  • the imaging device 960 includes an optical block 961, an imaging unit 962, a signal processing unit 963, an image processing unit 964, a display unit 965, an external interface 966, a memory 967, a media drive 968, an OSD 969, a control unit 970, a user interface 971, and a bus. 972.
  • the optical block 961 is connected to the imaging unit 962.
  • the imaging unit 962 is connected to the signal processing unit 963.
  • the display unit 965 is connected to the image processing unit 964.
  • the user interface 971 is connected to the control unit 970.
  • the bus 972 connects the image processing unit 964, the external interface 966, the memory 967, the media drive 968, the OSD 969, and the control unit 970 to each other.
  • the optical block 961 includes a focus lens and a diaphragm mechanism.
  • the optical block 961 forms an optical image of the subject on the imaging surface of the imaging unit 962.
  • the imaging unit 962 includes an image sensor such as a CCD (Charge-Coupled Device) or a CMOS (Complementary Metal-Oxide Semiconductor), and converts an optical image formed on the imaging surface into an image signal as an electrical signal by photoelectric conversion. Then, the imaging unit 962 outputs the image signal to the signal processing unit 963.
  • the signal processing unit 963 performs various camera signal processing such as knee correction, gamma correction, and color correction on the image signal input from the imaging unit 962.
  • the signal processing unit 963 outputs the image data after the camera signal processing to the image processing unit 964.
  • the image processing unit 964 encodes the image data input from the signal processing unit 963 and generates encoded data. Then, the image processing unit 964 outputs the generated encoded data to the external interface 966 or the media drive 968. The image processing unit 964 also decodes encoded data input from the external interface 966 or the media drive 968 to generate image data. Then, the image processing unit 964 outputs the generated image data to the display unit 965. In addition, the image processing unit 964 may display the image by outputting the image data input from the signal processing unit 963 to the display unit 965. Further, the image processing unit 964 may superimpose display data acquired from the OSD 969 on an image output to the display unit 965.
  • the OSD 969 generates a GUI image such as a menu, a button, or a cursor, and outputs the generated image to the image processing unit 964.
  • the external interface 966 is configured as a USB input / output terminal, for example.
  • the external interface 966 connects the imaging device 960 and a printer, for example, when printing an image.
  • a drive is connected to the external interface 966 as necessary.
  • a removable medium such as a magnetic disk or an optical disk is attached to the drive, and a program read from the removable medium can be installed in the imaging device 960.
  • the external interface 966 may be configured as a network interface connected to a network such as a LAN or the Internet. That is, the external interface 966 has a role as a transmission unit in the imaging device 960.
  • the recording medium mounted on the media drive 968 may be any readable / writable removable medium such as a magnetic disk, a magneto-optical disk, an optical disk, or a semiconductor memory.
  • alternatively, a recording medium may be fixedly attached to the media drive 968 to constitute a non-portable storage unit such as a built-in hard disk drive or an SSD (Solid State Drive).
  • the control unit 970 includes a processor such as a CPU and memories such as a RAM and a ROM.
  • the memory stores a program executed by the CPU, program data, and the like.
  • the program stored in the memory is read and executed by the CPU when the imaging device 960 is activated, for example.
  • the CPU controls the operation of the imaging device 960 according to an operation signal input from the user interface 971 by executing the program.
  • the user interface 971 is connected to the control unit 970.
  • the user interface 971 includes, for example, buttons and switches for the user to operate the imaging device 960.
  • the user interface 971 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 970.
  • the image processing unit 964 has the functions of the image encoding device and the image decoding device according to the above-described embodiment. As a result, when the image is encoded and decoded by the imaging device 960, the line memory can be reduced with a simple processing structure.
  • in the above description, the example in which the filtering process for the vertical boundary is performed before the filtering process for the horizontal boundary has mainly been described.
  • the present invention is also applicable when the filtering process for the horizontal boundary is performed first.
  • the above-described effects of the technology according to the disclosure can be enjoyed equally.
  • the size of the processing unit of the deblocking filter or the size of the LCU is not limited to the example described in the present specification, and may be other sizes.
  • since the filter taps of the filtering process for the vertical boundary are arranged along the horizontal direction, the filter of the filtering process for the vertical boundary is expressed as the "H (horizontal) filter".
  • since the filter taps of the filtering process for the horizontal boundary are arranged along the vertical direction, the filter of the filtering process for the horizontal boundary is expressed as the "V (vertical) filter".
  • the method for transmitting such information is not limited to such an example.
  • these pieces of information may be transmitted or recorded as separate data associated with the encoded bitstream without being multiplexed into the encoded bitstream.
  • the term "associate" means that an image (which may be a part of an image, such as a slice or a block) included in a bitstream and information corresponding to the image can be linked to each other at the time of decoding. That is, the information may be transmitted on a transmission path different from that of the image (or bitstream).
  • Information may be recorded on a recording medium (or another recording area of the same recording medium) different from the image (or bit stream). Furthermore, the information and the image (or bit stream) may be associated with each other in an arbitrary unit such as a plurality of frames, one frame, or a part of the frame.
  • Note that the present technology can also take the following configurations.
  • a decoding unit that decodes an encoded stream to generate an image;
  • a first filter that performs a first filter process on the reconstructed image of the image generated by the decoding unit;
  • a second filter that performs a second filter process different from the first filter process on the reconstructed image of the image generated by the decoding unit;
  • An image processing apparatus comprising: an arithmetic unit that performs arithmetic processing using the image on which the first filter processing has been performed and the image on which the second filter processing has been performed.
  • The image processing device according to (1) above, further including a control unit that controls the first filter and the second filter such that the first filter processing and the second filter processing are performed in parallel.
  • the image processing device wherein the control unit performs control so as to match output phases of the first filter and the second filter.
  • a memory for holding a reconstructed image of the image generated by the decoding unit is further provided.
  • the image processing apparatus according to any one of (1) to (3), wherein the first filter and the second filter acquire the reconstructed image from the memory.
  • the first filter is a filter that removes noise at a block boundary.
  • the first filter is a deblocking filter.
  • the deblocking filter includes a filter applied to left and right pixels of a vertical boundary, and a filter applied to pixels above and below a horizontal boundary.
  • The image processing device wherein the control unit performs control such that the filter processing applied to pixels on the left and right of a vertical boundary and the filter processing applied to pixels above and below a horizontal boundary are performed in parallel.
  • (9) The image processing device according to any one of (1) to (7), wherein the second filter includes a third filter that removes ringing or a fourth filter that performs class classification on a block-by-block basis.
  • (10) The image processing device according to (9), wherein the third filter is an adaptive offset filter and the fourth filter is an adaptive loop filter.
  • The image processing device according to any one of (1) to (10), wherein the calculation unit performs, as the arithmetic processing, addition of the image on which the first filter processing has been performed and the image on which the second filter processing has been performed, as a linear sum using a first calculation coefficient corresponding to the first filter processing and a second calculation coefficient corresponding to the second filter processing.
  • (12) The image processing device according to (11), wherein the first calculation coefficient and the second calculation coefficient are set according to a distance from a vertical boundary and a horizontal boundary.
  • An image processing method in which an image processing apparatus: decodes an encoded stream to generate an image; performs first filter processing on a reconstructed image of the generated image; performs, on the reconstructed image of the generated image, second filter processing different from the first filter processing; and performs arithmetic processing using the image on which the first filter processing has been performed and the image on which the second filter processing has been performed.
  • a first filter that performs a first filter process on a reconstructed image of an image that has been locally decoded when the image is encoded;
  • a second filter that performs a second filter process different from the first filter process on the reconstructed image of the locally decoded image;
  • An arithmetic unit that performs arithmetic processing using the image subjected to the first filter processing and the image subjected to the second filter processing;
  • An image processing apparatus comprising: an encoding unit that encodes the image using an image that is a result of the arithmetic processing performed by the arithmetic unit.
  • The image processing device according to (14) above, further including a control unit that controls the first filter and the second filter such that the first filter processing and the second filter processing are performed in parallel.
  • An image processing method in which an image processing apparatus: performs first filter processing on a reconstructed image of an image subjected to local decoding processing when the image is encoded; performs, on the reconstructed image of the locally decoded image, second filter processing different from the first filter processing; performs arithmetic processing using the image on which the first filter processing has been performed and the image on which the second filter processing has been performed; and encodes the image using an image that is a result of the arithmetic processing.
  • A decoding unit that decodes an encoded stream to generate an image; a first filter that performs first filter processing on a reconstructed image of the image generated by the decoding unit; and a second filter that performs, on the image on which the first filter processing has been performed by the first filter, second filter processing different from the first filter processing.
  • An image processing apparatus comprising: an arithmetic unit that performs arithmetic processing using the image on which the first filter processing has been performed and the image on which the second filter processing has been performed.
  • An image processing method in which an image processing apparatus: decodes an encoded stream to generate an image; performs first filter processing on a reconstructed image of the generated image; performs, on the image on which the first filter processing has been performed, second filter processing different from the first filter processing; and performs arithmetic processing using the image on which the first filter processing has been performed and the image on which the second filter processing has been performed.
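Configurations (11) and (12) above describe the arithmetic processing as a linear sum of the two filter outputs, with coefficients set according to the distance from the vertical and horizontal boundaries. The following is a minimal, hypothetical Python sketch of that idea; the weighting rule, grid size, and distance maps are illustrative assumptions, not the coefficients defined by this publication:

```python
def combine_filter_outputs(f1, f2, dist_v, dist_h, max_dist=4):
    """Blend two filtered images pixel-by-pixel as a linear sum.

    f1, f2  : 2-D lists, outputs of the first and second filter
    dist_v  : per-pixel distance to the nearest vertical boundary
    dist_h  : per-pixel distance to the nearest horizontal boundary
    A pixel closer to a boundary gives more weight to the filter that
    processes that boundary (illustrative weighting rule).
    """
    h, w = len(f1), len(f1[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Coefficient grows as the pixel nears the respective boundary.
            c1 = max_dist - min(dist_v[y][x], max_dist) + 1
            c2 = max_dist - min(dist_h[y][x], max_dist) + 1
            out[y][x] = (c1 * f1[y][x] + c2 * f2[y][x]) / (c1 + c2)
    return out

# 4x4 toy example: constant filter outputs, boundaries at x = 0 and y = 0.
f1 = [[100.0] * 4 for _ in range(4)]  # e.g. vertical-boundary filter output
f2 = [[60.0] * 4 for _ in range(4)]   # e.g. horizontal-boundary filter output
dist_v = [[x for x in range(4)] for _ in range(4)]
dist_h = [[y] * 4 for y in range(4)]
out = combine_filter_outputs(f1, f2, dist_v, dist_h)
# At (0, 0) both distances are 0, so both coefficients are equal: 80.0
```

Near the top-left corner both filter outputs contribute equally; moving away from one boundary shifts the weight toward the filter for the other boundary, which matches the spirit of setting the coefficients by boundary distance.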

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to an image processing device and method with which line memory can be reduced using a simple processing structure. A deblocking H filtering unit, a deblocking V filtering unit, an adaptive offset filtering unit, and an adaptive loop filtering unit share line memory at the LCU boundary, and each performs filtering in parallel on a reconstructed pixel held in the line memory. A calculation unit performs calculations such as addition on the pixels after filtering by each filtering unit, and outputs the calculation result to the next stage. The present invention can be applied to an image processing device, for example.
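The structure described in the abstract (several filter units reading the same reconstructed pixels held in a shared line memory, with a calculation unit combining their outputs) can be roughly sketched as follows. The four one-line "filters" and the averaging step are placeholder stand-ins, not the actual deblocking, adaptive offset, or adaptive loop filter definitions:

```python
def deblocking_h(line):      # placeholder: vertical-boundary (H) deblocking
    return [p + 1 for p in line]

def deblocking_v(line):      # placeholder: horizontal-boundary (V) deblocking
    return [p - 1 for p in line]

def adaptive_offset(line):   # placeholder: adaptive offset filter
    return [p + 2 for p in line]

def adaptive_loop(line):     # placeholder: adaptive loop filter
    return [float(p) for p in line]

def process_lcu_boundary(line_memory):
    """Every filter unit reads the SAME held line (sequenced here, but the
    reads are independent, so they could run in parallel); the calculation
    unit then combines the filtered pixels, e.g. by averaging."""
    units = (deblocking_h, deblocking_v, adaptive_offset, adaptive_loop)
    outputs = [unit(line_memory) for unit in units]
    return [sum(pixels) / len(pixels) for pixels in zip(*outputs)]

line_memory = [10, 20, 30]   # reconstructed pixels held at an LCU boundary
result = process_lcu_boundary(line_memory)
# result == [10.5, 20.5, 30.5]
```

The point of the shared buffer is that only one copy of the boundary line is held, rather than each filter stage keeping its own, which is how the line memory reduction arises.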
PCT/JP2012/074092 2011-09-27 2012-09-20 Image processing device and method WO2013047325A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011-211226 2011-09-27
JP2011211226A JP2013074416A (ja) Image processing device and method

Publications (1)

Publication Number Publication Date
WO2013047325A1 true WO2013047325A1 (fr) 2013-04-04

Family

ID=47995357

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/074092 WO2013047325A1 (fr) 2011-09-27 2012-09-20 Image processing device and method

Country Status (2)

Country Link
JP (1) JP2013074416A (fr)
WO (1) WO2013047325A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019182429A1 (fr) * 2018-03-23 2019-09-26 Samsung Electronics Co., Ltd. Beam management method and apparatus for multi-stream transmission
WO2019182159A1 (fr) * 2018-03-23 2019-09-26 シャープ株式会社 Image filtering device, image decoding device, and image encoding device
CN113489998A (zh) * 2021-05-27 2021-10-08 杭州博雅鸿图视频技术有限公司 Deblocking filtering method and apparatus, electronic device, and medium
WO2023077427A1 (fr) * 2021-11-05 2023-05-11 Apple Inc. Default beam and configuration techniques at high movement speeds

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101517806B1 (ko) * 2014-02-18 2015-05-06 전자부품연구원 Image decoding method and image device applying the same
US12058314B2 (en) 2021-04-30 2024-08-06 Tencent America LLC Block-wise content-adaptive online training in neural image compression with post filtering

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011152425A1 (fr) * 2010-06-03 2011-12-08 シャープ株式会社 Filter device, image decoding device, image encoding device, and filter parameter data structure

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011152425A1 (fr) * 2010-06-03 2011-12-08 シャープ株式会社 Filter device, image decoding device, image encoding device, and filter parameter data structure

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
MASARU IKEDA ET AL.: "CE12 Subset2: Parallel deblocking filter", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11, JCTVC-E181, 5TH MEETING, 16 March 2011 (2011-03-16) - 23 March 2011 (2011-03-23), GENEVA, CH, pages 1 - 9, XP030008687 *
MASARU IKEDA ET AL.: "Parallel deblocking improvement", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11, JCTVC-F214, 14 July 2011 (2011-07-14) - 22 July 2011 (2011-07-22), TORINO, IT, pages 1 - 7, XP030009237 *
SEMIH ESENLIK ET AL.: "Line Memory Reduction for ALF Decoding", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11, JCTVC-E225, 5TH MEETING, 16 March 2011 (2011-03-16) - 23 March 2011 (2011-03-23), GENEVA, CH, pages 1 - 10, XP030008731 *
TOMOHIRO IKAI ET AL.: "A parallel adaptive loop filter", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11, JCTVC-B064, 2ND MEETING, 21 July 2010 (2010-07-21) - 28 July 2010 (2010-07-28), GENEVA, CH, pages 1 - 11, XP030007644 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019182429A1 (fr) * 2018-03-23 2019-09-26 Samsung Electronics Co., Ltd. Beam management method and apparatus for multi-stream transmission
WO2019182159A1 (fr) * 2018-03-23 2019-09-26 シャープ株式会社 Image filtering device, image decoding device, and image encoding device
US11889070B2 (en) 2018-03-23 2024-01-30 Sharp Kabushiki Kaisha Image filtering apparatus, image decoding apparatus, and image coding apparatus
CN113489998A (zh) * 2021-05-27 2021-10-08 杭州博雅鸿图视频技术有限公司 Deblocking filtering method and apparatus, electronic device, and medium
WO2023077427A1 (fr) * 2021-11-05 2023-05-11 Apple Inc. Default beam and configuration techniques at high movement speeds

Also Published As

Publication number Publication date
JP2013074416A (ja) 2013-04-22

Similar Documents

Publication Publication Date Title
TWI749530B (zh) Information processing device and information processing method
KR102005209B1 (ko) Image processing device and image processing method
JP5942990B2 (ja) Image processing device and method
US10419756B2 Image processing device and method
JP5464435B2 (ja) Image decoding device and method
WO2014002896A1 (fr) Encoding device and method, and decoding device and method
WO2011089972A1 (fr) Image processing device and method
JP5884313B2 (ja) Image processing device, image processing method, program, and recording medium
US20140086501A1 Image processing device and image processing method
WO2014050676A1 (fr) Image processing device and method
WO2011125866A1 (fr) Image processing device and method
JP5556996B2 (ja) Image processing device and method
JP2014207536A (ja) Image processing device and method
WO2013065570A1 (fr) Image processing device and method
WO2013108688A1 (fr) Image processing device and method
WO2013047325A1 (fr) Image processing device and method
WO2014156708A1 (fr) Image decoding device and method
WO2013051452A1 (fr) Image processing device and method
JP6217826B2 (ja) Image processing device and image processing method
WO2013065567A1 (fr) Image processing device and method
JP2013074491A (ja) Image processing device and method
WO2014002900A1 (fr) Image processing device and method
WO2013105457A1 (fr) Image processing device and method
WO2014103765A1 (fr) Decoding device and method, and encoding device and method
JP2013012996A (ja) Image processing device and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12836257

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12836257

Country of ref document: EP

Kind code of ref document: A1