WO2013105458A1 - Image processing device and method

Image processing device and method


Publication number
WO2013105458A1
Authority
WO
WIPO (PCT)
Prior art keywords
unit
filter
image
coefficient
buffer
Prior art date
Application number
PCT/JP2012/083969
Other languages
English (en)
Japanese (ja)
Inventor
Ohji Nakagami
Hironari Sakurai
Takuya Kitamura
Yoichi Yagasaki
Original Assignee
Sony Corporation
Priority date
Filing date
Publication date
Application filed by Sony Corporation
Publication of WO2013105458A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117: Filters, e.g. for pre-processing or post-processing

Definitions

  • The present disclosure relates to an image processing apparatus and method, and more particularly to an image processing apparatus and method that can simplify management of the buffer that stores the filter coefficients of an in-loop filter during encoding or decoding.
  • An adaptive loop filter (ALF) is adopted in the current draft of HEVC.
  • In the adaptive loop filter, 16 sets of filter coefficients are transmitted per picture. These filter coefficients are transmitted ahead of picture coding information such as prediction mode information, motion vector information, and DCT coefficient information.
  • In the block-based case, the filter coefficients to be applied are determined from the variance of the pixels in the region; in the region-based case, the filter to be applied is designated by a flag.
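In the block-based case just described, the filter is selected from the local pixel variance. The Python sketch below illustrates that idea only; the linear mapping from variance to one of the 16 filter classes is a hypothetical rule for illustration, not the exact activity metric of the HEVC draft.

```python
def select_filter_index(block, num_filters=16, max_variance=1024.0):
    """Map the variance of a pixel block to one of num_filters classes.

    The linear quantization of variance used here is illustrative;
    the actual draft defines a specific activity/direction metric.
    """
    n = len(block)
    mean = sum(block) / n
    variance = sum((p - mean) ** 2 for p in block) / n
    index = int(variance / max_variance * num_filters)
    return min(index, num_filters - 1)
```

A flat region (low variance) thus selects one filter set, while a busy, high-variance region selects another.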
  • In Non-Patent Document 2, a proposal has been made to send filter coefficients or an index to the decoding side in units of the LCU, which is the maximum coding unit.
  • the filter coefficients are stored in the filter coefficient buffer in the order of transmission.
  • the filter coefficients present in the buffer are referenced by an index.
  • The filter coefficient buffer is managed as a FIFO, and coefficients transmitted earlier than the buffer size can hold are discarded.
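The FIFO behaviour described here, where coefficient sets are stored in transmission order and the oldest set is discarded once the buffer is full, can be modelled in a few lines of Python (a simplified sketch, not the codec's actual data structures):

```python
from collections import deque

class FifoCoefficientBuffer:
    """Filter-coefficient buffer managed as a FIFO of fixed size."""

    def __init__(self, size):
        self.slots = deque(maxlen=size)  # oldest set is dropped automatically

    def push(self, coeffs):
        self.slots.append(coeffs)

    def lookup(self, index):
        # Index 0 refers to the oldest set still held in the buffer.
        return self.slots[index]
```

Because an overflow silently shifts every index, a decoder must know the full transmission history to resolve an index, which is what makes decoding from the middle of a screen difficult.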
  • JCTVC-F803, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 6th Meeting: Torino, IT, 14-22 July, 2011
  • A. Fuldseth (Cisco Systems), G. Bjontegaard (Cisco Systems), "Improved ALF with low latency and reduced complexity", JCTVC-G499, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 7th Meeting: Geneva, CH, 21-30 November, 2011
  • the filter coefficient buffer described in Non-Patent Document 2 is managed by a FIFO.
  • Because the number of filter coefficient sets in the adaptive loop filter is large, buffer management under FIFO is difficult, and it is hard to start the decoding process from the middle of a screen.
  • The present disclosure has been made in view of such a situation, and makes it possible to simplify management of the buffer that stores the filter coefficients of an in-loop filter during encoding or decoding.
  • An image processing apparatus according to one aspect of the present disclosure includes: a receiving unit that receives a filter coefficient of a filter used for encoding or decoding, in a transmission unit larger than a maximum coding unit; a decoding unit that decodes an encoded stream to generate an image; a coefficient writing unit that writes the filter coefficient received by the receiving unit into a buffer that stores filter coefficients; and a filter unit that applies the filter, for each maximum coding unit, to the image generated by the decoding unit, using the filter coefficient written into the buffer by the coefficient writing unit.
  • the coefficient writing unit can write the filter coefficient received by the receiving unit into an empty area of the buffer.
  • the coefficient writing unit can overwrite the buffer with the filter coefficient received by the receiving unit when there is no free space in the buffer.
  • the coefficient writing unit can write the filter coefficient by using storage position information indicating a position where a filter coefficient with a small number of references is stored in the buffer.
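Taken together, the three preceding points describe an explicitly indexed buffer: write into the lowest free slot, and only when no slot is free overwrite the position named by the storage position information, chosen on the encoder side as a slot with a low reference count. A Python sketch under those assumptions (class and method names are illustrative):

```python
class IndexedCoefficientBuffer:
    """Fixed-size coefficient buffer addressed by explicit buffer indices."""

    def __init__(self, size):
        self.slots = [None] * size
        self.ref_counts = [0] * size

    def write(self, coeffs, position=None):
        """Write into the lowest-index free slot; if none is free,
        overwrite the given position (the storage position information
        transmitted when the buffer is full)."""
        for i, slot in enumerate(self.slots):
            if slot is None:
                self.slots[i] = coeffs
                self.ref_counts[i] = 0
                return i
        self.slots[position] = coeffs
        self.ref_counts[position] = 0
        return position

    def least_referenced(self):
        """Encoder-side choice of which slot to overwrite."""
        return min(range(len(self.slots)), key=lambda i: self.ref_counts[i])

    def lookup(self, index):
        self.ref_counts[index] += 1
        return self.slots[index]
```

Because positions are explicit, writing a new set never shifts the index of an existing one, which is the simplification over FIFO management.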
  • a buffer for storing the filter coefficient can be further provided.
  • the filter unit can apply a filter to the image subjected to the adaptive offset process.
  • the filter unit can perform a filter using a coefficient that minimizes an error from the original image.
  • In the image processing method according to one aspect of the present disclosure, an image processing apparatus receives a filter coefficient of a filter used for encoding or decoding in a transmission unit larger than a maximum coding unit, decodes an encoded stream to generate an image, writes the received filter coefficient into a buffer that stores filter coefficients, and applies the filter to the generated image, for each maximum coding unit, using the filter coefficients written in the buffer.
  • An image processing apparatus according to another aspect of the present disclosure includes: a coefficient writing unit that writes a filter coefficient of a filter used for encoding or decoding into a buffer; a filter unit that applies the filter, for each maximum coding unit, to a locally decoded image, using the filter coefficient written into the buffer by the coefficient writing unit; an encoding unit that encodes the image using the image filtered by the filter unit and generates an encoded stream; and a transmission unit that transmits the filter coefficient in a transmission unit larger than a maximum coding unit.
  • the coefficient writing unit can write the filter coefficient into an empty area of the buffer.
  • The coefficient writing unit can overwrite a filter coefficient in the buffer when there is no free space in the buffer.
  • The apparatus further includes a counter that counts the number of times each filter coefficient in the buffer is referenced; the coefficient writing unit writes the filter coefficient with reference to the counter, and the transmission unit can transmit storage position information indicating the position in the buffer where a filter coefficient with a small reference count is stored.
  • a buffer for storing the filter coefficient can be further provided.
  • the filter unit can apply a filter to the image subjected to the adaptive offset process.
  • the filter unit can perform a filter using a coefficient that minimizes an error from the original image.
  • The image processing method according to another aspect of the present disclosure writes a filter coefficient of a filter used for encoding or decoding into a buffer that stores filter coefficients, applies the filter, for each maximum coding unit, to a locally decoded image using the filter coefficients written in the buffer, encodes the image using the filtered image to generate an encoded stream, and transmits the determined filter coefficient in a transmission unit larger than the maximum coding unit.
  • In one aspect of the present disclosure, a filter coefficient of a filter used for encoding or decoding is received in a transmission unit larger than the maximum coding unit, and the received encoded stream is decoded to generate an image. The received filter coefficient is written into a buffer that stores filter coefficients, and the filter is then applied to the generated image, for each maximum coding unit, using the filter coefficients written in the buffer.
  • the filter coefficient of the filter used for encoding or decoding is determined, and the determined filter coefficient is written in the buffer. Then, with the filter coefficients written in the buffer, the filter is applied for each maximum coding unit for the locally decoded image, and the image is encoded using the filtered image. Then, an encoded stream is generated, and the filter coefficient is transmitted in a transmission unit that is a unit larger than the maximum encoding unit.
  • the above-described image processing apparatus may be an independent apparatus, or may be an internal block constituting one image encoding apparatus or image decoding apparatus.
  • an image can be decoded.
  • it is possible to simplify the management of the buffer that stores the filter coefficient.
  • an image can be encoded.
  • it is possible to simplify the management of the buffer that stores the filter coefficient.
  • FIG. 20 is a block diagram illustrating a main configuration example of a computer.
  • FIG. 1 illustrates a configuration of an embodiment of an image encoding device as an image processing device to which the present disclosure is applied.
  • the image encoding device 11 shown in FIG. 1 encodes image data using a prediction process.
  • For the prediction process, for example, the HEVC (High Efficiency Video Coding) scheme is used.
  • The image encoding device 11 includes an A/D (Analog/Digital) conversion unit 21, a screen rearrangement buffer 22, a calculation unit 23, an orthogonal transform unit 24, a quantization unit 25, a lossless encoding unit 26, and an accumulation buffer 27.
  • The image encoding device 11 also includes an inverse quantization unit 28, an inverse orthogonal transform unit 29, a calculation unit 30, a deblocking filter 31, a frame memory 32, a selection unit 33, an intra prediction unit 34, a motion prediction/compensation unit 35, a predicted image selection unit 36, and a rate control unit 37.
  • the image encoding device 11 includes an adaptive offset filter 41 and an adaptive loop filter 42 between the deblocking filter 31 and the frame memory 32.
  • The A/D conversion unit 21 A/D-converts the input image data and outputs it to the screen rearrangement buffer 22 for storage.
  • The screen rearrangement buffer 22 rearranges the stored frames from display order into encoding order according to the GOP (Group of Pictures) structure.
  • the screen rearrangement buffer 22 supplies the image with the rearranged frame order to the arithmetic unit 23.
  • the screen rearrangement buffer 22 also supplies the image in which the frame order is rearranged to the intra prediction unit 34 and the motion prediction / compensation unit 35.
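For a GOP containing B pictures, the rearrangement from display order into encoding order moves each forward reference (I or P picture) ahead of the B pictures that depend on it. A minimal sketch, assuming a simple IBBP structure (real GOP structures are configurable):

```python
def display_to_coding_order(display_frames):
    """Reorder an I B B P ... sequence so that each B picture follows
    the later reference it predicts from. Illustrative only."""
    coding = []
    pending_b = []
    for frame in display_frames:
        if frame.startswith('B'):
            pending_b.append(frame)   # held back until the next I/P arrives
        else:
            coding.append(frame)      # I or P picture goes out first
            coding.extend(pending_b)  # then the B pictures that precede it
            pending_b = []
    coding.extend(pending_b)
    return coding
```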
  • The calculation unit 23 subtracts, from the image read from the screen rearrangement buffer 22, the predicted image supplied from the intra prediction unit 34 or the motion prediction/compensation unit 35 via the predicted image selection unit 36, and outputs the difference information to the orthogonal transform unit 24.
  • For an image to be intra-coded, the calculation unit 23 subtracts the predicted image supplied from the intra prediction unit 34 from the image read from the screen rearrangement buffer 22.
  • For an image to be inter-coded, the calculation unit 23 subtracts the predicted image supplied from the motion prediction/compensation unit 35 from the image read from the screen rearrangement buffer 22.
  • the orthogonal transform unit 24 performs orthogonal transform such as discrete cosine transform and Karhunen-Loeve transform on the difference information supplied from the computation unit 23 and supplies the transform coefficient to the quantization unit 25.
  • the quantization unit 25 quantizes the transform coefficient output from the orthogonal transform unit 24.
  • the quantization unit 25 supplies the quantized transform coefficient to the lossless encoding unit 26.
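In its simplest model, the quantization performed here, and the corresponding inverse quantization by the inverse quantization unit 28, reduce to dividing by a step size and rounding. This sketch ignores QP-dependent scaling lists and the other details of the actual HEVC design:

```python
def quantize(coeff, step):
    """Scalar quantization: round the transform coefficient to the
    nearest multiple of the step size."""
    return int(round(coeff / step))

def dequantize(level, step):
    """Inverse quantization: scale the quantized level back up."""
    return level * step
```

The round-trip error is bounded by half the step size, which is exactly the information quantization discards.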
  • the lossless encoding unit 26 performs lossless encoding such as variable length encoding and arithmetic encoding on the quantized transform coefficient.
  • the lossless encoding unit 26 acquires parameters such as information indicating the intra prediction mode from the intra prediction unit 34, and acquires parameters such as information indicating the inter prediction mode and motion vector information from the motion prediction / compensation unit 35.
  • the lossless encoding unit 26 encodes the quantized transform coefficient, encodes each acquired parameter (syntax element), and uses it as part of the header information of the encoded data (multiplexes).
  • the lossless encoding unit 26 supplies the encoded data obtained by encoding to the accumulation buffer 27 for accumulation.
  • As the lossless encoding processing, variable length coding, arithmetic coding, or the like is performed.
  • An example of variable length coding is CAVLC (Context-Adaptive Variable Length Coding).
  • An example of arithmetic coding is CABAC (Context-Adaptive Binary Arithmetic Coding).
  • The accumulation buffer 27 temporarily holds the encoded stream (data) supplied from the lossless encoding unit 26 and, at a predetermined timing, outputs it as an encoded image to, for example, a recording device or transmission path (not shown) in the subsequent stage. That is, the accumulation buffer 27 is also a transmission unit that transmits the encoded stream.
  • the transform coefficient quantized by the quantization unit 25 is also supplied to the inverse quantization unit 28.
  • the inverse quantization unit 28 inversely quantizes the quantized transform coefficient by a method corresponding to the quantization by the quantization unit 25.
  • the inverse quantization unit 28 supplies the obtained transform coefficient to the inverse orthogonal transform unit 29.
  • the inverse orthogonal transform unit 29 performs inverse orthogonal transform on the supplied transform coefficient by a method corresponding to the orthogonal transform processing by the orthogonal transform unit 24.
  • the inversely orthogonally transformed output (restored difference information) is supplied to the arithmetic unit 30.
  • The calculation unit 30 adds the predicted image, supplied from the intra prediction unit 34 or the motion prediction/compensation unit 35 via the predicted image selection unit 36, to the inverse orthogonal transform result supplied from the inverse orthogonal transform unit 29 (that is, the restored difference information), and obtains a locally decoded image (decoded image).
  • the calculation unit 30 adds the prediction image supplied from the intra prediction unit 34 to the difference information.
  • the calculation unit 30 adds the predicted image supplied from the motion prediction / compensation unit 35 to the difference information.
  • the decoded image as the addition result is supplied to the deblocking filter 31 and the frame memory 32.
  • the deblocking filter 31 removes block distortion of the decoded image by appropriately performing deblocking filter processing.
  • the deblocking filter 31 supplies the filter processing result to the adaptive offset filter 41.
  • The adaptive offset filter 41 performs a sample adaptive offset (SAO) process, which mainly removes ringing, on the image filtered by the deblocking filter 31.
  • The adaptive offset filter 41 determines, in a quad-tree structure, the type of offset filter for each divided region, together with an offset value for each divided region, and applies the corresponding filter processing to the image filtered by the deblocking filter 31.
  • the quad-tree structure and the offset value for each divided region are calculated by the adaptive offset filter 41 and used in the image encoding device 11.
  • The calculated quad-tree structure and the offset value for each divided region are encoded by the lossless encoding unit 26 and transmitted to the image decoding device 51 of FIG. 3.
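One of the offset filter types selectable per region is the band offset, which adds a signalled offset to pixels whose intensity falls in a given band. A minimal sketch for 8-bit pixels (the band width of 8 corresponds to 32 uniform bands, but the offset values used below are made up for illustration):

```python
def apply_band_offset(pixels, band_offsets, band_width=8):
    """Add the signalled per-band offset to each pixel.

    band_offsets maps a band index (pixel // band_width) to an offset;
    bands not listed receive no offset. Offsets here are illustrative.
    """
    out = []
    for p in pixels:
        band = p // band_width
        p = p + band_offsets.get(band, 0)
        out.append(max(0, min(255, p)))  # clip to the 8-bit range
    return out
```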
  • the adaptive offset filter 41 supplies the filtered image to the adaptive loop filter 42.
  • The adaptive loop filter 42 performs adaptive loop filter (ALF) processing in units of the LCU, the maximum coding unit, as the ALF processing unit.
  • the filtered image is supplied to the frame memory 32.
  • a two-dimensional Wiener filter is used as a filter.
  • filters other than the Wiener filter may be used.
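A Wiener filter is the linear filter whose coefficients minimize the mean squared error against the original image, the same criterion stated elsewhere for the filter unit. For a one-dimensional 3-tap case the coefficients fall out of an ordinary least-squares solve; this sketch (using NumPy) illustrates the criterion, not the codec's actual computation:

```python
import numpy as np

def wiener_coefficients(degraded, original, taps=3):
    """Least-squares fit of a linear filter mapping the degraded
    signal to the original: minimizes ||rows @ h - original||^2."""
    pad = taps // 2
    padded = np.pad(np.asarray(degraded, dtype=float), pad, mode='edge')
    # Each row holds the `taps` degraded samples around one pixel.
    rows = np.array([padded[i:i + taps] for i in range(len(degraded))])
    h, *_ = np.linalg.lstsq(rows, np.asarray(original, dtype=float),
                            rcond=None)
    return h
```

When the degraded signal already equals the original, the fitted filter is the identity tap [0, 1, 0], as expected of an error-minimizing solution.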
  • The adaptive loop filter 42 resets the filter coefficients for each screen, sequentially obtains filter coefficients at the start of encoding a screen and during its encoding, stores them in the buffer, and sends them to the decoding side.
  • the adaptive loop filter 42 obtains a filter coefficient for each predetermined transmission unit such as an LCU line during encoding.
  • The LCU line (maximum coding unit line) is a unit larger than the LCU (maximum coding unit) that gathers a plurality of LCUs extending to the edge of the screen.
  • the transmission unit is not limited to the LCU line.
  • The filter coefficients may be sent whenever necessary while the screen is being encoded, even outside the predetermined transmission units.
  • the adaptive loop filter 42 filters the image from the adaptive offset filter 41 using the optimum filter coefficient among the filter coefficients stored in the buffer in units of LCUs.
  • the adaptive loop filter 42 supplies a buffer index indicating the storage position of the used filter coefficient together with a flag indicating on / off of the filter to the lossless encoding unit 26, and transmits it to the decoding side.
  • When there is a free area in the buffer, the adaptive loop filter 42 stores the filter coefficient in the free area with the smallest index. When there is no free area in the buffer, it overwrites; for example, the adaptive loop filter 42 overwrites the area of a coefficient with a low reference count among the filter coefficients stored in the buffer. The adaptive loop filter 42 then supplies the storage position information (that is, the buffer index) indicating the overwritten position, together with the filter coefficient, to the lossless encoding unit 26 for transmission to the decoding side.
  • the frame memory 32 outputs the stored reference image to the intra prediction unit 34 or the motion prediction / compensation unit 35 via the selection unit 33 at a predetermined timing.
  • the frame memory 32 supplies the reference image to the intra prediction unit 34 via the selection unit 33.
  • the frame memory 32 supplies the reference image to the motion prediction / compensation unit 35 via the selection unit 33.
  • the selection unit 33 supplies the reference image to the intra prediction unit 34 when the reference image supplied from the frame memory 32 is an image to be subjected to intra coding.
  • the selection unit 33 also supplies the reference image to the motion prediction / compensation unit 35 when the reference image supplied from the frame memory 32 is an image to be inter-encoded.
  • the intra prediction unit 34 performs intra prediction (intra-screen prediction) for generating a predicted image using pixel values in the screen.
  • the intra prediction unit 34 performs intra prediction in a plurality of modes (intra prediction modes).
  • the intra prediction unit 34 generates prediction images in all intra prediction modes, evaluates each prediction image, and selects an optimal mode. When the optimal intra prediction mode is selected, the intra prediction unit 34 supplies the prediction image generated in the optimal mode to the calculation unit 23 and the calculation unit 30 via the predicted image selection unit 36.
  • the intra prediction unit 34 supplies parameters such as intra prediction mode information indicating the adopted intra prediction mode to the lossless encoding unit 26 as appropriate.
  • The motion prediction/compensation unit 35 performs motion prediction on an image to be inter-coded, using the input image supplied from the screen rearrangement buffer 22 and the reference image supplied from the frame memory 32 via the selection unit 33. The motion prediction/compensation unit 35 then performs motion compensation according to the motion vectors detected by the motion prediction, and generates a predicted image (inter predicted image information).
  • the motion prediction / compensation unit 35 performs inter prediction processing in all candidate inter prediction modes, and generates a prediction image.
  • the motion prediction / compensation unit 35 supplies the generated predicted image to the calculation unit 23 and the calculation unit 30 via the predicted image selection unit 36.
  • the motion prediction / compensation unit 35 supplies parameters such as inter prediction mode information indicating the employed inter prediction mode and motion vector information indicating the calculated motion vector to the lossless encoding unit 26.
  • The predicted image selection unit 36 supplies the output of the intra prediction unit 34 to the calculation units 23 and 30 in the case of an image to be intra-coded, and supplies the output of the motion prediction/compensation unit 35 to the calculation units 23 and 30 in the case of an image to be inter-coded.
  • the rate control unit 37 controls the quantization operation rate of the quantization unit 25 based on the compressed image stored in the storage buffer 27 so that overflow or underflow does not occur.
  • In step S11, the A/D conversion unit 21 A/D-converts the input image.
  • In step S12, the screen rearrangement buffer 22 stores the A/D-converted images and rearranges them from the display order of the pictures into encoding order.
  • When the image to be processed supplied from the screen rearrangement buffer 22 is an image of a block to be intra-processed, a decoded image to be referred to is read from the frame memory 32 and supplied to the intra prediction unit 34 via the selection unit 33.
  • In step S13, based on these images, the intra prediction unit 34 performs intra prediction on the pixels of the block to be processed in all candidate intra prediction modes. Note that pixels not filtered by the deblocking filter 31 are used as the decoded pixels referred to.
  • By this processing, intra prediction is performed in all candidate intra prediction modes, and a cost function value is calculated for each of them. The optimal intra prediction mode is then selected based on the calculated cost function values, and the predicted image generated by intra prediction in the optimal intra prediction mode and its cost function value are supplied to the predicted image selection unit 36.
  • When the processing target image supplied from the screen rearrangement buffer 22 is an image to be inter-processed, the referenced image is read from the frame memory 32 and supplied to the motion prediction/compensation unit 35 via the selection unit 33.
  • In step S14, the motion prediction/compensation unit 35 performs motion prediction/compensation processing.
  • By this processing, motion prediction is performed in all candidate inter prediction modes, a cost function value is calculated for each of them, and the optimal inter prediction mode is determined based on the calculated cost function values. The predicted image generated in the optimal inter prediction mode and its cost function value are then supplied to the predicted image selection unit 36.
  • In step S15, the predicted image selection unit 36 determines the optimal prediction mode, choosing between the optimal intra prediction mode and the optimal inter prediction mode based on the cost function values output from the intra prediction unit 34 and the motion prediction/compensation unit 35. The predicted image selection unit 36 then selects the predicted image of the determined optimal prediction mode and supplies it to the calculation units 23 and 30. This predicted image is used in the calculations of steps S16 and S21 described later.
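The mode decisions of steps S13 through S15 all reduce to choosing the candidate with the smallest cost function value. The document does not specify the cost function; a common form is the rate-distortion cost J = D + λR, which the sketch below assumes:

```python
def choose_best_mode(candidates, lam):
    """candidates: list of (mode_name, distortion, rate_bits) tuples.
    Returns the mode name minimizing J = D + lambda * R."""
    return min(candidates, key=lambda c: c[1] + lam * c[2])[0]
```

With λ large, cheap-to-signal modes win even at higher distortion; with λ small, the lowest-distortion mode wins.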
  • the prediction image selection information is supplied to the intra prediction unit 34 or the motion prediction / compensation unit 35.
  • the intra prediction unit 34 supplies information indicating the optimal intra prediction mode (that is, a parameter related to intra prediction) to the lossless encoding unit 26.
  • When a predicted image in the optimal inter prediction mode is selected, the motion prediction/compensation unit 35 supplies information indicating the optimal inter prediction mode and information corresponding to the optimal inter prediction mode (that is, parameters relating to motion prediction) to the lossless encoding unit 26.
  • Information according to the optimal inter prediction mode includes motion vector information and reference frame information.
  • In step S16, the calculation unit 23 calculates the difference between the image rearranged in step S12 and the predicted image selected in step S15.
  • the predicted image is supplied from the motion prediction / compensation unit 35 in the case of inter prediction, and from the intra prediction unit 34 in the case of intra prediction, to the calculation unit 23 via the predicted image selection unit 36, respectively.
  • The difference data has a smaller data amount than the original image data. Therefore, the data amount can be compressed compared with encoding the image as it is.
  • In step S17, the orthogonal transform unit 24 orthogonally transforms the difference information supplied from the calculation unit 23. Specifically, orthogonal transform such as discrete cosine transform or Karhunen-Loeve transform is performed, and transform coefficients are output.
  • In step S18, the quantization unit 25 quantizes the transform coefficients.
  • In this quantization, the rate is controlled, as will be described for the process of step S28 later.
  • In step S19, the inverse quantization unit 28 inversely quantizes the transform coefficients quantized by the quantization unit 25, with characteristics corresponding to those of the quantization unit 25.
  • In step S20, the inverse orthogonal transform unit 29 performs inverse orthogonal transform on the transform coefficients inversely quantized by the inverse quantization unit 28, with characteristics corresponding to those of the orthogonal transform unit 24.
  • In step S21, the calculation unit 30 adds the predicted image input via the predicted image selection unit 36 to the locally decoded difference information, and generates a locally decoded image (that is, an image corresponding to the input to the calculation unit 23).
  • In step S22, the deblocking filter 31 performs deblocking filter processing on the image output from the calculation unit 30, thereby removing block distortion.
  • the filtered image from the deblocking filter 31 is output to the adaptive offset filter 41.
  • In step S23, the adaptive offset filter 41 performs adaptive offset filter processing.
  • By this processing, filter processing is applied to the image filtered by the deblocking filter 31, using the quad-tree structure in which the type of offset filter is determined for each divided region, and the offset value for each divided region.
  • the filtered image is supplied to the adaptive loop filter 42.
  • In step S24, the adaptive loop filter 42 performs adaptive loop filter processing, in units of LCUs, on the image filtered by the adaptive offset filter 41.
  • As described above, filter coefficients are sequentially obtained at the start of and during encoding of the screen, and the obtained filter coefficients are stored in the buffer while also being supplied to the lossless encoding unit 26.
  • filter processing is performed on the image from the adaptive offset filter 41 using the optimum filter coefficient among the filter coefficients stored in the buffer in units of LCUs. Further, the buffer index indicating the storage position of the used filter coefficient is supplied to the lossless encoding unit 26 together with a flag indicating on / off of the filter.
  • Note that what is simply called a filter coefficient here is a set of filter coefficients.
  • Each piece of information (hereinafter collectively referred to as an adaptive loop filter parameter) supplied to the lossless encoding unit 26 is encoded in step S26 described later.
  • In step S25, the frame memory 32 stores the filtered image.
  • images that are not filtered by the deblocking filter 31, the adaptive offset filter 41, and the adaptive loop filter 42 are also supplied from the computing unit 30 and stored.
  • the transform coefficient quantized in step S18 described above is also supplied to the lossless encoding unit 26.
  • In step S26, the lossless encoding unit 26 encodes the quantized transform coefficients output from the quantization unit 25 and each supplied parameter. That is, the difference image is losslessly encoded by variable length coding, arithmetic coding, or the like, and compressed.
  • In step S27, the accumulation buffer 27 accumulates the encoded difference image (that is, the encoded stream) as a compressed image.
  • the compressed image stored in the storage buffer 27 is appropriately read out and transmitted to the decoding side via the transmission path.
  • In step S28, the rate control unit 37 controls the rate of the quantization operation of the quantization unit 25, based on the compressed image stored in the accumulation buffer 27, so that overflow or underflow does not occur.
  • When the process of step S28 ends, the encoding process ends.
  • FIG. 3 illustrates a configuration of an embodiment of an image decoding device as an image processing device to which the present disclosure is applied.
  • An image decoding device 51 shown in FIG. 3 is a decoding device corresponding to the image encoding device 11 of FIG.
  • encoded data encoded by the image encoding device 11 is transmitted to an image decoding device 51 corresponding to the image encoding device 11 via a predetermined transmission path and decoded.
  • The image decoding device 51 includes an accumulation buffer 61, a lossless decoding unit 62, an inverse quantization unit 63, an inverse orthogonal transform unit 64, a calculation unit 65, a deblocking filter 66, a screen rearrangement buffer 67, and a D/A conversion unit 68.
  • the image decoding device 51 includes a frame memory 69, a selection unit 70, an intra prediction unit 71, a motion prediction / compensation unit 72, and a selection unit 73.
  • The image decoding device 51 further includes an adaptive offset filter 81 and an adaptive loop filter 82 between the deblocking filter 66 on one side and the screen rearrangement buffer 67 and frame memory 69 on the other.
  • the accumulation buffer 61 is also a receiving unit that receives transmitted encoded data.
  • the accumulation buffer 61 receives and accumulates the transmitted encoded data.
  • This encoded data is encoded by the image encoding device 11.
  • the lossless decoding unit 62 decodes the encoded data read from the accumulation buffer 61 at a predetermined timing by a method corresponding to the encoding method of the lossless encoding unit 26 in FIG.
  • The lossless decoding unit 62 supplies parameters such as information indicating the decoded intra prediction mode to the intra prediction unit 71, and supplies parameters such as information indicating the inter prediction mode and motion vector information to the motion prediction/compensation unit 72. Further, the lossless decoding unit 62 supplies the decoded adaptive loop filter parameters (the filter coefficients for each LCU line and the storage position information indicating their storage positions, the on/off flag for each LCU, the buffer index, and the like) to the adaptive loop filter 82.
  • the inverse quantization unit 63 inversely quantizes the coefficient data (quantization coefficient) obtained by decoding by the lossless decoding unit 62 by a method corresponding to the quantization method of the quantization unit 25 in FIG. That is, the inverse quantization unit 63 uses the quantization parameter supplied from the image encoding device 11 to perform inverse quantization of the quantization coefficient by the same method as the inverse quantization unit 28 in FIG.
  • the inverse quantization unit 63 supplies the inversely quantized coefficient data, that is, the orthogonal transform coefficient, to the inverse orthogonal transform unit 64.
  • the inverse orthogonal transform unit 64 performs inverse orthogonal transform on the orthogonal transform coefficients by a method corresponding to the orthogonal transform method of the orthogonal transform unit 24 in FIG. 1, and obtains decoded residual data corresponding to the residual data before the orthogonal transform in the image encoding device 11.
  • the decoded residual data obtained by the inverse orthogonal transform is supplied to the arithmetic unit 65. Further, a prediction image is supplied to the calculation unit 65 from the intra prediction unit 71 or the motion prediction / compensation unit 72 via the selection unit 73.
  • the calculation unit 65 adds the decoded residual data and the predicted image, and obtains decoded image data corresponding to the image data before the predicted image was subtracted by the calculation unit 23 of the image encoding device 11.
  • the arithmetic unit 65 supplies the decoded image data to the deblocking filter 66.
  • the deblocking filter 66 removes block distortion of the decoded image by appropriately performing deblocking filter processing.
  • the deblocking filter 66 supplies the filter processing result to the adaptive offset filter 81.
  • the adaptive offset filter 81 performs an offset filter (SAO) process that mainly removes ringing on the image after filtering by the deblocking filter 66.
  • the adaptive offset filter 81 applies filter processing to the image after filtering by the deblocking filter 66, using the quad-tree structure in which the type of the offset filter is determined for each divided region and the offset value for each divided region.
  • the adaptive offset filter 81 supplies the filtered image to the adaptive loop filter 82.
  • the quad-tree structure and the offset value for each divided region are those calculated by the adaptive offset filter 41 of the image encoding device 11, encoded, and sent.
  • the quad-tree structure encoded by the image encoding device 11 and the offset value for each divided region are received by the image decoding device 51, decoded by the lossless decoding unit 62, and used by the adaptive offset filter 81.
  • the adaptive loop filter 82 is basically configured in the same manner as the adaptive loop filter 42 of the image encoding device 11 of FIG. 1, and performs adaptive loop filter processing in units of LCUs which are the maximum encoding units.
  • the filtered image is supplied to the screen rearrangement buffer 67 and the frame memory 69.
  • the adaptive loop filter 82 resets the buffer that stores the filter coefficient in units of screens, and stores the received filter coefficient in the buffer when receiving the filter coefficient from the lossless decoding unit 62 for each LCU line. At this time, if there is a free area, the adaptive loop filter 82 stores the filter coefficient in the area with the smallest index among the free areas. If there is no free space, the filter coefficient is overwritten in the buffer. For example, the adaptive loop filter 82 overwrites the filter coefficient in the storage position indicated by the storage position information received simultaneously with the filter coefficient.
  • the adaptive loop filter 82 reads from the buffer the filter coefficient corresponding to the buffer index supplied from the lossless decoding unit 62, and filters the image from the adaptive offset filter 81 in units of LCUs. Details of the adaptive loop filter 82 will be described later with reference to FIG.
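The decoder-side buffer behavior described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's actual implementation: the class name `AlfCoefficientBuffer` is hypothetical, indexes are 0-based for brevity, and `N` and the coefficient format are arbitrary.

```python
# Minimal sketch of the decoder-side ALF coefficient buffer described above:
# reset per screen, fill the smallest-index free area, otherwise overwrite at
# the signaled storage position, and read by the per-LCU buffer index.

class AlfCoefficientBuffer:
    def __init__(self, n_slots):
        self.slots = [None] * n_slots

    def reset(self):
        """Reset the buffer in units of screens."""
        self.slots = [None] * len(self.slots)

    def store(self, coeffs, storage_position=None):
        """Store received filter coefficients; return the slot used.

        If a free area exists, the area with the smallest index is used;
        otherwise the coefficients are overwritten at the storage position
        indicated by the storage position information received with them.
        """
        for i, slot in enumerate(self.slots):
            if slot is None:
                self.slots[i] = coeffs
                return i
        self.slots[storage_position] = coeffs
        return storage_position

    def read(self, buffer_index):
        """Read the coefficients for the buffer index signaled per LCU."""
        return self.slots[buffer_index]


buf = AlfCoefficientBuffer(4)
print(buf.store([1, -5, 20, -5, 1]))  # 0: smallest free index
```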
  • the screen rearrangement buffer 67 rearranges images. That is, the frames rearranged into encoding order by the screen rearrangement buffer 22 in FIG. 1 are rearranged into the original display order.
  • the D / A converter 68 performs D / A conversion on the image supplied from the screen rearrangement buffer 67, and outputs and displays the image on a display (not shown).
  • the output of the adaptive loop filter 82 is further supplied to the frame memory 69.
  • the frame memory 69, the selection unit 70, the intra prediction unit 71, the motion prediction/compensation unit 72, and the selection unit 73 correspond respectively to the frame memory 32, the selection unit 33, the intra prediction unit 34, the motion prediction/compensation unit 35, and the predicted image selection unit 36 of the image encoding device 11.
  • the selection unit 70 reads out the image to be inter-processed and the image to be referenced from the frame memory 69 and supplies them to the motion prediction/compensation unit 72.
  • the selection unit 70 reads an image used for intra prediction from the frame memory 69 and supplies the image to the intra prediction unit 71.
  • the intra prediction unit 71 is appropriately supplied with information indicating the intra prediction mode obtained by decoding the header information from the lossless decoding unit 62. Based on this information, the intra prediction unit 71 generates a prediction image from the reference image acquired from the frame memory 69 and supplies the generated prediction image to the selection unit 73.
  • the motion prediction / compensation unit 72 is supplied with information (prediction mode information, motion vector information, reference frame information, flags, various parameters, etc.) obtained by decoding the header information from the lossless decoding unit 62.
  • the motion prediction / compensation unit 72 generates a prediction image from the reference image acquired from the frame memory 69 based on the information supplied from the lossless decoding unit 62 and supplies the generated prediction image to the selection unit 73.
  • the selection unit 73 selects the prediction image generated by the motion prediction / compensation unit 72 or the intra prediction unit 71 and supplies the selected prediction image to the calculation unit 65.
  • the accumulation buffer 61 stores the transmitted encoded data in step S51.
  • the lossless decoding unit 62 decodes the encoded data supplied from the accumulation buffer 61.
  • the I picture, P picture, and B picture encoded by the lossless encoding unit 26 in FIG. 1 are decoded.
  • parameter information such as motion vector information, reference frame information, and prediction mode information (intra prediction mode or inter prediction mode) is also decoded.
  • when the prediction mode information is intra prediction mode information, the prediction mode information is supplied to the intra prediction unit 71.
  • when the prediction mode information is inter prediction mode information, motion vector information corresponding to the prediction mode information is supplied to the motion prediction/compensation unit 72.
  • the parameters of the adaptive loop filter are decoded and supplied to the adaptive loop filter 82.
  • the filter coefficients and the storage position information indicating their storage positions are decoded in units of LCU lines.
  • the on / off flag and the buffer index are decoded in units of LCUs.
  • step S53 the intra prediction unit 71 or the motion prediction / compensation unit 72 performs a prediction image generation process corresponding to the prediction mode information supplied from the lossless decoding unit 62, respectively.
  • when the intra prediction mode information is supplied from the lossless decoding unit 62, the intra prediction unit 71 generates an intra prediction image in the intra prediction mode.
  • when the inter prediction mode information is supplied from the lossless decoding unit 62, the motion prediction/compensation unit 72 performs an inter prediction mode motion prediction/compensation process to generate an inter prediction image.
  • the prediction image (intra prediction image) generated by the intra prediction unit 71 or the prediction image (inter prediction image) generated by the motion prediction / compensation unit 72 is supplied to the selection unit 73.
  • step S54 the selection unit 73 selects a predicted image. That is, a prediction image generated by the intra prediction unit 71 or a prediction image generated by the motion prediction / compensation unit 72 is supplied. Therefore, the supplied predicted image is selected and supplied to the calculation unit 65, and is added to the output of the inverse orthogonal transform unit 64 in step S57 described later.
  • step S52 the transform coefficient decoded by the lossless decoding unit 62 is also supplied to the inverse quantization unit 63.
  • step S55 the inverse quantization unit 63 inversely quantizes the transform coefficient decoded by the lossless decoding unit 62 with characteristics corresponding to the characteristics of the quantization unit 25 in FIG.
  • step S56 the inverse orthogonal transform unit 64 performs inverse orthogonal transform on the transform coefficient inversely quantized by the inverse quantization unit 63 with characteristics corresponding to the characteristics of the orthogonal transform unit 24 of FIG. 1. As a result, the difference information corresponding to the input of the orthogonal transform unit 24 of FIG. 1 (the output of the calculation unit 23) is decoded.
  • step S57 the calculation unit 65 adds the prediction image selected in the process of step S54 described above and input via the selection unit 73 to the difference information. As a result, the original image is decoded.
  • step S58 the deblocking filter 66 performs deblocking filter processing on the image output from the calculation unit 65. Thereby, block distortion is removed.
  • the decoded image from the deblocking filter 66 is output to the adaptive offset filter 81.
  • step S59 the adaptive offset filter 81 performs adaptive offset filter processing.
  • the filter processing is applied to the image after filtering by the deblocking filter 66, using the quad-tree structure in which the type of the offset filter is determined for each divided region and the offset value for each divided region.
  • the filtered image is supplied to the adaptive loop filter 82.
  • step S60 the adaptive loop filter 82 performs an adaptive loop filter process on the image after filtering by the adaptive offset filter 81.
  • the adaptive loop filter 82 has a buffer for storing filter coefficients.
  • the filter coefficient received from the lossless decoding unit 62 is stored in the buffer. At this time, if there is a free area, the filter coefficient is stored in the area with the smallest index among the free areas. When there is no free area, the filter coefficient is stored in the storage position indicated by the storage position information received simultaneously with the filter coefficient.
  • the filter coefficient corresponding to the buffer index from the lossless decoding unit 62 is read from the buffer for the image from the adaptive offset filter 81 in units of LCUs, the filter processing is performed, and the filter processing result is supplied to the screen rearrangement buffer 67 and the frame memory 69.
  • step S61 the frame memory 69 stores the filtered image.
  • step S62 the screen rearrangement buffer 67 rearranges the images after the adaptive loop filter 82. That is, the order of frames rearranged for encoding by the screen rearrangement buffer 22 of the image encoding device 11 is rearranged to the original display order.
  • step S63 the D / A conversion unit 68 D / A converts the image from the screen rearrangement buffer 67. This image is output to a display (not shown), and the image is displayed.
  • step S63 ends, the decoding process ends.
  • Each grid of the screen 111 represents an ALF processing unit (for example, LCU unit), and the right end of the screen 111 represents the end of each ALF processing unit line.
  • the alphanumeric characters shown in each grid represent the buffer index of the ALF coefficient buffer 112.
  • N filter coefficients are sent together at the top of the screen 111.
  • these filter coefficients are decoded and stored in the ALF coefficient buffer 112, and the filter coefficients corresponding to the buffer index sent for each ALF processing unit are read out and used for ALF processing.
  • the optimum one of the N filter coefficients stored in the ALF coefficient buffer 112 can be used on the decoding side even at the head of the screen 111.
  • In Non-Patent Document 2, a proposal has been made to send a filter coefficient or an index to the decoding side in ALF processing units (for example, LCU units).
  • In Non-Patent Document 2, however, the buffer is managed by FIFO, and it is difficult to perform decoding processing from the middle of the screen.
  • Each grid of the screen 121 represents an ALF processing unit (for example, LCU unit), and the right end of the screen 121 represents an end of each ALF processing unit line.
  • the alphanumeric characters shown in each grid represent the buffer index of the ALF coefficient buffer 122.
  • the ALF coefficient buffer 122-1 represents the ALF coefficient buffer 122 in the initial state at the top of the screen 121.
  • filter coefficients are stored at buffer index positions 1 and 2, and other positions are empty.
  • the ALF coefficient buffer 122-2 represents the ALF coefficient buffer 122 in a state after transmission of filter coefficients (transmission 1 in the figure) at the uppermost ALF processing unit line end of the screen 121 is completed.
  • the ALF coefficient buffer 122-2 stores the filter coefficients at the positions of the buffer indexes 1 to 3, and the other positions are empty.
  • the ALF coefficient buffer 122-3 represents the ALF coefficient buffer 122 in a state after transmission of filter coefficients (transmission 2 in the figure) at the end of the second ALF processing unit line from the top of the screen 121 is completed.
  • filter coefficients are stored at the positions of buffer indexes 1 to 5, and other positions are empty.
  • the filter coefficients of the buffer indexes 1 and 2 are sent at the head of the screen 121 and stored in the ALF coefficient buffer 122-1 on the decoding side.
  • the filter coefficients of the buffer indexes 1 and 2 are, for example, default coefficients.
  • the filter coefficient of the buffer index 3, obtained from the top ALF processing unit line, is sent at the end of the top ALF processing unit line of the screen 121, at the timing of transmission 1 during encoding. Then, the filter coefficient sent at the timing of transmission 1 is stored in the ALF coefficient buffer 122-2 on the decoding side.
  • at the timing of transmission 2 during encoding, that is, at the end of the second ALF processing unit line from the top of the screen 121, the filter coefficients of the buffer indexes 4 and 5, obtained from the second ALF processing unit line from the top, are sent. Then, the filter coefficients sent at the timing of transmission 2 are stored in the ALF coefficient buffer 122-3 on the decoding side.
  • in each ALF processing unit of the top ALF processing unit line of the screen 121, the optimum one of the filter coefficients of the buffer indexes 1 and 2 stored in the ALF coefficient buffer 122-1 is used.
  • in each ALF processing unit of the second ALF processing unit line from the top of the screen 121, the optimum one of the filter coefficients of the buffer indexes 1 to 3 stored in the ALF coefficient buffer 122-2 is used.
  • in each ALF processing unit of the third ALF processing unit line from the top of the screen 121, the optimum one of the filter coefficients of the buffer indexes 1 to 5 stored in the ALF coefficient buffer 122-3 is used.
  • the ALF coefficient obtained from the ALF processing unit line at the top of the screen can be applied to each ALF processing unit of the second ALF processing unit line. That is, a sub-optimal coefficient can be used.
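The incremental availability described above can be illustrated with a small table. The slot counts (indexes 1-2 for the top line, 1-3 after transmission 1, 1-5 after transmission 2) follow the example of the ALF coefficient buffers 122-1 to 122-3; the helper name is hypothetical.

```python
# Sketch of which buffer indexes are usable by each ALF processing unit line
# in the example above (buffers 122-1, 122-2, 122-3). Illustrative only.

def available_indexes(line_number):
    """Return the buffer indexes usable for the given ALF processing unit
    line (1-based), following the transmission-1/transmission-2 example."""
    # highest filled index seen by each line: top of screen -> 2,
    # after transmission 1 -> 3, after transmission 2 -> 5
    highest_filled = {1: 2, 2: 3, 3: 5}
    return list(range(1, highest_filled[line_number] + 1))

print(available_indexes(2))  # [1, 2, 3]
```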
  • the default filter coefficients are sent at the top of the screen, and the filter coefficients obtained from the first and second ALF processing unit lines from the top are sent at the timing of transmissions 1 and 2, respectively.
  • alternatively, at the top of the screen, the filter coefficients obtained from the first ALF processing unit line from the top may be sent, and at the timing of transmission 1, the filter coefficients obtained from the second ALF processing unit line from the top may be sent.
  • a buffer for storing an image corresponding to an ALF processing unit line is required, but the buffer capacity can be reduced as compared with the conventional case where a buffer for one frame is required.
  • in the following description, the ALF processing unit is the LCU unit.
  • FIG. 7 is a block diagram illustrating a configuration example of an adaptive loop filter and a lossless encoding unit in the image encoding device of FIG.
  • the adaptive loop filter 42 is configured to include an image buffer 211, an ALF coefficient calculation unit 212, a coefficient writing unit 213, a referenced number counter 214, and a coefficient parameter setting unit 215.
  • the adaptive loop filter 42 is configured to include an ALF coefficient storage buffer 216, a coefficient reading unit 217, an ALF processing unit 218, an RD evaluation value calculation unit 219, and an LCU parameter setting unit 220.
  • the lossless encoding unit 26 is configured to include at least a syntax writing unit 221.
  • the image before the adaptive loop filter from the adaptive offset filter 41 and the original image from the screen rearrangement buffer 22 are input to the image buffer 211.
  • the image buffer 211 temporarily stores the pre-filter image and the original image, and supplies them to the ALF coefficient calculation unit 212 at a predetermined timing. Further, the image buffer 211 supplies the pre-filter image to the ALF processing unit 218 and supplies the original image to the RD evaluation value calculation unit 219.
  • the ALF coefficient calculation unit 212 calculates the correlation between the original image of each LCU line and the pre-filter image at each LCU line end, obtains one best ALF filter coefficient, and supplies the obtained ALF filter coefficient to the coefficient writing unit 213.
  • the coefficient writing unit 213 stores the ALF filter coefficient obtained by the ALF coefficient calculation unit 212 in the ALF coefficient storage buffer 216. At this time, if there is free space in the ALF coefficient storage buffer 216, the coefficient writing unit 213 stores the ALF filter coefficient in the free area with the smallest index and supplies the ALF filter coefficient to the coefficient parameter setting unit 215.
  • when there is no free space in the ALF coefficient storage buffer 216, the coefficient writing unit 213 overwrites an ALF filter coefficient in the ALF coefficient storage buffer 216. More specifically, when there is no free space in the ALF coefficient storage buffer 216, the coefficient writing unit 213 overwrites the ALF filter coefficient in the index area where the counter value of the referenced number counter 214 is the smallest.
  • the coefficient writing unit 213 supplies the index of the overwritten area (storage position information) and the ALF filter coefficient to the coefficient parameter setting unit 215, and resets the counter value of the index of the overwritten area in the referenced number counter 214.
  • the referenced number counter 214 is a counter that counts the number of times the filter coefficient (index in which it is stored) stored in the ALF coefficient storage buffer 216 is referenced.
  • the referenced number counter 214 has a storage capacity that can count, for each of the N indexes, up to the number of LCUs in the screen.
  • that is, the counter of an index whose ALF filter coefficient is used in many LCUs indicates a large number, and the counter of an index whose ALF filter coefficient is not frequently used indicates a small number.
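The encoder-side write rule with the referenced number counter can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the class name is hypothetical, indexes are 0-based, and tie-breaking on equal counter values simply takes the smallest index.

```python
# Sketch of the encoder-side ALF coefficient write logic: free slots are
# filled smallest-index first; with no free slot, the least-referenced slot
# is overwritten and its referenced-number counter is reset. Reads increment
# the counter, as the coefficient reading unit 217 does.

class EncoderAlfBuffer:
    def __init__(self, n_slots):
        self.slots = [None] * n_slots
        self.ref_count = [0] * n_slots  # referenced number counter

    def write(self, coeffs):
        """Store coefficients; return (index, overwritten_flag)."""
        for i, slot in enumerate(self.slots):
            if slot is None:            # free area: smallest index first
                self.slots[i] = coeffs
                return i, False
        # no free area: overwrite where the counter value is smallest
        # (min() returns the smallest index on ties)
        i = min(range(len(self.slots)), key=lambda k: self.ref_count[k])
        self.slots[i] = coeffs
        self.ref_count[i] = 0           # reset counter of overwritten area
        return i, True

    def read(self, index):
        """Read coefficients and count the reference."""
        self.ref_count[index] += 1
        return self.slots[index]
```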
  • the coefficient parameter setting unit 215 supplies the syntax writing unit 221 with the ALF filter coefficient for each LCU line from the coefficient writing unit 213 and the buffer storage position information, that is, the index of the overwritten area. When no overwrite occurred, only the ALF filter coefficient is supplied.
  • the ALF coefficient storage buffer 216 is configured to store N patterns of ALF filter coefficients (sets) per screen (picture).
  • the buffer size is (ALF tap length ⁇ coefficient bit precision ⁇ N) bits. However, when the coefficient is compressed by variable length coding, it becomes smaller than this.
  • the ALF filter coefficient stored in the ALF coefficient storage buffer 216 is reset for each screen.
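The buffer size formula above can be checked with illustrative numbers; the tap length, coefficient bit precision, and N below are assumptions for the sake of the example, not values from the disclosure (and, as noted, variable length coding would make the actual size smaller).

```python
# Worked example of the buffer size formula above:
#   size_bits = ALF tap length x coefficient bit precision x N
# The concrete values are illustrative assumptions only.

def alf_buffer_size_bits(tap_length, bit_precision, n_patterns):
    return tap_length * bit_precision * n_patterns

# e.g. 10 taps, 8-bit coefficients, N = 16 stored coefficient sets
size = alf_buffer_size_bits(10, 8, 16)
print(size)        # 1280 bits
print(size // 8)   # 160 bytes
```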
  • the coefficient reading unit 217 reads out the filter coefficient corresponding to the buffer index requested from the ALF processing unit 218 from the ALF coefficient storage buffer 216 and supplies it to the ALF processing unit 218. Further, after the RD evaluation, the coefficient reading unit 217 reads out the filter coefficient corresponding to the buffer index from the RD evaluation value calculation unit 219 from the ALF coefficient storage buffer 216 and supplies it to the ALF processing unit 218. At this time, the coefficient reading unit 217 increments the counter corresponding to the read buffer index by one in the referenced number counter 214.
  • the ALF processing unit 218 reads the filter coefficient stored in the ALF coefficient storage buffer 216 by supplying a buffer index to the coefficient reading unit 217.
  • the ALF processing unit 218 performs filter processing for each LCU on the pre-filter image from the image buffer 211 using the read filter coefficients, and supplies the post-filter pixel values and the buffer index at that time to the RD evaluation value calculation unit 219.
  • the ALF processing unit 218 performs filter processing on the pre-filter image from the image buffer 211 using the filter coefficient supplied from the coefficient reading unit 217 for each LCU, and supplies the post-filter pixel values to the frame memory 32.
  • the RD evaluation value calculation unit 219 uses the original image from the image buffer 211 and the filtered image from the ALF processing unit 218 to determine, for each LCU, whether to perform ALF processing with each filter coefficient stored in the ALF coefficient storage buffer 216, by calculating an evaluation value by RD calculation.
  • when it is determined to perform the filter processing, the RD evaluation value calculation unit 219 supplies that fact and the buffer index corresponding to the ALF filter coefficient selected as optimal among the filter coefficients to the LCU parameter setting unit 220 and the coefficient reading unit 217.
  • when it is determined not to perform the filter processing, the RD evaluation value calculation unit 219 supplies only that fact to the LCU parameter setting unit 220 and the coefficient reading unit 217.
  • the LCU parameter setting unit 220 refers to the information supplied from the RD evaluation value calculation unit 219, sets an on/off flag indicating whether or not to perform ALF processing and a buffer index, and supplies the set on/off flag and buffer index to the syntax writing unit 221.
  • the syntax writing unit 221 adds each parameter to the header of the encoded stream. For example, the syntax writing unit 221 adds the index (buffer storage position) and filter coefficient from the coefficient parameter setting unit 215 and the flag and index from the LCU parameter setting unit 220 to the header of the encoded stream.
  • Next, a method for calculating an ALF coefficient by the ALF coefficient calculation unit 212 will be described. The ALF coefficient calculation method will be described separately for the first LCU line and for the second and subsequent LCU lines.
  • the ALF coefficient calculation unit 212 calculates the correlation between the original image from the image buffer 211 and the pre-filter image in the first LCU line, and obtains one best ALF filter coefficient (that is, the coefficient that minimizes the error from the original image).
  • This ALF coefficient calculation process is performed after the LCU prediction process, the residual encoding process, and the deblocking filter at the right end of each LCU line.
  • the ALF coefficient calculation unit 212 supplies the obtained filter coefficient to the coefficient writing unit 213.
  • the coefficient writing unit 213 stores the filter coefficient in the free area with the smallest index among the free areas of the ALF coefficient storage buffer 216 and supplies the filter coefficient to the coefficient parameter setting unit 215 so that it is sent to the decoding side at the timing of the top of the screen.
  • the RD evaluation value calculation unit 219 determines whether or not the ALF processing is performed using the obtained filter coefficient for each LCU in the first LCU line based on the evaluation value obtained by the RD calculation.
  • R1 (total bit amount required for transmission: the bit amount of the on/off flag indicating that ALF is performed and of the filter coefficient index)
  • D1 (SAD (sum of absolute differences) between the filtered image and the original image)
  • R0 (bit amount of the on/off flag indicating that ALF is not performed)
  • D0 (SAD (sum of absolute differences) between the pre-filter image and the original image)
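The on/off decision weighs the rate terms R1/R0 against the distortion terms D1/D0 above. A common way to combine them in video coding is the Lagrangian cost J = D + λ·R; the disclosure does not spell out formula (1), so the cost form and the λ value below are assumptions for illustration only.

```python
# Hedged sketch of the per-LCU ALF on/off decision using a Lagrangian RD
# cost J = D + lambda * R. The cost form and lambda are assumptions; the
# disclosure only states that an evaluation value is obtained by RD
# calculation from R1/D1 (ALF on) and R0/D0 (ALF off).

def rd_cost(distortion_sad, rate_bits, lam):
    return distortion_sad + lam * rate_bits

def choose_alf(d1, r1, d0, r0, lam=10.0):
    """Return True when applying ALF gives the lower (or equal) RD cost."""
    return rd_cost(d1, r1, lam) <= rd_cost(d0, r0, lam)

# Example: filtering reduces SAD from 500 to 300 at a cost of 12 extra bits
print(choose_alf(d1=300, r1=13, d0=500, r0=1))  # True: 430 <= 510
```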
  • the ALF coefficient calculation unit 212 calculates the correlation between the original image from the image buffer 211 and the pre-filter image at the right end of the LCU line for the second and subsequent LCU lines, and obtains one best ALF filter coefficient.
  • the ALF coefficient calculation unit 212 supplies the obtained filter coefficient to the coefficient writing unit 213. Similar to the case of the first LCU line, the coefficient writing unit 213 stores the filter coefficient in the empty area of the smallest index among the empty areas of the ALF coefficient storage buffer 216. At this time, the coefficient writing unit 213 also supplies the filter coefficient to the coefficient parameter setting unit 215 so as to be transmitted to the decoding side at the head timing of the corresponding LCU line.
  • the ALF coefficient calculation unit 212 supplies the obtained filter coefficient to the coefficient writing unit 213.
  • when there is no free space, the coefficient writing unit 213 overwrites the ALF coefficient storage buffer 216 with the filter coefficient.
  • the coefficient writing unit 213 overwrites the ALF filter coefficient in the ALF coefficient storage buffer 216 in the index area where the counter value of the referenced number counter 214 is the smallest.
  • if a plurality of areas have the same smallest counter value, the area with the smallest index number is selected.
  • the counter value of the index of the overwritten area in the referenced number counter 214 is reset.
  • the coefficient writing unit 213 supplies the filter coefficient together with the overwritten index of the ALF coefficient storage buffer 216 to the coefficient parameter setting unit 215 and transmits it to the decoding side at the head timing of the corresponding LCU line.
  • the RD evaluation value calculation unit 219 determines whether to perform the ALF processing using the obtained filter coefficient for each LCU in the second and subsequent LCU lines, based on the evaluation value by RD calculation using formula (1).
  • the image before the adaptive loop filter from the adaptive offset filter 41 and the original image from the screen rearrangement buffer 22 are input to the image buffer 211.
  • the image buffer 211 temporarily stores the pre-filter image and the original image, and supplies them to the ALF coefficient calculation unit 212 at a predetermined timing.
  • the coefficient writing unit 213 initializes (resets) the ALF coefficient storage buffer 216 in step S211. At this time, the referenced number counter 214 is also reset.
  • step S212 the ALF coefficient calculation unit 212 calculates the correlation between the original image of the corresponding LCU line and the pre-filter image at each LCU line end, and obtains one best ALF filter coefficient.
  • the ALF coefficient calculation unit 212 supplies the obtained ALF filter coefficient to the coefficient writing unit 213.
  • step S213 the coefficient writing unit 213 determines whether or not the ALF coefficient storage buffer 216 has a free space. Since the ALF coefficient storage buffer 216 is reset for each screen, in the case of the first LCU line, it is determined in step S213 that the ALF coefficient storage buffer 216 has a free space.
  • the coefficient writing unit 213 stores the ALF filter coefficient in the free space of the minimum index in the ALF coefficient storage buffer 216.
  • the coefficient writing unit 213 supplies the ALF filter coefficient to the coefficient parameter setting unit 215 so that it is encoded and transmitted to the decoding side.
  • the coefficient parameter setting unit 215 supplies the ALF filter coefficient for each LCU line from the coefficient writing unit 213 to the syntax writing unit 221 as an adaptive loop filter parameter.
  • the syntax writing unit 221 writes the adaptive loop filter parameters from the coefficient parameter setting unit 215 in the header portion of the encoded stream in units of LCU lines. These parameters are transmitted at the head of the LCU line, for example.
  • if it is determined in step S213 that the ALF coefficient storage buffer 216 has no free space, the coefficient writing unit 213 overwrites the ALF filter coefficient in the area of the ALF coefficient storage buffer 216 where the counter value of the referenced number counter 214 is the smallest.
  • step S215 the coefficient writing unit 213 supplies the storage position information, which is the index of the overwritten area, and the ALF filter coefficient to the coefficient parameter setting unit 215 so that the index of the overwritten area and the ALF filter coefficient are encoded and transmitted to the decoding side. At this time, the counter value of the index of the overwritten area in the referenced number counter 214 is reset.
  • the coefficient parameter setting unit 215 supplies the ALF filter coefficient for each LCU line from the coefficient writing unit 213 and the index of the overwritten area (storage position information) to the syntax writing unit 221 as adaptive loop filter parameters.
  • the syntax writing unit 221 writes the adaptive loop filter parameters from the coefficient parameter setting unit 215 in the header portion of the encoded stream in units of LCU lines.
  • step S216 the RD evaluation value calculation unit 219 calculates, for each LCU, an evaluation value based on RD calculation as to whether or not to perform ALF processing with each filter coefficient stored in the ALF coefficient storage buffer 216.
  • the ALF processing unit 218 reads the filter coefficients stored in the ALF coefficient storage buffer 216 by supplying the buffer index to the coefficient reading unit 217.
  • the ALF processing unit 218 performs filter processing for each LCU on the pre-filter image from the image buffer 211 using the read filter coefficient, and sets the post-filter pixel value and the buffer index at that time as the RD evaluation value. It supplies to the calculation part 219.
  • the RD evaluation value calculation unit 219 uses each of the filter coefficients stored in the ALF coefficient storage buffer 216 for each LCU using the original image from the image buffer 211 and the filtered image from the ALF processing unit 218. Calculate the evaluation value by RD calculation. For the calculation of the evaluation value, the above-described formula (1) or the like is used.
• In step S217, based on the evaluation values obtained in step S216, the RD evaluation value calculation unit 219 determines whether to perform ALF processing with any of the filter coefficients stored in the ALF coefficient storage buffer 216.
• If it is determined in step S217 that the filter processing is to be performed, the process proceeds to step S218. At this time, the RD evaluation value calculation unit 219 notifies the coefficient reading unit 217 and the LCU parameter setting unit 220 that the filter processing is to be performed, and supplies them with the buffer index corresponding to the ALF filter coefficient selected for the filter processing.
• In step S218, the LCU parameter setting unit 220 sets the buffer index corresponding to the selected ALF filter coefficient and a flag indicating that filter processing is to be performed.
  • the LCU parameter setting unit 220 supplies the set flag and buffer index to the syntax writing unit 221 as adaptive loop filter parameters.
  • syntax writing unit 221 writes the parameter of the adaptive loop filter from the LCU parameter setting unit 220 in the header portion of the encoded stream in units of LCU in step S26 of FIG.
• In step S219, the coefficient reading unit 217 reads the filter coefficient corresponding to the buffer index from the RD evaluation value calculation unit 219 out of the ALF coefficient storage buffer 216, and increments the counter for that filter coefficient in the referenced number counter 214 by one.
  • the read filter coefficient is supplied to the ALF processing unit 218.
• In step S220, the ALF processing unit 218 performs filter processing for each LCU on the pre-filter image from the image buffer 211 using the filter coefficient supplied from the coefficient reading unit 217, and supplies the post-filter pixel values to the frame memory 32.
• In step S221, the LCU parameter setting unit 220 sets a flag indicating that the filter processing is not performed.
• In this case, the buffer index is also set to 0.
  • syntax writing unit 221 writes the parameter of the adaptive loop filter from the LCU parameter setting unit 220 in the header portion of the encoded stream in units of LCU in step S26 of FIG.
• In step S222, the ALF processing unit 218 determines whether the processing target LCU is the last (rightmost) LCU in the LCU line. If it is determined in step S222 that it is not the last (rightmost) LCU in the LCU line, the process returns to step S216, and the subsequent processes are repeated.
• If it is determined in step S222 that the LCU is the last (rightmost) LCU in the LCU line, the process proceeds to step S223.
• In step S223, the ALF coefficient calculation unit 212 determines whether the processing target LCU is the last LCU on the screen. If it is determined in step S223 that it is not the last LCU on the screen, the process returns to step S212, and the subsequent processes are repeated.
• If it is determined in step S223 that the LCU is the last LCU on the screen, the adaptive loop filter process is terminated.
• In this way, the ALF filter coefficients are obtained for each LCU line and transmitted to the decoding side, and the ALF processing is performed for each LCU.
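The buffer behavior described in steps S215 and S219 (write to the smallest free index when there is free space, otherwise overwrite the index whose referenced number counter is smallest and reset that counter, and count each read as a reference) can be sketched as follows. This is a minimal sketch under stated assumptions; the class and method names are illustrative, not taken from the document.

```python
# Minimal sketch of the encoder-side ALF coefficient storage buffer (216)
# together with the referenced number counter (214). Illustrative only.

class AlfCoefficientBuffer:
    def __init__(self, n):
        self.slots = [None] * n      # up to N coefficient sets per picture
        self.ref_counts = [0] * n    # referenced number counter per index

    def reset(self):
        """Reset at the start of each picture (screen)."""
        n = len(self.slots)
        self.slots = [None] * n
        self.ref_counts = [0] * n

    def write(self, coeffs):
        """Store new coefficients; return the index they were written to."""
        for i, slot in enumerate(self.slots):
            if slot is None:                      # free space: smallest index
                self.slots[i] = coeffs
                return i
        # No free space: overwrite the least-referenced index and reset
        # its counter, as described for step S215.
        i = self.ref_counts.index(min(self.ref_counts))
        self.slots[i] = coeffs
        self.ref_counts[i] = 0
        return i

    def read(self, index):
        """Read coefficients for an LCU and count the reference (step S219)."""
        self.ref_counts[index] += 1
        return self.slots[index]
```

With this policy, frequently referenced coefficient sets survive in the buffer, while rarely used entries are the first candidates for overwriting.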
  • FIG. 9 is a block diagram illustrating a configuration example of the lossless decoding unit and the adaptive loop filter in the image decoding device of FIG.
  • the lossless decoding unit 62 is configured to include at least a syntax reading unit 251.
  • the adaptive loop filter 82 is configured to include a parameter receiving unit 261, a coefficient writing unit 262, a coefficient reading unit 263, an ALF coefficient storage buffer 264, and an ALF processing unit 265.
  • the syntax reading unit 251 reads the syntax from the header part of the encoded stream, and supplies the parameter of the adaptive loop filter to the parameter receiving unit 261.
• The parameters of the adaptive loop filter include parameters sent for each LCU line, which is the transmission unit, and parameters sent for each ALF processing unit, for example, for each LCU.
  • the parameters for each LCU line are, for example, ALF filter coefficients and storage position information (index) indicating a buffer storage position when there is no free space.
• The buffer storage position indicated by this information is the index area whose counter value in the referenced number counter 214 was the smallest before the ALF coefficient was written.
  • the parameters for each LCU are, for example, a flag indicating on / off of ALF processing and a buffer index indicating the position of the buffer in which the ALF filter coefficient to be used is stored.
  • the parameter receiving unit 261 receives the parameters of the adaptive loop filter supplied from the syntax reading unit 251. When the storage position information indicating the ALF filter coefficient and the buffer storage position is received, the parameter receiving unit 261 supplies information indicating the ALF filter coefficient and the buffer storage position to the coefficient writing unit 262. In addition, when the flag indicating the on / off of the ALF process and the buffer index are received, the parameter receiving unit 261 supplies the flag indicating the on / off and the buffer index to the coefficient reading unit 263.
• The coefficient writing unit 262 stores the ALF filter coefficient in the empty area with the smallest index when there is free space in the ALF coefficient storage buffer 264.
• When there is no free space in the ALF coefficient storage buffer 264, the coefficient writing unit 262 overwrites the ALF coefficient storage buffer 264. More specifically, when there is no free space, the coefficient writing unit 262 stores the ALF filter coefficient at the storage position in the ALF coefficient storage buffer 264 indicated by the storage position information added to the ALF filter coefficient from the parameter receiving unit 261.
• When no storage position information is added and there is no free space, the ALF filter coefficient may be discarded.
  • the coefficient reading unit 263 reads the ALF filter coefficient corresponding to the buffer index from the parameter receiving unit 261 when the on / off flag from the parameter receiving unit 261 indicates on (true). The coefficient reading unit 263 supplies the read ALF filter coefficient to the ALF processing unit 265.
  • the coefficient reading unit 263 controls the ALF processing unit 265 so that the ALF processing is not performed when the on / off flag from the parameter receiving unit 261 indicates off (false).
  • the ALF coefficient storage buffer 264 has basically the same configuration as the ALF coefficient storage buffer 216 in FIG. 7, and is configured to store N patterns of ALF filter coefficients (sets) per screen (picture).
• The buffer size is (ALF tap length × coefficient bit precision × N) bits. However, when the coefficients are compressed by variable-length coding, the size becomes smaller than this.
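The buffer size stated above can be written out directly; the numeric values below are example values only, not parameters taken from the document.

```python
# Illustrative calculation of the (uncompressed) ALF coefficient buffer size:
# (ALF tap length x coefficient bit precision x N) bits.

def alf_buffer_size_bits(tap_length, coeff_bit_precision, n_patterns):
    return tap_length * coeff_bit_precision * n_patterns

# e.g. 10 taps, 8-bit coefficients, N = 16 stored patterns
print(alf_buffer_size_bits(10, 8, 16))  # 1280 bits
```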
  • the ALF filter coefficient stored in the ALF coefficient storage buffer 264 is reset for each screen.
  • the ALF processing unit 265 inputs the pre-ALF filter pixel value from the adaptive offset filter 81 and performs ALF processing using the ALF filter coefficient supplied from the coefficient reading unit 263.
  • the ALF processing unit 265 outputs the filtered pixel value to the screen rearrangement buffer 67 and the frame memory 69 in the subsequent stage.
  • the coefficient writing unit 262 initializes (resets) the ALF coefficient storage buffer 264 in step S251.
• In step S52 of FIG. 4, when the encoded stream is decoded, the syntax reading unit 251 reads the syntax from the header portion of the encoded stream and supplies the parameters of the adaptive loop filter to the parameter receiving unit 261.
• When the parameter receiving unit 261 receives the parameters of the adaptive loop filter supplied from the syntax reading unit 251, it determines in step S252 whether an ALF filter coefficient has been received.
  • the adaptive loop filter parameters include a parameter sent for each LCU line as a transmission unit and a parameter sent as an ALF processing unit, for example, an LCU unit.
  • the parameters for each LCU line are, for example, ALF filter coefficients and storage position information indicating a buffer storage position when there is no free space.
  • the parameters in units of LCUs are, for example, a flag indicating on / off of ALF processing and a buffer index indicating the position of the buffer storing the ALF filter coefficient to be used.
• When the parameter receiving unit 261 receives a parameter sent for each LCU line, it is determined in step S252 that an ALF filter coefficient has been received, and the process proceeds to step S253. At this time, the parameter receiving unit 261 supplies the ALF filter coefficient and, if added, the information indicating the buffer storage position to the coefficient writing unit 262.
• In step S253, the coefficient writing unit 262 determines whether storage position information indicating the storage position is added to the ALF filter coefficient. If it is determined in step S253 that storage position information has been added, the process proceeds to step S254.
• In step S254, the coefficient writing unit 262 stores (overwrites) the ALF filter coefficient at the storage position in the ALF coefficient storage buffer 264 indicated by the added storage position information, and the process proceeds to step S258.
• If it is determined in step S253 that storage position information is not added, the process proceeds to step S255.
• In step S255, the coefficient writing unit 262 determines whether there is free space in the ALF coefficient storage buffer 264. If it is determined in step S255 that there is free space in the ALF coefficient storage buffer 264, the process proceeds to step S256.
• In step S256, the coefficient writing unit 262 stores the ALF filter coefficient in the empty area with the smallest index in the ALF coefficient storage buffer 264, and the process proceeds to step S258.
• If it is determined in step S255 that there is no free space in the ALF coefficient storage buffer 264, the process proceeds to step S257.
• In step S257, the coefficient writing unit 262 discards the ALF filter coefficient, and the process proceeds to step S258.
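The decoder-side writing rule of steps S253 to S257 can be sketched as a single function. This is an illustration under the assumption that free slots are represented by `None`; the function name and buffer representation are not from the document.

```python
# Sketch of the decoder-side coefficient writing rule (steps S253-S257).
# Slot values of None denote free space; the buffer is reset per picture.

def write_coefficient(buffer, coeffs, storage_position=None):
    """Apply received ALF filter coefficients to the decoder buffer.

    Returns the index written to, or None if the coefficients are discarded.
    """
    if storage_position is not None:          # step S254: overwrite
        buffer[storage_position] = coeffs
        return storage_position
    for i, slot in enumerate(buffer):         # step S256: smallest free index
        if slot is None:
            buffer[i] = coeffs
            return i
    return None                               # step S257: discard
```

Note that the decoder never needs a referenced number counter of its own: when the encoder overwrites an entry, the chosen position is signaled explicitly as storage position information.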
• If the parameter receiving unit 261 has not received a parameter sent for each LCU line, it is determined in step S252 that an ALF filter coefficient has not been received, and the process proceeds to step S258.
• In step S258, the parameter receiving unit 261 receives the ALF parameters for each LCU, and supplies the ALF processing on/off flag and the buffer index to the coefficient reading unit 263.
  • the coefficient reading unit 263 reads the ALF filter coefficient corresponding to the buffer index from the parameter receiving unit 261 when the on / off flag from the parameter receiving unit 261 indicates on.
  • the coefficient reading unit 263 supplies the read ALF filter coefficient to the ALF processing unit 265.
  • the coefficient reading unit 263 controls the ALF processing unit 265 so that the ALF processing is not performed when the on / off flag from the parameter receiving unit 261 indicates off.
• In step S259, the ALF processing unit 265 receives the pre-ALF-filter pixel values from the adaptive offset filter 81 and performs ALF processing using the ALF filter coefficient supplied from the coefficient reading unit 263. The ALF processing unit 265 then outputs the post-filter pixel values to the subsequent stage (the screen rearrangement buffer 67 and the frame memory 69).
• When the on/off flag indicates off, the ALF processing unit 265 receives the pre-ALF-filter pixel values from the adaptive offset filter 81 and outputs them to the subsequent stage as they are.
• In step S260, the parameter receiving unit 261 determines whether the processing target LCU is the last LCU on the screen. If it is determined that it is the last LCU on the screen, the adaptive loop filter process is terminated.
• If it is determined in step S260 that it is not the last LCU on the screen, the process returns to step S252, and the subsequent processes are repeated.
• In this way, the ALF filter coefficients are received from the encoding side for each LCU line and stored in the buffer, and the ALF processing is performed for each LCU.
  • FIG. 11 is a diagram illustrating an example of syntax transmitted at the slice level generated by the image encoding device 11. The number at the left end of each line is the line number given for explanation.
• The 9th to 16th lines are syntax corresponding to the processing related to the present technology, and among these, the 11th to 14th lines are the syntax newly added by the present technology.
• num_xLCUinPicWidthLuma on the 10th line indicates how many LCUs there are in the X direction, and is a value calculated from values already transmitted in the SPS (sequence parameter set). This value is calculated as the following equation (2).
• LCUWidth = 1 << (log2_min_coding_block_size_minus3 + 3 + log2_diff_max_min_coding_block_size)
• num_xLCUinPicWidthLuma = PicWidthInSamplesL / LCUWidth ... (2)
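Equation (2) can be written out directly; the SPS field names are the syntax elements named in the text, while the sample values below (a 1920-sample-wide picture with 64×64 LCUs) are illustrative only.

```python
# Equation (2): derive the number of LCUs in the X direction from SPS values.

def lcu_width(log2_min_coding_block_size_minus3,
              log2_diff_max_min_coding_block_size):
    # LCUWidth = 1 << (log2_min_coding_block_size_minus3 + 3
    #                  + log2_diff_max_min_coding_block_size)
    return 1 << (log2_min_coding_block_size_minus3 + 3
                 + log2_diff_max_min_coding_block_size)

def num_xlcu_in_pic_width_luma(pic_width_in_samples_l, lcu_w):
    # num_xLCUinPicWidthLuma = PicWidthInSamplesL / LCUWidth
    return pic_width_in_samples_l // lcu_w

w = lcu_width(0, 3)                                 # 1 << (0 + 3 + 3) = 64
print(w, num_xlcu_in_pic_width_luma(1920, w))       # 64 30
```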
• IsTrasferAlfDataFlag on the 11th line is a 1-bit flag that determines whether to send ALF filter coefficient information at the end of the LCU line. When it is False, ALFfiltercoef on the 12th line and Buffer_index on the 13th line are not sent. When ALFfiltercoef and Buffer_index are sent, the flag on the 11th line is set to True.
• ALFfiltercoef on the 12th line is the ALF filter coefficient information. Its size is at most (maximum number of taps × bit precision) bits; when lossless compression is performed, it becomes smaller than this.
• Buffer_index on the 13th line is an index (that is, storage position information) indicating at which position in the ALF coefficient buffer the filter coefficient information of ALFfiltercoef on the 12th line is stored.
• When the coefficient is written to an empty area, 0 is set.
• When overwriting, one of the values 1 to N that can be recorded in the ALF coefficient buffer is set.
  • FIG. 12 is a diagram illustrating an example of syntax transmitted at the CU level generated by the image encoding device 11. The number at the left end of each line is the line number given for explanation.
• AlfBufferIndexNum on the 9th line is an index number indicating which coefficient in the ALF coefficient buffer is used in the corresponding CU. If it is 0, ALF processing is not performed. If it is 1 to N, ALF processing is performed using the corresponding filter coefficient in the buffer.
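A decoder interpreting AlfBufferIndexNum per CU might look like the following sketch. The filter function and buffer layout are assumptions made for illustration; only the index semantics (0 means no ALF, 1 to N selects a stored coefficient set) come from the text.

```python
# Sketch of per-CU AlfBufferIndexNum handling: 0 -> no ALF processing,
# 1..N -> apply ALF with the corresponding stored coefficient set.

def process_cu(pixels, alf_buffer_index_num, coeff_buffer, apply_alf):
    if alf_buffer_index_num == 0:
        return pixels                                   # ALF off: pass through
    coeffs = coeff_buffer[alf_buffer_index_num - 1]     # 1-based index
    return apply_alf(pixels, coeffs)

# Toy example: "filtering" adds a constant to every pixel.
print(process_cu([1, 2], 0, [5], lambda px, c: [p + c for p in px]))  # [1, 2]
print(process_cu([1, 2], 1, [5], lambda px, c: [p + c for p in px]))  # [6, 7]
```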
• The syntaxes of FIGS. 11 and 12 are merely examples, and the present technology is not limited to these.
• As described above, the filter coefficients are sent to the decoding side sequentially, at the head of the screen and during encoding.
• Since the filter coefficients are sent not per ALF processing unit but per transmission unit, which is a unit in which a plurality of ALF processing units are gathered up to the screen edge, the number of bits required to send the filter coefficients can be reduced compared with Non-Patent Document 2.
• Furthermore, compared with Non-Patent Document 2, eliminating the FIFO processing of the coefficient buffer simplifies buffer management and makes parallel decoding possible.
• That is, in the case of Non-Patent Document 2, FIFO management is necessary, so everything up to the end of the first LCU line must be decoded, whereas in the case of the present technology it is not necessary to decode the filter coefficients for each LCU.
  • one filter coefficient is transmitted to the decoding side at the head of the frame.
  • the present invention is not limited to one filter coefficient, and a plurality of filter coefficients may be sent.
  • additional filter coefficients are obtained for each LCU line and transmitted to the decoding side.
• The timing at which the additional filter coefficients are sent is not limited to each LCU line and is arbitrary.
• The number transmitted is not limited to one at a time; a plurality may be transmitted at a time. It is also possible to transmit one at a certain timing and two at another timing.
  • the method of obtaining the ALF filter coefficient is not limited to this.
• For example, in the above description, the images of the first and second LCU lines are used for the third LCU line, but the images of the first to third LCU lines may be used.
  • the HEVC method is used as the encoding method.
  • the present disclosure is not limited to this, and other encoding schemes / decoding schemes including at least an adaptive loop filter can be applied as the in-loop filter.
• The present disclosure can be applied to an image encoding device and an image decoding device used when image information (a bitstream) compressed by orthogonal transform such as discrete cosine transform and motion compensation, as in HEVC, is received via a network medium such as satellite broadcasting, cable television, the Internet, or a mobile phone.
  • the present disclosure can be applied to an image encoding device and an image decoding device that are used when processing on a storage medium such as an optical disk, a magnetic disk, and a flash memory.
  • FIG. 13 shows an example of a multi-view image encoding method.
  • the multi-viewpoint image includes a plurality of viewpoint images, and a predetermined one viewpoint image among the plurality of viewpoints is designated as the base view image.
  • Each viewpoint image other than the base view image is treated as a non-base view image.
• In such multi-view image encoding, a buffer (for example, the ALF coefficient storage buffer 216 in FIG. 7) that stores the coefficients of the adaptive loop filter can be managed within each view (same view). Furthermore, each view (different view) can share the management of the buffer that stores the adaptive loop filter coefficients in another view.
  • the management of the buffer that stores the coefficient of the adaptive loop filter in the base view is shared by at least one non-base view.
• In each view (same view), a buffer index of the buffer that stores the coefficients of the adaptive loop filter can be set.
  • each view (different view) can share a buffer index set in another view.
  • other adaptive loop filter parameters such as a filter coefficient, an on / off flag of an adaptive loop filter for each LCU, and storage position information can also be shared.
  • the buffer index set in the base view is used in at least one non-base view.
  • FIG. 14 is a diagram illustrating a multi-view image encoding apparatus that performs the above-described multi-view image encoding.
  • the multi-view image encoding device 600 includes an encoding unit 601, an encoding unit 602, and a multiplexing unit 603.
  • the encoding unit 601 encodes the base view image and generates a base view image encoded stream.
  • the encoding unit 602 encodes the non-base view image and generates a non-base view image encoded stream.
• The multiplexing unit 603 multiplexes the base view image encoded stream generated by the encoding unit 601 and the non-base view image encoded stream generated by the encoding unit 602 to generate a multi-view image encoded stream.
  • the image encoding device 11 (FIG. 1) can be applied to the encoding unit 601 and the encoding unit 602 of the multi-view image encoding device 600.
  • the multi-view image encoding apparatus 600 sets and transmits the buffer index set by the encoding unit 601 and the buffer index set by the encoding unit 602.
  • the buffer index set by the encoding unit 601 as described above may be set and transmitted so as to be shared by the encoding unit 601 and the encoding unit 602.
  • the buffer index set by the encoding unit 602 may be set and transmitted so as to be shared by the encoding unit 601 and the encoding unit 602.
  • FIG. 15 is a diagram illustrating a multi-view image decoding apparatus that performs the above-described multi-view image decoding.
  • the multi-view image decoding device 610 includes a demultiplexing unit 611, a decoding unit 612, and a decoding unit 613.
  • the demultiplexing unit 611 demultiplexes the multi-view image encoded stream in which the base view image encoded stream and the non-base view image encoded stream are multiplexed, and the base view image encoded stream and the non-base view image The encoded stream is extracted.
  • the decoding unit 612 decodes the base view image encoded stream extracted by the demultiplexing unit 611 to obtain a base view image.
  • the decoding unit 613 decodes the non-base view image encoded stream extracted by the demultiplexing unit 611 to obtain a non-base view image.
  • the image decoding device 51 (FIG. 3) can be applied to the decoding unit 612 and the decoding unit 613 of the multi-view image decoding device 610.
• The multi-view image decoding device 610 performs processing using the buffer index set by the encoding unit 601 and decoded by the decoding unit 612, and the buffer index set by the encoding unit 602 and decoded by the decoding unit 613.
• As described above, the buffer index set by the encoding unit 601 (or the encoding unit 602) may be set and transmitted so as to be shared by the encoding unit 601 and the encoding unit 602.
• In this case, processing is performed using the buffer index set by the encoding unit 601 (or the encoding unit 602) and decoded by the decoding unit 612 (or the decoding unit 613).
• FIG. 16 shows an example of a hierarchical image encoding method.
  • the hierarchical image includes images of a plurality of layers (resolutions), and a predetermined one layer image of the plurality of resolutions is designated as the base layer image. Images in each layer other than the base layer image are treated as non-base layer images.
  • the management of the buffer for storing the coefficient of the adaptive loop filter in the base layer is shared by at least one non-base layer.
  • each layer (same layer), a buffer index of a buffer for storing the coefficient of the adaptive loop filter can be set.
• Each layer (different layers) can share a buffer index set in another layer.
  • other adaptive loop filter parameters such as a filter coefficient, an on / off flag of an adaptive loop filter for each LCU, and storage position information can also be shared.
  • the buffer index set in the base layer is used in at least one non-base layer.
  • FIG. 17 is a diagram illustrating a hierarchical image encoding apparatus that performs the above-described hierarchical image encoding.
  • the hierarchical image encoding device 620 includes an encoding unit 621, an encoding unit 622, and a multiplexing unit 623.
  • the encoding unit 621 encodes the base layer image and generates a base layer image encoded stream.
  • the encoding unit 622 encodes the non-base layer image and generates a non-base layer image encoded stream.
• The multiplexing unit 623 multiplexes the base layer image encoded stream generated by the encoding unit 621 and the non-base layer image encoded stream generated by the encoding unit 622 to generate a hierarchical image encoded stream.
  • the image encoding device 11 (FIG. 1) can be applied to the encoding unit 621 and the encoding unit 622 of the hierarchical image encoding device 620.
  • the hierarchical image encoding device 620 sets and transmits the buffer index set by the encoding unit 621 and the buffer index set by the encoding unit 622.
  • the buffer index set by the encoding unit 621 as described above may be set and transmitted so as to be shared by the encoding unit 621 and the encoding unit 622.
  • the buffer index set by the encoding unit 622 may be set and transmitted so as to be shared by the encoding unit 621 and the encoding unit 622.
  • FIG. 18 is a diagram illustrating a hierarchical image decoding apparatus that performs the hierarchical image decoding described above.
  • the hierarchical image decoding device 630 includes a demultiplexing unit 631, a decoding unit 632, and a decoding unit 633.
• The demultiplexing unit 631 demultiplexes the hierarchical image encoded stream in which the base layer image encoded stream and the non-base layer image encoded stream are multiplexed, and extracts the base layer image encoded stream and the non-base layer image encoded stream.
  • the decoding unit 632 decodes the base layer image encoded stream extracted by the demultiplexing unit 631 to obtain a base layer image.
  • the decoding unit 633 decodes the non-base layer image encoded stream extracted by the demultiplexing unit 631 to obtain a non-base layer image.
  • the image decoding device 51 (FIG. 3) can be applied to the decoding unit 632 and the decoding unit 633 of the hierarchical image decoding device 630.
• The hierarchical image decoding device 630 performs processing using the buffer index set by the encoding unit 621 and decoded by the decoding unit 632, and the buffer index set by the encoding unit 622 and decoded by the decoding unit 633.
• As described above, the buffer index set by the encoding unit 621 (or the encoding unit 622) may be set and transmitted so as to be shared by the encoding unit 621 and the encoding unit 622.
• In this case, processing is performed using the buffer index set by the encoding unit 621 (or the encoding unit 622) and decoded by the decoding unit 632 (or the decoding unit 633).
  • the series of processes described above can be executed by hardware or can be executed by software.
• When the series of processes is executed by software, a program constituting the software is installed in a computer.
• Here, the computer includes a computer incorporated in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
  • FIG. 19 is a block diagram showing an example of the hardware configuration of a computer that executes the above-described series of processing by a program.
• In the computer 800, a CPU (Central Processing Unit) 801, a ROM (Read Only Memory) 802, and a RAM (Random Access Memory) 803 are connected to one another by a bus 804.
  • an input / output interface 805 is connected to the bus 804.
  • An input unit 806, an output unit 807, a storage unit 808, a communication unit 809, and a drive 810 are connected to the input / output interface 805.
  • the input unit 806 includes a keyboard, a mouse, a microphone, and the like.
  • the output unit 807 includes a display, a speaker, and the like.
  • the storage unit 808 includes a hard disk, a nonvolatile memory, and the like.
  • the communication unit 809 includes a network interface or the like.
  • the drive 810 drives a removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
• For example, the CPU 801 loads the program stored in the storage unit 808 into the RAM 803 via the input/output interface 805 and the bus 804 and executes it, whereby the above-described series of processes is performed.
  • the program executed by the computer 800 can be provided by being recorded in, for example, a removable medium 811 as a package medium or the like.
  • the program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
  • the program can be installed in the storage unit 808 via the input / output interface 805 by attaching the removable medium 811 to the drive 810.
  • the program can be received by the communication unit 809 via a wired or wireless transmission medium and installed in the storage unit 808.
  • the program can be installed in the ROM 802 or the storage unit 808 in advance.
• The program executed by the computer may be a program whose processing is performed in time series in the order described in this specification, or a program whose processing is performed in parallel or at a necessary timing, such as when a call is made.
• The steps describing the program recorded on the recording medium include not only processing performed in time series in the described order, but also processing executed in parallel or individually, not necessarily in time series.
• In this specification, a system represents an entire apparatus composed of a plurality of devices (apparatuses).
  • the configuration described as one device (or processing unit) may be divided and configured as a plurality of devices (or processing units).
  • the configurations described above as a plurality of devices (or processing units) may be combined into a single device (or processing unit).
  • a configuration other than that described above may be added to the configuration of each device (or each processing unit).
• A part of the configuration of a certain device (or processing unit) may be included in the configuration of another device (or other processing unit). That is, the present technology is not limited to the above-described embodiments, and various modifications can be made without departing from the gist of the present technology.
• The image encoding device and the image decoding device according to the above-described embodiments can be applied to various electronic devices, such as transmitters or receivers in satellite broadcasting, wired broadcasting such as cable TV, distribution on the Internet, and distribution to terminals by cellular communication, recording devices that record images on media such as magnetic disks and flash memories, and playback devices that reproduce images from these storage media.
  • FIG. 20 shows an example of a schematic configuration of a television apparatus to which the above-described embodiment is applied.
  • the television apparatus 900 includes an antenna 901, a tuner 902, a demultiplexer 903, a decoder 904, a video signal processing unit 905, a display unit 906, an audio signal processing unit 907, a speaker 908, an external interface 909, a control unit 910, a user interface 911, And a bus 912.
  • Tuner 902 extracts a signal of a desired channel from a broadcast signal received via antenna 901, and demodulates the extracted signal. Then, the tuner 902 outputs the encoded bit stream obtained by the demodulation to the demultiplexer 903. In other words, the tuner 902 serves as a transmission unit in the television apparatus 900 that receives an encoded stream in which an image is encoded.
  • the demultiplexer 903 separates the video stream and audio stream of the viewing target program from the encoded bit stream, and outputs each separated stream to the decoder 904. Further, the demultiplexer 903 extracts auxiliary data such as EPG (Electronic Program Guide) from the encoded bit stream, and supplies the extracted data to the control unit 910. Note that the demultiplexer 903 may perform descrambling when the encoded bit stream is scrambled.
  • the decoder 904 decodes the video stream and audio stream input from the demultiplexer 903. Then, the decoder 904 outputs the video data generated by the decoding process to the video signal processing unit 905. In addition, the decoder 904 outputs audio data generated by the decoding process to the audio signal processing unit 907.
  • the video signal processing unit 905 reproduces the video data input from the decoder 904 and causes the display unit 906 to display the video.
  • the video signal processing unit 905 may cause the display unit 906 to display an application screen supplied via a network.
  • the video signal processing unit 905 may perform additional processing such as noise removal on the video data according to the setting.
  • the video signal processing unit 905 may generate a GUI (Graphical User Interface) image such as a menu, a button, or a cursor, and superimpose the generated image on the output image.
  • The display unit 906 is driven by a drive signal supplied from the video signal processing unit 905, and displays video or images on the video screen of a display device (for example, a liquid crystal display, a plasma display, or an OELD (Organic ElectroLuminescence Display)).
  • the audio signal processing unit 907 performs reproduction processing such as D / A conversion and amplification on the audio data input from the decoder 904, and outputs audio from the speaker 908.
  • the audio signal processing unit 907 may perform additional processing such as noise removal on the audio data.
  • the external interface 909 is an interface for connecting the television apparatus 900 to an external device or a network.
  • a video stream or an audio stream received via the external interface 909 may be decoded by the decoder 904. That is, the external interface 909 also has a role as a transmission unit in the television apparatus 900 that receives an encoded stream in which an image is encoded.
  • the control unit 910 includes a processor such as a CPU and memories such as a RAM and a ROM.
  • the memory stores a program executed by the CPU, program data, EPG data, data acquired via a network, and the like.
  • the program stored in the memory is read and executed by the CPU when the television apparatus 900 is activated.
  • the CPU executes the program to control the operation of the television device 900 according to an operation signal input from the user interface 911, for example.
  • the user interface 911 is connected to the control unit 910.
  • the user interface 911 includes, for example, buttons and switches for the user to operate the television device 900, a remote control signal receiving unit, and the like.
  • the user interface 911 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 910.
  • the bus 912 connects the tuner 902, the demultiplexer 903, the decoder 904, the video signal processing unit 905, the audio signal processing unit 907, the external interface 909, and the control unit 910 to each other.
  • the decoder 904 has the function of the image decoding apparatus according to the above-described embodiment. This can simplify the management of the buffer that stores the filter coefficient of the in-loop filter when the television device 900 decodes an image.
  • FIG. 21 shows an example of a schematic configuration of a mobile phone to which the above-described embodiment is applied.
  • The mobile phone 920 includes an antenna 921, a communication unit 922, an audio codec 923, a speaker 924, a microphone 925, a camera unit 926, an image processing unit 927, a demultiplexing unit 928, a recording/reproducing unit 929, a display unit 930, a control unit 931, an operation unit 932, and a bus 933.
  • the antenna 921 is connected to the communication unit 922.
  • the speaker 924 and the microphone 925 are connected to the audio codec 923.
  • the operation unit 932 is connected to the control unit 931.
  • the bus 933 connects the communication unit 922, the audio codec 923, the camera unit 926, the image processing unit 927, the demultiplexing unit 928, the recording / reproducing unit 929, the display unit 930, and the control unit 931 to each other.
  • The mobile phone 920 has various operation modes including a voice call mode, a data communication mode, a shooting mode, and a videophone mode, and performs operations such as sending and receiving audio signals, sending and receiving e-mail or image data, capturing images, and recording data.
  • the analog voice signal generated by the microphone 925 is supplied to the voice codec 923.
  • The audio codec 923 converts the analog audio signal into audio data, A/D converts the audio data, and compresses it. Then, the audio codec 923 outputs the compressed audio data to the communication unit 922.
  • the communication unit 922 encodes and modulates the audio data and generates a transmission signal. Then, the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921. In addition, the communication unit 922 amplifies a radio signal received via the antenna 921 and performs frequency conversion to acquire a received signal.
  • the communication unit 922 demodulates and decodes the received signal to generate audio data, and outputs the generated audio data to the audio codec 923.
  • the audio codec 923 decompresses the audio data and performs D / A conversion to generate an analog audio signal. Then, the audio codec 923 supplies the generated audio signal to the speaker 924 to output audio.
  • the control unit 931 generates character data constituting the e-mail in response to an operation by the user via the operation unit 932.
  • the control unit 931 causes the display unit 930 to display characters.
  • the control unit 931 generates e-mail data in response to a transmission instruction from the user via the operation unit 932, and outputs the generated e-mail data to the communication unit 922.
  • the communication unit 922 encodes and modulates email data and generates a transmission signal. Then, the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921.
  • the communication unit 922 amplifies a radio signal received via the antenna 921 and performs frequency conversion to acquire a received signal.
  • the communication unit 922 demodulates and decodes the received signal to restore the email data, and outputs the restored email data to the control unit 931.
  • the control unit 931 displays the content of the electronic mail on the display unit 930 and stores the electronic mail data in the storage medium of the recording / reproducing unit 929.
  • the recording / reproducing unit 929 has an arbitrary readable / writable storage medium.
  • The storage medium may be a built-in storage medium such as RAM or flash memory, or an externally mounted storage medium such as a hard disk, a magnetic disk, a magneto-optical disk, an optical disk, a USB (Universal Serial Bus) memory, or a memory card.
  • the camera unit 926 images a subject to generate image data, and outputs the generated image data to the image processing unit 927.
  • The image processing unit 927 encodes the image data input from the camera unit 926 and stores the encoded stream in the storage medium of the recording/reproducing unit 929.
  • The demultiplexing unit 928 multiplexes the video stream encoded by the image processing unit 927 and the audio stream input from the audio codec 923, and outputs the multiplexed stream to the communication unit 922.
  • the communication unit 922 encodes and modulates the stream and generates a transmission signal. Then, the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921.
  • the communication unit 922 amplifies a radio signal received via the antenna 921 and performs frequency conversion to acquire a received signal.
  • The transmission signal and the reception signal may include an encoded bit stream.
  • the communication unit 922 demodulates and decodes the received signal to restore the stream, and outputs the restored stream to the demultiplexing unit 928.
  • the demultiplexing unit 928 separates the video stream and the audio stream from the input stream, and outputs the video stream to the image processing unit 927 and the audio stream to the audio codec 923.
  • the image processing unit 927 decodes the video stream and generates video data.
  • the video data is supplied to the display unit 930, and a series of images is displayed on the display unit 930.
  • the audio codec 923 decompresses the audio stream and performs D / A conversion to generate an analog audio signal. Then, the audio codec 923 supplies the generated audio signal to the speaker 924 to output audio.
  • the image processing unit 927 has the functions of the image encoding device and the image decoding device according to the above-described embodiment. This can simplify the management of the buffer that stores the filter coefficient of the in-loop filter when the mobile phone 920 encodes and decodes an image.
  • FIG. 22 shows an example of a schematic configuration of a recording / reproducing apparatus to which the above-described embodiment is applied.
  • the recording / reproducing device 940 encodes audio data and video data of a received broadcast program and records the encoded data on a recording medium.
  • the recording / reproducing device 940 may encode audio data and video data acquired from another device and record them on a recording medium, for example.
  • the recording / reproducing device 940 reproduces data recorded on the recording medium on a monitor and a speaker, for example, in accordance with a user instruction. At this time, the recording / reproducing device 940 decodes the audio data and the video data.
  • The recording/reproducing apparatus 940 includes a tuner 941, an external interface 942, an encoder 943, an HDD (Hard Disk Drive) 944, a disk drive 945, a selector 946, a decoder 947, an OSD (On-Screen Display) 948, a control unit 949, and a user interface 950.
  • The tuner 941 extracts the signal of a desired channel from a broadcast signal received via an antenna (not shown), and demodulates the extracted signal. Then, the tuner 941 outputs the encoded bit stream obtained by the demodulation to the selector 946. That is, the tuner 941 has a role as a transmission unit in the recording/reproducing apparatus 940.
  • the external interface 942 is an interface for connecting the recording / reproducing apparatus 940 to an external device or a network.
  • the external interface 942 may be, for example, an IEEE1394 interface, a network interface, a USB interface, or a flash memory interface.
  • video data and audio data received via the external interface 942 are input to the encoder 943. That is, the external interface 942 serves as a transmission unit in the recording / reproducing device 940.
  • the encoder 943 encodes video data and audio data when the video data and audio data input from the external interface 942 are not encoded. Then, the encoder 943 outputs the encoded bit stream to the selector 946.
  • the HDD 944 records an encoded bit stream in which content data such as video and audio is compressed, various programs, and other data on an internal hard disk. Further, the HDD 944 reads out these data from the hard disk when reproducing video and audio.
  • the disk drive 945 performs recording and reading of data to and from the mounted recording medium.
  • The recording medium mounted on the disk drive 945 may be, for example, a DVD disk (DVD-Video, DVD-RAM, DVD-R, DVD-RW, DVD+R, DVD+RW, etc.) or a Blu-ray (registered trademark) disk.
  • the selector 946 selects an encoded bit stream input from the tuner 941 or the encoder 943 when recording video and audio, and outputs the selected encoded bit stream to the HDD 944 or the disk drive 945. In addition, the selector 946 outputs the encoded bit stream input from the HDD 944 or the disk drive 945 to the decoder 947 during video and audio reproduction.
  • The decoder 947 decodes the encoded bit stream and generates video data and audio data. Then, the decoder 947 outputs the generated video data to the OSD 948, and outputs the generated audio data to an external speaker.
  • The OSD 948 reproduces the video data input from the decoder 947 and displays the video. Further, the OSD 948 may superimpose a GUI image such as a menu, a button, or a cursor on the video to be displayed.
  • the control unit 949 includes a processor such as a CPU and memories such as a RAM and a ROM.
  • the memory stores a program executed by the CPU, program data, and the like.
  • the program stored in the memory is read and executed by the CPU when the recording / reproducing apparatus 940 is activated, for example.
  • the CPU controls the operation of the recording / reproducing apparatus 940 in accordance with an operation signal input from the user interface 950, for example, by executing the program.
  • the user interface 950 is connected to the control unit 949.
  • the user interface 950 includes, for example, buttons and switches for the user to operate the recording / reproducing device 940, a remote control signal receiving unit, and the like.
  • the user interface 950 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 949.
  • the encoder 943 has the function of the image encoding apparatus according to the above-described embodiment.
  • the decoder 947 has the function of the image decoding apparatus according to the above-described embodiment. This can simplify the management of the buffer that stores the filter coefficient of the in-loop filter when encoding and decoding an image in the recording / reproducing apparatus 940.
  • FIG. 23 illustrates an example of a schematic configuration of an imaging apparatus to which the above-described embodiment is applied.
  • the imaging device 960 images a subject to generate an image, encodes the image data, and records it on a recording medium.
  • The imaging device 960 includes an optical block 961, an imaging unit 962, a signal processing unit 963, an image processing unit 964, a display unit 965, an external interface 966, a memory 967, a media drive 968, an OSD 969, a control unit 970, a user interface 971, and a bus 972.
  • the optical block 961 is connected to the imaging unit 962.
  • the imaging unit 962 is connected to the signal processing unit 963.
  • the display unit 965 is connected to the image processing unit 964.
  • the user interface 971 is connected to the control unit 970.
  • the bus 972 connects the image processing unit 964, the external interface 966, the memory 967, the media drive 968, the OSD 969, and the control unit 970 to each other.
  • the optical block 961 includes a focus lens and a diaphragm mechanism.
  • the optical block 961 forms an optical image of the subject on the imaging surface of the imaging unit 962.
  • the imaging unit 962 includes an image sensor such as a CCD (Charge-Coupled Device) or a CMOS (Complementary Metal-Oxide Semiconductor), and converts an optical image formed on the imaging surface into an image signal as an electrical signal by photoelectric conversion. Then, the imaging unit 962 outputs the image signal to the signal processing unit 963.
  • the signal processing unit 963 performs various camera signal processing such as knee correction, gamma correction, and color correction on the image signal input from the imaging unit 962.
  • the signal processing unit 963 outputs the image data after the camera signal processing to the image processing unit 964.
  • the image processing unit 964 encodes the image data input from the signal processing unit 963 and generates encoded data. Then, the image processing unit 964 outputs the generated encoded data to the external interface 966 or the media drive 968. The image processing unit 964 also decodes encoded data input from the external interface 966 or the media drive 968 to generate image data. Then, the image processing unit 964 outputs the generated image data to the display unit 965. In addition, the image processing unit 964 may display the image by outputting the image data input from the signal processing unit 963 to the display unit 965. Further, the image processing unit 964 may superimpose display data acquired from the OSD 969 on an image output to the display unit 965.
  • the OSD 969 generates a GUI image such as a menu, a button, or a cursor, and outputs the generated image to the image processing unit 964.
  • the external interface 966 is configured as a USB input / output terminal, for example.
  • the external interface 966 connects the imaging device 960 and a printer, for example, when printing an image.
  • a drive is connected to the external interface 966 as necessary.
  • a removable medium such as a magnetic disk or an optical disk is attached to the drive, and a program read from the removable medium can be installed in the imaging device 960.
  • the external interface 966 may be configured as a network interface connected to a network such as a LAN or the Internet. That is, the external interface 966 has a role as a transmission unit in the imaging device 960.
  • the recording medium mounted on the media drive 968 may be any readable / writable removable medium such as a magnetic disk, a magneto-optical disk, an optical disk, or a semiconductor memory.
  • Alternatively, a recording medium may be fixedly mounted on the media drive 968 to configure a non-portable storage unit such as an internal hard disk drive or an SSD (Solid State Drive).
  • the control unit 970 includes a processor such as a CPU and memories such as a RAM and a ROM.
  • the memory stores a program executed by the CPU, program data, and the like.
  • the program stored in the memory is read and executed by the CPU when the imaging device 960 is activated, for example.
  • the CPU controls the operation of the imaging device 960 according to an operation signal input from the user interface 971 by executing the program.
  • the user interface 971 is connected to the control unit 970.
  • the user interface 971 includes, for example, buttons and switches for the user to operate the imaging device 960.
  • the user interface 971 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 970.
  • the image processing unit 964 has the functions of the image encoding device and the image decoding device according to the above-described embodiment. Thereby, when the image is encoded and decoded by the imaging device 960, the management of the buffer that stores the filter coefficient of the in-loop filter can be simplified.
  • the method for transmitting such information is not limited to such an example.
  • these pieces of information may be transmitted or recorded as separate data associated with the encoded bitstream without being multiplexed into the encoded bitstream.
  • Note that the term "associate" means that an image (which may be part of an image, such as a slice or a block) included in the bitstream and information corresponding to that image can be linked at the time of decoding. That is, the information may be transmitted on a transmission path different from that of the image (or bit stream).
  • Information may be recorded on a recording medium (or another recording area of the same recording medium) different from the image (or bit stream). Furthermore, the information and the image (or bit stream) may be associated with each other in an arbitrary unit such as a plurality of frames, one frame, or a part of the frame.
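As a toy illustration of such an association (the key and data structures here are hypothetical), the side information can travel on a path separate from the bit stream, as long as each entry can be linked back to its image at decoding time:

```python
# Sketch: associate side information (e.g. filter coefficients) with
# images without multiplexing it into the bit stream. The link key
# (frame_index) is an illustrative choice; any unit such as several
# frames, one frame, or part of a frame could be used instead.

bitstream = {0: b"frame0-coded", 1: b"frame1-coded"}   # transmitted on path A
side_info = {0: {"alf_coeffs": [1, 2, 1]},             # transmitted on path B
             1: {"alf_coeffs": [2, 4, 2]}}

def decode_with_side_info(frame_index):
    """Link an image and its side information at decoding time."""
    coded = bitstream[frame_index]
    info = side_info.get(frame_index)
    return coded, info
```

The same linking works if `side_info` is read from a separate recording area of the same medium rather than received over a second channel.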
  • Note that the present technology may also be configured as follows.
  • An image processing apparatus comprising: a receiving unit that receives filter coefficients of a filter used for encoding or decoding in a transmission unit that is larger than the largest coding unit; a decoding unit that decodes an encoded stream to generate an image; a coefficient writing unit that writes the filter coefficients received by the receiving unit into a buffer that stores filter coefficients; and a filter unit that applies the filter, for each largest coding unit, to the image generated by the decoding unit, using the filter coefficients written in the buffer by the coefficient writing unit.
  • The image processing device, wherein the filter unit applies a filter using coefficients that minimize the error relative to the original image.
  • An image processing method in which an image processing apparatus: receives filter coefficients of a filter used for encoding or decoding in a transmission unit that is larger than the largest coding unit; decodes an encoded stream to generate an image; writes the received filter coefficients into a buffer that stores filter coefficients; and applies the filter, for each largest coding unit, to the generated image, using the filter coefficients written in the buffer.
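The decoding-side structure above can be sketched as follows. The class and function names are assumptions, and the filter itself is reduced to a stand-in, so that only the coefficient-buffer handling described in this disclosure is shown:

```python
# Decoder-side sketch: filter coefficients arrive in transmission units
# larger than the largest coding unit (LCU), are written into a
# coefficient buffer, and are then applied per LCU. The "filter" is a
# stand-in (adds the mean coefficient to each sample); the point is the
# buffer management, not the filtering math.

class AlfCoefficientBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = {}               # index -> coefficient set

    def write(self, index, coeffs):
        """Coefficient writing unit: store a received coefficient set."""
        if len(self.slots) >= self.capacity and index not in self.slots:
            # simple policy sketch: overwrite the oldest entry when full
            self.slots.pop(next(iter(self.slots)))
        self.slots[index] = coeffs

    def read(self, index):
        return self.slots[index]

def filter_lcu(lcu_samples, coeffs):
    # stand-in for applying the adaptive loop filter to one LCU
    offset = sum(coeffs) // len(coeffs)
    return [s + offset for s in lcu_samples]

buf = AlfCoefficientBuffer(capacity=2)
buf.write(1, [1, 2, 1])               # received once per transmission unit
decoded_lcus = [[10, 10], [20, 20]]   # image produced by the decoding unit
filtered = [filter_lcu(lcu, buf.read(1)) for lcu in decoded_lcus]
```

Because one coefficient set serves many LCUs, the buffer is written far less often than the filter is applied, which is the management simplification the disclosure aims at.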
  • An image processing apparatus comprising: a coefficient writing unit that writes filter coefficients of a filter used for encoding or decoding into a buffer; a filter unit that applies the filter, for each largest coding unit, to a locally decoded image, using the filter coefficients written in the buffer by the coefficient writing unit; an encoding unit that encodes an image using the image filtered by the filter unit and generates an encoded stream; and a transmission unit that transmits the filter coefficients in a transmission unit that is larger than the largest coding unit.
  • the coefficient writing unit writes the filter coefficient in an empty area of the buffer.
  • The image processing device according to any one of (9) to (14), wherein the filter unit applies a filter using coefficients that minimize the error relative to the original image.
  • An image processing method in which an image processing apparatus: writes filter coefficients of a filter used for encoding or decoding into a buffer that stores filter coefficients; applies the filter, for each largest coding unit, to a locally decoded image, using the filter coefficients written in the buffer; encodes an image using the filtered image to generate an encoded stream; and transmits the filter coefficients in a transmission unit that is larger than the largest coding unit.
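The encoding-side method, together with writing into an empty area of the buffer, can be sketched as follows. The row-based transmission unit and all names are illustrative assumptions, not the actual encoder design:

```python
# Encoder-side sketch: coefficients are written into an empty area of
# the buffer, the locally decoded image is filtered per LCU, and the
# coefficients are transmitted in a unit larger than the LCU (here:
# once per row of LCUs). The per-LCU filter is a stand-in that adds the
# buffer index to each sample, so the data flow is easy to trace.

def write_to_empty_area(buffer, coeffs, capacity=4):
    """Coefficient writing unit: use the first unused index, if any."""
    for index in range(capacity):
        if index not in buffer:
            buffer[index] = coeffs
            return index
    raise RuntimeError("no empty area; an overwrite policy would apply")

coeff_buffer = {}
transmitted = []                       # stand-in for the transmission unit

lcu_rows = [[[8, 8], [9, 9]], [[6, 6], [7, 7]]]   # locally decoded image
for row in lcu_rows:
    idx = write_to_empty_area(coeff_buffer, [0, 1, 0])
    transmitted.append((idx, coeff_buffer[idx]))   # once per row, not per LCU
    # apply the (stand-in) filter per LCU with the buffered coefficients
    row[:] = [[s + idx for s in lcu] for lcu in row]
```

Filling empty areas first means already-stored coefficient sets stay usable for later LCUs, so the decoder can keep choosing among them without extra signaling.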

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to an image processing device and method that make it possible to simplify the management of a buffer used to store filter coefficients. Filter coefficients with indices 1 and 2, at the top of the screen, are transmitted and stored in an adaptive loop filter (ALF) coefficient buffer on the decoding side. Since free space is available in the ALF coefficient buffer, filter coefficients with index 3, determined for the top row of ALF processing units on the screen, are transmitted and stored in the ALF coefficient buffer at the end of that top row, which corresponds to the transmission timing of index 1 during encoding. Consequently, the best-performing filter coefficients among those with indices 1 to 3 stored in the ALF coefficient buffer are used for the second row of ALF processing units from the top of the screen. The present invention can be applied to image processing devices, for example.
PCT/JP2012/083969 2012-01-12 2012-12-27 Dispositif et procédé de traitement d'image WO2013105458A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012003810 2012-01-12
JP2012-003810 2012-01-12

Publications (1)

Publication Number Publication Date
WO2013105458A1 true WO2013105458A1 (fr) 2013-07-18

Family

ID=48781414

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/083969 WO2013105458A1 (fr) 2012-01-12 2012-12-27 Dispositif et procédé de traitement d'image

Country Status (1)

Country Link
WO (1) WO2013105458A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017195532A1 (fr) * 2016-05-13 2017-11-16 シャープ株式会社 Dispositif de décodage d'image et dispositif de codage d'image
CN113950833A (zh) * 2019-06-20 2022-01-18 索尼集团公司 图像处理装置和图像处理方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009060317A (ja) * 2007-08-31 2009-03-19 Ricoh Co Ltd 画像データ符号化装置、画像データ符号化方法、画像形成装置、画像形成方法、画像データ復号化装置、及び画像データ復号化方法
WO2011083713A1 (fr) * 2010-01-06 2011-07-14 ソニー株式会社 Dispositif et procédé de traitement d'images
WO2011111341A1 (fr) * 2010-03-09 2011-09-15 パナソニック株式会社 Dispositif de décodage d'image dynamique, dispositif de codage d'image dynamique, circuit de décodage d'image dynamique, et procédé de décodage d'image dynamique
WO2011129090A1 (fr) * 2010-04-13 2011-10-20 パナソニック株式会社 Procédé de retrait des distorsions d'encodage, procédé d'encodage, procédé de décodage, dispositif de retrait des distorsions d'encodage, dispositif d'encodage et dispositif de décodage


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A. FULDSETH ET AL.: "Improved ALF with low latency and reduced complexity", JCTVC-G499, JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT- VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/ WG11, 21 November 2011 (2011-11-21), pages 1 - 7, XP030050626 *


Similar Documents

Publication Publication Date Title
JP6465226B2 (ja) 画像処理装置および方法、記録媒体、並びに、プログラム
JP6521013B2 (ja) 画像処理装置および方法、プログラム、並びに記録媒体
US11601685B2 (en) Image processing device and method using adaptive offset filter in units of largest coding unit
WO2014050731A1 (fr) Dispositif et procédé de traitement d'image
WO2013108688A1 (fr) Dispositif de traitement d'image et procédé
WO2014156708A1 (fr) Dispositif et procede de decodage d'image
WO2013047325A1 (fr) Dispositif et procédé de traitement d'image
JP5999449B2 (ja) 画像処理装置および方法、プログラム、並びに記録媒体
WO2013105458A1 (fr) Dispositif et procédé de traitement d'image
WO2013105457A1 (fr) Dispositif et procédé de traitement d'image
WO2014097703A1 (fr) Dispositif et procédé de traitement d'image
WO2014156707A1 (fr) Dispositif et procédé de codage d'image et dispositif et procédé de décodage d'image
AU2015255161A1 (en) Image Processing Device and Method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12865067

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12865067

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP