WO2013105457A1 - Image processing device and method - Google Patents

Image processing device and method

Info

Publication number
WO2013105457A1
Authority
WO
WIPO (PCT)
Prior art keywords
unit
image
filter
filter coefficient
buffer
Prior art date
Application number
PCT/JP2012/083968
Other languages
English (en)
Japanese (ja)
Inventor
Ohji Nakagami
Hironari Sakurai
Takuya Kitamura
Yoichi Yagasaki
Original Assignee
Sony Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corporation
Publication of WO2013105457A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/423 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117 Filters, e.g. for pre-processing or post-processing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46 Embedding additional information in the video signal during the compression process
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop

Definitions

  • the present disclosure relates to an image processing apparatus and method, and more particularly, to an image processing apparatus and method that can reduce transmission cost of filter coefficients of a filter in encoding or decoding.
  • The adaptive loop filter is adopted in the current draft of HEVC.
  • For example, 16 sets of filter coefficients are transmitted per picture. These filter coefficients are transmitted prior to picture coding information such as prediction mode information, motion vector information, and DCT coefficient information.
  • In the case of the block-based mode, the filter coefficient to be applied is determined by the variance of the pixels in the region, and in the case of the region-based mode, the filter to be applied is determined by a flag.
  • In Non-Patent Document 2, a proposal has been made to send a filter coefficient or an index to the decoding side in units of the LCU, which is the maximum coding unit.
  • the filter coefficients are stored in the filter coefficient buffer in the order of transmission.
  • the filter coefficients present in the buffer are referenced by an index.
  • The filter coefficient buffer is managed as a FIFO, and coefficients transmitted earlier than the buffer size can hold are discarded.
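The FIFO buffer behavior described above can be sketched with a small Python model; the class and method names are ours, not from the document. Coefficient sets are stored in transmission order, referenced by index, and the oldest set is discarded when the buffer is full.

```python
from collections import deque

class CoeffBuffer:
    """Illustrative model of the FIFO filter-coefficient buffer:
    coefficient sets are stored in transmission order and referenced
    by index; the set transmitted longest ago is discarded when the
    buffer is full."""

    def __init__(self, size):
        self.size = size
        self.entries = deque()  # newest set at the right

    def push(self, coeffs):
        # FIFO management: discard the oldest set when full.
        if len(self.entries) == self.size:
            self.entries.popleft()
        self.entries.append(tuple(coeffs))

    def lookup(self, index):
        # Index 0 refers to the most recently stored set.
        return self.entries[-1 - index]
```

With size 2, for instance, pushing three sets leaves only the last two, so a set pushed first must be retransmitted if needed again, which is exactly the transmission cost the disclosure aims to avoid.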
  • JCTVC-F803, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 6th Meeting: Torino, IT, 14-22 July, 2011
  • A. Fuldseth (Cisco Systems), G. Bjontegaard (Cisco Systems), "Improved ALF with low latency and reduced complexity", JCTVC-G499, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 7th Meeting: Geneva, CH, 21-30 November, 2011
  • However, since the filter coefficient buffer described in Non-Patent Document 2 is managed as a FIFO, filter coefficients transmitted in the past are sometimes discarded from the buffer even when their frequency of use is high. In that case, even if a filter coefficient to be used is the same as a coefficient transmitted in the past, it must be transmitted again, which increases the transmission cost.
  • The present disclosure has been made in view of such a situation, and makes it possible to reduce the transmission cost of the filter coefficients of a filter in encoding or decoding.
  • An image processing device according to one aspect of the present disclosure includes: a decoding unit that decodes an encoded stream to generate an image; a filter unit that applies a filter to the image generated by the decoding unit, for each maximum coding unit, using a filter coefficient corresponding to an index of the filter coefficient; and a management unit that manages, using the index, a history of references to the filter coefficient by the filter unit.
  • A read unit that reads the filter coefficient from a buffer storing filter coefficients, using the index of the filter coefficient, may further be provided, and the management unit returns the filter coefficient read by the read unit to the buffer.
  • The management unit can keep a filter coefficient that is frequently referenced stored in the buffer longer than a filter coefficient that is infrequently referenced.
  • the index value of the filter coefficient is set so as to decrease as the use frequency of the filter coefficient increases.
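The frequency-based management just described can be illustrated with a small model. All names here are hypothetical; the sketch only shows the two properties stated above: frequently referenced coefficient sets receive the smallest index values, and the least referenced set is evicted when space is needed.

```python
class FreqManagedBuffer:
    """Illustrative model: keep a reference count per coefficient set,
    order entries so frequently referenced sets get the smallest index
    values, and evict the least referenced set when full."""

    def __init__(self, size):
        self.size = size
        self.entries = []  # list of [coeffs, ref_count], kept sorted

    def _sort(self):
        # Most frequently referenced first -> smallest index.
        self.entries.sort(key=lambda e: -e[1])

    def store(self, coeffs):
        if len(self.entries) == self.size:
            self.entries.pop()  # least referenced is last after sorting
        self.entries.append([tuple(coeffs), 0])
        self._sort()

    def reference(self, index):
        # Referencing a set bumps its count, which can lower its index.
        entry = self.entries[index]
        entry[1] += 1
        self._sort()
        return entry[0]

    def index_of(self, coeffs):
        key = tuple(coeffs)
        for i, (c, _) in enumerate(self.entries):
            if c == key:
                return i
        return None
```

Because smaller index values can be coded with fewer bits, ordering by use frequency makes the most common references the cheapest to signal.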
  • The management unit can reset the buffer for each range that is larger than the maximum coding unit and is closed in the horizontal direction (for example, each LCU line).
  • the management unit can store the filter coefficient in an area having the highest priority among the free areas of the buffer.
  • A receiving unit that receives the filter coefficient may further be provided, and when there is no free space in the buffer, the management unit can perform FIFO processing on the buffer using the filter coefficient received by the receiving unit.
  • a buffer for storing the filter coefficient can be further provided.
  • the filter unit can apply a filter to the image subjected to the adaptive offset process.
  • the filter unit can perform a filter using a coefficient that minimizes an error from the original image.
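The criterion "a coefficient that minimizes an error from the original image" is the least-squares (Wiener) solution. A minimal 1-D sketch of that criterion follows, assuming a simple FIR filter; a real adaptive loop filter uses a 2-D filter shape, so this only illustrates the optimization, not the codec's filter.

```python
import numpy as np

def wiener_coeffs(decoded, original, taps=3):
    """Solve the least-squares problem for FIR coefficients that,
    applied to the decoded signal, minimize the squared error
    against the original signal (the Wiener criterion)."""
    n = len(decoded) - taps + 1
    # Each row is one sliding window of the decoded signal.
    A = np.array([decoded[i:i + taps] for i in range(n)], dtype=float)
    # Align the target with the center tap of each window.
    target = np.asarray(original[taps // 2: taps // 2 + n], dtype=float)
    coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
    return coeffs
```

As a sanity check, if the decoded signal already equals the original, the solution is the identity kernel and the filter changes nothing.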
  • In an image processing method according to one aspect of the present disclosure, an image processing apparatus generates an image by decoding an encoded stream, applies a filter to the generated image for each maximum coding unit using a filter coefficient corresponding to an index of the filter coefficient, and manages, using the index, a history of references to the filter coefficient.
  • An image processing device according to another aspect of the present disclosure includes: a filter unit that, targeting an image locally decoded when an image is encoded, applies a filter for each maximum coding unit using a filter coefficient corresponding to an index of the filter coefficient; an encoding unit that encodes the image using the image filtered by the filter unit and generates an encoded stream; and a management unit that manages, using the index, a history of references to the filter coefficient.
  • A read unit that reads the filter coefficient from a buffer storing filter coefficients, using the index of the filter coefficient, may further be provided, and the management unit returns the filter coefficient read by the read unit to the buffer.
  • The management unit can keep a filter coefficient that is frequently referenced stored in the buffer longer than a filter coefficient that is infrequently referenced.
  • the index value of the filter coefficient is set so as to decrease as the use frequency of the filter coefficient increases.
  • The management unit can reset the buffer for each range that is larger than the maximum coding unit and is closed in the horizontal direction (for example, each LCU line).
  • the management unit can store the filter coefficient in an area having the highest priority among the empty areas of the buffer.
  • A filter coefficient determining unit that determines the filter coefficient may further be provided, and when there is no free space in the buffer, the management unit can perform FIFO processing on the buffer using the filter coefficient determined by the filter coefficient determining unit.
  • the filter unit can perform a filter using a coefficient that minimizes an error from the original image.
  • In an image processing method according to another aspect of the present disclosure, an image processing apparatus applies a filter, for each maximum coding unit, to an image locally decoded when an image is encoded, using a filter coefficient corresponding to an index of the filter coefficient; encodes the image using the filtered image to generate an encoded stream; and manages, using the index, a history of references to the filter coefficient.
  • In one aspect of the present disclosure, an image is generated by decoding an encoded stream, and a filter is applied to the generated image for each maximum coding unit using a filter coefficient corresponding to an index of the filter coefficient. A history of references to the filter coefficient is managed using the index.
  • In another aspect of the present disclosure, a filter is applied for each maximum coding unit, using a filter coefficient corresponding to the index of the filter coefficient, to an image locally decoded when the image is encoded. The image is then encoded using the filtered image, and an encoded stream is generated. A history of references to the filter coefficient is managed using the index.
  • the above-described image processing apparatus may be an independent apparatus, or may be an internal block constituting one image encoding apparatus or image decoding apparatus.
  • an image can be decoded.
  • the transmission cost of the filter coefficient can be reduced.
  • an image can be encoded.
  • the transmission cost of the filter coefficient can be reduced.
  • FIG. 20 is a block diagram illustrating a main configuration example of a computer. A further block diagram shows an example of a schematic configuration of a television apparatus.
  • FIG. 1 illustrates a configuration of an embodiment of an image encoding device as an image processing device to which the present disclosure is applied.
  • the image encoding device 11 shown in FIG. 1 encodes image data using a prediction process.
  • As the prediction process, for example, the HEVC (High Efficiency Video Coding) scheme or the like is used.
  • The image encoding device 11 includes an A/D (Analog/Digital) conversion unit 21, a screen rearrangement buffer 22, a calculation unit 23, an orthogonal transform unit 24, a quantization unit 25, a lossless encoding unit 26, and an accumulation buffer 27.
  • the image encoding device 11 includes an inverse quantization unit 28, an inverse orthogonal transform unit 29, a calculation unit 30, a deblocking filter 31, a frame memory 32, a selection unit 33, an intra prediction unit 34, a motion prediction / compensation unit 35, A predicted image selection unit 36 and a rate control unit 37 are included.
  • the image encoding device 11 includes an adaptive offset filter 41 and an adaptive loop filter 42 between the deblocking filter 31 and the frame memory 32.
  • the A / D converter 21 A / D converts the input image data, outputs it to the screen rearrangement buffer 22, and stores it.
  • The screen rearrangement buffer 22 rearranges the stored frame images from display order into the order of frames for encoding, according to the GOP (Group of Pictures) structure.
  • the screen rearrangement buffer 22 supplies the image with the rearranged frame order to the arithmetic unit 23.
  • the screen rearrangement buffer 22 also supplies the image in which the frame order is rearranged to the intra prediction unit 34 and the motion prediction / compensation unit 35.
  • the calculation unit 23 subtracts the predicted image supplied from the intra prediction unit 34 or the motion prediction / compensation unit 35 via the predicted image selection unit 36 from the image read from the screen rearrangement buffer 22, and the difference information Is output to the orthogonal transform unit 24.
  • the calculation unit 23 subtracts the prediction image supplied from the intra prediction unit 34 from the image read from the screen rearrangement buffer 22.
  • the calculation unit 23 subtracts the prediction image supplied from the motion prediction / compensation unit 35 from the image read from the screen rearrangement buffer 22.
  • the orthogonal transform unit 24 performs orthogonal transform such as discrete cosine transform and Karhunen-Loeve transform on the difference information supplied from the computation unit 23 and supplies the transform coefficient to the quantization unit 25.
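The discrete cosine transform mentioned here is an orthogonal transform. The short sketch below constructs the orthonormal DCT-II matrix and shows the property the encoder relies on: the inverse transform is simply the transpose, and a constant block compacts into a single DC coefficient.

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix: row k holds
    s_k * cos(pi * (2n + 1) * k / (2N)), with s_0 = sqrt(1/N)
    and s_k = sqrt(2/N) for k >= 1."""
    C = np.array([[np.cos(np.pi * (2 * n + 1) * k / (2 * N))
                   for n in range(N)] for k in range(N)])
    C[0] *= np.sqrt(1.0 / N)
    C[1:] *= np.sqrt(2.0 / N)
    return C

# A 2-D block transform is C @ block @ C.T; since C is orthogonal,
# the inverse transform is C.T @ coeffs @ C.
```

This energy compaction is what makes the subsequent quantization of transform coefficients effective.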
  • the quantization unit 25 quantizes the transform coefficient output from the orthogonal transform unit 24.
  • the quantization unit 25 supplies the quantized transform coefficient to the lossless encoding unit 26.
  • the lossless encoding unit 26 performs lossless encoding such as variable length encoding and arithmetic encoding on the quantized transform coefficient.
  • the lossless encoding unit 26 acquires parameters such as information indicating the intra prediction mode from the intra prediction unit 34, and acquires parameters such as information indicating the inter prediction mode and motion vector information from the motion prediction / compensation unit 35.
  • the lossless encoding unit 26 encodes the quantized transform coefficient, encodes each acquired parameter (syntax element), and uses it as part of the header information of the encoded data (multiplexes).
  • the lossless encoding unit 26 supplies the encoded data obtained by encoding to the accumulation buffer 27 for accumulation.
  • As the lossless encoding processing, variable-length coding or arithmetic coding is performed.
  • Examples of variable-length coding include CAVLC (Context-Adaptive Variable Length Coding).
  • Examples of arithmetic coding include CABAC (Context-Adaptive Binary Arithmetic Coding).
  • The accumulation buffer 27 temporarily holds the encoded stream (data) supplied from the lossless encoding unit 26, and outputs it at a predetermined timing, as an encoded image, to, for example, a subsequent recording device or transmission path (not shown). That is, the accumulation buffer 27 is also a transmission unit that transmits the encoded stream.
  • the transform coefficient quantized by the quantization unit 25 is also supplied to the inverse quantization unit 28.
  • the inverse quantization unit 28 inversely quantizes the quantized transform coefficient by a method corresponding to the quantization by the quantization unit 25.
  • the inverse quantization unit 28 supplies the obtained transform coefficient to the inverse orthogonal transform unit 29.
  • the inverse orthogonal transform unit 29 performs inverse orthogonal transform on the supplied transform coefficient by a method corresponding to the orthogonal transform processing by the orthogonal transform unit 24.
  • the inversely orthogonally transformed output (restored difference information) is supplied to the arithmetic unit 30.
  • The calculation unit 30 adds the predicted image, supplied from the intra prediction unit 34 or the motion prediction / compensation unit 35 via the predicted image selection unit 36, to the inverse orthogonal transform result supplied from the inverse orthogonal transform unit 29, that is, to the restored difference information, to obtain a locally decoded image (decoded image).
  • the calculation unit 30 adds the prediction image supplied from the intra prediction unit 34 to the difference information.
  • the calculation unit 30 adds the predicted image supplied from the motion prediction / compensation unit 35 to the difference information.
  • the decoded image as the addition result is supplied to the deblocking filter 31 and the frame memory 32.
  • the deblocking filter 31 removes block distortion of the decoded image by appropriately performing deblocking filter processing.
  • the deblocking filter 31 supplies the filter processing result to the adaptive offset filter 41.
  • The adaptive offset filter 41 performs an offset filter (SAO: Sample Adaptive Offset) process, which mainly removes ringing, on the image after filtering by the deblocking filter 31.
  • The adaptive offset filter 41 applies filter processing to the image after filtering by the deblocking filter 31, using a quad-tree structure in which the type of the offset filter is determined for each divided region, and an offset value for each divided region.
  • the quad-tree structure and the offset value for each divided region are calculated by the adaptive offset filter 41 and used in the image encoding device 11.
  • the calculated quad-tree structure and the offset value for each divided region are encoded by the lossless encoding unit 26 and transmitted to the image decoding device 51 of FIG.
  • the adaptive offset filter 41 supplies the filtered image to the adaptive loop filter 42.
  • the adaptive loop filter 42 performs an adaptive loop filter (ALF) process in units of LCU, which is the maximum encoding unit, as an ALF processing unit.
  • the filtered image is supplied to the frame memory 32.
  • a two-dimensional Wiener filter is used as a filter.
  • filters other than the Wiener filter may be used.
  • the adaptive loop filter 42 has a buffer for storing filter coefficients.
  • This buffer is reset for each range that is larger than the LCU and is closed in the horizontal direction, for example, every LCU line (maximum-coding-unit line).
  • In the adaptive loop filter 42, a filter coefficient is obtained for each LCU. Of the obtained filter coefficient and the filter coefficients stored in the buffer, the optimum filter coefficient is used for the filter processing.
  • The filter coefficient used for the filter processing is exchanged with the filter coefficient stored in a high-priority area of the buffer (their storage areas are swapped).
  • the filter coefficient used is stored in a high priority area in the free area in the buffer.
  • FIFO processing is performed in the buffer.
  • The buffer index of the used filter coefficient, or the used filter coefficient itself, is supplied to the lossless encoding unit 26 as an adaptive loop filter parameter, together with the per-LCU on/off flag of the adaptive loop filter, and is transmitted to the decoding side.
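The signalling choice described above (send the full coefficient set, or only a short buffer index when the set was already transmitted) can be sketched as follows. The function name and the plain list used as the buffer are illustrative only:

```python
def signal_for_lcu(lcu_coeffs, coeff_buffer):
    """If the optimal coefficient set for this LCU is already in the
    buffer, signal only its (short) index; otherwise signal the full
    set and store it for later reuse."""
    key = tuple(lcu_coeffs)
    if key in coeff_buffer:
        return ("index", coeff_buffer.index(key))
    coeff_buffer.append(key)
    return ("coeffs", key)
```

An index is far cheaper to transmit than a full coefficient set, so every reuse of a buffered set saves bits; this is the transmission-cost saving the disclosure targets.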
  • the frame memory 32 outputs the stored reference image to the intra prediction unit 34 or the motion prediction / compensation unit 35 via the selection unit 33 at a predetermined timing.
  • the frame memory 32 supplies the reference image to the intra prediction unit 34 via the selection unit 33.
  • the frame memory 32 supplies the reference image to the motion prediction / compensation unit 35 via the selection unit 33.
  • the selection unit 33 supplies the reference image to the intra prediction unit 34 when the reference image supplied from the frame memory 32 is an image to be subjected to intra coding.
  • the selection unit 33 also supplies the reference image to the motion prediction / compensation unit 35 when the reference image supplied from the frame memory 32 is an image to be inter-encoded.
  • the intra prediction unit 34 performs intra prediction (intra-screen prediction) for generating a predicted image using pixel values in the screen.
  • the intra prediction unit 34 performs intra prediction in a plurality of modes (intra prediction modes).
  • the intra prediction unit 34 generates prediction images in all intra prediction modes, evaluates each prediction image, and selects an optimal mode. When the optimal intra prediction mode is selected, the intra prediction unit 34 supplies the prediction image generated in the optimal mode to the calculation unit 23 and the calculation unit 30 via the predicted image selection unit 36.
  • the intra prediction unit 34 supplies parameters such as intra prediction mode information indicating the adopted intra prediction mode to the lossless encoding unit 26 as appropriate.
  • the motion prediction / compensation unit 35 uses the input image supplied from the screen rearrangement buffer 22 and the reference image supplied from the frame memory 32 via the selection unit 33 for the image to be inter-coded, Perform motion prediction. In addition, the motion prediction / compensation unit 35 performs a motion compensation process according to the motion vector detected by the motion prediction, and generates a predicted image (inter predicted image information).
  • the motion prediction / compensation unit 35 performs inter prediction processing in all candidate inter prediction modes, and generates a prediction image.
  • the motion prediction / compensation unit 35 supplies the generated predicted image to the calculation unit 23 and the calculation unit 30 via the predicted image selection unit 36.
  • the motion prediction / compensation unit 35 supplies parameters such as inter prediction mode information indicating the employed inter prediction mode and motion vector information indicating the calculated motion vector to the lossless encoding unit 26.
  • The predicted image selection unit 36 supplies the output of the intra prediction unit 34 to the calculation unit 23 and the calculation unit 30 in the case of an image to be intra-coded, and supplies the output of the motion prediction / compensation unit 35 to the calculation unit 23 and the calculation unit 30 in the case of an image to be inter-coded.
  • the rate control unit 37 controls the quantization operation rate of the quantization unit 25 based on the compressed image stored in the storage buffer 27 so that overflow or underflow does not occur.
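The rate-control behavior described here can be sketched as a simple buffer-fullness rule; the thresholds and the one-step QP adjustment below are illustrative assumptions, not taken from the document:

```python
def adjust_qp(qp, buffer_fill, capacity, low=0.3, high=0.7):
    """Raise the quantization parameter (coarser quantization, fewer
    bits) when the accumulation buffer risks overflowing; lower it
    (finer quantization, more bits) when it risks underflowing."""
    fullness = buffer_fill / capacity
    if fullness > high:
        return qp + 1
    if fullness < low:
        return max(0, qp - 1)
    return qp
```

The key feedback loop is the same as in the text: the quantization rate is steered by how full the accumulation buffer is, so that neither overflow nor underflow occurs.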
  • In step S11, the A/D conversion unit 21 A/D-converts the input image.
  • In step S12, the screen rearrangement buffer 22 stores the A/D-converted images and rearranges them from the display order of the pictures into the encoding order.
  • A decoded image to be referred to is read from the frame memory 32 and supplied to the intra prediction unit 34 via the selection unit 33.
  • the intra prediction unit 34 performs intra prediction on the pixels of the block to be processed in all candidate intra prediction modes. Note that pixels that have not been filtered by the deblocking filter 31 are used as decoded pixels that are referred to.
  • intra prediction is performed in all candidate intra prediction modes, and cost function values are calculated for all candidate intra prediction modes. Then, based on the calculated cost function value, the optimal intra prediction mode is selected, and the predicted image generated by the intra prediction of the optimal intra prediction mode and its cost function value are supplied to the predicted image selection unit 36.
  • When the processing target image supplied from the screen rearrangement buffer 22 is an inter-processed image, the image to be referred to is read from the frame memory 32 and supplied to the motion prediction / compensation unit 35 via the selection unit 33.
  • the motion prediction / compensation unit 35 performs motion prediction / compensation processing.
  • In this processing, motion prediction is performed in all candidate inter prediction modes, cost function values are calculated for all candidate inter prediction modes, and the optimal inter prediction mode is determined based on the calculated cost function values. Then, the predicted image generated in the optimal inter prediction mode and its cost function value are supplied to the predicted image selection unit 36.
  • In step S15, the predicted image selection unit 36 determines one of the optimal intra prediction mode and the optimal inter prediction mode as the optimal prediction mode, based on the cost function values output from the intra prediction unit 34 and the motion prediction / compensation unit 35. The predicted image selection unit 36 then selects the predicted image of the determined optimal prediction mode and supplies it to the calculation units 23 and 30. This predicted image is used for the calculations in steps S16 and S21 described later.
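The mode decision in step S15 amounts to picking the candidate with the smallest cost-function value. A minimal sketch, where the (cost, predicted_image) pair representation is our assumption:

```python
def select_prediction(intra_candidates, inter_candidates):
    """Pick, among the best intra and the best inter candidate,
    the prediction with the smaller cost-function value.
    Each candidate is a (cost, predicted_image) pair."""
    best_intra = min(intra_candidates, key=lambda c: c[0])
    best_inter = min(inter_candidates, key=lambda c: c[0])
    return best_intra if best_intra[0] <= best_inter[0] else best_inter
```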
  • the prediction image selection information is supplied to the intra prediction unit 34 or the motion prediction / compensation unit 35.
  • the intra prediction unit 34 supplies information indicating the optimal intra prediction mode (that is, a parameter related to intra prediction) to the lossless encoding unit 26.
  • When a predicted image in the optimal inter prediction mode is selected, the motion prediction / compensation unit 35 supplies information indicating the optimal inter prediction mode and information corresponding to the optimal inter prediction mode (that is, parameters relating to motion prediction) to the lossless encoding unit 26.
  • Information according to the optimal inter prediction mode includes motion vector information and reference frame information.
  • In step S16, the calculation unit 23 calculates the difference between the image rearranged in step S12 and the predicted image selected in step S15.
  • the predicted image is supplied from the motion prediction / compensation unit 35 in the case of inter prediction, and from the intra prediction unit 34 in the case of intra prediction, to the calculation unit 23 via the predicted image selection unit 36, respectively.
  • Difference data has a smaller data amount than the original image data. Therefore, the data amount can be compressed as compared with the case where the image is encoded as it is.
  • In step S17, the orthogonal transform unit 24 orthogonally transforms the difference information supplied from the calculation unit 23. Specifically, orthogonal transformation such as discrete cosine transformation or Karhunen-Loeve transformation is performed, and transform coefficients are output.
  • In step S18, the quantization unit 25 quantizes the transform coefficients. At this time, the rate is controlled as described later in the process of step S28.
  • In step S19, the inverse quantization unit 28 inversely quantizes the transform coefficients quantized by the quantization unit 25, with characteristics corresponding to the characteristics of the quantization unit 25.
  • In step S20, the inverse orthogonal transform unit 29 performs inverse orthogonal transform on the transform coefficients inversely quantized by the inverse quantization unit 28, with characteristics corresponding to the characteristics of the orthogonal transform unit 24.
  • In step S21, the calculation unit 30 adds the predicted image input via the predicted image selection unit 36 to the locally decoded difference information, and generates a locally decoded image (an image corresponding to the input to the calculation unit 23).
  • In step S22, the deblocking filter 31 performs deblocking filter processing on the image output from the calculation unit 30, thereby removing block distortion.
  • the filtered image from the deblocking filter 31 is output to the adaptive offset filter 41.
  • In step S23, the adaptive offset filter 41 performs adaptive offset filter processing.
  • In this processing, filtering using the quad-tree structure, in which the type of the offset filter is determined for each divided region, and the offset value for each divided region is applied to the image after filtering by the deblocking filter 31.
  • the filtered image is supplied to the adaptive loop filter 42.
  • In step S24, the adaptive loop filter 42 performs adaptive loop filter processing, in units of LCUs, on the image after filtering by the adaptive offset filter 41.
  • the filtered image is supplied to the frame memory 32.
  • In this filter processing, a filter coefficient is obtained for each LCU, and of the obtained filter coefficient and the filter coefficients stored in the buffer, the optimum filter coefficient is used for the filtering.
  • The filter coefficient used for the filter processing is exchanged with the filter coefficient stored in a high-priority area of the buffer (their storage areas are swapped).
  • the filter coefficient used is stored in a high priority area in the free area in the buffer.
  • FIFO processing is performed in the buffer.
  • The buffer index of the used filter coefficient, or the used filter coefficient itself, is supplied to the lossless encoding unit 26 as an adaptive loop filter parameter, together with the per-LCU on/off flag of the adaptive loop filter.
  • Note that the term filter coefficient here simply refers to a set of filter coefficients.
  • In step S25, the frame memory 32 stores the filtered image.
  • images that are not filtered by the deblocking filter 31, the adaptive offset filter 41, and the adaptive loop filter 42 are also supplied from the computing unit 30 and stored.
  • The transform coefficients quantized in step S18 described above are also supplied to the lossless encoding unit 26.
  • In step S26, the lossless encoding unit 26 encodes the quantized transform coefficients output from the quantization unit 25 and each supplied parameter. That is, the difference image is subjected to lossless encoding, such as variable-length coding or arithmetic coding, and is compressed.
  • In step S27, the accumulation buffer 27 accumulates the encoded difference image (that is, the encoded stream) as a compressed image.
  • the compressed image accumulated in the accumulation buffer 27 is appropriately read out and transmitted to the decoding side via the transmission path.
  • In step S28, the rate control unit 37 controls the rate of the quantization operation of the quantization unit 25, based on the compressed image accumulated in the accumulation buffer 27, so that overflow or underflow does not occur.
  • When step S28 ends, the encoding process ends.
  • FIG. 3 illustrates a configuration of an embodiment of an image decoding device as an image processing device to which the present disclosure is applied.
  • An image decoding device 51 shown in FIG. 3 is a decoding device corresponding to the image encoding device 11 of FIG.
  • encoded data encoded by the image encoding device 11 is transmitted to an image decoding device 51 corresponding to the image encoding device 11 via a predetermined transmission path and decoded.
  • the image decoding device 51 includes an accumulation buffer 61, a lossless decoding unit 62, an inverse quantization unit 63, an inverse orthogonal transform unit 64, a calculation unit 65, a deblocking filter 66, a screen rearrangement buffer 67, and a D/A converter 68.
  • the image decoding device 51 includes a frame memory 69, a selection unit 70, an intra prediction unit 71, a motion prediction / compensation unit 72, and a selection unit 73.
  • the image decoding device 51 includes an adaptive offset filter 81 and an adaptive loop filter 82 between the deblocking filter 66, the screen rearrangement buffer 67, and the frame memory 69.
  • the accumulation buffer 61 is also a receiving unit that receives transmitted encoded data.
  • the accumulation buffer 61 receives and accumulates the transmitted encoded data.
  • This encoded data is encoded by the image encoding device 11.
  • the lossless decoding unit 62 decodes the encoded data read from the accumulation buffer 61 at a predetermined timing by a method corresponding to the encoding method of the lossless encoding unit 26 in FIG.
  • the lossless decoding unit 62 supplies parameters such as information indicating the decoded intra prediction mode to the intra prediction unit 71, and supplies parameters such as information indicating the inter prediction mode and motion vector information to the motion prediction/compensation unit 72. Further, the lossless decoding unit 62 supplies the decoded adaptive loop filter parameters (the on/off flag for each LCU, filter coefficients, buffer indexes, and the like) to the adaptive loop filter 82.
  • the inverse quantization unit 63 inversely quantizes the coefficient data (quantization coefficient) obtained by decoding by the lossless decoding unit 62 by a method corresponding to the quantization method of the quantization unit 25 in FIG. That is, the inverse quantization unit 63 uses the quantization parameter supplied from the image encoding device 11 to perform inverse quantization of the quantization coefficient by the same method as the inverse quantization unit 28 in FIG.
  • the inverse quantization unit 63 supplies the inversely quantized coefficient data, that is, the orthogonal transform coefficient, to the inverse orthogonal transform unit 64.
  • the inverse orthogonal transform unit 64 performs inverse orthogonal transform on the orthogonal transform coefficient by a method corresponding to the orthogonal transform method of the orthogonal transform unit 24 in FIG. 1, and obtains decoded residual data corresponding to the residual data before the orthogonal transform in the image encoding device 11.
  • the decoded residual data obtained by the inverse orthogonal transform is supplied to the arithmetic unit 65. Further, a prediction image is supplied to the calculation unit 65 from the intra prediction unit 71 or the motion prediction / compensation unit 72 via the selection unit 73.
  • the calculating unit 65 adds the decoded residual data and the predicted image, and obtains decoded image data corresponding to the image data before the predicted image is subtracted by the calculating unit 23 of the image encoding device 11.
  • the arithmetic unit 65 supplies the decoded image data to the deblocking filter 66.
  • the deblocking filter 66 removes block distortion of the decoded image by appropriately performing deblocking filter processing.
  • the deblocking filter 66 supplies the filter processing result to the adaptive offset filter 81.
  • the adaptive offset filter 81 performs an offset filter (SAO) process that mainly removes ringing on the image after filtering by the deblocking filter 66.
  • the adaptive offset filter 81 performs filter processing on the image after filtering by the deblocking filter 66, using the quad-tree structure in which the type of the offset filter is determined for each divided region, and the offset value for each divided region.
  • the adaptive offset filter 81 supplies the filtered image to the adaptive loop filter 82.
  • the quad-tree structure and the offset value for each divided region are those calculated by the adaptive offset filter 41 of the image encoding device 11, encoded, and sent.
  • the quad-tree structure and the offset value for each divided region encoded by the image encoding device 11 are received by the image decoding device 51, decoded by the lossless decoding unit 62, and used by the adaptive offset filter 81.
  • the adaptive loop filter 82 is configured basically in the same manner as the adaptive loop filter 42 of FIG. 1 and performs an adaptive loop filter (ALF: Adaptive Loop Filter) processing in units of LCUs which are the maximum coding units as ALF processing units.
  • the filtered image is supplied to the screen rearrangement buffer 67 and the frame memory 69.
  • the adaptive loop filter 82 has a buffer for storing filter coefficients, like the adaptive loop filter 42 of FIG. 1. In this buffer, a referenced filter coefficient is moved to a high-priority area. In addition to being reset for each screen, this buffer is reset for each closed range of a horizontal line that is larger than an LCU, for example, for each LCU line (largest coding unit line).
  • the filter coefficient from the lossless decoding unit 62 or the filter coefficient corresponding to the buffer index from the lossless decoding unit 62 stored in the buffer is used for the filter processing.
  • the filter coefficient used for the filter processing is exchanged with the filter coefficient stored in the higher-priority area in the buffer (the storage areas are swapped).
  • the used filter coefficient is stored in the highest-priority area among the free areas in the buffer.
  • FIFO processing is performed in the buffer.
  • the screen rearrangement buffer 67 rearranges images. That is, the order of frames rearranged for encoding by the screen rearrangement buffer 22 in FIG. 1 is rearranged to the original display order.
  • the D / A converter 68 performs D / A conversion on the image supplied from the screen rearrangement buffer 67, and outputs and displays the image on a display (not shown).
  • the output of the adaptive loop filter 82 is further supplied to the frame memory 69.
  • the frame memory 69, the selection unit 70, the intra prediction unit 71, the motion prediction/compensation unit 72, and the selection unit 73 correspond to the frame memory 32, the selection unit 33, the intra prediction unit 34, the motion prediction/compensation unit 35, and the predicted image selection unit 36 of the image encoding device 11, respectively.
  • the selection unit 70 reads out the inter-processed image and the referenced image from the frame memory 69 and supplies the image to the motion prediction / compensation unit 72.
  • the selection unit 70 reads an image used for intra prediction from the frame memory 69 and supplies the image to the intra prediction unit 71.
  • the intra prediction unit 71 is appropriately supplied with information indicating the intra prediction mode obtained by decoding the header information from the lossless decoding unit 62. Based on this information, the intra prediction unit 71 generates a prediction image from the reference image acquired from the frame memory 69 and supplies the generated prediction image to the selection unit 73.
  • the motion prediction / compensation unit 72 is supplied with information (prediction mode information, motion vector information, reference frame information, flags, various parameters, etc.) obtained by decoding the header information from the lossless decoding unit 62.
  • the motion prediction / compensation unit 72 generates a prediction image from the reference image acquired from the frame memory 69 based on the information supplied from the lossless decoding unit 62 and supplies the generated prediction image to the selection unit 73.
  • the selection unit 73 selects the prediction image generated by the motion prediction / compensation unit 72 or the intra prediction unit 71 and supplies the selected prediction image to the calculation unit 65.
  • In step S51, the accumulation buffer 61 accumulates the transmitted encoded data.
  • the lossless decoding unit 62 decodes the encoded data supplied from the accumulation buffer 61.
  • the I picture, P picture, and B picture encoded by the lossless encoding unit 26 in FIG. 1 are decoded.
  • parameter information such as motion vector information, reference frame information, and prediction mode information (intra prediction mode or inter prediction mode) is also decoded.
  • when the prediction mode information is intra prediction mode information, the prediction mode information is supplied to the intra prediction unit 71.
  • when the prediction mode information is inter prediction mode information, the motion vector information corresponding to the prediction mode information is supplied to the motion prediction/compensation unit 72.
  • the parameters of the adaptive loop filter are decoded and supplied to the adaptive loop filter 82.
  • In step S53, the intra prediction unit 71 or the motion prediction/compensation unit 72 performs a prediction image generation process corresponding to the prediction mode information supplied from the lossless decoding unit 62.
  • that is, when intra prediction mode information is supplied from the lossless decoding unit 62, the intra prediction unit 71 generates an intra prediction image in the intra prediction mode.
  • when inter prediction mode information is supplied, the motion prediction/compensation unit 72 performs motion prediction/compensation processing in the inter prediction mode to generate an inter prediction image.
  • the prediction image (intra prediction image) generated by the intra prediction unit 71 or the prediction image (inter prediction image) generated by the motion prediction / compensation unit 72 is supplied to the selection unit 73.
  • In step S54, the selection unit 73 selects a predicted image. That is, the prediction image generated by the intra prediction unit 71 or the prediction image generated by the motion prediction/compensation unit 72 is supplied. The supplied predicted image is therefore selected, supplied to the calculation unit 65, and added to the output of the inverse orthogonal transform unit 64 in step S57 described later.
  • In step S52, the transform coefficient decoded by the lossless decoding unit 62 is also supplied to the inverse quantization unit 63.
  • In step S55, the inverse quantization unit 63 inversely quantizes the transform coefficient decoded by the lossless decoding unit 62 with characteristics corresponding to the characteristics of the quantization unit 25 in FIG. 1.
  • In step S56, the inverse orthogonal transform unit 64 performs inverse orthogonal transform on the transform coefficient inversely quantized by the inverse quantization unit 63, with characteristics corresponding to the characteristics of the orthogonal transform unit 24 of FIG. 1. As a result, the difference information corresponding to the input of the orthogonal transform unit 24 of FIG. 1 (the output of the calculation unit 23) is decoded.
  • In step S57, the calculation unit 65 adds the prediction image, selected in step S54 described above and input via the selection unit 73, to the difference information. As a result, the original image is decoded.
  • In step S58, the deblocking filter 66 performs deblocking filter processing on the image output from the calculation unit 65, whereby block distortion is removed.
  • the decoded image from the deblocking filter 66 is output to the adaptive offset filter 81.
  • In step S59, the adaptive offset filter 81 performs adaptive offset filter processing.
  • the filter processing is performed on the image after filtering by the deblocking filter 66, using the quad-tree structure in which the type of the offset filter is determined for each divided region, and the offset value for each divided region.
  • the filtered image is supplied to the adaptive loop filter 82.
  • In step S60, the adaptive loop filter 82 performs adaptive loop filter processing on the image after filtering by the adaptive offset filter 81.
  • the filtered image is supplied to the screen rearrangement buffer 67 and the frame memory 69.
  • the filter coefficient from the lossless decoding unit 62, or the filter coefficient stored in the buffer that corresponds to the buffer index from the lossless decoding unit 62, is used for the filter processing.
  • the filter coefficient used for the filter processing is exchanged with the filter coefficient stored in the higher-priority area in the buffer (the storage areas are swapped).
  • the used filter coefficient is stored in the highest-priority area among the free areas in the buffer.
  • FIFO processing is performed in the buffer.
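The decoder-side buffer update described above mirrors the encoder's: a received buffer index promotes the referenced entry, while a received coefficient set is inserted with FIFO behavior when the buffer is full. A minimal Python sketch under the assumption that index 0 is the highest-priority area (all names are hypothetical):

```python
def decoder_update(slots, capacity, coeff=None, index=None):
    """Update the decoder's coefficient buffer for one LCU.
    Either a buffer index or an explicit coefficient set is received."""
    if index is not None:
        used = slots[index]
        if index > 0:
            # Promote the referenced entry one slot toward the front.
            slots[index - 1], slots[index] = slots[index], slots[index - 1]
        return used
    if len(slots) < capacity:
        slots.append(coeff)        # smallest-index free area
    else:
        slots.insert(0, coeff)     # FIFO: newest set to the front,
        slots.pop()                # last (lowest-priority) entry discarded
    return coeff
```

Because encoder and decoder apply the same update rules, the buffer contents stay synchronized without extra signalling.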
  • In step S61, the frame memory 69 stores the filtered image.
  • In step S62, the screen rearrangement buffer 67 rearranges the images output from the adaptive loop filter 82. That is, the order of frames rearranged for encoding by the screen rearrangement buffer 22 of the image encoding device 11 is rearranged to the original display order.
  • In step S63, the D/A conversion unit 68 performs D/A conversion on the image from the screen rearrangement buffer 67. This image is output to a display (not shown), and the image is displayed.
  • When step S63 ends, the decoding process ends.
  • the filter coefficient 111 is a filter coefficient of the most recently transmitted LCU.
  • the filter coefficient is transmitted from the encoding side for each LCU.
  • the transmitted filter coefficient 111 of the latest LCU is stored at the head of the ALF coefficient buffer 112, and the filter coefficients stored up to that point are sequentially moved toward the rear of the buffer. Therefore, when there is no free space, the filter coefficient transmitted earliest is discarded, no matter how many times it has been used.
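The plain FIFO management described for Non-Patent Document 2 might be sketched as follows (a hypothetical Python illustration; the names are assumptions, not the reference's implementation):

```python
from collections import deque

def fifo_store(buf, coeff, capacity):
    """Plain FIFO buffering: the newest coefficient set goes to the head,
    and the oldest entry is discarded when the buffer is full, regardless
    of how often it has been referenced."""
    buf.appendleft(coeff)  # newest coefficient set at the head
    while len(buf) > capacity:
        buf.pop()          # oldest (tail) entry dropped unconditionally
```

Here a coefficient set like `c1`, even if referenced in every subsequent LCU, is still dropped once capacity is reached, which is exactly the behavior the present technique improves on.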
  • in the method of Non-Patent Document 2, even when the filter coefficient used for the filter processing is the same as a coefficient transmitted in the past, it may need to be transmitted again, which incurs transmission cost.
  • the referenced filter coefficient is moved to a high-priority area in the buffer. Further, the buffer is reset for each closed range of a horizontal line.
  • the closed range of the horizontal line is the range from the left end to the right end of the image frame.
  • the buffer is reset for each LCU line.
  • the range from the left end to the right end of the tile may be used.
  • the filter coefficient 111 is a filter coefficient of the most recently transmitted LCU.
  • the ALF coefficient buffer 121-1 represents an ALF coefficient buffer when the filter coefficient stored in the buffer is referred to in the latest LCU.
  • An ALF coefficient buffer 121-2 indicates an ALF coefficient buffer when the transmitted filter coefficient 111 is used in the latest LCU.
  • when the filter coefficient stored in the buffer is referenced, the referenced filter coefficient is moved to a higher-priority area in the ALF coefficient buffer 121-1. In the example of FIG. 6, since the front of the buffer is the high-priority area, the referenced filter coefficient is moved forward in the buffer.
  • as shown by the ALF coefficient buffer 121-2, when the filter coefficient 111 transmitted in the most recent LCU is used and there is no free area, the filter coefficient 111 is stored and FIFO processing is performed. That is, the transmitted filter coefficient 111 is stored in the area of the lowest index, which is the highest-priority area, in the ALF coefficient buffer 121-2, and the stored filter coefficients are sequentially moved backward in the buffer. In the ALF coefficient buffer 121-2, the filter coefficient that has been used least is discarded. That is, filter coefficients that are not used are more likely to be discarded.
  • when the filter coefficient 111 transmitted in the latest LCU is used and there is a free area, the filter coefficient 111 is stored in the highest-priority area among the free areas.
  • the buffer is reset in the closed range of the horizontal line, for example, in units of LCU lines.
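The buffer behavior of FIGS. 5 and 6 (promotion of referenced coefficients, FIFO insertion of new coefficients, and reset per screen and per LCU line) can be sketched as follows. This is an illustrative Python model under the assumption that index 0 is the highest-priority area; it is not the patent's implementation:

```python
class AlfCoeffBuffer:
    """Priority-managed coefficient buffer: index 0 = highest priority."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = []

    def reset(self):
        # Called at the start of each picture and each LCU line.
        self.slots.clear()

    def refer(self, index):
        # A referenced coefficient is swapped one position toward the
        # front (the higher-priority area); returns its new index.
        if index > 0:
            self.slots[index - 1], self.slots[index] = (
                self.slots[index], self.slots[index - 1])
            return index - 1
        return index

    def store(self, coeff):
        if len(self.slots) < self.capacity:
            # Free area exists: use the smallest-index free slot.
            self.slots.append(coeff)
        else:
            # No free area: FIFO processing. The new coefficient takes
            # the highest-priority slot, existing entries shift back,
            # and the last entry is discarded.
            self.slots.insert(0, coeff)
            self.slots.pop()
```

Under this model, frequently referenced coefficients migrate toward the front and survive FIFO eviction, while unused coefficients drift to the back and are discarded first.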
  • FIG. 7 is a block diagram illustrating a configuration example of an adaptive loop filter and a lossless encoding unit in the image encoding device of FIG.
  • the adaptive loop filter 42 is configured to include an image buffer 211, an ALF coefficient calculation unit 212, an ALF processing unit 213, a coefficient reading unit 214, and an ALF coefficient storage buffer 215.
  • the adaptive loop filter 42 is configured to include an RD evaluation value calculation unit 216, a buffer management unit 217, and an ALF parameter setting unit 218.
  • the lossless encoding unit 26 is configured to include at least a syntax writing unit 221.
  • the image before the adaptive loop filter from the adaptive offset filter 41 and the original image from the screen rearrangement buffer 22 are input to the image buffer 211.
  • the image buffer 211 temporarily stores the pre-filter image and the original image, and supplies them to the ALF coefficient calculation unit 212 at a predetermined timing. Further, the image buffer 211 supplies the pre-filter image to the ALF processing unit 213 and supplies the original image to the RD evaluation value calculation unit 216.
  • the ALF coefficient calculation unit 212 calculates the correlation between the original image and the pre-filter image for each LCU, finds the best ALF filter coefficient (that is, the coefficient that minimizes the error from the original image), and calculates the obtained ALF.
  • the filter coefficient is supplied to the ALF processing unit 213.
  • the ALF processing unit 213 reads the filter coefficient stored in the ALF coefficient storage buffer 215 by supplying the buffer index to the coefficient reading unit 214.
  • the ALF processing unit 213 performs filter processing on the pre-filter image from the image buffer 211 for each LCU using the filter coefficient from the ALF coefficient calculation unit 212 and the read filter coefficient. Then, the ALF processing unit 213 supplies the post-filter pixel value and the filter coefficient or buffer index at that time to the RD evaluation value calculation unit 216.
  • the ALF processing unit 213 performs, after RD evaluation, for each LCU on the pre-filter image from the image buffer 211 based on information indicating the presence / absence of processing (on / off) from the RD evaluation value calculation unit 216. Filter processing is performed using the filter coefficient supplied from the coefficient reading unit 214. Then, the ALF processing unit 213 supplies the filtered pixel value to the frame memory 32.
  • the coefficient reading unit 214 reads out the filter coefficient corresponding to the buffer index requested from the ALF processing unit 213 from the ALF coefficient storage buffer 215 and supplies it to the ALF processing unit 213. Further, after the RD evaluation, the coefficient reading unit 214 reads out the filter coefficient corresponding to the buffer index from the buffer management unit 217 from the ALF coefficient storage buffer 215 and supplies it to the ALF processing unit 213.
  • the ALF coefficient storage buffer 215 is configured to store N patterns of ALF filter coefficients (sets) per screen (picture).
  • the buffer size is (ALF tap length × coefficient bit precision × N) bits. However, when the coefficients are compressed by variable-length coding, the size becomes smaller than this.
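For example, with hypothetical values (a 10-tap filter, 8-bit coefficient precision, and N = 16 stored sets; none of these values are specified in the text), the uncompressed buffer size works out as:

```python
alf_tap_length = 10       # taps per filter (assumed value)
coeff_bit_precision = 8   # bits per coefficient (assumed value)
n_patterns = 16           # stored coefficient sets per picture (assumed value)

# Buffer size = ALF tap length x coefficient bit precision x N, in bits.
buffer_size_bits = alf_tap_length * coeff_bit_precision * n_patterns
print(buffer_size_bits)   # 1280 bits; variable-length coding would shrink this
```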
  • the ALF filter coefficients stored in the ALF coefficient storage buffer 215 are reset for each screen and for each LCU line.
  • the ALF coefficient storage buffer 215 is managed by the buffer management unit 217 so that an area with a small index is an area with a high priority.
  • the RD evaluation value calculation unit 216 uses the original image from the image buffer 211 and the filtered image from the ALF processing unit 213 to calculate an evaluation value for each LCU and determine whether or not to perform ALF processing with each filter coefficient.
  • when it is determined that the filter processing is to be performed, the RD evaluation value calculation unit 216 supplies that fact to the ALF processing unit 213, the buffer management unit 217, and the ALF parameter setting unit 218.
  • at that time, the RD evaluation value calculation unit 216 also supplies the filter coefficient to be used, or the buffer index corresponding to that filter coefficient, to the buffer management unit 217 and the ALF parameter setting unit 218.
  • when it is determined that the filter processing is not to be performed, the RD evaluation value calculation unit 216 supplies only that fact to the ALF processing unit 213, the buffer management unit 217, and the ALF parameter setting unit 218.
  • the buffer management unit 217 initializes (resets) the ALF coefficient storage buffer 215 at the beginning of the screen and at the right end of the LCU line.
  • the buffer management unit 217 causes the coefficient reading unit 214 to read out the filter coefficient corresponding to the buffer index in the ALF coefficient storage buffer 215.
  • the buffer management unit 217 moves the filter coefficient corresponding to the buffer index to a higher-priority area in the ALF coefficient storage buffer 215. Specifically, the buffer management unit 217 exchanges the read filter coefficient with the filter coefficient of the index immediately before it. That is, a referenced filter coefficient is moved one position at a time toward the higher-priority area.
  • the buffer management unit 217 stores the filter coefficient in the area with the highest priority (the area with the smallest index) among the empty areas in the ALF coefficient storage buffer 215.
  • the buffer management unit 217 performs FIFO processing of the ALF coefficient storage buffer 215 using the filter coefficient from the RD evaluation value calculation unit 216. Then, the buffer management unit 217 causes the coefficient reading unit 214 to read the stored filter coefficient. That is, the buffer management unit 217 is also a unit that manages a history when referring to the filter coefficient.
  • the ALF parameter setting unit 218 refers to the information supplied from the RD evaluation value calculation unit 216, sets an on/off flag indicating whether or not to perform ALF processing, and supplies the set on/off flag to the syntax writing unit 221.
  • the buffer index or filter coefficient to be used is also supplied to the syntax writing unit 221.
  • the syntax writing unit 221 adds each parameter to the header of the encoded stream. For example, the syntax writing unit 221 adds an on / off flag from the ALF parameter setting unit 218 and, if necessary, an index or a filter coefficient to the header of the encoded stream.
  • the image before the adaptive loop filter from the adaptive offset filter 41 and the original image from the screen rearrangement buffer 22 are input to the image buffer 211.
  • the image buffer 211 temporarily stores the pre-filter image and the original image, and supplies them to the ALF coefficient calculation unit 212 at a predetermined timing.
  • In step S211, the buffer management unit 217 initializes (resets) the ALF coefficient storage buffer 215.
  • In step S212, the ALF coefficient calculation unit 212 calculates the correlation between the original image and the pre-filter image in each LCU, and obtains one best ALF filter coefficient. This ALF coefficient calculation process is performed after the prediction process, residual encoding process, and deblocking filter process of each LCU.
  • the ALF coefficient calculation unit 212 supplies the obtained filter coefficient to the ALF processing unit 213.
  • the ALF processing unit 213 performs filter processing on the pre-filter image from the image buffer 211 for each LCU using the filter coefficient from the ALF coefficient calculation unit 212 and the read filter coefficient. Then, the ALF processing unit 213 supplies the post-filter pixel value and the filter coefficient or buffer index at that time to the RD evaluation value calculation unit 216.
  • the RD evaluation value calculation unit 216 calculates an evaluation value for each LCU. That is, the RD evaluation value calculation unit 216 determines, for each LCU, whether or not to perform ALF processing using the obtained filter coefficient, based on the evaluation value obtained by RD calculation. At that time, a filter coefficient to be used is also selected.
  • R1: the total amount of bits required for transmission of the on/off flag indicating that ALF is performed and the filter coefficient index
  • D1: the SAD (sum of absolute differences) between the filtered image and the original image
  • R0: the amount of bits of the on/off flag indicating that ALF is not performed
  • D0: the SAD (sum of absolute differences) between the unfiltered image and the original image
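These evaluation values can be combined in the usual Lagrangian form J = D + λR to make the on/off decision of step S214. The text does not specify λ or the exact cost formula, so the following Python sketch is an assumption about how the comparison might be carried out:

```python
def alf_on_off(d1, r1, d0, r0, lam):
    """Return True if ALF should be applied for this LCU: compare the
    rate-distortion cost J = D + lambda * R for the filtered case
    (D1, R1) against the unfiltered case (D0, R0)."""
    j_on = d1 + lam * r1    # distortion after filtering + flag/index bits
    j_off = d0 + lam * r0   # distortion without filtering + flag bit only
    return j_on < j_off
```

Filtering is chosen only when the distortion saved outweighs the bits spent on the flag and the coefficient index (or coefficients).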
  • In step S214, the RD evaluation value calculation unit 216 determines whether or not to perform the filter processing, based on the obtained evaluation value. If it is determined in step S214 that the filter processing is to be performed, the process proceeds to step S215.
  • In step S215, the RD evaluation value calculation unit 216 determines whether or not the selected filter coefficient is the filter coefficient obtained by the ALF coefficient calculation unit 212. If it is determined in step S215 that it is not the filter coefficient obtained by the ALF coefficient calculation unit 212, the process proceeds to step S216. At this time, the RD evaluation value calculation unit 216 supplies, to the buffer management unit 217 and the ALF parameter setting unit 218, the fact that the filter processing is performed and the buffer index corresponding to the selected filter coefficient.
  • In step S216, the ALF parameter setting unit 218 sets a flag indicating that the filter processing is performed, and causes the syntax writing unit 221 to encode the set flag and the buffer index. That is, the ALF parameter setting unit 218 supplies the set flag and buffer index to the syntax writing unit 221 as adaptive loop filter parameters.
  • syntax writing unit 221 writes the parameter of the adaptive loop filter from the ALF parameter setting unit 218 to the header portion of the encoded stream in units of LCUs in step S26 of FIG.
  • In step S217, the buffer management unit 217 causes the coefficient reading unit 214 to read out, from the ALF coefficient storage buffer 215, the filter coefficient corresponding to the buffer index from the RD evaluation value calculation unit 216.
  • the coefficient reading unit 214 reads out the filter coefficient corresponding to the buffer index from the buffer management unit 217 from the ALF coefficient storage buffer 215 and supplies it to the ALF processing unit 213.
  • In step S218, the ALF processing unit 213 performs filter processing, for each LCU, on the pre-filter image from the image buffer 211 using the filter coefficient read by the coefficient reading unit 214, and supplies the post-filter pixel value to the frame memory 32.
  • In step S219, the buffer management unit 217 exchanges the filter coefficient of the referenced index with the filter coefficient of the immediately preceding index. In other words, the buffer management unit 217 moves the filter coefficient of the referenced index to a higher-priority area.
  • If it is determined in step S215 that the filter coefficient is the one obtained by the ALF coefficient calculation unit 212, the process proceeds to step S220. At this time, the RD evaluation value calculation unit 216 supplies, to the buffer management unit 217 and the ALF parameter setting unit 218, the fact that the filter processing is performed and the selected filter coefficient.
  • In step S220, the ALF parameter setting unit 218 sets a flag indicating that the filter processing is performed, and causes the syntax writing unit 221 to encode the set flag and the filter coefficient. That is, the ALF parameter setting unit 218 supplies the set flag and filter coefficient to the syntax writing unit 221 as parameters of the adaptive loop filter.
  • syntax writing unit 221 writes the parameter of the adaptive loop filter from the ALF parameter setting unit 218 to the header portion of the encoded stream in units of LCUs in step S26 of FIG.
  • In step S221, the buffer management unit 217 determines whether or not there is free space in the ALF coefficient storage buffer 215. If it is determined in step S221 that there is free space in the ALF coefficient storage buffer 215, the process proceeds to step S222.
  • In step S222, the buffer management unit 217 stores the filter coefficient supplied from the RD evaluation value calculation unit 216 in the area of the smallest index among the free areas in the ALF coefficient storage buffer 215.
  • the filter coefficient is stored in a high priority area among the free areas.
  • If it is determined in step S221 that there is no free space in the ALF coefficient storage buffer 215, the process proceeds to step S223.
  • In step S223, the buffer management unit 217 performs FIFO processing of the ALF coefficient storage buffer 215 using the filter coefficient from the RD evaluation value calculation unit 216. That is, the filter coefficient from the RD evaluation value calculation unit 216 is stored in the area of the smallest index, which is the highest-priority area, in the ALF coefficient storage buffer 215; the previously stored filter coefficients are sequentially moved to the subsequent areas, and the filter coefficient stored in the last area is discarded.
  • the buffer management unit 217 causes the coefficient reading unit 214 to read out the filter coefficients stored in the ALF coefficient storage buffer 215, and the process proceeds to step S224.
  • In step S224, the ALF processing unit 213 performs filter processing, for each LCU, on the pre-filter image from the image buffer 211 using the filter coefficient supplied from the coefficient reading unit 214, and supplies the post-filter pixel value to the frame memory 32.
  • If it is determined in step S214 that the filter processing is not to be performed, the RD evaluation value calculation unit 216 supplies the fact that the filter processing is not performed to the ALF processing unit 213, the buffer management unit 217, and the ALF parameter setting unit 218.
  • in this case, the ALF processing unit 213 supplies the pre-filter image from the image buffer 211 to the frame memory 32 as it is, without performing filter processing.
  • In step S225, the ALF parameter setting unit 218 sets a flag indicating that the filter processing is not performed, and supplies the set flag to the syntax writing unit 221 as an adaptive loop filter parameter.
  • syntax writing unit 221 writes the parameter of the adaptive loop filter from the ALF parameter setting unit 218 to the header portion of the encoded stream in units of LCUs in step S26 of FIG.
  • After step S219, the process proceeds to step S226.
  • In step S226, the ALF coefficient calculation unit 212 determines whether or not the processing target LCU is the last (rightmost) LCU in the LCU line. If it is determined in step S226 that it is not the last (rightmost) LCU in the LCU line, the process returns to step S212 and the subsequent processes are repeated.
  • step S225 If it is determined in step S225 that it is the last (right end) LCU in the LCU line, the process proceeds to step S227.
  • step S227 the ALF coefficient calculation unit 212 determines whether or not the processing target LCU is the last LCU on the screen. If it is determined in step S227 that it is not the last LCU on the screen, the process returns to step S211, the buffer is initialized, and the subsequent processes are repeated.
  • step S227 If it is determined in step S227 that the LCU is the last LCU on the screen, the adaptive loop filter process is terminated.
  • the filter coefficient stored in the buffer when used for the filter processing, the filter coefficient is moved to a high priority area in the buffer.
  • the filter coefficient when the obtained filter coefficient is used for the filtering process, the filter coefficient is stored in an area having the highest priority among the free areas. If there is no free area, FIFO processing is performed with the obtained filter coefficient.
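The buffer discipline just described — move a referenced coefficient one area toward higher priority, store a newly used coefficient in the smallest-index free area, and push the oldest entry out when the buffer is full — can be sketched as follows. This is an illustrative model only; the class and method names are ours, not those of the embodiment.

```python
class AlfCoeffBuffer:
    """Illustrative model of the ALF coefficient storage buffer.

    The area at index 0 has the highest priority; the area at the
    largest index is the one discarded by FIFO processing.
    """

    def __init__(self, num_areas):
        self.num_areas = num_areas
        self.areas = []  # stored filter-coefficient sets, index 0 first

    def reset(self):
        # Performed at the start of a screen and of each LCU line.
        self.areas = []

    def on_reference(self, index):
        # A stored coefficient was reused for filtering: swap it with
        # the coefficient at the immediately preceding index, moving it
        # one step toward the high-priority side. Returns its new index.
        if index > 0:
            self.areas[index - 1], self.areas[index] = (
                self.areas[index], self.areas[index - 1])
            return index - 1
        return index

    def store_new(self, coeff):
        # A newly obtained coefficient was used for filtering.
        if len(self.areas) < self.num_areas:
            # Free areas exist: take the smallest-index
            # (highest-priority) free area.
            self.areas.append(coeff)
        else:
            # No free area: FIFO processing. The new coefficient enters
            # at the front, stored ones shift back, and the coefficient
            # in the last area is discarded.
            self.areas = [coeff] + self.areas[:-1]
```

For example, with three areas, storing coefficient sets a, b, c and then d discards c; referencing the set now at index 2 promotes it by one area, so rarely referenced sets drift toward the discard end.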
  • FIG. 9 is a block diagram illustrating a configuration example of the lossless decoding unit and the adaptive loop filter in the image decoding device of FIG.
  • the lossless decoding unit 62 is configured to include at least a syntax reading unit 251.
• the adaptive loop filter 82 is configured to include a parameter receiving unit 261, a coefficient reading unit 262, a buffer management unit 263, an ALF coefficient storage buffer 264, and an ALF processing unit 265.
  • the syntax reading unit 251 reads the syntax from the header part of the encoded stream, and supplies the parameter of the adaptive loop filter to the parameter receiving unit 261.
• The parameters of the adaptive loop filter are, for example, a flag indicating on/off of the ALF processing, and either the filter coefficient to be used or the buffer index of that filter coefficient.
  • the parameter receiving unit 261 receives the parameters of the adaptive loop filter supplied from the syntax reading unit 251.
  • the parameter receiving unit 261 supplies a flag indicating ALF processing on / off to the ALF processing unit 265.
  • the parameter receiving unit 261 supplies the filter coefficient to be used or the buffer index of the filter coefficient to the coefficient reading unit 262 and the buffer management unit 263.
• When the filter coefficient is supplied from the parameter receiving unit 261, the coefficient reading unit 262 supplies that filter coefficient to the ALF processing unit 265.
• When the buffer index is supplied from the parameter receiving unit 261, the coefficient reading unit 262 reads out the filter coefficient corresponding to that buffer index from the ALF coefficient storage buffer 264 and supplies the read filter coefficient to the ALF processing unit 265.
  • the main configuration of the buffer management unit 263 is basically the same as the configuration of the buffer management unit 217 of FIG.
  • the buffer management unit 263 initializes (resets) the ALF coefficient storage buffer 264 at the beginning of the screen and at the right end of the LCU line.
• When the buffer index is supplied from the parameter receiving unit 261, the buffer management unit 263 moves the filter coefficient corresponding to that buffer index to a higher-priority area in the ALF coefficient storage buffer 264.
• Specifically, the buffer management unit 263 exchanges the read filter coefficient with the filter coefficient at the index immediately preceding it. That is, a referenced filter coefficient is moved one area at a time toward the high-priority side. As a result, filter coefficients that are rarely referenced drift toward the low-priority areas and are discarded when FIFO processing is performed.
• When the filter coefficient itself is supplied from the parameter receiving unit 261, the buffer management unit 263 stores the supplied filter coefficient in the highest-priority area (the area with the smallest index) among the free areas in the ALF coefficient storage buffer 264. If there is no free area in the ALF coefficient storage buffer 264, the buffer management unit 263 performs FIFO processing of the ALF coefficient storage buffer 264 using the filter coefficient from the parameter receiving unit 261. In other words, the buffer management unit 263 also manages the reference history of the filter coefficients.
  • the ALF coefficient storage buffer 264 is basically configured in the same manner as the ALF coefficient storage buffer 215 of FIG. 7, and is configured to store N patterns of ALF filter coefficients (sets) per screen (picture).
• The buffer size is (ALF tap length × coefficient bit precision × N) bits. However, when the coefficients are compressed by variable-length coding, the size becomes smaller than this.
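As a rough size check, the uncompressed buffer size follows directly from that formula. The concrete figures below (a 10-tap filter, 8-bit coefficients, N = 16 stored patterns) are illustrative values of our own choosing, not ones fixed by the embodiment.

```python
def alf_buffer_size_bits(tap_length, coeff_bit_precision, n_patterns):
    # Uncompressed ALF coefficient storage buffer size:
    # (ALF tap length x coefficient bit precision x N) bits.
    return tap_length * coeff_bit_precision * n_patterns

# Illustrative figures only: a 10-tap filter with 8-bit coefficients
# and N = 16 stored patterns needs 1280 bits (160 bytes); variable-
# length coding of the coefficients would reduce this further.
bits = alf_buffer_size_bits(10, 8, 16)
```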
• The ALF filter coefficients stored in the ALF coefficient storage buffer 264 are reset for each screen and for each LCU line.
  • the ALF coefficient storage buffer 264 is managed by the buffer management unit 263 so that an area with a small index is an area with a high priority.
• The ALF processing unit 265 receives the pre-ALF-filter pixel values from the adaptive offset filter 81 and, in accordance with the on/off indicated by the flag from the parameter receiving unit 261, executes the ALF processing using the filter coefficient supplied from the coefficient reading unit 262.
  • the ALF processing unit 265 outputs the filtered pixel value to the screen rearrangement buffer 67 and the frame memory 69 in the subsequent stage.
• The buffer management unit 263 initializes (resets) the ALF coefficient storage buffer 264 in step S251.
• When the encoded stream is decoded in step S52 of FIG. 4, the syntax reading unit 251 reads the syntax from the header portion of the encoded stream.
  • the syntax reading unit 251 supplies the adaptive loop filter parameters (ALF parameters) of the read syntax to the parameter receiving unit 261.
  • the parameter receiving unit 261 receives the parameter of the adaptive loop filter for each LCU supplied from the syntax reading unit 251 in step S252.
  • the parameters for each LCU are, for example, a flag indicating on / off of ALF processing, and a filter coefficient to be used or a buffer index corresponding to the filter coefficient.
• In step S253, the parameter receiving unit 261 determines whether the flag is on. If the parameter receiving unit 261 determines that the flag is on, the process proceeds to step S254. At this time, the flag is supplied to the ALF processing unit 265, and the filter coefficient to be used or the buffer index corresponding to the filter coefficient is supplied to the coefficient reading unit 262 and the buffer management unit 263.
• In step S254, the coefficient reading unit 262 determines whether the filter coefficient itself has been received from the parameter receiving unit 261. If it is determined that the filter coefficient has not been received, the process proceeds to step S255.
• In step S255, the coefficient reading unit 262 reads out the filter coefficient corresponding to the buffer index from the parameter receiving unit 261 from the ALF coefficient storage buffer 264, and supplies the read filter coefficient to the ALF processing unit 265.
• In step S256, the ALF processing unit 265 performs ALF processing on the pre-ALF-filter pixel values from the adaptive offset filter 81 using the filter coefficient read by the coefficient reading unit 262.
  • the ALF processing unit 265 outputs the filtered pixel value to the screen rearrangement buffer 67 and the frame memory 69 in the subsequent stage.
• In step S257, the buffer management unit 263 exchanges the filter coefficient of the referenced index with the filter coefficient of the immediately preceding index. In other words, the filter coefficient of the referenced index is moved to a higher-priority area in the ALF coefficient storage buffer 264.
• If it is determined in step S254 that the filter coefficient has been received, the coefficient reading unit 262 supplies the received filter coefficient to the ALF processing unit 265, and the process proceeds to step S258.
• In step S258, the ALF processing unit 265 performs ALF processing on the pre-ALF-filter pixel values from the adaptive offset filter 81 using the filter coefficient received by the parameter receiving unit 261.
  • the ALF processing unit 265 outputs the filtered pixel value to the screen rearrangement buffer 67 and the frame memory 69 in the subsequent stage.
• In step S259, the buffer management unit 263 determines whether there is a free area in the ALF coefficient storage buffer 264. If it is determined in step S259 that there is a free area, the process proceeds to step S260.
• In step S260, the buffer management unit 263 stores the filter coefficient from the parameter receiving unit 261 in the highest-priority area (the area with the smallest index) among the free areas.
• If it is determined in step S259 that there is no free area, the process proceeds to step S261.
• In step S261, the buffer management unit 263 performs FIFO processing of the ALF coefficient storage buffer 264 using the filter coefficient from the parameter receiving unit 261.
• That is, the filter coefficient from the parameter receiving unit 261 is stored in the highest-priority area (the area with the smallest index) of the ALF coefficient storage buffer 264.
• The filter coefficients already stored are shifted one by one into the subsequent areas, and the filter coefficient stored in the last area is discarded.
• After step S257, step S260, and step S261, the process proceeds to step S262.
• If it is determined in step S253 that the flag is not on, the process also proceeds to step S262. In this case, the ALF processing unit 265 does not perform filter processing on the pre-filter image from the adaptive offset filter 81, and outputs it to the screen rearrangement buffer 67 and the frame memory 69 as it is.
• In step S262, the parameter receiving unit 261 determines whether the processing target LCU is the last (rightmost) LCU in the LCU line. If it is determined in step S262 that it is not the last (rightmost) LCU in the LCU line, the process returns to step S252 and the subsequent processing is repeated.
• If it is determined in step S262 that it is the last (rightmost) LCU in the LCU line, the process proceeds to step S263.
• In step S263, the buffer management unit 263 determines whether the processing target LCU is the last LCU on the screen. If it is determined in step S263 that it is not the last LCU on the screen, the process returns to step S251, the buffer is initialized, and the subsequent processing is repeated.
• If it is determined in step S263 that it is the last LCU on the screen, the adaptive loop filter process ends.
• As described above, also on the decoding side, when a filter coefficient stored in the buffer is used for the filter processing, that filter coefficient is moved to a higher-priority area in the buffer.
• When a filter coefficient received from the encoding side is used for the filter processing, it is stored in the highest-priority area among the free areas; if there is no free area, FIFO processing is performed with the received filter coefficient.
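The per-LCU decoder-side handling (steps S253 to S261) can be condensed into the following self-contained sketch. The parameter layout and function name are ours, and the buffer is modeled as a plain list with area 0 as the highest-priority area; this is an illustration of the flow, not the embodiment's implementation.

```python
def decode_lcu_alf(params, areas, num_areas, apply_filter):
    """Dispatch the ALF parameters of one LCU (illustrative).

    params: {'on': bool} plus either 'coeff' (an explicit
            filter-coefficient set) or 'index' (a buffer index).
    areas:  list modeling the ALF coefficient storage buffer,
            index 0 = highest-priority area.
    """
    if not params["on"]:
        return  # flag off: pre-filter pixels pass through as-is
    if "coeff" in params:
        apply_filter(params["coeff"])           # step S258
        if len(areas) < num_areas:              # steps S259-S260
            areas.append(params["coeff"])       # smallest-index free area
        else:                                   # step S261: FIFO processing
            areas.insert(0, params["coeff"])
            areas.pop()                         # last area is discarded
    else:
        i = params["index"]
        apply_filter(areas[i])                  # steps S255-S256
        if i > 0:                               # step S257: promote by one
            areas[i - 1], areas[i] = areas[i], areas[i - 1]
```

Running two explicit-coefficient LCUs followed by an index-only LCU shows the promotion: the referenced set swaps with its predecessor, while a further explicit coefficient on a full buffer pushes the lowest-priority set out.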
• In this way, a filter coefficient that is referenced many times is moved to a high-priority area (the area least likely to be discarded) in the buffer.
• As a result, the management of the filter coefficient buffer is more efficient than the simple FIFO processing described in Non-Patent Document 2, so retransmission of filter coefficients can be reduced.
• Accordingly, the transmission cost of the filter coefficients can be reduced, and the coding efficiency related to the filter coefficients can be improved.
• Furthermore, since the buffer is reset within a closed horizontal range, for example in units of LCU lines, the history management is kept from becoming overly complicated.
  • the HEVC method is used as the encoding method.
  • the present disclosure is not limited to this, and other encoding schemes / decoding schemes including at least an adaptive loop filter can be applied as the in-loop filter.
• The present disclosure can be applied, for example, to image encoding devices and image decoding devices used when image information (bitstreams) compressed by orthogonal transform such as discrete cosine transform and by motion compensation, as in HEVC, is received via network media such as satellite broadcasting, cable television, the Internet, or mobile phones.
  • the present disclosure can be applied to an image encoding device and an image decoding device that are used when processing on a storage medium such as an optical disk, a magnetic disk, and a flash memory.
  • FIG. 11 shows an example of a multi-view image encoding method.
  • the multi-viewpoint image includes a plurality of viewpoint images, and a predetermined one viewpoint image among the plurality of viewpoints is designated as the base view image.
  • Each viewpoint image other than the base view image is treated as a non-base view image.
• For example, a buffer that stores the adaptive loop filter coefficients (for example, the ALF coefficient storage buffer 215 in FIG. 7) can be managed within each view (same view). Furthermore, each view (different views) can share the buffer management of the adaptive loop filter coefficients performed in another view.
  • the management of the buffer that stores the coefficient of the adaptive loop filter in the base view is shared by at least one non-base view.
• In each view (same view), a buffer index of the buffer that stores the adaptive loop filter coefficients can be set.
  • each view (different view) can share a buffer index set in another view.
  • other adaptive loop filter parameters such as a filter coefficient and an on / off flag of an adaptive loop filter for each LCU can also be shared.
  • the buffer index set in the base view is used in at least one non-base view.
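The view-level sharing described above can be pictured as the non-base views holding a reference to the buffer managed for the base view, so that only buffer indices need to be signalled for those views. The layout below is purely illustrative; the names and the sample coefficient values are our own.

```python
# Illustrative sketch: non-base views share the base view's buffer
# management rather than keeping their own coefficient histories.
base_view_areas = []  # ALF coefficient buffer managed for the base view

view_buffers = {
    "base":       base_view_areas,
    "non_base_0": base_view_areas,  # shared management (different view)
    "non_base_1": base_view_areas,
}

# A coefficient set stored while processing the base view ...
view_buffers["base"].append([1, -2, 64, -2, 1])

# ... is visible under the same buffer index from the non-base views,
# so those views can transmit only the index instead of the coefficients.
shared = view_buffers["non_base_0"][0]
```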
  • FIG. 12 is a diagram illustrating a multi-view image encoding apparatus that performs the above-described multi-view image encoding.
  • the multi-view image encoding device 600 includes an encoding unit 601, an encoding unit 602, and a multiplexing unit 603.
  • the encoding unit 601 encodes the base view image and generates a base view image encoded stream.
  • the encoding unit 602 encodes the non-base view image and generates a non-base view image encoded stream.
• The multiplexing unit 603 multiplexes the base view image encoded stream generated by the encoding unit 601 and the non-base view image encoded stream generated by the encoding unit 602 to generate a multi-view image encoded stream.
  • the image encoding device 11 (FIG. 1) can be applied to the encoding unit 601 and the encoding unit 602 of the multi-view image encoding device 600.
  • the multi-view image encoding apparatus 600 sets and transmits the buffer index set by the encoding unit 601 and the buffer index set by the encoding unit 602.
  • the buffer index set by the encoding unit 601 as described above may be set and transmitted so as to be shared by the encoding unit 601 and the encoding unit 602.
  • the buffer index set by the encoding unit 602 may be set and transmitted so as to be shared by the encoding unit 601 and the encoding unit 602.
  • FIG. 13 is a diagram illustrating a multi-view image decoding apparatus that performs the above-described multi-view image decoding.
  • the multi-view image decoding apparatus 610 includes a demultiplexing unit 611, a decoding unit 612, and a decoding unit 613.
• The demultiplexing unit 611 demultiplexes the multi-view image encoded stream, in which the base view image encoded stream and the non-base view image encoded stream are multiplexed, and extracts the base view image encoded stream and the non-base view image encoded stream.
  • the decoding unit 612 decodes the base view image encoded stream extracted by the demultiplexing unit 611 to obtain a base view image.
  • the decoding unit 613 decodes the non-base view image encoded stream extracted by the demultiplexing unit 611 to obtain a non-base view image.
  • the image decoding device 51 (FIG. 3) can be applied to the decoding unit 612 and the decoding unit 613 of the multi-view image decoding device 610.
• The multi-view image decoding apparatus 610 performs processing using the buffer index set by the encoding unit 601 and decoded by the decoding unit 612, and the buffer index set by the encoding unit 602 and decoded by the decoding unit 613.
• As described above, the buffer index set by the encoding unit 601 (or the encoding unit 602) may be set and transmitted so as to be shared by the encoding unit 601 and the encoding unit 602.
• In this case, processing is performed using the buffer index set by the encoding unit 601 (or the encoding unit 602) and decoded by the decoding unit 612 (or the decoding unit 613).
• FIG. 14 shows an example of a hierarchical image encoding method.
• The hierarchical image includes images of a plurality of layers (resolutions), and an image of a predetermined single layer among the plurality of resolutions is designated as the base layer image. Images of the layers other than the base layer image are treated as non-base layer images.
  • the management of the buffer for storing the coefficient of the adaptive loop filter in the base layer is shared by at least one non-base layer.
• In each layer (same layer), a buffer index of the buffer that stores the adaptive loop filter coefficients can be set.
• Each layer (different layers) can share a buffer index set in another layer.
  • the buffer index set in the base layer is used in at least one non-base layer.
  • other adaptive loop filter parameters such as a filter coefficient and an on / off flag of an adaptive loop filter for each LCU can also be shared.
  • FIG. 15 is a diagram illustrating a hierarchical image encoding apparatus that performs the above-described hierarchical image encoding.
  • the hierarchical image encoding device 620 includes an encoding unit 621, an encoding unit 622, and a multiplexing unit 623.
  • the encoding unit 621 encodes the base layer image and generates a base layer image encoded stream.
  • the encoding unit 622 encodes the non-base layer image and generates a non-base layer image encoded stream.
• The multiplexing unit 623 multiplexes the base layer image encoded stream generated by the encoding unit 621 and the non-base layer image encoded stream generated by the encoding unit 622 to generate a hierarchical image encoded stream.
  • the image encoding device 11 (FIG. 1) can be applied to the encoding unit 621 and the encoding unit 622 of the hierarchical image encoding device 620.
  • the hierarchical image encoding device 620 sets and transmits the buffer index set by the encoding unit 621 and the buffer index set by the encoding unit 622.
  • the buffer index set by the encoding unit 621 as described above may be set and transmitted so as to be shared by the encoding unit 621 and the encoding unit 622.
  • the buffer index set by the encoding unit 622 may be set and transmitted so as to be shared by the encoding unit 621 and the encoding unit 622.
• The demultiplexing unit 631 demultiplexes the hierarchical image encoded stream, in which the base layer image encoded stream and the non-base layer image encoded stream are multiplexed, and extracts the base layer image encoded stream and the non-base layer image encoded stream.
  • the decoding unit 632 decodes the base layer image encoded stream extracted by the demultiplexing unit 631 to obtain a base layer image.
  • the decoding unit 633 decodes the non-base layer image encoded stream extracted by the demultiplexing unit 631 to obtain a non-base layer image.
  • the image decoding device 51 (FIG. 4) can be applied to the decoding unit 632 and the decoding unit 633 of the hierarchical image decoding device 630.
• The hierarchical image decoding apparatus 630 performs processing using the buffer index set by the encoding unit 621 and decoded by the decoding unit 632, and the buffer index set by the encoding unit 622 and decoded by the decoding unit 633.
• As described above, the buffer index set by the encoding unit 621 (or the encoding unit 622) may be set and transmitted so as to be shared by the encoding unit 621 and the encoding unit 622.
• In this case, processing is performed using the buffer index set by the encoding unit 621 (or the encoding unit 622) and decoded by the decoding unit 632 (or the decoding unit 633).
  • the series of processes described above can be executed by hardware or can be executed by software.
  • a program constituting the software is installed in the computer.
• Here, the computer includes a computer incorporated in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions when various programs are installed.
  • FIG. 17 is a block diagram showing an example of the hardware configuration of a computer that executes the above-described series of processing by a program.
• In the computer 800, a CPU (Central Processing Unit) 801, a ROM (Read Only Memory) 802, and a RAM (Random Access Memory) 803 are connected to one another by a bus 804.
  • an input / output interface 805 is connected to the bus 804.
  • An input unit 806, an output unit 807, a storage unit 808, a communication unit 809, and a drive 810 are connected to the input / output interface 805.
  • the input unit 806 includes a keyboard, a mouse, a microphone, and the like.
  • the output unit 807 includes a display, a speaker, and the like.
  • the storage unit 808 includes a hard disk, a nonvolatile memory, and the like.
  • the communication unit 809 includes a network interface or the like.
  • the drive 810 drives a removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
• In the computer configured as described above, for example, the CPU 801 loads the program stored in the storage unit 808 into the RAM 803 via the input/output interface 805 and the bus 804 and executes it, whereby the above-described series of processes is performed.
  • the program executed by the computer 800 can be provided by being recorded in, for example, a removable medium 811 as a package medium or the like.
  • the program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
  • the program can be installed in the storage unit 808 via the input / output interface 805 by attaching the removable medium 811 to the drive 810.
  • the program can be received by the communication unit 809 via a wired or wireless transmission medium and installed in the storage unit 808.
  • the program can be installed in the ROM 802 or the storage unit 808 in advance.
• The program executed by the computer may be a program whose processing is performed in time series in the order described in this specification, or a program whose processing is performed in parallel or at necessary timing, such as when a call is made.
• The steps describing the program recorded on a recording medium are not limited to processing performed in time series according to the described order; they also include processing executed in parallel or individually.
  • system represents the entire apparatus composed of a plurality of devices (apparatuses).
  • the configuration described as one device (or processing unit) may be divided and configured as a plurality of devices (or processing units).
  • the configurations described above as a plurality of devices (or processing units) may be combined into a single device (or processing unit).
  • a configuration other than that described above may be added to the configuration of each device (or each processing unit).
• A part of the configuration of a certain device (or processing unit) may be included in the configuration of another device (or another processing unit). That is, the present technology is not limited to the above-described embodiments, and various modifications can be made without departing from the gist of the present technology.
• The image encoding device and the image decoding device described above can be applied to various electronic apparatuses: transmitters or receivers in satellite broadcasting, wired broadcasting such as cable TV, distribution on the Internet, distribution to terminals by cellular communication, and the like; recording devices that record images on media such as magnetic disks and flash memories; and playback devices that reproduce images from these storage media.
  • FIG. 18 shows an example of a schematic configuration of a television apparatus to which the above-described embodiment is applied.
  • the television apparatus 900 includes an antenna 901, a tuner 902, a demultiplexer 903, a decoder 904, a video signal processing unit 905, a display unit 906, an audio signal processing unit 907, a speaker 908, an external interface 909, a control unit 910, a user interface 911, And a bus 912.
  • Tuner 902 extracts a signal of a desired channel from a broadcast signal received via antenna 901, and demodulates the extracted signal. Then, the tuner 902 outputs the encoded bit stream obtained by the demodulation to the demultiplexer 903. In other words, the tuner 902 serves as a transmission unit in the television apparatus 900 that receives an encoded stream in which an image is encoded.
  • the demultiplexer 903 separates the video stream and audio stream of the viewing target program from the encoded bit stream, and outputs each separated stream to the decoder 904. Further, the demultiplexer 903 extracts auxiliary data such as EPG (Electronic Program Guide) from the encoded bit stream, and supplies the extracted data to the control unit 910. Note that the demultiplexer 903 may perform descrambling when the encoded bit stream is scrambled.
  • the decoder 904 decodes the video stream and audio stream input from the demultiplexer 903. Then, the decoder 904 outputs the video data generated by the decoding process to the video signal processing unit 905. In addition, the decoder 904 outputs audio data generated by the decoding process to the audio signal processing unit 907.
  • the video signal processing unit 905 reproduces the video data input from the decoder 904 and causes the display unit 906 to display the video.
  • the video signal processing unit 905 may cause the display unit 906 to display an application screen supplied via a network.
  • the video signal processing unit 905 may perform additional processing such as noise removal on the video data according to the setting.
  • the video signal processing unit 905 may generate a GUI (Graphical User Interface) image such as a menu, a button, or a cursor, and superimpose the generated image on the output image.
• The display unit 906 is driven by a drive signal supplied from the video signal processing unit 905, and displays a video or an image on the video screen of a display device (for example, a liquid crystal display, a plasma display, or an OELD (Organic ElectroLuminescence Display; organic EL display)).
  • the audio signal processing unit 907 performs reproduction processing such as D / A conversion and amplification on the audio data input from the decoder 904, and outputs audio from the speaker 908.
  • the audio signal processing unit 907 may perform additional processing such as noise removal on the audio data.
  • the external interface 909 is an interface for connecting the television apparatus 900 to an external device or a network.
  • a video stream or an audio stream received via the external interface 909 may be decoded by the decoder 904. That is, the external interface 909 also has a role as a transmission unit in the television apparatus 900 that receives an encoded stream in which an image is encoded.
  • the control unit 910 includes a processor such as a CPU and memories such as a RAM and a ROM.
  • the memory stores a program executed by the CPU, program data, EPG data, data acquired via a network, and the like.
  • the program stored in the memory is read and executed by the CPU when the television apparatus 900 is activated.
  • the CPU executes the program to control the operation of the television device 900 according to an operation signal input from the user interface 911, for example.
  • the user interface 911 is connected to the control unit 910.
  • the user interface 911 includes, for example, buttons and switches for the user to operate the television device 900, a remote control signal receiving unit, and the like.
  • the user interface 911 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 910.
  • the bus 912 connects the tuner 902, the demultiplexer 903, the decoder 904, the video signal processing unit 905, the audio signal processing unit 907, the external interface 909, and the control unit 910 to each other.
  • the decoder 904 has the function of the image decoding apparatus according to the above-described embodiment. Thereby, the transmission cost of the filter coefficient can be reduced when the television device 900 decodes an image.
  • FIG. 19 shows an example of a schematic configuration of a mobile phone to which the above-described embodiment is applied.
• The mobile phone 920 includes an antenna 921, a communication unit 922, an audio codec 923, a speaker 924, a microphone 925, a camera unit 926, an image processing unit 927, a demultiplexing unit 928, a recording/reproducing unit 929, a display unit 930, a control unit 931, an operation unit 932, and a bus 933.
  • the antenna 921 is connected to the communication unit 922.
  • the speaker 924 and the microphone 925 are connected to the audio codec 923.
  • the operation unit 932 is connected to the control unit 931.
  • the bus 933 connects the communication unit 922, the audio codec 923, the camera unit 926, the image processing unit 927, the demultiplexing unit 928, the recording / reproducing unit 929, the display unit 930, and the control unit 931 to each other.
• The mobile phone 920 performs operations such as transmitting and receiving audio signals, transmitting and receiving e-mail and image data, capturing images, and recording data in various operation modes including a voice call mode, a data communication mode, a shooting mode, and a videophone mode.
• In the voice call mode, the analog audio signal generated by the microphone 925 is supplied to the audio codec 923.
• The audio codec 923 converts the analog audio signal into audio data, A/D-converts the converted audio data, and compresses it. Then, the audio codec 923 outputs the compressed audio data to the communication unit 922.
  • the communication unit 922 encodes and modulates the audio data and generates a transmission signal. Then, the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921. In addition, the communication unit 922 amplifies a radio signal received via the antenna 921 and performs frequency conversion to acquire a received signal.
  • the communication unit 922 demodulates and decodes the received signal to generate audio data, and outputs the generated audio data to the audio codec 923.
  • the audio codec 923 decompresses the audio data and performs D / A conversion to generate an analog audio signal. Then, the audio codec 923 supplies the generated audio signal to the speaker 924 to output audio.
  • the control unit 931 generates character data constituting the e-mail in response to an operation by the user via the operation unit 932.
  • the control unit 931 causes the display unit 930 to display characters.
  • the control unit 931 generates e-mail data in response to a transmission instruction from the user via the operation unit 932, and outputs the generated e-mail data to the communication unit 922.
  • the communication unit 922 encodes and modulates email data and generates a transmission signal. Then, the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921.
  • the communication unit 922 amplifies a radio signal received via the antenna 921 and performs frequency conversion to acquire a received signal.
  • the communication unit 922 demodulates and decodes the received signal to restore the email data, and outputs the restored email data to the control unit 931.
  • the control unit 931 displays the content of the electronic mail on the display unit 930 and stores the electronic mail data in the storage medium of the recording / reproducing unit 929.
  • the recording / reproducing unit 929 has an arbitrary readable / writable storage medium.
  • The storage medium may be a built-in storage medium such as a RAM or flash memory, or an externally mounted storage medium such as a hard disk, magnetic disk, magneto-optical disk, optical disk, USB (Universal Serial Bus) memory, or memory card.
  • the camera unit 926 images a subject to generate image data, and outputs the generated image data to the image processing unit 927.
  • The image processing unit 927 encodes the image data input from the camera unit 926 and stores the encoded stream in the storage medium of the recording / reproducing unit 929.
  • The demultiplexing unit 928 multiplexes the video stream encoded by the image processing unit 927 and the audio stream input from the audio codec 923, and outputs the multiplexed stream to the communication unit 922.
  • the communication unit 922 encodes and modulates the stream and generates a transmission signal. Then, the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921.
  • the communication unit 922 amplifies a radio signal received via the antenna 921 and performs frequency conversion to acquire a received signal.
  • These transmission and reception signals may include an encoded bit stream.
  • the communication unit 922 demodulates and decodes the received signal to restore the stream, and outputs the restored stream to the demultiplexing unit 928.
  • the demultiplexing unit 928 separates the video stream and the audio stream from the input stream, and outputs the video stream to the image processing unit 927 and the audio stream to the audio codec 923.
  • the image processing unit 927 decodes the video stream and generates video data.
  • the video data is supplied to the display unit 930, and a series of images is displayed on the display unit 930.
  • the audio codec 923 decompresses the audio stream and performs D / A conversion to generate an analog audio signal. Then, the audio codec 923 supplies the generated audio signal to the speaker 924 to output audio.
  • the image processing unit 927 has the functions of the image encoding device and the image decoding device according to the above-described embodiment. Thereby, the transmission cost of the filter coefficient can be reduced when the mobile phone 920 encodes and decodes an image.
  • FIG. 20 shows an example of a schematic configuration of a recording / reproducing apparatus to which the above-described embodiment is applied.
  • the recording / reproducing device 940 encodes audio data and video data of a received broadcast program and records the encoded data on a recording medium.
  • the recording / reproducing device 940 may encode audio data and video data acquired from another device and record them on a recording medium, for example.
  • the recording / reproducing device 940 reproduces data recorded on the recording medium on a monitor and a speaker, for example, in accordance with a user instruction. At this time, the recording / reproducing device 940 decodes the audio data and the video data.
  • The recording / reproducing apparatus 940 includes a tuner 941, an external interface 942, an encoder 943, an HDD (Hard Disk Drive) 944, a disk drive 945, a selector 946, a decoder 947, an OSD (On-Screen Display) 948, a control unit 949, and a user interface 950.
  • Tuner 941 extracts a signal of a desired channel from a broadcast signal received via an antenna (not shown), and demodulates the extracted signal. Then, the tuner 941 outputs the encoded bit stream obtained by the demodulation to the selector 946. That is, the tuner 941 has a role as a transmission unit in the recording / reproducing apparatus 940.
  • the external interface 942 is an interface for connecting the recording / reproducing apparatus 940 to an external device or a network.
  • the external interface 942 may be, for example, an IEEE1394 interface, a network interface, a USB interface, or a flash memory interface.
  • video data and audio data received via the external interface 942 are input to the encoder 943. That is, the external interface 942 serves as a transmission unit in the recording / reproducing device 940.
  • the encoder 943 encodes video data and audio data when the video data and audio data input from the external interface 942 are not encoded. Then, the encoder 943 outputs the encoded bit stream to the selector 946.
  • the HDD 944 records an encoded bit stream in which content data such as video and audio is compressed, various programs, and other data on an internal hard disk. Further, the HDD 944 reads out these data from the hard disk when reproducing video and audio.
  • the disk drive 945 performs recording and reading of data to and from the mounted recording medium.
  • The recording medium mounted on the disk drive 945 may be, for example, a DVD disc (DVD-Video, DVD-RAM, DVD-R, DVD-RW, DVD+R, DVD+RW, etc.) or a Blu-ray (registered trademark) disc.
  • the selector 946 selects an encoded bit stream input from the tuner 941 or the encoder 943 when recording video and audio, and outputs the selected encoded bit stream to the HDD 944 or the disk drive 945. In addition, the selector 946 outputs the encoded bit stream input from the HDD 944 or the disk drive 945 to the decoder 947 during video and audio reproduction.
  • The decoder 947 decodes the encoded bit stream and generates video data and audio data. Then, the decoder 947 outputs the generated video data to the OSD 948 and outputs the generated audio data to an external speaker.
  • OSD 948 reproduces the video data input from the decoder 947 and displays the video. Further, the OSD 948 may superimpose a GUI image such as a menu, a button, or a cursor on the video to be displayed.
  • the control unit 949 includes a processor such as a CPU and memories such as a RAM and a ROM.
  • the memory stores a program executed by the CPU, program data, and the like.
  • the program stored in the memory is read and executed by the CPU when the recording / reproducing apparatus 940 is activated, for example.
  • the CPU controls the operation of the recording / reproducing apparatus 940 in accordance with an operation signal input from the user interface 950, for example, by executing the program.
  • the user interface 950 is connected to the control unit 949.
  • the user interface 950 includes, for example, buttons and switches for the user to operate the recording / reproducing device 940, a remote control signal receiving unit, and the like.
  • the user interface 950 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 949.
  • the encoder 943 has the function of the image encoding apparatus according to the above-described embodiment.
  • the decoder 947 has the function of the image decoding apparatus according to the above-described embodiment. Thereby, the transmission cost of the filter coefficient can be reduced when the image is encoded and decoded by the recording / reproducing apparatus 940.
  • FIG. 21 illustrates an example of a schematic configuration of an imaging apparatus to which the above-described embodiment is applied.
  • the imaging device 960 images a subject to generate an image, encodes the image data, and records it on a recording medium.
  • The imaging device 960 includes an optical block 961, an imaging unit 962, a signal processing unit 963, an image processing unit 964, a display unit 965, an external interface 966, a memory 967, a media drive 968, an OSD 969, a control unit 970, a user interface 971, and a bus 972.
  • the optical block 961 is connected to the imaging unit 962.
  • the imaging unit 962 is connected to the signal processing unit 963.
  • the display unit 965 is connected to the image processing unit 964.
  • the user interface 971 is connected to the control unit 970.
  • the bus 972 connects the image processing unit 964, the external interface 966, the memory 967, the media drive 968, the OSD 969, and the control unit 970 to each other.
  • the optical block 961 includes a focus lens and a diaphragm mechanism.
  • the optical block 961 forms an optical image of the subject on the imaging surface of the imaging unit 962.
  • the imaging unit 962 includes an image sensor such as a CCD (Charge-Coupled Device) or a CMOS (Complementary Metal-Oxide Semiconductor), and converts an optical image formed on the imaging surface into an image signal as an electrical signal by photoelectric conversion. Then, the imaging unit 962 outputs the image signal to the signal processing unit 963.
  • the signal processing unit 963 performs various camera signal processing such as knee correction, gamma correction, and color correction on the image signal input from the imaging unit 962.
  • the signal processing unit 963 outputs the image data after the camera signal processing to the image processing unit 964.
  • the image processing unit 964 encodes the image data input from the signal processing unit 963 and generates encoded data. Then, the image processing unit 964 outputs the generated encoded data to the external interface 966 or the media drive 968. The image processing unit 964 also decodes encoded data input from the external interface 966 or the media drive 968 to generate image data. Then, the image processing unit 964 outputs the generated image data to the display unit 965. In addition, the image processing unit 964 may display the image by outputting the image data input from the signal processing unit 963 to the display unit 965. Further, the image processing unit 964 may superimpose display data acquired from the OSD 969 on an image output to the display unit 965.
  • the OSD 969 generates a GUI image such as a menu, a button, or a cursor, and outputs the generated image to the image processing unit 964.
  • the external interface 966 is configured as a USB input / output terminal, for example.
  • the external interface 966 connects the imaging device 960 and a printer, for example, when printing an image.
  • a drive is connected to the external interface 966 as necessary.
  • a removable medium such as a magnetic disk or an optical disk is attached to the drive, and a program read from the removable medium can be installed in the imaging device 960.
  • the external interface 966 may be configured as a network interface connected to a network such as a LAN or the Internet. That is, the external interface 966 has a role as a transmission unit in the imaging device 960.
  • the recording medium mounted on the media drive 968 may be any readable / writable removable medium such as a magnetic disk, a magneto-optical disk, an optical disk, or a semiconductor memory.
  • A recording medium may instead be fixedly mounted on the media drive 968, configuring a non-portable storage unit such as an internal hard disk drive or an SSD (Solid State Drive).
  • the control unit 970 includes a processor such as a CPU and memories such as a RAM and a ROM.
  • the memory stores a program executed by the CPU, program data, and the like.
  • the program stored in the memory is read and executed by the CPU when the imaging device 960 is activated, for example.
  • the CPU controls the operation of the imaging device 960 according to an operation signal input from the user interface 971 by executing the program.
  • the user interface 971 is connected to the control unit 970.
  • the user interface 971 includes, for example, buttons and switches for the user to operate the imaging device 960.
  • the user interface 971 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 970.
  • the image processing unit 964 has the functions of the image encoding device and the image decoding device according to the above-described embodiment. Thereby, the transmission cost of the filter coefficient can be reduced when encoding and decoding an image in the imaging device 960.
  • the method for transmitting such information is not limited to such an example.
  • these pieces of information may be transmitted or recorded as separate data associated with the encoded bitstream without being multiplexed into the encoded bitstream.
  • The term “associate” means that an image included in the bitstream (which may be a part of an image, such as a slice or a block) and information corresponding to that image can be linked at the time of decoding. That is, the information may be transmitted on a transmission path different from that of the image (or bit stream).
  • Information may be recorded on a recording medium (or another recording area of the same recording medium) different from the image (or bit stream). Furthermore, the information and the image (or bit stream) may be associated with each other in an arbitrary unit such as a plurality of frames, one frame, or a part of the frame.
  • The present technology can also be configured as follows.
  • An image processing apparatus including: a decoding unit that decodes an encoded stream to generate an image; a filter unit that applies a filter, for each maximum coding unit, to the image generated by the decoding unit, using a filter coefficient corresponding to an index of the filter coefficient; and a management unit that manages a history of the filter unit referring to the filter coefficient using the index.
  • The image processing apparatus according to (1), further including a reading unit that reads out the filter coefficient, using the index of the filter coefficient, from a buffer that stores the filter coefficient, wherein the management unit stores the filter coefficient read by the reading unit in a frequently referenced area of the buffer.
  • The apparatus further includes a receiving unit that receives the filter coefficient, and the management unit performs FIFO processing on the buffer using the filter coefficient received by the receiving unit when there is no free space in the buffer.
  • The image processing device according to any one of (1) to (7), further including a buffer that stores the filter coefficient.
  • An image processing method in which an image processing apparatus decodes an encoded stream to generate an image, applies a filter to the generated image for each maximum coding unit using a filter coefficient corresponding to an index of the filter coefficient, and manages a history of referring to the filter coefficient using the index.
  • An image processing apparatus including: a filter unit that applies a filter, for each maximum coding unit, to a locally decoded image obtained when encoding an image, using a filter coefficient corresponding to an index of the filter coefficient; an encoding unit that encodes the image using the image filtered by the filter unit and generates an encoded stream; and a management unit that manages a history of the filter unit referring to the filter coefficient using the index.
  • the apparatus further includes a reading unit that reads out the filter coefficient from a buffer that stores the filter coefficient by using the index of the filter coefficient.
  • (15) The image processing device according to any one of (12) to (14), wherein the index value of the filter coefficient is set to decrease as the use frequency of the filter coefficient increases.
  • The image processing device according to any one of (12) to (18), wherein the filter unit performs filtering using a coefficient that minimizes the error from the original image.
  • An image processing method in which an image processing apparatus applies a filter, for each maximum coding unit, to an image subjected to local decoding processing when encoding the image, using a filter coefficient corresponding to an index of the filter coefficient; encodes the image using the filtered image to generate an encoded stream; and manages a history of referring to the filter coefficient using the index.
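As a rough, non-authoritative illustration of the buffer management described in the configurations above (the class name, method names, and buffer size below are invented for this sketch and are not taken from the patent), the history management behaves like a most-recently-used cache: reading a filter coefficient by its index moves it to the high-priority front of the buffer, so frequently referenced coefficients end up with small indices, and storing a new coefficient evicts the oldest entry FIFO-style when the buffer is full.

```python
class AlfCoefficientBuffer:
    """Hypothetical sketch of the filter-coefficient buffer with
    reference-history management. Index 0 is the highest-priority
    (most frequently referenced) slot."""

    def __init__(self, capacity=16):
        self.capacity = capacity
        self._slots = []  # filter-coefficient vectors, most-recently-used first

    def read(self, index):
        """Read a coefficient by index and record the reference in the
        history by moving the coefficient to the high-priority front."""
        coeff = self._slots.pop(index)
        self._slots.insert(0, coeff)  # move-to-front: its future index shrinks
        return coeff

    def store(self, coeff):
        """Store a newly received coefficient; FIFO-evict the oldest
        entry when the buffer has no free space."""
        if len(self._slots) >= self.capacity:
            self._slots.pop()  # no free space: drop the lowest-priority entry
        self._slots.insert(0, coeff)

buf = AlfCoefficientBuffer(capacity=3)
buf.store([1, 2, 3])             # coefficients transmitted with one LCU
buf.store([4, 5, 6])             # newer coefficients take index 0
assert buf.read(1) == [1, 2, 3]  # referencing index 1 ...
assert buf.read(0) == [1, 2, 3]  # ... moved that coefficient to index 0
```

Under this toy model, a coefficient reused across many maximum coding units keeps a small index, which is cheaper to signal, matching the stated goal of reducing the transmission cost of filter coefficients.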

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to an image processing device and method with which the transmission cost of filter coefficients can be reduced. When a filter coefficient stored in an adaptive loop filter (ALF) coefficient buffer is referenced in the latest largest coding unit (LCU), the referenced filter coefficient is moved to a high-priority area of the buffer. When a filter coefficient transmitted with the latest LCU is used and the buffer has no free space, FIFO processing is performed on the ALF coefficient buffer using that filter coefficient. On the other hand, when free space is available, the filter coefficient is stored in the highest-priority free area of that free space. The present invention can be applied to image processing devices, for example.
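The storage policy summarized in the abstract can be sketched as follows — a hypothetical toy model in which the class name, slot count, and return values are assumptions for illustration, not details from the patent: a coefficient transmitted with the latest LCU goes into the highest-priority free slot when free space exists, and otherwise triggers FIFO replacement of the occupied slots.

```python
class AlfBufferStore:
    """Fixed-size ALF coefficient buffer; slot 0 is highest priority."""

    def __init__(self, size=4):
        self.slots = [None] * size   # None marks a free slot
        self._fifo_next = 0          # next slot to overwrite in FIFO order

    def store(self, coeff):
        """Store a coefficient transmitted with the latest LCU and
        return the slot index it was placed in."""
        for i, slot in enumerate(self.slots):
            if slot is None:
                self.slots[i] = coeff   # highest-priority free area first
                return i
        # no free space: FIFO processing over the occupied slots
        i = self._fifo_next
        self.slots[i] = coeff
        self._fifo_next = (i + 1) % len(self.slots)
        return i

buf = AlfBufferStore(size=2)
assert buf.store("c0") == 0   # highest-priority free slot used first
assert buf.store("c1") == 1   # next free slot
assert buf.store("c2") == 0   # buffer full: FIFO overwrites the oldest slot
```

The two branches of `store` correspond to the two cases in the abstract: free space available versus FIFO processing when the buffer is full.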
PCT/JP2012/083968 2012-01-12 2012-12-27 Dispositif et procédé de traitement d'image WO2013105457A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012-004524 2012-01-12
JP2012004524 2012-01-12

Publications (1)

Publication Number Publication Date
WO2013105457A1 true WO2013105457A1 (fr) 2013-07-18

Family

ID=48781413

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/083968 WO2013105457A1 (fr) 2012-01-12 2012-12-27 Dispositif et procédé de traitement d'image

Country Status (1)

Country Link
WO (1) WO2013105457A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017195532A1 (fr) * 2016-05-13 2017-11-16 シャープ株式会社 Dispositif de décodage d'image et dispositif de codage d'image
JP2018006829A (ja) * 2016-06-27 2018-01-11 日本電信電話株式会社 映像フィルタリング方法、映像フィルタリング装置及び映像フィルタリングプログラム

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009060317A (ja) * 2007-08-31 2009-03-19 Ricoh Co Ltd 画像データ符号化装置、画像データ符号化方法、画像形成装置、画像形成方法、画像データ復号化装置、及び画像データ復号化方法
JP2010135864A (ja) * 2007-03-29 2010-06-17 Toshiba Corp 画像符号化方法及び装置並びに画像復号化方法及び装置
WO2011111341A1 (fr) * 2010-03-09 2011-09-15 パナソニック株式会社 Dispositif de décodage d'image dynamique, dispositif de codage d'image dynamique, circuit de décodage d'image dynamique, et procédé de décodage d'image dynamique


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A. FULDSETH ET AL.: "Improved ALF with low latency and reduced complexity", JCTVC-G499, JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT- VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/ WG11, 21 November 2011 (2011-11-21), pages 1 - 7, XP030050626 *


Similar Documents

Publication Publication Date Title
JP6465227B2 (ja) 画像処理装置および方法、記録媒体、並びに、プログラム
JP6521013B2 (ja) 画像処理装置および方法、プログラム、並びに記録媒体
JP6624462B2 (ja) 画像処理装置および方法、プログラム、並びに記録媒体
JPWO2014002896A1 (ja) 符号化装置および符号化方法、復号装置および復号方法
WO2014050676A1 (fr) Dispositif et procédé de traitement d'image
WO2014050731A1 (fr) Dispositif et procédé de traitement d'image
WO2013108688A1 (fr) Dispositif de traitement d'image et procédé
WO2013047326A1 (fr) Dispositif et procédé de traitement d'image
WO2013047325A1 (fr) Dispositif et procédé de traitement d'image
WO2013051453A1 (fr) Dispositif et procédé de traitement d'image
WO2014045954A1 (fr) Dispositif et procédé de traitement d'image
WO2013105457A1 (fr) Dispositif et procédé de traitement d'image
WO2013105458A1 (fr) Dispositif et procédé de traitement d'image
WO2014103765A1 (fr) Dispositif et procédé de décodage, et dispositif et procédé d'encodage
WO2014156707A1 (fr) Dispositif et procédé de codage d'image et dispositif et procédé de décodage d'image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12865188

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12865188

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP