WO2012077607A1 - Image processing apparatus and image processing method - Google Patents
Image processing apparatus and image processing method
- Publication number: WO2012077607A1 (PCT application PCT/JP2011/077953)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- unit
- filtering
- image
- determination
- deblocking filter
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/117—Filters, e.g. for pre-processing or post-processing
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
- H04N19/182—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a pixel
- H04N19/436—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
- H04N19/86—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
- H04N19/36—Scalability techniques involving formatting the layers as a function of picture distortion after decoding, e.g. signal-to-noise [SNR] scalability
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
Definitions
- the present disclosure relates to an image processing apparatus and an image processing method.
- In H.264/AVC, one of the standard image coding specifications, a deblocking filter is applied to block boundaries, for example for each 4 × 4 pixel block, in order to suppress the degradation in image quality caused by the block distortion that occurs during image coding.
- The amount of processing required for this deblocking filter is large; it is said to account for, for example, 50% of the total amount of processing when decoding an image.
- In the standardization work on HEVC (High Efficiency Video Coding), a next-generation image coding method, it has been proposed in JCTVC-A119 (see Non-Patent Document 1 below) to apply the deblocking filter to each block of 8 × 8 pixels or more.
- In JCTVC-A119, the block size of the minimum unit to which the deblocking filter is applied is enlarged, which makes it possible to perform the filtering processes for a plurality of block boundaries in the same direction within one macroblock in parallel.
- JCTVC-A119: "Video coding technology proposal by Tandberg, Nokia, and Ericsson"
- JCT-VC Joint Collaborative Team on Video Coding
- The technology according to the present disclosure aims to provide an image processing apparatus and an image processing method that enable further parallelization of processing when a deblocking filter is applied.
- According to one embodiment, there is provided an image processing apparatus including: a decoding unit that decodes an image from an encoded stream; a determination unit that performs a determination process of determining whether to apply a deblocking filter to adjacent blocks neighboring across a block boundary in the image decoded by the decoding unit; a filtering unit that applies the deblocking filter to adjacent blocks determined by the determination unit to need the filter; and a control unit that causes the determination unit to perform the determination processes for a vertical block boundary and a horizontal block boundary using pixels of the adjacent blocks of the reconstructed image as reference pixels.
- The image processing apparatus can typically be realized as an image decoding apparatus that decodes an image.
- There is also provided an image processing method including: decoding an image from an encoded stream; performing a determination process of determining whether to apply a deblocking filter to adjacent blocks neighboring across a block boundary in the decoded image; applying the deblocking filter to adjacent blocks determined in the determination process to need the filter; and controlling the determination processes for a vertical block boundary and a horizontal block boundary so that they are performed using pixels of the adjacent blocks of the reconstructed image as reference pixels.
- According to another embodiment, there is provided an image processing apparatus including: a determination unit that performs a determination process of determining whether to apply a deblocking filter to adjacent blocks neighboring across a block boundary in an image to be locally decoded when an image to be encoded is encoded; a filtering unit that applies the deblocking filter to adjacent blocks determined by the determination unit to need the filter; a control unit that causes the determination unit to perform the determination processes for a vertical block boundary and a horizontal block boundary using pixels of the adjacent blocks of the reconstructed image as reference pixels; and an encoding unit that encodes the image to be encoded using the image filtered by the filtering unit.
- The image processing apparatus can typically be realized as an image encoding apparatus that encodes an image.
- There is also provided an image processing method including: performing a determination process of determining whether to apply a deblocking filter to adjacent blocks neighboring across a block boundary in an image to be locally decoded when an image to be encoded is encoded; applying the deblocking filter to adjacent blocks determined in the determination process to need the filter; controlling the determination processes for a vertical block boundary and a horizontal block boundary so that they are performed using pixels of the adjacent blocks of the reconstructed image as reference pixels; and encoding the image to be encoded using the image filtered by the deblocking filter.
- FIG. 8 is a first explanatory diagram for describing reference pixels in the filtering necessity determination process according to the first embodiment.
- A second explanatory diagram for describing reference pixels in the filtering necessity determination process according to the first embodiment.
- An explanatory diagram for describing a first example of the order of processing.
- An explanatory diagram for describing a second example of the order of processing.
- A block diagram showing an example of a detailed configuration of the deblocking filter according to the first embodiment.
- FIG. 14 is an explanatory diagram for describing third and fourth examples of a determination method that can be employed in the second embodiment.
- FIG. 18 is an explanatory diagram for describing fifth and sixth examples of a determination method that can be employed in the second embodiment.
- An explanatory diagram for describing the order of processing for each LCU.
- A flowchart showing an example of the flow of the processing for each LCU.
- An explanatory diagram for describing an overview of the third embodiment.
- FIG. 18 is an explanatory diagram for describing a first example of the order of processing implemented in the third embodiment.
- An explanatory diagram for describing a second example of the order of processing, for comparison.
- FIG. 18 is an explanatory diagram for describing a second example of the order of processing implemented in the third embodiment.
- A flowchart showing an example of the flow of processing by the deblocking filter according to the third embodiment.
- A flowchart showing an example of the flow of the pixel value calculation process shown in FIG.
- An explanatory diagram for describing a multiview codec.
- An explanatory diagram for describing application of the image encoding process according to an embodiment to a multiview codec.
- An explanatory diagram for describing application of the image decoding process according to an embodiment to a multiview codec.
- An explanatory diagram for describing a scalable codec.
- An outline of example apparatuses to which the technology disclosed in this specification can be applied will be described using FIGS. 1 and 2.
- the technology disclosed herein is applicable to, for example, an image coding apparatus and an image decoding apparatus.
- FIG. 1 is a block diagram showing an example of the configuration of an image coding apparatus 10 according to an embodiment.
- The image encoding apparatus 10 includes an A/D (Analogue to Digital) conversion unit 11, a rearrangement buffer 12, a subtraction unit 13, an orthogonal transform unit 14, a quantization unit 15, a lossless encoding unit 16, an accumulation buffer 17, a rate control unit 18, an inverse quantization unit 21, an inverse orthogonal transform unit 22, an addition unit 23, a deblocking filter 24a, a frame memory 25, a selector 26, an intra prediction unit 30, a motion search unit 40, and a mode selection unit 50.
- The A/D conversion unit 11 converts an image signal input in analog format into image data in digital format, and outputs the series of digital image data to the rearrangement buffer 12.
- the rearrangement buffer 12 rearranges the images included in the series of image data input from the A / D converter 11.
- The rearrangement buffer 12 rearranges the images in accordance with the GOP (Group of Pictures) structure of the encoding process, and then outputs the rearranged image data to the subtraction unit 13, the intra prediction unit 30, and the motion search unit 40.
- the subtraction unit 13 is supplied with the image data input from the reordering buffer 12 and predicted image data selected by the mode selection unit 50 described later.
- The subtraction unit 13 calculates prediction error data, which is the difference between the image data input from the rearrangement buffer 12 and the predicted image data input from the mode selection unit 50, and outputs the calculated prediction error data to the orthogonal transform unit 14.
- the orthogonal transformation unit 14 performs orthogonal transformation on the prediction error data input from the subtraction unit 13.
- the orthogonal transformation performed by the orthogonal transformation unit 14 may be, for example, Discrete Cosine Transform (DCT) or Karhunen-Loeve Transform.
- the orthogonal transform unit 14 outputs transform coefficient data acquired by the orthogonal transform process to the quantization unit 15.
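For illustration only, the orthogonal transform can be sketched as a naive floating-point 2-D DCT-II. This is not the integer transform mandated by any codec specification; it is a mathematical sketch of the operation the orthogonal transform unit 14 performs, and the O(N^4) implementation is deliberately simple rather than fast.

```python
import math

def dct_2d(block):
    """Naive 2-D DCT-II of an N x N block (illustrative sketch only)."""
    n = len(block)

    def alpha(k):
        # Orthonormal scaling factor for coefficient index k.
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)

    coeffs = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            coeffs[u][v] = alpha(u) * alpha(v) * s
    return coeffs

# A constant block concentrates all of its energy in the DC coefficient.
flat = [[8] * 4 for _ in range(4)]
c = dct_2d(flat)
```

A prediction-error block that is nearly constant therefore compresses to a single significant coefficient, which is what makes the subsequent quantization and lossless encoding effective.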
- the quantization unit 15 is supplied with transform coefficient data input from the orthogonal transform unit 14 and a rate control signal from the rate control unit 18 described later.
- The quantization unit 15 quantizes the transform coefficient data and outputs the quantized transform coefficient data (hereinafter referred to as quantized data) to the lossless encoding unit 16 and the inverse quantization unit 21. Further, the quantization unit 15 changes the bit rate of the quantized data input to the lossless encoding unit 16 by switching the quantization parameter (quantization scale) based on the rate control signal from the rate control unit 18.
- The lossless encoding unit 16 is supplied with the quantized data input from the quantization unit 15 and with information on intra prediction or inter prediction that is generated by the intra prediction unit 30 or the motion search unit 40 described later and selected by the mode selection unit 50.
- The information on intra prediction may include, for example, prediction mode information indicating an optimal intra prediction mode for each block.
- information on inter prediction may include, for example, prediction mode information for prediction of a motion vector for each block, differential motion vector information, and reference image information.
- the lossless encoding unit 16 generates an encoded stream by performing lossless encoding processing on the quantized data.
- the lossless coding by the lossless coding unit 16 may be, for example, variable length coding or arithmetic coding.
- the lossless encoding unit 16 multiplexes the information related to intra prediction or the information related to inter prediction described above in a header (for example, a block header or a slice header) of the encoded stream. Then, the lossless encoding unit 16 outputs the generated encoded stream to the accumulation buffer 17.
- the accumulation buffer 17 temporarily accumulates the encoded stream input from the lossless encoding unit 16 using a storage medium such as a semiconductor memory. Then, the accumulation buffer 17 outputs the accumulated encoded stream at a rate according to the band of the transmission path (or the output line from the image encoding device 10).
- the rate control unit 18 monitors the free space of the accumulation buffer 17. Then, the rate control unit 18 generates a rate control signal according to the free space of the accumulation buffer 17, and outputs the generated rate control signal to the quantization unit 15. For example, when the free space of the accumulation buffer 17 is small, the rate control unit 18 generates a rate control signal for reducing the bit rate of the quantized data. Also, for example, when the free space of the accumulation buffer 17 is sufficiently large, the rate control unit 18 generates a rate control signal for increasing the bit rate of the quantized data.
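The rate-control behavior just described can be sketched as a simple policy function. The threshold ratios and the string return values here are hypothetical choices for illustration; the patent only states the qualitative behavior (less free space in the accumulation buffer 17 means a lower bit rate, and vice versa).

```python
def rate_control_signal(free_space, capacity, low_ratio=0.2, high_ratio=0.8):
    """Illustrative rate-control policy for the accumulation buffer.

    When free space is scarce, ask the quantization unit to lower the
    bit rate; when free space is ample, allow a higher bit rate.
    The threshold ratios are assumptions, not taken from the patent.
    """
    ratio = free_space / capacity
    if ratio < low_ratio:
        return "decrease_bitrate"   # buffer nearly full
    if ratio > high_ratio:
        return "increase_bitrate"   # buffer nearly empty
    return "hold"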
- the inverse quantization unit 21 performs inverse quantization processing on the quantized data input from the quantization unit 15. Then, the inverse quantization unit 21 outputs the transform coefficient data acquired by the inverse quantization process to the inverse orthogonal transformation unit 22.
- the inverse orthogonal transform unit 22 restores prediction error data by performing inverse orthogonal transform processing on the transform coefficient data input from the inverse quantization unit 21. Then, the inverse orthogonal transform unit 22 outputs the restored prediction error data to the addition unit 23.
- The addition unit 23 generates decoded image data by adding the restored prediction error data input from the inverse orthogonal transform unit 22 and the predicted image data input from the mode selection unit 50. The addition unit 23 then outputs the generated decoded image data to the deblocking filter 24a and the frame memory 25.
- The deblocking filter 24a performs filtering processing to reduce the block distortion that occurs during image encoding. For example, the deblocking filter 24a determines, for each block boundary in the decoded image data input from the addition unit 23, whether filtering is necessary, and applies the deblocking filter to the boundaries determined to need it. In addition to the decoded image data from the addition unit 23, the deblocking filter 24a is also supplied with information used to determine whether filtering is necessary (for example, mode information, transform coefficient information, and motion vector information). The deblocking filter 24a then outputs the filtered decoded image data, from which block distortion has been removed, to the frame memory 25. The processing performed by the deblocking filter 24a will be described in detail later.
- The frame memory 25 stores, using a storage medium, the decoded image data input from the addition unit 23 and the filtered decoded image data input from the deblocking filter 24a.
- the selector 26 reads the decoded image data before filtering used for intra prediction from the frame memory 25 and supplies the read decoded image data to the intra prediction unit 30 as reference image data. Further, the selector 26 reads the decoded image data after filtering used for inter prediction from the frame memory 25 and supplies the read decoded image data to the motion search unit 40 as reference image data.
- the intra prediction unit 30 performs intra prediction processing in each intra prediction mode based on the image data to be encoded input from the reordering buffer 12 and the decoded image data supplied via the selector 26. For example, the intra prediction unit 30 evaluates the prediction result in each intra prediction mode using a predetermined cost function. Then, the intra prediction unit 30 selects the intra prediction mode in which the cost function value is the smallest, that is, the intra prediction mode in which the compression rate is the highest, as the optimal intra prediction mode. Furthermore, the intra prediction unit 30 outputs, to the mode selection unit 50, prediction mode information indicating the optimal intra prediction mode, predicted image data, and information on intra prediction such as a cost function value.
- the motion search unit 40 performs inter prediction processing (inter-frame prediction processing) based on the image data to be encoded input from the reordering buffer 12 and the decoded image data supplied via the selector 26. For example, the motion search unit 40 evaluates the prediction result in each prediction mode using a predetermined cost function. Next, the motion search unit 40 selects the prediction mode in which the cost function value is the smallest, that is, the prediction mode in which the compression rate is the highest, as the optimal prediction mode. The motion search unit 40 also generates predicted image data according to the optimal prediction mode. Then, the motion search unit 40 outputs information on inter prediction such as prediction mode information indicating the selected optimal prediction mode, prediction image data, and cost function value to the mode selection unit 50.
- the mode selection unit 50 compares the cost function value for intra prediction input from the intra prediction unit 30 with the cost function value for inter prediction input from the motion search unit 40. Then, the mode selection unit 50 selects a prediction method having a smaller cost function value from intra prediction and inter prediction. When intra prediction is selected, the mode selection unit 50 outputs information on intra prediction to the lossless encoding unit 16 and outputs predicted image data to the subtraction unit 13 and the addition unit 23. In addition, when the inter prediction is selected, the mode selection unit 50 outputs the above-described information on inter prediction to the lossless encoding unit 16 and outputs prediction image data to the subtraction unit 13 and the addition unit 23.
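The cost comparison performed by the mode selection unit 50 reduces to picking the prediction method with the smaller cost-function value. The sketch below illustrates that choice; the tie-breaking rule (preferring intra on equal cost) is an assumption, as the patent does not specify one.

```python
def select_mode(intra_cost, inter_cost):
    """Sketch of the mode selection unit 50: compare the cost-function
    values and pick the prediction method with the smaller value.
    Cost values are assumed to be supplied by the intra prediction
    unit 30 and the motion search unit 40."""
    if intra_cost <= inter_cost:
        return "intra"   # route intra-prediction info to lossless encoding
    return "inter"       # route inter-prediction info to lossless encoding
```

Whichever mode is selected, the corresponding predicted image data is routed to the subtraction unit 13 and the addition unit 23, and the corresponding side information to the lossless encoding unit 16.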
- FIG. 2 is a block diagram showing an example of the configuration of the image decoding apparatus 60 according to an embodiment.
- The image decoding apparatus 60 includes an accumulation buffer 61, a lossless decoding unit 62, an inverse quantization unit 63, an inverse orthogonal transform unit 64, an addition unit 65, a deblocking filter 24b, a rearrangement buffer 67, a D/A (Digital to Analogue) conversion unit 68, a frame memory 69, selectors 70 and 71, an intra prediction unit 80, and a motion compensation unit 90.
- the accumulation buffer 61 temporarily accumulates the coded stream input through the transmission path using a storage medium.
- the lossless decoding unit 62 decodes the encoded stream input from the accumulation buffer 61 according to the encoding scheme used during encoding. Further, the lossless decoding unit 62 decodes the information multiplexed in the header area of the encoded stream.
- the information multiplexed in the header area of the encoded stream may include, for example, information on intra prediction in the block header and information on inter prediction.
- the lossless decoding unit 62 outputs information on intra prediction to the intra prediction unit 80. In addition, the lossless decoding unit 62 outputs information on inter prediction to the motion compensation unit 90.
- The inverse quantization unit 63 inversely quantizes the quantized data decoded by the lossless decoding unit 62.
- The inverse orthogonal transform unit 64 generates prediction error data by performing inverse orthogonal transform on the transform coefficient data input from the inverse quantization unit 63, according to the orthogonal transform scheme used at the time of encoding. The inverse orthogonal transform unit 64 then outputs the generated prediction error data to the addition unit 65.
- the addition unit 65 adds the prediction error data input from the inverse orthogonal transform unit 64 and the prediction image data input from the selector 71 to generate decoded image data. Then, the adding unit 65 outputs the generated decoded image data to the deblocking filter 24b and the frame memory 69.
- The deblocking filter 24b performs filtering processing to reduce the block distortion appearing in the decoded image.
- The deblocking filter 24b determines, for example, for each block boundary in the decoded image data input from the addition unit 65, whether filtering is necessary, and applies the deblocking filter to the boundaries determined to need it.
- Information used for determining whether filtering is necessary is also input to the deblocking filter 24b.
- The deblocking filter 24b outputs the filtered decoded image data, from which block distortion has been removed, to the rearrangement buffer 67 and the frame memory 69. The processing performed by the deblocking filter 24b will be described in detail later.
- The rearrangement buffer 67 rearranges the images input from the deblocking filter 24b to generate a series of time-sequential image data. The rearrangement buffer 67 then outputs the generated image data to the D/A conversion unit 68.
- the D / A converter 68 converts the digital image data input from the reordering buffer 67 into an analog image signal. Then, the D / A conversion unit 68 displays an image, for example, by outputting an analog image signal to a display (not shown) connected to the image decoding device 60.
- The frame memory 69 stores, using a storage medium, the decoded image data before filtering input from the addition unit 65 and the filtered decoded image data input from the deblocking filter 24b.
- The selector 70 switches the output destination of the image data from the frame memory 69 between the intra prediction unit 80 and the motion compensation unit 90 for each block in the image, in accordance with the mode information acquired by the lossless decoding unit 62.
- When the intra prediction mode is designated, the selector 70 outputs the decoded image data before filtering supplied from the frame memory 69 to the intra prediction unit 80 as reference image data.
- When the inter prediction mode is designated, the selector 70 outputs the filtered decoded image data supplied from the frame memory 69 to the motion compensation unit 90 as reference image data.
- The selector 71 switches the source of the predicted image data to be supplied to the addition unit 65 between the intra prediction unit 80 and the motion compensation unit 90 for each block in the image, in accordance with the mode information acquired by the lossless decoding unit 62. For example, when the intra prediction mode is designated, the selector 71 supplies the predicted image data output from the intra prediction unit 80 to the addition unit 65. When the inter prediction mode is designated, the selector 71 supplies the predicted image data output from the motion compensation unit 90 to the addition unit 65.
- the intra prediction unit 80 performs intra prediction of pixel values based on the information on the intra prediction input from the lossless decoding unit 62 and the reference image data from the frame memory 69, and generates predicted image data. Then, the intra prediction unit 80 outputs the generated predicted image data to the selector 71.
- the motion compensation unit 90 performs motion compensation processing based on the information on inter prediction input from the lossless decoding unit 62 and the reference image data from the frame memory 69 to generate predicted image data. Then, the motion compensation unit 90 outputs the generated predicted image data to the selector 71.
- The processing by the deblocking filter in an existing image coding scheme such as H.264/AVC or HEVC includes two types of processing: filtering necessity determination processing and filtering processing. Hereinafter, these two processes will be described taking HEVC as an example.
- the filtering necessity determination process is a process of determining whether or not the deblocking filter should be applied, for each block boundary in the input image.
- Block boundaries include vertical boundaries between adjacent blocks on the left and right, and horizontal boundaries between blocks adjacent on the top and bottom.
- A block size of 8 × 8 pixels is the smallest processing unit.
- There are four 8 × 8 pixel blocks within a 16 × 16 pixel macroblock, and one (left) vertical boundary and one (upper) horizontal boundary per block, i.e. a total of 4 + 4 = 8 boundaries are subject to determination.
- the term "macroblock” also includes a coding unit (CU: Coding Unit) in the context of HEVC.
- FIG. 3 is an explanatory view showing an example of pixels in two blocks (adjacent blocks) Ba and Bb adjacent to each other across a boundary.
- The pixels in the block Ba are denoted by the symbol pij, where i is the column index and j is the row index.
- The column index i is numbered 0, 1, 2, 3 in order from the column nearest the vertical boundary, from right to left.
- The row index j is numbered 0, 1, 2, ... from top to bottom.
- the left half of the block Ba is omitted in the figure.
- The pixels in the block Bb are denoted by the symbol qkj, where k is the column index and j is the row index.
- The column index k is numbered 0, 1, 2, 3 in order from the column nearest the vertical boundary, from left to right.
- the right half of the block Bb is also omitted in the figure.
- Condition A holds if any of the following is true: (A1) the block Ba or Bb is in intra prediction mode; (A2) the block Ba or Bb has non-zero orthogonal transform coefficients; or (A3) the condition on motion vectors below is satisfied.
- Condition A3: let the motion vector of the block Ba with Qpel (1/4-pixel) accuracy be (MVAx, MVAy) and the motion vector of the block Bb be (MVBx, MVBy); A3 holds if |MVAx - MVBx| or |MVAy - MVBy| is equal to or greater than 4.
- β in condition B is an edge determination threshold. The initial value of β is given according to the quantization parameter, and the value of β can also be designated by the user via a parameter in the slice header.
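As a rough sketch of how conditions A and B combine for one boundary line, the decision can be written as follows. The HM-style activity measure d used here, the motion-vector threshold of 4 quarter-pel units, and the tuple layout of the pixel arguments are assumptions for illustration; the exact formulation of condition B is not reproduced in this excerpt.

```python
def needs_filtering(intra, nonzero_coeffs, mv_diff_qpel, p, q, beta):
    """Sketch of the filtering necessity decision for one boundary line.

    p = (p2, p1, p0) and q = (q0, q1, q2) are the pixels on either side
    of the boundary; beta is the edge determination threshold described
    above. The activity measure follows an HM-style second-difference
    form and is an assumption here.
    """
    # Condition A: coding mode, residual coefficients, or motion difference.
    cond_a = intra or nonzero_coeffs or abs(mv_diff_qpel) >= 4
    # Condition B: local activity d must fall below the threshold beta,
    # so that true image edges are not smoothed away.
    p2, p1, p0 = p
    q0, q1, q2 = q
    d = abs(p2 - 2 * p1 + p0) + abs(q2 - 2 * q1 + q0)
    return cond_a and d < beta
```

The key property is that a genuine edge (large d) suppresses filtering even when condition A indicates potential blocking artifacts.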
- The filtering process described below is performed on boundaries for which it has been determined, according to the above determination conditions, that the deblocking filter should be applied.
- A strong filter is selected if all of the following conditions C1 to C3 are satisfied, and a weak filter is selected if any one of them is not satisfied: (C1) d < (β >> 2); (C2) (|p3j - p0j| + |q3j - q0j|) < (β >> 3); (C3) |p0j - q0j| < ((5tC + 1) >> 1), where tC is a threshold given according to the quantization parameter.
- j is the row index for vertical boundaries and the column index for horizontal boundaries.
- Δ = Clip(-tC, tC, (13(q0j - p0j) + 4(q1j - p1j) - 5(q2j - p2j) + 16) >> 5)
- p0j = Clip0-255(p0j + Δ)
- q0j = Clip0-255(q0j - Δ)
- p1j = Clip0-255(p1j + Δ/2)
- q1j = Clip0-255(q1j - Δ/2)
- p0j = Clip0-255((p2j + 2p1j + 2p0j + 2q0j + q1j + 4) >> 3)
- q0j = Clip0-255((p1j + 2p0j + 2q0j + 2q1j + q2j + 4) >> 3)
- p1j = Clip0-255((p2j + p1j + p0j + q0j + 2) >> 2)
- q1j = Clip0-255((p0j + q0j + q1j + q2j + 2) >> 2)
- p2j = Clip0-255((2p3j + 3p2j + p1j + p0j + q0j + 4) >> 3)
- q2j = Clip0-255((p0j + q0j + q1j + 3q2j + 2q3j + 4) >> 3)
- Clip(a, b, c) clips the value c into the range a ≤ c ≤ b.
- Clip0-255(c) clips the value c into the range 0 ≤ c ≤ 255.
- Δ = Clip(−tC, tC, (((q0j − p0j) << 2) + p1j − q1j + 4) >> 3)
- p0j = Clip0-255(p0j + Δ)
- q0j = Clip0-255(q0j − Δ)
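The weak and strong luminance filtering formulas listed above can be sketched as follows for one row j across a vertical boundary; p = [p0, p1, p2, p3] are the pixels to the left of the boundary (p0 nearest it) and q the pixels to the right. This is an illustrative transcription under the formulas as reconstructed here, not a conformant codec implementation.

```python
def clip(a, b, c):
    # Clip(a, b, c): clip value c into the range a <= c <= b
    return max(a, min(b, c))

def clip255(c):
    # Clip0-255(c): clip value c into the range 0 <= c <= 255
    return clip(0, 255, c)

def weak_filter_luma(p, q, t_c):
    # delta = Clip(-tC, tC, (13(q0-p0) + 4(q1-p1) - 5(q2-p2) + 16) >> 5)
    d = clip(-t_c, t_c,
             (13 * (q[0] - p[0]) + 4 * (q[1] - p[1]) - 5 * (q[2] - p[2]) + 16) >> 5)
    # delta/2 is taken as floor division, matching a >>1 convention (assumption)
    return ([clip255(p[0] + d), clip255(p[1] + d // 2)],
            [clip255(q[0] - d), clip255(q[1] - d // 2)])

def strong_filter_luma(p, q):
    # Low-pass filtering of three pixels on each side of the boundary
    p0 = clip255((p[2] + 2 * p[1] + 2 * p[0] + 2 * q[0] + q[1] + 4) >> 3)
    q0 = clip255((p[1] + 2 * p[0] + 2 * q[0] + 2 * q[1] + q[2] + 4) >> 3)
    p1 = clip255((p[2] + p[1] + p[0] + q[0] + 2) >> 2)
    q1 = clip255((p[0] + q[0] + q[1] + q[2] + 2) >> 2)
    p2 = clip255((2 * p[3] + 3 * p[2] + p[1] + p[0] + q[0] + 4) >> 3)
    # q-side written symmetrically to the p-side formula (assumption)
    q2 = clip255((p[0] + q[0] + q[1] + 3 * q[2] + 2 * q[3] + 4) >> 3)
    return [p0, p1, p2], [q0, q1, q2]
```

Applying either function to a flat signal leaves it unchanged, which is a useful sanity check on the coefficients.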
- the dependency between the filtering process for the vertical boundary and the filtering necessity determination process for the horizontal boundary remains, and so does the dependency between the filtering process for the horizontal boundary and the filtering necessity determination process for the vertical boundary. Therefore, for example, when the vertical boundary is processed before the horizontal boundary, the filtering necessity determination process for the horizontal boundaries of a certain macroblock must wait for the end of the filtering process for the vertical boundaries.
- as an example, the figure shows that the filtering necessity determination process JH0,0 depends on the results of the filtering processes FV0,0 and FV0,1, and that the filtering necessity determination process JH0,1 depends on the result of the filtering process FV0,1. Similarly, the filtering necessity determination process for the vertical boundaries of a certain macroblock must wait for the end of the filtering process for the horizontal boundaries of the adjacent macroblock.
- as an example, the figure shows that the filtering necessity determination process JV1,0 of the macroblock MB1 depends on the results of the filtering processes FH0,1 and FH0,3 of the macroblock MB0, and that the filtering necessity determination process JV1,2 of the macroblock MB1 depends on the result of the filtering process FH0,3 of the macroblock MB0.
- FIG. 10 is an explanatory diagram for describing an example of the order of processing of the deblocking filter in the existing method.
- an image having a size of 32 ⁇ 32 pixels is input to the deblocking filter.
- the input image includes four macroblocks MB0 to MB3 each having a size of 16 ⁇ 16 pixels.
- the filtering processes FH0,0, FH0,1, FH0,2, and FH0,3 for the four horizontal boundaries of the macroblock MB0 are executed in parallel.
- the processing (fifth step to eighth step) for the macroblock MB1 is sequentially performed.
- the processing (the ninth to twelfth steps) for the macroblock MB2 is sequentially performed.
- the processing (the thirteenth step to the sixteenth step) for the macroblock MB3 is sequentially performed.
- the processing by the deblocking filter 24 includes two types of processing, the above-described filtering necessity determination processing and the filtering processing.
- the deblocking filter 24 determines whether to apply the deblocking filter to each vertical boundary and each horizontal boundary based on the pixel values of reference pixels that differ from those of the existing method. More specifically, for the determination on a vertical boundary, the deblocking filter 24 uses as reference pixels, among the pixels of the blocks adjacent to each other across the vertical boundary, the pixels belonging to rows to which the deblocking filter for the horizontal boundary is not applied.
- similarly, for the determination on a horizontal boundary, the deblocking filter 24 uses as reference pixels, among the pixels of the blocks adjacent to each other across the horizontal boundary, the pixels belonging to columns to which the deblocking filter for the vertical boundary is not applied. Also in this section, as an example, it is assumed that the processing by the deblocking filter 24 uses a block of 8 × 8 pixels as one processing unit.
- FIG. 11 is an explanatory diagram for describing reference pixels in the filtering necessity determination processing for the vertical boundary by the deblocking filter 24.
- Referring to FIG. 11, a macroblock MB0 having a size of 16 × 16 pixels is shown.
- the deblocking filter 24 determines whether filtering is necessary for the four vertical boundaries of the macroblock MB0 using, as reference pixels, the pixels belonging to at least one of the fourth and fifth rows (L4 and L5) of each block.
- These two rows are rows to which the deblock filter for the horizontal boundary is not applied (see FIG. 9).
- Such a configuration eliminates the dependency between the filtering process on the horizontal boundary and the filtering necessity determination process on the vertical boundary.
- FIG. 12 is an explanatory diagram for describing reference pixels in the filtering necessity determination processing for the horizontal boundary by the deblocking filter 24.
- Referring to FIG. 12, the macroblock MB0 is shown again.
- the deblocking filter 24 determines whether filtering is necessary for the four horizontal boundaries of the macroblock MB0 using, as reference pixels, the pixels belonging to at least one of the fourth and fifth columns (C4 and C5) of each block. These two columns are columns to which the deblocking filter for the vertical boundary is not applied (see FIG. 7 or 8).
- Such a configuration also eliminates the dependency between the filtering process on the vertical boundary and the filtering necessity determination process on the horizontal boundary.
- filtering necessity determination processing can be performed in parallel for vertical boundaries and horizontal boundaries of all macroblocks in the input image.
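The reason rows L4 and L5 (and columns C4 and C5) can serve as reference pixels can be sketched as a disjointness check: assuming the deblocking filter updates at most three pixel positions on each side of a boundary (consistent with the strong filter formulas above), rows 3 and 4 of an 8 × 8 block, counted from 0, are never written by horizontal-boundary filtering. The row sets below are this assumption, not values from the source.

```python
# Rows of an 8x8 block (0-based) written by horizontal-boundary filtering,
# assuming at most three pixels are modified on each side of a boundary.
ROWS_WRITTEN_BY_HORIZONTAL_FILTER = {0, 1, 2, 5, 6, 7}
# Rows L4 and L5 used as reference pixels for the vertical-boundary decision.
REFERENCE_ROWS_FOR_VERTICAL_DECISION = {3, 4}

def decision_is_independent(written, referenced):
    # True when the necessity decision never reads a pixel that filtering
    # of the other boundary direction may have modified.
    return written.isdisjoint(referenced)
```

The same check applies by symmetry to columns C4 and C5 for the horizontal-boundary decision.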
- FIG. 13 is an explanatory diagram for describing a first example of an order of processing that can be realized by the deblocking filter 24.
- an image having a size of 32 ⁇ 32 pixels is input to the deblocking filter.
- the input image includes four macroblocks MB0 to MB3 each having a size of 16 ⁇ 16 pixels.
- FIG. 13 shows an example of maximizing the degree of parallelism (the number of processes executed in parallel) by parallelizing processing between macroblocks.
- the deblocking filter 24 can also realize processing for each macroblock, as in the example of FIG. 14.
- in that case, the same number of processes as in FIGS. 10 and 13 are integrated into 12 process steps. That is, in the first step, the filtering necessity determination processes JV0,0 to JV0,3 and JH0,0 to JH0,3 for the four vertical boundaries and the four horizontal boundaries of the macroblock MB0 are executed in parallel. Next, in the second step, the filtering processes FV0,0 to FV0,3 for the four vertical boundaries of the macroblock MB0 are executed in parallel. Next, in the third step, the filtering necessity determination processes JV1,0 to JV1,3 and JH1,0 to JH1,3 for the four vertical boundaries and the four horizontal boundaries of the macroblock MB1 are executed in parallel.
- next, in the fourth step, the filtering processes FV1,0 to FV1,3 for the four vertical boundaries of the macroblock MB1 are executed in parallel.
- next, in the fifth step, the filtering processes FH0,0 to FH0,3 for the four horizontal boundaries of the macroblock MB0 are executed in parallel.
- next, in the sixth step, the filtering necessity determination processes JV2,0 to JV2,3 and JH2,0 to JH2,3 for the four vertical boundaries and the four horizontal boundaries of the macroblock MB2 are executed in parallel.
- next, in the seventh step, the filtering processes FV2,0 to FV2,3 for the four vertical boundaries of the macroblock MB2 are executed in parallel.
- next, in the eighth step, the filtering processes FH1,0 to FH1,3 for the four horizontal boundaries of the macroblock MB1 are executed in parallel.
- next, in the ninth step, the filtering necessity determination processes JV3,0 to JV3,3 and JH3,0 to JH3,3 for the four vertical boundaries and the four horizontal boundaries of the macroblock MB3 are executed in parallel.
- next, in the tenth step, the filtering processes FV3,0 to FV3,3 for the four vertical boundaries of the macroblock MB3 are executed in parallel.
- next, in the eleventh step, the filtering processes FH2,0 to FH2,3 for the four horizontal boundaries of the macroblock MB2 are executed in parallel.
- finally, in the twelfth step, the filtering processes FH3,0 to FH3,3 for the four horizontal boundaries of the macroblock MB3 are executed in parallel.
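The twelve-step order above follows a simple pipeline rule: each macroblock's horizontal-boundary filtering is deferred until after the next macroblock's vertical-boundary filtering. A sketch that generates this schedule is shown below; the step labels (J, F_V, F_H) are shorthand introduced here, not notation from the source.

```python
def pipelined_schedule(num_mbs):
    """Generate the per-macroblock pipelined step order described above."""
    steps = []
    for n in range(num_mbs):
        # determinations for vertical and horizontal boundaries run in parallel
        steps.append(f"J_V{n}||J_H{n}")
        # vertical-boundary filtering of macroblock n
        steps.append(f"F_V{n}")
        # deferred horizontal-boundary filtering of the previous macroblock
        if n >= 1:
            steps.append(f"F_H{n - 1}")
    # horizontal-boundary filtering of the final macroblock
    steps.append(f"F_H{num_mbs - 1}")
    return steps
```

For four macroblocks this yields exactly twelve steps, matching the sequence enumerated above.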
- although the degree of parallelism is lower than in the example of FIG. 13, the processing of the deblocking filter 24 for the entire input image can still be completed in fewer processing steps than in the existing method.
- FIG. 15 is a block diagram showing an example of a detailed configuration of the deblocking filter 24 according to the first embodiment for realizing the parallel processing described above.
- the deblocking filter 24 includes a determination block 110, a horizontal filtering block 130, a vertical filtering block 140, and a parallelization control unit 150.
- the determination block 110 includes a plurality of vertical boundary determination units 112-1 to 112-n and a plurality of horizontal boundary determination units 114-1 to 114-n. Each vertical boundary determination unit 112 and each horizontal boundary determination unit 114 are supplied with an input image to the deblocking filter 24 and determination information used to determine whether or not filtering is necessary.
- each vertical boundary determination unit 112 determines whether the deblocking filter should be applied to each vertical boundary using the pixel values of reference pixels belonging to rows to which the deblocking filter for the horizontal boundary is not applied, as illustrated in FIG. 11.
- the pixel values of the reference pixels here are input pixel values to the deblocking filter 24.
- each vertical boundary determination unit 112 then outputs information indicating the determination result for each vertical boundary (for example, binary information in which "1" indicates a determination result that the deblocking filter should be applied) to the horizontal filtering block 130.
- each horizontal boundary determination unit 114 determines whether the deblocking filter should be applied to each horizontal boundary using the pixel values of reference pixels belonging to columns to which the deblocking filter for the vertical boundary is not applied, as illustrated in FIG. 12.
- the pixel values of the reference pixels here are also input pixel values to the deblocking filter 24.
- the determination process by each horizontal boundary determination unit 114 is performed in parallel with the determination process by each vertical boundary determination unit 112. Then, each horizontal boundary determination unit 114 outputs information indicating the determination result for each horizontal boundary to the vertical filtering block 140.
- FIG. 16 is a block diagram showing an example of a more detailed configuration of each vertical boundary determination unit 112 and each horizontal boundary determination unit 114.
- each determination unit includes a tap construction unit 121, an operation unit 122, a threshold comparison unit 123, a distortion evaluation unit 124, and a filtering determination unit 125.
- the tap construction unit 121 acquires the pixel values of the reference pixels from the pixel values of the two blocks adjacent to each other across the boundary of interest in the input image, and constructs a tap (a set of pixel values) for determination of the determination condition B of the luminance component described above. For example, in the case where the size of each block is 8 × 8 pixels, when the boundary of interest is a vertical boundary, the tap construction unit 121 constructs a tap from pixel values belonging to one or both of the fourth and fifth rows of the left and right two blocks. When the boundary of interest is a horizontal boundary, the tap construction unit 121 constructs a tap from pixel values belonging to one or both of the fourth and fifth columns of the upper and lower two blocks.
- the calculation unit 122 substitutes the tap constructed by the tap construction unit 121 into the left side of the determination formula of the determination condition B, and calculates the edge value to be compared with the edge determination threshold β.
- the threshold comparison unit 123 compares the value calculated by the calculation unit 122 with the edge determination threshold β, and outputs the comparison result to the filtering determination unit 125.
- the distortion evaluation unit 124 evaluates the determination condition A of the luminance component described above using the mode information (MB mode), the transform coefficient information, and the motion vector information supplied as the determination information. Then, the distortion evaluation unit 124 outputs the evaluation result to the filtering determination unit 125. For the color difference component, only the determination on the determination condition A1 based on the mode information may be performed by the distortion evaluation unit 124.
- the filtering determination unit 125 determines whether the deblocking filter should be applied to the boundary of interest based on the comparison result for the determination condition B input from the threshold comparison unit 123 and the evaluation result for the determination condition A input from the distortion evaluation unit 124. Then, the filtering determination unit 125 outputs information indicating the determination result.
- the horizontal filtering block 130 includes a plurality of horizontal filtering units 132-1 to 132-n. Each horizontal filtering unit 132 is supplied with the input image and the determination result of each vertical boundary from the determination block 110.
- Each horizontal filtering unit 132 applies the deblocking filter for the vertical boundary to the pixels on the left and right of the corresponding vertical boundary when the determination result by the corresponding vertical boundary determination unit 112 indicates that the filter should be applied. Then, each horizontal filtering unit 132 outputs, to the vertical filtering block 140, the filtered pixel values for the pixels to which the filter is applied and the pixel values of the input image for the other pixels.
- the vertical filtering block 140 includes a plurality of vertical filtering units 142-1 to 142-n. Each vertical filtering unit 142 is supplied with the input image and the determination result for each horizontal boundary from the determination block 110.
- Each vertical filtering unit 142 applies the deblocking filter for the horizontal boundary to the pixels above and below the corresponding horizontal boundary when the determination result by the corresponding horizontal boundary determination unit 114 indicates that the filter should be applied. Then, each vertical filtering unit 142 outputs the filtered pixel values for the pixels to which the filter is applied, and outputs the pixel values supplied from the horizontal filtering block 130 for the other pixels.
- the output from each vertical filtering unit 142 may constitute the output image from the deblocking filter 24.
- the parallelization control unit 150 controls the degree of parallelism of the filtering necessity determination process in the determination block 110 and the degree of parallelism of the filtering process in the horizontal filtering block 130 and the vertical filtering block 140.
- the parallelization control unit 150 may control the degree of parallelism of processing of each block based on the size of the input image. More specifically, when the size of the input image is relatively large, the parallelization control unit 150 increases the degree of parallelism of processing of each block. Thereby, it is possible to adaptively prevent a delay or a decrease in data rate due to an increase in processing amount according to the size of the image. Also, for example, the parallelization control unit 150 may control the degree of parallelism of processing of each block based on parameters included in the sequence parameter set, the picture parameter set, or the slice header. Thus, the degree of parallelism can be set flexibly according to the requirements of each user who develops the device. For example, the degree of parallelism may be set according to the constraints of the implementation environment such as the number of processor cores or the number of software threads.
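A minimal sketch of the parallelism control just described is given below; the function name, the mapping from image size to degree of parallelism, and the `max_workers` bound are illustrative assumptions, not part of the described apparatus.

```python
def select_parallelism(width, height, header_override=None, max_workers=8):
    """Choose a degree of parallelism from image size or a header parameter.

    header_override stands in for a value carried in a sequence parameter
    set, picture parameter set, or slice header (hypothetical plumbing).
    max_workers models an implementation constraint such as the number of
    processor cores or software threads.
    """
    if header_override is not None:
        return min(header_override, max_workers)
    # larger images -> higher degree of parallelism, bounded by the platform
    num_mbs = (width // 16) * (height // 16)
    return max(1, min(max_workers, num_mbs // 4))
```

The key point is only that the degree of parallelism can grow with the image size and be capped or overridden per deployment.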
- the parallelization control unit 150 can control the order of the filtering necessity determination process in the determination block 110 and the order of the filtering process in the horizontal filtering block 130 and the vertical filtering block 140 for each block.
- the parallelization control unit 150 may control the order of the filtering process in accordance with the dependency of the filtering process between macroblocks. For example, when attempting to implement parallel processing for each slice in an image in the existing method, processing dependencies between adjacent macroblocks across slice boundaries may cause processing delays. However, in the present embodiment, the parallelization control unit 150 can perform the filtering process on the macroblock adjacent to the slice boundary prior to the other macroblocks.
- FIG. 17 shows eight macro blocks MB10 to MB13 and MB20 to MB23 adjacent to each other across a slice boundary.
- the macroblocks MB10 to MB13 belong to the slice SL1.
- the macroblocks MB20 to MB23 belong to the slice SL2.
- among these macroblocks, the filtering process on the horizontal boundary of the macroblock MB20 of the slice SL2 depends on the filtering process on the vertical boundary of the macroblock MB12 of the slice SL1.
- the filtering process on the horizontal boundary of the macroblock MB21 of the slice SL2 depends on the filtering process on the vertical boundary of the macroblock MB13 of the slice SL1.
- the parallelization control unit 150 therefore gives priority to executing the filtering process on the vertical boundaries of the macroblocks MB12 and MB13, ahead of the processes on the other boundaries, in the filtering process of the slice SL1. As a result, a large delay in the filtering process on the horizontal boundaries of the macroblocks MB20 and MB21 can be avoided in the filtering process of the slice SL2. Also, as in the example of FIG. 19, when the filtering process on the vertical boundaries of all the macroblocks included in the slice SL1 is initially performed in parallel, no delay occurs in the filtering process on the horizontal boundaries of the macroblocks MB20 and MB21 of the slice SL2.
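The priority given to slice-boundary-adjacent macroblocks can be sketched as a simple reordering; the helper below is hypothetical and not the parallelization control unit's actual interface.

```python
def order_vertical_filtering(mbs, boundary_adjacent):
    """Order vertical-boundary filtering so that macroblocks adjacent to a
    slice boundary are processed first, letting the next slice's dependent
    horizontal-boundary filtering start earlier."""
    prioritized = [m for m in mbs if m in boundary_adjacent]
    rest = [m for m in mbs if m not in boundary_adjacent]
    return prioritized + rest
```

For the example above, MB12 and MB13 of slice SL1 would be moved to the front of the schedule so that MB20 and MB21 of slice SL2 are not kept waiting.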
- FIG. 20 is a flowchart showing an example of the flow of processing in a first scenario by the deblocking filter 24.
- the first scenario is a scenario corresponding to the example of FIG. 13 with a large degree of parallelism.
- the vertical boundary determination units 112-1 to 112-n determine in parallel whether filtering is necessary for all vertical boundaries included in a plurality of macroblocks in the input image (step S102). Also, the horizontal boundary determination units 114-1 to 114-n determine in parallel whether filtering is necessary for all horizontal boundaries included in the plurality of macroblocks in the input image (step S104). These steps S102 and S104 are also performed in parallel.
- next, the horizontal filtering units 132-1 to 132-n apply the deblocking filter in parallel to all vertical boundaries determined in step S102 to require the deblocking filter (step S110).
- next, the vertical filtering units 142-1 to 142-n apply the deblocking filter in parallel to all horizontal boundaries determined in step S104 to require the deblocking filter (step S120).
- FIG. 21 is a flow chart showing an example of the flow of processing in a second scenario by the deblocking filter 24.
- the second scenario is a scenario corresponding to the example of FIG. 14 with a lower degree of parallelism.
- first, the vertical boundary determination units 112-1 to 112-n determine in parallel whether filtering is necessary for all vertical boundaries included in one macroblock of interest in the input image (step S202).
- also, the horizontal boundary determination units 114-1 to 114-n determine in parallel whether filtering is necessary for all horizontal boundaries included in the macroblock of interest (step S204). These steps S202 and S204 are also performed in parallel.
- next, the horizontal filtering units 132-1 to 132-n apply the deblocking filter in parallel to the vertical boundaries in the macroblock of interest determined in step S202 to require the deblocking filter (step S210).
- the process of the next step S220 is performed on the macroblock of interest in the previous loop. For the first macroblock of interest, the process of step S220 may be skipped.
- the vertical filtering units 142-1 to 142-n apply the deblocking filter in parallel to the horizontal boundaries determined in step S204 of the immediately preceding loop to require the deblocking filter (step S220).
- thereafter, while an unprocessed macroblock remains in the input image, the processing of steps S202 to S220 is repeated for a new macroblock of interest (step S230).
- finally, the vertical filtering units 142-1 to 142-n apply the deblocking filter in parallel to the horizontal boundaries of the macroblock of interest in the final loop that were determined to require the deblocking filter (step S240). Then, the process ends.
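The second-scenario loop (steps S202 to S240) can be sketched as follows; the `judge_*` and `filter_*` callables stand in for the determination and filtering units described above and are illustrative only.

```python
def process_image(macroblocks, judge_v, judge_h, filter_v, filter_h):
    """Per-macroblock pipeline: horizontal filtering of macroblock n-1 is
    deferred to step S220 of the loop that handles macroblock n."""
    pending_h = None  # horizontal decisions of the previous macroblock
    for mb in macroblocks:
        v_dec = judge_v(mb)          # S202 (runs in parallel with S204)
        h_dec = judge_h(mb)          # S204
        filter_v(mb, v_dec)          # S210: vertical-boundary filtering
        if pending_h is not None:
            filter_h(*pending_h)     # S220 (skipped in the first loop)
        pending_h = (mb, h_dec)
    if pending_h is not None:
        filter_h(*pending_h)         # S240: final macroblock's horizontal boundaries
```

Tracing the call order for two macroblocks shows both vertical filterings completing before the first horizontal filtering, as in the pipelined order of FIG. 14.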
- the processing by the deblocking filter 24 may be parallelized in various units such as a specific number (such as 2 or 4) of macroblocks or a group of blocks arranged horizontally or vertically.
- FIG. 22 is a flowchart showing an example of the flow of the filtering necessity determination process corresponding to steps S102 and S104 of FIG. 20 and steps S202 and S204 of FIG. 21.
- the distortion evaluation unit 124 evaluates the distortion of each boundary based on the mode information, transform coefficient information, and motion vector information (step S130).
- when it is evaluated that distortion is present at the boundary (that is, when the determination condition A is true), the process proceeds to step S134; otherwise, the process proceeds to step S140 (step S132).
- in step S134, the edge value is calculated by the calculation unit 122 based on the tap of reference pixels constructed by the tap construction unit 121 (step S134). Then, the threshold comparison unit 123 compares the calculated value with the edge determination threshold β (step S136). Here, when the edge value is smaller than the threshold β (when the determination condition B is true), the process proceeds to step S138. On the other hand, if the edge value is not smaller than the threshold β, the process proceeds to step S140.
- in step S138, the filtering determination unit 125 determines that the deblocking filter should be applied to the boundary being determined (step S138).
- in step S140, the filtering determination unit 125 determines that the deblocking filter should not be applied to the boundary being determined (step S140).
- Second embodiment: in the first embodiment, pixel values of pixels that are not updated by the filtering processes for other blocks are used in the filtering necessity determination process for each block.
- in the second embodiment, a memory for holding the input pixel values to the deblocking filter is provided, which removes this restriction on the filtering necessity determination process and makes it possible to use a wider variety of determination conditions.
- FIG. 23 is a block diagram showing an example of a detailed configuration of the deblocking filter 24 according to the second embodiment.
- the deblocking filter 24 includes a line memory 208, a determination block 210, a horizontal filtering block 130, a vertical filtering block 140, and a parallelization control unit 150.
- the line memory 208 stores pixel values of the input image input to the deblocking filter 24.
- the pixel values stored by line memory 208 are not updated by the filtering process in horizontal filtering block 130 and vertical filtering block 140.
- the pixel values stored in the line memory 208 are referred to in the filtering necessity determination process by each unit in the determination block 210 described below.
- a memory provided in the apparatus for the purpose different from the processing of the deblocking filter 24 may be reused (shared) as the line memory 208.
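The role of the line memory 208 can be sketched as a snapshot that filtering never touches; the class below is an illustrative model, not the actual hardware structure.

```python
import copy

class LineMemory:
    """Holds an untouched copy of the input pixel values so that every
    necessity decision reads pre-filter values, even after in-place
    filtering has modified the working image."""

    def __init__(self, image):
        # deep copy: never updated by the horizontal or vertical
        # filtering blocks
        self.snapshot = copy.deepcopy(image)

    def pixel(self, x, y):
        return self.snapshot[y][x]
```

Because the snapshot is immutable with respect to filtering, the determination units may reference any rows or columns, including ones the filters will later modify.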
- the determination block 210 includes a plurality of vertical boundary determination units 212-1 to 212-n and a plurality of horizontal boundary determination units 214-1 to 214-n.
- each vertical boundary determination unit 212 and each horizontal boundary determination unit 214 are supplied with the pixel values of the input image to the deblocking filter 24 stored by the line memory 208, and with the determination information used for determining the necessity of the filtering.
- Each vertical boundary determination unit 212 uses the input pixel value to the deblocking filter 24 to determine whether or not the deblocking filter should be applied to each vertical boundary. Then, each vertical boundary determination unit 212 outputs information indicating the determination result on each vertical boundary to the horizontal filtering block 130.
- Each horizontal boundary determination unit 214 also uses the input pixel value to the deblocking filter 24 to determine whether or not the deblocking filter should be applied to each horizontal boundary. The determination process by each horizontal boundary determination unit 214 is performed in parallel with the determination process by each vertical boundary determination unit 212. Then, each horizontal boundary determination unit 214 outputs information indicating the determination result on each horizontal boundary to the vertical filtering block 140.
- for example, each vertical boundary determination unit 212 may determine whether filtering is necessary for the vertical boundary of each block with reference to the pixels of the third and sixth rows of the block, as in the existing method. In that case, the pixel values referred to are the pixel values of the input image to the deblocking filter 24 stored by the line memory 208. Similarly, each horizontal boundary determination unit 214 may determine whether filtering is necessary for the horizontal boundary of each block with reference to the pixels of the third and sixth columns of the block. In this case, the configuration according to the present embodiment can be easily realized without changing the determination conditions of the filtering necessity determination process implemented in an existing apparatus.
- each vertical boundary determination unit 212 may refer to pixels in three or more rows of the block at the time of determination.
- each horizontal boundary determination unit 214 may refer to pixels in three or more columns of the block at the time of determination.
- each vertical boundary determination unit 212 and each horizontal boundary determination unit 214 may use a determination condition expression different from the existing method.
- six examples of determination methods that can be realized in the present embodiment will be described below using FIGS. 24 to 26.
- FIG. 24 is an explanatory view for respectively explaining first and second examples of the determination method.
- the pixels of all the first to eighth columns of each block are referred to.
- in the first example, the determination condition of the luminance component may be defined as follows:
- determination condition of luminance component (Luma) ... apply the deblocking filter if both condition A and condition B are true
- condition A: (A1) block Ba or Bb is in intra prediction mode; (A2) block Ba or Bb has non-zero orthogonal transform coefficients; or (A3) the motion vectors of the blocks Ba and Bb, with Qpel accuracy, differ by four or more in at least one component
- condition B: the average value iDave of the four determination parameters iD0 to iD3 falls below the edge determination threshold β
- the determination condition of the color difference component may be the same as the existing method described above.
- a weighted average may be calculated when calculating the average value iDave of the four determination parameters iD 0 to iD 3 .
- in the second example, the determination condition B may instead require that all of the four determination parameters iD0 to iD3 fall below the edge determination threshold β.
- Formulas for calculating the four determination parameters iD 0 to iD 3 are the same as in the first example.
- instead of requiring all four determination parameters iD0 to iD3 to fall below the edge determination threshold β, a condition that at least three, two, or one of them fall below β may be used.
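The aggregation variants described for the determination parameters can be sketched generically. The per-parameter calculation formulas for iD0 to iD3 are not reproduced here, so the functions below take the parameter values as given.

```python
def condition_b_average(i_d, beta):
    """First-example family: the (possibly weighted) average of the
    determination parameters falls below the edge determination threshold."""
    return sum(i_d) / len(i_d) < beta

def condition_b_count(i_d, beta, required):
    """Second-example family: at least `required` of the parameters
    (all four, three, two, or one) fall below the threshold."""
    return sum(1 for d in i_d if d < beta) >= required
```

Averaging tolerates one atypical row or column, while the counting form lets the strictness be tuned from "all" down to "any one".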
- FIG. 25 is an explanatory diagram for describing third and fourth examples of the determination method.
- in the third and fourth examples, the pixels in the four rows L1, L3, L6, and L8 of each block are referred to in the filtering necessity determination process for the vertical boundary (in particular, in the determination of the luminance component determination condition B). Likewise, in the filtering necessity determination process for the horizontal boundary, the pixels in the corresponding four columns of each block are referred to.
- in the third example, the determination condition of the luminance component may be defined as follows:
- determination condition of luminance component (Luma) ... apply the deblocking filter if both condition A and condition B are true
- condition A: (A1) block Ba or Bb is in intra prediction mode; (A2) block Ba or Bb has non-zero orthogonal transform coefficients; or (A3) the motion vectors of the blocks Ba and Bb, with Qpel accuracy, differ by four or more in at least one component
- condition B: the average value iDave of the two determination parameters iD0 and iD2 falls below the edge determination threshold β
- the determination condition of the color difference component may be the same as in the existing method described above. Note that a weighted average may be calculated when calculating the average value iDave of the two determination parameters iD0 and iD2.
- in the fourth example, the determination condition B may instead require that both of the two determination parameters iD0 and iD2 fall below the edge determination threshold β.
- the formulas for calculating the two determination parameters iD0 and iD2 are the same as in the third example. A condition that at least one of the two determination parameters, rather than both of them, falls below the edge determination threshold β may also be used.
- although the example in which the first, third, sixth, and eighth rows (or columns) L1, L3, L6, and L8 of the block are referred to in the determination has been described here, other combinations of rows or columns may be referred to.
- FIG. 26 is an explanatory view for respectively explaining fifth and sixth examples of the determination method.
- pixels in four rows L1, L3, L5 and L7 of each block are referred to in the filtering necessity determination processing for the vertical boundary.
- likewise, in the filtering necessity determination process for the horizontal boundary, the pixels in the corresponding four columns of each block are referred to.
- in the fifth example, the determination condition of the luminance component may be defined as follows:
- determination condition of luminance component (Luma) ... apply the deblocking filter if both condition A and condition B are true
- condition A: (A1) block Ba or Bb is in intra prediction mode; (A2) block Ba or Bb has non-zero orthogonal transform coefficients; or (A3) the motion vectors of the blocks Ba and Bb, with Qpel accuracy, differ by four or more in at least one component
- condition B: the average value iDave of the two determination parameters iD0 and iD2 falls below the edge determination threshold β
- the determination condition of the color difference component may be the same as in the existing method described above. Note that a weighted average may be calculated when calculating the average value iDave of the two determination parameters iD0 and iD2.
- in the sixth example, the determination condition B may instead require that both of the two determination parameters iD0 and iD2 fall below the edge determination threshold β.
- the formulas for calculating the two determination parameters iD0 and iD2 are the same as in the fifth example. A condition that at least one of the two determination parameters, rather than both of them, falls below the edge determination threshold β may also be used.
- by comparing the average value iDave of the determination parameters with the edge determination threshold β, the determination is prevented from being excessively affected by the variation of the parameter from row to row or from column to column, and an appropriate determination can be made on a block basis.
- the deblocking filter 24 can operate at different degrees of parallelism.
- the vertical boundary determination units 212-1 to 212-n determine in parallel whether filtering is necessary for all vertical boundaries included in a plurality of macroblocks in the input image. Also, the horizontal boundary determination units 214-1 to 214-n determine in parallel whether filtering is necessary for all horizontal boundaries included in the plurality of macroblocks in the input image. The determination processes for these vertical boundaries and horizontal boundaries are also performed in parallel. Thereafter, the horizontal filtering units 132-1 to 132-n and the vertical filtering units 142-1 to 142-n apply the deblocking filter to the vertical boundaries and the horizontal boundaries determined to require the deblocking filter, respectively (see FIG. 20).
- the vertical boundary determination units 212-1 to 212-n determine in parallel whether filtering is necessary for all vertical boundaries included in one macroblock of interest in the input image.
- the horizontal boundary determination units 214-1 to 214-n determine the necessity of the filtering in parallel for all horizontal boundaries included in the target macroblock. The determination processing for these vertical boundaries and horizontal boundaries is also performed in parallel.
- the horizontal filtering units 132-1 to 132-n apply the deblocking filter to the vertical boundary determined to apply the deblocking filter.
- the vertical filtering units 142-1 to 142-n apply the deblocking filter to the horizontal boundary determined to apply the deblocking filter. Such processing is repeated for all macroblocks in the input image (see FIG. 21).
- processing by the deblocking filter 24 may be parallelized in various other units, such as a specific number (e.g., 2 or 4) of macroblocks, or a group of blocks arranged horizontally or vertically. Also in the second embodiment, as in the first embodiment, the degree of parallelism and the order of the processes in the deblocking filter 24 may be controlled by the parallelization control unit 150.
- HEVC (High Efficiency Video Coding)
- LCU (Largest Coding Unit)
- an LCU of 64 ⁇ 64 pixels can be selected.
- the smallest selectable CU size is 8 ⁇ 8 pixels.
- processing is usually performed for each LCU in accordance with the order of raster scanning, starting from the LCU at the upper left of the picture (or slice). Therefore, in this section, an example of such processing for each LCU in the deblocking filter 24 will be described.
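The raster-scan traversal of LCUs described above can be sketched as follows; the helper name and the picture dimensions used in the example are illustrative:

```python
def lcu_raster_order(width, height, lcu_size):
    """Yield the top-left (x, y) of each LCU in raster-scan order,
    starting from the LCU at the upper left of the picture (or slice)."""
    for y in range(0, height, lcu_size):
        for x in range(0, width, lcu_size):
            yield (x, y)
```

For a 32 x 32 picture with 16 x 16 LCUs this visits the four LCUs left to right, then top to bottom.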
- FIG. 27 is an explanatory diagram for describing an order of processing for each LCU related to the second embodiment described above.
- the size of the LCU is 16 ⁇ 16 pixels
- the size of the CU is 8 ⁇ 8 pixels.
- the processes of the second stage on the upper right and the third stage on the lower left in FIG. 27 are processes for the n-th LCU.
- filtering necessity determination processing for all vertical boundaries and all horizontal boundaries belonging to the n-th LCU is performed in parallel, using the input pixel values to the deblocking filter 24.
- the filtering process for the vertical boundaries, among those belonging to the n-th LCU, that were determined to need the deblocking filter is performed in parallel, using the input pixel values to the deblocking filter 24.
- the filtering process is then performed in parallel on the horizontal boundaries determined to need the deblocking filter, using the pixel values after the second-stage processing.
- the fourth stage processing at the lower right of FIG. 27 is processing for the (n + 1) th LCU.
- in the fourth stage, the filtering necessity determination processing for all vertical boundaries and all horizontal boundaries belonging to the (n + 1)-th LCU is performed in parallel, and the filtering process for the vertical boundaries determined to need the deblocking filter is also performed in parallel.
- the size of the LCU is 16 ⁇ 16 pixels as an example, but the size of the LCU may be 32 ⁇ 32 pixels or 64 ⁇ 64 pixels.
- as the selected LCU size becomes larger, the number of vertical boundaries and horizontal boundaries belonging to one LCU also increases, so the effect of shortening the processing time through parallelization is further enhanced.
- FIG. 28 is a flowchart showing an example of the flow of processing for each LCU by the deblocking filter 24.
- the vertical boundary determination units 212-1 to 212-n determine the necessity of filtering in parallel for all vertical boundaries included in one target LCU in the input image (step S252). Also, the horizontal boundary determination units 214-1 to 214-n determine in parallel whether filtering is necessary for all horizontal boundaries included in the target LCU (step S254). These steps S252 and S254 are also performed in parallel with each other.
- the horizontal filtering units 132-1 to 132-n apply the deblocking filter in parallel to the vertical boundaries in the target LCU that were determined in step S252 to need the deblocking filter (step S260).
- the vertical filtering units 142-1 to 142-n apply the deblocking filter in parallel to the horizontal boundaries in the target LCU that were determined in step S254 to need the deblocking filter (step S270).
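The per-LCU flow of FIG. 28 can be sketched as below. Everything here (the boundary lists and the determination and filtering callables) is a hypothetical stand-in for the determination units 212/214 and the filtering units 132/142; in this sketch the "parallel" stages are shown as plain loops:

```python
def process_lcu(v_boundaries, h_boundaries, need_filter, apply_filter):
    """Steps S252/S254: determine filtering necessity for all vertical and
    horizontal boundaries of the target LCU (conceptually in parallel).
    Steps S260/S270: apply the deblocking filter only where needed."""
    v_needed = [b for b in v_boundaries if need_filter(b)]   # step S252
    h_needed = [b for b in h_boundaries if need_filter(b)]   # step S254
    filtered = [apply_filter(b) for b in v_needed]           # step S260
    filtered += [apply_filter(b) for b in h_needed]          # step S270
    return filtered
```

This flow is then repeated for every LCU of the picture in raster-scan order.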
- FIG. 29 is an explanatory diagram for describing an outline of this embodiment.
- a graphic representing an input pixel (also referred to as a "reconstructed pixel")
- the input pixels to the deblocking filter are referenced not only in the filtering necessity determination processes for the vertical boundary and the horizontal boundary but also in both the filtering process for the vertical boundary and the filtering process for the horizontal boundary.
- the dependency between these two filtering processes is eliminated. Also, these two filtering processes are performed in parallel.
- the deblocking filter calculates one output pixel value from two filter outputs for pixels that are redundantly updated by two filters operating in parallel.
- FIG. 30 is a block diagram showing an example of a detailed configuration of the deblocking filter 24 according to the third embodiment.
- the deblocking filter 24 includes a line memory 208, a determination block 210, a horizontal filtering block 330, a vertical filtering block 340, a parallelization control unit 150, and a calculation unit 360.
- the determination block 210 includes a plurality of vertical boundary determination units 212-1 to 212-n and a plurality of horizontal boundary determination units 214-1 to 214-n.
- Each vertical boundary determination unit 212 and each horizontal boundary determination unit 214 may determine the necessity of filtering for each boundary according to various determination conditions as exemplified in the second embodiment.
- the horizontal filtering block 330 includes a plurality of horizontal filtering units 332-1 to 332-n. Each horizontal filtering unit 332 is supplied with the input pixel value from the line memory 208 and the determination result of each vertical boundary from the determination block 210.
- Each horizontal filtering unit 332 applies the deblocking filter for the vertical boundary to the pixels on the left and right of the corresponding vertical boundary when the determination result from the corresponding vertical boundary determination unit 212 indicates that the filter should be applied. Then, each horizontal filtering unit 332 outputs to the calculation unit 360 the filtered pixel values for the pixels to which the filter was applied, and the input pixel values for the other pixels.
- the vertical filtering block 340 includes a plurality of vertical filtering units 342-1 to 342-n. Each vertical filtering unit 342 is supplied with the input pixel value from the line memory 208 and the determination result on each horizontal boundary from the determination block 210.
- Each vertical filtering unit 342 applies the deblocking filter for the horizontal boundary to the pixels above and below the corresponding horizontal boundary when the determination result from the corresponding horizontal boundary determination unit 214 indicates that the filter should be applied.
- the filtering process by the vertical filtering units 342-1 to 342-n is performed in parallel with the filtering process by the horizontal filtering units 332-1 to 332-n. Then, each vertical filtering unit 342 outputs the pixel value after filtering for the pixel to which the filter is applied and the input pixel value for the other pixels to the calculation unit 360.
- the output pixel value from the horizontal filtering block 330 and the output pixel value from the vertical filtering block 340 are supplied to the calculation unit 360 in parallel. Furthermore, the determination results from the vertical boundary determination unit 212 and the horizontal boundary determination unit 214 are supplied to the calculation unit 360.
- for pixels filtered by both the horizontal filtering block 330 and the vertical filtering block 340, the calculation unit 360 calculates the output pixel value based on the filter output from the horizontal filtering block 330 and the filter output from the vertical filtering block 340, according to the supplied determination results.
- the calculation unit 360 calculates an average of two filter outputs for pixels to be redundantly filtered.
- the average of the two filter outputs calculated by the calculation unit 360 may be a simple average.
- alternatively, the calculation unit 360 may calculate a weighted average of the two filter outputs.
- the calculation unit 360 may determine the weight of the weighted average for each pixel, for example, according to each pixel's distance to the vertical boundary and distance to the horizontal boundary.
- FIG. 31 is an explanatory diagram for describing determination of weights for weighted averaging by the calculation unit 360.
- the pixel of interest P Z, located at one of the overlapping positions shown in FIG. 29, is shown in black.
- the distance D V between the pixel of interest P Z and the nearest vertical boundary V Z is three pixels.
- the distance D H between the pixel of interest P Z and the nearest horizontal boundary H Z is two pixels.
- the distance D H is smaller than the distance D V.
- the calculation unit 360 may determine the weight for the output of the deblocking filter for the horizontal boundary H Z to be larger than the weight for the output of the deblocking filter for the vertical boundary V Z.
- the weight ratio between the filter output V out for the vertical boundary V Z and the filter output H out for the horizontal boundary H Z is determined to be 2:3.
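One plausible way to realize this distance-based weighting, reproducing the 2:3 ratio of the example (D V = 3 pixels, D H = 2 pixels), is to weight each filter output in proportion to the distance to the other boundary, so that the filter for the nearer boundary dominates. This is a sketch of one possible rule; the patent leaves the exact formula open:

```python
def blend_outputs(v_out, h_out, d_v, d_h):
    """Weighted average of the vertical-boundary filter output (v_out) and
    the horizontal-boundary filter output (h_out). With d_v = 3 and
    d_h = 2, the weight ratio V:H becomes 2:3, as in the example above."""
    w_v = d_h / (d_v + d_h)   # weight for the vertical-boundary output
    w_h = d_v / (d_v + d_h)   # weight for the horizontal-boundary output
    return w_v * v_out + w_h * h_out
```

For v_out = 100, h_out = 50, d_v = 3 and d_h = 2 the result is 0.4 * 100 + 0.6 * 50 = 70, with the nearer horizontal boundary contributing the larger share.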
- the deblocking filter 24 may have one two-dimensional filter that simultaneously calculates horizontal filtering, vertical filtering and weighted average. However, in such a case, there is a need to vary the filter coefficients for each pixel, which makes the implementation extremely complicated.
- in contrast, by calculating a weighted average of the two filter outputs as in this embodiment, processing substantially equivalent to such a two-dimensional filter can be easily realized while utilizing the existing deblocking filter mechanism.
- alternatively, the calculation unit 360 may determine the weight of the weighted average for each pixel according to the edge strengths of the vertical boundary and the horizontal boundary corresponding to that pixel.
- the edge strength here can be expressed by a parameter such as the value of the edge calculated by the calculation unit 122 shown in FIG. 16, for example.
- the weight of the filter output for stronger edge boundaries may be determined to be greater than the weight of the filter output for weaker edge boundaries.
- for pixels filtered by only one of the horizontal filtering block 330 and the vertical filtering block 340, the calculation unit 360 selects the output from the block that actually performed the filtering. In addition, for pixels not filtered by either the horizontal filtering block 330 or the vertical filtering block 340, the calculation unit 360 outputs the input pixel value to the deblocking filter 24 as it is.
- the output pixel values from the calculation unit 360 according to the determination result of the necessity of filtering are shown in the table of FIG.
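The selection logic summarized in that table can be sketched as: both filters touched the pixel, take the average of the two outputs; only one did, take that filter's output; neither did, pass the input value through. The function name and the simple 50/50 average are illustrative (the text also allows a weighted average):

```python
def output_pixel(in_val, v_out, h_out, v_filtered, h_filtered):
    """Output pixel value selection by the calculation unit 360 (sketch)."""
    if v_filtered and h_filtered:
        return (v_out + h_out) / 2   # could be a weighted average instead
    if v_filtered:
        return v_out
    if h_filtered:
        return h_out
    return in_val                    # neither filter touched this pixel
```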
- Example of processing order: Hereinafter, two examples of the processing order that can be realized by the deblocking filter 24 in the present embodiment will be described.
- the input image includes four macroblocks MB0 to MB3 each having a size of 16 ⁇ 16 pixels.
- FIG. 34 shows the order of processing when the dependency between the filtering processing for the vertical boundary and the filtering processing for the horizontal boundary remains.
- the order of the processes shown in FIG. 34 is substantially the same as the order of the processes shown in FIG. 13 in relation to the first embodiment.
- the filtering necessity determination processes J V0,0 to J V3,3 and J H0,0 to J H3,3 for all vertical boundaries and all horizontal boundaries of the four macroblocks MB0 to MB3 are executed in parallel.
- filtering processes F V0,0 to F V3,3 are performed on 16 vertical boundaries of four macroblocks MB0 to MB3.
- each pixel value after the filtering process for the horizontal boundary is stored in a memory for output from the deblocking filter 24.
- FIG. 35 shows a first example of the order of processing implemented in the present embodiment.
- in the first step, the filtering necessity determination processes J V0,0 to J V3,3 and J H0,0 to J H3,3 for all vertical boundaries and all horizontal boundaries of the four macroblocks MB0 to MB3 are executed in parallel.
- the filtering processing F V0,0 to F V3,3 and F H0,0 to F H3,3 for all vertical boundaries and all horizontal boundaries of the four macroblocks MB0 to MB3 are performed in parallel. (In fact, only the boundaries determined to require filtering are filtered).
- each pixel value is stored in a memory for output from the deblocking filter 24. In that case, for pixels filtered by both the horizontal filtering block 330 and the vertical filtering block 340, a weighted average of the two filter outputs may be calculated as an output pixel value.
- FIG. 36 shows the order of processing for each macroblock in the case where the dependency between the filtering processing for the vertical boundary and the filtering processing for the horizontal boundary remains.
- the order of the processes shown in FIG. 36 is substantially the same as the order of the processes shown in FIG. 14 in relation to the first embodiment.
- FIG. 36 explicitly shows four processing steps (the sixth, tenth, fourteenth and sixteenth steps) for storing the pixel values in the memory for output, which were omitted from FIG. 14 for simplicity of explanation.
- the process of FIG. 36 is composed of 16 process steps including these four process steps.
- FIG. 37 shows a second example of the order of processing implemented in the present embodiment.
- the filtering necessity determination processes J V0,0 to J V0,3 and J H0,0 to J H0,3 for the four vertical boundaries and four horizontal boundaries of the macroblock MB0 are executed in parallel.
- the filtering processes F V0,0 to F V0,3 and F H0,0 to F H0,3 for the four vertical boundaries and four horizontal boundaries of the macroblock MB0 are executed in parallel.
- each pixel value of the macroblock MB0 is stored in the memory for output from the deblocking filter 24.
- a weighted average of the two filter outputs may then be calculated as the output pixel value for pixels that have been redundantly filtered by the two filters.
- the macroblock MB1 is processed in the fourth to sixth steps, the macroblock MB2 in the seventh to ninth steps, and the macroblock MB3 in the tenth to twelfth steps.
- the process of FIG. 37 is composed of 12 processing steps, fewer than the 16 steps of the process of FIG. 36.
- the processing in the deblocking filter 24 can be performed with fewer processing steps.
- one of the advantages of referencing only the input pixels to the deblocking filter in the filtering processes is that, regardless of how the filter taps are configured, no dependency arises between the filtering process for the vertical boundary and the filtering process for the horizontal boundary. Therefore, it is possible to improve image quality by configuring the filter taps from more pixels than in the existing method. For example, in the existing method, filter taps of three pixels were used on each side of each boundary, as described in connection with FIG.
- the degree of parallelism and the order of processing in the deblocking filter 24 may be controlled by the parallelization control unit 150.
- FIG. 38 is a flow chart showing an example of the flow of processing by the deblocking filter according to the third embodiment.
- FIG. 39 is a flowchart showing an example of the flow of the pixel value calculation process shown in FIG.
- the vertical boundary determination units 212-1 to 212-n determine in parallel whether or not filtering is necessary for all vertical boundaries in the input image (or in the macroblock) (step S302). Also, the horizontal boundary determination units 214-1 to 214-n determine in parallel whether or not filtering is necessary for all horizontal boundaries in the input image (or in the macroblock) (step S304). These steps S302 and S304 are also performed in parallel with each other.
- the horizontal filtering units 332-1 to 332-n apply the deblocking filter in parallel to all vertical boundaries determined in step S302 to need the deblocking filter (step S306). Also, the vertical filtering units 342-1 to 342-n apply the deblocking filter in parallel to all horizontal boundaries determined in step S304 to need the deblocking filter (step S308). These steps S306 and S308 are also performed in parallel with each other.
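Because both filter paths in this embodiment read only the input pixel values, steps S306 and S308 can genuinely run concurrently. A minimal sketch using Python threads, where the per-boundary callables `filt_h` and `filt_v` are hypothetical stand-ins for the horizontal and vertical filtering units:

```python
from concurrent.futures import ThreadPoolExecutor

def filter_boundaries_in_parallel(v_boundaries, h_boundaries, filt_h, filt_v):
    """Submit vertical-boundary filtering (filt_h, step S306) and
    horizontal-boundary filtering (filt_v, step S308) concurrently;
    neither depends on the other's output in this embodiment."""
    with ThreadPoolExecutor() as pool:
        v_jobs = [pool.submit(filt_h, b) for b in v_boundaries]
        h_jobs = [pool.submit(filt_v, b) for b in h_boundaries]
        return [j.result() for j in v_jobs], [j.result() for j in h_jobs]
```

The two result lists then feed the calculation unit 360, which resolves pixels updated by both paths.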
- next, the pixel value calculation process shown in FIG. 39 is performed by the calculation unit 360 (step S310).
- in the pixel value calculation process, the processing from step S314 to step S326 is looped for each pixel to be processed (step S312).
- first, the calculation unit 360 determines whether the pixel of interest has been filtered by both the vertical-boundary filter and the horizontal-boundary filter (step S314). If the pixel of interest has been filtered by both of the two filters, the process proceeds to step S322; otherwise, the process proceeds to step S316.
- next, the calculation unit 360 determines whether the pixel of interest has been filtered by one of the two filters for the vertical boundary and the horizontal boundary (step S316). If the pixel of interest has been filtered by one of the two filters, the process proceeds to step S320; if the pixel of interest has not been filtered by either filter, the process proceeds to step S318.
- in step S318, the calculation unit 360 acquires the input pixel value to the deblocking filter 24.
- in step S320, the calculation unit 360 acquires the filter output from the filter that actually filtered the pixel of interest.
- in step S322, the calculation unit 360 determines the weights for calculating the weighted average of the filter outputs from the two filters for the pixel of interest, for example according to the distances from the pixel of interest to the vertical boundary and to the horizontal boundary, or according to the edge strengths of the corresponding vertical and horizontal boundaries. The calculation unit 360 then calculates the weighted average of the filter outputs from the two filters using the determined weights (step S324).
- the calculation unit 360 stores the pixel value of the pixel of interest acquired in step S318 or step S320 or calculated in step S324 in the memory (step S326).
- the series of processing shown in FIGS. 38 and 39 is finished.
- a multi-view codec is an image coding scheme for coding and decoding so-called multi-view video.
- FIG. 40 is an explanatory diagram for describing a multiview codec. Referring to FIG. 40, a sequence of frames of three views taken respectively at three viewpoints is shown. Each view is assigned a view ID (view_id). One of these multiple views is designated as a base view. Views other than the base view are called non-base views. In the example of FIG. 40, a view whose view ID is "0" is a base view, and two views whose view ID is "1" or "2" are non-base views.
- the data size of the encoded stream as a whole may be compressed by encoding the non-base view frames based on the encoding information for the base view frames.
- a deblocking filter may be applied to each view in the encoding and decoding processes according to the multiview codec described above.
- the filtering necessity determination process for vertical block boundaries and horizontal block boundaries in each view may be parallelized in accordance with the technology according to the present disclosure.
- the filtering necessity determination process and the filtering process in each view may be parallelized.
- parameters for controlling the filtering necessity determination process or the filtering process may be set for each view.
- parameters set in the base view may be reused in the non-base view.
- the filtering necessity determination process or the filtering process may be parallelized across multiple views.
- Parameters for controlling the filtering necessity determination process or the filtering process may be shared in common across a plurality of views.
- a flag may be additionally specified to indicate whether or not parameters are shared across multiple views.
- FIG. 41 is an explanatory diagram for describing an application of the image coding process described above to a multiview codec. Referring to FIG. 41, the configuration of an exemplary multi-view coding device 710 is shown.
- the multiview coding apparatus 710 includes a first coding unit 720, a second coding unit 730, and a multiplexing unit 740.
- the first encoding unit 720 encodes the base view image to generate a base view encoded stream.
- the second encoding unit 730 encodes the non-base view image to generate a non-base view encoded stream.
- the multiplexing unit 740 multiplexes the base view coded stream generated by the first coding unit 720 and the coded streams of one or more non-base views generated by the second coding unit 730 to generate a multiview multiplexed stream.
- the first encoding unit 720 and the second encoding unit 730 illustrated in FIG. 41 have the same configuration as the image encoding device 10 according to the above-described embodiment. Thereby, when the deblocking filter is applied to each view, it becomes possible to parallelize the filtering necessity determination processes for vertical block boundaries and horizontal block boundaries, or to parallelize the filtering necessity determination process and the filtering process. Parameters for controlling these processes may be inserted into the header area of the coded stream of each view, or may be inserted into a common header area in the multiplexed stream.
- FIG. 42 is an explanatory diagram for describing an application of the image decoding processing described above to a multiview codec.
- the multiview decoding device 760 includes an inverse multiplexing unit 770, a first decoding unit 780, and a second decoding unit 790.
- the demultiplexing unit 770 demultiplexes the multiview multiplexed stream into a base view coded stream and one or more non-base view coded streams.
- the first decoding unit 780 decodes a base view image from the base view coded stream.
- the second decoding unit 790 decodes the non-base view image from the non-base view coded stream.
- the first decoding unit 780 and the second decoding unit 790 illustrated in FIG. 42 have the same configuration as the image decoding device 60 according to the above-described embodiment. Thereby, when the deblocking filter is applied to each view, it becomes possible to parallelize the filtering necessity determination processes for vertical block boundaries and horizontal block boundaries, or to parallelize the filtering necessity determination process and the filtering process. Parameters for controlling these processes may be obtained from the header area of the coded stream of each view, or may be obtained from a common header area in the multiplexed stream.
- FIG. 43 is an explanatory diagram for describing a scalable codec.
- a sequence of frames of three layers different in spatial resolution, temporal resolution or image quality is shown.
- Each layer is assigned a layer ID (layer_id).
- the layer having the lowest resolution (or image quality) among the plurality of layers is a base layer.
- Layers other than the base layer are called enhancement layers.
- a layer whose layer ID is “0” is a base layer
- two layers whose layer ID is “1” or “2” are enhancement layers.
- a deblocking filter may be applied to each layer in the encoding process and the decoding process according to the scalable codec described above.
- the filtering necessity determination process for vertical block boundaries and horizontal block boundaries in each layer may be parallelized according to the technology according to the present disclosure.
- the filtering necessity determination process and the filtering process in each layer may be parallelized.
- parameters for controlling the filtering necessity determination process or the filtering process may be set for each layer.
- parameters set in the base layer may be reused in the enhancement layer.
- the filtering necessity determination process or the filtering process may be parallelized across multiple layers.
- Parameters for controlling the filtering necessity determination process or the filtering process may be shared in common across multiple layers.
- a flag indicating whether parameters are shared across multiple layers may be additionally specified.
- FIG. 44 is an explanatory diagram for describing an application of the image coding process described above to a scalable codec.
- the scalable coding apparatus 810 includes a first coding unit 820, a second coding unit 830, and a multiplexing unit 840.
- the first coding unit 820 codes the base layer image to generate a coded stream of the base layer.
- the second coding unit 830 codes the enhancement layer image to generate a coded stream of the enhancement layer.
- the multiplexing unit 840 multiplexes the base layer coded stream generated by the first coding unit 820 and the coded streams of one or more enhancement layers generated by the second coding unit 830 to generate a multilayer multiplexed stream.
- the first encoding unit 820 and the second encoding unit 830 illustrated in FIG. 44 have the same configuration as the image encoding device 10 according to the above-described embodiment.
- Parameters for controlling these processes may be inserted into the header area of the encoded stream of each layer, or may be inserted into a common header area in the multiplexed stream.
- FIG. 45 is an explanatory diagram for describing an application of the image decoding process described above to a scalable codec.
- the scalable decoding device 860 includes a demultiplexer 870, a first decoding unit 880, and a second decoding unit 890.
- the demultiplexing unit 870 demultiplexes the multiplexed stream of the multilayer into the coded stream of the base layer and the coded stream of one or more enhancement layers.
- the first decoding unit 880 decodes a base layer image from the coded stream of the base layer.
- the second decoding unit 890 decodes the enhancement layer image from the enhancement layer coded stream.
- the first decoding unit 880 and the second decoding unit 890 illustrated in FIG. 45 have the same configuration as the image decoding device 60 according to the above-described embodiment.
- upon application of the deblocking filter to each layer, it is possible to parallelize the filtering necessity determination processing for vertical block boundaries and horizontal block boundaries, or to parallelize the filtering necessity determination processing and the filtering processing.
- Parameters for controlling these processes may be obtained from the header area of the coded stream of each layer, or may be obtained from a common header area in the multiplexed stream.
- the image encoding device 10 and the image decoding device 60 according to the embodiments described above can be applied to various electronic devices, such as transmitters or receivers for satellite broadcasting, cable broadcasting such as cable TV, distribution over the Internet, or distribution to terminals by cellular communication; recording devices which record images on media such as optical disks, magnetic disks and flash memories; and reproducing devices which reproduce images from such storage media.
- FIG. 46 shows an example of a schematic configuration of a television set to which the embodiment described above is applied.
- the television device 900 includes an antenna 901, a tuner 902, a demultiplexer 903, a decoder 904, a video signal processing unit 905, a display unit 906, an audio signal processing unit 907, a speaker 908, an external interface 909, a control unit 910, a user interface 911, and a bus 912.
- the tuner 902 extracts a signal of a desired channel from a broadcast signal received via the antenna 901, and demodulates the extracted signal. Then, the tuner 902 outputs the coded bit stream obtained by demodulation to the demultiplexer 903. That is, the tuner 902 has a role as a transmission means in the television apparatus 900 for receiving a coded stream in which an image is coded.
- the demultiplexer 903 separates the video stream and audio stream of the program to be viewed from the coded bit stream, and outputs the separated streams to the decoder 904. Also, the demultiplexer 903 extracts auxiliary data such as an EPG (Electronic Program Guide) from the encoded bit stream, and supplies the extracted data to the control unit 910. When the coded bit stream is scrambled, the demultiplexer 903 may perform descrambling.
- the decoder 904 decodes the video stream and audio stream input from the demultiplexer 903. Then, the decoder 904 outputs the video data generated by the decoding process to the video signal processing unit 905. Further, the decoder 904 outputs the audio data generated by the decoding process to the audio signal processing unit 907.
- the video signal processing unit 905 reproduces the video data input from the decoder 904 and causes the display unit 906 to display a video. Also, the video signal processing unit 905 may cause the display unit 906 to display an application screen supplied via the network. Further, the video signal processing unit 905 may perform additional processing such as noise removal on the video data according to the setting. Furthermore, the video signal processing unit 905 may generate an image of a graphical user interface (GUI) such as a menu, a button, or a cursor, for example, and may superimpose the generated image on the output image.
- the display unit 906 is driven by a drive signal supplied from the video signal processing unit 905, and displays video or images on the screen of a display device (for example, a liquid crystal display, a plasma display, or an OLED display).
- the audio signal processing unit 907 performs reproduction processing such as D/A conversion and amplification on the audio data input from the decoder 904, and causes the speaker 908 to output audio. Further, the audio signal processing unit 907 may perform additional processing such as noise removal on the audio data.
- the external interface 909 is an interface for connecting the television device 900 to an external device or a network.
- a video stream or an audio stream received via the external interface 909 may be decoded by the decoder 904. That is, the external interface 909 also serves as a transmission means in the television apparatus 900 for receiving the coded stream in which the image is coded.
- the control unit 910 includes a processor such as a central processing unit (CPU) and memories such as a random access memory (RAM) and a read only memory (ROM).
- the memory stores a program executed by the CPU, program data, EPG data, data acquired via a network, and the like.
- the program stored by the memory is read and executed by the CPU, for example, when the television device 900 is started.
- the CPU controls the operation of the television apparatus 900 according to an operation signal input from, for example, the user interface 911 by executing a program.
- the user interface 911 is connected to the control unit 910.
- the user interface 911 has, for example, buttons and switches for the user to operate the television device 900, a receiver of remote control signals, and the like.
- the user interface 911 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 910.
- the bus 912 mutually connects the tuner 902, the demultiplexer 903, the decoder 904, the video signal processing unit 905, the audio signal processing unit 907, the external interface 909, and the control unit 910.
- the decoder 904 has the function of the image decoding device 60 according to the above-described embodiment. As a result, when decoding an image in the television device 900, the degree of parallelism of the deblocking filter processing can be increased to speed up the processing.
- FIG. 47 shows an example of a schematic configuration of a mobile phone to which the embodiment described above is applied.
- the mobile phone 920 includes an antenna 921, a communication unit 922, an audio codec 923, a speaker 924, a microphone 925, a camera unit 926, an image processing unit 927, a multiplexing/demultiplexing unit 928, a recording/reproducing unit 929, a display unit 930, a control unit 931, an operation unit 932, and a bus 933.
- the antenna 921 is connected to the communication unit 922.
- the speaker 924 and the microphone 925 are connected to the audio codec 923.
- the operation unit 932 is connected to the control unit 931.
- the bus 933 mutually connects the communication unit 922, the audio codec 923, the camera unit 926, the image processing unit 927, the demultiplexing unit 928, the recording / reproducing unit 929, the display unit 930, and the control unit 931.
- in various operation modes including a voice call mode, a data communication mode, a shooting mode, and a videophone mode, the mobile phone 920 performs operations such as transmitting and receiving audio signals, transmitting and receiving electronic mail or image data, capturing images, and recording data.
- the analog voice signal generated by the microphone 925 is supplied to the voice codec 923.
- the audio codec 923 converts the analog audio signal into audio data by A/D conversion, and compresses the converted audio data. Then, the audio codec 923 outputs the compressed audio data to the communication unit 922.
- the communication unit 922 encodes and modulates audio data to generate a transmission signal. Then, the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921.
- the communication unit 922 also amplifies and frequency-converts a radio signal received via the antenna 921 to obtain a reception signal.
- the communication unit 922 demodulates and decodes the received signal to generate audio data, and outputs the generated audio data to the audio codec 923.
- the audio codec 923 decompresses and D / A converts audio data to generate an analog audio signal. Then, the audio codec 923 supplies the generated audio signal to the speaker 924 to output audio.
- the control unit 931 generates character data constituting an electronic mail in accordance with an operation by the user via the operation unit 932. Further, the control unit 931 causes the display unit 930 to display characters. Further, the control unit 931 generates electronic mail data in response to a transmission instruction from the user via the operation unit 932, and outputs the generated electronic mail data to the communication unit 922.
- a communication unit 922 encodes and modulates electronic mail data to generate a transmission signal. Then, the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921. The communication unit 922 also amplifies and frequency-converts a radio signal received via the antenna 921 to obtain a reception signal.
- the communication unit 922 demodulates and decodes the received signal to restore the e-mail data, and outputs the restored e-mail data to the control unit 931.
- the control unit 931 causes the display unit 930 to display the content of the e-mail, and stores the e-mail data in the storage medium of the recording and reproduction unit 929.
- the recording and reproducing unit 929 includes an arbitrary readable and writable storage medium.
- the storage medium may be a built-in storage medium such as a RAM or a flash memory, or an externally mounted storage medium such as a hard disk, a magnetic disk, a magneto-optical disk, an optical disk, a USB memory, or a memory card.
- the camera unit 926 captures an image of a subject to generate image data, and outputs the generated image data to the image processing unit 927.
- the image processing unit 927 encodes the image data input from the camera unit 926, and stores the encoded stream in the storage medium of the recording and reproduction unit 929.
- the demultiplexing unit 928 multiplexes the video stream encoded by the image processing unit 927 and the audio stream input from the audio codec 923, and outputs the multiplexed stream to the communication unit 922. The communication unit 922 encodes and modulates the stream to generate a transmission signal.
- the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921.
- the communication unit 922 also amplifies and frequency-converts a radio signal received via the antenna 921 to obtain a reception signal.
- the transmission signal and the reception signal may include a coded bit stream.
- the communication unit 922 demodulates and decodes the received signal to restore the stream, and outputs the restored stream to the demultiplexing unit 928.
- the demultiplexing unit 928 separates the video stream and the audio stream from the input stream, and outputs the video stream to the image processing unit 927 and the audio stream to the audio codec 923.
- the image processing unit 927 decodes the video stream to generate video data.
- the video data is supplied to the display unit 930, and the display unit 930 displays a series of images.
- the audio codec 923 decompresses and D / A converts the audio stream to generate an analog audio signal. Then, the audio codec 923 supplies the generated audio signal to the speaker 924 to output audio.
- the image processing unit 927 has the functions of the image encoding device 10 and the image decoding device 60 according to the above-described embodiment. As a result, when encoding and decoding an image in the mobile phone 920, the degree of parallelism of the processing of the deblocking filter can be increased to speed up the processing.
- FIG. 48 shows an example of a schematic configuration of a recording and reproducing apparatus to which the embodiment described above is applied.
- the recording / reproducing device 940 encodes, for example, audio data and video data of the received broadcast program, and records the encoded data on a recording medium.
- the recording and reproduction device 940 may encode, for example, audio data and video data acquired from another device and record the encoded data on a recording medium.
- the recording / reproducing device 940 reproduces the data recorded on the recording medium on the monitor and the speaker, for example, in accordance with the user's instruction. At this time, the recording / reproducing device 940 decodes the audio data and the video data.
- the recording/reproducing device 940 includes a tuner 941, an external interface 942, an encoder 943, an HDD (Hard Disk Drive) 944, a disk drive 945, a selector 946, a decoder 947, an OSD (On-Screen Display) 948, a control unit 949, and a user interface 950.
- the tuner 941 extracts a signal of a desired channel from a broadcast signal received via an antenna (not shown) and demodulates the extracted signal. Then, the tuner 941 outputs the coded bit stream obtained by demodulation to the selector 946. That is, the tuner 941 has a role as a transmission means in the recording / reproducing device 940.
- the external interface 942 is an interface for connecting the recording and reproducing device 940 to an external device or a network.
- the external interface 942 may be, for example, an IEEE 1394 interface, a network interface, a USB interface, or a flash memory interface.
- video data and audio data received via the external interface 942 are input to the encoder 943. That is, the external interface 942 has a role as a transmission unit in the recording / reproducing device 940.
- the encoder 943 encodes video data and audio data when the video data and audio data input from the external interface 942 are not encoded. Then, the encoder 943 outputs the coded bit stream to the selector 946.
- the HDD 944 records an encoded bit stream obtained by compressing content data such as video and audio, various programs, and other data in an internal hard disk. Also, the HDD 944 reads these data from the hard disk when reproducing video and audio.
- the disk drive 945 records and reads data on the attached recording medium.
- the recording medium mounted on the disk drive 945 may be, for example, a DVD disk (DVD-Video, DVD-RAM, DVD-R, DVD-RW, DVD+R, DVD+RW, etc.) or a Blu-ray (registered trademark) disk.
- the selector 946 selects the coded bit stream input from the tuner 941 or the encoder 943 at the time of recording video and audio, and outputs the selected coded bit stream to the HDD 944 or the disk drive 945. Also, the selector 946 outputs the encoded bit stream input from the HDD 944 or the disk drive 945 to the decoder 947 at the time of reproduction of video and audio.
- the decoder 947 decodes the coded bit stream to generate video data and audio data. Then, the decoder 947 outputs the generated video data to the OSD 948. Also, the decoder 947 outputs the generated audio data to an external speaker.
- the OSD 948 reproduces the video data input from the decoder 947 and displays the video.
- the OSD 948 may superimpose an image of a GUI such as a menu, a button, or a cursor on the video to be displayed.
- the control unit 949 includes a processor such as a CPU, and memories such as a RAM and a ROM.
- the memory stores programs executed by the CPU, program data, and the like.
- the program stored by the memory is read and executed by the CPU, for example, when the recording and reproducing device 940 is started.
- the CPU controls the operation of the recording / reproducing apparatus 940 in accordance with an operation signal input from, for example, the user interface 950 by executing a program.
- the user interface 950 is connected to the control unit 949.
- the user interface 950 includes, for example, buttons and switches for the user to operate the recording and reproducing device 940, a receiver of a remote control signal, and the like.
- the user interface 950 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 949.
- the encoder 943 has the function of the image coding apparatus 10 according to the embodiment described above.
- the decoder 947 has the function of the image decoding device 60 according to the above-described embodiment.
- FIG. 49 shows an example of a schematic configuration of an imaging device to which the embodiment described above is applied.
- the imaging device 960 captures an object to generate an image, encodes image data, and records the image data in a recording medium.
- the imaging device 960 includes an optical block 961, an imaging unit 962, a signal processing unit 963, an image processing unit 964, a display unit 965, an external interface 966, a memory 967, a media drive 968, an OSD 969, a control unit 970, a user interface 971, and a bus 972.
- the optical block 961 is connected to the imaging unit 962.
- the imaging unit 962 is connected to the signal processing unit 963.
- the display unit 965 is connected to the image processing unit 964.
- the user interface 971 is connected to the control unit 970.
- the bus 972 mutually connects the image processing unit 964, the external interface 966, the memory 967, the media drive 968, the OSD 969, and the control unit 970.
- the optical block 961 has a focus lens, an aperture mechanism, and the like.
- the optical block 961 forms an optical image of a subject on the imaging surface of the imaging unit 962.
- the imaging unit 962 includes an image sensor such as a CCD or a CMOS, and converts an optical image formed on an imaging surface into an image signal as an electrical signal by photoelectric conversion. Then, the imaging unit 962 outputs the image signal to the signal processing unit 963.
- the signal processing unit 963 performs various camera signal processing such as knee correction, gamma correction, and color correction on the image signal input from the imaging unit 962.
- the signal processing unit 963 outputs the image data after camera signal processing to the image processing unit 964.
- the image processing unit 964 encodes the image data input from the signal processing unit 963 to generate encoded data. Then, the image processing unit 964 outputs the generated encoded data to the external interface 966 or the media drive 968. The image processing unit 964 also decodes encoded data input from the external interface 966 or the media drive 968 to generate image data. Then, the image processing unit 964 outputs the generated image data to the display unit 965.
- the image processing unit 964 may output the image data input from the signal processing unit 963 to the display unit 965 to display an image. The image processing unit 964 may superimpose the display data acquired from the OSD 969 on the image to be output to the display unit 965.
- the OSD 969 generates an image of a GUI such as a menu, a button, or a cursor, for example, and outputs the generated image to the image processing unit 964.
- the external interface 966 is configured as, for example, a USB input / output terminal.
- the external interface 966 connects the imaging device 960 and the printer, for example, when printing an image.
- a drive is connected to the external interface 966 as necessary.
- removable media such as a magnetic disk or an optical disk may be attached to the drive, and a program read from the removable media may be installed in the imaging device 960.
- the external interface 966 may be configured as a network interface connected to a network such as a LAN or the Internet. That is, the external interface 966 has a role as a transmission unit in the imaging device 960.
- the recording medium mounted in the media drive 968 may be, for example, any readable / writable removable medium such as a magnetic disk, a magneto-optical disk, an optical disk, or a semiconductor memory.
- alternatively, the recording medium may be fixedly mounted in the media drive 968 to form a non-portable storage unit such as a built-in hard disk drive or a solid state drive (SSD).
- the control unit 970 includes a processor such as a CPU, and memories such as a RAM and a ROM.
- the memory stores programs executed by the CPU, program data, and the like.
- the program stored by the memory is read and executed by the CPU, for example, when the imaging device 960 starts up.
- the CPU controls the operation of the imaging device 960 according to an operation signal input from, for example, the user interface 971 by executing a program.
- the user interface 971 is connected to the control unit 970.
- the user interface 971 includes, for example, buttons and switches for the user to operate the imaging device 960.
- the user interface 971 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 970.
- the image processing unit 964 has the functions of the image encoding device 10 and the image decoding device 60 according to the above-described embodiment. As a result, when encoding and decoding an image in the imaging device 960, the degree of parallelism of processing of the deblocking filter can be increased to speed up the processing.
- in the first embodiment, the pixel values used for determining the necessity of applying the deblocking filter to a vertical boundary are the pixel values of pixels belonging to rows to which the deblocking filter for horizontal boundaries is not applied.
- similarly, the pixel values used for determining the necessity of applying the deblocking filter to a horizontal boundary are the pixel values of pixels belonging to columns to which the deblocking filter for vertical boundaries is not applied. That is, the pixel values used for the filtering necessity determination process for one block are not updated by the filtering process for another block. Therefore, even when the filtering process for another block is performed before the filtering necessity determination process for a given block, it is not necessary to hold the input pixel values to the deblocking filter in a separate memory. Thus, the cost of the hardware required to implement the device can be reduced.
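As a rough illustration of why such untouched reference rows exist, the sketch below (an assumption-laden illustration, not part of the specification) computes which rows of an 8×8 block are never updated by horizontal-boundary filtering, assuming the filter modifies at most three pixels on each side of a boundary, as the strong filter described herein does:

```python
def safe_reference_rows(block_size=8, filter_reach=3):
    """Rows of a block (0-based indices) whose pixels are never updated by
    horizontal-boundary filtering, assuming the filter modifies at most
    `filter_reach` pixels on each side of a boundary.

    Rows closer than `filter_reach` to the top boundary, or to the bottom
    boundary, may be overwritten; the rows in between are safe to use as
    reference pixels for the vertical-boundary determination."""
    return [r for r in range(block_size)
            if filter_reach <= r < block_size - filter_reach]
```

With the default 8×8 block and a reach of three pixels, only the two middle rows (the fourth and fifth rows of the block) remain untouched, which is consistent with the limited choice of reference rows discussed above.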
- in the second embodiment, the input pixel values to the deblocking filter are stored in a memory that is not updated by the filtering processes, and those input pixel values are referred to in the filtering necessity determination processes.
- consequently, there is no restriction on the positions of the pixels referred to in the filtering necessity determination processes, so flexible determination conditions can be used according to various purposes (for example, more accurate determination of the necessity of filtering, or determination at a lower processing cost).
- in the third embodiment, the input pixel values to the deblocking filter are filtered in both the filtering processes for vertical boundaries and for horizontal boundaries.
- the processing in the deblocking filter can be further speeded up.
- for a pixel updated by both filtering processes, an output pixel value is calculated based on the two filter outputs; for example, the output pixel value may be calculated as a weighted average of the two filter outputs.
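The weighted-average calculation for a pixel updated by both filters can be sketched as follows (the function name and the equal default weights are illustrative assumptions, not taken from the specification):

```python
def combine_outputs(v_out, h_out, w_v=0.5, w_h=0.5):
    """Combine the vertical-filter output and the horizontal-filter output
    for a pixel updated by both filtering processes, as a weighted average.
    Weights are illustrative; the result is rounded back to an integer
    pixel value."""
    return int(round(w_v * v_out + w_h * h_out))
```

For example, with equal weights, filter outputs of 100 and 120 combine to 110.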
- the technique according to the present disclosure may also be applied to the case where the filtering process for horizontal boundaries is performed first, and the advantages described above can be obtained equally in that case.
- the size of the processing unit of the deblocking filter or the size of the macroblock is not limited to the example described herein, and may be another size.
- the method of transmitting information used for parallelization of the deblocking filter process from the encoding side to the decoding side is not limited to the method of multiplexing the information in the header of the encoded stream.
- the information may be transmitted or recorded as separate data associated with the coded bit stream without being multiplexed into the coded bit stream.
- the term “associate” means that an image (which may be a part of an image, such as a slice or a block) included in a bitstream can be linked, at the time of decoding, with information corresponding to that image. That is, the information may be transmitted on a transmission path different from that of the image (or bitstream).
- the information may be recorded on a recording medium (or another recording area of the same recording medium) different from the image (or bit stream).
- the information and the image (or bit stream) may be associated with each other in any unit such as, for example, a plurality of frames, one frame, or a part in a frame.
- in this specification, the filtering process for a vertical boundary is expressed as “horizontal filtering”, and the filtering process for a horizontal boundary as “vertical filtering”.
- this naming is adopted because the filter taps of the filtering process for a vertical boundary are arranged along the horizontal direction, while the filter taps of the filtering process for a horizontal boundary are arranged along the vertical direction.
Abstract
Description
1. Overview of Devices
1-1. Image Encoding Device
1-2. Image Decoding Device
2. Description of Existing Techniques
2-1. Basic Configuration of the Deblocking Filter
2-2. Dependencies between Processes in Existing Techniques
3. First Embodiment
3-1. Example Configuration of the Deblocking Filter
3-2. Process Flow
4. Second Embodiment
4-1. Example Configuration of the Deblocking Filter
4-2. Process Flow
4-3. Example of Per-LCU Processing
5. Third Embodiment
5-1. Overview
5-2. Example Configuration of the Deblocking Filter
5-3. Examples of Processing Order
5-4. Process Flow
6. Application to Various Codecs
6-1. Multiview Codec
6-2. Scalable Codec
7. Application Examples
8. Conclusion
First, an overview of example devices to which the technology disclosed herein can be applied will be described with reference to FIGS. 1 and 2. The technology disclosed herein is applicable to, for example, an image encoding device and an image decoding device.
FIG. 1 is a block diagram showing an example of the configuration of an image encoding device 10 according to an embodiment. Referring to FIG. 1, the image encoding device 10 includes an A/D (Analogue to Digital) conversion unit 11, a reordering buffer 12, a subtraction unit 13, an orthogonal transform unit 14, a quantization unit 15, a lossless encoding unit 16, an accumulation buffer 17, a rate control unit 18, an inverse quantization unit 21, an inverse orthogonal transform unit 22, an addition unit 23, a deblocking filter 24a, a frame memory 25, a selector 26, an intra prediction unit 30, a motion estimation unit 40, and a mode selection unit 50.
FIG. 2 is a block diagram showing an example of the configuration of an image decoding device 60 according to an embodiment. Referring to FIG. 2, the image decoding device 60 includes an accumulation buffer 61, a lossless decoding unit 62, an inverse quantization unit 63, an inverse orthogonal transform unit 64, an addition unit 65, a deblocking filter 24b, a reordering buffer 67, a D/A (Digital to Analogue) conversion unit 68, a frame memory 69, selectors 70 and 71, an intra prediction unit 80, and a motion compensation unit 90.
[2-1. Basic Configuration of the Deblocking Filter]
Generally, processing by a deblocking filter in an existing image coding scheme such as H.264/AVC or HEVC includes two kinds of processes: a filtering necessity determination process and a filtering process. These two processes are described below, taking HEVC as an example.
The filtering necessity determination process determines whether the deblocking filter should be applied to each block boundary in an input image. Block boundaries include vertical boundaries between horizontally adjacent blocks and horizontal boundaries between vertically adjacent blocks. In JCTVC-A119, a block size of 8×8 pixels is the smallest processing unit. For example, a macroblock of 16×16 pixels contains four 8×8 blocks, and one (left) vertical boundary and one (top) horizontal boundary per block, i.e., 4 + 4 = 8 boundaries in total, are subject to determination. In this specification, the term "macroblock" also covers a coding unit (CU) in the context of HEVC.
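The boundary count in the paragraph above can be checked with a small sketch (the function name and parameters are illustrative, not part of the specification):

```python
def boundaries_to_check(mb_size, unit=8):
    """Number of vertical + horizontal block boundaries subject to the
    filtering necessity determination within one square macroblock,
    counting one (left) vertical and one (top) horizontal boundary per
    unit-sized block."""
    blocks = (mb_size // unit) ** 2  # 8x8 blocks inside the macroblock
    return 2 * blocks                # one vertical + one horizontal each
```

For a 16×16 macroblock this yields the 4 + 4 = 8 boundaries mentioned above; a 32×32 region with 8×8 units yields 32.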
- Condition A:
(A1) block Ba or Bb is in intra prediction mode;
(A2) block Ba or Bb has non-zero orthogonal transform coefficients; or
(A3) |MVAx-MVBx|≧4 or |MVAy-MVBy|≧4
- Condition B:
|p22-2p12+p02|+|q22-2q12+q02|+|p25-2p15+p05|+|q25-2q15+q05|<β
- Condition A1: block Ba or Bb is in intra prediction mode
When it is determined that the deblocking filter should be applied to a boundary, the filtering process is performed on the pixels to the left and right of a vertical boundary, or above and below a horizontal boundary. For luminance components, the filter strength may be switched between a strong filter and a weak filter according to the pixel values.
- Strength selection: the filter strength is selected for each row (or each column). The strong filter is selected if all of the following conditions C1 to C3 are satisfied, and the weak filter is selected if any one of them is not:
(C1) d<(β>>2)
(C2) (|p3j-p0j|+|q0j-q3j|)<(β>>3)
(C3) |p0j-q0j|<((5tC+1)>>1)
Here, j is the row index for a vertical boundary or the column index for a horizontal boundary, and d=|p22-2p12+p02|+|q22-2q12+q02|+|p25-2p15+p05|+|q25-2q15+q05|.
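The strength selection above can be sketched as follows (variable names are illustrative; `d` is the quantity defined in the preceding sentence, and p3/p0/q0/q3 are the pixels of the line being classified):

```python
def select_strength(d, beta, tc, p3, p0, q0, q3):
    """Select the deblocking filter strength for one line across a boundary,
    following conditions C1-C3 above. Returns "strong" only if all three
    conditions hold, otherwise "weak"."""
    c1 = d < (beta >> 2)
    c2 = (abs(p3 - p0) + abs(q0 - q3)) < (beta >> 3)
    c3 = abs(p0 - q0) < ((5 * tc + 1) >> 1)
    return "strong" if (c1 and c2 and c3) else "weak"
```

For example, a smooth line with low activity near a quiet boundary passes all three conditions and gets the strong filter, while a line whose activity term `d` alone exceeds β>>2 falls back to the weak filter.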
Δ=Clip(-tC,tC,((13(q0j-p0j)+4(q1j-p1j)-5(q2j-p2j)+16)>>5))
p0j=Clip0-255(p0j+Δ)
q0j=Clip0-255(q0j-Δ)
p1j=Clip0-255(p1j+Δ/2)
q1j=Clip0-255(q1j-Δ/2)
p0j=Clip0-255((p2j+2p1j+2p0j+2q0j+q1j+4)>>3)
q0j=Clip0-255((p1j+2p0j+2q0j+2q1j+q2j+4)>>3)
p1j=Clip0-255((p2j+p1j+p0j+q0j+2)>>2)
q1j=Clip0-255((p0j+q0j+q1j+q2j+2)>>2)
p2j=Clip0-255((2p3j+3p2j+p1j+p0j+q0j+4)>>3)
q2j=Clip0-255((p0j+q0j+q1j+3q2j+2q3j+4)>>3)
Δ=Clip(-tC,tC,((((q0j-p0j)<<2)+p1j-q1j+4)>>3))
p0j=Clip0-255(p0j+Δ)
q0j=Clip0-255(q0j-Δ)
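The weak and strong luminance filters listed above can be sketched for one line of pixels across a boundary as follows (a non-normative illustration: Clip0-255 is written as an explicit clamp, and Δ/2 is taken as integer division, which for negative Δ rounds toward negative infinity):

```python
def clip(x, lo, hi):
    """Clamp x into [lo, hi] (the Clip / Clip0-255 operation above)."""
    return max(lo, min(hi, x))

def weak_filter(p2, p1, p0, q0, q1, q2, tc):
    """Weak filter for one line: compute the clipped offset Δ and update
    the two pixels nearest the boundary on each side."""
    d = (13 * (q0 - p0) + 4 * (q1 - p1) - 5 * (q2 - p2) + 16) >> 5
    d = clip(d, -tc, tc)
    p0n = clip(p0 + d, 0, 255)
    q0n = clip(q0 - d, 0, 255)
    p1n = clip(p1 + d // 2, 0, 255)   # Δ/2 as integer division (sketch)
    q1n = clip(q1 - d // 2, 0, 255)
    return p1n, p0n, q0n, q1n

def strong_filter(p3, p2, p1, p0, q0, q1, q2, q3):
    """Strong filter for one line: smooth three pixels on each side of the
    boundary with the fixed-tap averages listed above."""
    p0n = clip((p2 + 2 * p1 + 2 * p0 + 2 * q0 + q1 + 4) >> 3, 0, 255)
    q0n = clip((p1 + 2 * p0 + 2 * q0 + 2 * q1 + q2 + 4) >> 3, 0, 255)
    p1n = clip((p2 + p1 + p0 + q0 + 2) >> 2, 0, 255)
    q1n = clip((p0 + q0 + q1 + q2 + 2) >> 2, 0, 255)
    p2n = clip((2 * p3 + 3 * p2 + p1 + p0 + q0 + 4) >> 3, 0, 255)
    q2n = clip((p0 + q0 + q1 + 3 * q2 + 2 * q3 + 4) >> 3, 0, 255)
    return p2n, p1n, p0n, q0n, q1n, q2n
```

As a sanity check, the strong filter leaves a perfectly flat line unchanged, and the weak filter pulls the pixels adjacent to a step edge toward each other by the clipped offset Δ.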
Here, for explanation, as shown in FIG. 6, for a macroblock MBx (MB0, MB1, ...) having a size of 16×16 pixels, the upper-left vertical boundary is denoted Vx,0, the upper-middle vertical boundary Vx,1, the lower-left vertical boundary Vx,2, the lower-middle vertical boundary Vx,3, the upper-left horizontal boundary Hx,0, the upper-right horizontal boundary Hx,1, the middle-left horizontal boundary Hx,2, and the middle-right horizontal boundary Hx,3. Also, for example, the filtering necessity determination process for a boundary Z is denoted JZ, and the filtering process for it FZ.
[3-1. Example Configuration of the Deblocking Filter]
In this section, an example configuration according to the first embodiment of the deblocking filter 24a of the image encoding device 10 shown in FIG. 1 and the deblocking filter 24b of the image decoding device 60 shown in FIG. 2 will be described. The configurations of the deblocking filter 24a and the deblocking filter 24b may be common. Therefore, in the following description, the deblocking filter 24a and the deblocking filter 24b are collectively referred to as the deblocking filter 24 when there is no particular need to distinguish them.
Also in this embodiment, the processing by the deblocking filter 24 includes the two kinds of processes described above: the filtering necessity determination process and the filtering process. However, the deblocking filter 24 determines whether the deblocking filter should be applied to each vertical boundary and each horizontal boundary based on the pixel values of reference pixels different from those of the existing techniques. More specifically, for the determination on a vertical boundary, the deblocking filter 24 uses as reference pixels those pixels of the blocks adjacent across the vertical boundary that belong to rows to which the deblocking filter for horizontal boundaries is not applied. For the determination on a horizontal boundary, the deblocking filter 24 uses as reference pixels those pixels of the blocks adjacent across the horizontal boundary that belong to columns to which the deblocking filter for vertical boundaries is not applied. Also in this section, as an example, the processing by the deblocking filter 24 takes a block size of 8×8 pixels as one processing unit.
FIG. 15 is a block diagram showing an example of a detailed configuration of the deblocking filter 24 according to the first embodiment that realizes the parallel processing described above. Referring to FIG. 15, the deblocking filter 24 includes a determination block 110, a horizontal filtering block 130, a vertical filtering block 140, and a parallelization control unit 150.
The determination block 110 includes a plurality of vertical boundary determination units 112-1 to 112-n and a plurality of horizontal boundary determination units 114-1 to 114-n. Each vertical boundary determination unit 112 and each horizontal boundary determination unit 114 are supplied with the input image to the deblocking filter 24 and with determination information used for determining the necessity of filtering.
Returning to FIG. 15, the description of the configuration of the deblocking filter 24 is continued. The horizontal filtering block 130 includes a plurality of horizontal filtering units 132-1 to 132-n. Each horizontal filtering unit 132 is supplied with the input image and with the determination result for each vertical boundary from the determination block 110.
The vertical filtering block 140 includes a plurality of vertical filtering units 142-1 to 142-n. Each vertical filtering unit 142 is supplied with the input image and with the determination result for each horizontal boundary from the determination block 110.
The parallelization control unit 150 controls the degree of parallelism of the filtering necessity determination processes in the determination block 110, and the degree of parallelism of the filtering processes in the horizontal filtering block 130 and the vertical filtering block 140.
Next, the flow of processing by the deblocking filter 24 will be described with reference to FIGS. 20 to 22.
FIG. 20 is a flowchart showing an example of the flow of processing by the deblocking filter 24 in a first scenario. The first scenario corresponds to the example of FIG. 13, which has a high degree of parallelism.
FIG. 21 is a flowchart showing an example of the flow of processing by the deblocking filter 24 in a second scenario. The second scenario corresponds to the example of FIG. 14, which has a lower degree of parallelism.
FIG. 22 is a flowchart showing an example of the flow of the filtering necessity determination process, corresponding to steps S102 and S104 of FIG. 20 and steps S202 and S204 of FIG. 21.
In the first embodiment, the pixel values of pixels not updated by the filtering processes for other blocks were used for the filtering necessity determination process for a given block. In contrast, the second embodiment described in this section removes this restriction on the filtering necessity determination process by providing a memory that holds the input pixel values to the deblocking filter, making it possible to use a wider variety of determination conditions.
(1) Description of Each Unit
FIG. 23 is a block diagram showing an example of a detailed configuration of the deblocking filter 24 according to the second embodiment. Referring to FIG. 23, the deblocking filter 24 includes a line memory 208, a determination block 210, the horizontal filtering block 130, the vertical filtering block 140, and the parallelization control unit 150.
In this embodiment, each vertical boundary determination unit 212 may determine the necessity of filtering for the vertical boundary of each block by referring to the pixels of the third and sixth rows of the block, as in the existing technique shown in FIG. 4. However, the pixel values referred to are the pixel values of the input image to the deblocking filter 24 stored in the line memory 208. Similarly, each horizontal boundary determination unit 214 may determine the necessity of filtering for the horizontal boundary of each block by referring to the pixels of the third and sixth columns of the block. In this case, the configuration according to this embodiment can be realized easily, without changing the determination conditions of the filtering necessity determination process implemented in existing devices.
FIG. 24 is an explanatory diagram for describing first and second examples of the determination technique. In the first and second examples, in the filtering necessity determination process for a vertical boundary (in particular, the determination of condition B for luminance components), the pixels of all rows L1 to L8, from the first to the eighth row of each block, are referred to. In the filtering necessity determination process for a horizontal boundary as well, the pixels of all columns, from the first to the eighth column of each block, are referred to.
- Determination condition for luminance components (Luma): the filter is applied if both condition A and condition B are true
- Condition A:
(A1) block Ba or Bb is in intra prediction mode;
(A2) block Ba or Bb has non-zero orthogonal transform coefficients; or
(A3) |MVAx-MVBx|≧4 or |MVAy-MVBy|≧4
- Condition B:
iD0=|p20-2p10+p00|+|q20-2q10+q00|+|p27-2p17+p07|+|q27-2q17+q07|
iD1=|p21-2p11+p01|+|q21-2q11+q01|+|p26-2p16+p06|+|q26-2q16+q06|
iD2=|p22-2p12+p02|+|q22-2q12+q02|+|p25-2p15+p05|+|q25-2q15+q05|
iD3=|p23-2p13+p03|+|q23-2q13+q03|+|p24-2p14+p04|+|q24-2q14+q04|
iDave=(iD0+iD1+iD2+iD3)>>2
With the above definitions, the condition is iDave<β
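The first example's condition B can be sketched as follows (the array layout and function names are illustrative assumptions: p and q are indexed [tap][line], so p[2][j] corresponds to p2j above):

```python
def luma_edge_activity(p, q, j):
    """Second-derivative activity of line j across the boundary: the
    |p2j-2p1j+p0j|+|q2j-2q1j+q0j| term used in each iD above."""
    return (abs(p[2][j] - 2 * p[1][j] + p[0][j])
            + abs(q[2][j] - 2 * q[1][j] + q[0][j]))

def condition_b_average(p, q, beta):
    """First example: pair line j with line 7-j (iD0 pairs lines 1 and 8,
    iD1 pairs 2 and 7, and so on), average the four iD values, and compare
    the average against beta."""
    iD = [luma_edge_activity(p, q, j) + luma_edge_activity(p, q, 7 - j)
          for j in range(4)]
    return (sum(iD) >> 2) < beta
```

The second example below differs only in comparing each iD against β individually instead of comparing their average.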
In the second example, the determination condition B for luminance components may be defined as follows:
- Condition B:
iD0=|p20-2p10+p00|+|q20-2q10+q00|+|p27-2p17+p07|+|q27-2q17+q07|
iD1=|p21-2p11+p01|+|q21-2q11+q01|+|p26-2p16+p06|+|q26-2q16+q06|
iD2=|p22-2p12+p02|+|q22-2q12+q02|+|p25-2p15+p05|+|q25-2q15+q05|
iD3=|p23-2p13+p03|+|q23-2q13+q03|+|p24-2p14+p04|+|q24-2q14+q04|
With the above definitions, the condition is
iD0<β and iD1<β and iD2<β and iD3<β
FIG. 25 is an explanatory diagram for describing third and fourth examples of the determination technique. In the third and fourth examples, in the filtering necessity determination process for a vertical boundary (in particular, the determination of condition B for luminance components), the pixels of four rows L1, L3, L6, and L8 of each block are referred to. In the filtering necessity determination process for a horizontal boundary as well, the pixels of four columns of each block are referred to.
- Determination condition for luminance components (Luma): the filter is applied if both condition A and condition B are true
- Condition A:
(A1) block Ba or Bb is in intra prediction mode;
(A2) block Ba or Bb has non-zero orthogonal transform coefficients; or
(A3) |MVAx-MVBx|≧4 or |MVAy-MVBy|≧4
- Condition B:
iD0=|p20-2p10+p00|+|q20-2q10+q00|+|p27-2p17+p07|+|q27-2q17+q07|
iD2=|p22-2p12+p02|+|q22-2q12+q02|+|p25-2p15+p05|+|q25-2q15+q05|
iDave=(iD0+iD2)>>1
With the above definitions, the condition is iDave<β
In the fourth example, the determination condition B for luminance components may be defined as follows:
- Condition B:
iD0=|p20-2p10+p00|+|q20-2q10+q00|+|p27-2p17+p07|+|q27-2q17+q07|
iD2=|p22-2p12+p02|+|q22-2q12+q02|+|p25-2p15+p05|+|q25-2q15+q05|
With the above definitions, the condition is
iD0<β and iD2<β
FIG. 26 is an explanatory diagram for describing fifth and sixth examples of the determination technique. In the fifth and sixth examples, in the filtering necessity determination process for a vertical boundary, the pixels of four rows L1, L3, L5, and L7 of each block are referred to. In the filtering necessity determination process for a horizontal boundary as well, the pixels of four columns of each block are referred to.
- Determination condition for luminance components (Luma): the filter is applied if both condition A and condition B are true
- Condition A:
(A1) block Ba or Bb is in intra prediction mode;
(A2) block Ba or Bb has non-zero orthogonal transform coefficients; or
(A3) |MVAx-MVBx|≧4 or |MVAy-MVBy|≧4
- Condition B:
iD0=|p20-2p10+p00|+|q20-2q10+q00|+|p26-2p16+p06|+|q26-2q16+q06|
iD2=|p22-2p12+p02|+|q22-2q12+q02|+|p24-2p14+p04|+|q24-2q14+q04|
iDave=(iD0+iD2)>>1
With the above definitions, the condition is iDave<β
In the sixth example, the determination condition B for luminance components may be defined as follows:
- Condition B:
iD0=|p20-2p10+p00|+|q20-2q10+q00|+|p26-2p16+p06|+|q26-2q16+q06|
iD2=|p22-2p12+p02|+|q22-2q12+q02|+|p24-2p14+p04|+|q24-2q14+q04|
With the above definitions, the condition is
iD0<β and iD2<β
Also in the second embodiment, as in the first embodiment, the deblocking filter 24 can operate at different degrees of parallelism.
As described above, the techniques according to the various embodiments described herein may be realized as processing based on the coding units (CUs) of HEVC. In HEVC, the coding unit with the largest size is called an LCU (Largest Coding Unit); for example, an LCU of 64×64 pixels is selectable. The smallest selectable CU size is 8×8 pixels. When encoding and decoding an image, processing is usually performed LCU by LCU in raster scan order, starting from the LCU at the upper left of a picture (or slice). This section therefore describes an example of such per-LCU processing in the deblocking filter 24.
[5-1. Overview]
In the second embodiment, by referring to the input pixel values to the deblocking filter in the filtering necessity determination processes for both vertical and horizontal boundaries, the dependency between these filtering necessity determination processes is eliminated, and the determination processes can be parallelized. In the third embodiment described in this section, the idea of the second embodiment is also applied to the filtering processes. That is, in the filtering processes for both vertical and horizontal boundaries as well, the input pixel values to the deblocking filter are filtered.
FIG. 30 is a block diagram showing an example of a detailed configuration of the deblocking filter 24 according to the third embodiment. Referring to FIG. 30, the deblocking filter 24 includes a line memory 208, a determination block 210, a horizontal filtering block 330, a vertical filtering block 340, the parallelization control unit 150, and a calculation unit 360. The determination block 210 includes a plurality of vertical boundary determination units 212-1 to 212-n and a plurality of horizontal boundary determination units 214-1 to 214-n. Each vertical boundary determination unit 212 and each horizontal boundary determination unit 214 may determine the necessity of filtering for each boundary according to the various determination conditions exemplified in the second embodiment.
Two examples of processing orders that can be realized by the deblocking filter 24 in this embodiment are described below. Here, again as an example, an image having a size of 32×32 pixels is input to the deblocking filter. The input image includes four macroblocks MB0 to MB3, each having a size of 16×16 pixels.
First, for comparison, FIG. 34 shows the processing order in the case where the dependency between the filtering processes for vertical boundaries and the filtering processes for horizontal boundaries remains. The processing order shown in FIG. 34 is substantially the same as the processing order shown in FIG. 13 in connection with the first embodiment. In FIG. 34, in the first step, the filtering necessity determination processes JV0,0 to JV3,3 and JH0,0 to JH3,3 for all vertical and all horizontal boundaries of the four macroblocks MB0 to MB3 are executed in parallel. Next, in the second step, the filtering processes FV0,0 to FV3,3 for the sixteen vertical boundaries of the four macroblocks MB0 to MB3 are executed. Next, in the third step, the filtering processes FH0,0 to FH3,3 for the sixteen horizontal boundaries of the four macroblocks MB0 to MB3 are executed. After that, in the fourth step (omitted in FIG. 13), the pixel values after the filtering processes for the horizontal boundaries are stored in a memory for output from the deblocking filter 24.
While the first example described above maximizes the degree of parallelism, the deblocking filter 24 can also realize per-macroblock processing in this embodiment.
FIG. 38 is a flowchart showing an example of the flow of processing by the deblocking filter according to the third embodiment. FIG. 39 is a flowchart showing an example of the flow of the pixel value calculation process shown in FIG. 38.
The technology according to the present disclosure is applicable to various codecs related to image encoding and decoding. This section describes examples in which the technology according to the present disclosure is applied to a multiview codec and to a scalable codec.
A multiview codec is an image coding scheme for encoding and decoding so-called multi-view video. FIG. 40 is an explanatory diagram for describing the multiview codec. Referring to FIG. 40, sequences of frames of three views respectively captured at three viewpoints are shown. A view ID (view_id) is assigned to each view. One of these views is designated as the base view, and views other than the base view are called non-base views. In the example of FIG. 40, the view whose view ID is "0" is the base view, and the two views whose view IDs are "1" and "2" are non-base views. When encoding this multiview image data, the data size of the encoded stream as a whole can be compressed by encoding the frames of the non-base views based on the encoding information for the frames of the base view.
A scalable codec is an image coding scheme for realizing so-called hierarchical coding. FIG. 43 is an explanatory diagram for describing the scalable codec. Referring to FIG. 43, sequences of frames of three layers differing in spatial resolution, temporal resolution, or image quality are shown. A layer ID (layer_id) is assigned to each layer. Among these layers, the layer with the lowest resolution (or image quality) is the base layer, and layers other than the base layer are called enhancement layers. In the example of FIG. 43, the layer whose layer ID is "0" is the base layer, and the two layers whose layer IDs are "1" and "2" are enhancement layers. When encoding this multilayer image data, the data size of the encoded stream as a whole can be compressed by encoding the frames of the enhancement layers based on the encoding information for the frames of the base layer.
The image encoding device 10 and the image decoding device 60 according to the embodiments described above may be applied to various electronic apparatuses: transmitters or receivers for satellite broadcasting, wired broadcasting such as cable TV, distribution over the Internet, and distribution to terminals via cellular communication; recording devices that record images on media such as optical disks, magnetic disks, and flash memory; and reproduction devices that reproduce images from these storage media. Four application examples are described below.
FIG. 46 shows an example of a schematic configuration of a television device to which the embodiment described above is applied. The television device 900 includes an antenna 901, a tuner 902, a demultiplexer 903, a decoder 904, a video signal processing unit 905, a display unit 906, an audio signal processing unit 907, a speaker 908, an external interface 909, a control unit 910, a user interface 911, and a bus 912.
FIG. 47 shows an example of a schematic configuration of a mobile phone to which the embodiment described above is applied. The mobile phone 920 includes an antenna 921, a communication unit 922, an audio codec 923, a speaker 924, a microphone 925, a camera unit 926, an image processing unit 927, a multiplexing/demultiplexing unit 928, a recording/reproducing unit 929, a display unit 930, a control unit 931, an operation unit 932, and a bus 933.
FIG. 48 shows an example of a schematic configuration of a recording/reproducing device to which the embodiment described above is applied. The recording/reproducing device 940 encodes, for example, audio data and video data of a received broadcast program and records them on a recording medium. The recording/reproducing device 940 may also encode audio data and video data acquired from another device, for example, and record them on a recording medium. The recording/reproducing device 940 also reproduces data recorded on a recording medium on a monitor and a speaker, for example, in accordance with a user's instruction. At this time, the recording/reproducing device 940 decodes the audio data and the video data.
FIG. 49 shows an example of a schematic configuration of an imaging device to which the embodiment described above is applied. The imaging device 960 captures an image of a subject to generate an image, encodes the image data, and records it on a recording medium.
So far, three examples of the deblocking filter of the image encoding device 10 and the image decoding device 60 according to an embodiment have been described in detail with reference to FIGS. 1 to 49. According to these three examples, the determination of whether to apply the deblocking filter to a vertical boundary and the determination of whether to apply the deblocking filter to a horizontal boundary are both performed using the input pixel values to the deblocking filter. These determination processes can therefore be executed in parallel, without depending on the results of the filtering processes. The processing dependency between macroblocks is also eliminated, so processing for a plurality of macroblocks can be parallelized. As a result, delays and data rate drops caused by the large processing amount of the deblocking filter can be avoided, and the processing can be sped up. It also becomes possible to set the degree of parallelism and the order of the deblocking filter processing flexibly according to various conditions such as the image size or constraints of the implementation environment.
112-1 to 112-n, 212-1 to 212-n: first determination units (vertical boundary determination units)
114-1 to 114-n, 214-1 to 214-n: second determination units (horizontal boundary determination units)
132-1 to 132-n, 332-1 to 332-n: first filtering units (horizontal filtering units)
142-1 to 142-n, 342-1 to 342-n: second filtering units (vertical filtering units)
150: parallelization control unit
208: line memory (memory)
360: calculation unit
Claims (12)
- A decoding unit that decodes an image from an encoded stream;
a determination unit that executes a determination process of determining whether to apply a deblocking filter to adjacent blocks neighboring a block boundary within the image decoded by the decoding unit;
a filtering unit that applies the deblocking filter to the adjacent blocks to which the determination unit has determined that the deblocking filter is to be applied; and
a control unit that causes the determination unit to execute the determination processes for a vertical block boundary and a horizontal block boundary using pixels of the adjacent blocks of a reconstructed image as reference pixels;
An image processing device comprising the above. - The control unit
causes the determination unit to use, as the reference pixels of the determination process for the vertical block boundary, pixels at positions that are not updated by the filtering of the horizontal block boundary by the filtering unit, and
causes the determination unit to use, as the reference pixels of the determination process for the horizontal block boundary, pixels at positions that are not updated by the filtering of the vertical block boundary by the filtering unit,
The image processing device according to claim 1. - The image processing device according to claim 2, wherein the control unit causes the determination unit to execute the determination processes for the vertical block boundary and the horizontal block boundary in parallel.
- The control unit
causes the determination unit to use, as the reference pixels of the determination process for the vertical block boundary, pixels included in horizontal lines orthogonal to the vertical block boundary near the center of the vertical block boundary, and
causes the determination unit to use, as the reference pixels of the determination process for the horizontal block boundary, pixels included in vertical lines orthogonal to the horizontal block boundary near the center of the horizontal block boundary,
The image processing device according to claim 3. - The control unit
causes the determination unit to use, as the horizontal lines, two lines orthogonal to the vertical block boundary near the center of the vertical block boundary, and
causes the determination unit to use, as the vertical lines, two lines orthogonal to the horizontal block boundary near the center of the horizontal block boundary,
The image processing device according to claim 4. - The image processing device according to claim 3, wherein the control unit controls the degree of parallelism of the determination processes by the determination unit, or the degree of parallelism of the filtering by the filtering unit, based on a parameter included in a sequence parameter set, a picture parameter set, or a slice header.
- The image processing device according to claim 3, wherein the control unit controls the degree of parallelism of the determination processes by the determination unit, or the degree of parallelism of the filtering by the filtering unit, based on the size of the image.
- decoding an image from an encoded stream;
performing a determination process of determining whether to apply a deblocking filter to neighbouring blocks adjacent to a block boundary in the decoded image;
applying the deblocking filter to neighbouring blocks for which it has been determined in the determination process that the deblocking filter is to be applied; and
controlling the determination process so that the determination processes for a vertical block boundary and a horizontal block boundary are executed using, as reference pixels, pixels of the neighbouring blocks of a reconstructed image,
an image processing method comprising the above. - A determination section that performs a determination process of determining whether to apply a deblocking filter to neighbouring blocks adjacent to a block boundary in an image that is locally decoded when an image to be encoded is encoded;
a filtering section that applies the deblocking filter to neighbouring blocks for which the determination section has determined that the deblocking filter is to be applied;
a control section that causes the determination section to execute the determination processes for a vertical block boundary and a horizontal block boundary using, as reference pixels, pixels of the neighbouring blocks of a reconstructed image; and
an encoding section that encodes the image to be encoded using the image filtered by the filtering section,
an image processing device comprising the above. - The control section:
causes the determination section to use, as the reference pixels for the determination process for the vertical block boundary, pixels at positions that are not updated in the filtering of the horizontal block boundary by the filtering section, and
causes the determination section to use, as the reference pixels for the determination process for the horizontal block boundary, pixels at positions that are not updated in the filtering of the vertical block boundary by the filtering section,
the image processing device according to claim 9. - The image processing device according to claim 10, wherein the control section causes the determination section to execute the determination processes for the vertical block boundary and the horizontal block boundary in parallel.
- performing a determination process of determining whether to apply a deblocking filter to neighbouring blocks adjacent to a block boundary in an image that is locally decoded when an image to be encoded is encoded;
applying the deblocking filter to neighbouring blocks for which it has been determined in the determination process that the deblocking filter is to be applied;
controlling the determination process so that the determination processes for a vertical block boundary and a horizontal block boundary are executed using, as reference pixels, pixels of the neighbouring blocks of a reconstructed image; and
encoding the image to be encoded using the image filtered by the deblocking filter,
an image processing method comprising the above.
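Claims 4 and 5 restrict the reference pixels to one or two lines that cross the boundary near its centre. The following is a hypothetical sketch of such a decision for a vertical edge: the second-difference activity measure and the `beta` threshold are simplified stand-ins for an actual standard's decision test, and choosing rows `block//2 - 1` and `block//2` as the two centre lines is an assumption about where "near the centre" falls for an 8-sample edge.

```python
import numpy as np

def vertical_edge_decision(frame, x, y, block=8, beta=16):
    """Decide the vertical edge at column x for the block starting at row y,
    sampling only the two horizontal lines nearest the centre of the edge."""
    rows = (y + block // 2 - 1, y + block // 2)   # the two centre lines
    d = 0
    for r in rows:
        # Three pixels on each side of the edge along one horizontal line.
        p0, p1, p2 = int(frame[r, x - 1]), int(frame[r, x - 2]), int(frame[r, x - 3])
        q0, q1, q2 = int(frame[r, x]), int(frame[r, x + 1]), int(frame[r, x + 2])
        # Second-difference activity on each side: low activity near a sharp
        # block edge suggests the discontinuity is a coding artefact.
        d += abs(p2 - 2 * p1 + p0) + abs(q2 - 2 * q1 + q0)
    return d < beta
```

Because only two lines are read per edge, the per-edge cost of the determination stays small even when many edges are evaluated in parallel.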
Priority Applications (20)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
RU2013125498/07A RU2585662C2 (ru) | 2010-12-07 | 2011-12-02 | Устройство обработки изображений и способ обработки изображений |
EP20185677.0A EP3748962A1 (en) | 2010-12-07 | 2011-12-02 | Image processing device and image processing method |
AU2011339460A AU2011339460B2 (en) | 2010-12-07 | 2011-12-02 | Image processing device and image processing method |
EP11846148.2A EP2651127B1 (en) | 2010-12-07 | 2011-12-02 | Image processing device and image processing method |
KR1020187019829A KR101963096B1 (ko) | 2010-12-07 | 2011-12-02 | 화상 처리 장치, 화상 처리 방법 및 기록 매체 |
US13/990,893 US9973763B2 (en) | 2010-12-07 | 2011-12-02 | Image processing device and image processing method for applying filtering determination processes in parallel |
JP2012547833A JP6011341B2 (ja) | 2010-12-07 | 2011-12-02 | 画像処理装置、画像処理方法、プログラム及び記録媒体 |
ES11846148T ES2856823T3 (es) | 2010-12-07 | 2011-12-02 | Dispositivo de procesamiento de imágenes y método de procesamiento de imágenes |
EP20185688.7A EP3748964A1 (en) | 2010-12-07 | 2011-12-02 | Image processing device and image processing method |
EP20185680.4A EP3748963B1 (en) | 2010-12-07 | 2011-12-02 | Image processing device and image processing method |
MX2016005041A MX355896B (es) | 2010-12-07 | 2011-12-02 | Dispositivo de procesamiento de imagenes y metodo de procesamiento de imagenes. |
MX2013006131A MX2013006131A (es) | 2010-12-07 | 2011-12-02 | Dispositivo de procesamiento de imagenes y metodo de procesamiento de imagenes. |
KR1020137013869A KR101879890B1 (ko) | 2010-12-07 | 2011-12-02 | 화상 처리 장치, 화상 처리 방법 및 기록 매체 |
KR1020187019836A KR101963097B1 (ko) | 2010-12-07 | 2011-12-02 | 화상 처리 장치 및 화상 처리 방법 |
CA2815819A CA2815819C (en) | 2010-12-07 | 2011-12-02 | Image processing device and image processing method for applying filtering determination processes in parallel |
CN201180057815.8A CN103229506B (zh) | 2010-12-07 | 2011-12-02 | 图像处理设备和图像处理方法 |
BR112013013509-3A BR112013013509B1 (pt) | 2010-12-07 | 2011-12-02 | Dispositivo de processamento de imagem, e, método de processamento de imagem |
US15/902,870 US10362318B2 (en) | 2010-12-07 | 2018-02-22 | Image processing device and image processing method that horizontal filtering on pixel blocks |
US16/391,314 US10582202B2 (en) | 2010-12-07 | 2019-04-23 | Image processing device and image processing method that horizontal filtering on pixel blocks |
US16/693,374 US10931955B2 (en) | 2010-12-07 | 2019-11-25 | Image processing device and image processing method that horizontal filtering on pixel blocks |
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010-272907 | 2010-12-07 | ||
JP2010272907 | 2010-12-07 | ||
JP2011004391 | 2011-01-12 | ||
JP2011-004391 | 2011-01-12 | ||
JP2011-045652 | 2011-03-02 | ||
JP2011045652 | 2011-03-02 | ||
JP2011117557 | 2011-05-26 | ||
JP2011-117557 | 2011-05-26 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/990,893 A-371-Of-International US9973763B2 (en) | 2010-12-07 | 2011-12-02 | Image processing device and image processing method for applying filtering determination processes in parallel |
US15/902,870 Continuation US10362318B2 (en) | 2010-12-07 | 2018-02-22 | Image processing device and image processing method that horizontal filtering on pixel blocks |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012077607A1 true WO2012077607A1 (ja) | 2012-06-14 |
Family
ID=46207095
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/077953 WO2012077607A1 (ja) | 2010-12-07 | 2011-12-02 | 画像処理装置及び画像処理方法 |
Country Status (13)
Country | Link |
---|---|
US (4) | US9973763B2 (ja) |
EP (4) | EP2651127B1 (ja) |
JP (3) | JP6011341B2 (ja) |
KR (3) | KR101963097B1 (ja) |
CN (4) | CN103229506B (ja) |
AU (1) | AU2011339460B2 (ja) |
CA (1) | CA2815819C (ja) |
ES (1) | ES2856823T3 (ja) |
MX (3) | MX355896B (ja) |
MY (1) | MY163146A (ja) |
RU (2) | RU2691962C2 (ja) |
TW (4) | TWI520504B (ja) |
WO (1) | WO2012077607A1 (ja) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPWO2014087861A1 (ja) * | 2012-12-06 | 2017-01-05 | ソニー株式会社 | 画像処理装置、画像処理方法、およびプログラム |
JP2018014743A (ja) * | 2010-12-07 | 2018-01-25 | ソニー株式会社 | 画像処理装置及び画像処理方法 |
US10334279B2 (en) | 2010-12-07 | 2019-06-25 | Sony Corporation | Image processing device and image processing method |
WO2023047950A1 (ja) * | 2021-09-22 | 2023-03-30 | シャープ株式会社 | 動画像符号化装置、動画像復号装置 |
JP7386883B2 (ja) | 2019-02-27 | 2023-11-27 | テレフオンアクチーボラゲット エルエム エリクソン(パブル) | サブペル動きベクトルしきい値を使用するデブロッキング |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7295609B2 (en) | 2001-11-30 | 2007-11-13 | Sony Corporation | Method and apparatus for coding image information, method and apparatus for decoding image information, method and apparatus for coding and decoding image information, and system of coding and transmitting image information |
JP5359657B2 (ja) | 2009-07-31 | 2013-12-04 | ソニー株式会社 | 画像符号化装置および方法、記録媒体、並びにプログラム |
JP5914962B2 (ja) | 2010-04-09 | 2016-05-11 | ソニー株式会社 | 画像処理装置および方法、プログラム、並びに、記録媒体 |
TWI508534B (zh) | 2010-05-18 | 2015-11-11 | Sony Corp | Image processing apparatus and image processing method |
AU2011259939B2 (en) | 2010-06-04 | 2015-07-30 | Sony Corporation | Image processing device and method |
JP5741076B2 (ja) | 2010-12-09 | 2015-07-01 | ソニー株式会社 | 画像処理装置及び画像処理方法 |
JP5884313B2 (ja) | 2011-01-18 | 2016-03-15 | ソニー株式会社 | 画像処理装置、画像処理方法、プログラム及び記録媒体 |
SG10201606972VA (en) | 2011-02-10 | 2016-10-28 | Sony Corp | Image processing device and image processing method |
BR122020013609B1 (pt) | 2011-03-11 | 2023-02-23 | Sony Corporation | Aparelho e método de processamento de imagem |
TWI530161B (zh) | 2011-06-07 | 2016-04-11 | Sony Corp | Image processing apparatus and method |
CN107318019B (zh) | 2011-06-28 | 2020-02-28 | 索尼公司 | 图像处理装置和图像处理方法 |
JP5907367B2 (ja) | 2011-06-28 | 2016-04-26 | ソニー株式会社 | 画像処理装置および方法、プログラム、並びに記録媒体 |
TWI667913B (zh) * | 2012-04-06 | 2019-08-01 | 日商新力股份有限公司 | Image processing device, image processing method, computer program, and recording medium |
CN105451025B (zh) * | 2014-07-31 | 2019-03-01 | 富士通株式会社 | 图像处理方法以及装置 |
US20160173897A1 (en) * | 2014-12-10 | 2016-06-16 | Haihua Wu | High Parallelism Dependency Pattern for GPU Based Deblock |
KR102597231B1 (ko) * | 2016-09-30 | 2023-11-03 | 삼성디스플레이 주식회사 | 영상 처리 장치, 표시 장치 및 두부 장착 표시 장치 |
WO2018137190A1 (en) * | 2017-01-25 | 2018-08-02 | Shanghai United Imaging Healthcare Co., Ltd. | System and method for image reconstruction |
KR102517616B1 (ko) * | 2017-04-06 | 2023-04-04 | 파나소닉 인텔렉츄얼 프로퍼티 코포레이션 오브 아메리카 | 부호화 장치, 복호 장치, 부호화 방법 및 복호 방법 |
JP2019074967A (ja) * | 2017-10-17 | 2019-05-16 | キヤノン株式会社 | フィルタ処理装置およびその制御方法 |
US11153607B2 (en) * | 2018-01-29 | 2021-10-19 | Mediatek Inc. | Length-adaptive deblocking filtering in video coding |
JP7155568B2 (ja) | 2018-03-26 | 2022-10-19 | 東洋製罐グループホールディングス株式会社 | 金属加工物 |
CA3094818A1 (en) * | 2018-03-29 | 2019-10-03 | Sony Corporation | Image processing apparatus and image processing method |
CN113545041A (zh) * | 2019-03-07 | 2021-10-22 | 数字洞察力有限公司 | 图像编码/解码方法和设备 |
JP7026065B2 (ja) * | 2019-03-12 | 2022-02-25 | Kddi株式会社 | 画像復号装置、画像復号方法及びプログラム |
CN110213579A (zh) * | 2019-06-11 | 2019-09-06 | 上海富瀚微电子股份有限公司 | 双核心计算单元实现环路滤波的方法 |
CN113965764B (zh) * | 2020-07-21 | 2023-04-07 | Oppo广东移动通信有限公司 | 图像编码方法、图像解码方法及相关装置 |
CN116114245A (zh) | 2021-09-02 | 2023-05-12 | 辉达公司 | 在视频编码过程中视频帧的并行处理 |
CN116097649A (zh) * | 2021-09-06 | 2023-05-09 | 辉达公司 | 无滤波相依性的视频帧的并行编码 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS63104586A (ja) * | 1986-10-21 | 1988-05-10 | Sony Corp | ブロツク符号化された画像信号の復号装置 |
JP2003333597A (ja) * | 2002-03-07 | 2003-11-21 | Seiko Epson Corp | 画像処理装置及び画像処理プログラム、並びに画像処理方法 |
JP2006174138A (ja) * | 2004-12-16 | 2006-06-29 | Sony Corp | データ処理装置、画像処理装置、それらの方法およびプログラム |
JP2006174486A (ja) * | 2004-12-17 | 2006-06-29 | Samsung Electronics Co Ltd | ビデオデータの水平及び垂直フィルタリングを同時に行うデブロッキングフィルタ及び動作方法 |
JP2008048181A (ja) * | 2006-08-17 | 2008-02-28 | Fujitsu Ltd | デブロッキングフィルタ、画像符号化装置および画像復号化装置 |
Family Cites Families (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2757297B1 (fr) * | 1996-12-18 | 1999-03-05 | Thomson Multimedia Sa | Procede et dispositif d'etiquetage de region |
CN1239021C (zh) * | 2000-04-21 | 2006-01-25 | 索尼公司 | 信息处理设备及方法、程序和记录介质 |
US7295609B2 (en) | 2001-11-30 | 2007-11-13 | Sony Corporation | Method and apparatus for coding image information, method and apparatus for decoding image information, method and apparatus for coding and decoding image information, and system of coding and transmitting image information |
CN101448162B (zh) * | 2001-12-17 | 2013-01-02 | 微软公司 | 处理视频图像的方法 |
JP4404639B2 (ja) | 2002-02-15 | 2010-01-27 | ソニー株式会社 | 脚式移動ロボットの脚装置並びに脚式移動ロボットの制御方法 |
US7620109B2 (en) * | 2002-04-10 | 2009-11-17 | Microsoft Corporation | Sub-pixel interpolation in motion estimation and compensation |
WO2003098929A1 (fr) * | 2002-05-20 | 2003-11-27 | Sony Corporation | Appareil et procede de traitement de signal video |
EP1845529B1 (en) * | 2003-02-19 | 2011-05-04 | Panasonic Corporation | Recording medium, playback apparatus and recording method |
KR100644618B1 (ko) * | 2004-07-02 | 2006-11-10 | 삼성전자주식회사 | 블록 단위로 부호화된 영상의 블록경계에서의 불연속성제거필터 및 방법 |
US8116379B2 (en) | 2004-10-08 | 2012-02-14 | Stmicroelectronics, Inc. | Method and apparatus for parallel processing of in-loop deblocking filter for H.264 video compression standard |
KR100679035B1 (ko) * | 2005-01-04 | 2007-02-06 | 삼성전자주식회사 | 인트라 bl 모드를 고려한 디블록 필터링 방법, 및 상기방법을 이용하는 다 계층 비디오 인코더/디코더 |
US7792385B2 (en) * | 2005-01-25 | 2010-09-07 | Globalfoundries Inc. | Scratch pad for storing intermediate loop filter data |
JP2006270851A (ja) * | 2005-03-25 | 2006-10-05 | Victor Co Of Japan Ltd | 画像符号化装置及び画像復号装置 |
CN100420269C (zh) * | 2005-12-09 | 2008-09-17 | 逐点半导体(上海)有限公司 | 一种图像增强处理系统和处理方法 |
JP4591405B2 (ja) * | 2006-05-10 | 2010-12-01 | ソニー株式会社 | 情報処理装置及び情報処理方法、並びにコンピュータ・プログラム |
US20080123750A1 (en) * | 2006-11-29 | 2008-05-29 | Michael Bronstein | Parallel deblocking filter for H.264 video codec |
WO2008090793A1 (ja) * | 2007-01-22 | 2008-07-31 | Nec Corporation | 画像再符号化装置、画像再符号化方法及び画像符号化プログラム |
US8204129B2 (en) * | 2007-03-27 | 2012-06-19 | Freescale Semiconductor, Inc. | Simplified deblock filtering for reduced memory access and computational complexity |
US20110002395A1 (en) * | 2008-03-31 | 2011-01-06 | Nec Corporation | Deblocking filtering processor and deblocking filtering method |
US20090304086A1 (en) * | 2008-06-06 | 2009-12-10 | Apple Inc. | Method and system for video coder and decoder joint optimization |
JP2010014513A (ja) * | 2008-07-03 | 2010-01-21 | Fuji Electric Holdings Co Ltd | はんだフラックス中の金属成分分析方法 |
CN101321290B (zh) * | 2008-07-17 | 2010-12-15 | 北京数码视讯科技股份有限公司 | 基于数字信号处理器的去块滤波方法 |
JPWO2010035403A1 (ja) * | 2008-09-25 | 2012-02-16 | パナソニック株式会社 | フィルタ装置およびフィルタ方法 |
US8861586B2 (en) * | 2008-10-14 | 2014-10-14 | Nvidia Corporation | Adaptive deblocking in a decoding pipeline |
JP2010136245A (ja) * | 2008-12-08 | 2010-06-17 | Toshiba Corp | 動画像処理装置及び方法 |
JP2010141513A (ja) * | 2008-12-10 | 2010-06-24 | Toshiba Corp | 演算装置及び動画像符号化装置 |
JP5344238B2 (ja) | 2009-07-31 | 2013-11-20 | ソニー株式会社 | 画像符号化装置および方法、記録媒体、並びにプログラム |
JP5359657B2 (ja) | 2009-07-31 | 2013-12-04 | ソニー株式会社 | 画像符号化装置および方法、記録媒体、並びにプログラム |
CN102860005B (zh) * | 2010-02-05 | 2016-07-06 | 瑞典爱立信有限公司 | 去块滤波控制 |
US9185430B2 (en) * | 2010-03-15 | 2015-11-10 | Mediatek Singapore Pte. Ltd. | Deblocking filtering method and deblocking filter |
JP5914962B2 (ja) | 2010-04-09 | 2016-05-11 | ソニー株式会社 | 画像処理装置および方法、プログラム、並びに、記録媒体 |
JPWO2011129090A1 (ja) | 2010-04-13 | 2013-07-11 | パナソニック株式会社 | 符号化歪み除去方法、符号化方法、復号化方法、符号化歪み除去装置、符号化装置、および復号化装置 |
TWI508534B (zh) | 2010-05-18 | 2015-11-11 | Sony Corp | Image processing apparatus and image processing method |
AU2011259939B2 (en) | 2010-06-04 | 2015-07-30 | Sony Corporation | Image processing device and method |
CN106713934B (zh) | 2010-12-07 | 2019-04-16 | 索尼公司 | 图像处理设备和图像处理方法 |
MY163146A (en) * | 2010-12-07 | 2017-08-15 | Sony Corp | Image processing device and image processing method |
JP5741076B2 (ja) | 2010-12-09 | 2015-07-01 | ソニー株式会社 | 画像処理装置及び画像処理方法 |
SG10201606972VA (en) | 2011-02-10 | 2016-10-28 | Sony Corp | Image processing device and image processing method |
BR122020013609B1 (pt) | 2011-03-11 | 2023-02-23 | Sony Corporation | Aparelho e método de processamento de imagem |
TWI530161B (zh) | 2011-06-07 | 2016-04-11 | Sony Corp | Image processing apparatus and method |
JP5907367B2 (ja) | 2011-06-28 | 2016-04-26 | ソニー株式会社 | 画像処理装置および方法、プログラム、並びに記録媒体 |
MX342497B (es) | 2012-06-29 | 2016-10-03 | Sony Corp | Dispositivo de codificacion y metodo de codificacion. |
-
2011
- 2011-12-02 MY MYPI2013001873A patent/MY163146A/en unknown
- 2011-12-02 CN CN201180057815.8A patent/CN103229506B/zh active Active
- 2011-12-02 EP EP11846148.2A patent/EP2651127B1/en active Active
- 2011-12-02 CN CN201610834002.XA patent/CN106385586A/zh active Pending
- 2011-12-02 AU AU2011339460A patent/AU2011339460B2/en active Active
- 2011-12-02 EP EP20185677.0A patent/EP3748962A1/en not_active Withdrawn
- 2011-12-02 CN CN201610833557.2A patent/CN106454358B/zh active Active
- 2011-12-02 RU RU2016105120A patent/RU2691962C2/ru active
- 2011-12-02 MX MX2016005041A patent/MX355896B/es unknown
- 2011-12-02 CN CN201610833935.7A patent/CN106454359B/zh active Active
- 2011-12-02 KR KR1020187019836A patent/KR101963097B1/ko active IP Right Grant
- 2011-12-02 MX MX2013006131A patent/MX2013006131A/es active IP Right Grant
- 2011-12-02 KR KR1020137013869A patent/KR101879890B1/ko active IP Right Grant
- 2011-12-02 RU RU2013125498/07A patent/RU2585662C2/ru active
- 2011-12-02 JP JP2012547833A patent/JP6011341B2/ja active Active
- 2011-12-02 EP EP20185680.4A patent/EP3748963B1/en active Active
- 2011-12-02 EP EP20185688.7A patent/EP3748964A1/en not_active Withdrawn
- 2011-12-02 US US13/990,893 patent/US9973763B2/en active Active
- 2011-12-02 CA CA2815819A patent/CA2815819C/en active Active
- 2011-12-02 KR KR1020187019829A patent/KR101963096B1/ko active IP Right Grant
- 2011-12-02 ES ES11846148T patent/ES2856823T3/es active Active
- 2011-12-02 WO PCT/JP2011/077953 patent/WO2012077607A1/ja active Application Filing
- 2011-12-05 TW TW100144707A patent/TWI520504B/zh active
- 2011-12-05 TW TW104111383A patent/TWI580250B/zh active
- 2011-12-05 TW TW104111379A patent/TWI580249B/zh active
- 2011-12-05 TW TW104111385A patent/TWI569627B/zh active
-
2013
- 2013-05-31 MX MX2014013389A patent/MX338737B/es unknown
-
2016
- 2016-07-21 JP JP2016142996A patent/JP6222305B2/ja active Active
-
2017
- 2017-09-11 JP JP2017173732A patent/JP6471787B2/ja active Active
-
2018
- 2018-02-22 US US15/902,870 patent/US10362318B2/en active Active
-
2019
- 2019-04-23 US US16/391,314 patent/US10582202B2/en active Active
- 2019-11-25 US US16/693,374 patent/US10931955B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS63104586A (ja) * | 1986-10-21 | 1988-05-10 | Sony Corp | ブロツク符号化された画像信号の復号装置 |
JP2003333597A (ja) * | 2002-03-07 | 2003-11-21 | Seiko Epson Corp | 画像処理装置及び画像処理プログラム、並びに画像処理方法 |
JP2006174138A (ja) * | 2004-12-16 | 2006-06-29 | Sony Corp | データ処理装置、画像処理装置、それらの方法およびプログラム |
JP2006174486A (ja) * | 2004-12-17 | 2006-06-29 | Samsung Electronics Co Ltd | ビデオデータの水平及び垂直フィルタリングを同時に行うデブロッキングフィルタ及び動作方法 |
JP2008048181A (ja) * | 2006-08-17 | 2008-02-28 | Fujitsu Ltd | デブロッキングフィルタ、画像符号化装置および画像復号化装置 |
Non-Patent Citations (1)
Title |
---|
K. Ugur, K. R. Andersson, A. Fuldseth, "JCTVC-A119: Video coding technology proposal", Documents of the first meeting of the Joint Collaborative Team on Video Coding (JCT-VC), 15 April 2010 (2010-04-15) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018014743A (ja) * | 2010-12-07 | 2018-01-25 | ソニー株式会社 | 画像処理装置及び画像処理方法 |
US10334279B2 (en) | 2010-12-07 | 2019-06-25 | Sony Corporation | Image processing device and image processing method |
US10362318B2 (en) | 2010-12-07 | 2019-07-23 | Sony Corporation | Image processing device and image processing method that horizontal filtering on pixel blocks |
US10582202B2 (en) | 2010-12-07 | 2020-03-03 | Sony Corporation | Image processing device and image processing method that horizontal filtering on pixel blocks |
US10785504B2 (en) | 2010-12-07 | 2020-09-22 | Sony Corporation | Image processing device and image processing method |
US10931955B2 (en) | 2010-12-07 | 2021-02-23 | Sony Corporation | Image processing device and image processing method that horizontal filtering on pixel blocks |
US11381846B2 (en) | 2010-12-07 | 2022-07-05 | Sony Corporation | Image processing device and image processing method |
JPWO2014087861A1 (ja) * | 2012-12-06 | 2017-01-05 | ソニー株式会社 | 画像処理装置、画像処理方法、およびプログラム |
JP7386883B2 (ja) | 2019-02-27 | 2023-11-27 | テレフオンアクチーボラゲット エルエム エリクソン(パブル) | サブペル動きベクトルしきい値を使用するデブロッキング |
US11902516B2 (en) | 2019-02-27 | 2024-02-13 | Telefonaktiebolaget Lm Ericsson (Publ) | Deblocking using sub-pel motion vector threshold |
WO2023047950A1 (ja) * | 2021-09-22 | 2023-03-30 | シャープ株式会社 | 動画像符号化装置、動画像復号装置 |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6471787B2 (ja) | 画像処理装置及び画像処理方法 | |
JP6471786B2 (ja) | 画像処理装置及び画像処理方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11846148 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2012547833 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2815819 Country of ref document: CA |
|
ENP | Entry into the national phase |
Ref document number: 20137013869 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13990893 Country of ref document: US Ref document number: 2013125498 Country of ref document: RU Ref document number: MX/A/2013/006131 Country of ref document: MX |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2011339460 Country of ref document: AU Date of ref document: 20111202 Kind code of ref document: A |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112013013509 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 112013013509 Country of ref document: BR Kind code of ref document: A2 Effective date: 20130531 |