WO2012096177A1 - Image coding apparatus, image coding method and program, image decoding apparatus, and image decoding method and program - Google Patents

Image coding apparatus, image coding method and program, image decoding apparatus, and image decoding method and program

Info

Publication number
WO2012096177A1
Authority
WO
WIPO (PCT)
Prior art keywords
processing unit
exemplary embodiment
intra prediction
prediction value
coded
Application number
PCT/JP2012/000147
Other languages
French (fr)
Inventor
Masato Shima
Original Assignee
Canon Kabushiki Kaisha
Application filed by Canon Kabushiki Kaisha filed Critical Canon Kabushiki Kaisha
Publication of WO2012096177A1 publication Critical patent/WO2012096177A1/en

Classifications

    • H: ELECTRICITY
      • H04: ELECTRIC COMMUNICATION TECHNIQUE
        • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
            • H04N 19/42: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
              • H04N 19/436: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression using parallelised computational arrangements
            • H04N 19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
              • H04N 19/102: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
                • H04N 19/103: Selection of coding mode or of prediction mode
                  • H04N 19/105: Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
                  • H04N 19/11: Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes

Definitions

  • the present invention relates to an image coding apparatus, an image coding method and a program, an image decoding apparatus, and an image decoding method and a program.
  • the present invention relates to an intra-frame prediction coding method in an image.
  • H.264 (H.264/MPEG-4 Advanced Video Coding)
  • ITU-T: International Telecommunication Union Telecommunication Standardization Sector
  • ISO: International Organization for Standardization
  • IEC: International Electrotechnical Commission
  • H.264, which was standardized in 2003, is broadly used, for example in one-segment terrestrial digital broadcasting.
  • H.264 is characterized in that, in addition to the features of conventional coding methods, integer transform is performed in 4-by-4 pixel units and a plurality of intra prediction modes is provided.
  • further, a plurality of previous and subsequent frames can be referred to, a loop filter is employed, and motion compensation can be performed using seven types of sub-blocks.
  • motion compensation of 1/4 pixel accuracy can be performed, similarly as in MPEG-4 Visual.
  • universal variable length coding or context-based adaptive variable length coding is used in performing entropy coding (refer to ITU-T H.264, Advanced video coding for generic audiovisual services).
  • a coding method such as the above-described H.264 performs intra prediction using surrounding reconstructed pixels that have already been coded.
  • in such a coding method, before a block can be referred to, it is necessary to complete each of the orthogonal transform, quantization, inverse quantization, and inverse orthogonal transform processes with respect to the prediction error in the block to be referred to. It is thus extremely difficult to pipeline the intra prediction process.
  • the present invention is directed to an image coding apparatus which realizes an intra prediction process whose processing speed can be increased by performing pipeline processing.
  • an image coding apparatus includes a block division unit configured to divide an input image into a plurality of blocks; a processing unit division unit configured to divide each block into processing units of the same size as, or a smaller size than, the block; an intra prediction value calculation unit configured to refer to, for each processing unit, one or a plurality of predetermined processing units surrounding the processing unit and calculate a prediction value of the processing unit; a reconstructed pixel value calculation unit configured to calculate, for each processing unit, a reconstructed pixel value of the processing unit from the input image and a result of the intra prediction value calculation unit with use of a predetermined method; a reconstructed pixel value storing unit configured to store a result of the reconstructed pixel value calculation unit; and an intra prediction value storing unit configured to store a result of the intra prediction value calculation unit, wherein the intra prediction value calculation unit calculates the prediction value of the processing unit by switching, based on a positional relationship between the processing unit and a processing unit to be referred to, between using the prediction value and using the reconstructed pixel value of the processing unit to be referred to.
  • the intra prediction process can be performed at high speed by pipelining the process, and high speed coding and decoding can thus be realized.
  • Fig. 1 is a block diagram illustrating an image coding apparatus according to first, second, third, fourth, fifth, sixth, seventh, and fifteenth exemplary embodiments of the present invention.
  • Fig. 2 is a block diagram illustrating a configuration of an image decoding apparatus according to eighth, ninth, tenth, eleventh, twelfth, thirteenth, and fourteenth exemplary embodiments of the present invention.
  • Fig. 3 is a flowchart illustrating an image coding process according to the first to seventh exemplary embodiments of the present invention.
  • Fig. 4 is a flowchart illustrating an intra prediction process according to the first to seventh exemplary embodiments of the present invention.
  • Fig. 5 is a flowchart illustrating another example of the intra prediction process according to the first to seventh exemplary embodiments of the present invention.
  • Fig. 6 is a flowchart illustrating an image decoding process according to the eighth to fourteenth exemplary embodiments of the present invention.
  • Fig. 7 is a flowchart illustrating a pixel reconstruction process according to the eighth to fourteenth exemplary embodiments of the present invention.
  • Fig. 8 illustrates a derived form of the pixel reconstruction process according to the eighth to fourteenth exemplary embodiments of the present invention.
  • Fig. 9 illustrates an example of a state in a reference method used in the intra prediction according to the first exemplary embodiment of the present invention.
  • Fig. 10 is a flowchart illustrating a method for determining a reference pixel used in performing intra prediction in a conventional image coding apparatus.
  • Fig. 11 is a flowchart illustrating a method for determining a reference pixel used in performing intra prediction according to the first and eighth exemplary embodiments of the present invention.
  • Fig. 12 is a flowchart illustrating a method for determining a reference pixel used in performing intra prediction according to the second and ninth exemplary embodiments of the present invention.
  • Fig. 13 is a flowchart illustrating a method for determining a reference pixel used in performing intra prediction according to the third and tenth exemplary embodiments of the present invention.
  • Fig. 14 is a flowchart illustrating a method for determining a reference pixel used in performing intra prediction according to the fourth and eleventh exemplary embodiments of the present invention.
  • Fig. 15 is a flowchart illustrating a method for determining a reference pixel used in performing intra prediction according to the fifth and twelfth exemplary embodiments of the present invention.
  • Fig. 16 is a flowchart illustrating a method for determining a reference pixel used in performing intra prediction according to the sixth and thirteenth exemplary embodiments of the present invention.
  • Fig. 17 is a flowchart illustrating a method for determining a reference pixel used in performing intra prediction according to the seventh and fourteenth exemplary embodiments of the present invention.
  • Fig. 18 is a block diagram illustrating a configuration of an image coding-decoding apparatus according to a sixteenth exemplary embodiment of the present invention.
  • Fig. 19A illustrates an example of a processing order for performing intra prediction before applying the present invention.
  • Fig. 19B illustrates an example of a processing order for performing intra prediction after applying the present invention.
  • Fig. 20 is a flowchart illustrating the intra prediction process according to the fifteenth exemplary embodiment of the present invention.
  • Fig. 1 is a block diagram illustrating an image coding apparatus to which the present invention is applied.
  • a block division unit 1 divides an input image into a plurality of blocks.
  • a processing unit division unit 2 divides each block into processing units of a same size as the block or of a smaller size as compared to the block.
  • An intra prediction unit 34 determines an intra prediction mode for each processing unit, and calculates an intra prediction value using the determined intra prediction mode.
  • An intra prediction value storing unit 21 stores the calculated intra prediction value.
  • a transform-quantization unit 5 calculates a difference between the intra prediction value and an input pixel corresponding to each processing unit, transforms and quantizes the difference as coefficient data, and calculates a quantization coefficient.
  • a reconstructed pixel value calculation unit 6 calculates a reconstructed pixel value by performing inverse quantization and inverse transform on the quantization coefficient calculated by the transform-quantization unit 5, and adding the result to the intra prediction value.
  • a reconstructed pixel value storing unit 7 stores the reconstructed pixel value calculated by the reconstructed pixel value calculation unit 6.
  • An entropy coding unit 8 performs entropy coding on the quantization coefficient.
  • moving image data is input to the image coding apparatus by frames.
  • still image data corresponding to one frame may also be input to the image coding apparatus.
  • only the intra prediction process will be described for ease of description.
  • the present exemplary embodiment is not limited to intra prediction, and may use intra prediction in combination with an inter prediction process.
  • the input image data corresponding to one frame is input to the block division unit 1, divided into blocks, and output to the processing unit division unit 2.
  • the processing unit division unit 2 divides the image data, which has been divided into blocks, into processing units of a size smaller than or equal to that of the block.
  • the image data divided into processing units is then input to the intra prediction unit 34 and the transform-quantization unit 5.
  • the intra prediction unit 34 determines an optimal intra prediction mode with respect to the input image data divided into processing units.
  • the intra prediction unit 34 then acquires a prediction value of a surrounding processing unit from the intra prediction value storing unit 21, and a reconstructed pixel value of the surrounding processing unit from the reconstructed pixel value storing unit 7.
  • the intra prediction unit 34 uses the acquired values and the determined intra prediction mode to calculate the intra prediction value.
  • the intra prediction unit 34 outputs the calculated intra prediction value to the intra prediction value storing unit 21, the reconstructed pixel value calculation unit 6, and the transform-quantization unit 5.
  • the intra prediction value storing unit 21 stores information about the calculated intra prediction value. If a stored intra prediction value is to be referred to when the intra prediction unit 34 calculates another intra prediction value, the intra prediction value storing unit 21 outputs the stored intra prediction value to the intra prediction unit 34.
  • the intra prediction unit 34 thus uses the input intra prediction value to calculate an intra prediction value.
  • the transform-quantization unit 5 receives the intra prediction value from the intra prediction unit 34 and the input image data corresponding to each processing unit from the processing unit division unit 2, and calculates the difference between the input intra prediction value and the input image data. The transform-quantization unit 5 then performs transform and quantization on the acquired difference, and calculates a quantization coefficient. The transform-quantization unit 5 outputs the calculated quantization coefficient to the reconstructed pixel value calculation unit 6 and the entropy coding unit 8.
  • the reconstructed pixel value calculation unit 6 performs inverse quantization and inverse transform on the input quantization coefficient, adds the intra prediction value output from the intra prediction unit 34 to the processing result, and calculates the reconstructed pixel value.
  • the reconstructed pixel value calculation unit 6 then outputs the calculated reconstructed pixel value to the reconstructed pixel value storing unit 7.
  • the reconstructed pixel value storing unit 7 stores information about the received reconstructed pixel value.
  • the reconstructed pixel value storing unit 7 outputs to the intra prediction unit 34 the stored reconstructed pixel value, which is referred to as necessary in calculating the intra prediction value.
  • the entropy coding unit 8 performs entropy coding on the quantization coefficient, and outputs the result as a bit stream.
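  • the data flow of Fig. 1 can be summarized with the short sketch below. This is a minimal illustration rather than the patented implementation: the function names, the assumed 4-by-4 processing-unit size, the DC-style prediction, the flat quantizer, and the omission of the spatial transform are all assumptions made for readability.

```python
"""Minimal sketch of the Fig. 1 encoder data flow (illustrative only)."""
import numpy as np

PU = 4  # assumed processing-unit size in pixels


def block_division(frame, block=16):
    """Block division unit 1: split a frame into block x block tiles."""
    h, w = frame.shape
    return [frame[y:y + block, x:x + block]
            for y in range(0, h, block) for x in range(0, w, block)]


def processing_unit_division(block):
    """Processing unit division unit 2: split a block into PU x PU units."""
    h, w = block.shape
    return [block[y:y + PU, x:x + PU]
            for y in range(0, h, PU) for x in range(0, w, PU)]


def intra_prediction(left_ref, upper_ref):
    """Intra prediction unit 34: a DC-like prediction from the reference pixels."""
    refs = [r for r in (left_ref, upper_ref) if r is not None]
    dc = np.mean([r.mean() for r in refs]) if refs else 128.0
    return np.full((PU, PU), dc)


def transform_quantization(unit, prediction, q=8):
    """Transform-quantization unit 5: quantize the prediction error.

    The spatial transform is omitted here to keep the sketch short."""
    return np.round((unit.astype(float) - prediction) / q)


def reconstructed_pixel_value(coeff, prediction, q=8):
    """Reconstructed pixel value calculation unit 6: inverse quantize and add."""
    return np.clip(coeff * q + prediction, 0, 255)
```

  • in this sketch, the intra prediction value storing unit 21 and the reconstructed pixel value storing unit 7 of Fig. 1 would simply cache the outputs of intra_prediction and reconstructed_pixel_value so that later processing units can refer to them.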
  • Fig. 3 is a flowchart illustrating the image coding process performed by the image coding apparatus according to the first exemplary embodiment.
  • In step 1001, the image coding apparatus divides the input image by frames into blocks.
  • In step 1002, the image coding apparatus divides each block into processing units of a size smaller than or equal to that of the block.
  • In step 1004, the image coding apparatus determines the intra prediction mode of the processing unit to be coded, and calculates the prediction value of the processing unit to be coded. The image coding apparatus then acquires the difference between the prediction value and the input image, and performs transform and quantization on the difference. In step 1005, the image coding apparatus performs entropy coding on the quantized coefficient.
  • In step 1006, the image coding apparatus determines whether all of the processing units in the block have been coded. If coding has been completed (COMPLETED in step 1006), the process proceeds to step 1007. If coding has not been completed (NOT COMPLETED in step 1006), the process returns to step 1002, and the image coding apparatus performs the process on the next processing unit.
  • In step 1007, the image coding apparatus determines whether all of the blocks in the frame have been coded. If coding has been completed (COMPLETED in step 1007), the image coding apparatus stops all operations, and ends the process. If coding has not been completed (NOT COMPLETED in step 1007), the process returns to step 1001, and the image coding apparatus performs the process on the next block.
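  • the control flow of steps 1001 to 1007 amounts to two nested loops, as in the sketch below; code_processing_unit and entropy_code are hypothetical placeholders standing in for steps 1004 and 1005.

```python
def encode_frame(frame, split_blocks, split_units, code_processing_unit, entropy_code):
    """Sketch of the Fig. 3 control flow; all helpers are injected placeholders."""
    bitstream = []
    for block in split_blocks(frame):                    # step 1001
        for unit in split_units(block):                  # step 1002
            quant_coeff = code_processing_unit(unit)     # step 1004: predict, transform, quantize
            bitstream.append(entropy_code(quant_coeff))  # step 1005
        # step 1006: the inner loop ends when every processing unit in the block is coded
    # step 1007: the outer loop ends when every block in the frame is coded
    return bitstream
```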
  • Fig. 4 is a flowchart illustrating in detail the process in step 1004 according to the first exemplary embodiment.
  • the image coding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit to be coded.
  • the image coding apparatus calculates the left reference pixel by a method illustrated in Fig. 11.
  • the image coding apparatus calculates the reference pixel from the prediction value of a left processing unit.
  • the reference pixel is calculated only from the prediction value of the reference processing unit.
  • In step 202 illustrated in Fig. 4, the image coding apparatus calculates an upper reference pixel to be used in calculating the intra prediction value of the processing unit to be coded.
  • the image coding apparatus calculates the upper reference pixel by the method illustrated in Fig. 11, similarly as in step 201.
  • In step 421 illustrated in Fig. 11, the image coding apparatus thus calculates the reference pixel from the prediction value of an upper processing unit.
  • both the upper reference pixel and the left reference pixel are calculated only from the prediction values of the respective reference processing units.
  • a processing unit T is to be coded according to the first exemplary embodiment.
  • prediction values of a processing unit S to the left and a processing unit L above the processing unit T are used as the reference pixels for calculating the prediction value of the processing unit T.
  • In step 204, the image coding apparatus calculates the prediction value for each intra prediction mode using the calculated reference pixels.
  • In step 208, the image coding apparatus compares each of the prediction values and determines the optimum intra prediction mode for performing coding.
  • the image coding apparatus stores the prediction value of the determined intra prediction mode which may be referred to by the processing units in the future.
  • the image coding apparatus acquires the difference between the input pixel and the calculated prediction value, performs transform and quantization on the acquired difference, and calculates the quantization coefficient.
  • In step 206, the image coding apparatus performs inverse quantization and inverse transform on the calculated quantization coefficient and calculates a residual coefficient.
  • In step 207, the image coding apparatus adds the prediction value to the calculated residual coefficient and calculates the reconstructed pixel value.
  • the image coding apparatus stores the calculated reconstructed value which may be referred to by the processing units in the future, and the process then ends.
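  • the per-unit processing of Fig. 4 described above might be sketched as follows. The two candidate modes (horizontal and vertical), the dictionary-based stores indexed by processing-unit position, and the flat quantizer with the transform omitted are assumptions; the point being illustrated is that steps 201 and 202 read only stored prediction values of the neighbours, never their reconstructed pixels.

```python
import numpy as np


def code_unit_first_embodiment(unit, pred_store, recon_store, pos, q=8):
    """Sketch of steps 201-208 of Fig. 4 for the first exemplary embodiment.

    pred_store / recon_store map a processing-unit position (row, col) to its
    stored prediction value / reconstructed pixels."""
    y, x = pos
    left = pred_store.get((y, x - 1))   # step 201: left reference from a prediction value
    up = pred_store.get((y - 1, x))     # step 202: upper reference from a prediction value
    n = unit.shape[0]
    candidates = {}
    if left is not None:                # step 204: prediction value for each candidate mode
        candidates["horizontal"] = np.tile(left[:, -1:], (1, n))
    if up is not None:
        candidates["vertical"] = np.tile(up[-1:, :], (n, 1))
    if not candidates:
        candidates["dc"] = np.full((n, n), 128.0)
    # step 208: keep the mode with the smallest prediction error
    mode, prediction = min(candidates.items(),
                           key=lambda kv: np.abs(unit - kv[1]).sum())
    pred_store[pos] = prediction        # step 226: store the prediction value for later reference
    coeff = np.round((unit.astype(float) - prediction) / q)  # step 205: quantize (transform omitted)
    residual = coeff * q                # step 206: inverse quantize
    recon = np.clip(residual + prediction, 0, 255)            # step 207: reconstruct
    recon_store[pos] = recon            # store the reconstructed pixel value for later reference
    return mode, coeff, recon
```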
  • Fig. 19A illustrates an example of a conventional coding method, i.e., the processing order of the intra prediction process according to H.264.
  • the prediction value of the processing unit (n + 1) to be coded is calculated using only the reconstructed pixel of the reference processing unit (n). Therefore, if processing units which are horizontally adjacent to each other are to be processed continuously, it is necessary to wait for the reconstructed pixel of the previous processing unit (n) to be calculated before the prediction value of the next processing unit (n + 1) can be calculated.
  • Fig. 19B illustrates an example of the processing order of the intra prediction process according to the present invention.
  • the prediction value of the processing unit (n + 1) to be coded is calculated using the prediction value of the reference processing unit (n).
  • the prediction value of the processing unit (n + 1) to be coded can be calculated without waiting for calculation of the reconstructed pixel of the previous processing unit (n), so that the processing speed can be increased by pipelining the process.
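  • the scheduling difference between Figs. 19A and 19B can be made concrete with a toy two-stage model (one time slot for prediction, one for transform, quantization, and reconstruction); the model itself is an assumption used only to show why removing the dependency on the reconstructed pixel allows the two stages to overlap.

```python
def schedule(num_units, reference="prediction_value"):
    """Toy two-stage timing model: returns the number of time slots needed."""
    start_pred, start_recon = {}, {}
    for n in range(num_units):
        if n == 0:
            start_pred[n] = 0
        elif reference == "reconstructed_pixel":
            # Fig. 19A style: prediction of unit n waits for reconstruction of unit n - 1
            start_pred[n] = start_recon[n - 1] + 1
        else:
            # Fig. 19B style: prediction of unit n waits only for prediction of unit n - 1
            start_pred[n] = start_pred[n - 1] + 1
        start_recon[n] = max(start_pred[n] + 1, start_recon.get(n - 1, -1) + 1)
    return start_recon[num_units - 1] + 1


print(schedule(8, "reconstructed_pixel"))  # 16 slots: fully serial
print(schedule(8, "prediction_value"))     # 9 slots: the two stages overlap
```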
  • the left processing unit and the upper processing unit are referred to as illustrated in Fig. 4, in calculating the reference pixels to be used in performing intra prediction.
  • the present exemplary embodiment is not limited to this configuration, and may use an upper left processing unit to calculate the reference pixel, as illustrated in Fig. 5 (e.g., a processing unit K illustrated in Fig. 9, when the processing unit T is to be coded).
  • an upper right processing unit or a lower left processing unit may also be used in calculating the reference pixels (e.g., a processing unit E or a processing unit S when the processing unit L illustrated in Fig. 9 is to be processed).
  • the frames employ only the intra prediction process.
  • the present exemplary embodiment may also be applied to the frames that can employ the inter prediction process.
  • each or all units in the image coding apparatus may be implemented in software, and may be executed by a processing unit such as a central processing unit (CPU).
  • the present exemplary embodiment may be configured to switch between the conventional method and the above-described method by using side information in the coded data which indicates whether the above-described method will be employed in units of frames or slices before coding the block, so that detailed control can be performed. For example, if high speed is demanded, the above-described method according to the present exemplary embodiment is used. However, if compatibility with the conventional method is preferred over high speed, the above-described method is not used.
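  • such a switch could be signalled with a single flag in the frame or slice header, as in the sketch below; the flag name use_prediction_value_reference is purely hypothetical, since the description only states that the side information indicates, per frame or slice, which method is employed.

```python
def choose_reference_method(slice_header):
    """Hypothetical per-slice (or per-frame) switch read from side information."""
    if slice_header.get("use_prediction_value_reference", False):
        return "prediction_value"    # method of the present exemplary embodiment
    return "reconstructed_pixel"     # conventional H.264-style reference
```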
  • the configuration of the image coding apparatus (illustrated in Fig. 1) and the flowchart of the process performed by the image coding apparatus (illustrated in Fig. 3) according to a second exemplary embodiment of the present invention are similar to those according to the first exemplary embodiment. Description of these will thus be omitted.
  • Fig. 4 is the flowchart illustrating in detail the process in step 1004 illustrated in Fig. 3, according to the second exemplary embodiment.
  • the flowchart illustrated in Fig. 4 is the same as the flowchart according to the first exemplary embodiment. However, since the contents of step 201 and step 202 are different from those according to the first exemplary embodiment, only step 201 and step 202 will be described below.
  • the image coding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit to be coded.
  • the left reference pixel is calculated by the method illustrated in Fig. 12.
  • In step 431 illustrated in Fig. 12, the image coding apparatus determines whether the left reference processing unit is the processing unit on which the prediction process has been performed immediately before the processing unit to be coded. If the left reference processing unit is the processing unit on which the prediction process has been performed immediately before the processing unit to be coded (YES in step 431), the process proceeds to step 421, whereas if not (NO in step 431), the process proceeds to step 401.
  • In step 401, the image coding apparatus calculates the left reference pixel using the reconstructed pixel value of the left reference processing unit.
  • In step 421, the image coding apparatus calculates the left reference pixel using the prediction value of the left reference processing unit.
  • In step 202 illustrated in Fig. 4, the image coding apparatus then calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit to be coded.
  • the upper reference pixel is calculated by the method illustrated in Fig. 12.
  • In step 431 illustrated in Fig. 12, the image coding apparatus determines whether the upper reference processing unit is the processing unit on which the prediction process has been performed immediately before the processing unit to be coded. If the upper reference processing unit is the processing unit on which the prediction process has been performed immediately before the processing unit to be coded (YES in step 431), the process proceeds to step 421, whereas if not (NO in step 431), the process proceeds to step 401.
  • In step 401, the image coding apparatus calculates the upper reference pixel using the reconstructed pixel value of the upper reference processing unit.
  • In step 421, the image coding apparatus calculates the upper reference pixel using the prediction value of the upper reference processing unit.
  • the processing unit T is to be coded, and the prediction process is performed in the order of the processing units K, L, S, and T in the block to which the processing unit T belongs.
  • the left processing unit S is the processing unit on which the prediction process has been performed immediately before the processing unit T, so that the image coding apparatus calculates the left reference pixel using the prediction value of the processing unit S.
  • the upper processing unit L is not the processing unit on which the prediction process has been performed immediately before the processing unit T, so that the image coding apparatus calculates the upper reference pixel using the reconstructed pixel value of the processing unit L.
  • steps 204, 208, 226, 205, 206, and 207 are the same as those according to the first exemplary embodiment, and description thereof will thus be omitted.
  • the image coding apparatus calculates the reference pixel to be employed in performing the intra prediction by switching between using the prediction value and the reconstructed pixel of the reference processing unit, based on a condition.
  • if the prediction value is increasingly used in place of the reconstructed pixel, prediction accuracy is lowered, and coding efficiency is also lowered as a result.
  • the reference processing unit whose prediction value is to be used is therefore limited to the processing unit on which the prediction process has been performed immediately before the processing unit to be coded. The processing speed can thus be increased by pipelining the process at the processing unit level while reducing the influence of lowered coding efficiency.
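  • the selection rule of Fig. 12 might be sketched as a small predicate over the prediction order, as below; the integer indices standing for the prediction order are an assumed representation.

```python
def reference_pixels(ref_index, current_index, pred_store, recon_store):
    """Fig. 12 sketch: use the prediction value only when the reference unit was
    predicted immediately before the current unit (step 431)."""
    if ref_index == current_index - 1:       # YES in step 431
        return pred_store[ref_index]         # step 421: prediction value
    return recon_store[ref_index]            # step 401: reconstructed pixel value
```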
  • the reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the first exemplary embodiment.
  • the conventional method is replaced by the above-described method.
  • the image coding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the first exemplary embodiment.
  • the configuration of the image coding apparatus (illustrated in Fig. 1) and the flowchart of the process performed by the image coding apparatus (illustrated in Fig. 3) according to a third exemplary embodiment of the present invention are similar to those according to the first exemplary embodiment. Description of these will thus be omitted.
  • Fig. 4 is the flowchart illustrating in detail the process in step 1004 illustrated in Fig. 3, according to the third exemplary embodiment.
  • the flowchart illustrated in Fig. 4 is the same as the flowchart according to the first exemplary embodiment. However, since the contents of step 201 and step 202 are different from those according to the first exemplary embodiment, only step 201 and step 202 will be described below.
  • the image coding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit to be coded.
  • the left reference pixel is calculated by the method illustrated in Fig. 13.
  • In step 441 illustrated in Fig. 13, the image coding apparatus determines whether the left reference processing unit belongs to the same block as the processing unit to be coded. If the left reference processing unit belongs to the same block as the processing unit to be coded (YES in step 441), the process proceeds to step 421, whereas if the left reference processing unit does not belong to the same block as the processing unit to be coded (NO in step 441), the process proceeds to step 401.
  • In step 401, the image coding apparatus calculates the left reference pixel using the reconstructed pixel value of the left reference processing unit.
  • In step 421, the image coding apparatus calculates the left reference pixel using the prediction value of the left reference processing unit.
  • In step 202 illustrated in Fig. 4, the image coding apparatus calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit to be coded.
  • the upper reference pixel is calculated by the method illustrated in Fig. 13.
  • In step 441, the image coding apparatus determines whether the upper reference processing unit belongs to the same block as the processing unit to be coded. If the upper reference processing unit belongs to the same block as the processing unit to be coded (YES in step 441), the process proceeds to step 421, whereas if the upper reference processing unit does not belong to the same block as the processing unit to be coded (NO in step 441), the process proceeds to step 401.
  • In step 401, the image coding apparatus calculates the upper reference pixel using the reconstructed pixel value of the upper reference processing unit.
  • In step 421, the image coding apparatus calculates the upper reference pixel using the prediction value of the upper reference processing unit.
  • the processing unit S is to be coded.
  • the left processing unit R does not belong to the same block as the processing unit S, so that the left reference pixel is calculated using the reconstructed pixel value of the processing unit R.
  • the upper processing unit K and the processing unit S belong to the same block.
  • the prediction value of the processing unit K is used to calculate the upper reference pixel.
  • steps 204, 208, 226, 205, 206, and 207 are the same as those according to the first exemplary embodiment, and description thereof will thus be omitted.
  • the reference processing unit whose prediction value is to be used is limited to the processing unit belonging to the same block as the processing unit to be coded.
  • the processing speed can thus be increased by pipelining the process at the processing unit level while reducing the influence of lowered coding efficiency.
  • the reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the first exemplary embodiment.
  • the conventional method is replaced by the above-described method.
  • the image coding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the first exemplary embodiment.
  • the configuration of the image coding apparatus (illustrated in Fig. 1) and the flowchart of the process performed by the image coding apparatus (illustrated in Fig. 3) according to a fourth exemplary embodiment of the present invention are similar to those according to the first exemplary embodiment. Description of these will thus be omitted.
  • Fig. 4 is the flowchart illustrating in detail the process in step 1004 illustrated in Fig. 3, according to the fourth exemplary embodiment.
  • the flowchart illustrated in Fig. 4 is the same as the flowchart according to the first exemplary embodiment. However, since the contents of step 201 and step 202 are different from those according to the first exemplary embodiment, only step 201 and step 202 will be described below.
  • the image coding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit to be coded.
  • the left reference pixel is calculated by the method illustrated in Fig. 14.
  • In step 451 illustrated in Fig. 14, the image coding apparatus determines whether the left reference processing unit belongs to the same block as the processing unit to be coded, or belongs to the block on which the prediction process has been performed immediately before the block to which the processing unit to be coded belongs. If either condition is satisfied (YES in step 451), the process proceeds to step 421, whereas if not (NO in step 451), the process proceeds to step 401.
  • In step 401, the image coding apparatus calculates the left reference pixel using the reconstructed pixel value of the left reference processing unit.
  • In step 421, the image coding apparatus calculates the left reference pixel using the prediction value of the left reference processing unit.
  • In step 202 illustrated in Fig. 4, the image coding apparatus calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit to be coded.
  • the upper reference pixel is calculated by the method illustrated in Fig. 14.
  • In step 451, the image coding apparatus determines whether the upper reference processing unit belongs to the same block as the processing unit to be coded, or belongs to the block on which the prediction process has been performed immediately before the block to which the processing unit to be coded belongs. If either condition is satisfied (YES in step 451), the process proceeds to step 421, whereas if not (NO in step 451), the process proceeds to step 401.
  • In step 401, the image coding apparatus calculates the upper reference pixel using the reconstructed pixel value of the upper reference processing unit.
  • In step 421, the image coding apparatus calculates the upper reference pixel using the prediction value of the upper reference processing unit.
  • the processing unit K is to be coded.
  • the left processing unit J belongs to the block in which the prediction process has been performed immediately before the block to which the processing unit K belongs.
  • the left reference pixel is thus calculated using the prediction value of the processing unit J.
  • the upper processing unit C and the processing unit K belong to different blocks.
  • the upper processing unit C does not belong to the block in which the prediction process has been performed immediately before the block to which the processing unit K belongs. According to the fourth exemplary embodiment, the reconstructed pixel of the processing unit C is thus used to calculate the upper reference pixel.
  • steps 204, 208, 226, 205, 206, and 207 are the same as those according to the first exemplary embodiment, and description thereof will thus be omitted.
  • the reference processing unit whose prediction value is used is limited to the processing unit belonging to the same block as the processing unit to be coded, or to the block which has been predicted immediately before the block to which the processing unit to be coded belongs.
  • the processing speed can thus be increased by pipelining the process at the processing unit level while reducing the influence of lowered coding efficiency.
  • the reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the first exemplary embodiment.
  • the conventional method is replaced by the above-described method.
  • the image coding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the first exemplary embodiment.
  • the configuration of the image coding apparatus (illustrated in Fig. 1) and the flowchart of the process performed by the image coding apparatus (illustrated in Fig. 3) according to a fifth exemplary embodiment of the present invention are similar to those according to the first exemplary embodiment. Description of these will thus be omitted.
  • Fig. 4 is the flowchart illustrating in detail the process in step 1004 illustrated in Fig. 3, according to the fifth exemplary embodiment.
  • the flowchart illustrated in Fig. 4 is the same as the flowchart according to the first exemplary embodiment. However, since the contents of step 201 and step 202 are different from those according to the first exemplary embodiment, only step 201 and step 202 will be described below.
  • the image coding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit to be coded.
  • the left reference pixel is calculated by the method illustrated in Fig. 15.
  • In step 461 illustrated in Fig. 15, the image coding apparatus determines whether the perpendicular position of the block to which the left reference processing unit belongs is the same as that of the block to which the processing unit to be coded belongs. If the perpendicular position of the block to which the left reference processing unit belongs is the same as that of the block to which the processing unit to be coded belongs (YES in step 461), the process proceeds to step 421, whereas if not (NO in step 461), the process proceeds to step 401.
  • In step 401, the image coding apparatus calculates the left reference pixel using the reconstructed pixel value of the left reference processing unit.
  • In step 421, the image coding apparatus calculates the left reference pixel using the prediction value of the left reference processing unit.
  • In step 202 illustrated in Fig. 4, the image coding apparatus then calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit to be coded.
  • the upper reference pixel is calculated by the method illustrated in Fig. 15.
  • In step 461, the image coding apparatus determines whether the perpendicular position of the block to which the upper reference processing unit belongs is the same as that of the block to which the processing unit to be coded belongs. If the perpendicular position of the block to which the upper reference processing unit belongs is the same as that of the block to which the processing unit to be coded belongs (YES in step 461), the process proceeds to step 421, whereas if not (NO in step 461), the process proceeds to step 401.
  • In step 401, the image coding apparatus calculates the upper reference pixel using the reconstructed pixel value of the upper reference processing unit.
  • In step 421, the image coding apparatus calculates the upper reference pixel using the prediction value of the upper reference processing unit.
  • the processing unit M is to be coded.
  • the left processing unit L belongs to the block whose perpendicular position is the same as that of the block to which the processing unit M belongs.
  • the image coding apparatus thus calculates the left reference pixel using the prediction value of the processing unit L.
  • the upper processing unit E belongs to the block whose perpendicular position is not the same as that of the block to which the processing unit M belongs.
  • the image coding apparatus thus uses the reconstructed pixel value of the processing unit E to calculate the upper reference pixel.
  • steps 204, 208, 226, 205, 206, and 207 are the same as those according to the first exemplary embodiment, and description thereof will thus be omitted.
  • the reference processing unit whose prediction value is used is limited to the processing unit belonging to the block whose perpendicular position is the same as that of the block to which the processing unit to be coded belongs.
  • the processing speed can thus be increased by pipelining the process in one block line in a horizontal direction while reducing the influence of lowered coding efficiency.
  • the reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the first exemplary embodiment.
  • the conventional method is replaced by the above-described method.
  • the image coding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the first exemplary embodiment.
  • the configuration of the image coding apparatus (illustrated in Fig. 1) and the flowchart of the process performed by the image coding apparatus (illustrated in Fig. 3) according to a sixth exemplary embodiment of the present invention are similar to those according to the first exemplary embodiment. Description of these will thus be omitted.
  • Fig. 4 is the flowchart illustrating in detail the process in step 1004 illustrated in Fig. 3, according to the sixth exemplary embodiment.
  • the flowchart illustrated in Fig. 4 is the same as the flowchart according to the first exemplary embodiment. However, since the contents of step 201 and step 202 are different from those according to the first exemplary embodiment, only step 201 and step 202 will be described below.
  • the image coding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit to be coded.
  • the left reference pixel is calculated by the method illustrated in Fig. 16.
  • In step 471 illustrated in Fig. 16, the image coding apparatus determines whether the block to which the left reference processing unit belongs is within a predetermined number of blocks previous to the block to which the processing unit to be coded belongs. If the block to which the left reference processing unit belongs is within the predetermined number of blocks previous to the block to which the processing unit to be coded belongs (YES in step 471), the process proceeds to step 421, whereas if not (NO in step 471), the process proceeds to step 401.
  • In step 401, the image coding apparatus calculates the left reference pixel using the reconstructed pixel value of the left reference processing unit.
  • In step 421, the image coding apparatus calculates the left reference pixel using the prediction value of the left reference processing unit.
  • In step 202 illustrated in Fig. 4, the image coding apparatus then calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit to be coded.
  • the upper reference pixel is calculated by the method illustrated in Fig. 16.
  • In step 471, the image coding apparatus determines whether the block to which the upper reference processing unit belongs is within the predetermined number of blocks previous to the block to which the processing unit to be coded belongs. If the block to which the upper reference processing unit belongs is within the predetermined number of blocks previous to the block to which the processing unit to be coded belongs (YES in step 471), the process proceeds to step 421, whereas if not (NO in step 471), the process proceeds to step 401.
  • In step 401, the image coding apparatus calculates the upper reference pixel using the reconstructed pixel value of the upper reference processing unit.
  • In step 421, the image coding apparatus calculates the upper reference pixel using the prediction value of the upper reference processing unit.
  • the processing unit O is to be coded, and the predetermined number of blocks is set to five blocks.
  • the block to which the left processing unit N belongs is one block previous to the block to which the processing unit O belongs. Since one is smaller than five, i.e., the predetermined number of blocks, the image coding apparatus calculates the left reference pixel using the prediction value of the processing unit N.
  • the block to which the upper processing unit G belongs is four blocks previous to the block to which the processing unit O belongs. Since four is smaller than five, i.e., the predetermined number of blocks, the image coding apparatus calculates the upper reference pixel using the prediction value of the processing unit G.
  • steps 204, 208, 226, 205, 206, and 207 are the same as those according to the first exemplary embodiment, and description thereof will thus be omitted.
  • the reference processing unit whose prediction value is used is limited to the processing unit belonging to the block which is within a predetermined number of blocks previous to the block to which the processing unit to be coded belongs.
  • the processing speed can thus be increased by pipelining the process for each of a plurality of blocks while reducing the influence of lowered coding efficiency.
  • the reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the first exemplary embodiment.
  • the conventional method is replaced by the above-described method.
  • the image coding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the first exemplary embodiment.
  • the configuration of the image coding apparatus (illustrated in Fig. 1) and the flowchart of the process performed by the image coding apparatus (illustrated in Fig. 3) according to a seventh exemplary embodiment of the present invention are similar to those according to the first exemplary embodiment. Description of these will thus be omitted.
  • Fig. 4 is the flowchart illustrating in detail the process in step 1004 illustrated in Fig. 3, according to the seventh exemplary embodiment.
  • the flowchart illustrated in Fig. 4 is the same as the flowchart according to the first exemplary embodiment. However, since the contents of step 201 and step 202 are different from those according to the first exemplary embodiment, only step 201 and step 202 will be described below.
  • the image coding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit to be coded.
  • the left reference pixel is calculated by the method illustrated in Fig. 17.
  • In step 481 illustrated in Fig. 17, the image coding apparatus determines whether the size of the processing unit to be coded is smaller than a predetermined processing unit size. If the processing unit to be coded is smaller than the predetermined size (YES in step 481), the process proceeds to step 421. If the processing unit to be coded is not smaller than the predetermined size (NO in step 481), the process proceeds to step 401.
  • In step 401, the image coding apparatus calculates the left reference pixel using the reconstructed pixel value of the left reference processing unit.
  • In step 421, the image coding apparatus calculates the left reference pixel using the prediction value of the left reference processing unit.
  • In step 202 illustrated in Fig. 4, the image coding apparatus calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit to be coded.
  • the upper reference pixel is calculated by the method illustrated in Fig. 17.
  • In step 481, the image coding apparatus determines whether the size of the processing unit to be coded is smaller than the predetermined processing unit size. If the processing unit to be coded is smaller than the predetermined size (YES in step 481), the process proceeds to step 421. If the processing unit to be coded is not smaller than the predetermined size (NO in step 481), the process proceeds to step 401.
  • In step 401, the image coding apparatus calculates the upper reference pixel using the reconstructed pixel value of the upper reference processing unit.
  • In step 421, the image coding apparatus calculates the upper reference pixel using the prediction value of the upper reference processing unit.
  • steps 204, 208, 226, 205, 206, and 207 are the same as those according to the first exemplary embodiment, and description thereof will thus be omitted.
  • whether the prediction value of the reference processing unit is used is thus dependent on the size of the processing unit to be coded.
  • the processing speed can thus be increased by pipelining the process, in a manner that differs according to the size of the processing unit to be coded, while reducing the influence of lowered coding efficiency.
  • the reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the first exemplary embodiment.
  • the conventional method is replaced by the above-described method.
  • the image coding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the first exemplary embodiment.
  • the predetermined size of the processing unit may be a fixed value, or may be multiplexed in the coded data as the side information.
  • the predetermined size of the processing unit and the block size may be interdependent. For example, if the size of the processing unit is smaller than or equal to half the size of the block, the image coding apparatus calculates the reference pixel using the prediction value of the reference processing unit. The image coding apparatus calculates the reference pixel using the reconstructed pixel value of the reference processing unit for other cases.
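  • the third to seventh exemplary embodiments differ from the second only in the condition tested before choosing between steps 401 and 421. The sketch below gathers the conditions of Figs. 13 to 17 into one predicate; the dictionary keys (block index in prediction order, block row, processing-unit size) and the default thresholds are assumed representations, not values taken from the description.

```python
def use_prediction_value(embodiment, ref, cur,
                         predetermined_blocks=5, predetermined_size=8):
    """Conditions of Figs. 13-17 for choosing the prediction value (step 421).

    ref and cur describe the reference and current processing units with
    hypothetical keys: 'block' (index of the block in prediction order),
    'block_row' (perpendicular position of the block), 'size' (unit size)."""
    if embodiment == 3:   # Fig. 13: same block
        return ref["block"] == cur["block"]
    if embodiment == 4:   # Fig. 14: same block or the immediately preceding block
        return 0 <= cur["block"] - ref["block"] <= 1
    if embodiment == 5:   # Fig. 15: block at the same perpendicular position
        return ref["block_row"] == cur["block_row"]
    if embodiment == 6:   # Fig. 16: within a predetermined number of blocks previous
        return 0 <= cur["block"] - ref["block"] < predetermined_blocks
    if embodiment == 7:   # Fig. 17: processing unit smaller than a predetermined size
        return cur["size"] < predetermined_size
    return False
```

  • when the predicate returns false, the reference pixel is calculated from the reconstructed pixel value of the reference processing unit (step 401); when it returns true, the stored prediction value is used instead (step 421).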
  • Fig. 2 is a block diagram illustrating a configuration of an image decoding apparatus according to an eighth exemplary embodiment of the present invention. According to the present exemplary embodiment, decoding of the coded data generated according to the first exemplary embodiment will be described below as an example.
  • a code data input unit 101 inputs the code data.
  • a header information decoding unit 102 decodes information such as the configuration of the processing unit or the intra prediction mode, from the code data.
  • An entropy decoding unit 103 decodes the quantization coefficient by processing unit.
  • a reconstructed pixel value decoding unit 104 performs inverse quantization and inverse orthogonal transform on the quantization coefficient acquired by the entropy decoding unit 103.
  • the reconstructed pixel value decoding unit 104 then adds the result to the intra prediction value calculated by an intra prediction value calculation unit 136 to be described below, and calculates the reconstructed pixel value.
  • a reconstructed pixel value storing unit 105 stores the calculated reconstructed pixel value.
  • the intra prediction value calculation unit 136 calculates the intra prediction value using the intra prediction mode decoded by the header information decoding unit 102.
  • An intra prediction value storing unit 121 stores the calculated intra prediction value.
  • the code data input unit 101 inputs the coded data corresponding to one frame, and outputs the coded data corresponding to each block to the header information decoding unit 102.
  • the header information decoding unit 102 receives the coded data corresponding to each block and decodes the information such as the configuration of the processing unit and the intra prediction mode.
  • the header information decoding unit 102 then outputs the intra prediction mode information to the intra prediction value calculation unit 136. Further, the header information decoding unit 102 extracts a portion corresponding to the quantization coefficient from the coded data for each block, and outputs the extracted portion to the entropy decoding unit 103.
  • the entropy decoding unit 103 decodes the quantization coefficient from the extracted coded data, and outputs the quantization coefficient to the reconstructed pixel value decoding unit 104.
  • the reconstructed pixel value decoding unit 104 performs inverse quantization and inverse transform on the input quantization coefficient and calculates the residual coefficient.
  • the reconstructed pixel value decoding unit 104 then adds the prediction value input from the intra prediction value calculation unit 136, which is described below, to the calculated residual coefficient, and calculates the reconstructed pixel value.
  • the reconstructed pixel value decoding unit 104 outputs the calculated reconstructed pixel value to the reconstructed pixel value storing unit 105.
  • the reconstructed pixel value storing unit 105 stores the calculated reconstructed pixel value and outputs it to the intra prediction value calculation unit 136 as necessary. Further, the reconstructed pixel value storing unit 105 outputs the reconstructed pixel value to the outside as a portion of a decoded image.
  • the intra prediction value calculation unit 136 calculates as appropriate the prediction value from the surrounding prediction values stored in the intra prediction value storing unit 121 and the surrounding pixel values stored in the reconstructed pixel value storing unit 105 with use of the intra prediction mode output from the header information decoding unit 102.
  • the intra prediction value calculation unit 136 then outputs the calculated prediction value to the reconstructed pixel value decoding unit 104 and the intra prediction value storing unit 121.
  • the intra prediction value storing unit 121 stores the prediction value, and outputs the prediction value to the intra prediction value calculation unit 136 as necessary.
  • Fig. 6 is a flowchart illustrating the image decoding process performed in the image decoding apparatus according to the eighth exemplary embodiment.
  • In step 1101, the image decoding apparatus decodes, from the input coded data, the information corresponding to each block.
  • In step 1102, the image decoding apparatus decodes the intra prediction mode of each processing unit in the block on which prediction is to be performed.
  • In step 1103, the image decoding apparatus performs entropy decoding on the information and the coefficient in each processing unit.
  • In step 1104, the image decoding apparatus performs inverse quantization and inverse transform on the decoded prediction error, and calculates the coefficient value.
  • In step 1105, the image decoding apparatus calculates the prediction value from the result acquired in step 1102, adds the calculated prediction value to the coefficient value acquired in step 1104, and calculates the reconstructed pixel.
  • In step 1106, the image decoding apparatus determines whether all of the processing units in the block have been decoded. If decoding has been completed (COMPLETED in step 1106), the process proceeds to step 1107. If decoding has not been completed (NOT COMPLETED in step 1106), the process returns to step 1102, and the image decoding apparatus performs the process on the next processing unit.
  • In step 1107, the image decoding apparatus determines whether all of the blocks in the frame have been decoded. If decoding has been completed (COMPLETED in step 1107), the image decoding apparatus stops all operations and ends the process. If decoding has not been completed (NOT COMPLETED in step 1107), the process returns to step 1101, and the image decoding apparatus performs the process on the next block.
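  • As an informal illustration of the Fig. 6 flow described above, a Python sketch of the decoding loop follows; the decoder object and all of its method names are hypothetical placeholders, not elements of the apparatus in Fig. 2.

```python
# Sketch of the Fig. 6 decoding loop (steps 1101-1107). The "decoder"
# object and every method name below are assumed placeholders standing in
# for the units of Fig. 2; they are not defined by the patent.

def decode_frame(coded_data, decoder):
    for block in decoder.blocks(coded_data):                 # step 1101
        for unit in decoder.processing_units(block):
            mode = decoder.decode_intra_mode(unit)           # step 1102
            coeff = decoder.entropy_decode(unit)             # step 1103
            residual = decoder.inverse_transform(
                decoder.inverse_quantize(coeff))             # step 1104
            pred = decoder.predict(unit, mode)               # step 1105
            decoder.store(unit, pred, pred + residual)       # step 1105
        # step 1106: the inner loop ends when all units in the block are decoded
    # step 1107: the outer loop ends when all blocks in the frame are decoded
```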
  • Fig. 7 is a flowchart illustrating in detail the process of calculating the reconstructed pixel performed in step 1105 according to the eighth exemplary embodiment.
  • the image decoding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded.
  • the image decoding apparatus calculates the left reference pixel by the method illustrated in Fig. 11.
  • the image decoding apparatus calculates the reference pixel from the prediction value of the left processing unit.
  • In step 302 illustrated in Fig. 7, the image decoding apparatus calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded.
  • the image decoding apparatus calculates the upper reference pixel by the method illustrated in Fig. 11, similarly as in step 301.
  • the processing unit T has been coded.
  • the prediction values of the left processing unit S and the upper processing unit L are used as the reference pixels for calculating the prediction value of the processing unit T.
  • In step 304, the image decoding apparatus calculates the prediction value using the calculated reference pixels.
  • In step 326, the image decoding apparatus stores the calculated prediction value, which may be referred to by the processing units in the future.
  • In step 305, the image decoding apparatus acquires the inverse transform coefficient calculated in step 1104 illustrated in Fig. 6, adds the acquired inverse transform coefficient to the prediction value, and calculates the reconstructed pixel value.
  • In step 306, the image decoding apparatus stores the calculated reconstructed pixel value, which may be referred to by the processing units in the future, and then ends the process.
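  • To make the eighth exemplary embodiment concrete, a small Python sketch of steps 301 to 306 follows; the function name, the one-dimensional pixel rows, and the simple averaging used as the prediction are illustrative assumptions, not the patent's actual prediction arithmetic.

```python
# Minimal sketch of steps 301-306 for the eighth exemplary embodiment:
# the reference pixels are taken from the *prediction values* of the left
# and upper processing units (Fig. 11), so reconstruction of those units
# need not be finished. The names and the DC-style averaging below are
# illustrative only.

def reconstruct_unit(left_pred, upper_pred, residual):
    # steps 301-302: reference pixels from the stored prediction values
    left_ref, upper_ref = left_pred[-1], upper_pred[-1]
    # step 304: a simple DC-like prediction from the two reference pixels
    pred = (left_ref + upper_ref + 1) // 2
    # step 305: add the inverse-transformed residual to the prediction
    recon = [pred + r for r in residual]
    # steps 326 and 306: both pred and recon would be stored for later units
    return pred, recon

pred, recon = reconstruct_unit(left_pred=[100, 102], upper_pred=[98, 96],
                               residual=[1, -2, 0, 3])
```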
  • the coded data generated by the high-speed coding method realized by pipelining the process according to the first exemplary embodiment can be decoded.
  • the left processing unit and the upper processing unit are referred to as illustrated in Fig. 7, in calculating the reference pixels to be used in performing intra prediction.
  • the present exemplary embodiment is not limited to this configuration, and may use the upper left processing unit to calculate the reference pixel, as illustrated in Fig. 8 (e.g., the processing unit K illustrated in Fig. 9, when the processing unit T has been coded).
  • the upper right processing unit or the lower left processing unit may also be used in calculating the reference pixels (e.g., the processing unit E or S when the processing unit L has been coded).
  • the frames employ only the intra prediction process.
  • the present exemplary embodiment may also be applied to the frames that can employ the inter prediction process.
  • each or all units may be described in software, and may be performed by a processing unit such as the CPU.
  • the present exemplary embodiment may employ side information in the coded data which indicates, in units of frames or slices, whether the above-described method is to be employed, before coding the block. Accordingly, the present exemplary embodiment may be configured to switch between the conventional method and the above-described method, so that detailed control can be performed. For example, if high speed is demanded, the above-described method according to the present exemplary embodiment is used; if compatibility with the conventional method is preferred over high speed, the above-described method is not used.
  • the configuration of the image decoding apparatus (illustrated in Fig. 2) and the flowchart of the process performed by the image decoding apparatus (illustrated in Fig. 6) according to a ninth exemplary embodiment of the present invention is similar to those according to the eighth exemplary embodiment. Description of these will thus be omitted. According to the present exemplary embodiment, decoding of the coded data generated in the second exemplary embodiment will be described below as an example.
  • Fig. 7 is the flowchart illustrating in detail the process in step 1105 illustrated in Fig. 6, according to the ninth exemplary embodiment.
  • the flowchart illustrated in Fig. 7 is the same as the flowchart according to the eighth exemplary embodiment. However, since the contents of step 301 and step 302 are different from those according to the eighth exemplary embodiment, only step 301 and step 302 will be described below.
  • the image decoding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded.
  • the left reference pixel is calculated by the method illustrated in Fig. 12.
  • In step 431 illustrated in Fig. 12, the image decoding apparatus determines whether the left reference processing unit is the processing unit on which the prediction process was performed immediately before the processing unit that has been coded. If so (YES in step 431), the process proceeds to step 421; if not (NO in step 431), the process proceeds to step 401.
  • In step 401, the image decoding apparatus calculates the left reference pixel using the reconstructed pixel value of the left reference processing unit.
  • In step 421, the image decoding apparatus calculates the left reference pixel using the prediction value of the left reference processing unit.
  • In step 302 illustrated in Fig. 7, the image decoding apparatus then calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded.
  • the upper reference pixel is calculated by the method illustrated in Fig. 12.
  • In step 431, the image decoding apparatus determines whether the upper reference processing unit is the processing unit on which the prediction process was performed immediately before the processing unit that has been coded. If so (YES in step 431), the process proceeds to step 421; if not (NO in step 431), the process proceeds to step 401.
  • In step 401, the image decoding apparatus calculates the upper reference pixel using the reconstructed pixel value of the upper reference processing unit.
  • In step 421, the image decoding apparatus calculates the upper reference pixel using the prediction value of the upper reference processing unit.
  • the left processing unit S is the processing unit on which the prediction process has been performed immediately before the processing unit T, so that the image decoding apparatus calculates the left reference pixel using the prediction value of the processing unit S.
  • on the other hand, the upper processing unit L is not the processing unit on which the prediction process was performed immediately before the processing unit T; the image decoding apparatus thus calculates the upper reference pixel using the reconstructed pixel value of the processing unit L.
  • Since steps 304, 326, 305, and 306 are the same as those according to the eighth exemplary embodiment, description thereof will be omitted.
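  • A minimal sketch of the Fig. 12 decision just described is shown below; the decoding-order indices are an assumed bookkeeping device, not notation from the patent.

```python
# Sketch of the Fig. 12 decision (ninth exemplary embodiment): the
# prediction value of a reference unit is used only when that unit was
# processed immediately before the current unit; otherwise its
# reconstructed pixel value is used.

def reference_source(current_index, reference_index):
    if reference_index == current_index - 1:   # step 431 -> YES
        return "prediction value"              # step 421
    return "reconstructed pixel value"         # step 401

# Processing unit S (index 9) precedes T (index 10); unit L was decoded earlier.
print(reference_source(current_index=10, reference_index=9))  # prediction value
print(reference_source(current_index=10, reference_index=4))  # reconstructed pixel value
```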
  • the coded data generated by the high-speed coding method realized by pipelining the process according to the second exemplary embodiment can be decoded.
  • the reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the eighth exemplary embodiment.
  • the conventional method is replaced by the above-described method.
  • the image decoding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the eighth exemplary embodiment.
  • the configuration of the image decoding apparatus (illustrated in Fig. 2) and the flowchart of the process performed by the image decoding apparatus (illustrated in Fig. 6) according to a tenth exemplary embodiment of the present invention is similar to those according to the eighth exemplary embodiment. Description of these will thus be omitted. According to the present exemplary embodiment, decoding of the coded data generated in the third exemplary embodiment will be described below as an example.
  • Fig. 7 is the flowchart illustrating in detail the process in step 1105 illustrated in Fig. 6, according to the tenth exemplary embodiment.
  • the flowchart illustrated in Fig. 7 is the same as the flowchart according to the eighth exemplary embodiment. However, since the contents of step 301 and step 302 are different from those according to the eighth exemplary embodiment, only step 301 and step 302 will be described below.
  • the image decoding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded.
  • the left reference pixel is calculated by the method illustrated in Fig. 13.
  • In step 441 illustrated in Fig. 13, the image decoding apparatus determines whether the left reference processing unit belongs to the same block as the processing unit that has been coded. If so (YES in step 441), the process proceeds to step 421; if not (NO in step 441), the process proceeds to step 401.
  • In step 401, the image decoding apparatus calculates the left reference pixel using the reconstructed pixel value of the left reference processing unit.
  • In step 421, the image decoding apparatus calculates the left reference pixel using the prediction value of the left reference processing unit.
  • In step 302 illustrated in Fig. 7, the image decoding apparatus then calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded.
  • the upper reference pixel is calculated by the method illustrated in Fig. 13.
  • In step 441, the image decoding apparatus determines whether the upper reference processing unit belongs to the same block as the processing unit that has been coded. If so (YES in step 441), the process proceeds to step 421; if not (NO in step 441), the process proceeds to step 401.
  • In step 401, the image decoding apparatus calculates the upper reference pixel using the reconstructed pixel value of the upper reference processing unit.
  • In step 421, the image decoding apparatus calculates the upper reference pixel using the prediction value of the upper reference processing unit.
  • the left processing unit R belongs to a block different from the block to which the processing unit S belongs; the left reference pixel is thus calculated using the reconstructed pixel value of the processing unit R.
  • the upper processing unit K and the processing unit S belong to the same block. According to the tenth exemplary embodiment, the prediction value of the processing unit K is thus used to calculate the upper reference pixel.
  • Since steps 304, 326, 305, and 306 are the same as those according to the eighth exemplary embodiment, description thereof will be omitted.
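  • The same decision for the tenth exemplary embodiment (Fig. 13) reduces to a block-membership test, sketched below with a hypothetical block_of mapping from a processing unit to the block that contains it.

```python
# Sketch of the Fig. 13 decision (tenth exemplary embodiment): a reference
# unit inside the same block as the unit being decoded contributes its
# prediction value; a reference unit in another block contributes its
# reconstructed pixel value.

def reference_source(block_of, current_unit, reference_unit):
    if block_of[reference_unit] == block_of[current_unit]:   # step 441 -> YES
        return "prediction value"                            # step 421
    return "reconstructed pixel value"                       # step 401

block_of = {"S": 3, "K": 3, "R": 2}   # K shares block 3 with S; R does not
print(reference_source(block_of, "S", "K"))   # prediction value
print(reference_source(block_of, "S", "R"))   # reconstructed pixel value
```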
  • the coded data generated using the high-speed coding method realized by pipelining the process according to the third exemplary embodiment can be decoded.
  • the reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the eighth exemplary embodiment.
  • the conventional method is replaced by the above-described method.
  • the image decoding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the eighth exemplary embodiment.
  • the configuration of the image decoding apparatus (illustrated in Fig. 2) and the flowchart of the process performed by the image decoding apparatus (illustrated in Fig. 6) according to the eleventh exemplary embodiment of the present invention is similar to those according to the eighth exemplary embodiment. Description of these will thus be omitted. According to the present exemplary embodiment, decoding of the coded data generated in the fourth exemplary embodiment will be described below as an example.
  • Fig. 7 is the flowchart illustrating in detail the process in step 1105 illustrated in Fig. 6, according to the eleventh exemplary embodiment.
  • the flowchart illustrated in Fig. 7 is the same as the flowchart according to the eighth exemplary embodiment. However, since the contents of step 301 and step 302 are different from those according to the eighth exemplary embodiment, only step 301 and step 302 will be described below.
  • the image decoding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded.
  • the left reference pixel is calculated by the method illustrated in Fig. 14.
  • In step 451 illustrated in Fig. 14, the image decoding apparatus determines whether the left reference processing unit belongs to the same block as the processing unit that has been coded, or belongs to the block in which the prediction process was performed immediately before the processing unit that has been coded. If so (YES in step 451), the process proceeds to step 421; if not (NO in step 451), the process proceeds to step 401.
  • In step 401, the image decoding apparatus calculates the left reference pixel using the reconstructed pixel value of the left reference processing unit.
  • In step 421, the image decoding apparatus calculates the left reference pixel using the prediction value of the left reference processing unit.
  • In step 302 illustrated in Fig. 7, the image decoding apparatus then calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded.
  • the upper reference pixel is calculated by the method illustrated in Fig. 14.
  • In step 451, the image decoding apparatus determines whether the upper reference processing unit belongs to the same block as the processing unit that has been coded, or belongs to the block in which the prediction process was performed immediately before the processing unit that has been coded. If so (YES in step 451), the process proceeds to step 421; if not (NO in step 451), the process proceeds to step 401.
  • In step 401, the image decoding apparatus calculates the upper reference pixel using the reconstructed pixel value of the upper reference processing unit.
  • In step 421, the image decoding apparatus calculates the upper reference pixel using the prediction value of the upper reference processing unit.
  • the left processing unit J belongs to the block in which the prediction process has been performed immediately before the block to which the processing unit K belongs.
  • the left reference pixel is thus calculated using the prediction value of the processing unit J.
  • the upper processing unit C and the processing unit K belong to different blocks. Further, the upper processing unit C does not belong to the block in which the prediction process has been performed immediately before the block to which the processing unit K belongs. According to the eleventh exemplary embodiment, the reconstructed pixel of the processing unit C is thus used to calculate the upper reference pixel.
  • Since steps 304, 326, 305, and 306 are the same as those according to the eighth exemplary embodiment, description thereof will be omitted.
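  • A sketch of the Fig. 14 decision follows; the raster-order block indices are an assumption used only to express "same block or the immediately preceding block".

```python
# Sketch of the Fig. 14 decision (eleventh exemplary embodiment): the
# prediction value is used when the reference unit lies in the same block
# as the coded unit or in the block processed immediately before it.

def reference_source(current_block, reference_block):
    if reference_block in (current_block, current_block - 1):  # step 451 -> YES
        return "prediction value"                               # step 421
    return "reconstructed pixel value"                          # step 401

# Unit J sits in the block just before unit K's block; unit C is further away.
print(reference_source(current_block=7, reference_block=6))  # prediction value (J)
print(reference_source(current_block=7, reference_block=2))  # reconstructed pixel value (C)
```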
  • the coded data generated using the high-speed coding method realized by pipelining the process according to the fourth exemplary embodiment can be decoded.
  • the reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the eighth exemplary embodiment.
  • the conventional method is replaced by the above-described method.
  • the image decoding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the eighth exemplary embodiment.
  • the configuration of the image decoding apparatus (illustrated in Fig. 2) and the flowchart of the process performed by the image decoding apparatus (illustrated in Fig. 6) according to a twelfth exemplary embodiment of the present invention is similar to those according to the eighth exemplary embodiment. Description of these will thus be omitted. According to the present exemplary embodiment, decoding of the coded data generated in the fifth exemplary embodiment will be described below as an example.
  • Fig. 7 is the flowchart illustrating in detail the process in step 1105 illustrated in Fig. 6, according to the twelfth exemplary embodiment.
  • the flowchart illustrated in Fig. 7 is the same as the flowchart according to the eighth exemplary embodiment. However, since the contents of step 301 and step 302 are different from those according to the eighth exemplary embodiment, only step 301 and step 302 will be described below.
  • the image decoding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded.
  • the left reference pixel is calculated by the method illustrated in Fig. 15.
  • In step 461 illustrated in Fig. 15, the image decoding apparatus determines whether the perpendicular position of the block to which the left reference processing unit belongs is the same as that of the block to which the processing unit that has been coded belongs. If so (YES in step 461), the process proceeds to step 421; if not (NO in step 461), the process proceeds to step 401.
  • In step 401, the image decoding apparatus calculates the left reference pixel using the reconstructed pixel value of the left reference processing unit.
  • In step 421, the image decoding apparatus calculates the left reference pixel using the prediction value of the left reference processing unit.
  • In step 302 illustrated in Fig. 7, the image decoding apparatus then calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded.
  • the upper reference pixel is calculated by the method illustrated in Fig. 15.
  • In step 461, the image decoding apparatus determines whether the perpendicular position of the block to which the upper reference processing unit belongs is the same as that of the block to which the processing unit that has been coded belongs. If so (YES in step 461), the process proceeds to step 421; if not (NO in step 461), the process proceeds to step 401.
  • In step 401, the image decoding apparatus calculates the upper reference pixel using the reconstructed pixel value of the upper reference processing unit.
  • In step 421, the image decoding apparatus calculates the upper reference pixel using the prediction value of the upper reference processing unit.
  • the processing unit M has been coded.
  • the left processing unit L belongs to the block whose perpendicular position is the same as that of the block to which the processing unit M belongs.
  • the image decoding apparatus thus calculates the left reference pixel using the prediction value of the processing unit L.
  • the upper processing unit E belongs to a block whose perpendicular position is different from that of the block to which the processing unit M belongs; the image decoding apparatus thus uses the reconstructed pixel value of the processing unit E to calculate the upper reference pixel.
  • Since steps 304, 326, 305, and 306 are the same as those according to the eighth exemplary embodiment, description thereof will be omitted.
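  • The Fig. 15 decision compares the perpendicular (vertical) positions of the two blocks, i.e., whether they lie in the same block row; a sketch with illustrative row indices follows.

```python
# Sketch of the Fig. 15 decision (twelfth exemplary embodiment): the
# prediction value is used when the reference unit's block has the same
# perpendicular position as the block of the coded unit, i.e. both blocks
# are in the same block row.

def reference_source(current_block_row, reference_block_row):
    if reference_block_row == current_block_row:   # step 461 -> YES
        return "prediction value"                  # step 421
    return "reconstructed pixel value"             # step 401

# Unit L's block shares a block row with unit M's block; unit E's block does not.
print(reference_source(current_block_row=1, reference_block_row=1))  # prediction value
print(reference_source(current_block_row=1, reference_block_row=0))  # reconstructed pixel value
```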
  • the coded data generated using the high-speed coding method realized by pipelining the process according to the fifth exemplary embodiment can be decoded.
  • the reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the eighth exemplary embodiment.
  • the conventional method is replaced by the above-described method.
  • the image decoding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the eighth exemplary embodiment.
  • the configuration of the image decoding apparatus (illustrated in Fig. 2) and the flowchart of the process performed by the image decoding apparatus (illustrated in Fig. 6) according to a thirteenth exemplary embodiment of the present invention is similar to those according to the eighth exemplary embodiment. Description of these will thus be omitted. According to the present exemplary embodiment, decoding of the coded data generated in the sixth exemplary embodiment will be described below as an example.
  • Fig. 7 is the flowchart illustrating in detail the process in step 1105 illustrated in Fig. 6, according to the thirteenth exemplary embodiment.
  • the flowchart illustrated in Fig. 7 is the same as the flowchart according to the eighth exemplary embodiment. However, since the contents of step 301 and step 302 are different from those according to the eighth exemplary embodiment, only step 301 and step 302 will be described below.
  • the image decoding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded.
  • the left reference pixel is calculated by the method illustrated in Fig. 16.
  • In step 471 illustrated in Fig. 16, the image decoding apparatus determines whether the block to which the left reference processing unit belongs is within a predetermined number of blocks previous to the block to which the processing unit that has been coded belongs. If so (YES in step 471), the process proceeds to step 421; if not (NO in step 471), the process proceeds to step 401.
  • In step 401, the image decoding apparatus calculates the left reference pixel using the reconstructed pixel value of the left reference processing unit.
  • In step 421, the image decoding apparatus calculates the left reference pixel using the prediction value of the left reference processing unit.
  • In step 302 illustrated in Fig. 7, the image decoding apparatus then calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded.
  • the upper reference pixel is calculated by the method illustrated in Fig. 16.
  • In step 471, the image decoding apparatus determines whether the block to which the upper reference processing unit belongs is within the predetermined number of blocks previous to the block to which the processing unit that has been coded belongs. If so (YES in step 471), the process proceeds to step 421; if not (NO in step 471), the process proceeds to step 401.
  • In step 401, the image decoding apparatus calculates the upper reference pixel using the reconstructed pixel value of the upper reference processing unit.
  • In step 421, the image decoding apparatus calculates the upper reference pixel using the prediction value of the upper reference processing unit.
  • the processing unit O has been coded, and the predetermined number of blocks is set to five blocks.
  • the block to which the left processing unit N belongs is one block previous to the block to which the processing unit O belongs.
  • the image decoding apparatus calculates the left reference pixel using the prediction value of the processing unit N.
  • the block to which the upper processing unit G belongs is four blocks previous to the block to which the processing unit O belongs.
  • the image decoding apparatus thus calculates the upper reference pixel using the prediction value of the processing unit G.
  • Since steps 304, 326, 305, and 306 are the same as those according to the eighth exemplary embodiment, description thereof will be omitted.
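  • The Fig. 16 decision is a distance test in block units; the sketch below uses the five-block threshold from the example above, with hypothetical block indices, and assumes that a reference unit in the same block also qualifies.

```python
# Sketch of the Fig. 16 decision (thirteenth exemplary embodiment): the
# prediction value is used while the reference unit's block is within a
# predetermined number of blocks before the coded unit's block.

def reference_source(current_block, reference_block, max_distance=5):
    # the same block (distance 0) is assumed to qualify as well
    if 0 <= current_block - reference_block <= max_distance:   # step 471 -> YES
        return "prediction value"                              # step 421
    return "reconstructed pixel value"                         # step 401

print(reference_source(current_block=10, reference_block=9))  # prediction value (unit N)
print(reference_source(current_block=10, reference_block=6))  # prediction value (unit G)
print(reference_source(current_block=10, reference_block=3))  # reconstructed pixel value
```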
  • the coded data generated using the high-speed coding method realized by pipelining the process according to the sixth exemplary embodiment can be decoded.
  • the reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the eighth exemplary embodiment.
  • the conventional method is replaced by the above-described method.
  • the image decoding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the eighth exemplary embodiment.
  • the configuration of the image decoding apparatus (illustrated in Fig. 2) and the flowchart of the process performed by the image decoding apparatus (illustrated in Fig. 6) according to a fourteenth exemplary embodiment of the present invention is similar to those according to the eighth exemplary embodiment. Description of these will thus be omitted. According to the present exemplary embodiment, decoding of the coded data generated in the seventh exemplary embodiment will be described below as an example.
  • Fig. 7 is the flowchart illustrating in detail the process in step 1105 illustrated in Fig. 6, according to the fourteenth exemplary embodiment.
  • the flowchart illustrated in Fig. 7 is the same as the flowchart according to the eighth exemplary embodiment. However, since the contents of step 301 and step 302 are different from those according to the eighth exemplary embodiment, only step 301 and step 302 will be described below.
  • the image decoding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded.
  • the left reference pixel is calculated by the method illustrated in Fig. 17.
  • In step 481 illustrated in Fig. 17, the image decoding apparatus determines whether the size of the processing unit that has been coded is smaller than a predetermined size of the processing unit. If it is smaller (YES in step 481), the process proceeds to step 421; if not (NO in step 481), the process proceeds to step 401.
  • In step 401, the image decoding apparatus calculates the left reference pixel using the reconstructed pixel value of the left reference processing unit.
  • In step 421, the image decoding apparatus calculates the left reference pixel using the prediction value of the left reference processing unit.
  • In step 302 illustrated in Fig. 7, the image decoding apparatus then calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded.
  • the upper reference pixel is calculated by the method illustrated in Fig. 17.
  • In step 481, the image decoding apparatus determines whether the size of the processing unit that has been coded is smaller than the predetermined size of the processing unit. If it is smaller (YES in step 481), the process proceeds to step 421; if not (NO in step 481), the process proceeds to step 401.
  • In step 401, the image decoding apparatus calculates the upper reference pixel using the reconstructed pixel value of the upper reference processing unit.
  • In step 421, the image decoding apparatus calculates the upper reference pixel using the prediction value of the upper reference processing unit.
  • Since steps 304, 326, 305, and 306 are the same as those according to the eighth exemplary embodiment, description thereof will be omitted.
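  • The Fig. 17 decision depends only on the size of the processing unit being decoded; the sketch below assumes an 8-pixel threshold purely for illustration, since the predetermined size may be a fixed value or carried as side information.

```python
# Sketch of the Fig. 17 decision (fourteenth exemplary embodiment): when
# the coded processing unit is smaller than a predetermined size, the
# reference pixels come from the neighbours' prediction values; otherwise
# the reconstructed pixel values are used. The threshold of 8 is assumed.

def reference_source(unit_size, predetermined_size=8):
    if unit_size < predetermined_size:      # step 481 -> YES
        return "prediction value"           # step 421
    return "reconstructed pixel value"      # step 401

print(reference_source(unit_size=4))    # prediction value
print(reference_source(unit_size=16))   # reconstructed pixel value
```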
  • the coded data generated by the high-speed coding method realized by pipelining the process according to the seventh exemplary embodiment can be decoded.
  • the reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the eighth exemplary embodiment.
  • the conventional method is replaced by the above-described method.
  • the image decoding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the eighth exemplary embodiment.
  • the configuration of the image coding apparatus (illustrated in Fig. 1) and the flowchart of the process performed by the image coding apparatus (illustrated in Fig. 3) according to a fifteenth exemplary embodiment of the present invention is similar to those according to the first exemplary embodiment. Description of these will thus be omitted.
  • Fig. 20 is a flowchart illustrating in detail the process performed in step 1004 illustrated in Fig. 3, according to the fifteenth exemplary embodiment.
  • In step 511, the image coding apparatus determines whether the processing unit to be coded is adjacent to the upper end of the block to which the processing unit belongs. If it is (YES in step 511), the process proceeds to step 512; if not (NO in step 511), the process proceeds to step 513.
  • In step 512, the image coding apparatus determines whether the processing unit to be coded is adjacent to the left end of the block to which the processing unit belongs. If it is (YES in step 512), the process proceeds to step 501; if not (NO in step 512), the process proceeds to step 503.
  • In step 513, the image coding apparatus determines whether the processing unit to be coded is adjacent to the left end of the block to which the processing unit belongs. If it is (YES in step 513), the process proceeds to step 505; if not (NO in step 513), the process proceeds to step 507.
  • the image coding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit to be coded. According to the fifteenth exemplary embodiment, the image coding apparatus calculates the left reference pixel by the method illustrated in Fig. 10 or Fig. 11.
  • the image coding apparatus may calculate the left reference pixel by the method illustrated in Fig. 11.
  • the image coding apparatus may calculate the left reference pixel by the method illustrated in Fig. 10.
  • the combination of the calculation methods is not limited to the above.
  • the image coding apparatus calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit to be coded. According to the fifteenth exemplary embodiment, the image coding apparatus calculates the upper reference pixel by the method illustrated in Fig. 10 or Fig. 11.
  • the image coding apparatus may calculate the upper reference pixel by the method illustrated in Fig. 11.
  • the image coding apparatus may calculate the upper reference pixel by the method illustrated in Fig. 10.
  • the combination of the calculation methods is not limited to the above.
  • Since steps 204, 208, 226, 205, 206, and 207 are the same as those according to the first exemplary embodiment, description thereof will be omitted.
  • the reference pixel whose prediction value is to be used is flexibly set according to a relative position of the processing unit to be coded within the block.
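  • One way to read the Fig. 20 dispatch described above is sketched below: units on the upper or left edge of their block take the corresponding reference from an already reconstructed neighbouring block (Fig. 10), while interior units take it from the neighbour's prediction value (Fig. 11). This mapping is an assumption for illustration; as noted above, the combination of calculation methods is not limited to it.

```python
# Sketch of a position-dependent reference choice (fifteenth exemplary
# embodiment, Fig. 20). The mapping implemented here is one plausible
# reading of steps 511-513 and 501-507, offered purely as an assumption.

def reference_sources(at_upper_edge, at_left_edge):
    # Across a block boundary the neighbour lies in an already finished
    # block, so its reconstructed pixel value (Fig. 10) can be used; inside
    # the block the neighbour's prediction value (Fig. 11) avoids waiting.
    left = "reconstructed pixel value" if at_left_edge else "prediction value"
    upper = "reconstructed pixel value" if at_upper_edge else "prediction value"
    return left, upper

print(reference_sources(at_upper_edge=True, at_left_edge=True))    # cf. step 501
print(reference_sources(at_upper_edge=False, at_left_edge=False))  # cf. step 507
```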
  • the reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the first exemplary embodiment.
  • the conventional method is replaced by the above-described method.
  • the image coding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the first exemplary embodiment.
  • the predetermined size of the processing unit may be a fixed value, or may be multiplexed in the coded data as the side information.
  • Fig. 18 is a block diagram illustrating an image coding-decoding apparatus according to a sixteenth exemplary embodiment of the present invention.
  • a CPU 600 performs control of the apparatus and various processes.
  • a memory 601 provides a storage area necessary for storing an operating system (OS) and software, and for performing operations for controlling the image coding-decoding apparatus.
  • a bus 602 connects various devices for receiving and transmitting data and control signals.
  • a terminal 603 is used for activating the apparatus, setting various conditions, and instructing reproduction to the apparatus.
  • a storage device 604 stores the software.
  • a storage device 605 stores the streams. The storage devices 604 and 605 can be media that can be separated from the system and be moved.
  • a camera 606 captures a moving image.
  • a monitor 607 displays the image.
  • a communication circuit 609 includes a local area network (LAN), a public line, a wireless line, and airwaves.
  • a communication interface 608 transmits and receives the streams via the communication circuit 609.
  • the storage device 604 stores image coding software in which the process of the flowchart illustrated in Fig. 3 is written.
  • the image coding software is read out from the storage device 604 and loaded in the memory 601, and the coding process is started.
  • the image data input from the camera 606 is thus coded by the image coding software using the optimum intra prediction mode, and is output via the communication interface 608.
  • the storage device 604 stores image decoding software in which the process of the flowchart illustrated in Fig. 6 is written.
  • the software is activated via the terminal 603.
  • the image decoding software is read out from the storage device 604 and loaded in the memory 601, and the decoding process is started.
  • the coded data input from the communication circuit 609 via the communication interface 608 is decoded by the image decoding software, so that the image data is generated and displayed on the monitor 607.
  • the image data is not limited to the one input from the camera 606, and may be read out from the storage device 605.
  • the coded data is also not limited to the one to be input to and output from the communication interface 608, and may be recorded in and read out from the storage device 605.
  • the present invention may be implemented as software.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

An image coding apparatus includes a block division unit configured to divide an input image into a plurality of blocks, a processing unit division unit configured to divide each block into processing units of a same size or of a smaller size as compared to the block, an intra prediction value calculation unit configured to refer to, for each processing unit, a plurality of predetermined processing units or one predetermined processing unit surrounding the processing unit and calculate a prediction value of the processing unit, a reconstructed pixel value calculation unit configured to calculate for each processing unit a reconstructed pixel value of the processing unit, a reconstructed pixel value storing unit configured to store a result of the reconstructed pixel value calculation unit, and an intra prediction value storing unit configured to store a result of the intra prediction value calculation unit.

Description

IMAGE CODING APPARATUS, IMAGE CODING METHOD AND PROGRAM, IMAGE DECODING APPARATUS, AND IMAGE DECODING METHOD AND PROGRAM
The present invention relates to an image coding apparatus, an image coding method and a program, an image decoding apparatus, and an image decoding method and a program. In particular, the present invention relates to an intra-frame prediction coding method in an image.
Various moving image standards have been developed, and are currently being developed, in response to advancements in video applications used in communication and recording. Such standards include H.261, H.263, MPEG-2 Video, MPEG-4 Visual, and H.264/MPEG-4 Advanced Video Coding (AVC) (hereinafter referred to as H.264). These standards are set by the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) and the International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) (refer to ISO/IEC 14496-10:2004 Information technology - Coding of audio-visual objects - Part 10: Advanced Video Coding).
H.264, which was standardized in 2003, is widely used in one-segment terrestrial digital broadcasting. H.264 is characterized in that integer transform is performed in 4-by-4 pixel units in addition to the conventional coding method, and a plurality of intra prediction processes is provided. Further, according to H.264, a plurality of previous and subsequent frames can be referred to, a loop filter is used, and motion compensation is performed using seven types of sub-blocks. Furthermore, according to H.264, motion compensation of 1/4 pixel accuracy can be performed, similarly as in MPEG-4 Visual. Moreover, according to H.264, universal variable length coding or context-based adaptive variable length coding is used in performing entropy coding (refer to ITU-T H.264 Advanced video coding for generic audiovisual services).
A coding method such as the above-described H.264 performs intra prediction using surrounding reconstructed pixels that have been coded. In such a coding method, in order to refer to a block, it is necessary to complete each of the orthogonal transform, quantization, inverse quantization, and inverse orthogonal transform processes with respect to the prediction error in the block to be referred to. It is thus extremely difficult to pipeline the intra prediction process.
ISO/IEC 14496-10:2004 Information technology - Coding of audio-visual objects - Part 10: Advanced Video Coding
The present invention is directed to an image coding apparatus which realizes an intra prediction process whose processing speed can be increased by performing pipeline processing.
According to an aspect of the present invention, an image coding apparatus includes a block division unit configured to divide an input image into a plurality of blocks;
a processing unit division unit configured to divide each block into processing units of a same size or of a smaller size as compared to the block, an intra prediction value calculation unit configured to refer to, for each processing unit, a plurality of predetermined processing units or one predetermined processing unit surrounding the processing unit and calculate a prediction value of the processing unit;
a reconstructed pixel value calculation unit configured to calculate for each processing unit a reconstructed pixel value of the processing unit according to the input image and a result of the intra prediction value calculation unit with use of a predetermined method, a reconstructed pixel value storing unit configured to store a result of the reconstructed pixel value calculation unit, and an intra prediction value storing unit configured to store a result of the intra prediction value calculation unit, wherein the intra prediction value calculation unit calculates a prediction value of the processing unit by switching, based on a positional relationship between the processing unit and a processing unit to be referred to or a status of the processing unit, between a reconstructed pixel value of the processing unit to be referred to acquired from the reconstructed pixel value storing unit and an intra prediction value of the processing unit to be referred to acquired from the intra prediction value storing unit.
According to the present invention, the intra prediction process can be performed at high speed by pipelining the process, and high speed coding and decoding can thus be realized.
Further features and aspects of the present invention will become apparent from the following detailed description of exemplary embodiments with reference to the attached drawings.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments, features, and aspects of the invention and, together with the description, serve to explain the principles of the invention.
Fig. 1 is a block diagram illustrating an image coding apparatus according to the first, second, third, fourth, fifth, sixth, seventh, and fifteenth exemplary embodiments of the present invention. Fig. 2 is a block diagram illustrating a configuration of an image decoding apparatus according to the eighth, ninth, tenth, eleventh, twelfth, thirteenth, and fourteenth exemplary embodiments of the present invention. Fig. 3 is a flowchart illustrating an image coding process according to the first to seventh exemplary embodiments of the present invention. Fig. 4 is a flowchart illustrating an intra prediction process according to the first to seventh exemplary embodiments of the present invention. Fig. 5 is a flowchart illustrating another example of the intra prediction process according to the first to seventh exemplary embodiments of the present invention. Fig. 6 is a flowchart illustrating an image decoding process according to the eighth to fourteenth exemplary embodiments of the present invention. Fig. 7 is a flowchart illustrating a pixel reconstruction process according to the eighth to fourteenth exemplary embodiments of the present invention. Fig. 8 illustrates a derived form of the pixel reconstruction process according to the eighth to fourteenth exemplary embodiments of the present invention. Fig. 9 illustrates an example of a state in a reference method used in the intra prediction according to the first exemplary embodiment of the present invention. Fig. 10 is a flowchart illustrating a method for determining a reference pixel used in performing intra prediction in a conventional image coding apparatus. Fig. 11 is a flowchart illustrating a method for determining a reference pixel used in performing intra prediction according to the first and eighth exemplary embodiments of the present invention. Fig. 12 is a flowchart illustrating a method for determining a reference pixel used in performing intra prediction according to the second and ninth exemplary embodiments of the present invention. Fig. 13 is a flowchart illustrating a method for determining a reference pixel used in performing intra prediction according to the third and tenth exemplary embodiments of the present invention. Fig. 14 is a flowchart illustrating a method for determining a reference pixel used in performing intra prediction according to the fourth and eleventh exemplary embodiments of the present invention. Fig. 15 is a flowchart illustrating a method for determining a reference pixel used in performing intra prediction according to the fifth and twelfth exemplary embodiments of the present invention. Fig. 16 is a flowchart illustrating a method for determining a reference pixel used in performing intra prediction according to the sixth and thirteenth exemplary embodiments of the present invention. Fig. 17 is a flowchart illustrating a method for determining a reference pixel used in performing intra prediction according to the seventh and fourteenth exemplary embodiments of the present invention. Fig. 18 is a block diagram illustrating a configuration of an image coding-decoding apparatus according to a sixteenth exemplary embodiment of the present invention. Fig. 19A illustrates an example of a processing order for performing intra prediction previous and subsequent to applying the present invention. Fig. 19B illustrates an example of a processing order for performing intra prediction previous and subsequent to applying the present invention. Fig. 20 is a flowchart illustrating the intra prediction process according to the fifteenth exemplary embodiment of the present invention.
Various exemplary embodiments, features, and aspects of the invention will be described in detail below with reference to the drawings.
The configurations in the exemplary embodiments to be described below are merely examples, and the present invention is not limited to the disclosed exemplary embodiments.
A first exemplary embodiment according to the present invention will be described below with reference to the drawings. Fig. 1 is a block diagram illustrating an image coding apparatus to which the present invention is applied. Referring to Fig. 1, a block division unit 1 divides an input image into a plurality of blocks. A processing unit division unit 2 divides each block into processing units of a same size as the block or of a smaller size as compared to the block. An intra prediction unit 34 determines an intra prediction mode for each processing unit, and calculates an intra prediction value using the determined intra prediction mode.
An intra prediction value storing unit 21 stores the calculated intra prediction value. A transform-quantization unit 5 calculates a difference between the intra prediction value and an input pixel corresponding to each processing unit, transforms and quantizes the difference as coefficient data, and calculates a quantization coefficient. A reconstructed pixel value calculation unit 6 calculates a reconstructed pixel value by performing inverse quantization and inverse transform on the quantization coefficient calculated by the transform-quantization unit 5, and adding the result to the intra prediction value. A reconstructed pixel value storing unit 7 stores the reconstructed pixel value calculated by the reconstructed pixel value calculation unit 6. An entropy coding unit 8 performs entropy coding on the quantization coefficient.
An image coding process performed in the above-described image coding apparatus will be described below. According to the present exemplary embodiment, moving image data is input to the image coding apparatus by frames. However, still image data corresponding to one frame may also be input to the image coding apparatus. Further, only the intra prediction process will be described for ease of description. However, the present exemplary embodiment is not limited to intra prediction, and intra prediction may be used in combination with an inter prediction process.
The input image data corresponding to one frame is input to the block division unit 1, divided into blocks, and output to the processing unit division unit 2. The processing unit division unit 2 divides the image data, which has been divided into blocks, into processing units of a size smaller than or equal to that of the block. The image data divided into processing units is then input to the intra prediction unit 34 and the transform-quantization unit 5.
The intra prediction unit 34 determines an optimal intra prediction mode with respect to the input image data divided into processing units. The intra prediction unit 34 then acquires a prediction value of a surrounding processing unit from the intra prediction value storing unit 21, and a reconstructed pixel value of the surrounding processing unit from the reconstructed pixel value storing unit 7. The intra prediction unit 34 uses the acquired values and the determined intra prediction mode to calculate the intra prediction value. The intra prediction unit 34 outputs the calculated intra prediction value to the intra prediction value storing unit 21, the reconstructed pixel value calculation unit 6, and the transform-quantization unit 5.
The intra prediction value storing unit 21 stores information about the calculated intra prediction value. If a stored intra prediction value is referred to when the intra prediction unit 34 calculates an intra prediction value, the intra prediction value storing unit 21 outputs the stored intra prediction value to the intra prediction unit 34. The intra prediction unit 34 thus uses the input intra prediction value to calculate an intra prediction value.
The transform-quantization unit 5 receives the intra prediction value from the intra prediction unit 34 and the input image data corresponding to each processing unit from the processing unit division unit 2, and calculates the difference between the input intra prediction value and the input image data. The transform-quantization unit 5 then performs transform and quantization on the acquired difference, and calculates a quantization coefficient. The transform-quantization unit 5 outputs the calculated quantization coefficient to the reconstructed pixel value calculation unit 6 and the entropy coding unit 8.
The reconstructed pixel value calculation unit 6 performs inverse quantization and inverse transform on the input quantization coefficient, adds the intra prediction value output from the intra prediction unit 34 to the processing result, and calculates the reconstructed pixel value. The reconstructed pixel value calculation unit 6 then outputs the calculated reconstructed pixel value to the reconstructed pixel value storing unit 7.
The reconstructed pixel value storing unit 7 stores information about the received reconstructed pixel value. The reconstructed pixel value storing unit 7 outputs to the intra prediction unit 34 the stored reconstructed pixel value, which is referred to as necessary in calculating the intra prediction value.
The entropy coding unit 8 performs entropy coding on the quantization coefficient, and outputs the result as a bit stream.
A simple flow of the coding process will be described below with reference to the drawing. Fig. 3 is a flowchart illustrating the image coding process performed by the image coding apparatus according to the first exemplary embodiment.
In step 1001, the image coding apparatus divides the input image by frames into blocks. In step 1002, the image coding apparatus divides each block into processing units of a size smaller than or equal to that of the block.
In step 1004, the image coding apparatus determines the intra prediction mode of the processing unit to be coded, and calculates the prediction value of the processing unit to be coded. The image coding apparatus then acquires the difference between the prediction value and the input image, and performs transform and quantization on the difference. In step 1005, the image coding apparatus performs entropy coding on the quantized coefficient.
In step 1006, the image coding apparatus determines whether all of the processing units in the block have been coded. If coding has been completed (COMPLETED in step 1006), the process proceeds to step 1007. If coding has not been completed (NOT COMPLETED in step 1006), the process returns to step 1002, and the image coding apparatus performs the process on the next processing unit.
In step 1007, the image coding apparatus determines whether all of the blocks in the frame have been coded. If coding has been completed (COMPLETED in step 1007), the image coding apparatus stops all operations, and ends the process. If coding has not been completed (NOT COMPLETED in step 1007), the process returns to step 1001, and the image coding apparatus performs the process on the next block.
The prediction, transform, and quantization processes performed in step 1004 will be described in detail below with reference to the drawings. Fig. 4 is a flowchart illustrating in detail the process in step 1004 according to the first exemplary embodiment.
In step 201, the image coding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit to be coded. According to the first exemplary embodiment, the image coding apparatus calculates the left reference pixel by a method illustrated in Fig. 11. In step 421 illustrated in Fig. 11, the image coding apparatus calculates the reference pixel from the prediction value of a left processing unit. In other words, according to the first exemplary embodiment, the reference pixel is calculated only from the prediction value of the reference processing unit.
In step 202 illustrated in Fig. 4, the image coding apparatus calculates an upper reference pixel to be used in calculating the intra prediction value of the processing unit to be coded. According to the first exemplary embodiment, the image coding apparatus calculates the upper reference pixel by the method illustrated in Fig. 11, similarly as in step 201. In step 421 illustrated in Fig. 11, the image coding apparatus thus calculates the reference pixel from the prediction value of an upper processing unit.
More specifically, according to the first exemplary embodiment, both the upper reference pixel and the left reference pixel are calculated only from the prediction values of the respective reference processing units. Referring to Fig. 9, it is assumed that a processing unit T is to be coded according to the first exemplary embodiment. In such a case, prediction values of a processing unit S to the left and a processing unit L above the processing unit T are used as the reference pixels for calculating the prediction value of the processing unit T.
In step 204, the image coding apparatus calculates the prediction value for each intra prediction mode using the calculated reference pixels. In step 208, the image coding apparatus compares each of the prediction values and determines the optimum intra prediction mode for performing coding.
In step 226, the image coding apparatus stores the prediction value of the determined intra prediction mode which may be referred to by the processing units in the future. In step 205, the image coding apparatus acquires the difference between the input pixel and the calculated prediction value, performs transform and quantization on the acquired difference, and calculates the quantization coefficient.
In step 206, the image coding apparatus performs inverse quantization and inverse transform on the calculated quantization coefficient and calculates a residual coefficient. In step 207, the image coding apparatus adds the prediction value to the calculated residual coefficient and calculates the reconstructed pixel value. The image coding apparatus stores the calculated reconstructed value which may be referred to by the processing units in the future, and the process then ends.
Fig. 19A illustrates an example of a conventional coding method, i.e., the processing order of the intra prediction process according to H.264. Referring to Fig. 19A, according to the conventional method, the prediction value of the processing unit (n + 1) to be coded is calculated using only the reconstructed pixel of the reference processing unit (n). Therefore, if processing units which are horizontally adjacent to each other are to be processed in succession, it is necessary to wait until the reconstructed pixel of the previous processing unit (n) has been calculated before the prediction value of the next processing unit (n + 1) can be calculated.
In contrast, Fig. 19B illustrates an example of the processing order of the intra prediction process according to the present invention. Referring to Fig. 19B, the prediction value of the processing unit (n + 1) to be coded is calculated using the prediction value of the reference processing unit (n). Thus, the prediction value of the processing unit (n + 1) to be coded can be calculated without waiting for calculation of the reconstructed pixel of the previous processing unit (n), so that the processing speed can be increased by pipelining the process.
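The difference in the dependencies can be sketched as follows (illustrative only; predict and reconstruct are placeholder callables standing for the prediction and reconstruction stages, and are not interfaces of the apparatus).

def conventional_order(units, predict, reconstruct):
    # Fig. 19A: the prediction of unit (n + 1) needs the reconstructed pixels of
    # unit (n), so the two stages cannot overlap for adjacent processing units.
    recon_prev = None
    for unit in units:
        pred = predict(unit, recon_prev)
        recon_prev = reconstruct(unit, pred)

def proposed_order(units, predict, reconstruct):
    # Fig. 19B: the prediction of unit (n + 1) needs only the prediction of
    # unit (n), so reconstruction can be deferred to a later pipeline stage.
    pred_prev = None
    for unit in units:
        pred = predict(unit, pred_prev)
        pred_prev = pred
        reconstruct(unit, pred)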
According to the present exemplary embodiment, the left processing unit and the upper processing unit are referred to, as illustrated in Fig. 4, in calculating the reference pixels to be used in performing intra prediction. However, the present exemplary embodiment is not limited to this configuration, and may use an upper left processing unit to calculate the reference pixel, as illustrated in Fig. 5 (e.g., the processing unit K illustrated in Fig. 9, when the processing unit T is to be coded). Similarly, an upper right processing unit or a lower left processing unit may also be used in calculating the reference pixels (e.g., the processing unit E or the processing unit S illustrated in Fig. 9, when the processing unit L is to be coded).
Further, according to the present exemplary embodiment, the frames employ only the intra prediction process. However, the present exemplary embodiment may also be applied to frames that can employ the inter prediction process.
Furthermore, according to the present exemplary embodiment, the functions of each or all of the units in the image coding apparatus may be implemented in software and executed by a processing unit such as a central processing unit (CPU).
Moreover, the present exemplary embodiment may be configured to switch between the conventional method and the above-described method in units of frames or slices, using side information in the coded data which indicates, before the block is coded, whether the above-described method is to be employed, so that detailed control can be performed. For example, if high speed is demanded, the above-described method according to the present exemplary embodiment is used. However, if compatibility with the conventional method is preferred over high speed, the above-described method is not used.
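As a sketch under the assumption that such a flag is carried in a frame or slice header, the switch could look as follows. The flag name use_prediction_value_reference and the dictionary-style header are hypothetical and are not defined by the embodiment.

def select_reference_method(header):
    # Side information decides, before the block is coded, which method is used.
    if header.get('use_prediction_value_reference', False):
        return 'prediction value'      # method of the present exemplary embodiment
    return 'reconstructed pixel'       # conventional method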
The configuration of the image coding apparatus (illustrated in Fig. 1) and the flowchart of the process performed by the image coding apparatus (illustrated in Fig. 3) according to a second exemplary embodiment of the present invention are similar to those according to the first exemplary embodiment. Descriptions of these will thus be omitted.
Fig. 4 is the flowchart illustrating in detail the process in step 1004 illustrated in Fig. 3, according to the second exemplary embodiment. The flowchart illustrated in Fig. 4 is the same as the flowchart according to the first exemplary embodiment. However, since the contents of step 201 and step 202 are different from those according to the first exemplary embodiment, only step 201 and step 202 will be described below.
In step 201, the image coding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit to be coded. According to the second exemplary embodiment, the left reference pixel is calculated by the method illustrated in Fig. 12.
In step 431 illustrated in Fig. 12, the image coding apparatus determines whether the left reference processing unit is the processing unit on which the prediction process has been performed immediately before the processing unit to be coded. If the left reference processing unit is the processing unit on which the prediction process has been performed immediately before the processing unit to be coded (YES in step 431), the process proceeds to step 421, whereas if not (NO in step 431), the process proceeds to step 401.
In step 401, the image coding apparatus calculates the left reference pixel using the reconstructed pixel value of the left reference processing unit. In step 421, the image coding apparatus calculates the left reference pixel using the prediction value of the left reference processing unit.
In step 202 illustrated in Fig. 4, the image coding apparatus then calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit to be coded. According to the second exemplary embodiment, the upper reference pixel is calculated by the method illustrated in Fig. 12.
In step 431 illustrated in Fig. 12, the image coding apparatus determines whether the upper reference processing unit is the processing unit on which the prediction process has been performed immediately before the processing unit to be coded. If the upper reference processing unit is the processing unit on which the prediction process has been performed immediately before the processing unit to be coded (YES in step 431), the process proceeds to step 421, whereas if not (NO in step 431), the process proceeds to step 401.
In step 401, the image coding apparatus calculates the upper reference pixel using the reconstructed pixel value of the upper reference processing unit. In step 421, the image coding apparatus calculates the upper reference pixel using the prediction value of the upper reference processing unit.
The process will be described below with reference to Fig. 9. For example, it is assumed that the processing unit T is to be coded, and the prediction process is performed in the order of the processing units K, L, S, and T in the block to which the processing unit T belongs. In such a case, according to the second exemplary embodiment, the left processing unit S is the processing unit on which the prediction process has been performed immediately before the processing unit T, so that the image coding apparatus calculates the left reference pixel using the prediction value of the processing unit S.
On the other hand, in the calculation of the upper reference pixel, the upper processing unit L is not the processing unit on which the prediction process has been performed immediately before the processing unit T. According to the second exemplary embodiment, the image coding apparatus thus calculates the upper reference pixel using the reconstructed pixel value of the processing unit L.
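A sketch of the decision of step 431 in Fig. 12 is given below; the identifiers are assumptions. The prediction value is used only when the reference processing unit is the one predicted immediately before the processing unit to be coded.

def reference_source_second_embodiment(ref_unit, previously_predicted_unit):
    # Step 431 -> step 421 (prediction value) or step 401 (reconstructed pixel value).
    if ref_unit == previously_predicted_unit:
        return 'prediction value'
    return 'reconstructed pixel value'

In the example above, the left processing unit S satisfies this condition, while the upper processing unit L does not.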
Since the processes performed in steps 204, 208, 226, 205, 206, and 207 are the same as those according to the first exemplary embodiment, description will be omitted.
According to the present invention, the image coding apparatus calculates the reference pixel to be employed in performing the intra prediction by switching, based on a condition, between using the prediction value and using the reconstructed pixel of the reference processing unit. In such a case, the more frequently the prediction value is used, the more prediction accuracy is lowered, and coding efficiency is lowered as a result. To address this issue, according to the second exemplary embodiment, the reference processing unit whose prediction value is to be used is limited to the processing unit on which the prediction process has been performed immediately before the processing unit to be coded. The processing speed can thus be increased by pipelining the process at the processing unit level while reducing the influence of lowered coding efficiency.
The reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the first exemplary embodiment.
Further, according to the present exemplary embodiment, the conventional method is replaced by the above-described method. However, the image coding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the first exemplary embodiment.
The configuration of the image coding apparatus (illustrated in Fig. 1) and the flowchart of the process performed by the image coding apparatus (illustrated in Fig. 3) according to a third exemplary embodiment of the present invention are similar to those according to the first exemplary embodiment. Descriptions of these will thus be omitted.
Fig. 4 is the flowchart illustrating in detail the process in step 1004 illustrated in Fig. 3, according to the third exemplary embodiment. The flowchart illustrated in Fig. 4 is the same as the flowchart according to the first exemplary embodiment. However, since the contents of step 201 and step 202 are different from those according to the first exemplary embodiment, only step 201 and step 202 will be described below.
In step 201, the image coding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit to be coded. According to the third exemplary embodiment, the left reference pixel is calculated by the method illustrated in Fig. 13.
In step 441 illustrated in Fig. 13, the image coding apparatus determines whether the left reference processing unit belongs to the same block as the processing unit to be coded. If the left reference processing unit belongs to the same block as the processing unit to be coded (YES in step 441), the process proceeds to step 421. On the other hand, if the left reference processing unit does not belong to the same block as the processing unit to be coded (NO in step 441), the process proceeds to step 401.
In step 401, the image coding apparatus calculates the left reference pixel using the reconstructed pixel value of the left reference processing unit. In step 421, the image coding apparatus calculates the left reference pixel using the prediction value of the left reference processing unit.
In step 202 illustrated in Fig. 4, the image coding apparatus calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit to be coded. According to the third exemplary embodiment, the upper reference pixel is calculated by the method illustrated in Fig. 13.
In step 441, the image coding apparatus determines whether the upper reference processing unit belongs to the same block as the processing unit to be coded. If the upper reference processing unit belongs to the same block as the processing unit to be coded (YES in step 441), the process proceeds to step 421. On the other hand, if the upper reference processing unit does not belong to the same block as the processing unit to be coded (NO in step 441), the process proceeds to step 401.
In step 401, the image coding apparatus calculates the upper reference pixel using the reconstructed pixel value of the upper reference processing unit. In step 421, the image coding apparatus calculates the upper reference pixel using the prediction value of the upper reference processing unit.
The process will be described below with reference to Fig. 9. For example, it is assumed that the processing unit S is to be coded. In the calculation of the left reference pixel according to the third exemplary embodiment, since the left processing unit R and the processing unit S belong to different blocks, the left reference pixel is calculated using the reconstructed pixel value of the processing unit R.
On the other hand, the upper processing unit K and the processing unit S belong to the same block. Thus, in the calculation of the upper reference pixel according to the third exemplary embodiment, the prediction value of the processing unit K is used to calculate the upper reference pixel.
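A sketch of the decision of step 441 in Fig. 13 is given below; the identifiers are assumptions, with a block identifier associated with each processing unit.

def reference_source_third_embodiment(ref_block, current_block):
    # Step 441 -> step 421 when the reference processing unit is in the same block
    # as the processing unit to be coded, step 401 otherwise.
    if ref_block == current_block:
        return 'prediction value'
    return 'reconstructed pixel value'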
Since the processes performed in steps 204, 208, 226, 205, 206, and 207 are the same as those according to the first exemplary embodiment, description will be omitted.
According to the third exemplary embodiment, the reference processing unit whose prediction value is to be used is limited to the processing unit belonging to the same block as the processing unit to be coded. The processing speed can thus be increased by pipelining the process at the processing unit level while reducing the influence of lowered coding efficiency.
The reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the first exemplary embodiment.
Further, according to the present exemplary embodiment, the conventional method is replaced by the above-described method. However, the image coding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the first exemplary embodiment.
The configuration of the image coding apparatus (illustrated in Fig. 1) and the flowchart of the process performed by the image coding apparatus (illustrated in Fig. 3) according to a fourth exemplary embodiment of the present invention are similar to those according to the first exemplary embodiment. Descriptions of these will thus be omitted.
Fig. 4 is the flowchart illustrating in detail the process in step 1004 illustrated in Fig. 3, according to the fourth exemplary embodiment. The flowchart illustrated in Fig. 4 is the same as the flowchart according to the first exemplary embodiment. However, since the contents of step 201 and step 202 are different from those according to the first exemplary embodiment, only step 201 and step 202 will be described below.
In step 201, the image coding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit to be coded. According to the fourth exemplary embodiment, the left reference pixel is calculated by the method illustrated in Fig. 14.
In step 451 illustrated in Fig. 14, the image coding apparatus determines whether the left reference processing unit belongs to the same block as the processing unit to be coded, or belongs to the block in which the prediction process has been performed immediately before the processing unit to be coded. If the left reference processing unit belongs to the same block as the processing unit to be coded, or belongs to the block in which the prediction process has been performed immediately before the processing unit to be coded (YES in step 451), the process proceeds to step 421, whereas if not (NO in step 451), the process proceeds to step 401.
In step 401, the image coding apparatus calculates the left reference pixel using the reconstructed pixel value of the left reference processing unit. In step 421, the image coding apparatus calculates the left reference pixel using the prediction value of the left reference processing unit.
In step 202 illustrated in Fig. 4, the image coding apparatus calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit to be coded. According to the fourth exemplary embodiment, the upper reference pixel is calculated by the method illustrated in Fig. 14.
In step 451, the image coding apparatus determines whether the upper reference processing unit belongs to the same block as the processing unit to be coded, or belongs to the block in which the prediction process has been performed immediately before the processing unit to be coded. If the upper reference processing unit belongs to the same block as the processing unit to be coded, or belongs to the block in which the prediction process has been performed immediately before the processing unit to be coded (YES in step 451), the process proceeds to step 421, whereas if not (NO in step 451), the process proceeds to step 401.
In step 401, the image coding apparatus calculates the upper reference pixel using the reconstructed pixel value of the upper reference processing unit. In step 421, the image coding apparatus calculates the upper reference pixel using the prediction value of the upper reference processing unit.
The process will be described below with reference to Fig. 9. For example, it is assumed that the processing unit K is to be coded. In the calculation of the left reference pixel according to the fourth exemplary embodiment, the left processing unit J belongs to the block in which the prediction process has been performed immediately before the block to which the processing unit K belongs. The left reference pixel is thus calculated using the prediction value of the processing unit J. On the other hand, in the calculation of the upper reference pixel, the upper processing unit C and the processing unit K belong to different blocks. Further, the upper processing unit C does not belong to the block in which the prediction process has been performed immediately before the block to which the processing unit K belongs. According to the fourth exemplary embodiment, the reconstructed pixel of the processing unit C is thus used to calculate the upper reference pixel.
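A sketch of the decision of step 451 in Fig. 14 is given below, assuming blocks are numbered in coding order; the identifiers are assumptions.

def reference_source_fourth_embodiment(ref_block_index, current_block_index):
    # Step 451 -> step 421 when the reference processing unit belongs to the same
    # block or to the block predicted immediately before it, step 401 otherwise.
    if ref_block_index in (current_block_index, current_block_index - 1):
        return 'prediction value'
    return 'reconstructed pixel value'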
Since the processes performed in steps 204, 208, 226, 205, 206, and 207 are the same as those according to the first exemplary embodiment, description will be omitted.
According to the fourth exemplary embodiment, the reference processing unit whose prediction value is used is limited to the processing unit belonging to the same block as the processing unit to be coded, or to the block which has been predicted immediately before the block to which the processing unit to be coded belongs. The processing speed can thus be increased by pipelining the process at the processing unit level while reducing the influence of lowered coding efficiency.
The reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the first exemplary embodiment.
Further, according to the present exemplary embodiment, the conventional method is replaced by the above-described method. However, the image coding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the first exemplary embodiment.
The configuration of the image coding apparatus (illustrated in Fig. 1) and the flowchart of the process performed by the image coding apparatus (illustrated in Fig. 3) according to a fifth exemplary embodiment of the present invention are similar to those according to the first exemplary embodiment. Descriptions of these will thus be omitted.
Fig. 4 is the flowchart illustrating in detail the process in step 1004 illustrated in Fig. 3, according to the fifth exemplary embodiment. The flowchart illustrated in Fig. 4 is the same as the flowchart according to the first exemplary embodiment. However, since the contents of step 201 and step 202 are different from those according to the first exemplary embodiment, only step 201 and step 202 will be described below.
In step 201, the image coding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit to be coded. According to the fifth exemplary embodiment, the left reference pixel is calculated by the method illustrated in Fig. 15.
In step 461 illustrated in Fig. 15, the image coding apparatus determines whether a perpendicular position of the block to which the left reference processing unit belongs is the same as that of the block to which the processing unit to be coded belongs. If the perpendicular position of the block to which the left reference processing unit belongs is the same as that of the block to which the processing unit to be coded belongs (YES in step 461), the process proceeds to step 421, whereas if not (NO in step 461), the process proceeds to step 401.
In step 401, the image coding apparatus calculates the left reference pixel using the reconstructed pixel value of the left reference processing unit. In step 421, the image coding apparatus calculates the left reference pixel using the prediction value of the left reference processing unit.
In step 202 illustrated in Fig. 4, the image coding apparatus then calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit to be coded. According to the fifth exemplary embodiment, the upper reference pixel is calculated by the method illustrated in Fig. 15.
In step 461, the image coding apparatus determines whether a perpendicular position of the block to which the upper reference processing unit belongs is the same as that of the block to which the processing unit to be coded belongs. If the perpendicular position of the block to which the upper reference processing unit belongs is the same as that of the block to which the processing unit to be coded belongs (YES in step 461), the process proceeds to step 421, whereas if not (NO in step 461), the process proceeds to step 401.
In step 401, the image coding apparatus calculates the upper reference pixel using the reconstructed pixel value of the upper reference processing unit. In step 421, the image coding apparatus calculates the upper reference pixel using the prediction value of the upper reference processing unit.
The process will be described below with reference to Fig. 9. For example, it is assumed that the processing unit M is to be coded. In the calculation of the left reference pixel, the left processing unit L belongs to the block whose perpendicular position is the same as that of the block to which the processing unit M belongs. According to the fifth exemplary embodiment, the image coding apparatus thus calculates the left reference pixel using the prediction value of the processing unit L.
On the other hand, in the calculation of the upper reference pixel, the upper processing unit E belongs to the block whose perpendicular position is not the same as that of the block to which the processing unit M belongs. According to the fifth exemplary embodiment, the image coding apparatus thus uses the reconstructed pixel value of the processing unit E to calculate the upper reference pixel.
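A sketch of the decision of step 461 in Fig. 15 is given below, assuming each block carries its perpendicular (row) position; the identifiers are assumptions.

def reference_source_fifth_embodiment(ref_block_row, current_block_row):
    # Step 461 -> step 421 when the two blocks share the same perpendicular
    # position, step 401 otherwise.
    if ref_block_row == current_block_row:
        return 'prediction value'
    return 'reconstructed pixel value'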
Since the processes performed in steps 204, 208, 226, 205, 206, and 207 are the same as those according to the first exemplary embodiment, description will be omitted.
According to the fifth exemplary embodiment, the reference processing unit whose prediction value is used is limited to the processing unit belonging to the block whose perpendicular position is the same as that of the block to which the processing unit to be coded belongs. The processing speed can thus be increased by pipelining the process within one block line in the horizontal direction while reducing the influence of lowered coding efficiency.
The reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the first exemplary embodiment.
Further, according to the present exemplary embodiment, the conventional method is replaced by the above-described method. However, the image coding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the first exemplary embodiment.
The configuration of the image coding apparatus (illustrated in Fig. 1) and the flowchart of the process performed by the image coding apparatus (illustrated in Fig. 3) according to a sixth exemplary embodiment of the present invention are similar to those according to the first exemplary embodiment. Descriptions of these will thus be omitted.
Fig. 4 is the flowchart illustrating in detail the process in step 1004 illustrated in Fig. 3, according to the sixth exemplary embodiment. The flowchart illustrated in Fig. 4 is the same as the flowchart according to the first exemplary embodiment. However, since the contents of step 201 and step 202 are different from those according to the first exemplary embodiment, only step 201 and step 202 will be described below.
In step 201, the image coding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit to be coded. According to the sixth exemplary embodiment, the left reference pixel is calculated by the method illustrated in Fig. 16.
In step 471 illustrated in Fig. 16, the image coding apparatus determines whether the block to which the left reference processing unit belongs is within a predetermined number of blocks previous to the block to which the processing unit to be coded belongs. If the block to which the left reference processing unit belongs is within the predetermined number of blocks previous to the block to which the processing unit to be coded belongs (YES in step 471), the process proceeds to step 421, whereas if not (NO in step 471), the process proceeds to step 401.
In step 401, the image coding apparatus calculates the left reference pixel using the reconstructed pixel value of the left reference processing unit. In step 421, the image coding apparatus calculates the left reference pixel using the prediction value of the left reference processing unit.
In step 202 illustrated in Fig. 4, the image coding apparatus then calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit to be coded. According to the sixth exemplary embodiment, the upper reference pixel is calculated by the method illustrated in Fig. 16.
In step 471, the image coding apparatus determines whether the block to which the upper reference processing unit belongs is within the predetermined number of blocks previous to the block to which the processing unit to be coded belongs. If the block to which the upper reference processing unit belongs is within the predetermined number of blocks previous to the block to which the processing unit to be coded belongs (YES in step 471), the process proceeds to step 421, whereas if not (NO in step 471), the process proceeds to step 401.
In step 401, the image coding apparatus calculates the upper reference pixel using the reconstructed pixel value of the upper reference processing unit. In step 421, the image coding apparatus calculates the upper reference pixel using the prediction value of the upper reference processing unit.
The process will be described below with reference to Fig. 9. For example, it is assumed that the processing unit O is to be coded, and the predetermined number of blocks is set to five blocks. In such a case, in the calculation of the left reference pixel according to the sixth exemplary embodiment, the block to which the left processing unit N belongs is one block previous to the block to which the processing unit O belongs. Since one is smaller than five, i.e., the predetermined number of blocks, the image coding apparatus calculates the left reference pixel using the prediction value of the processing unit N.
On the other hand, in the calculation of the upper reference pixel according to the sixth exemplary embodiment, the block to which the upper processing unit G belongs is four blocks previous to the block to which the processing unit O belongs. Since four is smaller than five, i.e., the predetermined number of blocks, the image coding apparatus calculates the upper reference pixel using the prediction value of the processing unit G.
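A sketch of the decision of step 471 in Fig. 16 is given below, assuming blocks are numbered in coding order and using the predetermined number of five blocks from the example; the identifiers are assumptions.

def reference_source_sixth_embodiment(ref_block_index, current_block_index, max_blocks=5):
    # Step 471 -> step 421 when the reference block is within the predetermined
    # number of blocks previous to the current block, step 401 otherwise.
    distance = current_block_index - ref_block_index
    if 0 <= distance < max_blocks:
        return 'prediction value'
    return 'reconstructed pixel value'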
Since the processes performed in steps 204, 208, 226, 205, 206, and 207 are the same as those according to the first exemplary embodiment, description will be omitted.
According to the sixth exemplary embodiment, the reference processing unit whose prediction value is used is limited to the processing unit belonging to the block which is within a predetermined number of blocks previous to the block to which the processing unit to be coded belongs. The processing speed can thus be increased by pipelining the process for each of a plurality of blocks while reducing the influence of lowered coding efficiency.
The reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the first exemplary embodiment.
Further, according to the present exemplary embodiment, the conventional method is replaced by the above-described method. However, the image coding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the first exemplary embodiment.
The configuration of the image coding apparatus (illustrated in Fig. 1) and the flowchart of the process performed by the image coding apparatus (illustrated in Fig. 3) according to a seventh exemplary embodiment of the present invention are similar to those according to the first exemplary embodiment. Descriptions of these will thus be omitted.
Fig. 4 is the flowchart illustrating in detail the process in step 1004 illustrated in Fig. 3, according to the seventh exemplary embodiment. The flowchart illustrated in Fig. 4 is the same as the flowchart according to the first exemplary embodiment. However, since the contents of step 201 and step 202 are different from those according to the first exemplary embodiment, only step 201 and step 202 will be described below.
In step 201, the image coding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit to be coded. According to the seventh exemplary embodiment, the left reference pixel is calculated by the method illustrated in Fig. 17.
In step 481 illustrated in Fig. 17, the image coding apparatus determines whether the size of the processing unit to be coded is smaller than a predetermined processing unit size. If the processing unit to be coded is smaller than the predetermined size (YES in step 481), the process proceeds to step 421. If the processing unit to be coded is not smaller than the predetermined size (NO in step 481), the process proceeds to step 401.
In step 401, the image coding apparatus calculates the left reference pixel using the reconstructed pixel value of the left reference processing unit. In step 421, the image coding apparatus calculates the left reference pixel using the prediction value of the left reference processing unit.
In step 202 illustrated in Fig. 4, the image coding apparatus calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit to be coded. According to the seventh exemplary embodiment, the upper reference pixel is calculated by the method illustrated in Fig. 17.
In step 481, the image coding apparatus determines whether the size of the processing unit to be coded is smaller than the predetermined processing unit size. If the processing unit to be coded is smaller than the predetermined size (YES in step 481), the process proceeds to step 421. If the processing unit to be coded is not smaller than the predetermined size (NO in step 481), the process proceeds to step 401.
In step 401, the image coding apparatus calculates the upper reference pixel using the reconstructed pixel value of the upper reference processing unit. In step 421, the image coding apparatus calculates the upper reference pixel using the prediction value of the upper reference processing unit.
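A sketch of the decision of step 481 in Fig. 17 is given below; the threshold of 8 samples is an assumed example, since the embodiment leaves the predetermined size open, and the identifiers are assumptions.

def reference_source_seventh_embodiment(unit_size, predetermined_size=8):
    # Step 481 -> step 421 for processing units smaller than the predetermined
    # size, step 401 otherwise.
    if unit_size < predetermined_size:
        return 'prediction value'
    return 'reconstructed pixel value'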
Since the processes performed in steps 204, 208, 226, 205, 206, and 207 are the same as those according to the first exemplary embodiment, description will be omitted.
According to the seventh exemplary embodiment, whether the prediction value of the reference processing unit is used depends on the size of the processing unit to be coded. The processing speed can thus be increased by applying the pipelining differently according to the size of the processing unit to be coded, while reducing the influence of lowered coding efficiency.
The reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the first exemplary embodiment.
Further, according to the present exemplary embodiment, the conventional method is replaced by the above-described method. However, the image coding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the first exemplary embodiment.
Furthermore, according to the present exemplary embodiment, the predetermined size of the processing unit may be a fixed value, or may be multiplexed in the coded data as side information. Moreover, the predetermined size of the processing unit and the block size may be interdependent. For example, if the size of the processing unit is smaller than or equal to half the size of the block, the image coding apparatus calculates the reference pixel using the prediction value of the reference processing unit; in other cases, the image coding apparatus calculates the reference pixel using the reconstructed pixel value of the reference processing unit.
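For the block-size-dependent variant mentioned above, the same decision could be sketched as follows (illustrative only; the identifiers are assumptions).

def reference_source_relative_to_block(unit_size, block_size):
    # Use the prediction value when the processing unit is at most half the block
    # size, and the reconstructed pixel value otherwise.
    if unit_size <= block_size // 2:
        return 'prediction value'
    return 'reconstructed pixel value'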
Fig. 2 is a block diagram illustrating a configuration of an image decoding apparatus according to an eighth exemplary embodiment of the present invention. According to the present exemplary embodiment, decoding of the coded data generated according to the first exemplary embodiment will be described below as an example.
Referring to Fig. 2, a code data input unit 101 inputs the code data. A header information decoding unit 102 decodes information such as the configuration of the processing unit or the intra prediction mode, from the code data. An entropy decoding unit 103 decodes the quantization coefficient by processing unit.
A reconstructed pixel value decoding unit 104 performs inverse quantization and inverse orthogonal transform on the quantization coefficient acquired by the entropy decoding unit 103. The reconstructed pixel value decoding unit 104 then adds the result to the intra prediction value calculated by an intra prediction value calculation unit 136 to be described below, and calculates the reconstructed pixel value. A reconstructed pixel value storing unit 105 stores the calculated reconstructed pixel value. The intra prediction value calculation unit 136 calculates the intra prediction value using the intra prediction mode decoded by the header information decoding unit 102. An intra prediction value storing unit 121 stores the calculated intra prediction value.
The process of decoding the image performed by the above-described image decoding apparatus will be described below. The code data input unit 101 inputs the coded data corresponding to one frame, and outputs the coded data corresponding to each block to the header information decoding unit 102. The header information decoding unit 102 receives the coded data corresponding to each block and decodes the information such as the configuration of the processing unit and the intra prediction mode. The header information decoding unit 102 then outputs the intra prediction mode information to the intra prediction value calculation unit 136. Further, the header information decoding unit 102 extracts a portion corresponding to the quantization coefficient from the coded data for each block, and outputs the extracted portion to the entropy decoding unit 103.
The entropy decoding unit 103 decodes the quantization coefficient from the extracted coded data, and outputs the quantization coefficient to the reconstructed pixel value decoding unit 104. The reconstructed pixel value decoding unit 104 performs inverse quantization and inverse transform on the input quantization coefficient and calculates the residual coefficient. The reconstructed pixel value decoding unit 104 then adds the prediction value input from the intra prediction value calculation unit 136, which is described below, to the calculated residual coefficient, and calculates the reconstructed pixel value. The reconstructed pixel value decoding unit 104 outputs the calculated reconstructed pixel value to the reconstructed pixel value storing unit 105.
The reconstructed pixel value storing unit 105 stores the calculated reconstructed pixel value and outputs it to the intra prediction value calculation unit 136 as necessary. Further, the reconstructed pixel value storing unit 105 outputs the reconstructed pixel value to the outside as a portion of the decoded image. The intra prediction value calculation unit 136 calculates as appropriate the prediction value from the surrounding prediction values stored in the intra prediction value storing unit 121 and the surrounding pixel values stored in the reconstructed pixel value storing unit 105, using the intra prediction mode output from the header information decoding unit 102. The intra prediction value calculation unit 136 then outputs the calculated prediction value to the reconstructed pixel value decoding unit 104 and the intra prediction value storing unit 121. The intra prediction value storing unit 121 stores the prediction value, and outputs the prediction value to the intra prediction value calculation unit 136 as necessary.
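A minimal sketch of this per-block data flow is given below. It is illustrative only: decode_header, entropy_decode, inverse_transform, and predict are placeholder callables standing for the operations of the units 102, 103, 104, and 136, and are not actual interfaces of the apparatus.

def decode_block(block_data, decode_header, entropy_decode, inverse_transform,
                 predict, pred_store, recon_store):
    # Unit 102: per-unit header information (position, intra prediction mode,
    # quantization coefficient portion) extracted from the block's coded data.
    for unit_pos, intra_mode, coeff_data in decode_header(block_data):
        residual = inverse_transform(entropy_decode(coeff_data))   # units 103 and 104
        prediction = predict(intra_mode, pred_store, recon_store)  # unit 136
        pred_store[unit_pos] = prediction                          # unit 121
        recon_store[unit_pos] = prediction + residual              # units 104 and 105
    return recon_store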
The overall flow of the decoding process will be briefly described below with reference to the drawings. Fig. 6 is a flowchart illustrating the image decoding process performed in the image decoding apparatus according to the eighth exemplary embodiment.
In step 1101, the image decoding apparatus decodes from the input coded data the information corresponding to each block. In step 1102, the image decoding apparatus decodes the intra prediction mode of each processing unit in which prediction is to be performed, existing in each block.
In step 1103, the image decoding apparatus performs entropy decoding on the information and the coefficient in each processing unit. In step 1104, the image decoding apparatus performs inverse quantization and inverse transform on the decoded prediction error, and calculates the coefficient value. In step 1105, the image decoding apparatus calculates the prediction value from the result acquired in step 1102, adds the calculated prediction value to the coefficient value acquired in step 1104, and calculates the reconstructed pixel.
In step 1106, the image decoding apparatus determines whether all of the processing units in the block have been decoded. If decoding has been completed (COMPLETED in step 1106), the process proceeds to step 1107. If decoding has not been completed (NOT COMPLETED in step 1106), the process returns to step 1102, and the image decoding apparatus performs the process on the next processing unit.
In step 1107, the image decoding apparatus determines whether all of the blocks in the frame have been decoded. If decoding has been completed (COMPLETED in step 1107), the image decoding apparatus stops all operations, and ends the process. If decoding has not been completed (NOT COMPLETED in step 1107), the process returns to step 1101, and the image decoding apparatus performs the process on the next block.
The process performed in step 1105 illustrated in Fig. 6 will be described in detail below with reference to the drawings. Fig. 7 is a flowchart illustrating in detail the process of calculating the reconstructed pixel performed in step 1105 according to the eighth exemplary embodiment.
In step 301, the image decoding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded. According to the eighth exemplary embodiment, the image decoding apparatus calculates the left reference pixel by the method illustrated in Fig. 11. In step 421 illustrated in Fig. 11, the image decoding apparatus calculates the reference pixel from the prediction value of the left processing unit.
In step 302 illustrated in Fig. 7, the image decoding apparatus calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded. According to the eighth exemplary embodiment, the image decoding apparatus calculates the upper reference pixel by the method illustrated in Fig. 11, similarly as in step 301.
The process will be described below with reference to Fig. 9. For example, it is assumed that the processing unit T has been coded. According to the eighth exemplary embodiment, the prediction values of the left processing unit S and the upper processing unit L are used as the reference pixels for calculating the prediction value of the processing unit T.
In step 304, the image decoding apparatus calculates the prediction value using the calculated reference pixels. In step 326, the image decoding apparatus stores the calculated prediction value so that it may be referred to by processing units to be processed in the future. In step 305, the image decoding apparatus acquires the inverse transform coefficient calculated in step 1104 illustrated in Fig. 6, adds the acquired inverse transform coefficient to the prediction value, and calculates the reconstructed pixel value. In step 306, the image decoding apparatus stores the calculated reconstructed pixel value so that it may be referred to by processing units to be processed in the future, and then ends the process.
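A minimal sketch of steps 305, 306, and 326, mirroring the encoder-side sketch of the first exemplary embodiment, is given below; the names are assumptions introduced for the example.

import numpy as np

def reconstruct_processing_unit(prediction, inverse_coeff, pred_store, recon_store, unit_pos):
    # Step 326: store the prediction value for processing units decoded later.
    pred_store[unit_pos] = prediction
    # Step 305: add the inverse transform coefficient to the prediction value.
    reconstructed = np.clip(prediction.astype(np.int32) + inverse_coeff, 0, 255)
    # Step 306: store the reconstructed pixel value for processing units decoded later.
    recon_store[unit_pos] = reconstructed
    return reconstructed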
According to the above-described configuration and process, the coded data generated by the high-speed coding method realized by pipelining the process according to the first exemplary embodiment can be decoded.
According to the present exemplary embodiment, the left processing unit and the upper processing unit are referred to as illustrated in Fig. 7, in calculating the reference pixels to be used in performing intra prediction. However, the present exemplary embodiment is not limited to this configuration, and may use the upper left processing unit to calculate the reference pixel, as illustrated in Fig. 8 (e.g., the processing unit K illustrated in Fig. 9, when the processing unit T has been coded). Similarly, the upper right processing unit or the lower left processing unit may also be used in calculating the reference pixels (e.g., the processing unit E or S when the processing unit L has been coded).
Further, according to the present exemplary embodiment, the frames employ only the intra prediction process. However, the present exemplary embodiment may also be applied to the frames that can employ the inter prediction process.
Furthermore, according to the present exemplary embodiment, the functions of each or all of the units may be implemented in software and executed by a processing unit such as the CPU.
Moreover, according to the present exemplary embodiment, the coded data may include side information which indicates, in units of frames or slices, whether the above-described method has been employed before the block is coded. Accordingly, the present exemplary embodiment may be configured to switch between the conventional method and the above-described method, so that detailed control can be performed. For example, if high speed is demanded, the above-described method according to the present exemplary embodiment is used. However, if compatibility with the conventional method is preferred over high speed, the above-described method is not used.
The configuration of the image decoding apparatus (illustrated in Fig. 2) and the flowchart of the process performed by the image decoding apparatus (illustrated in Fig. 6) according to a ninth exemplary embodiment of the present invention are similar to those according to the eighth exemplary embodiment. Descriptions of these will thus be omitted. According to the present exemplary embodiment, decoding of the coded data generated in the second exemplary embodiment will be described below as an example.
Fig. 7 is the flowchart illustrating in detail the process in step 1105 illustrated in Fig. 6, according to the ninth exemplary embodiment. The flowchart illustrated in Fig. 7 is the same as the flowchart according to the eighth exemplary embodiment. However, since the contents of step 301 and step 302 are different from those according to the eighth exemplary embodiment, only step 301 and step 302 will be described below.
In step 301, the image decoding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded. According to the ninth exemplary embodiment, the left reference pixel is calculated by the method illustrated in Fig. 12.
In step 431 illustrated in Fig. 12, the image decoding apparatus determines whether the left reference processing unit is the processing unit on which the prediction process has been performed immediately before the processing unit that has been coded. If the left reference processing unit is the processing unit on which the prediction process has been performed immediately before the processing unit that has been coded (YES in step 431), the process proceeds to step 421, whereas if not (NO in step 431), the process proceeds to step 401.
In step 401, the image decoding apparatus calculates the left reference pixel using the reconstructed pixel value of the left reference processing unit. In step 421, the image decoding apparatus calculates the left reference pixel using the prediction value of the left reference processing unit.
In step 302 illustrated in Fig. 7, the image decoding apparatus then calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded. According to the ninth exemplary embodiment, the upper reference pixel is calculated by the method illustrated in Fig. 12.
In step 431, the image decoding apparatus determines whether the upper reference processing unit is the processing unit on which the prediction process has been performed immediately before the processing unit that has been coded. If the upper reference processing unit is the processing unit on which the prediction process has been performed immediately before the processing unit that has been coded (YES in step 431), the process proceeds to step 421, whereas if not (NO in step 431), the process proceeds to step 401.
In step 401, the image decoding apparatus calculates the upper reference pixel using the reconstructed pixel value of the upper reference processing unit. In step 421, the image decoding apparatus calculates the upper reference pixel using the prediction value of the upper reference processing unit.
The process will be described below with reference to Fig. 9. For example, it is assumed that the processing unit T has been coded, and the prediction process is performed in the order of the processing units K, L, S, and T in the block to which the processing unit T belongs. In calculating the left reference pixel, according to the ninth exemplary embodiment, the left processing unit S is the processing unit on which the prediction process has been performed immediately before the processing unit T, so that the image decoding apparatus calculates the left reference pixel using the prediction value of the processing unit S.
On the other hand, in calculating the upper reference pixel, the upper processing unit L is not the processing unit on which the prediction process has been performed immediately before the processing unit T. According to the ninth exemplary embodiment, the image decoding apparatus thus calculates the upper reference pixel using the reconstructed pixel value of the processing unit L.
Since the processes performed in steps 304, 326, 305, and 306 are the same as those according to the eighth exemplary embodiment, description will be omitted.
According to the above-described configuration and process, the coded data generated by the high-speed coding method realized by pipelining the process according to the second exemplary embodiment can be decoded.
The reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the eighth exemplary embodiment.
Further, according to the present exemplary embodiment, the conventional method is replaced by the above-described method. However, the image decoding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the eighth exemplary embodiment.
The configuration of the image decoding apparatus (illustrated in Fig. 2) and the flowchart of the process performed by the image decoding apparatus (illustrated in Fig. 6) according to a tenth exemplary embodiment of the present invention are similar to those according to the eighth exemplary embodiment. Descriptions of these will thus be omitted. According to the present exemplary embodiment, decoding of the coded data generated in the third exemplary embodiment will be described below as an example.
Fig. 7 is the flowchart illustrating in detail the process in step 1105 illustrated in Fig. 6, according to the tenth exemplary embodiment. The flowchart illustrated in Fig. 7 is the same as the flowchart according to the eighth exemplary embodiment. However, since the contents of step 301 and step 302 are different from those according to the eighth exemplary embodiment, only step 301 and step 302 will be described below.
In step 301, the image decoding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded. According to the tenth exemplary embodiment, the left reference pixel is calculated by the method illustrated in Fig. 13.
In step 441 illustrated in Fig. 13, the image decoding apparatus determines whether the left reference processing unit belongs to the same block as the processing unit that has been coded. If the left reference processing unit belongs to the same block as the processing unit that has been coded (YES in step 441), the process proceeds to step 421. On the other hand, if the left reference processing unit does not belong to the same block as the processing unit that has been coded (NO in step 441), the process proceeds to step 401.
In step 401, the image decoding apparatus calculates the left reference pixel using the reconstructed pixel value of the left reference processing unit. In step 421, the image decoding apparatus calculates the left reference pixel using the prediction value of the left reference processing unit.
In step 302 illustrated in Fig. 7, the image decoding apparatus then calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded. According to the tenth exemplary embodiment, the upper reference pixel is calculated by the method illustrated in Fig. 13.
In step 441, the image decoding apparatus determines whether the upper reference processing unit belongs to the same block as the processing unit that has been coded. If the upper reference processing unit belongs to the same block as the processing unit that has been coded (YES in step 441), the process proceeds to step 421. On the other hand, if the upper reference processing unit does not belong to the same block as the processing unit that has been coded (NO in step 441), the process proceeds to step 401.
In step 401, the image decoding apparatus calculates the upper reference pixel using the reconstructed pixel value of the upper reference processing unit. In step 421, the image decoding apparatus calculates the upper reference pixel using the prediction value of the upper reference processing unit.
The process will be described below with reference to Fig. 9. For example, it is assumed that the processing unit S has been coded. In calculating the left reference pixel, according to the tenth exemplary embodiment, since the left processing unit R and the processing unit S belong to different blocks, the left reference pixel is calculated using the reconstructed pixel value of the processing unit R.
On the other hand, in calculating the upper reference pixel, the upper processing unit K and the processing unit S belong to the same block. According to the tenth exemplary embodiment, the prediction value of the processing unit K is thus used to calculate the upper reference pixel.
Since the processes performed in steps 304, 326, 305, and 306 are the same as those according to the eighth exemplary embodiment, description will be omitted.
According to the above-described configuration and process, the coded data generated using the high-speed coding method realized by pipelining the process according to the third exemplary embodiment can be decoded.
The reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the eighth exemplary embodiment.
Further, according to the present exemplary embodiment, the conventional method is replaced by the above-described method. However, the image decoding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the eighth exemplary embodiment.
The configuration of the image decoding apparatus (illustrated in Fig. 2) and the flowchart of the process performed by the image decoding apparatus (illustrated in Fig. 6) according to an eleventh exemplary embodiment of the present invention are similar to those according to the eighth exemplary embodiment. Descriptions of these will thus be omitted. According to the present exemplary embodiment, decoding of the coded data generated in the fourth exemplary embodiment will be described below as an example.
Fig. 7 is the flowchart illustrating in detail the process in step 1105 illustrated in Fig. 6, according to the eleventh exemplary embodiment. The flowchart illustrated in Fig. 7 is the same as the flowchart according to the eighth exemplary embodiment. However, since the contents of step 301 and step 302 are different from those according to the eighth exemplary embodiment, only step 301 and step 302 will be described below.
In step 301, the image decoding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded. According to the eleventh exemplary embodiment, the left reference pixel is calculated by the method illustrated in Fig. 14.
In step 451 illustrated in Fig. 14, the image decoding apparatus determines whether the left reference processing unit belongs to the same block as the processing unit that has been coded, or belongs to the block in which the prediction process has been performed immediately before the processing unit that has been coded. If the left reference processing unit belongs to the same block as the processing unit that has been coded, or belongs to the block in which the prediction process has been performed immediately before the processing unit that has been coded (YES in step 451), the process proceeds to step 421, whereas if not (NO in step 451), the process proceeds to step 401.
In step 401, the image decoding apparatus calculates the left reference pixel using the reconstructed pixel value of the left reference processing unit. In step 421, the image decoding apparatus calculates the left reference pixel using the prediction value of the left reference processing unit.
In step 302 illustrated in Fig. 7, the image decoding apparatus then calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded. According to the eleventh exemplary embodiment, the upper reference pixel is calculated by the method illustrated in Fig. 14.
In step 451, the image decoding apparatus determines whether the upper reference processing unit belongs to the same block as the processing unit that has been coded, or belongs to the block in which the prediction process has been performed immediately before the processing unit that has been coded. If the upper reference processing unit belongs to the same block as the processing unit that has been coded, or belongs to the block in which the prediction process has been performed immediately before the processing unit that has been coded (YES in step 451), the process proceeds to step 421, whereas if not (NO in step 451), the process proceeds to step 401.
In step 401, the image decoding apparatus calculates the upper reference pixel using the reconstructed pixel value of the upper reference processing unit. In step 421, the image decoding apparatus calculates the upper reference pixel using the prediction value of the upper reference processing unit.
The process will be described below with reference to Fig. 9. For example, it is assumed that the processing unit K has been coded. In the calculation of the left reference pixel according to the eleventh exemplary embodiment, the left processing unit J belongs to the block in which the prediction process has been performed immediately before the block to which the processing unit K belongs. The left reference pixel is thus calculated using the prediction value of the processing unit J.
On the other hand, in the calculation of the upper reference pixel, the upper processing unit C and the processing unit K belong to different blocks. Further, the upper processing unit C does not belong to the block in which the prediction process has been performed immediately before the block to which the processing unit K belongs. According to the eleventh exemplary embodiment, the reconstructed pixel of the processing unit C is thus used to calculate the upper reference pixel.
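The step 451 decision described above reduces to a simple test on the positions of the two blocks. The following sketch illustrates that test under stated assumptions: the dictionary fields (id, block_index), the function name, and the flat prediction/reconstruction look-ups are hypothetical and are introduced only for explanation.

```python
# Illustrative sketch of the step 451/401/421 decision (eleventh exemplary
# embodiment). The field names 'id' and 'block_index' and the look-up
# dictionaries are assumptions, not part of the coded-data syntax.

def reference_pixels_step451(ref_unit, current_unit, prediction, reconstruction):
    """Return the pixels used as reference for one neighbouring unit."""
    same_block = ref_unit["block_index"] == current_unit["block_index"]
    # A raster order of blocks is assumed, so "the block in which the
    # prediction process has been performed immediately before" is
    # block_index - 1.
    previous_block = ref_unit["block_index"] == current_unit["block_index"] - 1
    if same_block or previous_block:
        # Step 421: reconstructed pixels may not be ready in a pipelined
        # decoder, so the intra prediction value is used instead.
        return prediction[ref_unit["id"]]
    # Step 401: reconstructed pixels of the reference unit are available.
    return reconstruction[ref_unit["id"]]
```

Applied to the example above, the same call covers both directions: for the left reference of the processing unit K the previous-block branch is taken, while for the upper reference the reconstructed pixels of the processing unit C are returned.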
Since the processes performed in steps 304, 326, 305, and 306 are the same as those according to the eighth exemplary embodiment, description will be omitted.
According to the above-described configuration and process, the coded data generated using the high-speed coding method realized by pipelining the process according to the fourth exemplary embodiment can be decoded.
The reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the eighth exemplary embodiment.
Further, according to the present exemplary embodiment, the conventional method is replaced by the above-described method. However, the image decoding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the eighth exemplary embodiment.
The configuration of the image decoding apparatus (illustrated in Fig. 2) and the flowchart of the process performed by the image decoding apparatus (illustrated in Fig. 6) according to a twelfth exemplary embodiment of the present invention are similar to those according to the eighth exemplary embodiment. Description of these will thus be omitted. According to the present exemplary embodiment, decoding of the coded data generated in the fifth exemplary embodiment will be described below as an example.
Fig. 7 is the flowchart illustrating in detail the process in step 1105 illustrated in Fig. 6, according to the twelfth exemplary embodiment. The flowchart illustrated in Fig. 7 is the same as the flowchart according to the eighth exemplary embodiment. However, since the contents of step 301 and step 302 are different from those according to the eighth exemplary embodiment, only step 301 and step 302 will be described below.
In step 301, the image decoding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded. According to the twelfth exemplary embodiment, the left reference pixel is calculated by the method illustrated in Fig. 15.
In step 461 illustrated in Fig. 15, the image decoding apparatus determines whether the perpendicular position of the block to which the left reference processing unit belongs is the same as that of the block to which the processing unit that has been coded belongs. If the perpendicular position of the block to which the left reference processing unit belongs is the same as that of the block to which the processing unit that has been coded belongs (YES in step 461), the process proceeds to step 421, whereas if not (NO in step 461), the process proceeds to step 401.
In step 401, the image decoding apparatus calculates the left reference pixel using the reconstructed pixel value of the left reference processing unit. In step 421, the image decoding apparatus calculates the left reference pixel using the prediction value of the left reference processing unit.
In step 302 illustrated in Fig. 7, the image decoding apparatus then calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded. According to the twelfth exemplary embodiment, the upper reference pixel is calculated by the method illustrated in Fig. 15.
In step 461, the image decoding apparatus determines whether the perpendicular position of the block to which the upper reference processing unit belongs is the same as that of the block to which the processing unit that has been coded belongs. If the perpendicular position of the block to which the upper reference processing unit belongs is the same as that of the block to which the processing unit that has been coded belongs (YES in step 461), the process proceeds to step 421, whereas if not (NO in step 461), the process proceeds to step 401.
In step 401, the image decoding apparatus calculates the upper reference pixel using the reconstructed pixel value of the upper reference processing unit. In step 421, the image decoding apparatus calculates the upper reference pixel using the prediction value of the upper reference processing unit.
The process will be described below with reference to Fig. 9. For example, it is assumed that the processing unit M has been coded. In the calculation of the left reference pixel, the left processing unit L belongs to the block whose perpendicular position is the same as that of the block to which the processing unit M belongs. According to the twelfth exemplary embodiment, the image decoding apparatus thus calculates the left reference pixel using the prediction value of the processing unit L.
On the other hand, in the calculation of the upper reference pixel, the upper processing unit E belongs to the block whose perpendicular position is not the same as that of the block to which the processing unit M belongs. According to the twelfth exemplary embodiment, the image decoding apparatus thus uses the reconstructed pixel of the processing unit E to calculate the upper reference pixel.
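The step 461 test differs from the previous one only in the quantity compared: the perpendicular (vertical) position of the containing blocks. A minimal sketch follows, reusing the hypothetical unit representation of the earlier sketch with an assumed block_row field.

```python
# Illustrative sketch of the step 461 decision (twelfth exemplary
# embodiment). 'block_row' is an assumed field giving the perpendicular
# (vertical) position of the block containing a processing unit.

def reference_pixels_step461(ref_unit, current_unit, prediction, reconstruction):
    if ref_unit["block_row"] == current_unit["block_row"]:
        # Step 421: blocks in the same row may still be in the pipeline,
        # so the intra prediction value of the reference unit is used.
        return prediction[ref_unit["id"]]
    # Step 401: the row above has been fully reconstructed; use its pixels.
    return reconstruction[ref_unit["id"]]
```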
Since the processes performed in steps 304, 326, 305, and 306 are the same as those according to the eighth exemplary embodiment, description will be omitted.
According to the above-described configuration and process, the coded data generated using the high-speed coding method realized by pipelining the process according to the fifth exemplary embodiment can be decoded.
The reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the eighth exemplary embodiment.
Further, according to the present exemplary embodiment, the conventional method is replaced by the above-described method. However, the image decoding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the eighth exemplary embodiment.
The configuration of the image decoding apparatus (illustrated in Fig. 2) and the flowchart of the process performed by the image decoding apparatus (illustrated in Fig. 6) according to a thirteenth exemplary embodiment of the present invention are similar to those according to the eighth exemplary embodiment. Description of these will thus be omitted. According to the present exemplary embodiment, decoding of the coded data generated in the sixth exemplary embodiment will be described below as an example.
Fig. 7 is the flowchart illustrating in detail the process in step 1105 illustrated in Fig. 6, according to the thirteenth exemplary embodiment. The flowchart illustrated in Fig. 7 is the same as the flowchart according to the eighth exemplary embodiment. However, since the contents of step 301 and step 302 are different from those according to the eighth exemplary embodiment, only step 301 and step 302 will be described below.
In step 301, the image decoding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded. According to the thirteenth exemplary embodiment, the left reference pixel is calculated by the method illustrated in Fig. 16.
In step 471 illustrated in Fig. 16, the image decoding apparatus determines whether the block to which the left reference processing unit belongs is within a predetermined number of blocks previous to the block to which the processing unit that has been coded belongs. If the block to which the left reference processing unit belongs is within the predetermined number of blocks previous to the block to which the processing unit that has been coded belongs (YES in step 471), the process proceeds to step 421, whereas if not (NO in step 471), the process proceeds to step 401.
In step 401, the image decoding apparatus calculates the left reference pixel using the reconstructed pixel value of the left reference processing unit. In step 421, the image decoding apparatus calculates the left reference pixel using the prediction value of the left reference processing unit.
In step 302 illustrated in Fig. 7, the image decoding apparatus then calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded. According to the thirteenth exemplary embodiment, the upper reference pixel is calculated by the method illustrated in Fig. 16.
In step 471, the image decoding apparatus determines whether the block to which the upper reference processing unit belongs is within the predetermined number of blocks previous to the block to which the processing unit that has been coded belongs. If the block to which the upper reference processing unit belongs is within the predetermined number of blocks previous to the block to which the processing unit that has been coded belongs (YES in step 471), the process proceeds to step 421, whereas if not (NO in step 471), the process proceeds to step 401.
In step 401, the image decoding apparatus calculates the upper reference pixel using the reconstructed pixel value of the upper reference processing unit. In step 421, the image decoding apparatus calculates the upper reference pixel using the prediction value of the upper reference processing unit.
The process will be described below with reference to Fig. 9. For example, it is assumed that the processing unit O has been coded, and the predetermined number of blocks is set to five blocks. In such a case, in the calculation of the left reference pixel, the block to which the left processing unit N belongs is one block previous to the block to which the processing unit O belongs. According to the thirteenth exemplary embodiment, since one is smaller than five, i.e., the predetermined number of blocks, the image decoding apparatus calculates the left reference pixel using the prediction value of the processing unit N.
On the other hand, in the calculation of the upper reference pixel, the block to which the upper processing unit G belongs is four blocks previous to the block to which the processing unit O belongs. According to the thirteenth exemplary embodiment, since four is smaller than five, i.e., the predetermined number of blocks, the image decoding apparatus calculates the upper reference pixel using the prediction value of the processing unit G.
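The step 471 test generalizes the distance criterion to a predetermined number of blocks. A hedged sketch follows; the parameter name max_blocks and the strict "smaller than" comparison simply mirror the worked example above and are not mandated by the specification.

```python
# Illustrative sketch of the step 471 decision (thirteenth exemplary
# embodiment). 'max_blocks' stands for the predetermined number of blocks;
# the default of 5 only mirrors the worked example above.

def reference_pixels_step471(ref_unit, current_unit, prediction, reconstruction,
                             max_blocks=5):
    distance = current_unit["block_index"] - ref_unit["block_index"]
    # The worked example reads "within" as strictly smaller than the
    # predetermined number; the same-block case (distance 0) is also routed
    # to step 421 here, which is an assumption.
    if distance < max_blocks:
        # Step 421: the reference block is too recent to be reconstructed.
        return prediction[ref_unit["id"]]
    # Step 401: the reference block has been reconstructed; use its pixels.
    return reconstruction[ref_unit["id"]]
```

For the example above, both the left reference of the processing unit O (distance one) and its upper reference (distance four) take the prediction-value branch.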
Since the processes performed in steps 304, 326, 305, and 306 are the same as those according to the eighth exemplary embodiment, description will be omitted.
According to the above-described configuration and process, the coded data generated using the high-speed coding method realized by pipelining the process according to the sixth exemplary embodiment can be decoded.
The reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the eighth exemplary embodiment.
Further, according to the present exemplary embodiment, the conventional method is replaced by the above-described method. However, the image decoding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the eighth exemplary embodiment.
The configuration of the image decoding apparatus (illustrated in Fig. 2) and the flowchart of the process performed by the image decoding apparatus (illustrated in Fig. 6) according to a fourteenth exemplary embodiment of the present invention are similar to those according to the eighth exemplary embodiment. Description of these will thus be omitted. According to the present exemplary embodiment, decoding of the coded data generated in the seventh exemplary embodiment will be described below as an example.
Fig. 7 is the flowchart illustrating in detail the process in step 1105 illustrated in Fig. 6, according to the fourteenth exemplary embodiment. The flowchart illustrated in Fig. 7 is the same as the flowchart according to the eighth exemplary embodiment. However, since the contents of step 301 and step 302 are different from those according to the eighth exemplary embodiment, only step 301 and step 302 will be described below.
In step 301, the image decoding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded. According to the fourteenth exemplary embodiment, the left reference pixel is calculated by the method illustrated in Fig. 17.
In step 481 illustrated in Fig. 17, the image decoding apparatus determines whether the size of the processing unit that has been coded is smaller than a predetermined size of the processing unit. If the processing unit that has been coded is smaller than the predetermined size of the processing unit (YES in step 481), the process proceeds to step 421. If the processing unit that has been coded is not smaller than the predetermined size of the processing unit (NO in step 481), the process proceeds to step 401.
In step 401, the image decoding apparatus calculates the left reference pixel using the reconstructed pixel value of the left reference processing unit. In step 421, the image decoding apparatus calculates the left reference pixel using the prediction value of the left reference processing unit.
In step 302 illustrated in Fig. 7, the image decoding apparatus then calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit that has been coded. According to the fourteenth exemplary embodiment, the upper reference pixel is calculated by the method illustrated in Fig. 17.
In step 481, the image decoding apparatus determines whether the size of the processing unit that has been coded is smaller than the predetermined size of the processing unit. If the processing unit that has been coded is smaller than the predetermined size of the processing unit (YES in step 481), the process proceeds to step 421. If the processing unit that has been coded is not smaller than the predetermined size of the processing unit (NO in step 481), the process proceeds to step 401.
In step 401, the image decoding apparatus calculates the upper reference pixel using the reconstructed pixel value of the upper reference processing unit. In step 421, the image decoding apparatus calculates the upper reference pixel using the prediction value of the upper reference processing unit.
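The step 481 test depends only on the size of the processing unit being decoded, not on the position of the reference unit. A minimal sketch follows; the field name size, the parameter name predetermined_size, and its default value are placeholders, since the predetermined size may be fixed or carried as side information.

```python
# Illustrative sketch of the step 481 decision (fourteenth exemplary
# embodiment). 'size' and 'predetermined_size' are assumed names; the
# default value of 16 is a placeholder, not a value from the specification.

def reference_pixels_step481(ref_unit, current_unit, prediction, reconstruction,
                             predetermined_size=16):
    if current_unit["size"] < predetermined_size:
        # Step 421: small processing units are the ones pipelined tightly,
        # so only the intra prediction value of the neighbour is relied on.
        return prediction[ref_unit["id"]]
    # Step 401: for larger units the reconstructed pixels are used.
    return reconstruction[ref_unit["id"]]
```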
Since the processes performed in steps 304, 326, 305, and 306 are the same as those according to the eighth exemplary embodiment, description will be omitted.
According to the above-described configuration and process, the coded data generated by the high-speed coding method realized by pipelining the process according to the seventh exemplary embodiment can be decoded.
The reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the eighth exemplary embodiment.
Further, according to the present exemplary embodiment, the conventional method is replaced by the above-described method. However, the image decoding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the eighth exemplary embodiment.
The configuration of the image coding apparatus (illustrated in Fig. 1) and the flowchart of the process performed by the image coding apparatus (illustrated in Fig. 3) according to a fifteenth exemplary embodiment of the present invention are similar to those according to the first exemplary embodiment. Description of these will thus be omitted.
Fig. 20 is a flowchart illustrating in detail the process performed in step 1004 illustrated in Fig. 3, according to the fifteenth exemplary embodiment.
In step 511, the image coding apparatus determines whether the processing unit to be coded is adjacent to an upper end of the block to which the processing unit belongs. If the processing unit to be coded is adjacent to the upper end of the block to which the processing unit belongs (YES in step 511), the process proceeds to step 512. If the processing unit to be coded is not adjacent to the upper end of the block to which the processing unit belongs (NO in step 511), the process proceeds to step 513.
In step 512, the image coding apparatus determines whether the processing unit to be coded is adjacent to a left end of the block to which the processing unit belongs. If the processing unit to be coded is adjacent to the left end of the block to which the processing unit belongs (YES in step 512), the process proceeds to step 501. If the processing unit to be coded is not adjacent to the left end of the block to which the processing unit belongs (NO in step 512), the process proceeds to step 503.
In step 513, the image coding apparatus determines whether the processing unit to be coded is adjacent to the left end of the block to which the processing unit belongs. If the processing unit to be coded is adjacent to the left end of the block to which the processing unit belongs (YES in step 513), the process proceeds to step 505. If the processing unit to be coded is not adjacent to the left end of the block to which the processing unit belongs (NO in step 513), the process proceeds to step 507.
In steps 501, 503, 505, and 507, the image coding apparatus calculates the left reference pixel to be used in calculating the intra prediction value of the processing unit to be coded. According to the fifteenth exemplary embodiment, the image coding apparatus calculates the left reference pixel by the method illustrated in Fig. 10 or Fig. 11.
For example, in step 501 and step 505, the image coding apparatus may calculate the left reference pixel by the method illustrated in Fig. 11. In step 503 and step 507, the image coding apparatus may calculate the left reference pixel by the method illustrated in Fig. 10. The combination of the calculation methods is not limited to the above.
In steps 502, 504, 506, and 508, the image coding apparatus calculates the upper reference pixel to be used in calculating the intra prediction value of the processing unit to be coded. According to the fifteenth exemplary embodiment, the image coding apparatus calculates the upper reference pixel by the method illustrated in Fig. 10 or Fig. 11.
For example, in step 502 and step 504, the image coding apparatus may calculate the upper reference pixel by the method illustrated in Fig. 11. In step 506 and step 508, the image coding apparatus may calculate the upper reference pixel by the method illustrated in Fig. 10. The combination of the calculation methods is not limited to the above.
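The branching of steps 511 to 513 only selects which of the two calculation methods (Fig. 10 or Fig. 11) is applied to each reference pixel. The sketch below encodes the example combination given in the text; as stated above, other combinations are possible, and the callables calc_fig10 and calc_fig11 are placeholders for those two methods.

```python
# Illustrative sketch of the step 511-513 branching (fifteenth exemplary
# embodiment). calc_fig10 / calc_fig11 are placeholders for the reference
# pixel calculation methods of Fig. 10 and Fig. 11; the mapping below is
# only the example combination named in the text.

def select_reference_methods(at_top_of_block, at_left_of_block,
                             calc_fig10, calc_fig11):
    """Return (left_pixel_method, upper_pixel_method) for one processing unit."""
    if at_top_of_block and at_left_of_block:        # steps 501 and 502
        return calc_fig11, calc_fig11
    if at_top_of_block:                             # steps 503 and 504
        return calc_fig10, calc_fig11
    if at_left_of_block:                            # steps 505 and 506
        return calc_fig11, calc_fig10
    return calc_fig10, calc_fig10                   # steps 507 and 508
```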
Since the processes performed in steps 204, 208, 226, 205, 206, and 207 are the same as those according to the first exemplary embodiment, description will be omitted.
According to the fifteenth exemplary embodiment, the reference pixel whose prediction value is to be used is flexibly set according to the relative position of the processing unit to be coded within the block. As a result, when the image coding apparatus according to the present exemplary embodiment is implemented as an actual device, both coding performance and high-speed processing by pipelining can be appropriately realized.
The reference pixel may also be generated using the upper left, upper right, or lower left processing unit, in addition to the left and upper processing units, similarly as in the first exemplary embodiment.
Further, according to the present exemplary embodiment, the conventional method is replaced by the above-described method. However, the image coding apparatus may be configured to switch between performing the conventional method and the above-described method using the side information included in the coded data, similarly as in the first exemplary embodiment.
Furthermore, according to the present exemplary embodiment, the predetermined size of the processing unit may be a fixed value, or may be multiplexed in the coded data as the side information.
Fig. 18 is a block diagram illustrating an image coding-decoding apparatus according to a sixteenth exemplary embodiment of the present invention.
Referring to Fig. 18, a CPU 600 performs control of the apparatus and various processes. A memory 601 provides a storage area necessary for storing an operating system (OS) and software, and for performing operations for controlling the image coding-decoding apparatus. A bus 602 connects various devices for receiving and transmitting data and control signals. A terminal 603 is used for activating the apparatus, setting various conditions, and instructing reproduction to the apparatus. A storage device 604 stores the software. A storage device 605 stores the streams. The storage devices 604 and 605 may be removable media that can be detached from the system.
A camera 606 captures a moving image. A monitor 607 displays the image. A communication circuit 609 includes a local area network (LAN), a public line, a wireless line, and airwaves. A communication interface 608 transmits and receives the streams via the communication circuit 609.
The moving image coding process performed in the above-described configuration will be described below. An example in which the image data input from the camera 606 is coded and output to the communication circuit 609 will be described below.
The storage device 604 stores image coding software in which the process of the flowchart illustrated in Fig. 3 is written. When the software is activated from the terminal 603, the image coding software is read out from the storage device 604 and loaded into the memory 601, and the coding process is started. The image data input from the camera 606 is thus coded by the image coding software using the optimum intra prediction mode, and is output via the communication interface 608.
Further, a case where the coded data input from the communication circuit 609 via the communication interface 608 is decoded and displayed on the monitor 607 will be described below as an example.
The storage device 604 stores image decoding software in which the process of the flowchart illustrated in Fig. 6 is written. When the software is activated from the terminal 603, the image decoding software is read out from the storage device 604 and loaded into the memory 601, and the decoding process is started.
The coded data input from the communication circuit 609 via the communication interface 608 is decoded by the image decoding software, so that the image data is generated and displayed on the monitor 607.
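As a rough illustration of the data flow just described, the sketch below wires the capture, coding, and transmission stages together. The class and method names (camera.frames, encoder.encode, interface.send) are hypothetical stand-ins for the camera 606, the image coding software, and the communication interface 608; none of them come from the specification.

```python
# Hypothetical sketch of the encode path of the sixteenth exemplary
# embodiment.

def encode_and_transmit(camera, encoder, interface):
    for frame in camera.frames():      # image data input from the camera 606
        coded = encoder.encode(frame)  # coding process of the Fig. 3 flowchart
        interface.send(coded)          # output via the communication interface 608
```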
The image data is not limited to that input from the camera 606, and may be read out from the storage device 605. Further, the coded data is not limited to that input to and output from the communication interface 608, and may be recorded in and read out from the storage device 605. As described above, the present invention may be implemented as software.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all modifications, equivalent structures, and functions.
This application claims the benefit of Japanese Patent Application No. 2011-004647, filed January 13, 2011, which is hereby incorporated by reference herein in its entirety.

Claims (2)

  1. An image coding apparatus comprising:
    a block division unit configured to divide an input image into a plurality of blocks;
    a processing unit division unit configured to divide each block into processing units of a same size or of a smaller size as compared to the block;
    an intra prediction value calculation unit configured to refer to, for each processing unit, a plurality of predetermined processing units or one predetermined processing unit surrounding the processing unit and calculate a prediction value of the processing unit;
    a reconstructed pixel value calculation unit configured to calculate for each processing unit a reconstructed pixel value of the processing unit according to the input image and a result of the intra prediction value calculation unit with use of a predetermined method;
    a reconstructed pixel value storing unit configured to store a result of the reconstructed pixel value calculation unit; and
    an intra prediction value storing unit configured to store a result of the intra prediction value calculation unit,
    wherein the intra prediction value calculation unit calculates a prediction value of the processing unit by switching, based on a positional relationship between the processing unit and a processing unit to be referred to or a status of the processing unit, between a reconstructed pixel value of the processing unit to be referred to acquired from the reconstructed pixel value storing unit and an intra prediction value of the processing unit to be referred to acquired from the intra prediction value storing unit.
  2. A method for performing image coding in an image coding apparatus, the method comprising:
    dividing an input image into a plurality of blocks;
    dividing each block into processing units of a same size or of a smaller size as compared to the block;
    referring to, for each processing unit, a plurality of predetermined processing units or one predetermined processing unit surrounding the processing unit and calculating a prediction value of the processing unit;
    calculating for each processing unit a reconstructed pixel value of the processing unit according to the input image and the calculated intra prediction value with use of a predetermined method;
    storing a result of calculation of the reconstructed pixel value in a storing unit;
    storing a result of calculation of the intra prediction value in a storing unit; and
    calculating a prediction value of the processing unit by switching, based on a positional relationship between the processing unit and a processing unit to be referred to or a status of the processing unit, between a reconstructed pixel value of the processing unit to be referred to acquired from the stored reconstructed pixel values and an intra prediction value of the processing unit to be referred to acquired from the stored intra prediction values.