WO2010110126A1 - Image predictive encoding device, image predictive encoding method, image predictive encoding program, image predictive decoding device, image predictive decoding method, and image predictive decoding program - Google Patents
Image predictive encoding device, image predictive encoding method, image predictive encoding program, image predictive decoding device, image predictive decoding method, and image predictive decoding program
- Publication number
- WO2010110126A1 (PCT/JP2010/054441)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- signal
- prediction
- region
- prediction information
- target
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/189—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
- H04N19/196—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/537—Motion estimation other than block-based
- H04N19/543—Motion estimation other than block-based using regions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/109—Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/11—Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
Definitions
- The present invention relates to an image predictive encoding device, an image predictive encoding method, an image predictive encoding program, an image predictive decoding device, an image predictive decoding method, and an image predictive decoding program. More specifically, the present invention relates to an image predictive encoding device, method, and program, and an image predictive decoding device, method, and program, that perform predictive encoding and predictive decoding using region division.
- Compression coding technology is used to efficiently transmit and store still image and moving image data.
- Examples of compression encoding methods for moving images include MPEG-1 to MPEG-4 and the ITU (International Telecommunication Union) H.261 to H.264 standards, which are widely used.
- In these encoding methods, encoding/decoding processing is performed after the image to be encoded is divided into a plurality of blocks.
- In intra-screen predictive coding, a prediction signal of the target block is generated using an adjacent, already-reproduced image signal within the same screen as the target block.
- The already-reproduced image signal is a signal obtained by restoring previously compressed image data.
- A difference signal is then generated by subtracting the prediction signal from the signal of the target block, and this difference signal is encoded.
- In inter-screen predictive coding, a prediction signal is generated by referring to an already-reproduced image signal in a screen different from that of the target block and performing motion compensation.
- The prediction signal is subtracted from the signal of the target block to generate a difference signal, and the difference signal is encoded.
- FIG. 20 is a schematic diagram for explaining the intra-screen prediction method used in ITU H.264.
- FIG. 20(a) shows an intra-screen prediction method in which extrapolation is performed in the vertical direction.
- In FIG. 20(a), a 4×4-pixel target block 802 is the block to be encoded.
- A pixel group 801 composed of pixels A to M adjacent to the boundary of the target block 802 is the adjacent region, and consists of image signals that have already been reproduced in past processing.
- In this case, the prediction signal is generated by extrapolating the pixel values of the adjacent pixels A to D directly above the target block 802 downward.
- FIG. 20(b) shows an intra-screen prediction method in which extrapolation is performed in the horizontal direction.
- Here, a prediction signal is generated by extrapolating the pixel values of the already-reproduced pixels I to L to the left of the target block 802 in the rightward direction.
- The prediction signal that minimizes the difference from the pixel signal of the target block is adopted as the optimal prediction signal for the target block (a minimal sketch of these modes follows).
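- The following is a minimal NumPy sketch of the two extrapolation modes described above, assuming for illustration that the already-reproduced pixels A to D (the row above the block) and I to L (the column to its left) are supplied as small arrays; the function and parameter names are hypothetical and not taken from any standard implementation.

```python
import numpy as np

def intra_predict_vertical(top_pixels):
    """Mode of FIG. 20(a): copy the four pixels A-D directly above the
    4x4 target block downward (vertical extrapolation)."""
    # top_pixels: shape (4,), the already-reproduced row above the block
    return np.tile(top_pixels, (4, 1))

def intra_predict_horizontal(left_pixels):
    """Mode of FIG. 20(b): copy the four pixels I-L to the left of the
    4x4 target block rightward (horizontal extrapolation)."""
    # left_pixels: shape (4,), the already-reproduced column left of the block
    return np.tile(left_pixels.reshape(4, 1), (1, 4))

def best_intra_prediction(target, top_pixels, left_pixels):
    """Pick the candidate that minimizes the sum of absolute differences
    against the pixel signal of the target block."""
    candidates = [intra_predict_vertical(top_pixels),
                  intra_predict_horizontal(left_pixels)]
    sads = [np.abs(target.astype(int) - c.astype(int)).sum() for c in candidates]
    return candidates[int(np.argmin(sads))]
```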
- A specific method for generating a prediction signal in this way is described in Patent Document 1, for example.
- In inter-screen predictive coding, a prediction signal is generated by searching already-reproduced screens for a signal similar to the pixel signal of the target block to be encoded.
- A motion vector and a residual signal between the pixel signal of the target block and the prediction signal are then encoded.
- The motion vector is a vector that defines the amount of spatial displacement between the target block and the region in which the found signal exists; this block-by-block search for a motion vector is called block matching.
- FIG. 21 is a schematic diagram for explaining block matching.
- An already-reproduced screen 903 is shown in (a), and a screen 901 including the target block 902 is shown in (b).
- A region 904 in the screen 903 is the region located at the same spatial position as the target block 902.
- A search range 905 surrounding the region 904 is set, and the region 906 whose sum of absolute differences with the pixel signal of the target block 902 is minimal is detected from the pixel signals in this search range.
- The signal of the region 906 becomes the prediction signal, and the vector that defines the displacement from the region 904 to the region 906 is detected as the motion vector 907 (a sketch of this search is given below).
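- A minimal sketch of this exhaustive block-matching search, assuming 8-bit grayscale NumPy arrays; the block size and the size of the square search window are illustrative parameters.

```python
import numpy as np

def block_matching(target, ref_frame, block_pos, block_size=8, search=16):
    """Exhaustive block matching: search the reference screen around the
    collocated position (region 904) for the region whose sum of absolute
    differences with the target block is minimal (region 906)."""
    by, bx = block_pos                      # top-left of the target block
    h, w = ref_frame.shape
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block_size > h or x + block_size > w:
                continue                    # candidate leaves the screen
            cand = ref_frame[y:y + block_size, x:x + block_size]
            sad = np.abs(target.astype(int) - cand.astype(int)).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)   # motion vector 907
    prediction = ref_frame[by + best_mv[0]:by + best_mv[0] + block_size,
                           bx + best_mv[1]:bx + best_mv[1] + block_size]
    return best_mv, prediction
```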
- H.264 provides a plurality of prediction types with different block sizes for encoding motion vectors, in order to cope with changes in the local features of an image.
- The prediction types of H.264 are described in Patent Document 2, for example.
- In compression encoding of moving image data, the encoding order of the screens may be arbitrary; accordingly, there are three methods for generating a prediction signal with reference to already-reproduced screens.
- The first method is forward prediction, in which a prediction signal is generated with reference to a past already-reproduced screen in the reproduction order.
- The second method is backward prediction, in which a prediction signal is generated with reference to a future already-reproduced screen in the reproduction order.
- The third method is bidirectional prediction, in which both forward prediction and backward prediction are performed and the two prediction signals are averaged.
- The types of inter-screen prediction are described in Patent Document 3, for example; a brief sketch of the three methods follows.
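- The sketch below illustrates the three methods in NumPy terms, assuming the two motion-compensated signals have already been obtained; the rounding convention in the bidirectional average is an assumption for illustration only.

```python
import numpy as np

def forward_prediction(past_mc_signal):
    # Prediction taken from a past already-reproduced screen.
    return past_mc_signal

def backward_prediction(future_mc_signal):
    # Prediction taken from a future already-reproduced screen.
    return future_mc_signal

def bidirectional_prediction(past_mc_signal, future_mc_signal):
    # Average the two motion-compensated signals (rounding is illustrative).
    return ((past_mc_signal.astype(np.int32) +
             future_mc_signal.astype(np.int32) + 1) // 2).astype(np.uint8)
```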
- The generation of the prediction signal is performed in units of blocks.
- However, since the position and movement of a moving object in the video are arbitrary, when the screen is divided into blocks at equal intervals, a block may contain two or more regions with different movements or patterns. In such a case, a large prediction error occurs near the edges of objects in video predictive coding.
- In H.264, a plurality of prediction types with different block sizes are prepared in order to cope with such local changes in image characteristics and to suppress the increase in prediction error.
- In this case, however, in addition to additional information such as motion vectors, mode information for selecting the block size is also required, and the amount of code of this mode information increases.
- Accordingly, an object of one aspect of the present invention is to provide an image predictive encoding device, an image predictive encoding method, and an image predictive encoding program capable of reducing the prediction error of a target block and efficiently encoding an image while suppressing an increase in prediction information such as additional information (e.g., motion vectors) and mode information. Another object of the present invention is to provide an image predictive decoding device, an image predictive decoding method, and an image predictive decoding program corresponding to these encoding aspects.
- An image predictive encoding device according to an aspect of the present invention comprises: (a) region dividing means for dividing an input image into a plurality of regions; (b) prediction information estimation means for generating a prediction signal of the pixel signal of a target region among the plurality of regions from an already-reproduced signal, and obtaining the prediction information used to generate the prediction signal as prediction information associated with the target region; (c) prediction information encoding means for encoding the prediction information associated with the target region; (d) determination means for comparing the prediction information associated with the target region with prediction information associated with an adjacent region adjacent to the target region and determining, based on the result of the comparison, whether the prediction information associated with the adjacent region can be used for generating the prediction signal of the target region; (e) region width determination means for determining, when the determination means determines that the prediction information associated with the adjacent region can be used for generating the prediction signal of the target region, the region width of a partial region that is part of the target region and whose prediction signal is generated using the prediction information associated with the adjacent region; (f) region width encoding means for encoding information for specifying the region width; (g) prediction signal generation means for generating the prediction signal of the target region from the already-reproduced signal using the prediction information associated with the target region, the prediction information associated with the adjacent region, and the region width; (h) residual signal generation means for generating a residual signal between the prediction signal of the target region and the pixel signal of the target region; (i) residual signal encoding means for encoding the residual signal; (j) residual signal restoration means for generating a reproduced residual signal by decoding the encoded data of the residual signal; (k) addition means for generating a reproduced signal of the target region by adding the prediction signal and the reproduced residual signal; and (l) storage means for storing the reproduced signal of the target region as an already-reproduced signal.
- An image predictive encoding method according to an aspect of the present invention comprises: (a) a region dividing step of dividing an input image into a plurality of regions; (b) a prediction information estimation step of generating a prediction signal of the pixel signal of a target region among the plurality of regions from an already-reproduced signal and obtaining the prediction information used to generate the prediction signal as prediction information associated with the target region; (c) a prediction information encoding step of encoding the prediction information associated with the target region; (d) a determination step of comparing the prediction information associated with the target region with prediction information associated with an adjacent region adjacent to the target region and determining, based on the result of the comparison, whether the prediction information associated with the adjacent region can be used for generating the prediction signal of the target region; (e) a region width determination step of determining, when it is determined in the determination step that the prediction information associated with the adjacent region can be used for generating the prediction signal of the target region, the region width of a partial region that is part of the target region and whose prediction signal is generated using the prediction information associated with the adjacent region; (f) a region width encoding step of encoding information for specifying the region width; (g) a prediction signal generation step of generating the prediction signal of the target region from the already-reproduced signal using the prediction information associated with the target region, the prediction information associated with the adjacent region, and the region width; (h) a residual signal generation step of generating a residual signal between the prediction signal of the target region and the pixel signal of the target region; (i) a residual signal encoding step of encoding the residual signal; (j) a residual signal restoration step of generating a reproduced residual signal by decoding the encoded data of the residual signal; (k) a reproduced signal generation step of generating a reproduced signal of the target region by adding the prediction signal and the reproduced residual signal; and (l) a storing step of storing the reproduced signal of the target region as an already-reproduced signal.
- An image predictive encoding program according to an aspect of the present invention causes a computer to function as: (a) region dividing means for dividing an input image into a plurality of regions; (b) prediction information estimation means for generating a prediction signal of the pixel signal of a target region among the plurality of regions from an already-reproduced signal and obtaining the prediction information used to generate the prediction signal as prediction information associated with the target region; (c) prediction information encoding means for encoding the prediction information associated with the target region; (d) determination means for comparing the prediction information associated with the target region with prediction information associated with an adjacent region adjacent to the target region and determining, based on the result of the comparison, whether the prediction information associated with the adjacent region can be used for generating the prediction signal of the target region; (e) region width determination means for determining, when the determination means determines that the prediction information associated with the adjacent region can be used for generating the prediction signal of the target region, the region width of a partial region that is part of the target region and whose prediction signal is generated using the prediction information associated with the adjacent region; (f) region width encoding means for encoding information for specifying the region width; (g) prediction signal generation means for generating the prediction signal of the target region from the already-reproduced signal using the prediction information associated with the target region, the prediction information associated with the adjacent region, and the region width; (h) residual signal generation means for generating a residual signal between the prediction signal of the target region and the pixel signal of the target region; (i) residual signal encoding means for encoding the residual signal; (j) residual signal restoration means for generating a reproduced residual signal by decoding the encoded data of the residual signal; (k) addition means for generating a reproduced signal of the target region by adding the prediction signal and the reproduced residual signal; and (l) storage means for storing the reproduced signal of the target region as an already-reproduced signal.
- In these encoding aspects of the present invention, the prediction signal of the partial region in the target region is generated using the prediction information of the adjacent region. Therefore, the prediction error of a target region in which an edge exists can be reduced. In addition, since the prediction information of the adjacent region is used to generate the prediction signal of the partial region in the target region, an increase in the amount of prediction information can be suppressed.
- In one embodiment, when the prediction information associated with the target region is the same as the prediction information associated with the adjacent region, it may be determined that the prediction information associated with the adjacent region cannot be used for generating the prediction signal of the target region. This is because, when the two pieces of prediction information are the same, the prediction error of the target region cannot be reduced.
- Also, when it is determined that the combination of the prediction information associated with the target region and the prediction information associated with the adjacent region does not satisfy a preset condition, it may be determined that the prediction information associated with the adjacent region cannot be used for generating the prediction signal of the target region.
- Furthermore, when it is determined that the prediction information associated with the adjacent region cannot be used for generating the prediction signal of the target region, the encoded data of the region width associated with the target region need not be output. The code amount is thereby reduced.
- In one embodiment, the adjacent regions may be two adjacent regions located on the left side and the upper side of the target region.
- In this case, identification information that identifies which of the two adjacent regions carries the prediction information used for generating the prediction signal of the target region can be encoded. According to this feature, the prediction signal of the partial region can be generated from the more suitable of the two adjacent regions, so the prediction error can be further reduced.
- An image predictive decoding device according to an aspect of the present invention comprises: (a) data analysis means for extracting, from compressed data obtained by dividing an image into a plurality of regions and encoding them, encoded data of prediction information used for generating a prediction signal of a target region, encoded data of information for specifying the region width of a partial region in the target region whose prediction signal is generated using prediction information associated with an adjacent region adjacent to the target region, and encoded data of a residual signal; (b) prediction information decoding means for decoding the encoded data of the prediction information to restore the prediction information associated with the target region; (c) determination means for comparing the prediction information associated with the target region with the prediction information associated with the adjacent region and determining, based on the result of the comparison, whether the prediction information associated with the adjacent region can be used for generating the prediction signal of the target region; (d) region width decoding means for decoding, when the determination means determines that the prediction information associated with the adjacent region can be used for generating the prediction signal of the target region, the encoded data of the information for specifying the region width to restore the region width; (e) prediction signal generation means for generating the prediction signal of the target region from an already-reproduced signal using the prediction information associated with the target region, the prediction information associated with the adjacent region, and the region width; (f) residual signal restoration means for restoring a reproduced residual signal of the target region from the encoded data of the residual signal; (g) addition means for generating a reproduced signal of the target region by adding the prediction signal of the target region and the reproduced residual signal; and (h) storage means for storing the reproduced signal of the target region as an already-reproduced signal.
- An image predictive decoding method according to an aspect of the present invention comprises: (a) a data analysis step of extracting, from compressed data obtained by dividing an image into a plurality of regions and encoding them, encoded data of prediction information used for generating a prediction signal of a target region, encoded data of information for specifying the region width of a partial region in the target region whose prediction signal is generated using prediction information associated with an adjacent region adjacent to the target region, and encoded data of a residual signal; (b) a prediction information decoding step of decoding the encoded data of the prediction information to restore the prediction information associated with the target region; (c) a determination step of comparing the prediction information associated with the target region with the prediction information associated with the adjacent region and determining, based on the result of the comparison, whether the prediction information associated with the adjacent region can be used for generating the prediction signal of the target region; (d) a region width decoding step of decoding, when it is determined in the determination step that the prediction information associated with the adjacent region can be used for generating the prediction signal of the target region, the encoded data of the information for specifying the region width to restore the region width; (e) a prediction signal generation step of generating the prediction signal of the target region from an already-reproduced signal using the prediction information associated with the target region, the prediction information associated with the adjacent region, and the region width; (f) a residual signal restoration step of restoring a reproduced residual signal of the target region from the encoded data of the residual signal; (g) a reproduced signal generation step of generating a reproduced signal of the target region by adding the prediction signal of the target region and the reproduced residual signal; and (h) a storing step of storing the reproduced signal of the target region as an already-reproduced signal.
- An image predictive decoding program according to an aspect of the present invention causes a computer to function as: (a) data analysis means for extracting, from compressed data obtained by dividing an image into a plurality of regions and encoding them, encoded data of prediction information used for generating a prediction signal of a target region, encoded data of information for specifying the region width of a partial region in the target region whose prediction signal is generated using prediction information associated with an adjacent region adjacent to the target region, and encoded data of a residual signal; (b) prediction information decoding means for decoding the encoded data of the prediction information to restore the prediction information associated with the target region; (c) determination means for comparing the prediction information associated with the target region with the prediction information associated with the adjacent region and determining, based on the result of the comparison, whether the prediction information associated with the adjacent region can be used for generating the prediction signal of the target region; (d) region width decoding means for decoding, when the determination means determines that the prediction information associated with the adjacent region can be used for generating the prediction signal of the target region, the encoded data of the information for specifying the region width to restore the region width; (e) prediction signal generation means for generating the prediction signal of the target region from an already-reproduced signal using the prediction information associated with the target region, the prediction information associated with the adjacent region, and the region width; (f) residual signal restoration means for restoring a reproduced residual signal of the target region from the encoded data of the residual signal; (g) addition means for generating a reproduced signal of the target region by adding the prediction signal of the target region and the reproduced residual signal; and (h) storage means for storing the reproduced signal of the target region as an already-reproduced signal.
- These decoding aspects of the present invention can suitably reproduce an image from the compressed data generated by the encoding aspects of the present invention described above.
- In one embodiment, when the prediction information associated with the target region is the same as the prediction information associated with the adjacent region, it may be determined that the prediction information associated with the adjacent region cannot be used for generating the prediction signal of the target region. Further, when it is determined that the combination of the prediction information associated with the target region and the prediction information associated with the adjacent region does not satisfy a preset condition, it may also be determined that the prediction information associated with the adjacent region cannot be used for generating the prediction signal of the target region.
- Furthermore, when it is determined that the prediction information associated with the adjacent region cannot be used for generating the prediction signal of the target region, the region width associated with the target region can be set to zero.
- In one embodiment, the adjacent regions may be two adjacent regions located on the left side and the upper side of the target region.
- In this case, the region width decoding means can decode identification information that identifies which of the two adjacent regions carries the prediction information used for generating the prediction signal of the target region.
- As described above, according to the present invention, an image predictive encoding device, an image predictive encoding method, and an image predictive encoding program capable of reducing the prediction error of a target block and efficiently encoding an image while suppressing an increase in prediction information are provided. Further, according to the present invention, a corresponding image predictive decoding device, image predictive decoding method, and image predictive decoding program are provided.
- FIG. 20 is a schematic diagram for explaining the intra-screen prediction method used in ITU H.264, and FIG. 21 is a schematic diagram for explaining block matching.
- FIG. 1 is a diagram illustrating an image predictive encoding device according to an embodiment.
- An image predictive encoding device 100 shown in FIG. 1 includes an input terminal 102, a block divider 104, a prediction signal generator 106, a frame memory 108, a subtractor 110, a converter 112, a quantizer 114, an inverse quantizer 116, an inverse transformer 118, an adder 120, a quantized transform coefficient encoder 122, an output terminal 124, a prediction information estimator 126, a prediction information memory 128, a determiner 130, a prediction information encoder 132, a region width determiner 134, and a region width encoder 136.
- The converter 112, the quantizer 114, and the quantized transform coefficient encoder 122 function as residual signal encoding means, and the inverse quantizer 116 and the inverse transformer 118 function as residual signal restoration means.
- the input terminal 102 is a terminal for inputting a moving image signal.
- This moving image signal is a signal including a plurality of images.
- the input terminal 102 is connected to the block divider 104 via a line L102.
- the block divider 104 divides an image included in a moving image signal into a plurality of regions. Specifically, the block divider 104 sequentially selects a plurality of images included in the moving image signal as images to be encoded. The block divider 104 divides the selected image into a plurality of areas. In the present embodiment, these areas are 8×8 pixel blocks. However, blocks having other sizes and/or shapes may be used as the regions.
- the block divider 104 is connected to the prediction information estimator 126 via a line L104.
- the prediction information estimator 126 detects prediction information necessary for generating a prediction signal of a target region (target block) to be encoded.
- As a prediction information generation method, that is, a prediction method, intra-screen prediction and inter-screen prediction as described in the background art can be applied.
- the present invention is not limited to these prediction methods.
- In the present embodiment, the description will be given on the assumption that the prediction process is performed by the block matching shown in FIG. 21.
- the prediction information includes a motion vector, reference screen selection information, and the like.
- the prediction information detected to generate the prediction signal of the target block is referred to as “prediction information associated with the target block”.
- the prediction information estimator 126 is connected to the prediction information memory 128 and the prediction information encoder 132 via a line L126a and a line L126b.
- the prediction information memory 128 receives the prediction information from the prediction information estimator 126 via the line L126a, and stores the prediction information.
- the prediction information memory 128 is connected to the determiner 130 via a line L128.
- the prediction information encoder 132 receives the prediction information from the prediction information estimator 126 via the line L126b.
- the prediction information encoder 132 entropy encodes the received prediction information to generate encoded data, and outputs the encoded data to the output terminal 124 via the line L132.
- As the entropy coding, arithmetic coding, variable-length coding, or the like can be used, but the present invention is not limited to these entropy coding methods.
- the determiner 130 receives the prediction information associated with the target block and the prediction information associated with the adjacent block from the prediction information memory 128 via the line L128.
- the adjacent block is an adjacent area adjacent to the target block, and is an already encoded area.
- The determiner 130 compares the prediction information associated with the target block with the prediction information associated with the adjacent block, and determines whether the prediction information associated with the adjacent block can be used for generating the prediction signal of the target block.
- Specifically, the determiner 130 compares the prediction information associated with the target block with the prediction information associated with the adjacent block, and if the two pieces of prediction information match, determines that the prediction information associated with the adjacent block cannot be used for generating the prediction signal of the target block. This is because, when the two pieces of prediction information match, generating the prediction signal of a partial region of the target block using the prediction information associated with the adjacent block gives the same result as generating it using the prediction information associated with the target block; that is, a reduction in prediction error cannot be expected.
- On the other hand, when the two pieces of prediction information differ, the determiner 130 determines that the prediction information associated with the adjacent block can be used for generating the prediction signal of the target block.
- The determiner 130 is connected to the region width determiner 134 and the region width encoder 136 via a line L130, and the result of the comparison (determination) by the determiner 130 is output via the line L130 to the region width determiner 134 and the region width encoder 136.
- Hereinafter, the determination result in the case where the prediction information associated with the adjacent block cannot be used for generating the prediction signal of the target block is referred to as a determination result indicating "unusable", and the determination result in the case where it can be used is referred to as a determination result indicating "available".
- Details of the operation of the determiner 130 will be described later; a minimal sketch of this decision follows.
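- The sketch below assumes, for illustration only, that prediction information is represented as a hashable tuple (motion vector, reference screen number) and that a missing neighbor (for example, one outside the screen) is passed as None; these representations are not prescribed by the embodiment.

```python
def neighbor_prediction_info_available(pred_info_target, pred_info_neighbor):
    """Decision corresponding to the determiner 130: return True ("available")
    only if the adjacent block exists and its prediction information differs
    from the prediction information associated with the target block."""
    if pred_info_neighbor is None:
        # Adjacent block is outside the screen or not yet encoded.
        return False
    if pred_info_neighbor == pred_info_target:
        # Identical prediction information cannot reduce the prediction error.
        return False
    return True
```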
- the area width determiner 134 receives the determination result from the determiner 130 via the line L130.
- the region width determiner 134 determines the region width of the partial region in the target block in which the prediction signal is generated using the prediction information attached to the adjacent block.
- the region width determiner 134 receives prediction information associated with the target block and prediction information associated with an adjacent block from the prediction information memory 128 via the line L128a. Further, the region width determiner 134 receives the already reproduced signal from the frame memory 108 and receives the image signal of the target block from the block divider 104.
- FIG. 2 is a diagram for explaining a partial region in a target block for generating a prediction signal using prediction information of adjacent blocks.
- FIG. 2 shows a case in which the block B1 adjacent to the left of the target block Bt is used as the adjacent block.
- However, the adjacent block in the present invention may be the block adjacent to the left of the target block, the block adjacent above it, or both. In some cases, the right adjacent block and the lower adjacent block can also be used as adjacent blocks.
- The target block Bt and the adjacent block B1 shown in FIG. 2 are 8×8-pixel blocks.
- The upper-left pixel position (horizontal position, vertical position) is represented by (0, 0), and the lower-right pixel position (horizontal position, vertical position) is represented by (7, 7).
- the partial region R2 shown in FIG. 2 is a region for generating a prediction signal using prediction information of the adjacent block B1, and the horizontal region width is w. That is, the partial region R2 is surrounded by four pixel positions (0, 0), (w-1, 0), (0, 7), and (w-1, 7).
- the partial region R1 is a region where a prediction signal is generated using prediction information associated with the target block.
- the settable area width is set to 0 to 7 pixels in 1 pixel increments.
- The region width determiner 134 according to the present embodiment generates a prediction signal of the target block for each of the eight settable region widths, and selects the region width that minimizes the sum of absolute values or the sum of squares of the prediction errors (a sketch of this search is given below).
- This is implemented by acquiring the pixel signal of the target block and the prediction information associated with the target block and the adjacent block from the block divider 104 and the prediction information memory 128, respectively, and generating a prediction signal of the target block from the already-reproduced signal stored in the frame memory 108 on the basis of the prediction information and the region width.
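- A minimal sketch of this width search, assuming for illustration a hypothetical motion_compensate helper in which the prediction information is reduced to a single (dy, dx) motion vector into one already-reproduced reference picture; real prediction information also carries reference screen selection information.

```python
import numpy as np

def motion_compensate(ref, pred_info, y, x, h, w):
    """Fetch an h x w patch of the already-reproduced reference signal,
    displaced by the motion vector carried in pred_info (here simply a
    (dy, dx) tuple, an illustrative simplification). Assumes the displaced
    patch lies entirely inside ref."""
    dy, dx = pred_info
    return ref[y + dy:y + dy + h, x + dx:x + dx + w].astype(np.int32)

def choose_region_width(target, ref, pred_info_target, pred_info_left,
                        block_pos, widths=range(8)):
    """Region width determiner 134: for every settable width w (0..7 in
    1-pixel steps), predict the leftmost w columns (partial region R2) with
    the left neighbor's prediction information and the remaining columns
    (partial region R1) with the target block's own prediction information,
    then keep the width with the smallest sum of absolute prediction errors."""
    by, bx = block_pos
    best_w, best_err = 0, None
    for w in widths:
        pred = np.empty((8, 8), dtype=np.int32)
        if w > 0:       # partial region R2, predicted with the neighbor's info
            pred[:, :w] = motion_compensate(ref, pred_info_left, by, bx, 8, w)
        if w < 8:       # partial region R1, predicted with the target's own info
            pred[:, w:] = motion_compensate(ref, pred_info_target,
                                            by, bx + w, 8, 8 - w)
        err = np.abs(target.astype(np.int32) - pred).sum()
        if best_err is None or err < best_err:
            best_w, best_err = w, err
    return best_w
```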
- the region width determination method and the settable region width candidates are not particularly limited.
- the settable region width may be a pixel width specified by a multiple of 2, and one or more arbitrary widths can be adopted.
- a plurality of settable region widths may be prepared, and the selection information may be encoded in sequence units, frame units, or block units.
- the region width determiner 134 is connected to the region width encoder 136 and the prediction signal generator 106 via a line L134a and a line L134b, respectively.
- the region width determiner 134 outputs the determined region width (information specifying the region width) to the region width encoder 136 and the prediction signal generator 106 via the line L134a and the line L134b.
- the region width encoder 136 entropy-encodes the region width received via the line L134a when the determination result received from the determiner 130 indicates “available” to generate encoded data.
- the region width encoder 136 can use an entropy encoding scheme such as arithmetic encoding or variable length encoding, but the present invention is not limited to these encoding schemes.
- the area width encoder 136 is connected to the output terminal 124 via a line L136, and the encoded data generated by the area width encoder 136 is output to the output terminal 124 via the line L136.
- the prediction signal generator 106 receives two pieces of prediction information associated with the target block and the adjacent block from the prediction information memory 128 via the line L128b.
- the prediction signal generator 106 receives the region width from the region width determiner 134 via the line L134b, and receives the already reproduced signal from the frame memory 108 via the line L108.
- the prediction signal generator 106 generates a prediction signal of the target block from the already reproduced signal, using the received two pieces of prediction information and region width. An example of a prediction signal generation method will be described later.
- the prediction signal generator 106 is connected to the subtractor 110 via a line L106.
- the prediction signal generated by the prediction signal generator 106 is output to the subtractor 110 via the line L106.
- the subtractor 110 is connected to the block divider 104 via a line L104b.
- the subtractor 110 subtracts the prediction signal of the target block generated by the prediction signal generator 106 from the image signal of the target block received from the block divider 104 via the line L104b. A residual signal is generated by this subtraction.
- the subtractor 110 is connected to the converter 112 via the line L110, and the residual signal is output to the converter 112 via the line L110.
- the converter 112 is a part that generates a transform coefficient by applying discrete cosine transform to the input residual signal.
- the quantizer 114 receives the transform coefficient from the transformer 112 via the line L112.
- the quantizer 114 quantizes the transform coefficient to generate a quantized transform coefficient.
- the quantized transform coefficient encoder 122 receives the quantized transform coefficient from the quantizer 114 via the line L114 and entropy codes the quantized transform coefficient to generate encoded data.
- the quantized transform coefficient encoder 122 outputs the generated encoded data to the output terminal 124 via the line L122.
- As the entropy coding of the quantized transform coefficient encoder 122, arithmetic coding or variable-length coding can be used, but the present invention is not limited to these coding methods.
- the output terminal 124 collectively outputs the encoded data received from the prediction information encoder 132, the region width encoder 136, and the quantized transform coefficient encoder 122 to the outside.
- the inverse quantizer 116 receives the quantized transform coefficient from the quantizer 114 via the line L114b.
- the inverse quantizer 116 dequantizes the received quantized transform coefficient to restore the transform coefficient.
- the inverse transformer 118 receives the transform coefficient from the inverse quantizer 116 via the line L116, and applies an inverse discrete cosine transform to the transform coefficient to restore a residual signal (reproduced residual signal).
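- The sketch below illustrates this transform/quantization path and its inverse with a 2-D discrete cosine transform and uniform scalar quantization; the quantization step is an illustrative parameter and this is not the exact transform or quantizer of any particular standard.

```python
import numpy as np
from scipy.fft import dctn, idctn

def transform_and_quantize(residual, qstep=16):
    """Converter 112 and quantizer 114: apply a 2-D DCT to the residual
    signal and quantize the transform coefficients with a uniform step
    (qstep is an illustrative parameter)."""
    coeff = dctn(residual.astype(float), norm='ortho')
    return np.round(coeff / qstep).astype(np.int32)

def dequantize_and_inverse_transform(qcoeff, qstep=16):
    """Inverse quantizer 116 and inverse transformer 118: restore the
    reproduced residual signal from the quantized transform coefficients."""
    return idctn(qcoeff.astype(float) * qstep, norm='ortho')
```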
- the adder 120 receives the reproduction residual signal from the inverse converter 118 via the line L118, and receives the prediction signal from the prediction signal generator 106 via the line L106b.
- the adder 120 adds the received reproduction residual signal and the prediction signal, and reproduces the signal (reproduction signal) of the target block.
- the reproduction signal generated by the adder 120 is output to the frame memory 108 via the line L120 and is stored in the frame memory 108 as an already reproduced signal.
- In the present embodiment, the converter 112 and the inverse transformer 118 are used, but other transform processing may be used in place of these converters; moreover, the converter 112 and the inverse transformer 118 are not essential. In this way, the reproduced signal of the encoded target block is restored by inverse processing and stored in the frame memory 108 for use in generating prediction signals of subsequent target blocks.
- the configuration of the encoder is not limited to that shown in FIG.
- the determination unit 130 and the prediction information memory 128 may be included in the prediction signal generator 106.
- the region width determiner 134 may be included in the prediction information estimator 126.
- the operation of the image predictive coding apparatus 100 and the image predictive coding method according to an embodiment will be described. Details of operations of the determiner 130, the region width determiner 134, and the prediction signal generator 106 will also be described.
- FIG. 3 is a flowchart showing a procedure of an image predictive encoding method according to an embodiment.
- the block divider 104 divides the encoding target image into a plurality of blocks.
- In step S102, one block is selected from among the plurality of blocks as the target block for encoding.
- In step S104, the prediction information estimator 126 determines the prediction information of the target block. This prediction information is encoded by the prediction information encoder 132 in the subsequent step S106.
- The present image predictive encoding method then proceeds to step S108.
- FIG. 4 is a detailed flowchart of step S108 in FIG.
- In step S200, the two pieces of prediction information associated with the target block and the adjacent block are input to the determiner 130.
- In step S202, the determiner 130 determines whether the prediction information of the adjacent block can be used for generating the prediction signal of the target block.
- FIG. 5 is a detailed flowchart of step S202 in FIG.
- In step S300, the determiner 130 determines whether the two pieces of prediction information associated with the target block and the adjacent block match. If the determination in step S300 is true (Yes), that is, if the two pieces of prediction information match, the determiner 130 outputs a determination result indicating "unusable" in step S302.
- If the determination in step S300 is false (No), in the subsequent step S304 the determiner 130 determines whether the prediction information associated with the adjacent block is available for generating the prediction signal of the target block. If the determination in step S304 is true (Yes), in the subsequent step S306 the determiner 130 outputs a determination result indicating "available". On the other hand, if the determination in step S304 is false (No), the determiner 130 performs the process of step S302 described above.
- Cases in which it is determined in step S304 that the prediction information of the adjacent block is not available include: (1) the adjacent block is outside the screen, and (2) the combination of the prediction information of the target block and the prediction information of the adjacent block is not permitted.
- Thus, the determiner 130 determines, in accordance with a predetermined rule, whether to generate the prediction signal of the partial region of the target region using the prediction information associated with the adjacent block.
- This rule need not be transmitted if the encoding device and the decoding device share the information in advance, but it may also be encoded and transmitted. For example, a plurality of such rules may be prepared, and which rule is applied may be signaled in frame units, sequence units, or block units.
- In step S204, the region width determiner 134 refers to the determination result of the determiner 130 and checks whether it indicates "available". If the determination result indicates "unusable", the processing of step S108 ends.
- If the determination result indicates "available", the region width determiner 134 determines, in the subsequent step S206, the region width of the partial region in the target region whose prediction signal is generated using the prediction information associated with the adjacent block; this region width is selected from candidates prepared in advance.
- The region width encoder 136 then encodes the determined region width.
- In step S110, the prediction signal generator 106 generates the prediction signal of the target block from the already-reproduced signal stored in the frame memory 108, using the two pieces of prediction information associated with the target block and the adjacent block and the region width determined by the region width determiner 134.
- FIG. 6 is a detailed flowchart of step S110 in FIG.
- FIG. 6 shows the operation of the prediction signal generator 106 in the case where the prediction signal of the partial region R2 in the 8×8-pixel target block is generated using the prediction information associated with the left adjacent block, as shown in FIG. 2.
- In step S400, the prediction signal generator 106 first acquires the prediction information Pt associated with the target block and the prediction information Pn associated with the adjacent block.
- In step S402, the prediction signal generator 106 acquires the region width w from the region width determiner 134.
- In step S404, the prediction signal generator 106 generates the prediction signal of the partial region R1 in the target block shown in FIG. 2 from the already-reproduced signal, using the prediction information Pt and the region width w.
- In step S406, the prediction signal generator 106 generates the prediction signal of the partial region R2 of the target block from the already-reproduced signal, using the prediction information Pn and the region width w.
- Note that when the region width w is 0, step S406 can be omitted, and when the region width w is 7, step S404 can be omitted. A sketch of steps S400 to S406 follows.
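- The sketch below mirrors steps S400 to S406 under the same illustrative simplification as before: the prediction information Pt and Pn are reduced to (dy, dx) motion vectors into a single already-reproduced reference picture, and the displaced patches are assumed to lie inside that picture.

```python
import numpy as np

def generate_prediction_signal(ref, pred_info_target, pred_info_left, w,
                               block_pos, block_size=8):
    """Prediction signal generator 106 for the layout of FIG. 2: the leftmost
    w columns (partial region R2) are predicted with the left neighbor's
    prediction information Pn, and the remaining columns (partial region R1)
    with the target block's own prediction information Pt."""
    by, bx = block_pos
    pred = np.empty((block_size, block_size), dtype=np.int32)
    if w < block_size:                       # step S404: region R1 with Pt
        dy, dx = pred_info_target
        pred[:, w:] = ref[by + dy:by + dy + block_size,
                          bx + w + dx:bx + dx + block_size]
    if w > 0:                                # step S406: region R2 with Pn
        dy, dx = pred_info_left
        pred[:, :w] = ref[by + dy:by + dy + block_size,
                          bx + dx:bx + dx + w]
    return pred
```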
- In step S112, the subtractor 110 generates a residual signal using the pixel signal of the target block and the prediction signal.
- In step S114, the converter 112, the quantizer 114, and the quantized transform coefficient encoder 122 transform and encode the residual signal to generate encoded data.
- In step S116, the inverse quantizer 116 and the inverse transformer 118 restore the reproduced residual signal from the quantized transform coefficients.
- The adder 120 then generates a reproduced signal by adding the reproduced residual signal and the prediction signal.
- In step S120, the reproduced signal is stored in the frame memory 108 as an already-reproduced signal.
- In step S122, it is checked whether all blocks have been processed as the target block. If the processing of all blocks has not been completed, one of the unprocessed blocks is selected as the target block and the processing from step S102 onward is performed for that block. When the processing of all blocks has been completed, the processing of the present image predictive encoding method ends.
- FIG. 7 is a diagram illustrating an image predictive decoding device according to an embodiment. The image predictive decoding device shown in FIG. 7 includes an input terminal 202, a data analyzer 204, an inverse quantizer 206, an inverse transformer 208, an adder 210, an output terminal 212, a quantized transform coefficient decoder 214, a prediction information decoder 216, a region width decoder 218, a frame memory 108, a prediction signal generator 106, a prediction information memory 128, and a determiner 130.
- The inverse quantizer 206, the inverse transformer 208, and the quantized transform coefficient decoder 214 function as residual signal decoding means. Note that means other than the inverse quantizer 206 and the inverse transformer 208 may be used for this restoration, and the inverse transformer 208 may be omitted.
- the input terminal 202 inputs compressed data that has been compression-encoded by the above-described image predictive encoding device 100 (or image predictive encoding method).
- The compressed data includes, for each of the plurality of blocks of the image, encoded data of quantized transform coefficients generated by transforming, quantizing, and entropy-coding the residual signal, encoded data of the prediction information for generating the prediction signal, and encoded data of the region width of the partial region in the block whose prediction signal is generated using the prediction information associated with the adjacent block adjacent to the target block.
- the prediction information includes a motion vector, a reference screen number, and the like.
- the input terminal 202 is connected to the data analyzer 204 via a line L202.
- the data analyzer 204 receives the compressed data from the input terminal 202 via the line L202.
- The data analyzer 204 analyzes the received compressed data and, for the target block to be decoded, separates the compressed data into encoded data of the quantized transform coefficients, encoded data of the prediction information, and encoded data of the region width.
- The data analyzer 204 outputs the encoded data of the region width to the region width decoder 218 via the line L204a, outputs the encoded data of the prediction information to the prediction information decoder 216 via the line L204b, and outputs the encoded data of the quantized transform coefficients to the quantized transform coefficient decoder 214 via the line L204c.
- the prediction information decoder 216 entropy-decodes encoded data of prediction information associated with the target block to obtain prediction information.
- the prediction information decoder 216 is connected to the prediction information memory 128 via a line L216.
- the prediction information generated by the prediction information decoder 216 is stored in the prediction information memory 128 via the line L216.
- the prediction information memory 128 is connected to the determination unit 130 and the prediction signal generator 106 via a line L128a and a line L128b, respectively.
- the determiner 130 has the same function as the determiner 130 of the encoding device in FIG. That is, the determiner 130 compares the prediction information associated with the target block with the prediction information associated with the adjacent block adjacent to the target block, and can use the prediction information associated with the adjacent block when generating the prediction signal of the target block. It is determined whether or not.
- the determiner 130 compares two prediction information associated with the target block and the adjacent neighboring block, and if the two pieces of prediction information match, the determination block 130 generates a prediction signal for the target block. It is determined that the accompanying prediction information cannot be used. That is, in this case, the determiner 130 outputs a determination result indicating “unusable”. On the other hand, when the two pieces of prediction information are different, the determiner 130 outputs a determination result indicating “available”.
- the determiner 130 is connected to the region width decoder 218 via the line L130.
- the determination result by the determiner 130 is output to the region width decoder 218 via the line L130. Note that the detailed processing flow of the determiner 130 is omitted here because it has already been described above.
- the region width decoder 218 entropy-decodes the input encoded data of the region width, based on the determination result of the determiner 130 received via the line L130, and restores the region width. That is, when the determination result indicates "available", the region width decoder 218 decodes the encoded data of the region width to restore the region width. On the other hand, when the determination result indicates "unusable", the region width need not be restored.
- the region width decoder 218 is connected to the prediction signal generator 106 via a line L218, and the region width generated by the region width decoder 218 is output to the prediction signal generator 106 via a line L218.
- the prediction signal generator 106 has the same function as the prediction signal generator 106 of the encoding device described above. That is, the prediction signal generator 106 generates the prediction signal of the decoding target block from the already reproduced signal stored in the frame memory 108, using the prediction information associated with the target block, the prediction information associated with the adjacent block (if necessary), and the region width received via the line L218. Details of the operation of the prediction signal generator 106 have already been described, so they are omitted here. The prediction signal generator 106 is connected to the adder 210 via the line L106 and outputs the generated prediction signal to the adder 210 via the line L106.
- the quantized transform coefficient decoder 214 receives the encoded data of the quantized transform coefficients from the data analyzer 204 via the line L204c.
- the quantized transform coefficient decoder 214 entropy-decodes the received encoded data, and restores the quantized transform coefficient of the residual signal of the target block.
- the quantized transform coefficient decoder 214 outputs the restored quantized transform coefficient to the inverse quantizer 206 via the line L214.
- the inverse quantizer 206 dequantizes the quantized transform coefficient received via the line L214 to restore the transform coefficient.
- the inverse transformer 208 receives the restored transform coefficient from the inverse quantizer 206 via the line L206, applies inverse discrete cosine transform to the transform coefficient, and obtains the residual signal (reconstructed residual signal) of the target block. Restore.
- the adder 210 receives the restored residual signal from the inverse transformer 208 via the line L208, and receives the prediction signal generated by the prediction signal generator 106 via the line L106.
- the adder 210 generates a reproduction signal of the target block by adding the received restored residual signal and the prediction signal.
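The chain from the inverse quantizer 206 through the inverse transformer 208 to the adder 210 can be sketched as follows. This is only an illustrative Python/NumPy sketch assuming uniform scalar dequantization (coefficient times step size) and an orthonormal 8x8 DCT; the function names, the simplified quantization rule, and the clipping to an 8-bit range are assumptions introduced here, not the device's actual arithmetic.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    x = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    m[0, :] *= 1.0 / np.sqrt(2.0)
    return m * np.sqrt(2.0 / n)

def reconstruct_block(qcoeff: np.ndarray, qstep: float,
                      prediction: np.ndarray) -> np.ndarray:
    """Inverse quantizer 206 + inverse transformer 208 + adder 210 (sketch)."""
    c = dct_matrix(qcoeff.shape[0])
    coeff = qcoeff * qstep                         # inverse quantization
    residual = c.T @ coeff @ c                     # inverse 2-D DCT
    return np.clip(prediction + residual, 0, 255)  # reproduced signal

# Round trip on toy data: forward transform and quantize, then reconstruct.
rng = np.random.default_rng(0)
pred = rng.integers(0, 256, (8, 8)).astype(float)
resid = rng.normal(0.0, 4.0, (8, 8))
c, qstep = dct_matrix(8), 2.0
qcoeff = np.round(c @ resid @ c.T / qstep)         # encoder side (simplified)
rec = reconstruct_block(qcoeff, qstep, pred)
print(float(np.abs(rec - np.clip(pred + resid, 0, 255)).max()))  # small error
```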
- the reproduction signal is output to the frame memory 108 via the line L210 and stored in the frame memory 108. This reproduction signal is also output to the output terminal 212.
- the output terminal 212 outputs a reproduction signal to the outside (for example, a display).
- FIG. 8 is a flowchart of an image predictive decoding method according to an embodiment.
- in this image predictive decoding method, first, compressed data is input via the input terminal 202 in step S500.
- in step S502, a target block to be processed is selected.
- in step S504, the data analyzer 204 analyzes the compressed data and extracts the encoded data of the prediction information, the region width, and the quantized transform coefficients associated with the target block to be decoded.
- the prediction information is decoded by the prediction information decoder 216 in step S506.
- FIG. 9 is a detailed flowchart of step S508 in FIG. 8. As shown in FIG. 9, in the process of step S508, first, in step S600, two pieces of prediction information associated with the target block and the adjacent block are input to the determiner 130.
- in step S202, the determiner 130 determines the availability of the prediction information associated with the adjacent block and outputs a determination result. Since the operation of the determiner 130 in step S202 is the same as the operation described in FIG. 5, its detailed description is omitted.
- in step S602, it is determined whether or not the determination result of the determiner 130 indicates "available". If the determination result in step S602 is true (Yes), that is, if the prediction information of the adjacent block is available, then in step S604 the region width decoder 218 decodes the encoded data of the region width and restores the region width of the partial region (R2) of the target block. On the other hand, if the determination in step S602 is false (No), then in step S606 the region width decoder 218 sets the region width of the partial region (R2) of the target block to zero.
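The branch in steps S602 to S606 can be written compactly. A minimal sketch, assuming a hypothetical `read_width` callback in place of the actual entropy decoding:

```python
def decode_region_width(neighbor_usable: bool, read_width) -> int:
    """Region width decoder 218 (sketch of steps S602-S606 in FIG. 9).
    `read_width` stands in for entropy decoding of the region-width code from
    the compressed data; it is only invoked when the neighbor's prediction
    information is available."""
    if neighbor_usable:
        return read_width()   # step S604: restore the signalled region width
    return 0                  # step S606: no partial region R2 for this block

# Toy usage with a pre-parsed list of width codes standing in for the bitstream.
widths = iter([3, 5])
print(decode_region_width(True, lambda: next(widths)))   # 3
print(decode_region_width(False, lambda: next(widths)))  # 0
```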
- in step S510, the prediction signal generator 106 generates the prediction signal of the decoding target block from the already reproduced signal, using the prediction information associated with the target block, the prediction information associated with the adjacent block (used only when necessary), and the region width.
- step S510 is the same as step S110 described above.
- next, the quantized transform coefficient decoder 214 restores the quantized transform coefficients from the encoded data, the inverse quantizer 206 restores the transform coefficients from the quantized transform coefficients, and the inverse transformer 208 generates a reproduced residual signal from the transform coefficients.
- in step S514, the adder 210 adds the prediction signal of the target block and the reproduced residual signal to generate the reproduced signal of the target block.
- this reproduced signal is stored in the frame memory 108 as an already reproduced signal for reproducing the next target block.
- if it is determined in step S518 that the processing of all the blocks has not been completed, that is, if there is further compressed data, an unprocessed block is selected as the target block in step S502 and the subsequent steps are repeated. On the other hand, if all blocks have been processed in step S518, the process ends.
- in the above description, the adjacent block is the block adjacent to the left of the target block, but the adjacent block may instead be the block adjacent above the target block.
- FIG. 10 is a diagram for explaining another example of adjacent blocks.
- the target block Bt and the adjacent block B2 are 8 ⁇ 8 pixel blocks.
- the upper left pixel position (horizontal position, vertical position) is (0, 0)
- the lower right pixel position is (7, 7).
- the partial region R2 is the region bounded by the pixel positions (0, 0), (7, 0), (0, w-1), and (7, w-1), and is the region in which a prediction signal may be generated using the prediction information of the adjacent block B2.
- the region width of the partial region R2 is w.
- in this case, the range of x in step S404 in FIG. 6 is 0 to 7 and the range of y is w to 7, while the range of x in step S406 in FIG. 6 is 0 to 7 and the range of y is 0 to w-1.
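The ranges above determine which prediction information governs each pixel of an 8x8 target block. The sketch below assumes, following FIG. 10, that R2 occupies the first w rows when the upper adjacent block is used and, by analogy only, the first w columns when the left adjacent block is used; `partition_mask` is a name introduced here for illustration.

```python
import numpy as np

def partition_mask(block_size: int, region_width: int, neighbor: str) -> np.ndarray:
    """Return a block_size x block_size boolean mask: True where the adjacent
    block's prediction information is applied (partial region R2), False where
    the target block's own prediction information is applied.

    Sketch: for the upper neighbor, R2 is the first `region_width` rows
    (y = 0 .. w-1, matching the ranges quoted for FIG. 10); for the left
    neighbor, by analogy, R2 is taken as the first `region_width` columns."""
    mask = np.zeros((block_size, block_size), dtype=bool)
    if neighbor == "above":
        mask[:region_width, :] = True
    elif neighbor == "left":
        mask[:, :region_width] = True
    return mask

m = partition_mask(8, 3, "above")
print(int(m.sum()), bool(m[:3, :].all()), bool((~m[3:, :]).all()))  # 24 True True
```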
- the adjacent blocks may also be the two blocks adjacent to the left of and above the target block, in which case one of the two adjacent blocks can be selected for each target block.
- in this case, the prediction signal generator 106 has a function of performing the prediction processes described with reference to FIGS. 4 and 10, and the region width determiner 134 has a function of selecting, from the left adjacent block and the upper adjacent block, the adjacent block whose prediction information is used for prediction of the partial region of the target block.
- the region width encoder 136 includes a function of encoding identification information for identifying an adjacent block having prediction information used for generating a prediction signal of a target region from two pieces of prediction information associated with two adjacent blocks.
- the area width decoder 218 includes a function of decoding the identification information.
- FIG. 11 is a flowchart showing a detailed procedure of another example of step S108 of FIG.
- in step S700, two pieces of prediction information, associated with the adjacent block above the target block and with the left adjacent block, are input to the determiner 130.
- the determiner 130 then determines whether the prediction information associated with the adjacent block to the left of the target block can be used to generate the prediction signal of the partial region of the target block, and outputs the determination result.
- in step S704, when the determination result by the determiner 130 indicates "unusable" (No), that is, when the determination result indicates that the prediction information associated with the left adjacent block cannot be used to generate the prediction signal of the partial region of the target block, the process proceeds to the subsequent step S202.
- in step S202, the determiner 130 determines whether or not the prediction information associated with the adjacent block above the target block can be used to generate the prediction signal of the partial region of the target block, and outputs the determination result.
- when it is determined in step S706 that the determination result by the determiner 130 indicates "unusable" (No), that is, when the determination result indicates that the prediction information associated with the adjacent block above cannot be used to generate the prediction signal of the partial region of the target block, the process of step S108 ends.
- on the other hand, when it is determined in step S706 that the determination result by the determiner 130 indicates "available" (Yes), in step S708 the region width determiner 134 determines the region width w of the partial region R2 (see FIG. 10) of the target block for which the prediction signal is generated using the prediction information of the upper adjacent block.
- This region width w is then encoded by the region width encoder 136 in a subsequent step S208.
- when it is determined in step S704 that the determination result by the determiner 130 indicates "available" (Yes), then in the subsequent step S202 the determiner 130 determines, according to the procedure shown for step S202 in FIG. 5, whether the prediction information associated with the adjacent block above the target block can be used to generate the prediction signal of the partial region of the target block, and outputs the determination result.
- when it is determined in step S710 that the determination result by the determiner 130 indicates "unusable" (No), in the subsequent step S712 the region width determiner 134 determines the region width w of the partial region R2 (see FIG. 2) of the target block for which the prediction signal is generated using the prediction information of the left adjacent block. This region width w is then encoded by the region width encoder 136 in the subsequent step S208.
- when it is determined in step S710 that the determination result by the determiner 130 indicates "available" (Yes), in the subsequent step S714 the adjacent block whose prediction information is to be used to generate the prediction signal is selected from the left adjacent block and the upper adjacent block.
- that is, the region width determiner 134 selects which of the prediction information of the upper adjacent block and the prediction information of the left adjacent block is used to generate the prediction signal of the partial region of the target block.
- the selection method is not limited.
- for example, the region width determiner 134 generates prediction signals of the target block using the prediction information of each adjacent block while varying the region width of the partial region R2 in the configurations shown in FIGS. 4 and 10, and selects the pair of adjacent block and region width that minimizes the prediction error of the target block.
- the region width encoder 136 then encodes identification information for specifying the adjacent block having the selected prediction information.
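The pair selection described above can be illustrated as an exhaustive search. The following is a sketch under stated assumptions: `predict_own` and `predict_with` are hypothetical callbacks, the sum of absolute differences stands in for "prediction error", and the row/column layout of R2 follows the earlier sketch; none of this is prescribed by the text beyond selecting the pair that minimizes the prediction error.

```python
import numpy as np

def choose_neighbor_and_width(target, predict_own, predict_with, widths=range(9)):
    """Region width determiner 134 (sketch): try the left and upper neighbors
    and every candidate region width, keeping the pair that minimizes the sum
    of absolute prediction errors for the target block.

    `predict_own()` and `predict_with(name)` are hypothetical callbacks that
    return a full-block prediction using, respectively, the target block's own
    prediction information and that of the named adjacent block."""
    p_own = predict_own()
    best = ("none", 0, float(np.abs(target - p_own).sum()))  # w = 0: no partial region
    for name in ("left", "above"):
        p_nb = predict_with(name)
        for w in widths:
            pred = p_own.copy()
            if name == "above":
                pred[:w, :] = p_nb[:w, :]    # partial region R2 = first w rows
            else:
                pred[:, :w] = p_nb[:, :w]    # partial region R2 = first w columns
            err = float(np.abs(target - pred).sum())
            if err < best[2]:
                best = (name, w, err)
    return best

# Toy usage: the top half of the target really moves with the upper neighbor.
rng = np.random.default_rng(1)
upper = rng.integers(0, 256, (8, 8)).astype(float)
own = rng.integers(0, 256, (8, 8)).astype(float)
target = own.copy()
target[:4, :] = upper[:4, :]
print(choose_neighbor_and_width(target, lambda: own,
                                lambda name: {"left": own, "above": upper}[name]))
```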
- FIG. 12 is a flowchart showing a detailed procedure of another example of step S508 of FIG. 8, and shows a procedure used in decoding corresponding to encoding using the processing of FIG.
- in step S800, the prediction information associated with the left adjacent block of the target block and the prediction information associated with the adjacent block above are input to the determiner 130.
- the determiner 130 determines whether the prediction information associated with the left adjacent block can be used and whether the prediction information associated with the upper adjacent block can be used, and outputs the determination results.
- in step S802, the region width decoder 218 determines from the determination result of the determiner 130 whether prediction information associated with at least one of the two adjacent blocks is available. If the prediction information associated with neither adjacent block is available, the region width decoder 218 sets the region width of the partial region R2 of the decoding target block to 0 in step S804 and ends the process.
- if it is determined in step S802 that the prediction information associated with at least one of the two adjacent blocks is available, then in step S806 the region width decoder 218 determines, from the determination result of the determiner 130, whether or not the prediction information associated with both of the two adjacent blocks can be used. When the prediction information of both adjacent blocks can be used, in the subsequent step S808 the region width decoder 218 decodes the identification information of the adjacent block from the encoded data and proceeds to step S812.
- otherwise, in step S810, the region width decoder 218 selects, based on the determination result of the determiner 130, the prediction information associated with whichever of the two adjacent blocks is available, and proceeds to step S812. In step S812, the region width decoder 218 decodes the value of the region width.
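The decoder-side branching of FIG. 12 (steps S800 to S812) can be sketched as follows, with `read_id` and `read_width` as hypothetical stand-ins for the actual entropy decoding of the identification information and the region width value.

```python
def decode_partial_region_info(left_usable: bool, above_usable: bool,
                               read_id, read_width):
    """Region width decoder 218 (sketch of FIG. 12, steps S800-S812).
    `read_id()` and `read_width()` stand in for entropy decoding of the
    neighbor-identification information and of the region width value."""
    if not (left_usable or above_usable):
        return None, 0                                 # step S804: width set to 0
    if left_usable and above_usable:
        neighbor = read_id()                           # step S808: "left" / "above"
    else:
        neighbor = "left" if left_usable else "above"  # step S810: implied choice
    return neighbor, read_width()                      # step S812: decode the width

# Toy usage with pre-parsed symbols standing in for the compressed data.
symbols = iter(["above", 5])
print(decode_partial_region_info(True, True, lambda: next(symbols), lambda: next(symbols)))
print(decode_partial_region_info(False, True, None, lambda: 2))
```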
- the prediction signal of the target block may be generated using both the prediction information associated with the left adjacent block of the target block and the prediction information associated with the adjacent block above.
- in this case, the region width encoder 136 has a function of encoding the two region widths associated with the two pieces of prediction information of the two adjacent blocks, and the region width decoder 218 has a function of decoding the two region widths. As shown in FIG. 13, prediction signals for the four partial regions R1 to R4 in the target block Bt are then generated individually.
- specifically, the prediction signal generator 106 generates the prediction signal of the partial region R2 using the prediction information associated with the left adjacent block B1, and generates the prediction signal of the partial region R3 using the prediction information associated with the adjacent block B2 above. In addition, the prediction signal generator 106 needs to have a function of generating a prediction signal of the partial region R4.
- the method for predicting the partial region R4 is not limited in the present invention, because a rule may be determined in advance. Examples include a method of averaging the prediction signal of the partial region R4 generated based on the prediction information associated with the left adjacent block and the prediction signal of the partial region R4 generated based on the prediction information associated with the adjacent block above, and a method of generating the prediction signal of the partial region R4 based on the prediction information associated with the adjacent block at the upper left. It is also possible to adopt a method of automatically selecting between the prediction information of the upper and left adjacent blocks using surrounding decoded data such as the prediction information associated with the left and upper adjacent blocks, or a method of sending selection information.
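As an illustration of the first rule mentioned above, here is a minimal sketch of the averaging option; the function name is introduced here, and the equal 0.5/0.5 weighting and rounding are assumptions rather than anything prescribed by the text.

```python
import numpy as np

def predict_r4_by_average(pred_from_left: np.ndarray,
                          pred_from_above: np.ndarray) -> np.ndarray:
    """One possible predetermined rule for the partial region R4 (sketch):
    round the average of the two candidate predictions, one generated with the
    left adjacent block's prediction information and one with the upper
    adjacent block's."""
    return np.round((pred_from_left + pred_from_above) / 2.0)

print(predict_r4_by_average(np.array([[10.0, 20.0]]), np.array([[30.0, 21.0]])))
```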
- in the above description, the partial region in the target block is always rectangular, but partial regions of arbitrary shape, such as the partial regions R1 and R2 in the target block Bt shown in FIG. 14(a) or the partial regions R1 and R2 in the target block Bt shown in FIG. 14(b), may be used. In such a case, shape information may be sent in addition to the region width.
- in the above description, the block size is fixed, but as shown in FIGS. 15(a) to 15(c), the sizes of the target block Bt and the adjacent block B1 may differ. In such a case, various shapes can be used for the partial regions R1 to R3 in the target block Bt, as shown in FIGS. 15(a) to 15(c). The partial regions to be configured may be determined according to the situation, or information indicating the adjacent block may be selected from a plurality of candidates and explicitly encoded. A rule may also be determined in advance (for example, the unit in which the region width is selected is matched to the smaller of the block sizes).
- in the region width encoder, not the region width value itself but information for specifying the region width may be encoded. Likewise, the region width decoder may decode, from the encoded data, the information for specifying the region width rather than the region width value itself, and restore the region width value based on that information.
- the region width encoder may prepare a plurality of region width value candidates for the partial region in the target block and encode the identification information.
- the region width decoder may restore the region width value based on the decoded identification information.
- Candidate region widths may be determined in advance by the encoder and decoder, or may be sent in sequence units or frame units.
- the region width encoder may encode a difference value between the region width value of the partial region in the target block and the region width of the adjacent block.
- in this case, the region width decoder can restore the region width value of the partial region in the target block by adding the region width value of the already decoded adjacent block and the difference value decoded from the encoded data.
- alternatively, the region width encoder may encode information indicating that the region width of the partial region of the target block is the same as the region width of the adjacent block. When information indicating that the region width of the partial region of the target block is the same as the region width of the adjacent block is decoded, the region width decoder can use the region width of that adjacent block as the region width of the partial region of the target block. It is also possible to indicate that the region width of the partial region of the target block differs from the region width of the adjacent block and, in addition, to send the region width value or information for specifying the region width; when such information is decoded, the region width decoder further decodes the region width value or the information for specifying the region width from the encoded data and restores the region width value based on it.
- the region width encoder may encode one or more information items for specifying the region width. That is, one or more information items (for example, one or more bits) that can uniquely specify the region width may be encoded. In this case, the region width decoder can decode one or more information items from the encoded data and restore the region width according to the one or more information items.
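The differential alternative described above reduces to a subtraction at the encoder and an addition at the decoder. A minimal sketch with hypothetical function names:

```python
def encode_region_width_delta(width: int, neighbor_width: int) -> int:
    """Differential alternative (sketch): signal only the difference between
    the target partial region's width and the adjacent block's region width."""
    return width - neighbor_width

def decode_region_width_delta(delta: int, neighbor_width: int) -> int:
    """Decoder side: add the decoded difference to the already decoded
    region width of the adjacent block to restore the target's region width."""
    return neighbor_width + delta

w, w_neighbor = 5, 3
delta = encode_region_width_delta(w, w_neighbor)
print(delta, decode_region_width_delta(delta, w_neighbor))  # 2 5
```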
- the transform process for the residual signal may be performed with a fixed block size, or the target region may be subdivided into sizes matching the partial regions and the transform process may be performed on each region generated by the subdivision.
- the adjacent blocks whose associated prediction information can be used are not limited to the adjacent block above the target block and the left adjacent block. For example, if the prediction information is encoded one block row in advance, all four blocks adjacent to the target block can be used as adjacent blocks, and the prediction information associated with them can be used to generate the prediction signal of the target block.
- furthermore, the prediction signal of each target block can be freely configured using a total of five pieces of prediction information, namely those associated with the four surrounding blocks and with the target block itself (nine pieces if the blocks at the upper left, lower left, upper right, and lower right are also included).
- in addition, the encoding and decoding processes do not fail even if a partial region is provided, so the prediction signal generation process of the present invention can also be realized in a configuration that does not include a prediction information comparator.
- in the above description, the predetermined rule by which the determiner 130 determines the availability of the prediction information associated with the adjacent block is that this prediction information is determined to be unusable when it matches the prediction information associated with the target block, or when the prediction information of the adjacent block is not in a usable state. Regarding the latter, it may also be determined that the prediction information associated with the adjacent block cannot be used when the adjacent block is predicted by intra-screen prediction and the target block is predicted by inter-screen prediction, or vice versa. It may likewise be determined that the prediction information associated with the adjacent block cannot be used when the difference between the motion vector of the adjacent block and the motion vector of the target block is larger than a threshold, or when the block sizes of the adjacent block and the target block differ from each other. Furthermore, although the prediction information associated with the adjacent block and that associated with the target block are compared in the above description, the availability of the prediction information associated with the adjacent block may instead be determined based on whether the prediction signals generated from the two pieces of prediction information are the same.
- in the above description, inter-screen prediction (motion vectors and reference screen information) has been described as the method of generating the prediction signal, but the present invention is not limited to this prediction method. Prediction methods such as intra-screen prediction, luminance compensation, bidirectional prediction, and backward prediction can be applied to the prediction signal generation process of the present invention. In that case, mode information, luminance compensation parameters, and the like are included in the prediction information.
- although the color format has not been specifically described above, the prediction signal generation process may be performed for the color signal or the color-difference signal separately from the luminance signal. The generation process of the prediction signal for the color signal or the color-difference signal may also be performed in conjunction with the processing of the luminance signal. In the latter case, when the resolution of the color signal is lower than that of the luminance signal (for example, half the resolution in the vertical and horizontal directions), the region width in the luminance signal may be restricted (for example, to even values), or a conversion formula from the region width of the luminance signal to the region width of the color signal may be determined in advance.
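For the case where the color signal has half the resolution of the luminance signal, one possible conversion from the luminance region width to the color region width is integer division by two. This is only an illustrative sketch; the text leaves the exact restriction or formula open.

```python
def chroma_region_width(luma_width: int, subsampling: int = 2) -> int:
    """One possible conversion rule (sketch): when the color signal has half
    the resolution of the luminance signal vertically and horizontally,
    derive the chroma region width by dividing the luma region width by the
    subsampling factor."""
    return luma_width // subsampling

print([chroma_region_width(w) for w in range(9)])  # [0, 0, 1, 1, 2, 2, 3, 3, 4]
```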
- although not described above, when block-noise removal processing is performed on the reproduced image, the noise removal processing may also be performed on the boundary portions of the partial regions.
- FIG. 16 is a diagram showing an image predictive encoding program according to an embodiment together with a recording medium.
- FIG. 17 is a diagram illustrating an image predictive decoding program according to an embodiment together with a recording medium.
- FIG. 18 is a diagram illustrating a hardware configuration of a computer for executing a program recorded in a recording medium.
- FIG. 19 is a perspective view of a computer for executing a program stored in a recording medium.
- the image predictive encoding program P100 is provided by being stored in the recording medium 10.
- the image predictive decoding program P200 is also stored in the recording medium 10 and provided.
- examples of the recording medium 10 include recording media such as a floppy disk, a CD-ROM, a DVD, or a ROM, and a semiconductor memory.
- the computer 30 includes a reading device 12 such as a floppy disk drive device, a CD-ROM drive device, or a DVD drive device, a working memory (RAM) 14 in which an operating system is resident, and a memory 16 that stores the program read from the recording medium 10.
- the computer 30 can access the image predictive encoding program P100 stored in the recording medium 10 via the reading device 12, and can operate as the image predictive encoding device 100 by means of the program P100.
- similarly, the computer 30 can access the image predictive decoding program P200 stored in the recording medium 10 via the reading device 12, and can operate as the image predictive decoding device 200 by means of the program P200.
- the image predictive encoding program P100 and the image predictive decoding program P200 may be provided as a computer data signal 40 superimposed on a carrier wave via a network.
- the computer 30 can store the image predictive coding program P100 or the image predictive decoding program P200 received by the communication device 24 in the memory 16, and can execute the program P100 or P200.
- the image predictive encoding program P100 includes a block division module P104, a prediction signal generation module P106, a storage module P108, a subtraction module P110, a transform module P112, a quantization module P114, an inverse quantization module P116, and further modules corresponding to the remaining components of the encoding device. The functions realized by executing these modules are the same as those of the block divider 104, the prediction signal generator 106, the frame memory 108, the subtractor 110, the converter 112, the quantizer 114, the inverse quantizer 116, and the other components of the image predictive encoding device 100 up to the region width encoder 136.
- the image predictive decoding program P200 includes a data analysis module P204, a quantized transform coefficient decoding module P214, a prediction information decoding module P216, a region width decoding module P218, a prediction information storage module P128, a determination module P130, an inverse quantization module P206, an inverse transform module P208, an addition module P210, a prediction signal generation module P106, and a storage module P108.
- the functions realized by executing the respective modules of the image predictive decoding program P200 are the same as those of the data analyzer 204, the quantized transform coefficient decoder 214, the prediction information decoder 216, the region width decoder 218, the prediction information memory 128, the determiner 130, the inverse quantizer 206, the inverse transformer 208, the adder 210, the prediction signal generator 106, and the frame memory 108 described above.
- DESCRIPTION OF SYMBOLS 100 ... Image predictive coding apparatus, 102 ... Input terminal, 104 ... Block divider, 106 ... Prediction signal generator, 108 ... Frame memory, 110 ... Subtractor, 112 ... Converter, 114 ... Quantizer, 116 ... Inverse Quantizer, 118 ... Inverse transformer, 120 ... Adder, 122 ... Quantized transform coefficient encoder, 124 ... Output terminal, 126 ... Prediction information estimator, 128 ... Prediction information memory, 130 ... Determinator, 132 ... Prediction information encoder, 134 ... region width determiner, 136 ... region width encoder, 200 ... image predictive decoding device, 202 ... input terminal, 204 ...
Claims (14)
- An image predictive encoding device comprising: region division means for dividing an input image into a plurality of regions; prediction information estimation means for generating a prediction signal of the pixel signal of a target region among the plurality of regions from an already reproduced signal, and for obtaining the prediction information used to generate the prediction signal as prediction information associated with the target region; prediction information encoding means for encoding the prediction information associated with the target region; determination means for comparing the prediction information associated with the target region with prediction information associated with an adjacent region adjacent to the target region, and for determining, based on the result of the comparison, whether the prediction information associated with the adjacent region can be used to generate the prediction signal of the target region; region width determination means for determining, when the determination means determines that the prediction information associated with the adjacent region can be used to generate the prediction signal of the target region, the region width of a partial region within the target region for which a prediction signal is generated using the prediction information associated with the adjacent region; region width encoding means for encoding information for specifying the region width; prediction signal generation means for generating the prediction signal of the target region from the already reproduced signal using the prediction information associated with the target region, the prediction information associated with the adjacent region, and the region width; residual signal generation means for generating a residual signal between the prediction signal of the target region and the pixel signal of the target region; residual signal encoding means for encoding the residual signal; residual signal restoration means for generating a reproduced residual signal by decoding the encoded data of the residual signal; addition means for generating a reproduced signal of the target region by adding the prediction signal and the reproduced residual signal; and storage means for storing the reproduced signal of the target region as the already reproduced signal.
- The image predictive encoding device according to claim 1, wherein the determination means determines that the prediction information associated with the adjacent region cannot be used to generate the prediction signal of the target region when it determines that the prediction information associated with the target region and the prediction information associated with the adjacent region are the same.
- The image predictive encoding device according to claim 1, wherein the determination means determines that the prediction information associated with the adjacent region cannot be used to generate the prediction signal of the target region when it determines that the combination of the prediction information associated with the target region and the prediction information associated with the adjacent region does not satisfy a predetermined condition.
- The image predictive encoding device according to claim 1, wherein the region width encoding means does not output encoded data of the region width when the determination means has determined that the prediction information associated with the adjacent region cannot be used to generate the prediction signal of the target region.
- The image predictive encoding device according to claim 1, wherein the adjacent regions are two adjacent regions adjacent to the left of and above the target region, and wherein, when the determination means determines that the prediction information associated with both of the two adjacent regions can be used to generate the prediction signal of the target region, the region width encoding means encodes identification information specifying which of the two adjacent regions has the prediction information to be used to generate the prediction signal of the target region.
- An image predictive decoding device comprising: data analysis means for extracting, from compressed data obtained by dividing an image into a plurality of regions and encoding the regions, encoded data of prediction information used to generate a prediction signal of a target region, encoded data of information for specifying the region width of a partial region within the target region for which a prediction signal is generated using prediction information associated with an adjacent region adjacent to the target region, and encoded data of a residual signal; prediction information decoding means for decoding the encoded data of the prediction information to restore the prediction information associated with the target region; determination means for comparing the prediction information associated with the target region with the prediction information associated with the adjacent region, and for determining, based on the result of the comparison, whether the prediction information associated with the adjacent region can be used to generate the prediction signal of the target region; region width decoding means for decoding the encoded data of the information for specifying the region width to restore the region width when the determination means determines that the prediction information associated with the adjacent region can be used to generate the prediction signal of the target region; prediction signal generation means for generating the prediction signal of the target region from an already reproduced signal using the prediction information associated with the target region, the prediction information associated with the adjacent region, and the region width; residual signal restoration means for restoring a reproduced residual signal of the target region from the encoded data of the residual signal; addition means for generating a reproduced signal of the target region by adding the prediction signal of the target region and the reproduced residual signal; and storage means for storing the reproduced signal of the target region as the already reproduced signal.
- The image predictive decoding device according to claim 6, wherein the determination means determines that the prediction information associated with the adjacent region cannot be used to generate the prediction signal of the target region when it determines that the prediction information associated with the target region and the prediction information associated with the adjacent region are the same.
- The image predictive decoding device according to claim 6, wherein the determination means determines that the prediction information associated with the adjacent region cannot be used to generate the prediction signal of the target region when it determines that the combination of the prediction information associated with the target region and the prediction information associated with the adjacent region does not satisfy a predetermined condition.
- The image predictive decoding device according to claim 6, wherein the region width decoding means sets the region width associated with the target region to 0 when the determination means has determined that the prediction information associated with the adjacent region cannot be used to generate the prediction signal of the target region.
- The image predictive decoding device according to claim 6, wherein the adjacent regions are two adjacent regions adjacent to the left of and above the target region, and wherein, when the determination means determines that the prediction information associated with both of the two adjacent regions can be used to generate the prediction signal of the target region, the region width decoding means decodes identification information specifying which of the two adjacent regions has the prediction information to be used to generate the prediction signal of the target region.
- An image predictive encoding method comprising: a region division step of dividing an input image into a plurality of regions; a prediction information estimation step of generating a prediction signal of the pixel signal of a target region among the plurality of regions from an already reproduced signal and obtaining the prediction information used to generate the prediction signal as prediction information associated with the target region; a prediction information encoding step of encoding the prediction information associated with the target region; a determination step of comparing the prediction information associated with the target region with prediction information associated with an adjacent region adjacent to the target region and determining, based on the result of the comparison, whether the prediction information associated with the adjacent region can be used to generate the prediction signal of the target region; a region width determination step of determining, when it is determined in the determination step that the prediction information associated with the adjacent region can be used to generate the prediction signal of the target region, the region width of a partial region within the target region for which a prediction signal is generated using the prediction information associated with the adjacent region; a region width encoding step of encoding information for specifying the region width; a prediction signal generation step of generating the prediction signal of the target region from the already reproduced signal using the prediction information associated with the target region, the prediction information associated with the adjacent region, and the region width; a residual signal generation step of generating a residual signal between the prediction signal of the target region and the pixel signal of the target region; a residual signal encoding step of encoding the residual signal; a residual signal restoration step of generating a reproduced residual signal by decoding the encoded data of the residual signal; a reproduced signal generation step of generating a reproduced signal of the target region by adding the prediction signal and the reproduced residual signal; and a storage step of storing the reproduced signal of the target region as the already reproduced signal.
- An image predictive encoding program causing a computer to function as: region division means for dividing an input image into a plurality of regions; prediction information estimation means for generating a prediction signal of the pixel signal of a target region among the plurality of regions from an already reproduced signal and obtaining the prediction information used to generate the prediction signal as prediction information associated with the target region; prediction information encoding means for encoding the prediction information associated with the target region; determination means for comparing the prediction information associated with the target region with prediction information associated with an adjacent region adjacent to the target region and determining, based on the result of the comparison, whether the prediction information associated with the adjacent region can be used to generate the prediction signal of the target region; region width determination means for determining, when the determination means determines that the prediction information associated with the adjacent region can be used to generate the prediction signal of the target region, the region width of a partial region within the target region for which a prediction signal is generated using the prediction information associated with the adjacent region; region width encoding means for encoding information for specifying the region width; prediction signal generation means for generating the prediction signal of the target region from the already reproduced signal using the prediction information associated with the target region, the prediction information associated with the adjacent region, and the region width; residual signal generation means for generating a residual signal between the prediction signal of the target region and the pixel signal of the target region; residual signal encoding means for encoding the residual signal; residual signal restoration means for generating a reproduced residual signal by decoding the encoded data of the residual signal; addition means for generating a reproduced signal of the target region by adding the prediction signal and the reproduced residual signal; and storage means for storing the reproduced signal of the target region as the already reproduced signal.
- An image predictive decoding method comprising: a data analysis step of extracting, from compressed data obtained by dividing an image into a plurality of regions and encoding the regions, encoded data of prediction information used to generate a prediction signal of a target region, encoded data of information for specifying the region width of a partial region within the target region for which a prediction signal is generated using prediction information associated with an adjacent region adjacent to the target region, and encoded data of a residual signal; a prediction information decoding step of decoding the encoded data of the prediction information to restore the prediction information associated with the target region; a determination step of comparing the prediction information associated with the target region with the prediction information associated with the adjacent region and determining, based on the result of the comparison, whether the prediction information associated with the adjacent region can be used to generate the prediction signal of the target region; a region width decoding step of decoding the encoded data of the information for specifying the region width to restore the region width when it is determined in the determination step that the prediction information associated with the adjacent region can be used to generate the prediction signal of the target region; a prediction signal generation step of generating the prediction signal of the target region from an already reproduced signal using the prediction information associated with the target region, the prediction information associated with the adjacent region, and the region width; a residual signal restoration step of restoring a reproduced residual signal of the target region from the encoded data of the residual signal; a reproduced signal generation step of generating a reproduced signal of the target region by adding the prediction signal of the target region and the reproduced residual signal; and a storage step of storing the reproduced signal of the target region as the already reproduced signal.
- An image predictive decoding program causing a computer to function as: data analysis means for extracting, from compressed data obtained by dividing an image into a plurality of regions and encoding the regions, encoded data of prediction information used to generate a prediction signal of a target region, encoded data of information for specifying the region width of a partial region within the target region for which a prediction signal is generated using prediction information associated with an adjacent region adjacent to the target region, and encoded data of a residual signal; prediction information decoding means for decoding the encoded data of the prediction information to restore the prediction information associated with the target region; determination means for comparing the prediction information associated with the target region with the prediction information associated with the adjacent region and determining, based on the result of the comparison, whether the prediction information associated with the adjacent region can be used to generate the prediction signal of the target region; region width decoding means for decoding the encoded data of the information for specifying the region width to restore the region width when the determination means determines that the prediction information associated with the adjacent region can be used to generate the prediction signal of the target region; prediction signal generation means for generating the prediction signal of the target region from an already reproduced signal using the prediction information associated with the target region, the prediction information associated with the adjacent region, and the region width; residual signal restoration means for restoring a reproduced residual signal of the target region from the encoded data of the residual signal; addition means for generating a reproduced signal of the target region by adding the prediction signal of the target region and the reproduced residual signal; and storage means for storing the reproduced signal of the target region as the already reproduced signal.
- 2010-03-16 MX MX2011009960A patent/MX2011009960A/es active IP Right Grant
- 2010-03-16 PL PL19184147.7T patent/PL3567856T3/pl unknown
- 2010-03-16 PL PL19184087.5T patent/PL3567852T3/pl unknown
- 2010-03-22 TW TW107147675A patent/TWI715906B/zh active
- 2010-03-22 TW TW105134809A patent/TWI606717B/zh active
- 2010-03-22 TW TW104133686A patent/TW201603561A/zh unknown
- 2010-03-22 TW TW107147849A patent/TWI699113B/zh active
- 2010-03-22 TW TW099108378A patent/TWI517676B/zh active
- 2010-03-22 TW TW109120081A patent/TWI735257B/zh active
- 2010-03-22 TW TW106130004A patent/TWI654876B/zh active
2011
- 2011-09-22 US US13/240,559 patent/US9031125B2/en active Active
- 2011-09-22 MX MX2020011758A patent/MX2020011758A/es unknown
2014
- 2014-07-18 JP JP2014147639A patent/JP5779270B2/ja active Active
- 2014-12-23 US US14/581,705 patent/US9549186B2/en active Active
2015
- 2015-04-17 JP JP2015085249A patent/JP6000398B2/ja active Active
2016
- 2016-03-02 AU AU2016201339A patent/AU2016201339B2/en active Active
- 2016-07-06 RU RU2016127198A patent/RU2639662C1/ru active
- 2016-08-30 JP JP2016168204A patent/JP6220023B2/ja active Active
- 2016-11-10 US US15/348,504 patent/US10063855B2/en active Active
2017
- 2017-09-28 JP JP2017187567A patent/JP6405432B2/ja active Active
- 2017-11-27 AU AU2017265185A patent/AU2017265185B2/en active Active
- 2017-11-30 RU RU2017141748A patent/RU2672185C1/ru active
2018
- 2018-07-11 US US16/032,998 patent/US10284848B2/en active Active
- 2018-07-11 US US16/032,985 patent/US10284846B2/en active Active
- 2018-07-11 US US16/032,988 patent/US10284847B2/en active Active
- 2018-11-02 RU RU2018138742A patent/RU2694239C1/ru active
- 2018-11-02 RU RU2018138743A patent/RU2709165C1/ru active
- 2018-11-02 RU RU2018138744A patent/RU2707713C1/ru active
2019
- 2019-07-05 AU AU2019204853A patent/AU2019204853B2/en active Active
- 2019-07-05 AU AU2019204856A patent/AU2019204856B2/en active Active
- 2019-07-05 AU AU2019204852A patent/AU2019204852B2/en active Active
- 2019-07-05 AU AU2019204854A patent/AU2019204854B2/en active Active
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07264598A (ja) * | 1994-03-23 | 1995-10-13 | Nippon Telegr & Teleph Corp <Ntt> | Motion compensation method, motion vector detection circuit, and motion compensation circuit |
US6259739B1 (en) | 1996-11-26 | 2001-07-10 | Matsushita Electric Industrial Co., Ltd. | Moving picture variable bit rate coding apparatus, moving picture variable bit rate coding method, and recording medium for moving picture variable bit rate coding program |
US6765964B1 (en) | 2000-12-06 | 2004-07-20 | Realnetworks, Inc. | System and method for intracoding video data |
WO2003026315A1 (en) * | 2001-09-14 | 2003-03-27 | Ntt Docomo, Inc. | Coding method, decoding method, coding apparatus, decoding apparatus, image processing system, coding program, and decoding program |
US7003035B2 (en) | 2002-01-25 | 2006-02-21 | Microsoft Corporation | Video coding methods and apparatuses |
WO2008016609A2 (en) * | 2006-08-02 | 2008-02-07 | Thomson Licensing | Adaptive geometric partitioning for video encoding |
JP2008311781A (ja) * | 2007-06-12 | 2008-12-25 | Ntt Docomo Inc | Moving picture encoding device, moving picture decoding device, moving picture encoding method, moving picture decoding method, moving picture encoding program, and moving picture decoding program |
JP2009005147A (ja) * | 2007-06-22 | 2009-01-08 | Sony Corp | Information processing system and method, information processing device and method, and program |
Non-Patent Citations (1)
Title |
---|
See also references of EP2413605A4 |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9031125B2 (en) | 2009-03-23 | 2015-05-12 | Ntt Docomo, Inc. | Image predictive encoding and decoding device |
US10284846B2 (en) | 2009-03-23 | 2019-05-07 | Ntt Docomo, Inc. | Image predictive encoding and decoding device |
US10284848B2 (en) | 2009-03-23 | 2019-05-07 | Ntt Docomo, Inc. | Image predictive encoding and decoding device |
US10063855B2 (en) | 2009-03-23 | 2018-08-28 | Ntt Docomo, Inc. | Image predictive encoding and decoding device |
US10284847B2 (en) | 2009-03-23 | 2019-05-07 | Ntt Docomo, Inc. | Image predictive encoding and decoding device |
US9549186B2 (en) | 2009-03-23 | 2017-01-17 | Ntt Docomo, Inc. | Image predictive encoding and decoding device |
US10063888B1 (en) | 2010-07-20 | 2018-08-28 | Ntt Docomo, Inc. | Image prediction encoding/decoding system |
US10542287B2 (en) | 2010-07-20 | 2020-01-21 | Ntt Docomo, Inc. | Image prediction encoding/decoding system |
US9986261B2 (en) | 2010-07-20 | 2018-05-29 | Ntt Docomo, Inc. | Image prediction encoding/decoding system |
US10225580B2 (en) | 2010-07-20 | 2019-03-05 | Ntt Docomo, Inc. | Image prediction encoding/decoding system |
US10230987B2 (en) | 2010-07-20 | 2019-03-12 | Ntt Docomo, Inc. | Image prediction encoding/decoding system |
JP2012095099A (ja) * | 2010-10-27 | 2012-05-17 | Jvc Kenwood Corp | Moving picture encoding device, moving picture encoding method, moving picture encoding program, moving picture decoding device, moving picture decoding method, and moving picture decoding program |
JP2012114590A (ja) * | 2010-11-22 | 2012-06-14 | Jvc Kenwood Corp | Moving picture encoding device, moving picture encoding method, and moving picture encoding program |
JP2012114591A (ja) * | 2010-11-22 | 2012-06-14 | Jvc Kenwood Corp | Moving picture decoding device, moving picture decoding method, and moving picture decoding program |
US11871000B2 (en) | 2010-12-28 | 2024-01-09 | Dolby Laboratories Licensing Corporation | Method and system for selectively breaking prediction in video coding |
JP2012142886A (ja) * | 2011-01-06 | 2012-07-26 | Kddi Corp | Image encoding device and image decoding device |
JP2012205288A (ja) * | 2011-03-28 | 2012-10-22 | Jvc Kenwood Corp | Image decoding device, image decoding method, and image decoding program |
JP2012205287A (ja) * | 2011-03-28 | 2012-10-22 | Jvc Kenwood Corp | Image encoding device, image encoding method, and image encoding program |
CN105379270B (zh) * | 2013-07-15 | 2018-06-05 | Qualcomm Incorporated | Inter-color component residual prediction |
CN105379270A (zh) * | 2013-07-15 | 2016-03-02 | Qualcomm Incorporated | Inter-color component residual prediction |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6405432B2 (ja) | Image predictive decoding device and image predictive decoding method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | WWE | Wipo information: entry into national phase | Ref document number: 201080013170.3; Country of ref document: CN |
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 10755924; Country of ref document: EP; Kind code of ref document: A1 |
 | WWE | Wipo information: entry into national phase | Ref document number: 2011505988; Country of ref document: JP |
 | ENP | Entry into the national phase | Ref document number: 20117021010; Country of ref document: KR; Kind code of ref document: A |
 | WWE | Wipo information: entry into national phase | Ref document number: 12011501792; Country of ref document: PH; Ref document number: 2010755924; Country of ref document: EP |
 | WWE | Wipo information: entry into national phase | Ref document number: 2010228415; Country of ref document: AU |
 | WWE | Wipo information: entry into national phase | Ref document number: 7174/DELNP/2011; Country of ref document: IN |
 | WWE | Wipo information: entry into national phase | Ref document number: 2756419; Country of ref document: CA; Ref document number: MX/A/2011/009960; Country of ref document: MX |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | ENP | Entry into the national phase | Ref document number: 2010228415; Country of ref document: AU; Date of ref document: 20100316; Kind code of ref document: A |
 | ENP | Entry into the national phase | Ref document number: 2011142796; Country of ref document: RU; Kind code of ref document: A |
 | REG | Reference to national code | Ref country code: BR; Ref legal event code: B01A; Ref document number: PI1011698; Country of ref document: BR |
 | ENP | Entry into the national phase | Ref document number: PI1011698; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20110923 |
 | WWE | Wipo information: entry into national phase | Ref document number: IDP00201608771; Country of ref document: ID |