WO2011074197A1 - Image encoding device, image decoding device, image encoding method, and image decoding method - Google Patents
- Publication number
- WO2011074197A1 (PCT/JP2010/007019; JP 2010007019 W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- signal
- transition
- reduced
- prediction
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/53—Multi-resolution motion estimation; Hierarchical motion estimation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
- H04N19/517—Processing of motion vectors by encoding
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
Definitions
- The present invention relates to image signal encoding and decoding techniques that use an in-screen prediction process, which generates a prediction signal for a target image signal from an already-encoded image and encodes the difference signal from that prediction signal. More specifically, it relates to an image encoding device, an image decoding device, an image encoding method, and an image decoding method.
- High-efficiency image coding schemes such as MPEG2 and MPEG4-AVC (Advanced Video Coding) compress the amount of information by exploiting the correlation between spatially adjacent pixels within the same frame of a moving image signal and the correlation between temporally adjacent frames and fields.
- MPEG4-AVC is standardized as ISO/IEC 14496-10 (Advanced Video Coding).
- In these schemes, an image is divided into a plurality of two-dimensional blocks, a prediction signal is generated block by block using correlation within the same frame or between frames, and high encoding efficiency is realized by encoding the difference information.
- Prediction using correlation within the same frame in MPEG4-AVC is called intra prediction; as shown in FIG. 11, a prediction image for the block to be encoded is generated from the decoded images of already-encoded portions adjacent to the target block.
- In intra prediction, a plurality of prediction images (9 types when prediction is performed in units of 4x4-pixel blocks) are generated on the assumption that the adjacent decoded image has high correlation in a particular direction, as shown in FIG. 11; the prediction mode whose prediction image has the smallest error from the encoding target block is selected and encoded together with the prediction mode information.
- Intra prediction uses only the correlation with the adjacent area, so its prediction effect decreases when the correlation at the boundary with the encoding target block is small.
- Japanese Patent Application Laid-Open No. 2004-228561 presents a method for performing prediction using image correlation at positions distant from the target block. Specifically, as shown in FIG. 12, the error between the encoding target block and the already-encoded decoded image at the position displaced within the screen by a certain amount (hereinafter, this displacement is called a transition vector) is calculated, the reference image referred to by the transition vector with the least error is taken as the prediction image, and it is encoded together with the transition vector.
- Patent Document 2 proposes a technique for deriving the transition vector without transmitting it, in order to reduce the amount of code the transition vector requires.
- The already-encoded decoded image adjacent to the target block is used as a template; this template is compared with the already-encoded decoded image adjacent to each candidate position displaced by a transition vector, the transition vector with the smallest template error is regarded as the transition vector of the encoding target block, and the reference image it refers to is taken as the prediction image.
- Since the decoding side can detect the same transition vector using the already-decoded image, the vector need not be received, and no increase in code amount due to additional information occurs.
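The decoder-side template matching attributed to Patent Document 2 can be sketched as follows. This is a hypothetical Python illustration: the 4x4 block size, one-pixel template thickness, small search window, simplified causal-area test, and function names are illustrative assumptions, not values from the patent.

```python
def template_sad(decoded, ty, tx, ry, rx, size, thick):
    """SAD between the L-shaped template above/left of (ty, tx) and the
    equally shaped region above/left of the candidate position (ry, rx)."""
    sad = 0
    for dy in range(-thick, size):
        for dx in range(-thick, size):
            if dy >= 0 and dx >= 0:
                continue  # inside the block itself: not part of the template
            sad += abs(decoded[ty + dy][tx + dx] - decoded[ry + dy][rx + dx])
    return sad

def derive_shift_vector(decoded, ty, tx, size=4, thick=1, search=4):
    """Test a small window of candidate shifts (dy, dx) over the already-
    decoded area and return the shift whose template SAD is smallest."""
    best, best_v = None, (0, 0)
    for dy in range(-search, 1):          # simplified: only rows at or above
        for dx in range(-search, search + 1):
            ry, rx = ty + dy, tx + dx
            if (dy, dx) == (0, 0) or ry - thick < 0 or rx - thick < 0:
                continue
            if ry + size > len(decoded) or rx + size > len(decoded[0]):
                continue
            s = template_sad(decoded, ty, tx, ry, rx, size, thick)
            if best is None or s < best:
                best, best_v = s, (dy, dx)
    return best_v
```

Because the search uses only already-decoded pixels, the decoder can repeat it and arrive at the same vector the encoder found, which is why no vector needs to be transmitted.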
- For temporally continuous image signals, high-efficiency encoding can be realized by motion-compensated prediction based on decoded image signals of temporally different frames.
- However, the reference frame on which motion-compensated prediction depends must itself be encoded using only intra prediction within the frame, and the prediction effect decreases when there is little correlation at the boundary with the block to be encoded. When the video signal is not temporally continuous, motion-compensated prediction also does not function, so this problem appears as a decrease in coding efficiency caused by the performance limit of intra prediction.
- In Patent Document 1, there is a problem that efficiency decreases when a prediction signal whose benefit exceeds the code amount of the transition vector cannot be found.
- In Patent Document 2, the image adjacent to the encoding target block is used as the template. Therefore, when the correlation between the adjacent image and the target block is low, or when the correlation between the reference block referred to by the transition vector of Patent Document 1 and its adjacent image is low, a highly accurate transition vector cannot be obtained and efficiency does not improve.
- Accordingly, an object of the present invention is to realize an intra-frame prediction method that makes more effective use of image correlation at positions away from the target block, in order to greatly improve prediction efficiency within a frame.
- To solve the above problems, the image encoding device of the present invention includes a transition vector detection unit that searches, using the locally decoded images of already-encoded blocks in the same image signal, for a signal highly correlated with the encoding target block, takes the signal with the highest correlation as a transition prediction signal, and calculates a transition vector that is the in-screen displacement between the encoding target block and the transition prediction signal;
- and a reduced-image transition vector detection unit that searches for a signal highly correlated with the encoding target block using a signal obtained by reducing the locally decoded images in at least one of the horizontal and vertical directions, takes the most correlated signal as a reduced transition prediction signal, and obtains a transition vector that is the in-screen displacement between the reduced encoding target block and the reduced transition prediction signal.
- The image decoding device of the present invention includes, for a decoding target block, a transition vector/mode decoding unit that decodes, from an encoded stream encoded in units of blocks, a transition vector that is the in-screen displacement between the decoding target block and a prediction signal generated from the decoded images of already-decoded blocks in the same image signal, together with information indicating whether the prediction signal is to be generated by reducing the decoded image specified by the transition vector; and a transition prediction signal generation unit that generates the prediction signal from the decoded image according to the decoded transition vector and that information.
- a decoded image is calculated by adding the prediction signal and the decoded residual signal.
- Similarly, the image encoding method of the present invention comprises: searching, using the locally decoded images of already-encoded blocks in the same image signal, for a signal highly correlated with the encoding target block, taking the signal with the highest correlation as a transition prediction signal, and calculating a transition vector that is the in-screen displacement between the encoding target block and the transition prediction signal; searching for a signal highly correlated with the encoding target block using a signal reduced in at least one of the horizontal and vertical directions, taking the signal with the highest correlation as a reduced transition prediction signal, and obtaining a transition vector that is the in-screen displacement between the reduced encoding target block and the reduced transition prediction signal; selecting, from the transition prediction signal and the reduced transition prediction signal, the signal more highly correlated with the encoding target block, and outputting it as the prediction signal together with the transition vector used and information indicating the selection result; and encoding the difference signal between the prediction signal and the encoding target block, the transition vector, and the information indicating the selection result.
- The image decoding method of the present invention comprises: decoding, from an encoded stream encoded in units of blocks, a transition vector that is the in-screen displacement between the decoding target block and a prediction signal generated from the decoded images of already-decoded blocks in the same image signal, together with information indicating whether the prediction signal is to be generated by reducing the decoded image; generating the prediction signal accordingly; and calculating the decoded image by adding the prediction signal and the decoded residual signal.
- According to the image encoding device and the image decoding device described above, a reduced version of the encoded decoded image is generated and used as a template signal for predicting texture components, and using it as the prediction image improves the accuracy of intra-frame image signal prediction over the conventional methods.
- Furthermore, by evaluating the characteristics of the input image at encoding time and adapting the filter characteristics used when creating the reduced image, the signal characteristics of the reduced image can be brought closer to those of the input image, raising prediction accuracy further. Combining these measures improves coding efficiency.
- FIG. 1 is a block diagram showing the configuration of an image encoding apparatus according to the first embodiment of the present invention.
- The image encoding device according to the present embodiment includes an input terminal 100, an input image buffer 101, a block division unit 102, a transition vector detection unit 103, a reduced-image transition vector detection unit 104, a transition prediction mode determination/signal generation unit 105, a subtractor 106, an orthogonal transform unit 107, a quantization unit 108, an inverse quantization unit 109, an inverse orthogonal transform unit 110, an adder 111, an intra-frame decoded image memory 112, a reduced image generation unit 113, a reduced decoded image memory 114, an entropy encoding unit 115, a stream buffer 116, an output terminal 117, and a code amount control unit 118. The operations of the transition vector detection unit 103, the reduced-image transition vector detection unit 104, the transition prediction mode determination/signal generation unit 105, the reduced image generation unit 113, and the reduced decoded image memory 114 are the features of the first embodiment of the present invention.
- the other processing blocks are processing blocks constituting an intra-frame encoding process in an image encoding apparatus such as MPEG4-AVC.
- the digital image signal input from the input terminal 100 is stored in the input image buffer 101.
- the digital image signal stored in the input image buffer 101 is supplied to the block dividing unit 102, and is cut out as an encoding target block in units of two-dimensional macroblocks composed of 16 ⁇ 16 pixels.
- the block division unit 102 supplies the extracted encoding target block to the transition vector detection unit 103, the reduced image transition vector detection unit 104, the transition prediction mode determination / signal generation unit 105, and the subtractor 106.
- The subtractor 106 calculates the difference between the encoding target block supplied from the block division unit 102 and the prediction image block supplied from the transition prediction mode determination/signal generation unit 105 (described later), and supplies the result to the orthogonal transform unit 107 as a difference block.
- The orthogonal transform unit 107 applies a DCT to the difference block in units of a predetermined two-dimensional block (for example, 8 horizontal x 8 vertical pixels) to generate DCT coefficients corresponding to the orthogonally transformed frequency-component signal, collects the generated DCT coefficients in units of two-dimensional macroblocks, and outputs them to the quantization unit 108. The quantization unit 108 performs quantization by dividing each DCT coefficient by a value that differs for each frequency component, and supplies the quantized DCT coefficients to the inverse quantization unit 109 and the entropy encoding unit 115.
- The inverse quantization unit 109 performs inverse quantization by multiplying the quantized DCT coefficients input from the quantization unit 108 by the values used as divisors at quantization time, and outputs the resulting decoded DCT coefficients to the inverse orthogonal transform unit 110.
- the inverse orthogonal transform unit 110 performs inverse DCT processing to generate a decoded difference block.
- the inverse orthogonal transform unit 110 supplies the decoded difference block to the adder 111.
- the adder 111 adds the prediction image block supplied from the transition prediction mode determination / signal generation unit 105 and the decoded difference block supplied from the inverse orthogonal transform unit 110 to generate a local decoding block.
- The local decoded block generated by the adder 111 is stored in the intra-frame decoded image memory 112 after being reassembled from block form.
- The transition vector detection unit 103 calculates a transition vector between the image signal of the encoding target block input from the block division unit 102 and the local decoded image signal stored in the intra-frame decoded image memory 112. Specifically, for each candidate transition vector DV whose reference block lies entirely within the already-encoded partial decoded image shown in FIG., the correlation value between the corresponding local decoded image signal and the encoding target block is calculated with an evaluation formula such as the sum of absolute errors or the sum of squared errors, and the transition vector giving the smallest value of the evaluation formula is detected as the vector value used for transition prediction.
- the transition vector detection unit 103 outputs a local decoded image signal corresponding to the detected transition vector value as a transition predicted image to the transition prediction mode determination / signal generation unit 105 together with the detected transition vector value.
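The encoder-side transition vector search described above can be sketched as follows. This is a hypothetical Python illustration: the sum of absolute errors is used as the evaluation formula named in the text, while the block size, search window, function name, and the simplified test for the already-encoded area are illustrative assumptions.

```python
def detect_transition_vector(cur, decoded, ty, tx, search=8):
    """Exhaustively test candidate in-screen shifts (dy, dx) and return the
    one whose reference block in `decoded` gives the smallest SAD against
    the encoding target block `cur` located at (ty, tx)."""
    bs = len(cur)
    h, w = len(decoded), len(decoded[0])
    best_sad, best = None, (0, 0)
    for dy in range(-search, 1):
        for dx in range(-search, search + 1):
            if dy >= 0 and dx >= 0:
                continue  # simplified: skip shifts into the not-yet-encoded area
            ry, rx = ty + dy, tx + dx
            if ry < 0 or rx < 0 or ry + bs > h or rx + bs > w:
                continue
            sad = sum(abs(cur[y][x] - decoded[ry + y][rx + x])
                      for y in range(bs) for x in range(bs))
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad
```

The sum of squared errors mentioned in the text would simply replace the `abs(...)` term with its square.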
- The local decoded block generated by the adder 111 is also input to the reduced image generation unit 113, which applies reduction processing to it and outputs the reduced local decoded block to the reduced decoded image memory 114.
- In the present embodiment, the reduction direction and the filter coefficients are fixed: the reduction is 1/2 in both the horizontal and vertical directions, and a 3-tap one-dimensional filter with coefficients 1, 2, 1 (/4) is applied in the order horizontal then vertical.
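The fixed reduction just described (1/2 horizontally and vertically with a 3-tap 1, 2, 1 (/4) filter applied horizontally then vertically) can be sketched as follows. This is a hypothetical Python illustration; edge-pixel replication and the +2 rounding term are assumptions the patent does not specify.

```python
def reduce_half_121(img):
    """Apply the [1, 2, 1]/4 filter horizontally then vertically and
    subsample by 2 in each direction (edges replicated, round-to-nearest)."""
    def filt(rows):  # one-dimensional [1, 2, 1]/4 along each row
        out = []
        for r in rows:
            n = len(r)
            out.append([(r[max(x - 1, 0)] + 2 * r[x] + r[min(x + 1, n - 1)] + 2) // 4
                        for x in range(n)])
        return out
    h = filt(img)                          # horizontal pass
    ht = [list(c) for c in zip(*h)]        # transpose
    v = [list(c) for c in zip(*filt(ht))]  # vertical pass, transpose back
    return [[v[y][x] for x in range(0, len(v[0]), 2)]   # 2:1 subsampling
            for y in range(0, len(v), 2)]
```

A 16x16 local decoded block thus becomes an 8x8 reduced block stored in the reduced decoded image memory.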
- The reduced local decoded block output from the reduced image generation unit 113 is stored in the reduced decoded image memory 114 and is used as the reduced decoded image in the transition vector detection performed by the reduced-image transition vector detection unit 104.
- The relationship between the reduced local decoded block and the encoding target block is as follows: the displacement between the in-screen position the encoding target block would occupy if it were virtually reduced and the in-screen position of the reference block that is a prediction image candidate is defined as the transition vector.
- The reduced-image transition vector detection unit 104 reads from the reduced decoded image memory 114, for each candidate in-screen position indicated by a transition vector, a two-dimensional block of the same block size as the encoding target block, calculates the correlation value against the encoding target block with an evaluation formula such as the sum of absolute errors or the sum of squared errors, and detects the transition vector giving the smallest value of the evaluation formula as the transition vector value used for reduced transition prediction.
- the reduced image transition vector detection unit 104 outputs the reduced decoded image corresponding to the detected transition vector value as a reduced transition predicted image to the transition prediction mode determination / signal generation unit 105 together with the detected transition vector value.
- The transition prediction mode determination/signal generation unit 105 receives the transition vector value and transition prediction image input from the transition vector detection unit 103 and the transition vector value and reduced transition prediction image input from the reduced-image transition vector detection unit 104, selects the optimum prediction mode, outputs the selected prediction image to the subtractor 106 and the adder 111, and outputs information indicating the selected prediction mode and the transition vector to the entropy encoding unit 115.
- the detailed operation of the transition prediction mode determination / signal generation unit 105 will be described later.
- The entropy encoding unit 115 receives the quantized DCT coefficients supplied from the quantization unit 108 and the information indicating the selected prediction mode and transition vector supplied from the transition prediction mode determination/signal generation unit 105, performs variable-length coding of the transition vector information, the prediction mode information, and the quantized DCT coefficients, and outputs the variable-length-coded information to the stream buffer 116.
- the encoded stream stored in the stream buffer 116 is output to a recording medium or a transmission path via an output terminal 117.
- The code amount control unit 118 is supplied with the code amount of the bit stream stored in the stream buffer 116, and controls the quantization level (quantization scale) of the quantization unit 108 by comparison with the target code amount so as to approach that target.
- FIG. 2 is a block diagram showing the configuration of the image decoding apparatus according to the first embodiment of the present invention.
- The image decoding device includes an input terminal 200, a stream buffer 201, an entropy decoding unit 202, a transition vector/mode decoding unit 203, a transition prediction signal generation unit 204, an inverse quantization unit 205, an inverse orthogonal transform unit 206, an adder 207, an intra-frame decoded image memory 208, an output terminal 209, and a reduced image generation unit 210. The operations of the transition vector/mode decoding unit 203, the transition prediction signal generation unit 204, and the reduced image generation unit 210 are the features of the first embodiment of the present invention; the other processing blocks constitute an intra-frame decoding process in an image decoding device such as MPEG4-AVC.
- the encoded bit stream input from the input terminal 200 is stored in the stream buffer 201.
- The stored encoded bit stream is supplied from the stream buffer 201 to the entropy decoding unit 202, which performs variable-length decoding of the encoded transition vector information, prediction mode information, and quantized DCT coefficients from the bit stream, outputs the quantized DCT coefficients to the inverse quantization unit 205, and outputs the transition vector information and prediction mode information to the transition vector/mode decoding unit 203.
- Thereafter, the same processing as the local decoding processing of the image encoding device according to the first embodiment is performed.
- the decoded image stored in the intra-frame decoded image memory 208 is displayed as a decoded image signal on the display device via the output terminal 209.
- Based on the transition vector information and prediction mode information input from the entropy decoding unit 202, the transition vector/mode decoding unit 203 calculates the transition vector value and a selection signal indicating whether the prediction signal was produced by normal transition prediction or by transition prediction using a reduced image, and outputs them to the transition prediction signal generation unit 204.
- the transition prediction signal generation unit 204 generates a prediction image based on the transition vector value output from the transition vector / mode decoding unit 203 and the selection signal.
- When the selection signal indicates normal transition prediction, the decoded image signal at the position displaced from the decoding target block by the transition vector value is read from the intra-frame decoded image memory 208 and used as the prediction signal.
- When the selection signal indicates transition prediction using a reduced image, the transition vector value is output to the reduced image generation unit 210 and the generated reduced image is received.
- As illustrated in FIG. 3, the reduced image generation unit 210 reads from the intra-frame decoded image memory 208 the decoded image at the position indicated by the vector value obtained by correcting the transition vector, which is defined relative to the virtually reduced position of the decoding target block, into the displacement before reduction, applies the reduction filter processing to it, and outputs the result to the transition prediction signal generation unit 204.
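The coordinate correction involved here can be illustrated as follows. This is a hypothetical sketch: the factor of 2 follows the fixed 1/2 reduction of this embodiment, and the function names and the exact region convention are invented for illustration.

```python
def reduced_vector_to_full(dv_ss, factor=2):
    """Map a vector detected in the reduced image back to the
    corresponding pre-reduction displacement."""
    return (dv_ss[0] * factor, dv_ss[1] * factor)

def reference_region(block_pos, block_size, dv_ss, factor=2):
    """Top-left corner and side length of the full-resolution region that,
    after the reduction filter, yields a prediction block of `block_size`
    pixels for the block at `block_pos`."""
    dy, dx = reduced_vector_to_full(dv_ss, factor)
    y, x = block_pos
    return (y + dy, x + dx), block_size * factor  # region is factor x larger
```

Because the prediction block has the same size as the target block but lives in the reduced domain, the full-resolution source region it is filtered from is twice as large in each direction.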
- the transition prediction signal generation unit 204 outputs the generated or input predicted image to the adder 207.
- In the configuration described above, the reduced image generation unit 210 generates the reference block indicated by the transition vector with the reduction filter only when reduced transition prediction is performed.
- Alternatively, as in the local decoding process of the image encoding device according to the first embodiment, it is also possible to adopt a configuration in which reduction processing is always applied to each decoded two-dimensional block and the result is stored in a reduced image memory.
- the flowchart shown in FIG. 4 shows the operation of the transition prediction mode determination process in units of slices defined by a plurality of coding blocks.
- the target coding block Cur is input (S400), and the transition vector DV and the transition prediction image DVref corresponding to the coding target block are received from the transition vector detection unit 103 (S401). Subsequently, the reduced transition vector DVss and the reduced transition predicted image DVrefss corresponding to the encoding target block are received from the reduced image transition vector detection unit 104 (S402).
- Next, the per-pixel error values between the encoding target block Cur and the transition prediction image DVref are integrated to calculate the error evaluation value ErrNorm(DV), and the per-pixel error values between Cur and the reduced transition prediction image DVrefss are integrated to calculate the error evaluation value ErrSS(DVss) (S403).
- Next, the code amount necessary for encoding the transition vector value as information is calculated. Specifically, the transition vector is predicted from a predicted value DVpred and the difference value is encoded. For the calculation of DVpred, the configuration used for motion vector prediction in MPEG4-AVC, shown in FIG. 13, is used: the three adjacent blocks selected are block A to the left of the target block, block B above it, and block C to the upper right. However, when block C is invalid, for example at the image edge, block D to the upper left is used instead of block C.
- The predicted transition vector values PDVx and PDVy are obtained as the component-wise medians of the adjacent blocks' transition vectors, as shown in Equation 1 below:
- PDVx = Median(DVAx, DVBx, DVCx)
- PDVy = Median(DVAy, DVBy, DVCy)
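Equation 1 can be written directly in code. A minimal Python sketch, assuming vectors are represented as (x, y) pairs:

```python
def predict_dv(dv_a, dv_b, dv_c):
    """Equation 1: component-wise median of the transition vectors of the
    left (A), above (B) and above-right (C) neighbouring blocks, following
    MPEG4-AVC motion vector prediction."""
    med = lambda p, q, r: sorted((p, q, r))[1]
    return (med(dv_a[0], dv_b[0], dv_c[0]),   # PDVx
            med(dv_a[1], dv_b[1], dv_c[1]))   # PDVy
```

The median makes the prediction robust to one outlier neighbour, which is the same rationale as in MPEG4-AVC.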
- Next, the difference value DiffDV between the predicted value DVpred and the transition vector DV is obtained, and the assumed vector code amount at encoding time is calculated and added to ErrNorm(DV) (S405).
- As the assumed code amount of the vector, for example, the necessary code amount can be calculated by assuming that DiffDV is encoded as a Golomb code.
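One concrete reading of this code amount estimate is the zeroth-order Exp-Golomb code used in MPEG4-AVC; this is an assumption for illustration, since the text says only "Golomb code". A Python sketch:

```python
def ue_code_length(v):
    """Bit length of the unsigned Exp-Golomb code for v >= 0:
    2 * floor(log2(v + 1)) + 1 bits."""
    return 2 * (v + 1).bit_length() - 1

def se_code_length(v):
    """Bit length of the signed Exp-Golomb code, using the
    0, 1, -1, 2, -2, ... signed-to-unsigned mapping."""
    return ue_code_length(2 * abs(v) - 1 if v > 0 else -2 * v)

def dv_code_amount(diff_dv):
    """Assumed code amount for a transition vector difference (x, y)."""
    return se_code_length(diff_dv[0]) + se_code_length(diff_dv[1])
```

With this estimate, a zero difference vector costs 2 bits and the cost grows by roughly 2 bits per doubling of the component magnitude, so larger vector residuals are penalised in the mode decision.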
- Similarly, the difference value DiffDVss between the reduced version of the predicted value DVpred and the reduced transition vector DVss is obtained, and the assumed vector code amount at encoding time is calculated and added to ErrSS(DVss) (S406).
- In reduced transition prediction, an image signal having a similar texture component, possibly belonging to an object different from the encoding target block, is extracted as the prediction signal. So that positions in the reduced decoded image correspond to the same positions in the non-reduced decoded image signal, the configuration stores, as the transition vector value of each adjacent block shown in the figure above, the transition vector corrected according to the reduction ratio.
- Specifically, the value obtained by reducing DVpred to 1/2 horizontally and vertically is used, and its difference value from DVss is DiffDVss.
- The assumed vector code amount when encoding DiffDVss is calculated in the same manner as for DiffDV.
- The error evaluation values ErrNorm(DV) and ErrSS(DVss) calculated in this way are compared (S407). If ErrSS(DVss) > ErrNorm(DV) (S407: YES), normal transition prediction is selected as the prediction mode; otherwise, reduced transition prediction is selected.
- Then DVsel and DVresult are output to the entropy encoding unit 115 (S412), and the processing for this encoding target block ends.
- If this encoding target block is not the last block of the slice (S413: NO), the encoding target block is updated (S414) and the process returns to S400; if it is the last block of the slice (S413: YES), the per-slice transition prediction mode determination process ends.
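The per-block decision of steps S403 to S407 can be sketched as follows. This is a hypothetical Python illustration; adding the assumed vector code amount directly to the error evaluation value follows the flowchart description, while the function name and the example numbers are invented.

```python
def select_transition_mode(err_norm, bits_norm, err_ss, bits_ss):
    """Choose between normal and reduced transition prediction by comparing
    error-plus-assumed-code-amount evaluation values."""
    cost_norm = err_norm + bits_norm   # S405: ErrNorm(DV) + vector code amount
    cost_ss = err_ss + bits_ss         # S406: ErrSS(DVss) + vector code amount
    if cost_ss > cost_norm:            # S407: YES -> normal transition prediction
        return "normal", cost_norm
    return "reduced", cost_ss
```

The comparison thus trades prediction error against the cost of signalling the vector, so a slightly worse-matching mode can still win if its vector is cheaper to encode.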
- The point of the present invention in the first embodiment is that, to fully exploit the self-similarity and texture similarity of the image signal, a block obtained by reducing the reference image is added as a prediction candidate, so that a prediction block with a higher correlation to the encoding target block than in the conventional methods is generated.
- For the transition vector values in the normal and reduced transition prediction modes, correcting the predicted transition vector according to the reduction ratio allows transition vector prediction from adjacent blocks to be performed appropriately, avoiding an increase in the code amount of the vector information.
- The locally decoded image serving as the reference image suffers from encoding degradation: distortion components absent from the input image increase and high-frequency components decrease, reducing the correlation with the block to be encoded.
- In the reduced image, by contrast, the distortion components are cut off as high-frequency components while, on the pixel scale of the reduced image, high-frequency content of the original signal survives encoding, so the reduced image can be used in prediction processing as a prediction block having a higher correlation with the encoding target block.
- The first embodiment is an image encoding/decoding device that uses only the correlation within a frame, whereas the second embodiment is an example of a moving-image encoding/decoding device that can also utilize the correlation between frames, that is, the temporal correlation of the video.
- FIG. 5 is a block diagram showing the configuration of an image encoding apparatus according to the second embodiment of the present invention.
- The image encoding device according to the present embodiment has, with the same functions as in the first embodiment, an input terminal 100, an input image buffer 101, a block division unit 102, a transition vector detection unit 103, a reduced-image transition vector detection unit 104, a transition prediction mode determination/signal generation unit 105, a subtractor 106, an orthogonal transform unit 107, a quantization unit 108, an inverse quantization unit 109, an inverse orthogonal transform unit 110, an adder 111, an intra-frame decoded image memory 112, a reduced decoded image memory 114, an entropy encoding unit 115, a stream buffer 116, an output terminal 117, and a code amount control unit 118; a reduced image generation unit 513 with processing added relative to the first embodiment; and, as added processing blocks, an intra prediction unit 519, a deblocking filter 520, a reference image memory 521, a motion vector detection unit 522, a motion compensation prediction unit 523, a mode determination unit 524, and an image analysis unit 525.
- The intra prediction unit 519 receives the encoding target block from the block division unit 102 and, using the decoded image of the adjacent already-encoded area stored in the intra-frame decoded image memory 112, performs the intra prediction processing used in MPEG4-AVC.
- the intra prediction unit 519 selects an intra prediction mode having the highest correlation between the prediction image and the encoding target block, and outputs the intra prediction image, the intra prediction mode signal, and the error evaluation value to the mode determination unit 524.
- the motion vector detection unit 522 performs motion estimation between the encoding target block image acquired from the block division unit 102 and the reference image stored in the reference image memory 521.
- A reference image at a position moved by a given amount from the same screen position is cut out, and the movement amount that minimizes the prediction error when that image is used as the prediction block is determined as the motion vector; a block matching process that evaluates this error while varying the movement amount is used.
- The motion vector detection unit 522 detects the optimal motion vector value taking into account the code amount necessary for encoding the difference between the motion vector value and the predicted value calculated from the motion vector values of adjacent blocks, as shown in FIG. 13.
- The motion vector value obtained by the motion vector detection unit 522 is supplied to the motion compensation prediction unit 523, which selects, from the prediction signals for a plurality of reference images, the prediction signal requiring the least difference information to be encoded, and outputs the selected motion compensation prediction mode and prediction signal to the mode determination unit 524.
- The processing blocks described above apply the conventional methods of intra prediction and motion compensation prediction.
- The operations of the image analysis unit 525, the reduced image generation unit 513, and the mode determination unit 524, which are the processing blocks that characterize the second embodiment of the present invention, will be described with reference to the flowchart of the encoding process shown in FIG. 6.
- The flowchart in FIG. 6 shows the operation of encoding one screen consisting of a plurality of encoding blocks.
- The image data of one screen stored in the input image buffer 101 is input to the image analysis unit 525, and the horizontal and vertical frequency components within the screen are measured (S600).
- As the measurement method, frequency analysis by Fourier transform, wavelet transform, or the like can be used; in this example, a fast Fourier transform (FFT) is applied one-dimensionally and separately in the horizontal and vertical directions, the results are accumulated, and the values summed over the entire screen are taken as the measured values.
- The FFT unit is, for example, 32 pixels, and the analysis position is moved every 16 pixels to reduce the influence of analysis-unit boundaries.
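A minimal sketch of this overlapped one-dimensional FFT measurement, assuming NumPy; accumulating FFT magnitudes is an assumption here, since the exact accumulation rule is not specified in the text. The 32-pixel window and 16-pixel step follow the figures above.

```python
import numpy as np

def measure_spectrum(img, win=32, step=16):
    """Accumulate 1-D FFT magnitudes over overlapped windows,
    separately in the horizontal and vertical directions."""
    h_acc = np.zeros(win)
    v_acc = np.zeros(win)
    rows, cols = img.shape
    for r in range(rows):                      # horizontal analysis
        for x in range(0, cols - win + 1, step):
            h_acc += np.abs(np.fft.fft(img[r, x:x + win]))
    for c in range(cols):                      # vertical analysis
        for y in range(0, rows - win + 1, step):
            v_acc += np.abs(np.fft.fft(img[y:y + win, c]))
    return h_acc, v_acc
```

For an image whose rows carry a sinusoid and whose columns are constant, the horizontal accumulator peaks at the sinusoid's frequency bin while the vertical accumulator concentrates at DC, which is the kind of per-direction distribution the filter selection step consumes.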
- A low-pass filter coefficient set capable of band limitation in a plurality of bands is prepared in advance.
- As the design method for the filter coefficients, an existing digital filter design method can be used.
- Horizontal and vertical filter coefficients are selected whose band-limiting characteristics can form a reduced image closest to the frequency distribution obtained by measuring the frequency components (S601).
- As a selection method, it is possible to find the frequency at which the measured frequency component distribution falls below a certain threshold, and to select, among the filters whose stop band covers the frequencies at and above it, the filter coefficients with the widest passband. Alternatively, it is possible to actually generate a reduced image from the input image with each candidate coefficient set, measure the horizontal and vertical frequency components over one screen of the reduced image, and select the filter coefficients whose frequency characteristics best approximate those of the input image.
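The first selection rule above can be sketched like this; the threshold test, the normalised-cutoff representation of each candidate filter, and the fallback when no filter qualifies are all illustrative assumptions.

```python
import numpy as np

def select_filter(spectrum, cutoffs, threshold):
    """Pick the widest-passband low-pass filter whose cutoff does not
    exceed the frequency where the measured spectrum drops below a
    threshold.  `cutoffs` holds the normalised cutoff (0..1) of each
    candidate filter in the prepared set (hypothetical encoding)."""
    # first frequency bin whose energy falls below the threshold
    below = np.nonzero(np.asarray(spectrum) < threshold)[0]
    limit = below[0] / len(spectrum) if below.size else 1.0
    # widest passband that still stops everything above the limit
    valid = [c for c in cutoffs if c <= limit]
    return max(valid) if valid else min(cutoffs)
```

With a spectrum that collapses above bin 3 of 8, the rule keeps the 0.25-cutoff candidate rather than the 0.5 one, matching the intent of stopping bands where the source has no energy.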
- The image analysis unit 525 outputs the selected filter coefficient, or a parameter designating the coefficient, to the entropy encoding unit 115 and the reduced image generation unit 513.
- The entropy encoding unit 115 adds the selected filter coefficient, or a parameter designating the coefficient, as additional information related to the encoding of the entire screen, for example in the PPS (Picture Parameter Set) defined in MPEG-4 AVC (ISO/IEC 14496-10 Advanced Video Coding), and encodes it (S602).
- The encoding process for one screen is started using the filter coefficient selected in this way.
- An encoding target block is cut out from the input image (S603). If the slice is not an I slice (S604: NO), motion vector detection and motion compensation prediction are performed (S605), and intra prediction is performed in parallel (S606). Then transition vector detection is performed (S607), followed by reduced transition vector detection (S608). Subsequently, to select whether transition prediction or reduced transition prediction is used, transition mode determination and transition prediction are performed (S609). The determination method described in the first embodiment is used.
- The mode determination unit 524 selects the optimal prediction mode, generates a prediction image, and outputs it to the subtractor 106 and the adder 111 (S610).
- The prediction mode, motion vector, transition vector, and information indicating whether the reduced image was used in transition prediction are output to the entropy encoding unit 115.
- A difference signal between the encoding target block and the prediction image is calculated and subjected to orthogonal transformation and quantization (S611), and the quantized orthogonal transform coefficients, the prediction mode, motion vector, transition vector, and the information indicating whether the reduced image was used in transition prediction are encoded (S612).
- The quantized coefficients are subjected to inverse quantization and inverse orthogonal transformation, the output signal is added to the prediction image (S613), and the generated locally decoded image is stored in the intra-frame decoded image memory 112 (S614).
- The reduced image generation unit 513 sets horizontal and vertical reduction filter coefficients based on the selected filter coefficient, or the parameter designating the coefficient, input from the image analysis unit 525, and reduces the locally decoded image input from the adder 111 using those filter coefficients (S615).
- The reduced image generation unit 513 stores the resulting reduced locally decoded image in the reduced decoded image memory 114 (S616), and ends the encoding process for the target block.
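The horizontal-then-vertical reduction can be sketched as a separable filter-and-decimate step. This assumes a 2:1 ratio and edge padding; the patent does not fix the tap values, so any normalised low-pass set stands in for the selected coefficients.

```python
import numpy as np

def reduce_image(img, taps):
    """Sketch of a 2:1 separable reduction: low-pass filter with the
    selected coefficient set `taps` horizontally, decimate, then do
    the same vertically (hypothetical helper, not the patent's code)."""
    def filt(a):
        # 1-D filtering along rows, edge-padded so output width matches input
        pad = len(taps) // 2
        ap = np.pad(a, ((0, 0), (pad, pad)), mode="edge")
        return np.stack([np.convolve(row, taps, mode="valid") for row in ap])
    tmp = filt(img.astype(float))[:, ::2]   # horizontal filter + decimate
    out = filt(tmp.T)[:, ::2].T             # vertical filter + decimate
    return out
```

Because the same routine runs on the locally decoded image in the encoder and on the decoded image in the decoder, both sides reproduce the identical reduced template.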
- When the target block is the last block of one screen (S617: YES), the deblocking filter 520 applies deblocking filtering to the entire screen, the result is stored in the reference image memory 521 (S618), and the one-screen encoding process ends. If it is not the last block of one screen (S617: NO), the encoding target block is updated (S619), and the process returns to S603.
- FIG. 7 is a block diagram showing the configuration of the image decoding apparatus according to the second embodiment of the present invention.
- The image decoding apparatus according to the present embodiment has an input terminal 200, a stream buffer 201, an entropy decoding unit 202, and a transition vector/mode decoding unit 203, which have the same functions as in the first embodiment.
- The intra prediction mode decoding unit 714, intra prediction unit 715, motion vector decoding unit 716, deblocking filter 717, reference image memory 718, and motion compensation prediction unit 719 in FIG. 7, as in the description of the image encoding apparatus of the second embodiment, form a configuration for decoding intra prediction and motion compensation prediction in the MPEG-4 AVC standard; they are not processing blocks characterizing the present invention and are therefore not described.
- The operations of the reduced filter coefficient decoding unit 711, the prediction mode decoding unit 712, the prediction signal selection unit 713, and the reduced image generation unit 710, which are the processing blocks that characterize the second embodiment of the present invention, will be described with reference to the flowchart of the decoding process shown in FIG. 8.
- The flowchart in FIG. 8 shows the operation of decoding one screen consisting of a plurality of encoded blocks.
- The entropy decoding unit 202 detects additional information related to the encoding of the entire screen from the encoded bitstream stored in the stream buffer 201 and inputs it to the reduced filter coefficient decoding unit 711.
- The reduced filter coefficient decoding unit 711 decodes the parameter information related to one screen, and decodes the filter coefficients used in the screen, or the information designating them (S800).
- The decoding process for one screen is then started.
- Quantized coefficients for the decoding target block are output from the entropy decoding unit 202 to the inverse quantization unit 205, and additional information on the decoding target block is output to the prediction mode decoding unit 712.
- The prediction mode decoding unit 712 decodes information on the decoding target block (S801) and outputs the decoded prediction mode to the prediction signal selection unit 713. When the decoded prediction mode is intra prediction (S802: YES), the intra prediction mode decoding unit 714 decodes the intra prediction mode; using it, the intra prediction unit 715 performs intra prediction from the decoded adjacent pixels stored in the intra-frame decoded image memory 208 (S803) and outputs the intra prediction result to the prediction signal selection unit 713.
- When the decoded prediction mode is motion compensation prediction, the motion compensation prediction unit 719 performs motion compensation from the reference image stored in the reference image memory 718 (S805) and outputs the motion compensation prediction result to the prediction signal selection unit 713.
- The transition vector/mode decoding unit 203 decodes the information indicating the transition prediction mode and the transition vector, and outputs the decoded prediction mode and transition vector signal to the transition prediction signal generation unit 204.
- When the decoded transition prediction mode is reduced transition prediction (S806: YES), the reduced image generation unit 710 receives the reduced filter coefficient information input from the reduced filter coefficient decoding unit 711 and the information indicating the transition vector input from the transition prediction signal generation unit 204.
- Using the decoded image input from the intra-frame decoded image memory 208 at the position indicated by the vector value obtained by converting the transition vector to its pre-reduction scale, it applies the reduction filter specified by the reduced filter coefficient information to perform reduced transition prediction (S807), and outputs the reduced transition prediction result to the transition prediction signal generation unit 204.
- Otherwise, the transition prediction signal generation unit 204 uses the information indicating the transition vector input from the transition vector/mode decoding unit 203 to read, from the intra-frame decoded image memory 208, the decoded image signal at the position shifted from the decoding target block by the transition vector value, and generates a transition prediction signal (S808).
- The transition prediction signal generation unit 204 outputs to the prediction signal selection unit 713 whichever of the transition prediction signal and the reduced transition prediction signal input from the reduced image generation unit 710 was generated for the decoding target block.
- The prediction signal selection unit 713 stores the prediction signal input from one of the intra prediction unit 715, the motion compensation prediction unit 719, and the transition prediction signal generation unit 204 (S809), and outputs it to the adder 207.
- The quantized coefficients output from the entropy decoding unit 202 are subjected to inverse quantization and inverse orthogonal transformation, and the output signal and the prediction signal are added by the adder 207 (S810) to generate a decoded image.
- The generated decoded image is stored in the intra-frame decoded image memory 208 (S811), and the decoding process for the decoding target block ends.
- When the last block of one screen has been decoded, the deblocking filter 717 applies deblocking filtering to the entire screen, stores the result in the reference image memory 718 (S813), and the one-screen decoding process ends.
- As described above, the characteristics of the band-limiting filter used to reduce the encoded and decoded image serving as the template are set based on the result of measuring the band characteristics of the input image, selected from a plurality of definable filter parameters.
- A template approximating the signal characteristics of the input image can thus be generated as a reduced image. Even when the decoded image is significantly degraded, the components preserved when the reduced image is generated are the mid and low frequency components of the pre-reduction signal, so a prediction signal can be generated that is less affected by coding degradation, maintains the quality of the signal used as the template, and is less subject to the loss of prediction efficiency that coding degradation causes. Prediction efficiency is therefore improved over the conventional method, and coding efficiency increases.
- Moreover, because the information for generating the reduction filter is selected on a per-screen basis from a preset filter set corresponding to the band-limiting characteristics, it can be controlled with a small amount of information, and the increase in additional information can be suppressed.
- It is also possible to input characteristic information of the input device to the image analysis unit 525 and use it to set the reduction filter coefficients for generating the reduced image; the same effect can be obtained.
- The third embodiment does not specify the reduction filter coefficients by frequency analysis; instead, it measures the prediction efficiency of transition prediction between the reduced image and the input image, sets the optimal reduction filter coefficients accordingly, and uses them in the encoding and decoding processes. Since the image decoding apparatus of the third embodiment can be realized with the same configuration as that of the second embodiment, only the encoding apparatus is described.
- FIG. 9 is a block diagram showing the configuration of an image encoding apparatus according to the third embodiment of the present invention.
- The configuration of the image encoding apparatus of the third embodiment differs from that of the second embodiment shown in FIG. 5 in that a reduced filter selection unit 925 and a reduced image correlation detection unit 926 are used instead of the image analysis unit 525.
- The encoding process flowchart of the third embodiment shown in FIG. 10 differs from that of the second embodiment shown in FIG. 6 in that the processes of S1000, S1001, and S1002 replace the processes of S600 and S601.
- The image data of one screen stored in the input image buffer 101 is input to the reduced filter selection unit 925, and a plurality of reduced images is generated using a plurality of filters with different band characteristics prepared in advance (S1000).
- The reduced images generated by the respective reduction filters and the input image are output from the reduced filter selection unit 925 to the reduced image correlation detection unit 926, which measures the correlation between each reduced image and the input image by detecting transition vectors and integrating the prediction error values obtained during detection (S1001).
- The processing unit for detecting the transition vectors may be the same as or different from the block unit of transition prediction used at encoding time. It is possible to detect transition vectors over the entire area of one screen, to detect them for a predetermined region, or to extract blocks of the input image that have low correlation with their neighbors and detect transition vectors only for the extracted blocks.
- The integrated prediction error value for each reduction filter coefficient set is output from the reduced image correlation detection unit 926 to the reduced filter selection unit 925, which compares the integrated values and selects the reduction filter coefficients that minimize the integrated value as the filter coefficients to use (S1002).
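The selection loop of the third embodiment can be sketched as below. The collocated-block error used here is a simplified stand-in for the encoder's transition vector search, and the nearest-neighbour upscale, block size, and function name are assumptions for the example.

```python
import numpy as np

def pick_reduction_filter(input_img, reduced_imgs, bsize=8):
    """For each candidate reduced image, integrate a blockwise
    prediction error of the input image against an upscaled view of
    the candidate, and return the index of the candidate with the
    smallest integrated error (sketch of S1000-S1002)."""
    best_idx, best_err = 0, float("inf")
    H, W = input_img.shape
    for idx, red in enumerate(reduced_imgs):
        # nearest-neighbour upscale back to input size (simplified)
        up = np.kron(red, np.ones((2, 2)))[:H, :W]
        total = 0.0
        for y in range(0, H - bsize + 1, bsize):
            for x in range(0, W - bsize + 1, bsize):
                blk = input_img[y:y + bsize, x:x + bsize]
                total += np.abs(blk - up[y:y + bsize, x:x + bsize]).sum()
        if total < best_err:
            best_idx, best_err = idx, total
    return best_idx
```

A real implementation would run the actual transition vector detection per block instead of the collocated comparison, but the ranking-by-integrated-error structure is the same.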
- The reduced filter selection unit 925 outputs the selected filter coefficient, or a parameter designating the coefficient, to the entropy encoding unit 115 and the reduced image generation unit 513.
- The entropy encoding unit 115 encodes it as additional information related to the encoding of the entire screen, for example in the PPS (Picture Parameter Set) defined in MPEG-4 AVC (ISO/IEC 14496-10 Advanced Video Coding) (S602).
- As described above, the characteristics of the band-limiting filter used to reduce the encoded and decoded image serving as the template are determined by measuring, for each of a plurality of definable filter parameters, the degree of correlation between the input image and the reduced image obtained by band-limiting it, in the form of transition vector detection, and selecting the filter parameters based on the measured results. A reduced image can thus be generated that has high correlation when actual transition prediction is performed on the input image.
- A template better suited as a prediction signal in the encoding process can therefore be generated from the reduced image, further improving prediction accuracy.
- In the reduced image correlation detection unit 926, when the unit of the two-dimensional block used for detecting the transition vectors is the same as the encoding target block, the detected vectors can be output to the reduced image transition vector detection unit 104 and used directly as transition vector values referencing the reduced decoded image, or used as reference values when detecting transition vectors (for example, measuring the prediction error values for N pixels around the given vector value and detecting the transition vector there).
- The image encoding apparatus and image decoding apparatus presented as the first, second, and third embodiments can be physically realized by a computer provided with a CPU (Central Processing Unit), a recording device such as a memory, a display device such as a monitor, and communication means for a transmission path; the means providing each function can be realized as a program on the computer and executed.
- The present invention can be used for image signal encoding and decoding techniques.
Abstract
Description
The transition prediction image is output, together with the detected transition vector value, to the transition prediction mode determination/signal generation unit 105.
PDMVx = Median(DVAx, DVBx, DVCx)
PDMVy = Median(DVAy, DVBy, DVCy)
The PDMVx and PDMVy generated in this way are calculated as the predicted value DVpred of the transition vector (S404).
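The componentwise median predictor above can be sketched directly; the tuple representation of vectors (x, y) and the function name are assumptions for the example.

```python
def predict_vector(dva, dvb, dvc):
    """Median predictor for the transition vector: take the median of
    the three neighbouring blocks' vectors componentwise, matching the
    PDMVx/PDMVy formulas above."""
    def median3(a, b, c):
        return sorted((a, b, c))[1]
    return (median3(dva[0], dvb[0], dvc[0]),
            median3(dva[1], dvb[1], dvc[1]))
```

Only the difference between the detected vector and this predicted value DVpred needs to be encoded, which is why the motion vector search weighs candidates by their distance from the predictor.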
101 input image buffer
102 block division unit
103 transition vector detection unit
104 reduced image transition vector detection unit
105 transition prediction mode determination/signal generation unit
106 subtractor
107 orthogonal transform unit
108 quantization unit
109 inverse quantization unit
110 inverse orthogonal transform unit
111 adder
112 intra-frame decoded image memory
113 reduced image generation unit
114 reduced decoded image memory
115 entropy encoding unit
116 stream buffer
117 output terminal
118 code amount control unit
200 input terminal
201 stream buffer
202 entropy decoding unit
203 transition vector/mode decoding unit
204 transition prediction signal generation unit
205 inverse quantization unit
206 inverse orthogonal transform unit
207 adder
208 intra-frame decoded image memory
209 output terminal
210 reduced image generation unit
513 reduced image generation unit
519 intra prediction unit
520 deblocking filter
521 reference image memory
522 motion vector detection unit
523 motion compensation prediction unit
524 mode determination unit
525 image analysis unit
710 reduced image generation unit
711 reduced filter coefficient decoding unit
712 prediction mode decoding unit
713 prediction signal selection unit
714 intra prediction mode decoding unit
715 intra prediction unit
716 motion vector decoding unit
717 deblocking filter
718 reference image memory
719 motion compensation prediction unit
925 reduced filter selection unit
926 reduced image correlation detection unit
Claims (7)
- An image encoding device comprising: a transition vector detection unit that, for an encoding target block, searches for a signal highly correlated with the encoding target block using locally decoded images of already-encoded blocks within the same image signal, takes the most highly correlated signal as a transition prediction signal, and calculates a transition vector representing the in-screen displacement between the encoding target block and the transition prediction signal;
a reduced image transition vector detection unit that, for the encoding target block, searches for a signal highly correlated with the encoding target block using a signal obtained by reducing the locally decoded images of already-encoded blocks within the same image signal in at least one of the horizontal and vertical directions, takes the most highly correlated signal as a reduced transition prediction signal, and obtains a transition vector representing the in-screen displacement between the reduced encoding target block and the reduced transition prediction signal; and
a transition prediction mode determination/signal generation unit that selects, from the transition prediction signal and the reduced transition prediction signal, the signal more highly correlated with the encoding target block as the prediction signal, and outputs the transition vector used for the selected signal and information indicating the selection result;
wherein a difference signal between the prediction signal and the encoding target block, the transition vector, and the information indicating the selection result are encoded. - The image encoding device according to claim 1, further comprising:
an image analysis unit that measures, or receives as input information, the per-screen frequency characteristics of an input image, and selects, as the reduction filter coefficients used when generating a reduced image, filter coefficients that bring the frequency characteristics of the reduced image close to those of the input image; and
a reduced image generation unit that generates a signal obtained by reducing the locally decoded images of already-encoded blocks within the same image signal in at least one of the horizontal and vertical directions;
wherein the reduced image generation unit generates the reduced signal using the reduction filter coefficients selected by the image analysis unit. - The image encoding device according to claim 1, further comprising:
a reduced image correlation detection unit that calculates values indicating the correlation between an input image and reduced images obtained by reducing the input image with filters having a plurality of band-limiting characteristics;
a reduced filter selection unit that, from the calculated correlation values, selects as the reduction filter coefficients the coefficients of the filter that generates the most highly correlated reduced image; and
a reduced image generation unit that generates a signal obtained by reducing the locally decoded images of already-encoded blocks within the same image signal in at least one of the horizontal and vertical directions;
wherein the reduced image generation unit generates the reduced signal using the reduction filter coefficients selected by the reduced filter selection unit. - An image decoding device comprising: a transition vector/mode decoding unit that decodes, from an encoded stream encoded in block units, for a decoding target block, a transition vector representing the in-screen displacement between the decoding target block and a prediction signal generated from the decoded images of already-decoded blocks within the same image signal, and information indicating whether the prediction signal is to be generated by reducing the decoded image designated by the transition vector; and
a transition prediction signal generation unit that generates the prediction signal from the decoded image according to the transition vector and the information indicating whether the prediction signal is to be generated by reducing the decoded image;
wherein a decoded image is calculated by adding the prediction signal and a decoded residual signal. - The image decoding device according to claim 4, further comprising a reduced filter coefficient decoding unit that decodes information designating the filter coefficients used when generating the reduced image,
wherein the transition prediction signal generation unit generates the prediction signal from the decoded image according to the transition vector, the information indicating whether the prediction signal is to be generated by reducing the decoded image, and the information designating the filter coefficients used when generating the reduced image. - An image encoding method comprising the steps of: for an encoding target block, searching for a signal highly correlated with the encoding target block using locally decoded images of already-encoded blocks within the same image signal, taking the most highly correlated signal as a transition prediction signal, and calculating a transition vector representing the in-screen displacement between the encoding target block and the transition prediction signal;
for the encoding target block, searching for a signal highly correlated with the encoding target block using a signal obtained by reducing the locally decoded images of already-encoded blocks within the same image signal in at least one of the horizontal and vertical directions, taking the most highly correlated signal as a reduced transition prediction signal, and obtaining a transition vector representing the in-screen displacement between the reduced encoding target block and the reduced transition prediction signal; and
selecting, from the transition prediction signal and the reduced transition prediction signal, the signal more highly correlated with the encoding target block as the prediction signal, and outputting the transition vector used for the selected signal and information indicating the selection result;
wherein a difference signal between the prediction signal and the encoding target block, the transition vector, and the information indicating the selection result are encoded. - An image decoding method comprising the steps of: decoding, from an encoded stream encoded in block units, for a decoding target block, a transition vector representing the in-screen displacement between the decoding target block and a prediction signal generated from the decoded images of already-decoded blocks within the same image signal, and information indicating whether the prediction signal is to be generated by reducing the decoded image designated by the transition vector; and
generating the prediction signal from the decoded image according to the transition vector and the information indicating whether the prediction signal is to be generated by reducing the decoded image;
wherein a decoded image is calculated by adding the prediction signal and a decoded residual signal.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/509,759 US8606026B2 (en) | 2009-12-15 | 2010-12-02 | Image encoding device, image decoding device, image encoding method, and image decoding method based on reduced-image displacement vector |
EP10837234A EP2515540A1 (en) | 2009-12-15 | 2010-12-02 | Image encoding device, image decoding device, image encoding method, and image decoding method |
CN201080057037.8A CN102656889B (zh) | 2009-12-15 | 2010-12-02 | 图像编码装置、图像解码装置、图像编码方法及图像解码方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2009284017A JP5321439B2 (ja) | 2009-12-15 | 2009-12-15 | 画像符号化装置、画像復号化装置、画像符号化方法、及び、画像復号化方法 |
JP2009-284017 | 2009-12-15 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011074197A1 true WO2011074197A1 (ja) | 2011-06-23 |
Family
ID=44166972
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2010/007019 WO2011074197A1 (ja) | 2009-12-15 | 2010-12-02 | 画像符号化装置、画像復号化装置、画像符号化方法、及び、画像復号化方法 |
Country Status (5)
Country | Link |
---|---|
US (1) | US8606026B2 (ja) |
EP (1) | EP2515540A1 (ja) |
JP (1) | JP5321439B2 (ja) |
CN (1) | CN102656889B (ja) |
WO (1) | WO2011074197A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104365107A (zh) * | 2012-03-26 | 2015-02-18 | Jvc建伍株式会社 | 图像编码装置、图像编码方法、图像编码程序、发送装置、发送方法、及发送程序、以及图像解码装置、图像解码方法、图像解码程序、接收装置、接收方法、及接收程序 |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI463878B (zh) | 2009-02-19 | 2014-12-01 | Sony Corp | Image processing apparatus and method |
US20120251012A1 (en) * | 2009-12-18 | 2012-10-04 | Tomohiro Ikai | Image filter, encoding device, decoding device, and data structure |
JP2012151576A (ja) * | 2011-01-18 | 2012-08-09 | Hitachi Ltd | 画像符号化方法、画像符号化装置、画像復号方法及び画像復号装置 |
JP5801614B2 (ja) | 2011-06-09 | 2015-10-28 | キヤノン株式会社 | 画像処理装置、画像処理方法 |
TW201306568A (zh) * | 2011-07-20 | 2013-02-01 | Novatek Microelectronics Corp | 移動估測方法 |
US8842937B2 (en) * | 2011-11-22 | 2014-09-23 | Raytheon Company | Spectral image dimensionality reduction system and method |
US8655091B2 (en) | 2012-02-24 | 2014-02-18 | Raytheon Company | Basis vector spectral image compression |
JP5891916B2 (ja) * | 2012-04-09 | 2016-03-23 | 大日本印刷株式会社 | 画像拡大処理装置 |
WO2014141964A1 (ja) * | 2013-03-14 | 2014-09-18 | ソニー株式会社 | 画像処理装置および方法 |
JP6643884B2 (ja) * | 2015-12-04 | 2020-02-12 | 日本放送協会 | 映像符号化装置およびプログラム |
CN117201807A (zh) * | 2016-08-01 | 2023-12-08 | 韩国电子通信研究院 | 图像编码/解码方法和装置以及存储比特流的记录介质 |
EP3562158A1 (en) * | 2018-04-27 | 2019-10-30 | InterDigital VC Holdings, Inc. | Method and apparatus for combined intra prediction modes |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06113291A (ja) * | 1992-09-25 | 1994-04-22 | Olympus Optical Co Ltd | 画像符号化及び復号化装置 |
JPH09182082A (ja) * | 1995-12-25 | 1997-07-11 | Nippon Telegr & Teleph Corp <Ntt> | 動画像の動き補償予測符号化方法とその装置 |
JP2005159947A (ja) | 2003-11-28 | 2005-06-16 | Matsushita Electric Ind Co Ltd | 予測画像生成方法、画像符号化方法および画像復号化方法 |
JP2007043651A (ja) | 2005-07-05 | 2007-02-15 | Ntt Docomo Inc | 動画像符号化装置、動画像符号化方法、動画像符号化プログラム、動画像復号装置、動画像復号方法及び動画像復号プログラム |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007124408A (ja) * | 2005-10-28 | 2007-05-17 | Matsushita Electric Ind Co Ltd | 動きベクトル検出装置および動きベクトル検出方法 |
US20100118940A1 (en) * | 2007-04-19 | 2010-05-13 | Peng Yin | Adaptive reference picture data generation for intra prediction |
- 2009-12-15 JP JP2009284017A patent/JP5321439B2/ja active Active
- 2010-12-02 CN CN201080057037.8A patent/CN102656889B/zh active Active
- 2010-12-02 WO PCT/JP2010/007019 patent/WO2011074197A1/ja active Application Filing
- 2010-12-02 US US13/509,759 patent/US8606026B2/en active Active
- 2010-12-02 EP EP10837234A patent/EP2515540A1/en not_active Withdrawn
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06113291A (ja) * | 1992-09-25 | 1994-04-22 | Olympus Optical Co Ltd | 画像符号化及び復号化装置 |
JPH09182082A (ja) * | 1995-12-25 | 1997-07-11 | Nippon Telegr & Teleph Corp <Ntt> | 動画像の動き補償予測符号化方法とその装置 |
JP2005159947A (ja) | 2003-11-28 | 2005-06-16 | Matsushita Electric Ind Co Ltd | 予測画像生成方法、画像符号化方法および画像復号化方法 |
JP2007043651A (ja) | 2005-07-05 | 2007-02-15 | Ntt Docomo Inc | 動画像符号化装置、動画像符号化方法、動画像符号化プログラム、動画像復号装置、動画像復号方法及び動画像復号プログラム |
Non-Patent Citations (1)
Title |
---|
"Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG(ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q.6), JVT-C151, 3rd Meeting: Fairfax, Virginia, USA", 6 May 2002, article SIU-LEONG YU ET AL.: "New Intra Prediction using Intra-Macroblock Motion Compensation", pages: 1 - 3, XP008158737 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104365107A (zh) * | 2012-03-26 | 2015-02-18 | Jvc建伍株式会社 | 图像编码装置、图像编码方法、图像编码程序、发送装置、发送方法、及发送程序、以及图像解码装置、图像解码方法、图像解码程序、接收装置、接收方法、及接收程序 |
Also Published As
Publication number | Publication date |
---|---|
CN102656889A (zh) | 2012-09-05 |
EP2515540A1 (en) | 2012-10-24 |
JP5321439B2 (ja) | 2013-10-23 |
JP2011129998A (ja) | 2011-06-30 |
CN102656889B (zh) | 2014-12-24 |
US20120275717A1 (en) | 2012-11-01 |
US8606026B2 (en) | 2013-12-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5321439B2 (ja) | 画像符号化装置、画像復号化装置、画像符号化方法、及び、画像復号化方法 | |
US11575906B2 (en) | Image coding device, image decoding device, image coding method, and image decoding method | |
EP1797722B1 (en) | Adaptive overlapped block matching for accurate motion compensation | |
CA2452632C (en) | Method for sub-pixel value interpolation | |
KR101228651B1 (ko) | 모션 추정을 수행하기 위한 방법 및 장치 | |
US9237354B2 (en) | Video coding apparatus, video coding method and video coding program, and video decoding apparatus, video decoding method and video decoding program | |
US20070098067A1 (en) | Method and apparatus for video encoding/decoding | |
WO2010137323A1 (ja) | 映像符号化装置、映像復号装置、映像符号化方法、および映像復号方法 | |
JP2005532725A (ja) | ビデオ符号化における内挿フィルタタイプの選択方法および選択システム | |
KR20100113448A (ko) | 화상 부호화 방법 및 화상 복호 방법 | |
EP1811789A2 (en) | Video coding using directional interpolation | |
JP2008507190A (ja) | 動き補償方法 | |
KR20100099723A (ko) | 화상 부호화 장치, 화상 복호 장치, 화상 부호화 방법, 및 화상 복호 방법 | |
KR101700410B1 (ko) | 인트라 모드를 이용한 쿼터 픽셀 해상도를 갖는 영상 보간 방법 및 장치 | |
JP5649296B2 (ja) | 画像符号化装置 | |
KR101691380B1 (ko) | 시프팅 매트릭스를 이용한 dct 기반의 부화소 단위 움직임 예측 방법 | |
KR20090037288A (ko) | 동영상 부호화 데이터율 제어를 위한 실시간 장면 전환검출 방법, 이를 이용한 영상통화 품질 향상 방법, 및영상통화 시스템 | |
KR101934840B1 (ko) | 인트라 모드를 이용한 쿼터 픽셀 해상도를 갖는 영상 보간 방법 및 장치 | |
AU2007237319B2 (en) | Method for sub-pixel value interpolation | |
KR20190004247A (ko) | 인트라 모드를 이용한 쿼터 픽셀 해상도를 갖는 영상 보간 방법 및 장치 | |
KR20190004246A (ko) | 인트라 모드를 이용한 쿼터 픽셀 해상도를 갖는 영상 보간 방법 및 장치 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201080057037.8 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10837234 Country of ref document: EP Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase |
Ref document number: 13509759 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2010837234 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |