WO2011125313A1 - Moving image encoding device and moving image decoding device - Google Patents
- Publication number
- WO2011125313A1 (application PCT/JP2011/001953)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- unit
- prediction
- image
- encoding
- block
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/107—Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/109—Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/12—Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
- H04N19/122—Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/567—Motion estimation based on rate distortion criteria
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/91—Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/146—Data rate or code amount at the encoder output
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/523—Motion estimation or motion compensation with sub-pixel accuracy
Definitions
- The present invention relates to a moving image encoding device that divides a moving image into predetermined regions and performs encoding in units of those regions, and a moving image decoding device that decodes an encoded moving image in units of predetermined regions.
- Conventionally, each frame of the video signal is handled in units of block data (called a macroblock) in which the 8×8-pixel color difference signals corresponding to a 16×16-pixel luminance signal are combined, and a compression method based on a motion compensation technique and an orthogonal transform / transform coefficient quantization technique is employed.
- Motion compensation is a technique that reduces signal redundancy in the time direction for each macroblock by exploiting the high correlation that exists between video frames.
- In motion compensation, a previously encoded frame is accumulated as a reference image; a block region having the smallest differential power with respect to the current macroblock (the target of motion compensated prediction) is searched for within a predetermined search range in the reference image, and the shift between the spatial position of the current macroblock and the spatial position of the found block in the reference image is encoded as a motion vector.
- Information compression is realized by orthogonally transforming and quantizing the difference signal obtained by subtracting the prediction signal resulting from the motion compensated prediction described above from the current macroblock.
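The motion compensation described above can be sketched as full-search block matching. The following is a minimal illustration, not the patent's implementation: "differential power" is approximated here by the sum of absolute differences (SAD), and the function names, block size, and search range are assumptions for the example.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized 2-D blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def motion_search(cur, ref, bx, by, bsize, search_range):
    """Full search: find the motion vector (dx, dy) that minimizes the SAD
    between the current block at (bx, by) and a block in the reference image."""
    cur_block = [row[bx:bx + bsize] for row in cur[by:by + bsize]]
    best, best_cost = (0, 0), None
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or x + bsize > len(ref[0]) or y + bsize > len(ref):
                continue  # candidate block falls outside the reference image
            cand = [row[x:x + bsize] for row in ref[y:y + bsize]]
            cost = sad(cur_block, cand)
            if best_cost is None or cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best, best_cost
```

The encoder would then subtract the block addressed by the winning motion vector from the current block to obtain the prediction difference signal.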
- In MPEG-4 Visual, the minimum block size used as a unit of motion compensated prediction is 8×8 pixels, and an 8×8-pixel DCT (discrete cosine transform) is also used for the orthogonal transform.
- In MPEG-4 AVC (Moving Picture Experts Group-4 Advanced Video Coding) / ITU-T H.264, motion compensated prediction with block sizes smaller than 8×8 pixels is provided, and encoding can be performed while adaptively switching the orthogonal transform between 8×8-pixel and 4×4-pixel integer-precision DCT for each macroblock.
- To address such problems, there has been an apparatus that switches the macroblock size depending on the resolution or content of the image (see, for example, Patent Document 1).
- According to Patent Document 1, compression coding can be performed while switching the orthogonal transform block size, or the set of selectable orthogonal transform block sizes, according to the macroblock size.
- The present invention has been made to solve the above-described problems, and an object thereof is to obtain a moving image encoding device and a moving image decoding device capable of compression coding while adaptively switching the orthogonal transform block size.
- The moving image encoding device of the present invention includes an encoding control unit that instructs the transform / quantization unit to use a predetermined transform block size from a set of transform block sizes determined in advance according to the block size of the block image.
- The transform / quantization unit divides the prediction difference signal into blocks of the transform block size specified by the encoding control unit and performs transform and quantization processing on them to generate compressed data.
- In the moving image decoding device of the present invention, the inverse quantization / inverse transform unit determines the transform block size based on the decoded encoding mode and the transform block size information included in the compression parameters, and performs inverse transform and inverse quantization processing on the compressed data in units of blocks of that transform block size.
- According to the present invention, a predetermined transform block size is selected from a set of transform block sizes determined in advance according to the block size of the block image, and the prediction difference signal is divided into blocks of that transform block size and subjected to transform and quantization processing to generate compressed data; therefore, a moving image encoding device and a moving image decoding device can be obtained that perform compression coding while adaptively switching the transform block size for each region serving as a unit of motion compensated prediction within the macroblock.
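Selecting "a predetermined transform block size from a set determined in advance according to the block size" can be sketched as a simple lookup. The mapping below is a hypothetical example of such sets (the actual sets are a design choice of the encoder, not values stated in this text), and the names are assumptions.

```python
# Hypothetical sets of selectable transform block sizes per block size
# (width, height); the concrete contents are illustrative assumptions.
TRANSFORM_SIZE_SETS = {
    (16, 16): [4, 8, 16],
    (16, 8): [4, 8],
    (8, 16): [4, 8],
    (8, 8): [4, 8],
    (4, 4): [4],
}

def selectable_transform_sizes(block_w, block_h):
    """Return the transform block sizes that may be signalled for a block
    of the given size; block sizes not in the table fall back to 4x4."""
    return TRANSFORM_SIZE_SETS.get((block_w, block_h), [4])
```

The encoder would signal which member of the set it chose, and the decoder would recover the same set from the block size and read the choice from the bit stream.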
- FIG. 1 is a block diagram showing the configuration of the moving image encoding device according to Embodiment 1 of the present invention. FIG. 2A is a diagram showing an example of the encoding modes of a picture subjected to predictive encoding in the time direction, and FIG. 2B is a diagram showing another such example.
- FIG. 3 is a block diagram showing the internal configuration of the motion compensated prediction unit of the moving image encoding device according to Embodiment 1. Further diagrams explain the method of determining the predicted value of a motion vector according to the encoding mode and show an example of adapting the transform block size according to the encoding mode.
- A block diagram shows the internal configuration of the transform / quantization unit of the moving image encoding device according to Embodiment 1.
- A block diagram shows the configuration of the moving image decoding device according to Embodiment 1 of the present invention; another shows the internal configuration of the variable length encoding unit of the moving image encoding device according to Embodiment 2 of the present invention. Further diagrams show an example of a binarization table in its state before updating, an example of a probability table, and an example of a state transition table.
- A diagram illustrates the procedure for generating context identification information: FIG. 13A shows a binarization table in binary tree representation, and FIG. 13B shows the positional relationship between the encoding target macroblock and its peripheral blocks.
- A diagram shows an example of the binarization table in its state after updating.
- A block diagram shows the internal configuration of the interpolated image generation unit included in the motion compensated prediction unit of the moving image encoding device according to Embodiment 3 of the present invention.
- Embodiment 1
- In Embodiment 1, a moving image encoding device will be described that takes each frame image of a video as input, performs motion compensated prediction between adjacent frames, applies compression processing by orthogonal transform and quantization to the obtained prediction difference signal, and then performs variable length encoding to generate a bit stream, together with a moving image decoding device that decodes that bit stream.
- FIG. 1 is a block diagram showing a configuration of a moving picture coding apparatus according to Embodiment 1 of the present invention.
- The moving image encoding device of FIG. 1 includes: a block dividing unit 2 that divides each frame image of the input video signal 1 into macroblock images of the macroblock size 4 and further divides each macroblock image into one or more sub-blocks according to the encoding mode 7, outputting the result as the macro/sub-block image 5; an intra prediction unit 8 that receives the macro/sub-block image 5 and generates a predicted image 11 by intra-frame prediction using the image signal in the intra prediction memory 28; a motion compensated prediction unit 9 that receives the macro/sub-block image 5 and generates a predicted image 17 by performing motion compensated prediction on it using the reference image 15 in the motion compensated prediction frame memory 14; a switching unit 6 that inputs the macro/sub-block image 5 to either the intra prediction unit 8 or the motion compensated prediction unit 9 according to the encoding mode 7; a subtraction unit 12 that generates the prediction difference signal 13 by subtracting the predicted image 11 or 17 output from the intra prediction unit 8 or the motion compensated prediction unit 9 from the macro/sub-block image 5 output from the block dividing unit 2; a transform / quantization unit 19 that performs transform and quantization processing on the prediction difference signal 13 to generate compressed data 21; a variable length encoding unit 23 that entropy-encodes the compressed data 21 and multiplexes it into the bit stream 30; an inverse quantization / inverse transform unit 22 that generates a local decoded prediction difference signal 24 by performing inverse quantization and inverse transform processing on the compressed data 21; and an adding unit 25 that adds the predicted image 11 or 17 to the output of the inverse quantization / inverse transform unit 22.
- The encoding control unit 3 outputs the information necessary for the processing of each unit (the macroblock size 4, the encoding mode 7, the optimal encoding mode 7a, the prediction parameters 10, the optimal prediction parameters 10a and 18a, the compression parameters 20, and the optimal compression parameters 20a). Details of the macroblock size 4 and the encoding mode 7 are described below; details of the other information will be described later.
- The encoding control unit 3 specifies the macroblock size 4 of each frame image of the input video signal 1 to the block dividing unit 2 and, for each encoding target macroblock, indicates all the encoding modes 7 selectable according to the picture type.
- The encoding control unit 3 can select a predetermined encoding mode from a set of encoding modes; this set is arbitrary, and a predetermined encoding mode can be selected, for example, from the set shown in FIG. 2A or FIG. 2B described below.
- FIG. 2A is a diagram illustrating an example of a coding mode of a P (Predictive) picture that performs predictive coding in the time direction.
- mb_modes 0 to 2 are modes (inter) for encoding macroblocks (M ⁇ L pixel blocks) by inter-frame prediction.
- mb_mode0 is a mode in which one motion vector is assigned to the entire macroblock
- mb_mode1 and 2 are modes in which the macroblock is equally divided horizontally or vertically, and different motion vectors are assigned to the divided sub-blocks.
- mb_mode3 is a mode in which a macroblock is divided into four and different coding modes (sub_mb_mode) are assigned to the divided subblocks.
- sub_mb_mode0 to 4 are encoding modes respectively assigned to the sub-blocks (m×l pixel blocks) obtained by dividing the macroblock into four when mb_mode3 is selected as the macroblock encoding mode.
- sub_mb_mode0 is a mode (intra) in which the sub-block is encoded by intra-frame prediction; the other modes are inter-frame prediction modes (inter).
- sub_mb_mode1 is a mode in which one motion vector is assigned to the entire sub-block.
- sub_mb_mode2 and 3 are modes in which the sub-block is equally divided horizontally or vertically, respectively, and a different motion vector is assigned to each divided sub-block.
- sub_mb_mode4 is a mode in which the sub-block is divided into four and a different motion vector is assigned to each divided sub-block.
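The sub-block geometry of modes like mb_mode0 to mb_mode3 above can be sketched as follows. Whether mb_mode1 splits top/bottom or left/right is not fixed by the text, so the orientation chosen here is an assumption, as are the helper's name and signature.

```python
def partition_macroblock(mb_x, mb_y, M, L, mode):
    """Return sub-block rectangles (x, y, width, height) for a macroblock
    at (mb_x, mb_y) of size M x L, modeled on FIG. 2A:
    0: whole block, 1: equal horizontal halves (top/bottom, assumed),
    2: equal vertical halves (left/right, assumed), 3: four quarters."""
    if mode == 0:
        return [(mb_x, mb_y, M, L)]
    if mode == 1:  # two M x L/2 sub-blocks, each with its own motion vector
        return [(mb_x, mb_y, M, L // 2), (mb_x, mb_y + L // 2, M, L // 2)]
    if mode == 2:  # two M/2 x L sub-blocks, each with its own motion vector
        return [(mb_x, mb_y, M // 2, L), (mb_x + M // 2, mb_y, M // 2, L)]
    if mode == 3:  # four quarters; each quarter then gets its own sub_mb_mode
        half_m, half_l = M // 2, L // 2
        return [(mb_x + dx, mb_y + dy, half_m, half_l)
                for dy in (0, half_l) for dx in (0, half_m)]
    raise ValueError("unknown mode")
```

In mode 3 each returned quarter would be partitioned again according to its sub_mb_mode, mirroring the two-level structure described above.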
- FIG. 2B is a diagram illustrating another example of the coding mode of the P picture that performs predictive coding in the time direction.
- mb_modes 0 to 6 are modes (inter) for encoding macroblocks (M ⁇ L pixel blocks) by inter-frame prediction.
- mb_mode0 is a mode in which one motion vector is assigned to the entire macroblock
- mb_mode1 to 6 are modes in which the macroblock is divided horizontally, vertically, or diagonally and a different motion vector is assigned to each divided sub-block.
- mb_mode7 is a mode in which the macroblock is divided into four and different encoding modes (sub_mb_mode) are assigned to the divided sub-blocks.
- sub_mb_mode0 to 8 are encoding modes respectively assigned to the sub-blocks (m×l pixel blocks) obtained by dividing the macroblock into four when mb_mode7 is selected as the macroblock encoding mode. sub_mb_mode0 is a mode (intra) in which the sub-block is encoded by intra-frame prediction.
- The other modes are inter-frame prediction modes (inter).
- sub_mb_mode1 is a mode in which one motion vector is assigned to the entire sub-block.
- sub_mb_mode2 to 7 are modes in which the sub-block is divided horizontally, vertically, or diagonally, respectively, and a different motion vector is assigned to each divided sub-block.
- sub_mb_mode8 is a mode in which the sub-block is divided into four and a different motion vector is assigned to each divided sub-block.
- The block dividing unit 2 divides each frame image of the input video signal 1 input to the moving image encoding device into macroblock images of the macroblock size 4 specified by the encoding control unit 3. Further, when the encoding mode 7 specified by the encoding control unit 3 is a mode that assigns different encoding modes to the sub-blocks into which the macroblock is divided (sub_mb_mode1 to 4 in FIG. 2A, or sub_mb_mode1 to 8 in FIG. 2B), the block dividing unit 2 divides the macroblock image into the sub-block images indicated by the encoding mode 7. Therefore, the block image output from the block dividing unit 2 is either a macroblock image or a sub-block image depending on the encoding mode 7. Hereinafter, this block image is referred to as the macro/sub-block image 5.
- When the horizontal or vertical size of each frame of the input video signal 1 is not an integer multiple of the corresponding size of the macroblock size 4, a frame (extended frame) is generated for each frame of the input video signal 1 by extending pixels in the horizontal or vertical direction until the frame size becomes an integer multiple of the macroblock size. As methods for extending pixels in the vertical direction, for example, there are a method of repeatedly filling in the pixels at the bottom of the original frame and a method of filling with pixels having a fixed pixel value (gray, black, white, etc.).
- The extended frame generated for each frame of the input video signal 1, whose size is an integer multiple of the macroblock size, is input to the block dividing unit 2 in place of each frame image of the input video signal 1.
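The frame extension can be sketched as below, showing both the edge-repetition and the fixed-value fill. The function name is an assumption, and extending the right edge by repeating the last pixel of each row generalizes the vertical case described above.

```python
def extend_frame(frame, mb_size, fill=None):
    """Extend a frame (a list of pixel rows) so that its width and height
    become integer multiples of mb_size. If fill is None, edge pixels are
    repeated (e.g. the bottom row is filled in repeatedly); otherwise the
    fixed pixel value fill (e.g. 128 for gray) is used."""
    h, w = len(frame), len(frame[0])
    new_h = -(-h // mb_size) * mb_size  # ceiling to a multiple of mb_size
    new_w = -(-w // mb_size) * mb_size
    out = []
    for y in range(new_h):
        if y < h:
            row = list(frame[y])
        elif fill is None:
            row = list(frame[h - 1])  # repeat the bottom row of the frame
        else:
            row = [fill] * w
        pad = new_w - w
        row += ([row[-1]] * pad) if fill is None else ([fill] * pad)
        out.append(row)
    return out
```

The block dividing unit can then cut the extended frame into an exact grid of macroblocks.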
- Since the macroblock size 4 and the frame size (horizontal and vertical size) of each frame of the input video signal 1 are multiplexed into the bit stream in units of sequences or pictures consisting of one or more frames, they are output to the variable length encoding unit 23.
- The value of the macroblock size may instead be specified by a profile or the like, without being multiplexed directly into the bit stream; in that case, identification information identifying the profile is multiplexed into the bit stream in sequence units.
- The switching unit 6 is a switch that switches the input destination of the macro/sub-block image 5 according to the encoding mode 7. When the encoding mode 7 is a mode for encoding by intra-frame prediction (hereinafter, intra-frame prediction mode), the switching unit 6 inputs the macro/sub-block image 5 to the intra prediction unit 8; when the encoding mode 7 is a mode for encoding by inter-frame prediction (hereinafter, inter-frame prediction mode), it inputs the macro/sub-block image 5 to the motion compensated prediction unit 9.
- The intra prediction unit 8 performs intra-frame prediction on the input macro/sub-block image 5 in units of the encoding target macroblock specified by the macroblock size 4 or of the sub-block specified by the encoding mode 7.
- Specifically, for every intra prediction mode included in the prediction parameters 10 instructed by the encoding control unit 3, the intra prediction unit 8 generates a predicted image 11 using the intra-frame image signal stored in the intra prediction memory 28.
- the encoding control unit 3 designates the intra prediction mode as the prediction parameter 10 corresponding to the encoding mode 7.
- As the intra prediction modes there are, for example, a mode in which the macroblock or sub-block is handled in units of 4×4-pixel blocks and a predicted image is generated using the pixels around each unit block of the image signal in the intra prediction memory 28, a similar mode operating in units of other pixel-block sizes, and a mode in which a predicted image is generated from an image obtained by reducing the macroblock or sub-block.
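As one concrete instance of generating a predicted image from the pixels around a 4×4 unit block, a DC-style prediction (averaging the reconstructed neighbors above and to the left) can be sketched. This is a generic illustration in the spirit of common intra coders, not the patent's specific mode list; the function name and the fallback value 128 are assumptions.

```python
def intra_dc_predict(recon, x, y, size=4):
    """Predict a size x size block at (x, y) as the mean of the
    already-reconstructed pixels directly above and to the left
    (DC-style intra prediction). Falls back to 128 when no neighbors
    are available (e.g. the top-left block of a frame)."""
    neighbors = []
    if y > 0:
        neighbors.extend(recon[y - 1][x:x + size])  # row just above the block
    if x > 0:
        neighbors.extend(recon[yy][x - 1] for yy in range(y, y + size))  # column to the left
    dc = round(sum(neighbors) / len(neighbors)) if neighbors else 128
    return [[dc] * size for _ in range(size)]
```

The encoder would compute such a predicted image for every candidate intra mode and keep the one with the lowest cost, as described below.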
- The motion compensated prediction unit 9 designates the reference image 15 to be used for predicted-image generation from among one or more frames of reference image data stored in the motion compensated prediction frame memory 14, performs motion compensated prediction according to the encoding mode 7 instructed by the encoding control unit 3 using the reference image 15 and the macro/sub-block image 5, and generates the prediction parameters 18 and the predicted image 17.
- For example, the motion compensated prediction unit 9 generates, as the prediction parameters 18 corresponding to the encoding mode 7, motion vectors, the identification numbers of the reference images indicated by the motion vectors (reference image indices), and the like. Details of the method of generating the prediction parameters 18 will be described later.
- The subtraction unit 12 subtracts either the predicted image 11 or the predicted image 17 from the macro/sub-block image 5 to obtain the prediction difference signal 13. A prediction difference signal 13 is generated for each of the predicted images 11 produced according to all the intra prediction modes specified by the prediction parameters 10; these prediction difference signals 13 are evaluated by the encoding control unit 3, and the optimal prediction parameters 10a including the optimal intra prediction mode are determined. For this evaluation, for example, an encoding cost J2 described later is calculated using the compressed data 21 obtained by transforming and quantizing the prediction difference signal 13, and the intra prediction mode that minimizes the encoding cost J2 is selected.
- The encoding control unit 3 evaluates the prediction difference signals 13 generated by the intra prediction unit 8 or the motion compensated prediction unit 9 for all the modes included in the encoding mode 7, and, based on the evaluation results, determines from the encoding modes 7 the optimal encoding mode 7a that yields the best encoding efficiency.
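The evaluation that picks the mode with the best encoding efficiency can be sketched as a rate-distortion decision. J = D + λ·R is a standard formulation used here purely for illustration; the costs defined in this text (such as J2) may differ, and all names below are assumptions.

```python
def select_best_mode(candidates, lam):
    """Pick the candidate that minimizes J = D + lambda * R, where each
    candidate is a tuple (mode, distortion, rate_bits). This is the
    standard rate-distortion criterion, shown as an illustrative stand-in
    for the encoding-cost evaluation in the text."""
    best_mode, best_j = None, float("inf")
    for mode, distortion, rate in candidates:
        j = distortion + lam * rate
        if j < best_j:
            best_mode, best_j = mode, j
    return best_mode, best_j
```

The encoding control unit would run such a comparison over every encoding mode (and, within a mode, over every prediction parameter) to obtain the optimal encoding mode 7a.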
- the encoding control unit 3 determines the optimum prediction parameters 10a, 18a and the optimum compression parameter 20a corresponding to the optimum coding mode 7a from the prediction parameters 10, 18 and the compression parameter 20. Each determination procedure will be described later.
- the intra prediction mode is included in the prediction parameter 10 and the optimal prediction parameter 10a.
- the prediction parameter 18 and the optimal prediction parameter 18a include a motion vector, a reference image identification number (reference image index) indicated by each motion vector, and the like.
- the compression parameter 20 and the optimum compression parameter 20a include a transform block size, a quantization step size, and the like.
- The encoding control unit 3 outputs the optimal encoding mode 7a, the optimal prediction parameters 10a and 18a, and the optimal compression parameters 20a for the encoding target macroblock or sub-block to the variable length encoding unit 23. It also outputs the optimal compression parameters 20a among the compression parameters 20 to the transform / quantization unit 19 and the inverse quantization / inverse transform unit 22.
- From among the plurality of prediction difference signals 13 generated for all the modes included in the encoding mode 7, the transform / quantization unit 19 selects the prediction difference signal 13 corresponding to the predicted image 11 or 17 generated on the basis of the optimal encoding mode 7a and the optimal prediction parameters 10a or 18a determined by the encoding control unit 3 (hereinafter, the optimal prediction difference signal 13a). The transform / quantization unit 19 computes transform coefficients by applying a transform process such as DCT to the optimal prediction difference signal 13a on the basis of the transform block size of the optimal compression parameters 20a, quantizes the transform coefficients on the basis of the quantization step size of the optimal compression parameters 20a instructed by the encoding control unit 3, and outputs the compressed data 21, i.e. the quantized transform coefficients, to the inverse quantization / inverse transform unit 22 and the variable length encoding unit 23.
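Dividing the prediction difference signal into blocks of the specified transform block size and then transforming and quantizing can be sketched as below, using a naive 2-D DCT and a uniform quantizer with a single step size. This simplifies the unit described above, and the function names are assumptions.

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an N x N block (the orthogonal transform)."""
    n = len(block)
    def c(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[y][x]
                    * math.cos((2 * x + 1) * v * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * u * math.pi / (2 * n))
                    for y in range(n) for x in range(n))
            out[u][v] = c(u) * c(v) * s
    return out

def transform_quantize(residual, tblock, qstep):
    """Split the residual into tblock x tblock blocks, DCT each one, and
    quantize the coefficients with a uniform step size qstep."""
    h, w = len(residual), len(residual[0])
    compressed = {}
    for by in range(0, h, tblock):
        for bx in range(0, w, tblock):
            blk = [row[bx:bx + tblock] for row in residual[by:by + tblock]]
            coeffs = dct2(blk)
            compressed[(bx, by)] = [[int(round(cf / qstep)) for cf in row]
                                    for row in coeffs]
    return compressed
```

The inverse path (dequantization followed by an inverse DCT on the same block grid) mirrors this sketch, which is why the decoder only needs the transform block size and the step size to reverse it.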
- The inverse quantization / inverse transform unit 22 performs inverse quantization on the compressed data 21 input from the transform / quantization unit 19 using the optimal compression parameters 20a and then applies an inverse transform process such as inverse DCT, thereby generating the local decoded prediction difference signal 24 of the optimal prediction difference signal 13a, which it outputs to the adding unit 25.
- the adding unit 25 adds the local decoded prediction difference signal 24 and the predicted image 11 or the predicted image 17 to generate a local decoded image signal 26, and outputs the local decoded image signal 26 to the loop filter unit 27 and intra. Store in the prediction memory 28. This locally decoded image signal 26 becomes an image signal for intra-frame prediction.
- the loop filter unit 27 performs a predetermined filtering process on the local decoded image signal 26 input from the adder unit 25 and stores the local decoded image 29 after the filtering process in the motion compensated prediction frame memory 14. This locally decoded image 29 becomes the reference image 15 for motion compensation prediction.
- The filtering process by the loop filter unit 27 may be performed in units of macroblocks of the input local decoded image signal 26, or may be performed collectively after the local decoded image signals 26 corresponding to one screen have been input.
- the variable length encoding unit 23 entropy-encodes the compressed data 21 output from the transform / quantization unit 19, the optimal encoding mode 7a output from the encoding control unit 3, the optimal prediction parameters 10a and 18a, and the optimal compression parameter 20a, and generates a bit stream 30 indicating the encoding result.
- the optimal prediction parameters 10a and 18a and the optimal compression parameter 20a are encoded in units corresponding to the encoding mode indicated by the optimal encoding mode 7a.
- the moving picture coding apparatus is configured so that the motion compensation prediction unit 9 and the transform / quantization unit 19 each operate in cooperation with the encoding control unit 3 to determine the encoding mode, the prediction parameters, and the compression parameters that yield the optimum coding efficiency (that is, the optimum encoding mode 7a, the optimum prediction parameters 10a and 18a, and the optimum compression parameter 20a).
- the procedure by which the encoding control unit 3 determines the encoding mode, the prediction parameters, and the compression parameters that yield the optimum coding efficiency is described below, in the order of 1. prediction parameters, 2. compression parameters, and 3. encoding mode.
- 1. Prediction Parameter Determination Procedure: when the encoding mode 7 is the inter-frame prediction mode, the procedure for determining the prediction parameter 18, which includes the motion vectors related to the inter-frame prediction and the identification number (reference image index) of the reference image indicated by each motion vector, is described.
- in cooperation with the encoding control unit 3, the motion compensation prediction unit 9 determines the prediction parameter 18 for each of all the encoding modes 7 (for example, the set of encoding modes shown in FIG. 2A or FIG. 2B) instructed from the encoding control unit 3 to the motion compensation prediction unit 9. The detailed procedure is described below.
- FIG. 3 is a block diagram showing the internal configuration of the motion compensation prediction unit 9.
- the motion compensation prediction unit 9 illustrated in FIG. 3 includes a motion compensation region division unit 40, a motion detection unit 42, and an interpolated image generation unit 43. Its input data are the encoding mode 7 input from the encoding control unit 3, the macro / sub-block image 5 input from the switching unit 6, and the reference image 15 input from the motion compensated prediction frame memory 14.
- the motion compensation region dividing unit 40 divides the macro / sub-block image 5 input from the switching unit 6 into blocks serving as motion compensation units in accordance with the encoding mode 7 instructed from the encoding control unit 3, and outputs the resulting motion compensation region block images 41 to the motion detection unit 42.
- the interpolated image generation unit 43 specifies the reference image 15 to be used for predicted image generation from among the one or more frames of reference image data stored in the motion compensated prediction frame memory 14, and the motion detection unit 42 detects the motion vector 44 within a predetermined motion search range on the specified reference image 15.
- the motion vector is detected with virtual sample (pixel) precision, as in the MPEG-4 AVC standard. In this detection method, virtual samples (pixels) are created by interpolation between the integer pixels (the pixel information held by the reference image) and used as the predicted image. In the MPEG-4 AVC standard, virtual samples with 1/8 pixel accuracy can be generated and used.
- in the MPEG-4 AVC standard, a virtual sample with 1/2 pixel accuracy is generated by interpolation using a 6-tap filter over six integer pixels in the vertical or horizontal direction.
- a virtual sample with 1/4 pixel accuracy is generated by an interpolation operation using an average value filter of adjacent 1/2 pixels or integer pixels.
- the interpolation image generation unit 43 generates a predicted image 45 of virtual pixels according to the accuracy of the motion vector 44 instructed from the motion detection unit 42.
- a motion vector detection procedure with virtual pixel accuracy will be described.
- Motion vector detection procedure I: the interpolated image generation unit 43 generates a predicted image 45 for each integer pixel accuracy motion vector 44 within the predetermined motion search range of the motion compensation region block image 41.
- the predicted image 45 (predicted image 17) generated with integer pixel accuracy is output to the subtracting unit 12 and subtracted from the motion compensation region block image 41 (macro / sub-block image 5) by the subtracting unit 12 to obtain the prediction difference signal 13.
- the encoding control unit 3 evaluates the prediction efficiency of the prediction difference signal 13 and the integer pixel precision motion vector 44 (prediction parameter 18). In this evaluation, the prediction cost J1 is calculated from the following equation (1), and the integer pixel precision motion vector 44 that minimizes the prediction cost J1 within the predetermined motion search range is determined.
- J1 = D1 + λ·R1   (1)
- D1 and R1 are used as evaluation values. D1 is the sum of absolute differences (SAD) of the prediction difference signal within the macroblock or sub-block, R1 is the estimated code amount of the motion vector and of the identification number of the reference image pointed to by that motion vector, and λ is a positive number.
- the code amount of the motion vector is obtained by predicting the motion vector value in each mode of FIG. 2A or FIG. 2B using the values of nearby motion vectors and then either entropy-coding the prediction difference value based on its probability distribution or performing a corresponding code amount estimation.
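The evaluation of equation (1) can be sketched as follows. This minimal Python sketch uses SAD as D1 and a caller-supplied bit count as R1; the candidate representation and helper names are assumptions, and the rate term stands in for the entropy-coding-based estimate described above.

```python
def sad(block, pred):
    """D1: sum of absolute differences between source block and prediction."""
    return sum(abs(a - b) for row_s, row_p in zip(block, pred)
               for a, b in zip(row_s, row_p))

def prediction_cost_j1(block, pred, rate_bits, lam):
    """J1 = D1 + lambda * R1, as in equation (1)."""
    return sad(block, pred) + lam * rate_bits

def best_candidate(block, candidates, lam):
    """candidates: iterable of (mv, predicted_block, estimated_rate_bits);
    returns the motion vector minimizing J1 over the search range."""
    return min(candidates,
               key=lambda c: prediction_cost_j1(block, c[1], c[2], lam))[0]
```

A larger λ biases the search toward cheap-to-code vectors; λ = 0 reduces the criterion to a plain SAD search.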
- FIG. 4 is a diagram for explaining a method for determining a motion vector prediction value (hereinafter referred to as a prediction vector) in each encoding mode 7 shown in FIG. 2B.
- the prediction vector PMV of the rectangular block is calculated from the following equation (2).
- median () corresponds to the median filter process and is a function that outputs the median value of the motion vectors MVa, MVb, and MVc.
- PMV = median(MVa, MVb, MVc)   (2)
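Equation (2) applies the median to each vector component; a minimal sketch, assuming two-component motion vectors:

```python
def median3(a, b, c):
    """Median of three scalars."""
    return sorted((a, b, c))[1]

def prediction_vector(mva, mvb, mvc):
    """PMV = median(MVa, MVb, MVc), taken per component (equation (2))."""
    return (median3(mva[0], mvb[0], mvc[0]),
            median3(mva[1], mvb[1], mvc[1]))
```

Only the prediction *difference* (the actual vector minus this PMV) then needs to be coded.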
- Motion vector detection procedure II: the interpolated image generation unit 43 generates a predicted image 45 for each of one or more 1/2 pixel precision motion vectors 44 positioned around the integer pixel precision motion vector determined in "motion vector detection procedure I".
- thereafter, as in "motion vector detection procedure I", the predicted image 45 (predicted image 17) generated with 1/2 pixel accuracy is subtracted from the motion compensation region block image 41 (macro / sub-block image 5) by the subtracting unit 12 to obtain the prediction difference signal 13.
- the encoding control unit 3 evaluates the prediction efficiency of the prediction difference signal 13 and the 1/2 pixel accuracy motion vector 44 (prediction parameter 18), and determines, from among the one or more 1/2 pixel accuracy motion vectors positioned around the integer pixel accuracy motion vector, the 1/2 pixel accuracy motion vector 44 that minimizes the prediction cost J1.
- Motion vector detection procedure III: similarly, the encoding control unit 3 and the motion compensation prediction unit 9 determine, from among the one or more 1/4 pixel accuracy motion vectors positioned around the 1/2 pixel accuracy motion vector determined in "motion vector detection procedure II", the 1/4 pixel accuracy motion vector 44 that minimizes the prediction cost J1.
- Motion vector detection procedure IV: in the same manner, the encoding control unit 3 and the motion compensation prediction unit 9 continue detecting the motion vector with virtual pixel accuracy until a predetermined accuracy is reached.
- in this way, motion vector detection with virtual pixel accuracy is performed until the predetermined accuracy is reached. However, for example, a threshold for the prediction cost may be determined, and when the prediction cost J1 becomes smaller than the predetermined threshold, the detection of the motion vector with virtual pixel accuracy may be stopped before the predetermined accuracy is reached.
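Procedures I to IV form a coarse-to-fine refinement with an optional early stop. The sketch below shows only the control flow; `evaluate` stands in for the cost evaluation performed by the encoding control unit 3 and the motion compensation prediction unit 9, and all names are illustrative.

```python
def refine_motion_vector(evaluate, initial_mv, max_halvings=2, threshold=None):
    """Coarse-to-fine motion vector refinement (procedures I-IV, sketch).

    evaluate(mv, step) -> (best_neighbor_mv, cost) examines candidates
    spaced `step` apart around mv; step 1.0 = integer pel, 0.5 = half pel,
    0.25 = quarter pel, and so on.
    """
    mv, cost = evaluate(initial_mv, 1.0)       # procedure I: integer pel
    step = 0.5
    for _ in range(max_halvings):              # procedures II, III, IV...
        if threshold is not None and cost < threshold:
            break                              # early stop on small cost J1
        mv, cost = evaluate(mv, step)
        step /= 2
    return mv, cost
```

Each halving of the step multiplies the candidate density by two while only a small neighborhood of the previous best vector is searched, which is what keeps the overall search cheap.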
- the motion vector may refer to a pixel outside the frame defined by the reference frame size; in that case, the pixels outside the frame need to be generated.
- one method of generating pixels outside the frame is to fill them in with the pixels at the screen edge.
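Filling out-of-frame pixels with the screen-edge pixels amounts to clamping the sample coordinates to the valid range; a minimal sketch (the function name is illustrative):

```python
def sample_padded(frame, x, y):
    """Read frame[y][x], clamping coordinates so that references outside
    the reference frame return the nearest screen-edge pixel."""
    h, w = len(frame), len(frame[0])
    xc = min(max(x, 0), w - 1)
    yc = min(max(y, 0), h - 1)
    return frame[yc][xc]
```

With this accessor, motion compensation never needs a bounds check of its own: any coordinate a motion vector produces maps to some valid pixel.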
- when the frame size of each frame of the input video signal 1 is not an integer multiple of the macroblock size and an extended frame is input in place of each frame of the input video signal 1, the size expanded to an integer multiple of the macroblock size (the extended frame size) is used as the reference frame size; otherwise, the frame size of the reference frame is the frame size of the original input video signal.
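Extending a frame size to an integer multiple of the macroblock size is a round-up operation; for example:

```python
def extended_size(size, mb_size):
    """Round size up to the next integer multiple of mb_size."""
    return ((size + mb_size - 1) // mb_size) * mb_size

# e.g. with 32x32 macroblocks, a height of 1080 extends to 1088,
# while a width of 1920 is already a multiple and stays unchanged.
```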
- as described above, the motion compensation prediction unit 9 determines, for each motion compensation region block image 41 obtained by dividing the macro / sub-block image 5 into the blocks serving as motion compensation units indicated by the encoding mode 7, the motion vector of the predetermined virtual pixel accuracy and the identification number of the reference image indicated by that motion vector, and outputs them as the prediction parameter 18.
- the motion compensation prediction unit 9 outputs the predicted image 45 (predicted image 17) generated with the prediction parameter 18 to the subtraction unit 12, where it is subtracted from the macro / sub-block image 5 to obtain the prediction difference signal 13.
- the prediction difference signal 13 output from the subtraction unit 12 is output to the transform / quantization unit 19.
- 2. Compression Parameter Determination Procedure: FIG. 5 is a diagram illustrating an example of adaptation of the transform block size according to the encoding mode 7 illustrated in FIG. 2B.
- a 32 ⁇ 32 pixel block is used as an example of the M ⁇ L pixel block.
- when the mode specified by the encoding mode 7 is one of mb_mode 0 to 6, either of 16×16 and 8×8 pixels can be adaptively selected as the transform block size. In the case of mb_mode 7, the transform block size can be adaptively selected from 8×8 or 4×4 pixels for each 16×16 pixel sub-block obtained by dividing the macroblock into four.
- the set of transform block sizes selectable for each encoding mode can be defined from among arbitrary rectangular block sizes equal to or smaller than the sub-block size into which the macroblock is equally divided by that encoding mode.
- FIG. 6 is a diagram showing another example of adaptation of the transform block size according to the coding mode 7 shown in FIG. 2B.
- when the coding mode 7 designates the above-described mb_mode 0, 5, or 6, a transform block size corresponding to the shape of the sub-block serving as the motion compensation unit can be selected in addition to 16×16 and 8×8 pixels as selectable transform block sizes.
- in the case of mb_mode0, the transform block size can be adaptively selected from 16×16, 8×8, and 32×32 pixels.
- in the case of mb_mode5, it can be adaptively selected from 16×16, 8×8, and 16×32 pixels.
- in the case of mb_mode6, it can be adaptively selected from 16×16, 8×8, and 32×16 pixels. Although not illustrated, in the case of mb_mode7 it can be adaptively selected from 16×16, 8×8, and 16×32 pixels, and in the cases of mb_mode1 to 4, the non-rectangular region may be adapted to select from 16×16 and 8×8 pixels while the rectangular region is adapted to select from 8×8 and 4×4 pixels.
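The mode-dependent sets of selectable transform block sizes can be represented as a simple lookup keyed by encoding mode. The sketch below covers only the FIG. 6 examples stated above (mb_mode0, 5, 6, 7); the table layout and the ID scheme are assumptions for illustration.

```python
# Selectable transform block sizes per encoding mode (FIG. 6 examples).
TRANSFORM_SIZE_SETS = {
    "mb_mode0": [(16, 16), (8, 8), (32, 32)],
    "mb_mode5": [(16, 16), (8, 8), (16, 32)],
    "mb_mode6": [(16, 16), (8, 8), (32, 16)],
    "mb_mode7": [(16, 16), (8, 8), (16, 32)],
}

def transform_size_id(mode, size):
    """Identification information (ID) of a size within its mode's set;
    only this small ID needs to be signalled in the bitstream, since
    encoder and decoder share the same per-mode set."""
    return TRANSFORM_SIZE_SETS[mode].index(size)
```

Because both sides hold identical sets, the decoder recovers the actual block dimensions from the decoded mode plus the signalled ID.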
- the encoding control unit 3 sets, as the compression parameter 20, the transform block size set corresponding to the encoding mode 7 as illustrated in FIGS. 5 and 6.
- that is, a set of transform block sizes selectable in accordance with the encoding mode 7 of the macroblock is determined in advance so that the transform block size can be adaptively selected in units of macroblocks or sub-blocks.
- similarly, a set of selectable transform block sizes may be determined in advance in accordance with the encoding mode 7 of the sub-blocks obtained by dividing the macroblock (sub_mb_mode 1 to 8 in FIG. 2B), so that the transform block size can be adaptively selected for each divided block unit.
- in any case, the encoding control unit 3 only needs to determine in advance a set of transform block sizes corresponding to the encoding mode 7 so that one of them can be selected adaptively.
- the transform / quantization unit 19 cooperates with the encoding control unit 3 to determine the optimum transform block size from the set of transform block sizes, in units of the macroblock designated by the macroblock size 4 or of the sub-blocks obtained by further dividing that macroblock according to the encoding mode 7. The detailed procedure is described below.
- FIG. 7 is a block diagram showing the internal configuration of the transform / quantization unit 19.
- the transform / quantization unit 19 illustrated in FIG. 7 includes a transform block size dividing unit 50, a transform unit 52, and a quantization unit 54.
- its input data are the compression parameter 20 (transform block size, quantization step size, etc.) input from the encoding control unit 3 and the prediction difference signal 13 input from the subtracting unit 12.
- the transform block size dividing unit 50 divides the prediction difference signal 13 of each macroblock or sub-block for which the transform block size is to be determined into blocks corresponding to the transform block size of the compression parameter 20, and outputs them as transform target blocks 51 to the transform unit 52. If a plurality of transform block sizes are selected and designated for one macroblock or sub-block by the compression parameter 20, the transform target blocks 51 of the respective transform block sizes are sequentially output to the transform unit 52.
- the transform unit 52 performs transform processing on the input transform target block 51 in accordance with a transform method such as DCT, an integer transform in which the DCT transform coefficients are approximated by integers, or the Hadamard transform, and outputs the generated transform coefficients 53 to the quantization unit 54.
- the quantization unit 54 quantizes the input transform coefficients 53 according to the quantization step size of the compression parameter 20 instructed from the encoding control unit 3, and outputs the compressed data 21, which is the quantized transform coefficients, to the inverse quantization / inverse transform unit 22 and the encoding control unit 3. Note that, when a plurality of transform block sizes are selected and designated for one macroblock or sub-block by the compression parameter 20, the transform unit 52 and the quantization unit 54 perform the above processing for all of those transform block sizes and output the respective compressed data 21.
- the compressed data 21 output from the quantization unit 54 is input to the encoding control unit 3 and is used for evaluating the encoding efficiency with respect to the transform block size of the compression parameter 20.
- the encoding control unit 3 uses the compressed data 21 obtained for all the selectable transform block sizes of each encoding mode included in the encoding mode 7 to calculate, for example, the encoding cost J2 from the following equation (3), and selects the transform block size that minimizes the encoding cost J2.
- J2 = D2 + λ·R2   (3)
- D2 and R2 are used as evaluation values. As D2, the sum of squared distortion between the macro / sub-block image 5 and the local decoded image signal 26 is used, where the local decoded image signal 26 is obtained by adding the predicted image 17 to the local decoded prediction difference signal 24 produced by inverse transform / inverse quantization processing of the compressed data 21 obtained for the transform block size in the inverse quantization / inverse transform unit 22. As R2, the code amount (or estimated code amount) obtained by actually encoding, in the variable length encoding unit 23, the compressed data 21 obtained for the transform block size together with the encoding mode 7 and the prediction parameters 10 and 18 relating to that compressed data 21 is used.
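The selection by equation (3) can be sketched as follows, with D2 supplied as a sum of squared distortion and R2 as an (estimated) code amount; the data layout is an assumption for illustration.

```python
def encoding_cost_j2(ssd, rate_bits, lam):
    """J2 = D2 + lambda * R2, as in equation (3): D2 is the squared
    distortion of the locally decoded block against the source block,
    R2 the (estimated) code amount including mode and parameters."""
    return ssd + lam * rate_bits

def select_transform_size(results, lam):
    """results: iterable of (transform_size, ssd, rate_bits);
    returns the transform block size minimizing J2."""
    return min(results, key=lambda r: encoding_cost_j2(r[1], r[2], lam))[0]
```

Note that unlike the SAD-based cost J1 used during motion search, J2 measures distortion after the full transform, quantization, and reconstruction loop, so it reflects the actual coding outcome of each candidate block size.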
- after determining the optimal encoding mode 7a according to "3. Encoding Mode Determination Procedure" described later, the encoding control unit 3 selects the transform block size corresponding to the determined optimal encoding mode 7a, includes it in the optimal compression parameter 20a, and outputs it to the variable length coding unit 23.
- the variable length coding unit 23 entropy codes the optimum compression parameter 20a and then multiplexes it into the bit stream 30.
- here, the transform block size is selected from the transform block size set (illustrated in FIGS. 5 and 6) defined in advance according to the optimal encoding mode 7a of the macroblock or sub-block; therefore, identification information such as an ID may be assigned to each transform block size included in the set, and that identification information may be entropy-coded as the transform block size information and multiplexed into the bitstream 30. In this case, the identification information of the transform block size set is also set on the decoding device side.
- note that, when only one transform block size is included in the set, the transform block size can be determined automatically from the set on the decoding device side, so the transform block size information need not be multiplexed into the bit stream 30.
- 3. Encoding Mode Determination Procedure: when the prediction parameters 10 and 18 and the compression parameter 20 have been determined for each encoding mode 7 according to the above "1. Prediction Parameter Determination Procedure" and "2. Compression Parameter Determination Procedure", the encoding control unit 3 further uses the compressed data 21 obtained by transforming and quantizing the prediction difference signal 13 obtained with each encoding mode 7 and the prediction parameters 10 and 18 and compression parameter 20 at that time, determines from the above equation (3) the encoding mode 7 for which the encoding cost J2 is smallest, and selects that encoding mode 7 as the optimal encoding mode 7a of the macroblock.
- the optimal encoding mode 7a may also be determined from all the encoding modes obtained by adding the skip mode, as a macroblock or sub-block mode, to the encoding modes shown in FIG. 2A or 2B.
- the skip mode is a mode in which, on the encoding device side, a predicted image motion-compensated using the motion vector of an adjacent macroblock or sub-block is used as the local decoded image signal; since no prediction parameters or compression parameters other than the encoding mode need to be calculated and multiplexed into the bit stream, encoding with a suppressed code amount is possible.
- on the decoding device side, a predicted image motion-compensated using the motion vectors of adjacent macroblocks or sub-blocks is output as the decoded image signal by the same procedure as on the encoding device side.
- for the extension region, the encoding mode may be determined so that only the skip mode is used, thereby suppressing the amount of code consumed in the extension region.
- as described above, the encoding control unit 3 outputs to the variable length coding unit 23 the optimal encoding mode 7a that yields the optimum coding efficiency, determined by the above "1. Prediction Parameter Determination Procedure", "2. Compression Parameter Determination Procedure", and "3. Encoding Mode Determination Procedure"; it also selects the prediction parameters 10 and 18 corresponding to the optimal encoding mode 7a as the optimal prediction parameters 10a and 18a, likewise selects the corresponding compression parameter 20 as the optimal compression parameter 20a, and outputs them to the variable length coding unit 23.
- the variable length encoding unit 23 entropy-encodes the optimal encoding mode 7a, the optimal prediction parameters 10a and 18a, and the optimal compression parameter 20a and multiplexes them into the bit stream 30.
- the optimal prediction difference signal 13a obtained from the predicted image 11 or 17 based on the determined optimal encoding mode 7a, optimal prediction parameters 10a and 18a, and optimal compression parameter 20a is transformed and quantized into the compressed data 21 by the transform / quantization unit 19 as described above, and the compressed data 21 is entropy-coded by the variable length coding unit 23 and multiplexed into the bit stream 30.
- the compressed data 21 passes through an inverse quantization / inverse transform unit 22 and an addition unit 25 to become a locally decoded image signal 26 and is input to a loop filter unit 27.
- FIG. 8 is a block diagram showing the configuration of the video decoding apparatus according to Embodiment 1 of the present invention.
- the moving picture decoding apparatus shown in FIG. 8 includes a variable length decoding unit 61 that entropy-decodes, from the bitstream 60, the optimal encoding mode 62 in units of macroblocks, and the optimal prediction parameter 63, the compressed data 64, and the optimal compression parameter 65 in units of the macroblocks or sub-blocks divided according to the decoded optimal encoding mode 62.
- it further includes an intra prediction unit 69 that, when the optimal prediction parameter 63 is input, generates a predicted image 71 using the intra prediction mode included in the optimal prediction parameter 63 and the decoded image signal in the intra prediction memory 77, and a motion compensation prediction unit 70 that performs motion compensation prediction using the motion vector included in the optimal prediction parameter 63 and the reference image 76 in the motion compensated prediction frame memory 75 specified by the reference image index included in the optimal prediction parameter 63, to generate a predicted image 72.
- it further includes a switching unit 68 that inputs the optimal prediction parameter 63 decoded by the variable length decoding unit 61 to either the intra prediction unit 69 or the motion compensation prediction unit 70 in accordance with the decoded optimal encoding mode 62, and an inverse quantization / inverse transform unit 66 that performs inverse quantization and inverse transform processing on the compressed data 64 using the optimal compression parameter 65 to generate a prediction difference signal decoded value 67.
- it further includes an addition unit 73 that adds the prediction difference signal decoded value 67 and the predicted image 71 or 72 output from either the intra prediction unit 69 or the motion compensation prediction unit 70 to generate a decoded image 74, an intra prediction memory 77 that stores the decoded image 74, a loop filter unit 78 that performs filtering on the decoded image 74 to generate a reproduced image 79, and a motion compensated prediction frame memory 75 that stores the reproduced image 79.
- the variable length decoding unit 61 performs entropy decoding processing on the bit stream 60 and decodes the macroblock size and frame size in units of sequences, each composed of pictures of one or more frames, or in units of pictures. When the macroblock size is not multiplexed directly into the bitstream but is specified by a profile or the like, the macroblock size is determined based on profile identification information decoded from the bitstream in sequence units.
- from the decoded macroblock size and frame size, the number of macroblocks included in each frame is determined, and the optimal encoding mode 62, optimal prediction parameter 63, compressed data 64, and optimal compression parameter 65 (transform block size information, quantization step size, etc.) of each macroblock included in the frame are decoded.
- the optimal encoding mode 62, optimal prediction parameter 63, compressed data 64, and optimal compression parameter 65 decoded on the decoding device side correspond to the optimal encoding mode 7a, optimal prediction parameters 10a and 18a, compressed data 21, and optimal compression parameter 20a encoded on the encoding device side.
- the transform block size information of the optimal compression parameter 65 is identification information specifying the transform block size selected, on the encoding device side, from the transform block size set defined in advance in units of macroblocks or sub-blocks according to the encoding mode 7; the decoding device side specifies the transform block size of the macroblock or sub-block from the optimal encoding mode 62 and the transform block size information of the optimal compression parameter 65.
- the inverse quantization / inverse transform unit 66 uses the compressed data 64 and the optimal compression parameter 65 input from the variable length decoding unit 61 to perform inverse quantization / inverse transform processing in units of the blocks specified by the transform block size information, and calculates the prediction difference signal decoded value 67.
- the variable length decoding unit 61 determines the prediction vector by the process shown in FIG. 4 with reference to the motion vectors of the already-decoded peripheral blocks, and obtains the decoded value of the motion vector by adding the prediction difference value decoded from the bit stream 60 to the prediction vector. The variable length decoding unit 61 includes the decoded value of the motion vector in the optimal prediction parameter 63 and outputs it to the switching unit 68.
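The decoder-side reconstruction mirrors the encoder's prediction: the same median prediction vector (FIG. 4) is formed from the already-decoded neighboring motion vectors, and the decoded difference is added back. A minimal sketch, assuming two-component vectors:

```python
def median3(a, b, c):
    """Median of three scalars."""
    return sorted((a, b, c))[1]

def decode_motion_vector(mva, mvb, mvc, mvd):
    """Decoded MV = prediction vector (per-component median of neighbors,
    as in FIG. 4) plus the prediction difference value mvd decoded
    from the bitstream."""
    pmv = (median3(mva[0], mvb[0], mvc[0]),
           median3(mva[1], mvb[1], mvc[1]))
    return (pmv[0] + mvd[0], pmv[1] + mvd[1])
```

Because encoder and decoder compute the identical PMV from identical decoded neighbors, only the small difference value has to be transmitted.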
- the switching unit 68 is a switch that switches the output destination of the optimal prediction parameter 63 in accordance with the optimal encoding mode 62. When the optimal encoding mode 62 indicates the intra-frame prediction mode, the switching unit 68 outputs the optimal prediction parameter 63 (intra prediction mode) input from the variable length decoding unit 61 to the intra prediction unit 69; when the optimal encoding mode 62 indicates the inter-frame prediction mode, it outputs the optimal prediction parameter 63 (motion vector, identification number (reference image index) of the reference image indicated by each motion vector, etc.) to the motion compensation prediction unit 70.
- the intra prediction unit 69 refers to the decoded image (decoded image signal in the frame) 74a stored in the intra prediction memory 77, and generates and outputs a predicted image 71 corresponding to the intra prediction mode indicated by the optimal prediction parameter 63.
- the method by which the intra prediction unit 69 generates the predicted image 71 is the same as the operation of the intra prediction unit 8 on the encoding device side; however, whereas the intra prediction unit 8 generates predicted images for all the intra prediction modes instructed by the encoding mode 7, the intra prediction unit 69 differs in that it generates only the predicted image 71 corresponding to the intra prediction mode indicated by the optimal prediction parameter 63.
- the motion compensation prediction unit 70 generates and outputs a predicted image 72 from the one or more reference images 76 stored in the motion compensated prediction frame memory 75, based on the motion vector, reference image index, and the like indicated by the input optimal prediction parameter 63.
- the method by which the motion compensation prediction unit 70 generates the predicted image 72 corresponds to the operation of the motion compensation prediction unit 9 on the encoding device side with the process of searching for a motion vector from a plurality of reference images (the operations of the motion detection unit 42 and the interpolated image generation unit 43 shown in FIG. 3) removed; only the process of generating the predicted image 72 according to the optimal prediction parameter 63 given from the variable length decoding unit 61 is performed. As in the encoding device, when the motion vector refers to a pixel outside the frame defined by the reference frame size, the motion compensation prediction unit 70 generates the predicted image 72 by filling in the pixels outside the frame with the pixels at the screen edge.
- the reference frame size may be defined by a size obtained by extending the decoded frame size to an integer multiple of the decoded macroblock size, or may be defined by the decoded frame size; the decoding device side determines the reference frame size by the same procedure as the encoding device side.
- the adding unit 73 adds either one of the predicted image 71 or the predicted image 72 and the predicted difference signal decoded value 67 output from the inverse quantization / inverse transform unit 66 to generate a decoded image 74.
- the decoded image 74 is stored in the intra prediction memory 77 and input to the loop filter unit 78 in order to be used as a reference image (decoded image 74a) for generating an intra predicted image of a subsequent macroblock.
- the loop filter unit 78 performs the same operation as the loop filter unit 27 on the encoding device side, generates a reproduced image 79, and outputs it from the moving image decoding device. Further, the reproduced image 79 is stored in the motion compensated prediction frame memory 75 for use as a reference image 76 for subsequent prediction image generation. Note that the size of the reproduced image obtained after decoding all the macroblocks in the frame is an integral multiple of the macroblock size. When the size of the reproduced image is larger than the decoded frame size corresponding to the frame size of each frame of the video signal input to the encoding device, the reproduced image includes an extended region in the horizontal direction or the vertical direction. In this case, a decoded image obtained by removing the decoded image of the extended area portion from the reproduced image is output from the decoding device.
- the decoded image in the extended region portion of the reproduced image stored in the motion compensated prediction frame memory 75 is not referred to in the subsequent predicted image generation. Therefore, the decoded image obtained by removing the decoded image of the extended area portion from the reproduced image may be stored in the motion compensated prediction frame memory 75.
- as described above, according to the moving picture encoding apparatus of Embodiment 1, a transform block size set including a plurality of transform block sizes is determined in advance for the macro / sub-block image 5 divided according to the encoding mode 7 of the macroblock, in accordance with the size of the macroblock or sub-block; the encoding control unit 3 selects, as part of the optimum compression parameter 20a, the one transform block size with the best coding efficiency from the transform block size set, and the transform / quantization unit 19 divides the optimum prediction difference signal 13a into blocks of the transform block size included in the optimum compression parameter 20a and performs transform and quantization processing on them to generate the compressed data 21. Therefore, compared with the conventional method in which the transform block size is fixed regardless of the macroblock or sub-block, the quality of the encoded video can be improved with an equal code amount.
- in addition, the variable length encoding unit 23 is configured to multiplex into the bitstream 30 the transform block size adaptively selected from the set of transform block sizes according to the encoding mode 7; correspondingly, the variable length decoding unit 61 decodes the optimal compression parameter 65 from the bit stream 60 in units of macroblocks or sub-blocks, and the inverse quantization / inverse transform unit 66 determines the transform block size based on the transform block size information included in the optimal compression parameter 65 and performs inverse transform and inverse quantization processing on the compressed data 64 in units of that transform block size.
- therefore, the video decoding device can decode the compressed data by selecting the transform block size used on the encoding device side from a set of transform block sizes defined in the same way as in the video encoding device, and can thus correctly decode the bitstream encoded by the moving picture encoding apparatus according to Embodiment 1.
- Embodiment 2. In Embodiment 2, modifications of the variable length encoding unit 23 of the video encoding device and the variable length decoding unit 61 of the video decoding device according to Embodiment 1 will be described.
- FIG. 9 is a block diagram showing an internal configuration of the variable length coding unit 23 of the moving picture coding apparatus according to Embodiment 2 of the present invention.
- in FIG. 9, the same or equivalent parts are denoted as in the earlier figures. The configuration of the video encoding apparatus according to Embodiment 2 is the same as that of Embodiment 1, and the operation of each component other than the variable length encoding unit 23 is also the same, so FIG. 1 to FIG. 8 are referred to.
- the apparatus configuration and processing method described below assume that the encoding mode set shown in FIG. 2A is used; needless to say, however, the present invention is also applicable to an apparatus configuration and processing method that assume the encoding mode set shown in FIG. 2B.
- the variable length encoding unit 23 illustrated in FIG. 9 includes: a binarization table memory 105 that stores a binarization table specifying the correspondence between the index values of the multilevel signal representing the encoding mode 7 (or the optimal prediction parameters 10a and 18a and the optimal compression parameter 20a) and binary signals; a binarization unit 92 that uses the binarization table to convert the index value of the multilevel signal of the optimal encoding mode 7a (or the optimal prediction parameters 10a and 18a and the optimal compression parameter 20a) selected by the encoding control unit 3 into a binary signal 103; an arithmetic coding processing operation unit 104 that, referring to the context identification information 102 generated by the context generation unit 99, the context information memory 96, the probability table memory 97, and the state transition table memory 98, arithmetically encodes the binary signal 103 converted by the binarization unit 92 into an encoded bit string 111 and multiplexes the encoded bit string into the bitstream 30; and a frequency information generation unit 93 that generates frequency information on the occurrence of the encoding target parameters.
- next, the variable-length encoding procedure of the variable length encoding unit 23 will be described, using the optimal encoding mode 7a of a macroblock output from the encoding control unit 3 as an example of an entropy-coded parameter.
- the optimal prediction parameters 10a and 18a and the optimal compression parameter 20a, which are also parameters to be encoded, can be variable-length encoded in the same procedure as the optimal encoding mode 7a, so their description is omitted.
- the encoding control unit 3 outputs a context information initialization flag 91, a type signal 100, peripheral block information 101, and a binarization table update flag 113. Details of each information will be described later.
- the initialization unit 90 initializes the context information 106 stored in the context information memory 96 in accordance with the context information initialization flag 91 instructed from the encoding control unit 3, and sets the initial state. Details of the initialization processing by the initialization unit 90 will be described later.
- the binarization unit 92 refers to the binarization table stored in the binarization table memory 105, converts the index value of the multilevel signal representing the type of the optimal encoding mode 7a input from the encoding control unit 3 into a binary signal 103, and outputs it to the arithmetic coding processing operation unit 104.
- FIG. 10 is a diagram illustrating an example of a binarization table held in the binarization table memory 105.
- the “coding mode” shown in FIG. 10 comprises the coding modes (mb_mode 0 to 3) shown in FIG. 2A and mb_skip, a mode in which a predicted image is generated by motion compensation using the motion vector of a macroblock adjacent to the encoding target macroblock.
- the index values of these encoding modes are each binarized with 1 to 3 bits and stored as “binary signal”.
- each bit of the binary signal is identified by a “bin” number.
- a small index value is assigned to a coding mode having a high occurrence frequency, and the binary signal is also set to be as short as 1 bit.
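A minimal sketch of such a binarization table follows; the codewords are illustrative assumptions, since the actual entries of FIG. 10 are not reproduced in this excerpt.

```python
# Illustrative binarization table in the spirit of FIG. 10: small index
# values go to frequent coding modes and get short binary signals; the
# position of each bit is its "bin" number. The codewords are assumed
# values, not the patent's actual table.
BINARIZATION_TABLE = {
    0: "1",    # most frequent mode: 1-bit binary signal
    1: "01",
    2: "001",
    3: "000",
}

def binarize(index_value):
    """Convert a multilevel-signal index value to its binary signal."""
    return BINARIZATION_TABLE[index_value]
```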
- the optimal encoding mode 7a output from the encoding control unit 3 is input not only to the binarization unit 92 but also to the frequency information generation unit 93.
- the frequency information generation unit 93 counts the occurrence frequency of the index value of the encoding mode included in the optimal encoding mode 7a (the selection frequency of the encoding mode selected by the encoding control unit 3) to generate the frequency information 94, and outputs it to the binarization table update unit 95 described later.
- the probability table memory 97 is a memory holding a table that stores a plurality of combinations of the symbol value, “0” or “1”, with the higher occurrence probability (MPS: Most Probable Symbol) of each bin included in the binary signal 103 and that occurrence probability.
- FIG. 11 is a diagram showing an example of the probability table held in the probability table memory 97. In FIG. 11, “probability table numbers” are assigned to discrete probability values (“occurrence probabilities”) between 0.5 and 1.0.
- the state transition table memory 98 is a memory holding a table that stores a plurality of combinations of the “probability table number” stored in the probability table memory 97 and the state transition from the probability state before encoding of the MPS (“0” or “1”) indicated by that probability table number to the probability state after encoding.
- FIG. 12 is a diagram illustrating an example of a state transition table held in the state transition table memory 98.
- “Probability table number”, “Probability transition after LPS encoding”, and “Probability transition after MPS encoding” in FIG. 12 each correspond to the probability table numbers shown in FIG. 11.
- for example, consider the probability state of “probability table number 1” surrounded by a frame in FIG. 12 (MPS occurrence probability 0.527 from FIG. 11). When an LPS (Least Probable Symbol) occurs, the probability state transitions according to “probability transition after LPS encoding”; that is, the occurrence probability of the MPS is reduced by the occurrence of the LPS.
- conversely, when an MPS occurs, “probability transition after MPS encoding” indicates a transition of the probability state to probability table number 2 (MPS occurrence probability 0.550 from FIG. 11); that is, the occurrence probability of the MPS is increased by the occurrence of the MPS.
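The MPS/LPS state transition can be sketched as a lookup in two tables. Only the 0.527 and 0.550 probabilities quoted above come from the text; all other table entries, including every transition target except 1 → 2 after an MPS, are hypothetical placeholders.

```python
# Sketch of the probability-state machinery of FIGs. 11 and 12.
# OCCURRENCE_PROB maps probability table numbers to MPS probabilities;
# entries 1 (0.527) and 2 (0.550) come from the text, the rest are
# assumed values for illustration.
OCCURRENCE_PROB = {0: 0.500, 1: 0.527, 2: 0.550}
NEXT_AFTER_MPS = {0: 1, 1: 2, 2: 2}   # MPS occurred: MPS probability rises
NEXT_AFTER_LPS = {0: 0, 1: 0, 2: 1}   # LPS occurred: MPS probability falls

def transition(table_no, symbol_was_mps):
    table = NEXT_AFTER_MPS if symbol_was_mps else NEXT_AFTER_LPS
    return table[table_no]
```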
- the context generation unit 99 generates context identification information 102 for each bin of the binary signal 103 obtained by binarizing the encoding target parameter, with reference to the type signal 100 input from the encoding control unit 3, which indicates the type of the parameter to be encoded (the optimal encoding mode 7a, the optimal prediction parameters 10a and 18a, or the optimal compression parameter 20a), and to the peripheral block information 101.
- the type signal 100 is the optimum encoding mode 7a of the encoding target macroblock.
- the peripheral block information 101 is the optimal encoding mode 7a of the macroblock adjacent to the encoding target macroblock.
- FIG. 13A is a diagram showing the binarization table of FIG. 10 in binary tree representation.
- here, the encoding target macroblock indicated by a thick frame in FIG. 13B and the peripheral blocks A and B adjacent to it will be described as an example.
- a black circle is called a node, and a line connecting the nodes is called a path.
- An index of a multilevel signal to be binarized is assigned to the end node of the binary tree.
- the depth of the binary tree corresponds to the bin number, and the bit string obtained by concatenating the symbols (0 or 1) assigned to the paths from the root node down to a terminal node is the binary signal 103 corresponding to the index of the multilevel signal assigned to that terminal node.
- one or more pieces of context identification information are prepared according to the information of the peripheral blocks A and B.
- the context generation unit 99 refers to the peripheral block information 101 of the adjacent peripheral blocks A and B and selects one of the three pieces of context identification information C0, C1, and C2 according to the following equation (4).
- the context generation unit 99 outputs the selected context identification information as the context identification information 102.
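Equation (4) itself is not reproduced in this excerpt; as a hedged stand-in, a common CABAC-style rule selects among C0, C1, and C2 by counting how many of the neighbors A and B are coded in a mode other than the skip mode. The mode indices below are assumptions.

```python
# Hypothetical stand-in for equation (4): select the context by counting
# how many of the neighboring blocks A and B are coded in a mode other
# than mb_skip (mode index 0 is assumed to be mb_skip here).
def select_context(mode_a, mode_b, skip_index=0):
    count = int(mode_a != skip_index) + int(mode_b != skip_index)
    return ("C0", "C1", "C2")[count]
```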
- the context information identified by the context identification information 102 holds an MPS value (0 or 1) and a probability table number approximating its occurrence probability, and starts in an initial state; this context information is stored in the context information memory 96.
- the arithmetic coding processing operation unit 104 arithmetically codes the 1 to 3 bit binary signal 103 input from the binarizing unit 92 for each bin to generate an encoded bit string 111 and multiplexes the encoded bit string 111 into the bit stream 30.
- an arithmetic coding procedure based on the context information will be described.
- the arithmetic coding processing operation unit 104 refers to the context information memory 96 to obtain context information 106 based on the context identification information 102 corresponding to the bin 0 of the binary signal 103. Subsequently, the arithmetic coding processing calculation unit 104 refers to the probability table memory 97 and specifies the MPS occurrence probability 108 of bin 0 corresponding to the probability table number 107 held in the context information 106.
- the arithmetic coding processing operation unit 104 then arithmetically encodes the symbol value 109 (0 or 1) of bin 0 based on the MPS value (0 or 1) held in the context information 106 and the identified MPS occurrence probability 108. Subsequently, the arithmetic coding processing operation unit 104 refers to the state transition table memory 98 and obtains the probability table number 110 after symbol encoding of bin 0, based on the probability table number 107 held in the context information 106 and the symbol value 109 of bin 0 that was just arithmetically encoded.
- the arithmetic coding processing operation unit 104 then updates the probability table number of the context information 106 of bin 0 stored in the context information memory 96 (that is, the probability table number 107) to the probability table number after the state transition (that is, the probability table number 110) obtained from the state transition table memory 98.
- similarly to bin 0, the arithmetic coding processing operation unit 104 arithmetically encodes bins 1 and 2 based on the context information 106 identified by their respective context identification information 102, and updates the context information 106 after symbol encoding of each bin.
- the arithmetic coding processing operation unit 104 outputs the encoded bit string 111 obtained by arithmetically encoding all bin symbols, and the variable length encoding unit 23 multiplexes it into the bitstream 30.
- the context information 106 identified by the context identification information 102 is updated every time a symbol is arithmetically encoded. That is, it means that the probability state of each node transitions for each symbol encoding.
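The per-bin loop described above (look up the context information, code the symbol against the MPS probability, then advance the probability state) can be sketched as follows. The true interval-subdivision arithmetic coder is replaced by recording (symbol, probability) pairs, and all table contents are illustrative assumptions.

```python
# Hedged sketch of the per-bin processing of the arithmetic coding
# processing operation unit 104. Each context holds an MPS value and a
# probability table number; after each symbol the table number is
# advanced via the state transition table, as the text describes.
def encode_bins(binary_signal, contexts, ctx_ids, prob_table, transitions):
    coded = []
    for bin_no, symbol in enumerate(binary_signal):
        ctx = contexts[ctx_ids[bin_no]]           # context information
        p_mps = prob_table[ctx["table"]]          # MPS occurrence probability
        p = p_mps if symbol == ctx["mps"] else 1.0 - p_mps
        coded.append((symbol, p))                 # stand-in for interval coding
        ctx["table"] = transitions[ctx["table"]][symbol == ctx["mps"]]
    return coded
```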
- the initialization of the context information 106, that is, the resetting of the probability state, is performed by the initialization unit 90 described above.
- the initialization unit 90 performs the initialization in response to the instruction by the context information initialization flag 91 from the encoding control unit 3; this initialization is performed at the head of a slice, for example.
- a plurality of sets of initial states of each piece of context information 106 (an MPS value and an initial probability table number approximating the occurrence probability) may be prepared, and the encoding control unit 3 may include in the context information initialization flag 91 information designating which initial state to select, thereby instructing the initialization unit 90.
- the binarization table update unit 95 updates the binarization table in the binarization table memory 105 in accordance with the binarization table update flag 113 instructed by the encoding control unit 3, with reference to the frequency information 94 generated by the frequency information generation unit 93, which indicates the occurrence frequency of the index values of the encoding target parameter (here, the optimal encoding mode 7a). The procedure for updating the binarization table by the binarization table update unit 95 is described below.
- the correspondence between coding modes and index values in the binarization table is updated according to the occurrence frequency of the coding modes specified by the optimal encoding mode 7a, the encoding target parameter, so that the coding mode with the highest occurrence frequency is binarized with a short codeword, thereby reducing the code amount.
- FIG. 14 is a diagram illustrating an example of the binarization table after the update; it shows the post-update state when the pre-update binarization table is assumed to be in the state illustrated in FIG. 10. For example, when the occurrence frequency of mb_mode3 is the highest, the binarization table update unit 95 assigns the smallest index value to mb_mode3 so that a binary signal with a short codeword is assigned to it.
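The update rule can be sketched as a re-sort of the coding modes by observed frequency; the mode names and counts used below are illustrative.

```python
# Sketch of the binarization table update of unit 95: reassign index
# values so that the most frequent coding mode gets the smallest index,
# and hence the shortest codeword (as in the FIG. 14 example).
def update_binarization_table(frequency_info):
    # frequency_info: {coding mode: occurrence count} (frequency information 94)
    by_frequency = sorted(frequency_info, key=frequency_info.get, reverse=True)
    return {mode: index for index, mode in enumerate(by_frequency)}
```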
- when the binarization table is updated, the binarization table update unit 95 needs to generate binarization table update identification information 112 that enables the decoding device to identify the updated binarization table, and to multiplex it into the bitstream 30. For example, when a plurality of binarization tables are provided for each encoding target parameter, an ID identifying each binarization table is assigned in advance on both the encoding device side and the decoding device side, and the binarization table update unit 95 may output the ID of the updated binarization table as the binarization table update identification information 112 and multiplex it into the bitstream 30.
- the encoding control unit 3 refers to the frequency information 94 of the encoding target parameter at the head of a slice and, when it determines that the occurrence frequency distribution of the encoding target parameter has changed beyond a predetermined allowable range, outputs the binarization table update flag 113.
- the variable length encoding unit 23 may multiplex the binarization table update flag 113 into the slice header of the bitstream 30. Further, when the binarization table update flag 113 indicates “the binarization table is updated”, the variable length encoding unit 23 multiplexes into the bitstream 30 the binarization table update identification information 112 indicating which of the binarization tables for the coding mode, the compression parameter, and the prediction parameter has been updated.
- the encoding control unit 3 may instruct the update of the binarization table at a timing other than the head of a slice; for example, it may output the binarization table update flag 113 at the head of an arbitrary macroblock.
- in that case, the binarization table update unit 95 needs to output information specifying the macroblock position at which the binarization table was updated, and the variable length encoding unit 23 needs to multiplex that information into the bitstream 30 as well.
- when the encoding control unit 3 outputs the binarization table update flag 113 to the binarization table update unit 95 to update the binarization table, it also outputs the context information initialization flag 91 to the initialization unit 90 to initialize the context information memory 96.
- FIG. 15 is a block diagram showing an internal configuration of the variable length decoding unit 61 of the video decoding apparatus according to Embodiment 2 of the present invention.
- the configuration of the moving picture decoding apparatus according to Embodiment 2 is the same as that of Embodiment 1, and the operation of each component other than the variable length decoding unit 61 is also the same, so FIG. 1 to FIG. 8 are referred to.
- the variable length decoding unit 61 illustrated in FIG. 15 includes: an arithmetic decoding processing operation unit 127 that, referring to the context identification information 126 generated by the context generation unit 122, the context information memory 128, the probability table memory 131, and the state transition table memory 135, arithmetically decodes the encoded bit string 133 multiplexed into the bitstream 60, which represents the optimal encoding mode 62 (or the optimal prediction parameter 63 and the optimal compression parameter 65), to generate a binary signal 137; a binarization table memory 143 that stores a binarization table 139 specifying the correspondence between the binary signal and the multilevel signal of the optimal encoding mode 62 (or the optimal prediction parameter 63 and the optimal compression parameter 65); and an inverse binarization unit 138 that uses the binarization table 139 to convert the binary signal 137 generated by the arithmetic decoding processing operation unit 127 into the decoded value 140 of the multilevel signal.
- variable-length decoding procedure of the variable-length decoding unit 61 will be described taking the optimal encoding mode 62 of the macroblock included in the bitstream 60 as an example of the entropy-decoded parameter.
- the optimal prediction parameter 63 and the optimal compression parameter 65 that are parameters to be decoded may be variable-length decoded in the same procedure as in the optimal encoding mode 62, and thus description thereof is omitted.
- the bitstream 60 includes the context initialization information 121, the encoded bit string 133, the binarization table update flag 142, and the binarization table update identification information 144 multiplexed on the encoding device side. Details of each piece of information will be described later.
- the initialization unit 120 initializes the context information stored in the context information memory 128 at the head of the slice or the like. Alternatively, the initialization unit 120 prepares a plurality of sets in advance for the initial state of the context information (the initial value of the MPS value and the probability table number approximating the occurrence probability), and the decoded value of the context initialization information 121 You may make it select the initial state corresponding to to from a set.
- the context generation unit 122 refers to the type signal 123 indicating the type of parameters to be decoded (optimal encoding mode 62, optimal prediction parameter 63, and optimal compression parameter 65) and the peripheral block information 124, and provides context identification information 126. Generate.
- the type signal 123 is a signal representing the type of the parameter to be decoded; what the decoding target parameter is, is determined according to the syntax held in the variable length decoding unit 61, so the same syntax must be held on the encoding device side and the decoding device side.
- on the encoding device side, the encoding control unit 3 holds the syntax and sequentially outputs to the variable length encoding unit 23 the type of the parameter to be encoded next and its parameter value (index value), that is, the type signal 100.
- the peripheral block information 124 is information, such as the coding mode, obtained by decoding a macroblock or subblock; it is stored in a memory (not shown) in the variable length decoding unit 61 for use as peripheral block information 124 in the subsequent decoding of macroblocks or subblocks, and is output to the context generation unit 122 as necessary.
- the generation procedure of the context identification information 126 by the context generation unit 122 is the same as the operation of the context generation unit 99 on the encoding device side.
- the context generation unit 122 on the decoding device side also generates context identification information 126 for each bin of the binarization table 139 referred to by the inverse binarization unit 138.
- the context information of each bin holds an MPS value (0 or 1) and a probability table number for specifying the occurrence probability of the MPS as probability information for arithmetic decoding of the bin.
- the probability table memory 131 and the state transition table memory 135 hold the same probability table (FIG. 11) and state transition table (FIG. 12) as the probability table memory 97 and the state transition table memory 98 on the encoding device side.
- the arithmetic decoding processing calculation unit 127 performs arithmetic decoding on the encoded bit string 133 multiplexed in the bit stream 60 for each bin to generate a binary signal 137 and outputs the binary signal 137 to the inverse binarization unit 138.
- the arithmetic decoding processing calculation unit 127 refers to the context information memory 128 to obtain context information 129 based on the context identification information 126 corresponding to each bin of the encoded bit string 133. Subsequently, the arithmetic decoding processing calculation unit 127 refers to the probability table memory 131 and specifies the MPS occurrence probability 132 of each bin corresponding to the probability table number 130 held in the context information 129.
- next, the arithmetic decoding processing calculation unit 127 arithmetically decodes the encoded bit string 133 input to it, based on the MPS value (0 or 1) held in the context information 129 and the identified MPS occurrence probability 132, to obtain the symbol value 134 (0 or 1) of each bin.
- after decoding, the arithmetic decoding processing calculation unit 127 refers to the state transition table memory 135 and, in the same procedure as the arithmetic coding processing operation unit 104 on the encoding device side, obtains the probability table number 136 after symbol decoding of each bin (after the state transition), based on the decoded symbol value 134 of each bin and the probability table number 130 held in the context information 129.
- the arithmetic decoding processing calculation unit 127 then updates the probability table number of the context information 129 of each bin stored in the context information memory 128 (that is, the probability table number 130) to the probability table number after the state transition (that is, the probability table number 136) obtained from the state transition table memory 135.
- the arithmetic decoding processing calculation unit 127 outputs a binary signal 137 obtained by combining the bin symbols obtained as a result of the arithmetic decoding to the inverse binarization unit 138.
- the inverse binarization unit 138 selects, from the binarization tables prepared for each type of decoding target parameter and stored in the binarization table memory 143, the same binarization table 139 as used at the time of encoding, and outputs the decoded value 140 of the decoding target parameter from the binary signal 137 input from the arithmetic decoding processing calculation unit 127.
- the binarization table 139 is the same as the binarization table on the encoding device side shown in FIG. 10.
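Inverse binarization amounts to matching the decoded bins against the prefix-free codewords of the shared table. The table contents below are the same illustrative assumptions used earlier, not the patent's actual entries.

```python
# Sketch of the inverse binarization unit 138: consume decoded bins until
# they match a codeword of the (prefix-free) binarization table 139 and
# return the corresponding index (decoded value 140). Codewords are
# illustrative assumptions.
def inverse_binarize(bins, table):
    codeword_to_index = {code: idx for idx, code in table.items()}
    prefix = ""
    for bit in bins:
        prefix += bit
        if prefix in codeword_to_index:
            return codeword_to_index[prefix]
    raise ValueError("binary signal matches no codeword")
```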
- the binarization table update unit 141 updates the binarization table stored in the binarization table memory 143 based on the binarization table update flag 142 and the binarization table update identification information 144 decoded from the bitstream 60.
- the binarization table update flag 142 is information corresponding to the binarization table update flag 113 on the encoding device side; it is included in the header information of the bitstream 60 and indicates whether or not the binarization table has been updated. When the decoded value of the binarization table update flag 142 indicates “the binarization table is updated”, the binarization table update identification information 144 is further decoded from the bitstream 60.
- the binarization table update identification information 144 is information corresponding to the binarization table update identification information 112 on the encoding device side, and identifies the parameter binarization table updated on the encoding device side. For example, as described above, when a plurality of binarization tables are provided in advance for each encoding target parameter, the ID identifying each encoding target parameter and the ID of each binarization table are shared in advance between the encoding device side and the decoding device side.
- the binarization table update unit 141 updates the binarization table corresponding to the ID value in the binarization table update identification information 144 decoded from the bitstream 60.
- the two types of binarization tables shown in FIG. 10 and FIG. 14 are prepared in advance in the binarization table memory 143.
- assume that the state of the binarization table before the update is the state shown in FIG. 10.
- since the binarization table update unit 141 performs the update processing in accordance with the binarization table update flag 142 and the binarization table update identification information 144, selecting the binarization table corresponding to the ID included in the binarization table update identification information 144, the state of the updated binarization table becomes the state shown in FIG. 14, the same as the post-update binarization table on the encoding device side.
- as described above, according to the video encoding apparatus of Embodiment 2, the encoding control unit 3 selects and outputs the encoding target parameters with optimal coding efficiency (the optimal encoding mode 7a, the optimal prediction parameters 10a and 18a, and the optimal compression parameter 20a); the binarization unit 92 of the variable length encoding unit 23 converts the encoding target parameter, represented by a multilevel signal, into the binary signal 103 using the binarization table in the binarization table memory 105; the arithmetic coding processing operation unit 104 arithmetically encodes the binary signal 103 and outputs the encoded bit string 111; the frequency information generation unit 93 generates the frequency information 94 of the encoding target parameter; and the binarization table update unit 95 updates the correspondence between the multilevel signal and the binary signal in the binarization table based on the frequency information 94. Therefore, compared with the conventional method in which the binarization table is always fixed, the code amount can be reduced for the same quality of the encoded video.
- the binarization table update unit 95 is also configured to multiplex into the bitstream 30 information indicating whether or not the binarization table has been updated, together with the binarization table update identification information 112 identifying the updated binarization table. Correspondingly, the video decoding apparatus according to Embodiment 2 is configured so that the arithmetic decoding processing operation unit 127 of the variable length decoding unit 61 arithmetically decodes the encoded bit string 133 multiplexed into the bitstream 60 to generate the binary signal 137; the inverse binarization unit 138 converts the binary signal 137 into the decoded value 140 of the multilevel signal using the binarization table 139 in the binarization table memory 143; and the binarization table update unit 141 updates the predetermined binarization table in the binarization table memory 143 based on the binarization table update flag 142 and the binarization table update identification information 144 decoded from the header information multiplexed in the bitstream 60. Since the moving picture decoding apparatus can thus update the binarization table by the same procedure as the moving picture encoding apparatus and inverse-binarize the encoding target parameters, it becomes possible to correctly decode the bitstream encoded by the moving picture encoding apparatus according to Embodiment 2.
- Embodiment 3. In Embodiment 3, a modification of the predicted image generation processing by motion compensated prediction of the motion compensation prediction unit 9 in the video encoding device and the video decoding device according to Embodiments 1 and 2 will be described.
- the motion compensation prediction unit 9 of the video encoding apparatus according to the third embodiment will be described.
- the configuration of the moving picture coding apparatus according to Embodiment 3 is the same as that of Embodiment 1 or Embodiment 2, and the operation of each component other than the motion compensation prediction unit 9 is also the same, so FIG. 1 to FIG. 15 are referred to.
- the motion compensation prediction unit 9 of Embodiment 3 has the same configuration and performs the same operation as in Embodiments 1 and 2, except for the configuration and operation related to the predicted image generation processing with virtual sample accuracy. That is, in Embodiments 1 and 2, as shown in FIG. 3, the interpolated image generation unit 43 of the motion compensation prediction unit 9 generates reference image data with virtual pixel accuracy, such as half-pixel or quarter-pixel accuracy, and generates the predicted image 45 based on that reference image data; as in the MPEG-4 AVC standard, the virtual pixels of the interpolated image are created by a 6-tap filter using six integer pixels in the vertical or horizontal direction. In contrast, the motion compensation prediction unit 9 of Embodiment 3 generates a reference image 207 with virtual pixel accuracy by enlarging, through super-resolution processing, the reference image 15 with integer pixel accuracy stored in the motion compensated prediction frame memory 14, and generates the predicted image based on this reference image 207.
- as in Embodiments 1 and 2, the interpolated image generation unit 43 of Embodiment 3 designates one or more frames of the reference image 15 from the motion compensated prediction frame memory 14, and the motion detection unit 42 detects a motion vector 44 within a predetermined motion search range on the designated reference image 15.
- the motion vector is detected with virtual pixel accuracy, as in the MPEG-4 AVC standard: for the pixel information of the reference image (referred to as integer pixels), virtual samples (pixels) are created by interpolation between the integer pixels and used as the reference image.
- here, a configuration is described in which the motion compensation prediction unit 9 generates a super-resolved reference image 207 with virtual pixel accuracy from the reference image data stored in the motion compensated prediction frame memory 14, and the motion detection unit 42 uses it to perform the motion vector search processing.
- FIG. 16 is a block diagram showing an internal configuration of the interpolated image generating unit 43 of the motion compensated predicting unit 9 of the moving picture coding apparatus according to Embodiment 3 of the present invention.
- the interpolated image generation unit 43 shown in FIG. 16 includes: an image enlargement processing unit 205 that enlarges the reference image 15 in the motion compensated prediction frame memory 14; an image reduction processing unit 200 that reduces the reference image 15; a high-frequency feature extraction unit 201a that extracts a feature amount of high-frequency components from the output of the image reduction processing unit 200; a high-frequency feature extraction unit 201b that extracts a feature amount of high-frequency components from the reference image 15; a correlation calculation unit 202 that calculates a correlation value between the feature amounts; a high-frequency component estimation unit 203 that estimates a high-frequency component from the correlation value and the prior-learning data in the high-frequency component pattern memory 204; and an addition unit 206 that corrects the high-frequency components of the enlarged image using the estimated high-frequency component to generate the reference image 207 with virtual pixel accuracy.
- When the reference image 15 in the range used for the motion search process is input to the interpolated image generation unit 43 from the reference image data stored in the motion compensated prediction frame memory 14, the reference image 15 is input to the image reduction processing unit 200, the high-frequency feature extraction unit 201b, and the image enlargement processing unit 205, respectively.
- The image reduction processing unit 200 generates, from the reference image 15, a reduced image of 1/N size in the vertical and horizontal directions (N is a power of 2, such as 2 or 4) and outputs the reduced image to the high-frequency feature extraction unit 201a.
- This reduction processing is realized by a general image reduction filter.
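The 1/N reduction can be pictured with a simple block-averaging filter. This is only an illustrative stand-in for the "general image reduction filter" mentioned above; the function name and the integer averaging are assumptions, not the patent's filter.

```python
def reduce_image(img, n=2):
    """Reduce an image (list of rows) to 1/n size in each direction by
    averaging each n x n block; assumes dimensions divisible by n.
    A minimal stand-in for a general image reduction filter."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h, n):
        row = []
        for x in range(0, w, n):
            block = [img[y + dy][x + dx] for dy in range(n) for dx in range(n)]
            row.append(sum(block) // len(block))
        out.append(row)
    return out

ref = [[10, 10, 20, 20],
       [10, 10, 20, 20],
       [30, 30, 40, 40],
       [30, 30, 40, 40]]
print(reduce_image(ref))  # [[10, 20], [30, 40]]
```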
- the high-frequency feature extraction unit 201a extracts a first feature amount related to a high-frequency component such as an edge component from the reduced image generated by the image reduction processing unit 200.
- As the first feature amount, for example, a parameter representing the DCT or wavelet transform coefficient distribution within a local block can be used.
- the high-frequency feature extraction unit 201b performs high-frequency feature extraction similar to the high-frequency feature extraction unit 201a, and extracts, from the reference image 15, a second feature amount having a frequency component region different from that of the first feature amount.
- the second feature amount is output to the correlation calculation unit 202 and also output to the high frequency component estimation unit 203.
- When the first feature amount is input from the high-frequency feature extraction unit 201a and the second feature amount is input from the high-frequency feature extraction unit 201b, the correlation calculation unit 202 calculates a correlation value of the high-frequency component region between the reference image 15 and the reduced image on the basis of the feature amounts in units of local blocks. As this correlation value, for example, the distance between the first feature amount and the second feature amount can be used.
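As one concrete reading of "the distance between the first feature value and the second feature value", the per-block feature amounts can be treated as vectors of transform-coefficient statistics and compared with a Euclidean distance. The example feature vectors below are fabricated for illustration.

```python
import math

def feature_distance(f1, f2):
    """Euclidean distance between two feature vectors; a smaller value
    indicates a stronger correlation between the high-frequency
    characteristics of the two blocks."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))

first_feature = [0.8, 0.1, 0.05]   # e.g. coefficient-distribution stats of a reduced-image block
second_feature = [0.7, 0.2, 0.05]  # the same stats computed on the reference-image block
print(feature_distance(first_feature, second_feature))
```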
- Using the second feature amount input from the high-frequency feature extraction unit 201b and the correlation value input from the correlation calculation unit 202, the high-frequency component estimation unit 203 identifies a pre-learned high-frequency component pattern in the high-frequency component pattern memory 204, and estimates and generates the high-frequency component to be included in the reference image 207 with virtual-pixel accuracy. The generated high-frequency component is output to the addition unit 206.
- In the same manner as the half-pixel-accuracy sample generation processing of the MPEG-4 AVC standard, the image enlargement processing unit 205 generates an enlarged image, obtained by enlarging the input reference image 15 to N times its size vertically and horizontally, by performing an interpolation operation using a 6-tap filter that uses six integer pixels in the vertical or horizontal direction, or by enlargement filter processing such as a bilinear filter.
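The half-pixel branch of this enlargement can be sketched with the 6-tap filter (1, -5, 20, 20, -5, 1)/32 used for half-pixel sample generation in MPEG-4 AVC, here applied along one row; the edge-replication padding and the rounding details are simplifying assumptions.

```python
def half_pel_interpolate(row):
    """Insert half-pixel samples along one row with the 6-tap filter
    (1, -5, 20, 20, -5, 1) / 32, rounding and clipping to [0, 255].
    Border pixels are padded by edge replication."""
    taps = (1, -5, 20, 20, -5, 1)
    padded = [row[0]] * 2 + list(row) + [row[-1]] * 3
    out = []
    for i, p in enumerate(row):
        out.append(p)  # integer-pixel sample passes through unchanged
        window = padded[i:i + 6]
        half = (sum(t * v for t, v in zip(taps, window)) + 16) >> 5
        out.append(max(0, min(255, half)))
    return out[:-1]  # no half sample beyond the last integer pixel

print(half_pel_interpolate([0, 255]))  # [0, 128, 255]
```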
- The addition unit 206 adds the high-frequency component input from the high-frequency component estimation unit 203 to the enlarged image input from the image enlargement processing unit 205, that is, corrects the high-frequency component of the enlarged image, thereby generating an enlarged reference image of N times the size vertically and horizontally.
- The interpolated image generation unit 43 uses this enlarged reference image data as the reference image 207 with virtual-pixel accuracy, treating a 1/N pixel as one pixel.
- The interpolated image generation unit 43 may also control the generation result of the reference image 207 with virtual-pixel accuracy by switching whether or not the high-frequency component output from the high-frequency component estimation unit 203 is added to the enlarged image output from the image enlargement processing unit 205.
- When whether or not to add the high-frequency component output from the high-frequency component estimation unit 203 is determined selectively in the addition unit 206, the predicted image 45 is generated for both the addition and non-addition cases, motion compensation prediction is performed, and the results are encoded to determine which is more efficient. Information on the addition processing, indicating whether or not the addition was performed, is then multiplexed into the bitstream 30 as control information.
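That two-pass decision can be sketched as follows; the SAD-plus-rate cost and the lambda value are placeholder stand-ins for the actual encoding-efficiency measure.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks (flat lists)."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def choose_addition_mode(src, pred_with_hf, pred_without_hf,
                         flag_bits_cost=1.0, lam=4.0):
    """Compare a simple cost = distortion + lambda * rate for the predicted
    images obtained with and without the high-frequency correction; the
    one-bit flag signalling the choice is charged to both candidates.
    Returns True when adding the high-frequency component is cheaper."""
    j_on = sad(src, pred_with_hf) + lam * flag_bits_cost
    j_off = sad(src, pred_without_hf) + lam * flag_bits_cost
    return j_on <= j_off  # True -> multiplex "added" into the bitstream

src = [12, 18, 25, 31]
print(choose_addition_mode(src, [12, 17, 26, 30], [10, 20, 20, 28]))  # True
```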
- Alternatively, the interpolated image generation unit 43 may control the addition processing of the addition unit 206 by determining it uniquely from other parameters that are multiplexed into the bitstream 30.
- As an example of determining from other parameters, the type of the encoding mode 7 shown in FIG. 2A or 2B may be used. For one type of encoding mode, the interpolated image generation unit 43 considers the super-resolution effect to be low and controls the addition unit 206 not to add the high-frequency component output from the high-frequency component estimation unit 203; for another type, it considers the super-resolution effect to be high and controls the addition unit 206 to add the high-frequency component output from the high-frequency component estimation unit 203. Besides the encoding mode, parameters such as the magnitude of the motion vector and the variation of the motion vector field in the surrounding area may also be used.
- By determining the parameter type in common with the decoding device side in this way, the interpolated image generation unit 43 of the motion compensation prediction unit 9 does not have to multiplex the control information of the addition processing directly into the bitstream 30, and the compression efficiency can be increased.
- Alternatively, the configuration may be such that the reference image is converted into the reference image 207 with virtual-pixel accuracy by the super-resolution processing described above before being stored in the motion compensated prediction frame memory 14, and is then stored. In this configuration, the memory size required for the motion compensated prediction frame memory 14 increases, but the super-resolution processing need not be performed sequentially during the motion vector search and the predicted-image generation, so the load of the motion compensation prediction processing itself can be reduced, and the frame encoding process and the generation of the reference image 207 with virtual-pixel accuracy can be performed in parallel, increasing the processing speed.
- Motion vector detection procedure I: The interpolated image generation unit 43 generates a predicted image 45 for each motion vector 44 with integer-pixel accuracy within a predetermined motion search range of the motion-compensated region block image 41.
- The predicted image 45 (predicted image 17) generated with integer-pixel accuracy is output to the subtraction unit 12 and subtracted by the subtraction unit 12 from the motion-compensated region block image 41 (macro/sub-block image 5) to obtain the prediction difference signal 13.
- The encoding control unit 3 evaluates the prediction efficiency for the prediction difference signal 13 and the integer-pixel-accuracy motion vector 44 (prediction parameter 18). Since this prediction efficiency may be evaluated by equation (1) described in the first embodiment, the description is omitted.
- Motion vector detection procedure II: For the 1/2-pixel-accuracy motion vectors 44 positioned around the integer-pixel-accuracy motion vector determined in "motion vector detection procedure I", the interpolated image generation unit 43 generates a predicted image 45 using the reference image 207 with virtual-pixel accuracy generated inside the interpolated image generation unit 43 shown in FIG. 16.
- Thereafter, as in "motion vector detection procedure I", the predicted image 45 (predicted image 17) generated with 1/2-pixel accuracy is subtracted by the subtraction unit 12 from the motion-compensated region block image 41 (macro/sub-block image 5) to obtain the prediction difference signal 13.
- The encoding control unit 3 then evaluates the prediction efficiency for this prediction difference signal 13 and the 1/2-pixel-accuracy motion vector 44 (prediction parameter 18), and determines, from among the one or more 1/2-pixel-accuracy motion vectors positioned around the integer-pixel-accuracy motion vector, the 1/2-pixel-accuracy motion vector 44 that minimizes the prediction cost J1.
- Motion vector detection procedure III: Similarly, for 1/4-pixel accuracy, the encoding control unit 3 and the motion compensation prediction unit 9 determine, from among one or more 1/4-pixel-accuracy motion vectors positioned around the 1/2-pixel-accuracy motion vector determined in "motion vector detection procedure II", the 1/4-pixel-accuracy motion vector 44 that minimizes the prediction cost J1.
- Motion vector detection procedure IV: In the same manner, the encoding control unit 3 and the motion compensation prediction unit 9 continue detecting motion vectors with virtual-pixel accuracy until a predetermined accuracy is reached.
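Procedures I through IV amount to a coarse-to-fine search: starting from the best integer-pixel vector, the step is halved and the surrounding candidates are re-evaluated at each finer accuracy. The sketch below assumes a caller-supplied cost function standing in for the prediction cost J1; it is an illustration, not the patent's exact search.

```python
def refine_motion_vector(cost, mv, final_step=0.25):
    """Coarse-to-fine refinement: from an integer-accuracy vector `mv`,
    test the surrounding candidates at 1/2-, then 1/4-pixel (etc.)
    accuracy, keeping the candidate that minimises the prediction cost.
    `cost` maps an (x, y) vector to its cost value."""
    best, best_j = mv, cost(mv)
    step = 0.5
    while step >= final_step:
        center = best  # search the 8 neighbours of the current best vector
        for dx in (-step, 0, step):
            for dy in (-step, 0, step):
                cand = (center[0] + dx, center[1] + dy)
                j = cost(cand)
                if j < best_j:
                    best, best_j = cand, j
        step /= 2
    return best

# toy quadratic cost with its minimum at (1.25, -0.75)
cost = lambda v: (v[0] - 1.25) ** 2 + (v[1] + 0.75) ** 2
print(refine_motion_vector(cost, (1, -1)))  # (1.25, -0.75)
```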
- In this way, the motion compensation prediction unit 9 outputs, as the prediction parameter 18, the determined virtual-pixel-accuracy motion vector of the predetermined accuracy and the identification number of the reference image pointed to by that motion vector, for each motion-compensated region block image 41 obtained by dividing the macro/sub-block image 5 into blocks serving as the units of motion compensation indicated by the encoding mode 7.
- The motion compensation prediction unit 9 also outputs the predicted image 45 (predicted image 17) generated with the prediction parameter 18 to the subtraction unit 12, where it is subtracted from the macro/sub-block image 5 to obtain the prediction difference signal 13.
- The prediction difference signal 13 output from the subtraction unit 12 is output to the transform/quantization unit 19. The subsequent processing is the same as described in the first embodiment, and its description is therefore omitted.
- The configuration of the moving picture decoding apparatus according to the third embodiment is the same as that of the moving picture decoding apparatuses according to the first and second embodiments, except for the configuration and operation related to the predicted-image generation processing with virtual-pixel accuracy in the motion compensation prediction unit 70; FIGS. 1 to 16 are therefore referred to.
- In the first and second embodiments, when the motion compensation prediction unit 70 generates a predicted image based on a reference image with virtual-pixel accuracy, such as half-pixel or quarter-pixel accuracy, it generates virtual pixels by interpolation using a 6-tap filter on six integer pixels in the vertical or horizontal direction, as in the MPEG-4 AVC standard. In contrast, the motion compensation prediction unit 70 according to the third embodiment generates a reference image with virtual-pixel accuracy by enlarging, through super-resolution processing, the reference image 76 with integer-pixel accuracy stored in the motion compensated prediction frame memory 75.
- As in the first and second embodiments, the motion compensation prediction unit 70 generates the predicted image 72 from the reference image 76 stored in the motion compensated prediction frame memory 75, based on the motion vector included in the input optimal prediction parameter 63 and the identification number (reference image index) of the reference image pointed to by each motion vector, and outputs it.
- The addition unit 73 adds the predicted image 72 input from the motion compensation prediction unit 70 to the prediction difference signal decoded value 67 input from the inverse quantization/inverse transform unit 66 to generate a decoded image 74.
- The method by which the motion compensation prediction unit 70 generates the predicted image 72 corresponds, among the operations of the motion compensation prediction unit 9 on the encoding device side, to those excluding the process of searching for motion vectors over a plurality of reference images (the operations of the motion detection unit 42 and the interpolated image generation unit 43 shown in FIG. 3); only the process of generating the predicted image 72 according to the optimal prediction parameter 63 given from the variable-length decoding unit 61 is performed.
- At this time, the motion compensation prediction unit 70 performs processing similar to that shown in FIG. 16 on the reference image 76 specified on the motion compensated prediction frame memory 75 by the reference image identification number (reference image index), generates a reference image with virtual-pixel accuracy, and generates the predicted image 72 using the decoded motion vector.
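Once the reference image has been enlarged to virtual-pixel accuracy, applying the decoded motion vector reduces to integer indexing on the enlarged grid. The sketch below uses a toy nearest-neighbour enlargement (not the FIG. 16 processing) and assumes all coordinates stay in range; names and parameters are illustrative.

```python
def predict_block(ref_vp, mv, top_left, block, n=4):
    """Fetch a predicted block from a reference image enlarged n times
    (virtual-pixel accuracy 1/n): a motion vector expressed in 1/n-pixel
    units maps directly to integer coordinates on the enlarged grid,
    and predicted samples sit n virtual pixels apart."""
    y0 = top_left[0] * n + mv[0]
    x0 = top_left[1] * n + mv[1]
    return [[ref_vp[y0 + dy * n][x0 + dx * n] for dx in range(block[1])]
            for dy in range(block[0])]

# 2x2 image enlarged 4x (toy nearest-neighbour enlargement for illustration)
base = [[10, 20], [30, 40]]
ref_vp = [[base[y // 4][x // 4] for x in range(8)] for y in range(8)]
print(predict_block(ref_vp, mv=(0, 4), top_left=(0, 0), block=(1, 1)))  # [[20]]
```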
- When the encoding device side selectively determines whether or not to add, to the enlarged image, the high-frequency component output from the high-frequency component estimation unit 203 shown in FIG. 16, the decoding device side extracts the control information indicating the presence or absence of the addition processing from the bitstream 60, or determines it uniquely from other parameters, and thereby controls the addition processing in the motion compensation prediction unit 70.
- When determining from other parameters, the encoding mode 7, the magnitude of the motion vector, the variation of the motion vector field in the surrounding area, and the like can be used, as on the encoding device side described above. If the motion compensation prediction unit 70 determines the parameter type in common with the encoding device side, the encoding device does not have to multiplex the control information of the addition processing directly into the bitstream 30, and the compression efficiency can be improved.
- The generation of the reference image with virtual-pixel accuracy may also be carried out only when the motion vector included in the optimal prediction parameter 18a output on the encoding device side (that is, the optimal prediction parameter 63 on the decoding device side) indicates virtual-pixel accuracy. In that case, the motion compensation prediction unit 9 switches, according to the motion vector, between using the reference image 15 in the motion compensated prediction frame memory 14 as it is and having the interpolated image generation unit 43 generate the reference image 207 with virtual-pixel accuracy, and generates the predicted image 17 from the reference image 15 or from the reference image 207 with virtual-pixel accuracy.
- Alternatively, the processing shown in FIG. 16 may be performed on the reference image before it is stored in the motion compensated prediction frame memory 75, so that the reference image with virtual-pixel accuracy, on which the enlargement processing and the high-frequency component correction have been performed, is stored in the motion compensated prediction frame memory 75. In this configuration, the memory size to be prepared as the motion compensated prediction frame memory 75 increases, but when motion vectors point many times to pixels at the same virtual sample position, the processing shown in FIG. 16 need not be performed repeatedly, so the amount of computation can be reduced.
- Further, if the range of displacement indicated by the motion vectors is known in advance on the decoding device side, the motion compensation prediction unit 70 may be configured to perform the processing shown in FIG. 16 only on that range. The range of displacement indicated by the motion vectors may be made known to the decoding device side by, for example, multiplexing a value range indicating that range into the bitstream 60 and transmitting it, or by agreement between the encoding device side and the decoding device side in operation.
- As described above, according to the moving picture encoding apparatus of the third embodiment, the motion compensation prediction unit 9 includes the interpolated image generation unit 43 that performs the enlargement processing on the reference image 15 in the motion compensated prediction frame memory 14 and corrects its high-frequency component to generate the reference image 207 with virtual-pixel accuracy, and switches, according to the motion vector, between using the reference image 15 and generating and using the reference image 207 with virtual-pixel accuracy. Therefore, even when an input video signal 1 containing many high-frequency components, such as fine edges, is highly compressed, the predicted image 17 generated by motion compensation prediction can be generated from a reference image containing many high-frequency components, and compression encoding can be performed efficiently.
- According to the moving picture decoding apparatus of the third embodiment, the motion compensation prediction unit 70 likewise includes an interpolated image generation unit that generates a reference image with virtual-pixel accuracy by the same procedure as in the moving picture encoding apparatus, and generates the predicted image 72 by switching, according to the motion vector multiplexed in the bitstream 60, between using the reference image 76 in the motion compensated prediction frame memory 75 and generating and using a reference image with virtual-pixel accuracy. Therefore, a bitstream encoded by the moving picture encoding apparatus according to the third embodiment can be correctly decoded.
- Although the reference image 207 with virtual-pixel accuracy is generated here by super-resolution processing based on the technique disclosed in the cited reference (2000), the super-resolution processing itself is not limited to this technique, and any other super-resolution technique may be applied to generate the reference image 207 with virtual-pixel accuracy.
- When the moving picture encoding apparatus is configured by a computer, a moving picture encoding program describing the processing contents of the block division unit 2, the encoding control unit 3, the switching unit 6, the intra prediction unit 8, the motion compensation prediction unit 9, the motion compensated prediction frame memory 14, the transform/quantization unit 19, the inverse quantization/inverse transform unit 22, the variable-length encoding unit 23, the loop filter unit 27, and the intra prediction memory 28 may be stored in the memory of the computer, and the CPU of the computer may execute the moving picture encoding program stored in the memory.
- Similarly, when the moving picture decoding apparatuses according to Embodiments 1 to 3 are configured by a computer, a moving picture decoding program describing the processing contents of the variable-length decoding unit 61, the inverse quantization/inverse transform unit 66, the switching unit 68, the intra prediction unit 69, the motion compensation prediction unit 70, the motion compensated prediction frame memory 75, the intra prediction memory 77, and the loop filter unit 78 may be stored in the memory of the computer, and the CPU of the computer may execute the moving picture decoding program stored in the memory.
- As described above, the moving picture encoding apparatus and the moving picture decoding apparatus according to the present invention can perform compression encoding by adaptively switching the transform block size for each region serving as a unit of motion compensated prediction within a macroblock, and are therefore suitable for use in a moving picture encoding apparatus that divides a moving picture into predetermined regions and encodes it in units of those regions, and in a moving picture decoding apparatus that decodes the encoded moving picture in units of the predetermined regions.
Abstract
Description
Embodiment 1.
In Embodiment 1, a moving picture encoding apparatus that takes each frame image of a video as input, performs motion compensated prediction between neighboring frames, applies compression processing by orthogonal transform and quantization to the resulting prediction difference signal, and then performs variable-length encoding to generate a bitstream, and a moving picture decoding apparatus that decodes that bitstream, are described.
Note that the encoding control unit 3 can select a predetermined encoding mode from a set of encoding modes; this set of encoding modes is arbitrary, and, for example, a predetermined encoding mode is selectable from the set shown in FIG. 2A or FIG. 2B below.
mb_mode3 is a mode in which the macroblock is divided into four and a different encoding mode (sub_mb_mode) is assigned to each of the divided sub-blocks.
mb_mode7 is a mode in which the macroblock is divided into four and a different encoding mode (sub_mb_mode) is assigned to each of the divided sub-blocks.
As described above, in the case of the intra-frame prediction mode, the prediction parameter 10 and the optimal prediction parameter 10a include the intra prediction mode. In the case of the inter-frame prediction mode, on the other hand, the prediction parameter 18 and the optimal prediction parameter 18a include motion vectors, the identification numbers (reference image indices) of the reference images pointed to by the motion vectors, and the like.
The compression parameter 20 and the optimal compression parameter 20a include the transform block size, the quantization step size, and the like.
Here, the procedure is described for determining the prediction parameter 18, which, when the encoding mode 7 is an inter-frame prediction mode, includes the motion vectors for that inter-frame prediction, the identification numbers (reference image indices) of the reference images pointed to by the motion vectors, and the like.
The interpolated image generation unit 43 generates a predicted image 45 for each motion vector 44 with integer-pixel accuracy within a predetermined motion search range of the motion-compensated region block image 41. The predicted image 45 (predicted image 17) generated with integer-pixel accuracy is output to the subtraction unit 12 and subtracted by the subtraction unit 12 from the motion-compensated region block image 41 (macro/sub-block image 5) to obtain the prediction difference signal 13. The encoding control unit 3 evaluates the prediction efficiency for the prediction difference signal 13 and the integer-pixel-accuracy motion vector 44 (prediction parameter 18). In this evaluation, for example, the prediction cost J1 is computed from equation (1) below, and the integer-pixel-accuracy motion vector 44 that minimizes the prediction cost J1 within the predetermined motion search range is determined.
J1=D1+λR1 (1)
Here, D1 and R1 are used as evaluation values. D1 is the sum of absolute differences (SAD) of the prediction difference signal within the macroblock or sub-block, R1 is the estimated code amount of the motion vector and the identification number of the reference image pointed to by that motion vector, and λ is a positive number.
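Equation (1) can be written out directly: D1 is the SAD of the prediction difference signal within the block and R1 an estimated code amount for the motion vector and reference image identification number. The bit count passed in below is a placeholder value, not a real rate estimate.

```python
def prediction_cost_j1(block, pred, mv_bits, lam):
    """J1 = D1 + lambda * R1, where D1 is the sum of absolute differences of
    the prediction difference signal (block minus prediction) and R1 is the
    estimated code amount of the motion vector and its reference image
    identification number."""
    d1 = sum(abs(s - p) for s, p in zip(block, pred))
    return d1 + lam * mv_bits

block = [52, 55, 60, 58]
pred = [50, 56, 61, 57]
print(prediction_cost_j1(block, pred, mv_bits=6, lam=2.0))  # 5 + 2.0*6 = 17.0
```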
PMV=median(MVa,MVb,MVc) (2)
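Equation (2) takes the median component-wise over the three neighbouring motion vectors MVa, MVb, and MVc, which can be sketched as:

```python
def median3(a, b, c):
    """Median of three scalar values."""
    return sorted((a, b, c))[1]

def predict_mv(mva, mvb, mvc):
    """Component-wise median predictor PMV = median(MVa, MVb, MVc)."""
    return (median3(mva[0], mvb[0], mvc[0]),
            median3(mva[1], mvb[1], mvc[1]))

print(predict_mv((2, -1), (0, 0), (1, 3)))  # (1, 0)
```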
The interpolated image generation unit 43 generates predicted images 45 for one or more 1/2-pixel-accuracy motion vectors 44 positioned around the integer-pixel-accuracy motion vector determined in "motion vector detection procedure I" above. Thereafter, as in "motion vector detection procedure I", the predicted image 45 (predicted image 17) generated with 1/2-pixel accuracy is subtracted by the subtraction unit 12 from the motion-compensated region block image 41 (macro/sub-block image 5) to obtain the prediction difference signal 13. The encoding control unit 3 then evaluates the prediction efficiency for this prediction difference signal 13 and the 1/2-pixel-accuracy motion vector 44 (prediction parameter 18), and determines, from among the one or more 1/2-pixel-accuracy motion vectors positioned around the integer-pixel-accuracy motion vector, the 1/2-pixel-accuracy motion vector 44 that minimizes the prediction cost J1.
Similarly for 1/4-pixel-accuracy motion vectors, the encoding control unit 3 and the motion compensation prediction unit 9 determine, from among one or more 1/4-pixel-accuracy motion vectors positioned around the 1/2-pixel-accuracy motion vector determined in "motion vector detection procedure II" above, the 1/4-pixel-accuracy motion vector 44 that minimizes the prediction cost J1.
In the same manner, the encoding control unit 3 and the motion compensation prediction unit 9 continue detecting motion vectors with virtual-pixel accuracy until a predetermined accuracy is reached.
Here, the procedure is described for determining the compression parameter 20 (transform block size) used when transforming and quantizing the prediction difference signal 13 generated on the basis of the prediction parameter 18 determined for each encoding mode 7 in "1. Prediction parameter determination procedure" above.
The set of transform block sizes selectable for each encoding mode can be defined from among arbitrary rectangular block sizes no larger than the size of the sub-blocks into which the block is equally divided by that encoding mode.
In the examples of FIG. 5 and FIG. 6, the set of transform block sizes selectable according to the encoding mode 7 of the macroblock is determined in advance so that it can be selected adaptively in macroblock or sub-block units; similarly, a set of selectable transform block sizes may be determined in advance according to the encoding mode 7 of the sub-blocks into which the macroblock is divided (sub_mb_mode1 to 8 in FIG. 2B, etc.), so that it can be selected adaptively in sub-block units or in units of blocks into which the sub-block is further divided.
Similarly, when using the encoding modes 7 shown in FIG. 2A, the encoding control unit 3 may determine in advance a set of transform block sizes corresponding to each encoding mode 7 so that it can be selected adaptively.
When a plurality of transform block sizes are selected and designated for one macroblock or sub-block by the compression parameter 20, the transform target blocks 51 of each transform block size are sequentially output to the transform unit 52.
When a plurality of transform block sizes are selected and designated for one macroblock or sub-block by the compression parameter 20, the transform unit 52 and the quantization unit 54 perform the above transform and quantization processing for all of those transform block sizes and output the respective compressed data 21.
J2=D2+λR2 (3)
Here, D2 and R2 are used as evaluation values. As D2, the sum of squared distortion or the like is used between the macro/sub-block image 5 and the local decoded image signal 26 obtained by adding the predicted image 17 to the local decoded prediction difference signal 24, which is itself obtained by inputting the compressed data 21 obtained for a transform block size to the inverse quantization/inverse transform unit 22 and performing inverse transform and inverse quantization processing on the compressed data 21. As R2, the code amount (or estimated code amount) is used that is obtained by actually encoding, with the variable-length encoding unit 23, the compressed data 21 obtained for the transform block size together with the encoding mode 7 and the prediction parameters 10, 18 related to the compressed data 21.
When the prediction parameters 10, 18 and the compression parameter 20 have been determined for all the encoding modes 7 indicated by the encoding control unit 3 through "1. Prediction parameter determination procedure" and "2. Compression parameter determination procedure" above, the encoding control unit 3, using the compressed data 21 obtained by further transforming and quantizing the prediction difference signal 13 produced with each encoding mode 7 and its prediction parameters 10, 18 and compression parameter 20, obtains from equation (3) above the encoding mode 7 that gives the smaller encoding cost J2, and selects that encoding mode 7 as the optimal encoding mode 7a of the macroblock.
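The selection by equation (3) can be sketched as follows: for each candidate, D2 is a squared-error distortion against the locally decoded reconstruction and R2 the resulting code amount, and the candidate minimising J2 = D2 + λR2 is kept. The candidate reconstructions, bit counts, and λ value below are fabricated for illustration.

```python
def cost_j2(src, recon, bits, lam):
    """J2 = D2 + lambda * R2: D2 is the sum of squared differences between
    the source block and its locally decoded reconstruction, R2 the
    (estimated) code amount produced by that candidate."""
    d2 = sum((s - r) ** 2 for s, r in zip(src, recon))
    return d2 + lam * bits

def select_best(src, candidates, lam=10.0):
    """Pick the candidate (e.g. a transform block size or encoding mode)
    that minimises the encoding cost J2."""
    return min(candidates, key=lambda c: cost_j2(src, c["recon"], c["bits"], lam))

src = [40, 42, 45, 41]
candidates = [
    {"size": "8x8", "recon": [40, 42, 45, 41], "bits": 30},  # exact but expensive
    {"size": "4x4", "recon": [41, 41, 44, 42], "bits": 18},  # coarser, cheaper
]
print(select_best(src, candidates)["size"])  # 4x4
```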
FIG. 8 is a block diagram showing the configuration of the moving picture decoding apparatus according to Embodiment 1 of the present invention. The moving picture decoding apparatus shown in FIG. 8 includes: a variable-length decoding unit 61 that entropy-decodes the optimal encoding mode 62 in macroblock units from the bitstream 60, and entropy-decodes the optimal prediction parameter 63, the compressed data 64, and the optimal compression parameter 65 in units of macroblocks or sub-blocks divided according to the decoded optimal encoding mode 62; an intra prediction unit 69 that, when the optimal prediction parameter 63 is input, generates a predicted image 71 using the intra prediction mode included in the optimal prediction parameter 63 and a decoded image 74a stored in the intra prediction memory 77; a motion compensation prediction unit 70 that, when the optimal prediction parameter 63 is input, performs motion compensated prediction using the motion vector included in the optimal prediction parameter 63 and the reference image 76 in the motion compensated prediction frame memory 75 specified by the reference image index included in the optimal prediction parameter 63, to generate a predicted image 72; a switching unit 68 that inputs the optimal prediction parameter 63 decoded by the variable-length decoding unit 61 to either the intra prediction unit 69 or the motion compensation prediction unit 70 according to the decoded optimal encoding mode 62; an inverse quantization/inverse transform unit 66 that performs inverse quantization and inverse transform processing on the compressed data 64 using the optimal compression parameter 65 to generate a prediction difference signal decoded value 67; an addition unit 73 that adds the predicted image 71 or 72 output from either the intra prediction unit 69 or the motion compensation prediction unit 70 to the prediction difference signal decoded value 67 to generate a decoded image 74; the intra prediction memory 77, which stores the decoded image 74; a loop filter unit 78 that filters the decoded image 74 to generate a reproduced image 79; and the motion compensated prediction frame memory 75, which stores the reproduced image 79.
The optimal encoding mode 62, optimal prediction parameter 63, compressed data 64, and optimal compression parameter 65 decoded on the decoding device side correspond to the optimal encoding mode 7a, optimal prediction parameters 10a, 18a, compressed data 21, and optimal compression parameter 20a encoded on the encoding device side.
In Embodiment 2, a modification of the variable-length encoding unit 23 of the moving picture encoding apparatus according to Embodiment 1 above and a modification of the variable-length decoding unit 61 of the moving picture decoding apparatus according to Embodiment 1 above are described.
FIG. 9 is a block diagram showing the internal configuration of the variable-length encoding unit 23 of the moving picture encoding apparatus according to Embodiment 2 of the present invention. In FIG. 9, parts that are the same as or equivalent to those in FIG. 1 are given the same reference numerals, and their description is omitted. The configuration of the moving picture encoding apparatus according to Embodiment 2 is the same as that of Embodiment 1, and the operation of each component other than the variable-length encoding unit 23 is also the same as in Embodiment 1, so FIGS. 1 to 8 are referred to. For convenience of description, Embodiment 2 assumes an apparatus configuration and processing method using the set of encoding modes shown in FIG. 2A, but it goes without saying that it is also applicable to an apparatus configuration and processing method using the set of encoding modes shown in FIG. 2B.
Although details will be described later, in the example of FIG. 10, a small index value is assigned to an encoding mode with a high occurrence frequency, and its binary signal is also set as short as one bit.
For example, in the probability state of "probability table number 1" enclosed in a frame in FIG. 12 (MPS occurrence probability 0.527 from FIG. 11), encoding whichever of "0" or "1" has the lower occurrence probability (LPS: Least Probable Symbol) causes the probability state to transition, according to "probability transition after LPS encoding", to probability table number 0 (MPS occurrence probability 0.500 from FIG. 11). That is, the occurrence of an LPS makes the occurrence probability of the MPS smaller.
Conversely, when an MPS is encoded, the probability state transitions, according to "probability transition after MPS encoding", to probability table number 2 (MPS occurrence probability 0.550 from FIG. 11). That is, the occurrence of an MPS makes the occurrence probability of the MPS larger.
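The behaviour of FIGS. 11 and 12 can be sketched as a tiny adaptive model: encoding the MPS moves to a state with a higher MPS probability, encoding the LPS moves back toward 0.5, and at the 0.5 boundary an LPS swaps which symbol counts as most probable. The three-entry table uses the probability values quoted above but is otherwise a toy stand-in for the actual tables.

```python
# Toy probability table: index = probability table number, value = P(MPS).
P_MPS = [0.500, 0.527, 0.550]

def next_state(state, mps, encoded_mps):
    """Update (probability table number, MPS value) after encoding one symbol.
    Encoding the MPS raises P(MPS); encoding the LPS lowers it, and at table
    0 (P = 0.5) an LPS flips which symbol is considered most probable."""
    if encoded_mps:
        return min(state + 1, len(P_MPS) - 1), mps
    if state == 0:
        return 0, 1 - mps  # symbols are equiprobable: swap MPS and LPS
    return state - 1, mps

state, mps = 1, 0                                       # table number 1: P(MPS=0) = 0.527
state, mps = next_state(state, mps, encoded_mps=False)  # an LPS is encoded
print(state, mps, P_MPS[state])                         # 0 0 0.5
```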
The procedure by which the context generation unit 99 generates context identification information is described below.
In FIG. 13(a), the black dots are called nodes, and the lines connecting nodes are called paths. Indices of the multi-valued signal to be binarized are assigned to the terminal nodes of the binary tree. From top to bottom on the page, the depth of the binary tree corresponds to the bin number, and the bit string formed by concatenating the symbols (0 or 1) assigned to the paths from the root node to a terminal node becomes the binary signal 103 corresponding to the index of the multi-valued signal assigned to that terminal node. For each parent node (non-terminal node) of the binary tree, one or more pieces of context identification information are prepared according to the information of the neighboring blocks A and B.
The arithmetic encoding processing operation unit 104 outputs an encoded bit string 111 obtained by arithmetically encoding the symbols of all the bins, and the variable-length encoding unit 23 multiplexes it into the bitstream 30.
The initialization unit 90 performs initialization in response to an instruction given by the context information initialization flag 91 from the encoding control unit 3; this initialization is performed at the head of a slice or the like. A plurality of sets may be prepared in advance for the initial state of each piece of context information 106 (the MPS value and the initial value of the probability table number approximating its occurrence probability), and the encoding control unit 3 may include in the context information initialization flag 91 the selection of which initial state to use and instruct the initialization unit 90 accordingly.
FIG. 15 is a block diagram showing the internal configuration of the variable-length decoding unit 61 of the moving picture decoding apparatus according to Embodiment 2 of the present invention. The configuration of the moving picture decoding apparatus according to Embodiment 2 is the same as that of Embodiment 1, and the operation of each component other than the variable-length decoding unit 61 is also the same as in Embodiment 1, so FIGS. 1 to 8 are referred to.
The probability table memory 131 and the state transition table memory 135 store the same probability table (FIG. 11) and state transition table (FIG. 12) as the probability table memory 97 and the state transition table memory 98 on the encoding device side.
The arithmetic decoding processing operation unit 127 outputs, to the inverse binarization unit 138, a binary signal 137 formed by concatenating the symbols of the bins obtained as a result of the arithmetic decoding.
When the type of the parameter to be decoded is the encoding mode of a macroblock (optimal encoding mode 62), the binarization table 139 is the same as the binarization table on the encoding device side shown in FIG. 10.
In Embodiment 3, a modification of the predicted-image generation processing by the motion compensated prediction of the motion compensation prediction unit 9 in the moving picture encoding apparatus and moving picture decoding apparatus according to Embodiments 1 and 2 above is described.
As in Embodiments 1 and 2 above, the interpolated image generation unit 43 of Embodiment 3 also designates one or more frames of reference images 15 from the motion compensated prediction frame memory 14, and the motion detection unit 42 detects a motion vector 44 within a predetermined motion search range on the designated reference image 15. The motion vector is detected with virtual-pixel accuracy, as in the MPEG-4 AVC standard and the like. In this detection method, virtual samples (pixels) are created by interpolation between the integer pixels of the pixel information held by the reference image (referred to as integer pixels), and are used as a reference image.
When whether or not the high-frequency component output from the high-frequency component estimation unit 203 is added in the addition unit 206 is determined selectively, predicted images 45 are generated for both the addition and non-addition cases, motion compensated prediction is performed, and the results are encoded to determine which is more efficient. Information on the addition processing indicating whether or not the addition was performed is then multiplexed into the bitstream 30 as control information.
The interpolated image generation unit 43 generates a predicted image 45 for each motion vector 44 with integer-pixel accuracy within a predetermined motion search range of the motion-compensated region block image 41. The predicted image 45 (predicted image 17) generated with integer-pixel accuracy is output to the subtraction unit 12 and subtracted by the subtraction unit 12 from the motion-compensated region block image 41 (macro/sub-block image 5) to obtain the prediction difference signal 13. The encoding control unit 3 evaluates the prediction efficiency for the prediction difference signal 13 and the integer-pixel-accuracy motion vector 44 (prediction parameter 18). Since this evaluation of prediction efficiency may be performed by equation (1) described in Embodiment 1 above, its description is omitted.
For the 1/2-pixel-accuracy motion vectors 44 positioned around the integer-pixel-accuracy motion vector determined in "motion vector detection procedure I" above, the interpolated image generation unit 43 generates predicted images 45 using the reference image 207 with virtual-pixel accuracy generated inside the interpolated image generation unit 43 shown in FIG. 16. Thereafter, as in "motion vector detection procedure I" above, the predicted image 45 (predicted image 17) generated with 1/2-pixel accuracy is subtracted by the subtraction unit 12 from the motion-compensated region block image 41 (macro/sub-block image 5) to obtain the prediction difference signal 13. The encoding control unit 3 then evaluates the prediction efficiency for this prediction difference signal 13 and the 1/2-pixel-accuracy motion vector 44 (prediction parameter 18), and determines, from among the one or more 1/2-pixel-accuracy motion vectors positioned around the integer-pixel-accuracy motion vector, the 1/2-pixel-accuracy motion vector 44 that minimizes the prediction cost J1.
Similarly for 1/4-pixel-accuracy motion vectors, the encoding control unit 3 and the motion compensation prediction unit 9 determine, from among one or more 1/4-pixel-accuracy motion vectors positioned around the 1/2-pixel-accuracy motion vector determined in "motion vector detection procedure II" above, the 1/4-pixel-accuracy motion vector 44 that minimizes the prediction cost J1.
In the same manner, the encoding control unit 3 and the motion compensation prediction unit 9 continue detecting motion vectors with virtual-pixel accuracy until a predetermined accuracy is reached.
The configuration of the moving picture decoding apparatus according to Embodiment 3 is the same as that of the moving picture decoding apparatuses of Embodiments 1 and 2, except that the configuration and operation related to the predicted-image generation processing with virtual-pixel accuracy in the motion compensation prediction unit 70 of Embodiments 1 and 2 differ; FIGS. 1 to 16 are therefore referred to.
The addition unit 73 adds the predicted image 72 input from the motion compensation prediction unit 70 to the prediction difference signal decoded value 67 input from the inverse quantization/inverse transform unit 66 to generate the decoded image 74.
Similarly, when the moving picture decoding apparatuses according to Embodiments 1 to 3 are configured by a computer, a moving picture decoding program describing the processing contents of the variable-length decoding unit 61, the inverse quantization/inverse transform unit 66, the switching unit 68, the intra prediction unit 69, the motion compensation prediction unit 70, the motion compensated prediction frame memory 75, the intra prediction memory 77, and the loop filter unit 78 may be stored in the memory of the computer, and the CPU of the computer may execute the moving picture decoding program stored in the memory.
Claims (3)
- A moving picture encoding apparatus comprising: a block division unit that outputs a block image obtained by dividing, according to an encoding mode, a macroblock image obtained by dividing an input image into a plurality of blocks of a predetermined size, into one or more blocks;
an intra prediction unit that, when the block image is input, generates a predicted image by performing intra-frame prediction on the block image using an image signal within the frame;
a motion compensation prediction unit that, when the block image is input, generates a predicted image by performing motion compensated prediction on the block image using one or more frames of reference images;
a switching unit that inputs the block image to either the intra prediction unit or the motion compensation prediction unit according to the encoding mode of the block image output from the block division unit;
a subtraction unit that generates a prediction difference signal by subtracting, from the block image output from the block division unit, the predicted image output from either the intra prediction unit or the motion compensation prediction unit;
a transform/quantization unit that performs transform and quantization processing on the prediction difference signal to generate compressed data;
a variable-length encoding unit that entropy-encodes the compressed data and multiplexes it into a bitstream; and
an encoding control unit that designates, to the transform/quantization unit, a predetermined transform block size from a set of transform block sizes predetermined according to the block size of the block image,
wherein the transform/quantization unit divides the prediction difference signal into blocks of the transform block size designated by the encoding control unit and performs transform and quantization processing to generate the compressed data. - The moving picture encoding apparatus according to claim 1, wherein the encoding control unit designates each of one or more transform block sizes included in the set of transform block sizes to the transform/quantization unit, obtains the respective compressed data, evaluates the encoding efficiency, and selects one transform block size from the set of transform block sizes on the basis of the evaluation result;
the transform/quantization unit divides the prediction difference signal into each of the one or more transform block sizes included in the set of transform block sizes designated by the encoding control unit and into the one transform block size selected from the set, and performs transform and quantization processing to generate the respective compressed data; and
the variable-length encoding unit entropy-encodes, in block units of the block image, information specifying the one transform block size selected from the set and the compressed data for it, and multiplexes them into the bitstream. - A moving picture decoding apparatus comprising: a variable-length decoding unit that receives as input a bitstream compression-encoded in units of macroblocks obtained by dividing an image into a plurality of blocks of a predetermined size, entropy-decodes an encoding mode in the macroblock units from the bitstream, and entropy-decodes prediction parameters, a compression parameter, and compressed data in units of blocks divided according to the decoded encoding mode;
an intra prediction unit that, when the prediction parameters are input, generates a predicted image using the intra prediction mode included in the prediction parameters and a decoded image signal within the frame;
a motion compensation prediction unit that, when the prediction parameters are input, generates a predicted image by performing motion compensated prediction using the motion vector included in the prediction parameters and a reference image specified by the reference image index included in the prediction parameters;
a switching unit that inputs the prediction parameters decoded by the variable-length decoding unit to either the intra prediction unit or the motion compensation prediction unit according to the decoded encoding mode;
an inverse quantization/inverse transform unit that performs inverse quantization and inverse transform processing on the compressed data using the compression parameter to generate a decoded prediction difference signal; and
an addition unit that adds, to the decoded prediction difference signal, the predicted image output from either the intra prediction unit or the motion compensation prediction unit, and outputs a decoded image signal,
wherein the inverse quantization/inverse transform unit determines a transform block size on the basis of the decoded encoding mode and the transform block size information included in the compression parameter, and performs inverse transform and inverse quantization processing on the compressed data in block units of the transform block size.
Priority Applications (29)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020147035021A KR20150013776A (ko) | 2010-04-09 | 2011-03-31 | Moving image encoding device and moving image decoding device |
KR1020147008944A KR101500914B1 (ko) | 2010-04-09 | 2011-03-31 | Moving image decoding device |
KR1020127028737A KR101389163B1 (ko) | 2010-04-09 | 2011-03-31 | Image decoding device |
CN201810242573.3A CN108462874B (zh) | 2010-04-09 | 2011-03-31 | Moving image encoding device and moving image decoding device |
PL16181069T PL3101897T3 (pl) | 2010-04-09 | 2011-03-31 | Moving image encoding device and method, moving image decoding device and method, bitstream |
CA2795425A CA2795425C (en) | 2010-04-09 | 2011-03-31 | Moving image encoding device and moving image decoding device |
KR1020147022232A KR101540899B1 (ko) | 2010-04-09 | 2011-03-31 | Image encoding device |
CN2011800183242A CN102934438A (zh) | 2010-04-09 | 2011-03-31 | Moving image encoding device and moving image decoding device |
MX2015010849A MX353109B (es) | 2010-04-09 | 2011-03-31 | Moving image encoding device and moving image decoding device |
EP11765215.6A EP2557792A4 (en) | 2010-04-09 | 2011-03-31 | VIDEO ENCODING DEVICE AND VIDEO DECODING DEVICE |
KR1020137034944A KR20140010192A (ko) | 2010-04-09 | 2011-03-31 | Image encoding device |
EP16181069.2A EP3101897B1 (en) | 2010-04-09 | 2011-03-31 | Moving image encoding device and method, moving image decoding device and method, bitstream |
US13/639,134 US20130028326A1 (en) | 2010-04-09 | 2011-03-31 | Moving image encoding device and moving image decoding device |
MX2015010847A MX353107B (es) | 2010-04-09 | 2011-03-31 | Moving image encoding device and moving image decoding device |
SG2012075099A SG184528A1 (en) | 2010-04-09 | 2011-03-31 | Moving image encoding device and moving image decoding device |
JP2012509307A JPWO2011125313A1 (ja) | 2010-04-09 | 2011-03-31 | Moving image encoding device and moving image decoding device |
BR112012025206-2A BR112012025206B1 (pt) | 2010-04-09 | 2011-03-31 | Moving image decoding device |
KR1020177001857A KR101817481B1 (ko) | 2010-04-09 | 2011-03-31 | Moving image decoding device, moving image decoding method, moving image encoding device, moving image encoding method, and storage medium |
MX2012011695A MX2012011695A (es) | 2010-04-09 | 2011-03-31 | Moving image encoding device and moving image decoding device |
RU2012147595/08A RU2523071C1 (ru) | 2010-04-09 | 2011-03-31 | Moving image encoding device and moving image decoding device |
TW104143548A TWI601415B (zh) | 2010-04-09 | 2011-04-07 | Moving image encoding device, moving image decoding device, and storage device |
TW106125286A TWI688267B (zh) | 2010-04-09 | 2011-04-07 | Moving image encoding device, moving image decoding device, and encoded data |
TW109104055A TWI765223B (zh) | 2010-04-09 | 2011-04-07 | Moving image encoding device and method, moving image decoding device and method, and medium |
TW100111972A TWI520617B (zh) | 2010-04-09 | 2011-04-07 | Moving image encoding device, moving image decoding device, and storage device |
US15/443,431 US9973753B2 (en) | 2010-04-09 | 2017-02-27 | Moving image encoding device and moving image decoding device based on adaptive switching among transformation block sizes |
US15/920,105 US10412385B2 (en) | 2010-04-09 | 2018-03-13 | Moving image encoding device and moving image decoding device based on adaptive switching among transformation block sizes |
US15/996,293 US10390011B2 (en) | 2010-04-09 | 2018-06-01 | Moving image encoding device and moving image decoding device based on adaptive switching among transformation block sizes |
US16/444,898 US10469839B2 (en) | 2010-04-09 | 2019-06-18 | Moving image encoding device and moving image decoding device based on adaptive switching among transformation block sizes |
US16/444,922 US10554970B2 (en) | 2010-04-09 | 2019-06-18 | Moving image encoding device and moving image decoding device based on adaptive switching among transformation block sizes |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010-090534 | 2010-04-09 | ||
JP2010090534 | 2010-04-09 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/639,134 A-371-Of-International US20130028326A1 (en) | 2010-04-09 | 2011-03-31 | Moving image encoding device and moving image decoding device |
US15/443,431 Continuation US9973753B2 (en) | 2010-04-09 | 2017-02-27 | Moving image encoding device and moving image decoding device based on adaptive switching among transformation block sizes |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011125313A1 true WO2011125313A1 (ja) | 2011-10-13 |
Family
ID=44762284
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/001953 WO2011125313A1 (ja) | 2010-04-09 | 2011-03-31 | Moving image encoding device and moving image decoding device
Country Status (15)
Country | Link |
---|---|
US (6) | US20130028326A1 (ja) |
EP (2) | EP3101897B1 (ja) |
JP (8) | JPWO2011125313A1 (ja) |
KR (6) | KR101500914B1 (ja) |
CN (4) | CN108462874B (ja) |
BR (1) | BR112012025206B1 (ja) |
CA (1) | CA2795425C (ja) |
ES (1) | ES2899780T3 (ja) |
HK (1) | HK1257212A1 (ja) |
MX (3) | MX353109B (ja) |
PL (1) | PL3101897T3 (ja) |
RU (7) | RU2523071C1 (ja) |
SG (3) | SG184528A1 (ja) |
TW (4) | TWI520617B (ja) |
WO (1) | WO2011125313A1 (ja) |
Families Citing this family (75)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103299637B (zh) * | 2011-01-12 | 2016-06-29 | Mitsubishi Electric Corporation | Image encoding device, image decoding device, image encoding method, and image decoding method |
JP2013012995A (ja) * | 2011-06-30 | 2013-01-17 | Sony Corp | Image processing device and method |
US8571092B2 (en) * | 2011-10-14 | 2013-10-29 | Texas Instruments Incorporated | Interconnect coding method and apparatus |
JP6034010B2 (ja) * | 2011-10-24 | 2016-11-30 | Sony Corporation | Encoding device, encoding method, and program |
KR102021257B1 (ko) * | 2012-01-19 | 2019-09-11 | Mitsubishi Electric Corporation | Image decoding device, image encoding device, image decoding method, image encoding method, and storage medium |
US11039138B1 (en) * | 2012-03-08 | 2021-06-15 | Google Llc | Adaptive coding of prediction modes using probability distributions |
SG10201710903VA (en) * | 2012-04-15 | 2018-02-27 | Samsung Electronics Co Ltd | Parameter update method for entropy coding and decoding of conversion coefficient level, and entropy coding device and entropy decoding device of conversion coefficient level using same |
KR101436369B1 (ko) * | 2013-06-25 | 2014-09-11 | Chung-Ang University Industry-Academic Cooperation Foundation | Apparatus and method for multi-object detection using adaptive block partitioning |
JP5719410B2 (ja) * | 2013-07-25 | 2015-05-20 | Nippon Telegraph and Telephone Corporation | Image encoding method, image encoding device, and image encoding program |
US9534469B2 (en) | 2013-09-27 | 2017-01-03 | Baker Hughes Incorporated | Stacked tray ball dropper for subterranean fracking operations |
EP3104614A4 (en) * | 2014-02-03 | 2017-09-13 | Mitsubishi Electric Corporation | Image encoding device, image decoding device, encoded stream conversion device, image encoding method, and image decoding method |
FR3029333A1 (fr) | 2014-11-27 | 2016-06-03 | Orange | Method for encoding and decoding images, encoding and decoding device, and corresponding computer programs |
KR102264840B1 (ko) * | 2014-11-27 | 2021-06-15 | Samsung Electronics Co., Ltd. | Video frame encoding circuit, encoding method thereof, and video data transmission/reception device including the same |
CN106331722B (zh) | 2015-07-03 | 2019-04-26 | Huawei Technologies Co., Ltd. | Image prediction method and related device |
US10123045B2 (en) * | 2015-07-24 | 2018-11-06 | Qualcomm Incorporated | Modification to block size for transform mode in display stream compression |
CN115278230A (zh) * | 2015-11-11 | 2022-11-01 | Samsung Electronics Co., Ltd. | Device for decoding video and device for encoding video |
US20170180757A1 (en) * | 2015-12-18 | 2017-06-22 | Blackberry Limited | Binarizer selection for image and video coding |
US10142635B2 (en) | 2015-12-18 | 2018-11-27 | Blackberry Limited | Adaptive binarizer selection for image and video coding |
KR102411911B1 (ko) * | 2015-12-24 | 2022-06-22 | Samsung Electronics Co., Ltd. | Frame rate conversion device and frame rate conversion method thereof |
US10560702B2 (en) | 2016-01-22 | 2020-02-11 | Intel Corporation | Transform unit size determination for video coding |
JP6408724B2 (ja) * | 2016-01-25 | 2018-10-17 | Kyocera Corporation | Communication method and wireless terminal |
KR20180040319A (ko) * | 2016-10-12 | 2018-04-20 | Kaonmedia Co., Ltd. | Image processing method, and image decoding and encoding method using the same |
US10721468B2 (en) * | 2016-09-12 | 2020-07-21 | Nec Corporation | Intra-prediction mode determination method, intra-prediction mode determination device, and storage medium for storing intra-prediction mode determination program |
AU2016425069B2 (en) * | 2016-09-30 | 2022-03-03 | Huawei Technologies Co., Ltd. | Method and apparatus for image coding and decoding through inter prediction |
WO2018092869A1 (ja) * | 2016-11-21 | 2018-05-24 | Panasonic Intellectual Property Corporation of America | Encoding device, decoding device, encoding method, and decoding method |
US9959586B1 (en) * | 2016-12-13 | 2018-05-01 | GoAnimate, Inc. | System, method, and computer program for encoding and decoding a unique signature in a video file as a set of watermarks |
US20200177889A1 (en) * | 2017-03-21 | 2020-06-04 | Lg Electronics Inc. | Transform method in image coding system and apparatus for same |
CN109391814B (zh) | 2017-08-11 | 2023-06-06 | Huawei Technologies Co., Ltd. | Method, apparatus, and device for encoding and decoding video images |
JP6981540B2 (ja) | 2017-12-06 | 2021-12-15 | Fujitsu Limited | Method, apparatus, and electronic device for coding and decoding mode information |
WO2019160133A1 (ja) * | 2018-02-19 | 2019-08-22 | Nippon Telegraph and Telephone Corporation | Information management device, information management method, and information management program |
JP7273845B2 (ja) | 2018-04-02 | 2023-05-15 | LG Electronics Inc. | Video coding method based on motion vectors and device therefor |
US10986354B2 (en) * | 2018-04-16 | 2021-04-20 | Panasonic Intellectual Property Corporation Of America | Encoder, decoder, encoding method, and decoding method |
US11170481B2 (en) * | 2018-08-14 | 2021-11-09 | Etron Technology, Inc. | Digital filter for filtering signals |
US11425378B2 (en) | 2019-01-31 | 2022-08-23 | Hfi Innovation Inc. | Method and apparatus of transform type assignment for intra sub-partition in video coding |
WO2020156547A1 (en) | 2019-02-02 | 2020-08-06 | Beijing Bytedance Network Technology Co., Ltd. | Buffer resetting for intra block copy in video coding |
EP3915265A4 (en) | 2019-03-01 | 2022-06-22 | Beijing Bytedance Network Technology Co., Ltd. | DIRECTION-BASED PREDICTION FOR INTRA BLOCK COPY IN VIDEO CODING |
US10939107B2 (en) | 2019-03-01 | 2021-03-02 | Sony Corporation | Embedded codec circuitry for sub-block based allocation of refinement bits |
KR20210125506A (ko) | 2019-03-04 | 2021-10-18 | Beijing Bytedance Network Technology Co., Ltd. | Buffer management for intra block copy in video coding |
KR102639936B1 (ko) | 2019-03-08 | 2024-02-22 | Beijing Bytedance Network Technology Co., Ltd. | Constraints on model-based reshaping in video processing |
CN113785588B (zh) | 2019-04-12 | 2023-11-24 | Beijing Bytedance Network Technology Co., Ltd. | Chroma coding mode determination for matrix-based intra prediction |
CN115914627A (zh) | 2019-04-15 | 2023-04-04 | Beijing Bytedance Network Technology Co., Ltd. | Clipping parameter derivation in adaptive loop filter |
CN113767623B (zh) | 2019-04-16 | 2024-04-02 | Beijing Bytedance Network Technology Co., Ltd. | Adaptive loop filtering for video coding |
JP7317991B2 (ja) | 2019-04-23 | 2023-07-31 | Beijing Bytedance Network Technology Co., Ltd. | Methods for reducing cross-component dependency |
KR20220011127A (ko) | 2019-05-22 | 2022-01-27 | Beijing Bytedance Network Technology Co., Ltd. | Matrix-based intra prediction using upsampling |
EP3954125A4 (en) | 2019-05-31 | 2022-06-22 | ByteDance Inc. | INTRA-BLOCK COPY PREDICTION PALETTE MODE |
WO2020239017A1 (en) | 2019-05-31 | 2020-12-03 | Beijing Bytedance Network Technology Co., Ltd. | One-step downsampling process in matrix-based intra prediction |
CN113950836B (zh) | 2019-06-05 | 2024-01-12 | Beijing Bytedance Network Technology Co., Ltd. | Context determination for matrix-based intra prediction |
JP7418478B2 (ja) | 2019-06-22 | 2024-01-19 | Beijing Bytedance Network Technology Co., Ltd. | Syntax elements for chroma residual scaling |
KR20220023338A (ko) | 2019-06-25 | 2022-03-02 | Beijing Bytedance Network Technology Co., Ltd. | Constraints on motion vector differences |
CN117395396A (zh) | 2019-07-07 | 2024-01-12 | Beijing Bytedance Network Technology Co., Ltd. | Signaling of chroma residual scaling |
MX2022000110A (es) | 2019-07-10 | 2022-02-10 | Beijing Bytedance Network Tech Co Ltd | Sample identification for intra block copy in video coding |
CN114128295B (zh) | 2019-07-14 | 2024-04-12 | Beijing Bytedance Network Technology Co., Ltd. | Construction of geometric partitioning mode candidate list in video coding |
CN114208184A (zh) | 2019-08-13 | 2022-03-18 | Beijing Bytedance Network Technology Co., Ltd. | Motion accuracy in sub-block based inter prediction |
EP4008109A4 (en) | 2019-09-02 | 2022-09-14 | Beijing Bytedance Network Technology Co., Ltd. | ENCODING MODE DETERMINATION BASED ON COLOR FORMAT |
EP4011080A4 (en) | 2019-09-12 | 2023-04-12 | ByteDance Inc. | USING PALETTE PREDICTION IN VIDEO CODING |
JP7321364B2 (ja) | 2019-09-14 | 2023-08-04 | ByteDance Inc. | Chroma quantization parameters in video coding |
WO2021057996A1 (en) | 2019-09-28 | 2021-04-01 | Beijing Bytedance Network Technology Co., Ltd. | Geometric partitioning mode in video coding |
JP2021061501A (ja) * | 2019-10-04 | 2021-04-15 | Sharp Corporation | Moving image conversion device and method |
EP4032290A4 (en) | 2019-10-18 | 2022-11-30 | Beijing Bytedance Network Technology Co., Ltd. | SYNTAX CONSTRAINTS IN REPORTING SUBPICTURE PARAMETER SETS |
CN114600461A (zh) | 2019-10-23 | 2022-06-07 | Beijing Bytedance Network Technology Co., Ltd. | Calculations for multiple coding tools |
EP4042689A4 (en) | 2019-10-28 | 2023-06-07 | Beijing Bytedance Network Technology Co., Ltd. | SIGNALING AND SYNTAX ANALYSIS BASED ON A COLOR COMPONENT |
WO2021118977A1 (en) | 2019-12-09 | 2021-06-17 | Bytedance Inc. | Using quantization groups in video coding |
JP7393550B2 (ja) | 2019-12-11 | 2023-12-06 | Beijing Bytedance Network Technology Co., Ltd. | Sample padding for cross-component adaptive loop filtering |
KR20220113379A (ko) | 2019-12-27 | 2022-08-12 | Beijing Bytedance Network Technology Co., Ltd. | Signaling of slice types in video picture headers |
CN115176460A (zh) | 2020-02-05 | 2022-10-11 | Douyin Vision Co., Ltd. | Palette mode for local dual tree |
CN115362673A (zh) | 2020-02-14 | 2022-11-18 | Douyin Vision Co., Ltd. | Collocated picture indication in video bitstreams |
CN115336277A (zh) | 2020-03-17 | 2022-11-11 | ByteDance Ltd. | Use of video parameter sets in video coding |
WO2021185306A1 (en) | 2020-03-18 | 2021-09-23 | Beijing Bytedance Network Technology Co., Ltd. | Intra block copy buffer and palette predictor update |
KR20220156828A (ko) | 2020-03-19 | 2022-11-28 | Bytedance Inc. | Constraints on reference picture order |
KR20220156829A (ko) | 2020-03-20 | 2022-11-28 | Bytedance Inc. | Coding of neighboring subpictures |
EP4107957A4 (en) | 2020-03-21 | 2023-08-23 | Beijing Bytedance Network Technology Co., Ltd. | RESAMPLING REFERENCE IMAGE |
WO2022002007A1 (en) | 2020-06-30 | 2022-01-06 | Beijing Bytedance Network Technology Co., Ltd. | Boundary location for adaptive loop filtering |
CN111968151B (zh) * | 2020-07-03 | 2022-04-05 | Beijing Boya Huishi Intelligent Technology Research Institute Co., Ltd. | Motion estimation fine search method and device |
US11962936B2 (en) | 2020-09-29 | 2024-04-16 | Lemon Inc. | Syntax for dependent random access point indication in video bitstreams |
US11750778B2 (en) | 2021-09-30 | 2023-09-05 | Coretronic Corporation | Method for adjusting pixel values of blending image and projection system thereof |
Family Cites Families (46)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5107345A (en) | 1990-02-27 | 1992-04-21 | Qualcomm Incorporated | Adaptive block size image compression method and system |
FR2756399B1 (fr) * | 1996-11-28 | 1999-06-25 | Thomson Multimedia Sa | Method and device for video compression for synthetic images |
US6600836B1 (en) | 2000-01-28 | 2003-07-29 | Qualcomm, Incorporated | Quality based image compression |
WO2003026350A2 (en) | 2001-09-14 | 2003-03-27 | The Regents Of The University Of Michigan | Audio distributor |
CN1285216C (zh) * | 2001-11-16 | 2006-11-15 | NTT DoCoMo, Inc. | Image encoding method and device, and image decoding method and device |
JP2003189313A (ja) * | 2001-12-20 | 2003-07-04 | Matsushita Electric Industrial Co., Ltd. | Inter-picture predictive encoding method and inter-picture predictive decoding method |
CN1225904C (zh) * | 2002-04-12 | 2005-11-02 | Seiko Epson Corporation | Method and apparatus for reducing memory requirements and implementing efficient inverse motion compensation in compressed-domain video processing |
JP4090862B2 (ja) * | 2002-04-26 | 2008-05-28 | Matsushita Electric Industrial Co., Ltd. | Variable-length encoding method and variable-length decoding method |
JP2003318740A (ja) * | 2002-04-23 | 2003-11-07 | Matsushita Electric Industrial Co., Ltd. | Variable-length encoding method and variable-length decoding method |
JP2003319400A (ja) * | 2002-04-26 | 2003-11-07 | Sony Corporation | Encoding device, decoding device, image processing device, methods thereof, and programs |
JP2003324731A (ja) * | 2002-04-26 | 2003-11-14 | Sony Corporation | Encoding device, decoding device, image processing device, methods thereof, and programs |
US6975773B1 (en) * | 2002-07-30 | 2005-12-13 | Qualcomm, Incorporated | Parameter selection in data compression and decompression |
US7336720B2 (en) * | 2002-09-27 | 2008-02-26 | Vanguard Software Solutions, Inc. | Real-time video coding/decoding |
JP4238553B2 (ja) | 2002-10-08 | 2009-03-18 | NEC Corporation | Mobile phone device and image display method for display device |
JP2004135252A (ja) * | 2002-10-09 | 2004-04-30 | Sony Corporation | Encoding processing method, encoding device, and decoding device |
JP4702059B2 (ja) * | 2003-12-22 | 2011-06-15 | NEC Corporation | Method and device for encoding moving images |
US20050249278A1 (en) * | 2004-04-28 | 2005-11-10 | Matsushita Electric Industrial Co., Ltd. | Moving image coding method, moving image decoding method, moving image coding device, moving image decoding device, moving image coding program and program product of the same |
CN100401780C (zh) * | 2004-05-07 | 2008-07-09 | Broadcom Corporation | Method and system for dynamically selecting transform size in a video decoder |
US7894530B2 (en) * | 2004-05-07 | 2011-02-22 | Broadcom Corporation | Method and system for dynamic selection of transform size in a video decoder based on signal content |
JP4421940B2 (ja) * | 2004-05-13 | 2010-02-24 | NTT DoCoMo, Inc. | Moving image encoding device and method, and moving image decoding device and method |
KR100813958B1 (ko) * | 2004-06-07 | 2008-03-14 | Sejong University Industry-Academia Cooperation Foundation | Lossless encoding and decoding method for moving images, and device therefor |
CN100568974C (zh) * | 2004-09-08 | 2009-12-09 | Matsushita Electric Industrial Co., Ltd. | Moving image encoding method and moving image decoding method |
CA2610276C (en) * | 2005-07-22 | 2013-01-29 | Mitsubishi Electric Corporation | Image encoder and image decoder, image encoding method and image decoding method, image encoding program and image decoding program, and computer readable recording medium recorded with image encoding program and computer readable recording medium recorded with image decoding program |
CN103118254B (zh) | 2005-09-26 | 2016-01-20 | Mitsubishi Electric Corporation | Moving image encoding device and moving image decoding device |
JP2007243427A (ja) * | 2006-03-07 | 2007-09-20 | Nippon Hoso Kyokai <Nhk> | Encoding device and decoding device |
US20070286277A1 (en) * | 2006-06-13 | 2007-12-13 | Chen Xuemin Sherman | Method and system for video compression using an iterative encoding algorithm |
JP2008022383A (ja) * | 2006-07-13 | 2008-01-31 | Matsushita Electric Industrial Co., Ltd. | Image encoding device |
AU2006346583B2 (en) * | 2006-07-28 | 2011-04-28 | Kabushiki Kaisha Toshiba | Image encoding and decoding method and apparatus |
US8565314B2 (en) * | 2006-10-12 | 2013-10-22 | Qualcomm Incorporated | Variable length coding table selection based on block type statistics for refinement coefficient coding |
US20080170793A1 (en) * | 2007-01-12 | 2008-07-17 | Mitsubishi Electric Corporation | Image encoding device and image encoding method |
JP4635016B2 (ja) * | 2007-02-16 | 2011-02-16 | Toshiba Corporation | Information processing device and inter prediction mode determination method |
JP2010135864A (ja) * | 2007-03-29 | 2010-06-17 | Toshiba Corp | Image encoding method and device, and image decoding method and device |
JP4364919B2 (ja) * | 2007-04-20 | 2009-11-18 | Mitsubishi Electric Corporation | Moving image decoding device |
US8345968B2 (en) * | 2007-06-28 | 2013-01-01 | Mitsubishi Electric Corporation | Image encoding device, image decoding device, image encoding method and image decoding method |
JP4325708B2 (ja) * | 2007-07-05 | 2009-09-02 | Sony Corporation | Data processing device, data processing method, and data processing program; encoding device, encoding method, and encoding program; and decoding device, decoding method, and decoding program |
JP4821723B2 (ja) * | 2007-07-13 | 2011-11-24 | Fujitsu Limited | Moving image encoding device and program |
JP2009055236A (ja) * | 2007-08-24 | 2009-03-12 | Canon Inc | Video encoding device and method |
US20090274213A1 (en) | 2008-04-30 | 2009-11-05 | Omnivision Technologies, Inc. | Apparatus and method for computationally efficient intra prediction in a video coder |
US8908763B2 (en) * | 2008-06-25 | 2014-12-09 | Qualcomm Incorporated | Fragmented reference in temporal compression for video coding |
KR101549823B1 (ko) * | 2008-09-02 | 2015-09-04 | Samsung Electronics Co., Ltd. | Image encoding and decoding method and apparatus using adaptive binarization |
JP5259828B2 (ja) * | 2008-10-03 | 2013-08-07 | Qualcomm Incorporated | Video coding using transforms larger than 4×4 and 8×8 |
US8619856B2 (en) | 2008-10-03 | 2013-12-31 | Qualcomm Incorporated | Video coding with large macroblocks |
WO2010041857A2 (en) * | 2008-10-06 | 2010-04-15 | Lg Electronics Inc. | A method and an apparatus for decoding a video signal |
WO2010090629A1 (en) * | 2009-02-05 | 2010-08-12 | Thomson Licensing | Methods and apparatus for adaptive mode video encoding and decoding |
JP5216145B2 (ja) | 2009-02-09 | 2013-06-19 | Robert Bosch GmbH | Method of using a computer network |
US20120040366A1 (en) | 2009-02-10 | 2012-02-16 | Tissue Genetics, Inc. | Compositions, methods and uses for disease diagnosis |
-
2011
- 2011-03-31 KR KR1020147008944A patent/KR101500914B1/ko active IP Right Grant
- 2011-03-31 JP JP2012509307A patent/JPWO2011125313A1/ja active Pending
- 2011-03-31 CN CN201810242573.3A patent/CN108462874B/zh active Active
- 2011-03-31 CN CN201710222189.2A patent/CN106998473B/zh active Active
- 2011-03-31 ES ES16181069T patent/ES2899780T3/es active Active
- 2011-03-31 KR KR1020147022232A patent/KR101540899B1/ko active IP Right Grant
- 2011-03-31 KR KR1020177001857A patent/KR101817481B1/ko active IP Right Grant
- 2011-03-31 SG SG2012075099A patent/SG184528A1/en unknown
- 2011-03-31 RU RU2012147595/08A patent/RU2523071C1/ru active
- 2011-03-31 KR KR1020127028737A patent/KR101389163B1/ko active IP Right Grant
- 2011-03-31 BR BR112012025206-2A patent/BR112012025206B1/pt active IP Right Grant
- 2011-03-31 SG SG10201502226SA patent/SG10201502226SA/en unknown
- 2011-03-31 RU RU2014116111/08A patent/RU2573222C2/ru active
- 2011-03-31 SG SG10202001623RA patent/SG10202001623RA/en unknown
- 2011-03-31 WO PCT/JP2011/001953 patent/WO2011125313A1/ja active Application Filing
- 2011-03-31 KR KR1020137034944A patent/KR20140010192A/ko not_active Application Discontinuation
- 2011-03-31 MX MX2015010849A patent/MX353109B/es unknown
- 2011-03-31 CA CA2795425A patent/CA2795425C/en active Active
- 2011-03-31 EP EP16181069.2A patent/EP3101897B1/en active Active
- 2011-03-31 US US13/639,134 patent/US20130028326A1/en not_active Abandoned
- 2011-03-31 CN CN201710222188.8A patent/CN107046644B/zh active Active
- 2011-03-31 MX MX2012011695A patent/MX2012011695A/es active IP Right Grant
- 2011-03-31 MX MX2015010847A patent/MX353107B/es unknown
- 2011-03-31 KR KR1020147035021A patent/KR20150013776A/ko active Application Filing
- 2011-03-31 CN CN2011800183242A patent/CN102934438A/zh active Pending
- 2011-03-31 EP EP11765215.6A patent/EP2557792A4/en not_active Ceased
- 2011-03-31 PL PL16181069T patent/PL3101897T3/pl unknown
- 2011-04-07 TW TW100111972A patent/TWI520617B/zh active
- 2011-04-07 TW TW106125286A patent/TWI688267B/zh active
- 2011-04-07 TW TW104143548A patent/TWI601415B/zh active
- 2011-04-07 TW TW109104055A patent/TWI765223B/zh active
-
2014
- 2014-10-08 JP JP2014207401A patent/JP2015029348A/ja active Pending
-
2015
- 2015-12-01 RU RU2015151397A patent/RU2627101C2/ru active
-
2016
- 2016-03-01 JP JP2016039080A patent/JP2016129405A/ja active Pending
-
2017
- 2017-02-27 US US15/443,431 patent/US9973753B2/en active Active
- 2017-07-13 RU RU2017124991A patent/RU2663374C1/ru active
-
2018
- 2018-03-13 US US15/920,105 patent/US10412385B2/en active Active
- 2018-03-22 JP JP2018054476A patent/JP6605063B2/ja active Active
- 2018-06-01 US US15/996,293 patent/US10390011B2/en active Active
- 2018-07-12 RU RU2018125649A patent/RU2699049C1/ru active
- 2018-12-18 HK HK18116215.5A patent/HK1257212A1/zh unknown
-
2019
- 2019-06-18 US US16/444,922 patent/US10554970B2/en active Active
- 2019-06-18 US US16/444,898 patent/US10469839B2/en active Active
- 2019-08-20 RU RU2019126198A patent/RU2716032C1/ru active
- 2019-09-09 JP JP2019163631A patent/JP7126310B2/ja active Active
- 2019-09-09 JP JP2019163630A patent/JP7126309B2/ja active Active
- 2019-09-09 JP JP2019163628A patent/JP7129958B2/ja active Active
- 2019-09-09 JP JP2019163629A patent/JP7126308B2/ja active Active
-
2020
- 2020-02-20 RU RU2020107720A patent/RU2734871C1/ru active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003533141A (ja) * | 2000-05-10 | 2003-11-05 | Robert Bosch GmbH | Method for transform coding of moving image sequences |
JP2004134896A (ja) * | 2002-10-08 | 2004-04-30 | Ntt Docomo Inc | Image encoding method, image decoding method, image encoding device, image decoding device, image processing system, image encoding program, and image decoding program |
JP2009049779A (ja) * | 2007-08-21 | 2009-03-05 | Toshiba Corp | Information processing device and inter prediction mode determination method |
JP2009005413A (ja) * | 2008-09-30 | 2009-01-08 | Toshiba Corp | Image encoding device |
WO2010116869A1 (ja) * | 2009-04-08 | 2010-10-14 | Sharp Corporation | Moving image encoding device and moving image decoding device |
Non-Patent Citations (1)
Title |
---|
See also references of EP2557792A4 * |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10659784B2 (en) | 2011-10-14 | 2020-05-19 | Advanced Micro Devices, Inc. | Region-based image compression |
JP2014532377A (ja) * | 2011-10-14 | 2014-12-04 | Advanced Micro Devices, Inc. | Region-based image compression |
US11503295B2 (en) | 2011-10-14 | 2022-11-15 | Advanced Micro Devices, Inc. | Region-based image compression and decompression |
US9848192B2 (en) | 2011-10-14 | 2017-12-19 | Advanced Micro Devices, Inc. | Region-based image decompression |
JP2017085603A (ja) * | 2011-10-14 | 2017-05-18 | Advanced Micro Devices, Inc. | Region-based image compression |
JPWO2013065402A1 (ja) * | 2011-10-31 | 2015-04-02 | Mitsubishi Electric Corporation | Moving image encoding device, moving image decoding device, moving image encoding method, and moving image decoding method |
EP2768220A1 (en) * | 2011-11-04 | 2014-08-20 | Huawei Technologies Co., Ltd. | Method and device for encoding and decoding based on transformation mode |
EP2768220A4 (en) * | 2011-11-04 | 2014-10-29 | Huawei Tech Co Ltd | METHOD AND DEVICE FOR ENCODING AND DECODING BASED ON TRANSFORMATION MODE |
US9462274B2 (en) | 2011-11-04 | 2016-10-04 | Huawei Technologies Co., Ltd. | Transformation mode encoding and decoding method and apparatus |
CN103108177B (zh) * | 2011-11-09 | 2016-11-23 | Huawei Technologies Co., Ltd. | Image encoding method and image encoding device |
CN103108177A (zh) * | 2011-11-09 | 2013-05-15 | Huawei Technologies Co., Ltd. | Image encoding method and image encoding device |
CN107371020A (zh) * | 2011-12-28 | 2017-11-21 | JVC Kenwood Corporation | Moving image decoding device, moving image decoding method, and storage medium |
JP2017103799A (ja) * | 2012-04-24 | 2017-06-08 | Lyrical Labs Video Compression Technology, LLC | Video encoding system and method of encoding video |
JPWO2015011752A1 (ja) * | 2013-07-22 | 2017-03-02 | Renesas Electronics Corporation | Moving image encoding device and operation method thereof |
US10356437B2 (en) | 2013-07-22 | 2019-07-16 | Renesas Electronics Corporation | Moving image encoding apparatus including padding processor for adding padding data to moving image and operation method thereof |
JPWO2015033510A1 (ja) * | 2013-09-09 | 2017-03-02 | NEC Corporation | Video encoding device, video encoding method, and program |
WO2015033510A1 (ja) * | 2013-09-09 | 2015-03-12 | NEC Corporation | Video encoding device, video encoding method, and program |
JP2021507583A (ja) * | 2018-01-12 | 2021-02-22 | Fujitsu Limited | Method, device, and electronic device for group marking of uniform transform unit mode |
WO2020184145A1 (ja) * | 2019-03-08 | 2020-09-17 | Sony Corporation | Image encoding device, image encoding method, image decoding device, and image decoding method |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6605063B2 (ja) | Moving image decoding device, moving image decoding method, moving image encoding device, and moving image encoding method | |
JP6347860B2 (ja) | Image decoding device, image decoding method, image encoding device, and image encoding method | |
WO2011125256A1 (ja) | Image encoding method and image decoding method | |
WO2011125314A1 (ja) | Moving image encoding device and moving image decoding device | |
JP2011223319A (ja) | Moving image encoding device and moving image decoding device | |
JP5367161B2 (ja) | Image encoding method, device, and program | |
JP5649701B2 (ja) | Image decoding method, device, and program | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201180018324.2 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11765215 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012509307 Country of ref document: JP |
|
ENP | Entry into the national phase |
Ref document number: 2795425 Country of ref document: CA |
|
WWE | Wipo information: entry into national phase |
Ref document number: MX/A/2012/011695 Country of ref document: MX |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13639134 Country of ref document: US |
|
REEP | Request for entry into the european phase |
Ref document number: 2011765215 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2011765215 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 9213/CHENP/2012 Country of ref document: IN |
|
ENP | Entry into the national phase |
Ref document number: 20127028737 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2012147595 Country of ref document: RU Kind code of ref document: A |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112012025206 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 112012025206 Country of ref document: BR Kind code of ref document: A2 Effective date: 20121002 |