WO2013001730A1 - Image encoding device, image decoding device, image encoding method, and image decoding method - Google Patents
- Publication number
- WO2013001730A1 (PCT/JP2012/003785)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- motion
- unit
- prediction
- encoding
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/48—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using compressed domain processing techniques other than decoding, e.g. modification of transform coefficients, variable length coding [VLC] data or run-length data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/55—Motion estimation with spatial constraints, e.g. at image or region borders
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/96—Tree coding, e.g. quad-tree coding
Definitions
- the present invention relates to an image encoding device, an image decoding device, an image encoding method, and an image decoding method used for image compression encoding technology, compressed image data transmission technology, and the like.
- In AVC/H.264, a compression method based on a motion-compensated prediction technique and an orthogonal transform / transform-coefficient quantization technique is employed, taking as its unit the macroblock, i.e., block data consisting of a 16×16-pixel luminance signal and the two corresponding 8×8-pixel color-difference signals.
- In motion-compensated prediction, a motion vector search is performed and a prediction image is generated in units of macroblocks, using a preceding or succeeding encoded picture as the reference image.
- A picture that performs inter-frame prediction encoding with reference to only one picture is called a P picture, and a picture that performs inter-frame prediction encoding with reference to two pictures simultaneously is called a B picture.
- An object of the present invention is to provide an image encoding device, an image decoding device, an image encoding method, and an image decoding method realizing a video encoding scheme capable of such encoding and decoding.
- An image coding apparatus according to the invention divides each picture of a moving picture signal into coding blocks, which are predetermined coding units, and performs compression coding using motion-compensated prediction for each coding block.
- The apparatus comprises: a motion compensation unit that generates a prediction image for each motion-compensated prediction unit region, a unit obtained by dividing the coding block, using a motion vector selected for that region; and a variable-length encoding unit that variable-length encodes the compressed data obtained by compressing the differential image between the prediction image and the corresponding input signal, together with information on the motion vector, to generate a bitstream, and multiplexes into the bitstream a reference image restriction flag indicating whether the significant reference image area (the area on the reference image that can be used for motion-compensated prediction) is limited to a predetermined area.
- The motion compensation unit identifies the significant reference image area based on the reference image restriction flag and, when the prediction image would include pixels outside the significant reference image area, generates those pixels virtually by extending the pixels at the boundary of that area.
- With an encoding apparatus that detects or generates motion vectors in parallel in units of divided pictures, and a decoding apparatus that generates motion-compensated prediction images using them, operation is efficient with a small amount of memory and few memory accesses. It is therefore possible to perform highly efficient image encoding / decoding processing even under high processing loads such as high-resolution video.
- FIG. 1 is a block diagram showing how motion information is generated in Embodiment 1 of the present invention.
- Embodiment 1. With reference to FIG. 1, the parts that are characteristic of the encoding apparatus (and decoding apparatus) according to Embodiment 1 of the present invention will be described.
- Consider a w*h region, where w is the horizontal size of the frame and h is the number of vertical lines of a divided region serving as a predetermined screen-division unit.
- The w*h area is the area in which the reference image can be accessed as significant image data (hereinafter referred to as the significant reference image area).
- The decoding device side does not execute a high-load motion vector search process such as that shown in FIG. 5C, so it does not need to divide the screen and perform parallel processing.
- Since the significant reference image region itself is not divided on the decoding side, all the pixels of the prediction image block can be generated from significant reference image data for either of the motion vectors (a) and (b). That is, there is a problem in that, while the decoding device side could generate a predicted image without difficulty even if it received an ideal motion vector, the encoding side cannot search for that ideal motion vector.
- the image encoding device and the image decoding device in Embodiment 1 will be described below.
- In the following, an image encoding device that receives each frame image of a video, performs motion-compensated prediction between adjacent frames, applies compression by orthogonal transform and quantization to the resulting prediction difference signal, and then performs variable-length encoding to generate a bitstream will be described, together with an image decoding device that decodes the bitstream output from the image encoding device.
- The image coding apparatus is characterized in that it adapts to local changes of the video signal in the spatial and temporal directions, divides the video signal into regions of various sizes, and performs intraframe / interframe adaptive coding.
- a video signal has a characteristic that the complexity of the signal changes locally in space and time.
- Some patterns have uniform signal characteristics over a relatively large image area, such as the sky or a wall, while patterns with complicated texture, such as people or paintings with fine detail, may be mixed into the same region as small areas.
- A prediction difference signal with small signal power and entropy is generated by temporal and spatial prediction to reduce the overall code amount.
- If the same prediction parameter can be applied uniformly over as large an image signal region as possible, the code amount of the parameter can be reduced.
- However, applying the same prediction parameter to a large image region increases prediction errors, so the code amount of the prediction difference signal cannot be reduced.
- Therefore, the coding apparatus hierarchically divides the video signal starting from a predetermined maximum block size, and adapts the prediction and the encoding of the prediction difference for each divided region.
- The video signal format to be processed by the image coding apparatus may be a color signal in an arbitrary color space, such as a YUV signal composed of a luminance signal and two color-difference signals, or an RGB signal output from a digital image sensor.
- It may also be any video signal in which a video frame is composed of a horizontal and vertical two-dimensional digital sample (pixel) sequence, such as a monochrome image signal or an infrared image signal.
- The gradation of each pixel may be 8 bits, or a higher gradation such as 10 bits or 12 bits.
- In the following description, the input video signal is a YUV signal, and a signal in 4:2:0 format, in which the two color-difference components U and V are subsampled relative to the luminance component Y, is handled.
- The present invention can also be applied to other formats with different U and V sampling intervals (for example, the 4:2:2 or 4:4:4 format).
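As a rough illustration of the 4:2:0 format described above, the sketch below downsamples the U and V planes by 2 in each direction. The 2x2 averaging filter is an assumption for illustration only; the description does not mandate a particular subsampling filter.

```python
import numpy as np

def subsample_420(u: np.ndarray, v: np.ndarray):
    """Subsample full-resolution U/V planes by 2 horizontally and
    vertically (simple 2x2 averaging; real encoders may use other filters)."""
    def down2(p):
        h, w = p.shape
        return p.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return down2(u), down2(v)

# A 4x4 chroma plane becomes 2x2 in 4:2:0 format.
u = np.arange(16, dtype=float).reshape(4, 4)
u_sub, _ = subsample_420(u, u)
print(u_sub.shape)  # (2, 2)
```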
- a processing data unit corresponding to each frame of a video is called a “picture”.
- In this description, a "picture" is a progressively scanned video frame signal.
- When the video signal is an interlaced signal, a "picture" may instead be a field image signal, which is the unit constituting a video frame.
- FIG. 2 is a block diagram showing the configuration of the image coding apparatus according to Embodiment 1 of the present invention.
- FIG. 3 shows a processing flow at the picture level of the image coding apparatus of FIG.
- First, the encoding control unit 3 determines the size of the maximum encoding block used for encoding the picture to be encoded (current picture) and the upper limit of the number of layers into which the maximum encoding block is hierarchically divided (step S1 in FIG. 3).
- The same maximum block size may be set for all pictures according to the resolution of the input video signal 1, or the difference in the complexity of local motion in the input video signal 1 may be quantified as a parameter, so that a small size is chosen for pictures with intense motion and a large size for pictures with little motion.
- Likewise, the upper limit of the number of division layers may be set deeper when motion is intense, so that finer motion can be detected, and shallower when motion is small.
- the block dividing unit 2 divides the picture with the maximum coding block size determined above.
- The encoding control unit 3 then hierarchically determines the encoding block size 4 and the encoding mode 7 for each encoding block, until the upper limit of the number of division layers is reached, for each image area of the maximum encoding block size.
- the block dividing unit 2 further divides the block according to the encoded block size 4 and outputs the encoded block 5 (step S2 in FIG. 3).
- FIG. 4 shows an example of how the maximum coding block is hierarchically divided into a plurality of coding blocks 5.
- The maximum coding block is defined as the coding block whose luminance component has a size of (L 0 , M 0 ), indicated as the "0th layer" in FIG. 4.
- The encoding block 5 is obtained by hierarchical division in a quadtree structure, starting from the maximum encoding block, down to a separately determined depth.
- the coding block 5 is an image area of size (L n , M n ).
- the encoding block 5 in the nth layer is denoted by B n
- the coding mode 7 that can be selected by B n is denoted by m (B n ).
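Assuming each quadtree split divides a block into four half-size blocks (the usual reading of the hierarchy above), the per-layer block sizes (L n , M n ) can be sketched as:

```python
def layer_sizes(l0: int, m0: int, max_depth: int):
    """Block size (L_n, M_n) at each quadtree layer n, assuming every
    split halves the block in both the horizontal and vertical directions."""
    return [(l0 >> n, m0 >> n) for n in range(max_depth + 1)]

# A 64x64 maximum coding block with 3 division layers:
print(layer_sizes(64, 64, 3))  # [(64, 64), (32, 32), (16, 16), (8, 8)]
```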
- the encoding mode m (B n ) 7 may be configured to use an individual mode for each color component.
- the present invention can be applied to any video format, color component, and encoding mode.
- The encoding mode m (B n ) 7 includes one or more intra encoding modes (generically referred to as INTRA) and one or more inter encoding modes (generically referred to as INTER). Based on a selection method described later, the encoding control unit 3 selects, from all the modes available for the picture or a subset thereof, the encoding mode with the highest encoding efficiency for the encoding block B n 5.
- B n is further divided into one or more prediction processing units (partitions).
- A partition belonging to B n is hereinafter denoted P i n (i: partition number in the nth layer). How the partitioning of B n is performed is included as information in the encoding mode m (B n ) 7. All partitions P i n are subjected to prediction processing according to the encoding mode m (B n ) 7, but individual prediction parameters can be selected for each partition.
- the encoding control unit 3 identifies the encoding block 5 by generating a block division state as shown in FIG. 5 for the maximum encoding block, for example.
- the shaded portion in FIG. 6A shows the distribution of the partitions after the division
- FIG. 5B shows the situation where the encoding mode m (B n ) 7 is assigned by the hierarchical division in a quadtree graph.
- a node surrounded by a square in (b) is a node to which the encoding mode 7 is assigned, that is, the encoding block 5.
- Detailed processing of such layer division / encoding mode determination in the encoding control unit 3 will be described later.
- The intra prediction unit 8 in FIG. 2 performs intra prediction processing for each partition P i n in B n based on the intra prediction parameter 10, and outputs the generated intra predicted image 11 to the subtraction unit 12 (step S4 in FIG. 3).
- the intra prediction parameter 10 used to generate the intra predicted image 11 is multiplexed into the bit stream 30 by the variable length encoding unit 23 in order to generate the same intra predicted image 11 on the decoding device side.
- The intra prediction process in the first embodiment is not limited to the algorithm defined in the AVC/H.264 standard (ISO/IEC 14496-10), but the intra prediction parameters must include the information necessary for the encoding device side and the decoding device side to generate exactly the same intra prediction image.
- The motion compensation prediction unit 9 in FIG. 2 performs inter-frame motion prediction processing for each partition P i n based on the inter prediction parameter 16, outputs the generated inter prediction image 17 to the subtraction unit 12, and outputs the motion vector 31 to the variable length coding unit 23 (step S5 in FIG. 3).
- the inter prediction parameter 16 used to generate the inter prediction image 17 is multiplexed into the bitstream 30 by the variable length encoding unit 23 in order to generate the exact same inter prediction image 17 on the decoding device side.
- The inter prediction parameters used to generate the inter prediction image include:
- mode information describing the partitioning within the coding block B n ;
- the motion vector of each partition;
- reference image indication index information indicating which of the plural reference images held in the motion prediction frame memory 14 is used for prediction;
- index information indicating which motion vector prediction value is selected and used, when there are a plurality of motion vector prediction value candidates;
- index information indicating which filter is selected and used, when there are a plurality of motion compensation interpolation filters;
- selection information indicating which pixel accuracy is used, when the motion vector of the partition can indicate multiple pixel accuracies (half pixel, 1/4 pixel, 1/8 pixel, etc.).
- In order to generate exactly the same inter prediction image on the decoding device side, the variable length encoding unit 23 multiplexes these parameters into the bitstream. Detailed processing contents of the motion compensation prediction unit 9 will be described later.
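A minimal container for the inter prediction parameters listed above might look as follows. Every field name here is hypothetical, chosen for illustration, and not taken from the specification:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class InterPredictionParams:
    """Illustrative grouping of the inter prediction parameters above
    (all names are hypothetical, not from the specification)."""
    partition_mode: int                    # partitioning within block B_n
    motion_vectors: List[Tuple[int, int]]  # one motion vector per partition
    ref_image_index: int = 0               # which reference image is used
    mv_predictor_index: int = 0            # which MV prediction value candidate
    interp_filter_index: int = 0           # which interpolation filter
    mv_precision: str = "quarter"          # "half", "quarter", "eighth", ...

p = InterPredictionParams(partition_mode=1, motion_vectors=[(3, -2)])
print(p.mv_precision)  # quarter
```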
- the subtraction unit 12 subtracts either the intra predicted image 11 or the inter predicted image 17 from the partition P i n to obtain a predicted difference signal e i n 13 (step S6 in FIG. 3).
- The transform / quantization unit 19 applies to the prediction difference signal e i n 13, based on the prediction difference encoding parameter 20 instructed by the encoding control unit 3, an orthogonal transform process such as a DCT (discrete cosine transform) or a KL transform whose basis has been designed in advance for a specific learning sequence, calculates the transform coefficients, and quantizes them based on the same prediction difference coding parameter 20 (step S7 in FIG. 3).
- The compressed data 21, i.e. the transform coefficients after quantization, is output to the inverse quantization / inverse transform unit 22 and to the variable length coding unit 23 (step S8 in FIG. 3).
- The inverse quantization / inverse transform unit 22 inversely quantizes the compressed data 21 input from the transform / quantization unit 19 based on the prediction difference encoding parameter 20 instructed by the encoding control unit 3, then generates the locally decoded prediction difference signal e i n '24 of the prediction difference signal e i n 13 by performing an inverse transform process such as an inverse DCT or inverse KL transform, and outputs it to the adder 25 (step S9 in FIG. 3).
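A simplified sketch of this transform / quantization and local-decoding round trip, using an orthonormal 2-D DCT-II and a plain uniform quantizer. The standardized quantizer is considerably more elaborate; this only illustrates the data flow:

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix (rows are frequencies)."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def transform_quantize(block: np.ndarray, qstep: float) -> np.ndarray:
    """Forward 2-D DCT of a prediction difference block, then uniform
    quantization -- a stand-in for the transform/quantization unit 19."""
    c = dct_matrix(block.shape[0])
    return np.round(c @ block @ c.T / qstep).astype(int)

def dequantize_inverse(levels: np.ndarray, qstep: float) -> np.ndarray:
    """Inverse quantization followed by inverse DCT -- a stand-in for
    the inverse quantization / inverse transform unit 22."""
    c = dct_matrix(levels.shape[0])
    return c.T @ (levels * qstep) @ c

rng = np.random.default_rng(0)
resid = rng.integers(-20, 20, size=(4, 4)).astype(float)
levels = transform_quantize(resid, qstep=2.0)
recon = dequantize_inverse(levels, qstep=2.0)
print(np.abs(recon - resid).max() <= 4.0)  # True: error bounded by quantization
```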
- the prediction difference encoding parameter 20 includes information on the quantization parameter and transform block size used for encoding the prediction difference signal e i n 13 inside each area of the encoding block 5.
- the prediction difference encoding parameter 20 is determined by the encoding control unit 3 as part of the encoding mode determination in step S2 of FIG.
- The quantization parameter may be assigned in units of the maximum coding block and used in common by the divided encoding blocks, or it may be expressed, for each encoding block, as a difference value from the value of the maximum encoding block.
- The transform block size information may be expressed by quadtree partitioning starting from the coding block 5, as in the division of the maximum coding block, or several selectable transform block sizes may be represented as index information.
- the transform / quantization unit 19 and the inverse quantization / inverse transform unit 22 specify the block size of the transform / quantization process based on the transform block size information and perform the process.
- The transform block size may also be determined not in units of the coding block 5 but in units of the partitions P i n into which the coding block 5 is divided.
- The adding unit 25 adds the locally decoded prediction difference signal e i n '24 and the intra predicted image 11 or the inter predicted image 17 to generate the locally decoded partition image P i n ', or a locally decoded encoded block image B n ' as a collection of such partitions (hereinafter referred to as the local decoded image) 26 (step S10 in FIG. 3). The local decoded image 26 is output to the loop filter unit 27 (loop filter unit in step S11 in FIG. 3) and stored in the intra prediction memory (intra prediction memory in step S11 in FIG. 3).
- the locally decoded image 26 becomes an image signal for subsequent intra prediction.
- When the output destination is the intra prediction memory, it is then determined whether all the encoded blocks in the picture have been processed; if not, the process proceeds to the next encoded block and the same encoding process is repeated (step S12 in FIG. 3).
- When the output destination of the addition unit 25 is the loop filter unit 27, the loop filter unit 27 performs a predetermined filtering process on the local decoded image 26 output from the addition unit 25, and stores the filtered local decoded image 29 in the motion compensated prediction frame memory 14 (step S13 in FIG. 3). The filtered local decoded image 29 serves as the reference image 15 for motion compensation prediction.
- The filtering process by the loop filter unit 27 may be performed in units of the maximum encoded block or of individual encoded blocks of the input local decoded image signal 26, or it may be performed for one screen at a time after the local decoded image signals 26 corresponding to the macroblocks of one screen have been input.
- The variable length encoding unit 23 entropy-encodes the compressed data 21 output from the transform / quantization unit 19, the encoding mode 7 (including the division state of the maximum encoding block) output from the encoding control unit 3, the intra prediction parameter 10, the inter prediction parameter 16, and the prediction differential encoding parameter 20, and generates the bitstream 30 indicating the encoding result (step S14 in FIG. 3).
- each division unit is referred to as a tile
- motion compensation prediction is performed independently for each tile.
- the size of the tile in the horizontal and vertical directions is a multiple of the size of the maximum coding block.
- The division state of the tiles may be fixed and uniquely determined on the encoding device side (in this case, the decoding device performs decoding without being aware of the tile structure for processing other than motion compensation prediction), or
- a mechanism for transmitting the position and size of the upper-left corner of each tile to the decoding apparatus via the bitstream may be provided, so that they can be determined freely.
- the tile may be a slice used in the conventional AVC / H.264 or the like.
- The motion compensation prediction unit 9 executes processing for each coding block 5 in a tile. Since the picture can thus be divided and the motion compensation prediction processes executed in parallel, encoding can be performed at high speed even when the input video signal is high-resolution video.
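A minimal sketch of such a tile division, assuming only that tile dimensions are multiples of the maximum coding block size as stated above (all concrete sizes are illustrative, not from the description):

```python
def tile_grid(pic_w, pic_h, tile_w, tile_h, max_block=64):
    """Enumerate tiles (x, y, w, h) covering the picture; tile width and
    height must be multiples of the maximum coding block size."""
    assert tile_w % max_block == 0 and tile_h % max_block == 0
    return [(x, y, min(tile_w, pic_w - x), min(tile_h, pic_h - y))
            for y in range(0, pic_h, tile_h)
            for x in range(0, pic_w, tile_w)]

# A 1920x1088 picture split into three 640x1088 vertical tile stripes,
# each of which could be motion-compensated in parallel.
tiles = tile_grid(1920, 1088, 640, 1088)
print(len(tiles))  # 3
```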
- FIG. 7 shows the configuration of the motion compensation prediction unit 9.
- The motion information generation unit 100 performs a motion vector search with reference to the reference image 15, or refers to the motion information 102 of a plurality of encoded blocks held in the motion information memory 101, generates the motion information 103 for each partition P i n in the coding block 5, and outputs it to the inter prediction image generation unit 104.
- The motion information generation unit 100 generates the motion information based on the value of the reference image restriction flag 105, which indicates whether or not the area on the reference image 15 that can be used for motion compensation prediction (hereinafter referred to as the significant reference image area) is limited to a predetermined area (for example, the current tile area).
- When the reference image restriction flag 105 is ON, that is, when the significant reference image region is the current tile region (FIG. 8), and moving the current partition by a motion vector places some of its pixels outside the significant reference image area, the pixels located at the end points of the significant reference image area are extended by a predetermined method to virtually generate the pixels of the predicted image.
- As the predetermined method, there are, for example, a method of repeating the end-point pixel, a method of mirroring around the end-point pixel, and a method of generating the pixel from pixels inside the significant reference image area.
- In this case, the memory for the reference image can be limited to the size of the tile, which has the advantage of reducing the memory used. Even with the limited memory, it is possible to refer outside the tile by extending pixels with the predetermined method, so it is not necessary to forcibly narrow the motion vector search range as shown in FIG., which contributes to improved coding efficiency.
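A sketch of this boundary-extension idea using `numpy.pad`: mode "edge" corresponds to repeating the end-point pixel and mode "reflect" to mirroring around it. The exact extension method used by the apparatus is a design choice; this only illustrates the mechanism:

```python
import numpy as np

def fetch_block(ref_tile: np.ndarray, x: int, y: int, w: int, h: int,
                mode: str = "edge") -> np.ndarray:
    """Fetch a w x h prediction block at (x, y) from a tile's reference
    data, virtually generating pixels outside the tile by extension.
    mode="edge" repeats the end-point pixel; mode="reflect" mirrors it."""
    pad = max(w, h)  # padding margin wide enough for this displacement
    padded = np.pad(ref_tile, pad, mode=mode)
    return padded[y + pad:y + pad + h, x + pad:x + pad + w]

tile = np.arange(16).reshape(4, 4)
# Motion vector points 2 pixels left of the tile boundary:
blk = fetch_block(tile, x=-2, y=0, w=4, h=2, mode="edge")
print(blk[0])  # [0 0 0 1] -- leftmost column repeated into the gap
```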
- When the reference image restriction flag 105 is OFF, that is, when there is no restriction on the significant reference image area (FIG. 9), the motion vector generated by the motion information generation unit 100 may move the current partition anywhere within the reference image.
- When memory for the whole reference image can be secured, all the pixels in the reference image can be referred to, which has the advantage of improving encoding efficiency.
- Even then, the search range may be determined so that the motion vector search refers only to pixels inside the tile (the case of FIG. 1 (b)), and when motion information is generated by referring to the motion information of a plurality of encoded blocks, any motion vector among them that refers outside the tile may be excluded or corrected. Since the processing amount can be suppressed by not performing pixel extension at the end points of the significant reference image region, control such as setting the reference image restriction flag 105 to OFF when pixel extension does not improve prediction performance is also possible.
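One possible way to "correct" a candidate motion vector so that the moved partition stays inside the tile, as described above. Integer-pel vectors are assumed and the interpolation-filter margin is ignored; this is a sketch, not the apparatus's actual rule:

```python
def clamp_mv_to_tile(mv, part_x, part_y, part_w, part_h, tile_w, tile_h):
    """Clamp motion vector mv = (mvx, mvy) so the part_w x part_h partition
    at (part_x, part_y), once moved, lies entirely inside the tile."""
    mvx, mvy = mv
    mvx = max(-part_x, min(mvx, tile_w - part_w - part_x))
    mvy = max(-part_y, min(mvy, tile_h - part_h - part_y))
    return (mvx, mvy)

# An 8x8 partition at (4, 0) in a 64x64 tile; (-10, 3) reaches outside,
# so the horizontal component is corrected to -4.
print(clamp_mv_to_tile((-10, 3), 4, 0, 8, 8, 64, 64))  # (-4, 3)
```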
- the inter prediction image generation unit 104 generates and outputs an inter prediction image 17 based on the input motion information 103, the reference image 15, and the reference image restriction flag 105.
- When the reference image restriction flag 105 is ON, for the partition area moved by the motion vector (motion information 103), pixels inside the tile use the reference image data within the tile, while pixels outside the tile are virtually generated as reference image data by the same procedure as used in the motion information generation unit 100; the inter prediction image 17 is obtained in this way.
- When the reference image restriction flag 105 is OFF, it is interpreted that prediction may refer to the entire picture.
- The reference image restriction flag 105 is input to the variable length coding unit 23, entropy-coded as a higher-level syntax parameter, for example in units of sequences, and multiplexed into the bit stream 30.
- the inter predicted image 17 generated by the inter predicted image generation unit 104 needs to be data equivalent to the inter predicted image 72 obtained on the decoding device side.
- FIG. 10 is a block diagram showing the configuration of the image decoding apparatus according to Embodiment 1 of the present invention.
- FIG. 11 shows a picture level processing flow of the image decoding apparatus of FIG.
- the operation of the image decoding apparatus according to the first embodiment will be described with reference to these drawings.
- The variable length decoding unit 61 performs variable length decoding processing on the bitstream 30 (step S21 in FIG. 11), and decodes the frame size in units of sequences, each consisting of one or more pictures, or in units of pictures.
- The maximum coding block size and the upper limit of the number of divided layers determined by the image coding apparatus of Embodiment 1 are determined by the same procedure as on the encoding side (step S22 in FIG. 11). For example, when the maximum encoding block size was determined according to the resolution of the input video signal, it is determined based on the decoded frame size by the same procedure as the encoding apparatus.
- When the maximum encoding block size and the upper limit of the number of divided layers are multiplexed into the bit stream 30 on the encoding device side, the values decoded from the bit stream 30 are used.
- The image coding apparatus multiplexes into the bit stream 30 the encoding mode, in units of the coding blocks obtained by hierarchically dividing the maximum coding block into a plurality of coding blocks starting from the maximum coding block, together with the compressed data obtained by transform / quantization.
- The variable length decoding unit 61 that has received the bit stream 30 decodes, in the determined maximum coding block units, the division state of the maximum coding block included in the coding mode. Based on the decoded division state, the coding blocks are identified hierarchically (step S23 in FIG. 11).
- the encoding mode 62 assigned to the specified encoding block is decoded.
- the prediction parameter 63 is decoded in units obtained by further dividing the encoded block into one or more prediction processing units (partitions) (step S24 in FIG. 11).
- the intra prediction parameter 63a is decoded for each of one or more partitions included in the coding block and serving as a prediction processing unit.
- the prediction value of the intra prediction parameter 63a of the partition P_i^n to be decoded is calculated from the intra prediction parameters 63a of neighboring decoded partitions by the same procedure as on the encoding device side, and the parameter is decoded using that prediction value.
- the inter prediction parameter 63b is decoded for each of one or more partitions included in the coding block and serving as a prediction processing unit.
- the partition serving as the prediction processing unit is further divided into one or more partitions serving as transform processing units based on transform block size information (not shown) included in the prediction differential encoding parameter 65, and the compressed data (transformed and quantized transform coefficients) is decoded for each transform processing unit (step S24 in FIG. 11).
- the output destination of the variable length decoding unit 61 is selected by the changeover switch 68 (step S25 in FIG. 11) according to whether the coding mode 62 assigned to the coding block is an intra coding mode (step S26 in FIG. 11).
- the intra prediction unit 69 performs intra prediction for each partition in the coding block based on the decoded intra prediction parameter 63a (step S27 in FIG. 11), and outputs the generated intra predicted image 71 to the adding unit 73.
- the intra prediction process based on the intra prediction parameter 63a is the same as the process in the intra prediction unit 8 on the encoding device side.
- the motion compensation unit 70 performs inter-frame motion compensated prediction for each partition in the coding block based on the decoded inter prediction parameter 63b (including the motion vector) (step S28 in FIG. 11), and outputs the generated inter predicted image 72 to the adding unit 73.
- the inverse quantization / inverse transform unit 66 inversely quantizes the compressed data 64 input for each transform processing unit from the variable length decoding unit 61, based on the quantization parameter included in the prediction differential encoding parameter 65, applies an inverse transform such as the inverse DCT or inverse KL transform, generates a decoded prediction difference signal 67 (step S29 in FIG. 11), and outputs it to the adder 73.
- the adder 73 adds the decoded prediction difference signal 67 and the intra predicted image 71 or the inter predicted image 72 to generate a decoded partition image for each of the one or more partitions included in the coding block (step S30 in FIG. 11).
- the decoded partition image 74 is output to the loop filter unit 78 and stored in the intra prediction memory 77.
- the decoded partition image 74 becomes an image signal for subsequent intra prediction.
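The addition in step S30 can be sketched as a per-pixel sum of the decoded prediction difference signal and the predicted image, clipped back to the valid sample range. The NumPy layout and the 8-bit clip below are assumptions made for the sketch, not details taken from the patent:

```python
import numpy as np

def reconstruct_partition(decoded_residual, predicted_image, bit_depth=8):
    """Add the decoded prediction difference signal to the predicted image
    (intra or inter) and clip to the valid sample range, as the adder does."""
    max_val = (1 << bit_depth) - 1
    recon = decoded_residual.astype(np.int32) + predicted_image.astype(np.int32)
    return np.clip(recon, 0, max_val).astype(np.uint8)

residual = np.array([[-10, 5], [0, 300]], dtype=np.int32)
prediction = np.array([[20, 250], [0, 10]], dtype=np.uint8)
print(reconstruct_partition(residual, prediction))
```

The same routine serves both branches of the changeover switch, since only the source of the predicted image differs.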
- after all coding blocks have been processed (Yes in step S31 of FIG. 11), the loop filter unit 78 performs the same filtering process as the loop filter unit 27 on the encoding device side (step S32 in FIG. 11), and the filtered decoded image 79 is stored in the motion compensated prediction frame memory 75.
- the decoded image 79 becomes a reference image 76 for subsequent motion compensation processing and a reproduced image.
- the motion compensation unit 70 that is a feature of the present invention will be described below.
- the internal configuration of the motion compensation unit 70 is shown in FIG.
- the motion information generation unit 200 refers to the inter prediction parameter 63b given from the variable length decoding unit 61 and to the motion information 202 of a plurality of encoded blocks held in the motion information memory 201, generates motion information 203 for each partition P_i^n, and inputs it to the inter prediction image generation unit 204.
- based on the input motion information 203, the motion compensated prediction reference image 76, and the reference image restriction flag 105 decoded from the bitstream 30 by the variable length decoding unit 61, the inter predicted image generation unit 204 generates and outputs the inter predicted image 72.
- when the reference image restriction flag 105 is ON, then within the partition area displaced by the motion vector, pixels belonging to the tile use the reference image data inside the tile, while for pixels falling outside the tile the reference image data is virtually generated by the same procedure as that used in the motion information generation unit 100, and the predicted image is obtained from these. On the other hand, when the reference image restriction flag 105 is OFF, the usable range of the reference image is not particularly limited, and the predicted image is obtained from the reference image by the same procedure as the method used in the motion information generation unit 100. As described above, the inter predicted image 72 generated by the inter prediction image generation unit 204 must be data equivalent to the inter predicted image 17 obtained on the encoding device side; by introducing the reference image restriction flag 105, even when motion vector search is processed in parallel in units such as tiles in the encoding device, mismatches between the predicted images at encoding and decoding can be avoided, and stable and highly efficient coding can be performed.
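The virtual generation of reference data outside the tile can be realized, for example, by edge extension, i.e. clamping every reference coordinate into the tile before the fetch. The sketch below is one possible realization under that assumption; the patent only requires that encoder and decoder use an identical procedure:

```python
import numpy as np

def predict_with_restriction(ref, x0, y0, w, h, tile):
    """Fetch a w*h prediction block whose top-left is (x0, y0) on the
    reference image `ref`, with the significant area limited to `tile`
    = (tx0, ty0, tx1, ty1), inclusive. Pixels outside the tile are
    virtually generated by clamping coordinates to the tile edge."""
    tx0, ty0, tx1, ty1 = tile
    block = np.empty((h, w), dtype=ref.dtype)
    for dy in range(h):
        for dx in range(w):
            # clamp into the significant reference image area (edge extension)
            x = min(max(x0 + dx, tx0), tx1)
            y = min(max(y0 + dy, ty0), ty1)
            block[dy, dx] = ref[y, x]
    return block
```

With the flag OFF the same function could simply be called with the full reference frame as the tile, so the two modes share one code path.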
- FIG. 13 shows the operation when the reference image restriction flag 105 is ON and the significant reference image region is expanded.
- the parameters dx and dy that specify the range of the significant reference image area may be fixed in advance, for example by profile and level, or may be multiplexed into the bitstream as part of an upper header such as the sequence header or picture header. Defining them in an upper header allows the reference area to be set according to the performance of the apparatus, balancing performance against implementation load. Even in this case, when a reference falls outside the significant reference image area, a predicted image can be generated by virtually extending the pixels, as described with reference to FIG. 8 and elsewhere.
- as an example of generating motion information from a plurality of encoded blocks, a mode is conceivable in which, as shown in FIG. 14, the motion information (motion vector, reference image index, prediction direction, etc.) of encoded neighboring blocks, and of blocks located at the same spatial position on the reference image, held in the motion information memories 101 and 201, is used as-is.
- in this mode, motion information may be generated based on the reference image restriction flag 105, retaining only the candidates that remain usable as motion information.
- when the candidate corresponding to MV_A points outside the significant reference image region, only the candidates corresponding to MV_B and MV_C can be selected as the motion information for this mode.
- without this exclusion, the index takes the three values 0, 1, and 2, increasing the amount of information encoded for the index; by excluding unusable candidates, the amount of code required for the index can be suppressed.
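The exclusion measure described above can be sketched as filtering the candidate list before index assignment. The block position, size, and rectangular significant area below are illustrative assumptions, as is the candidate naming:

```python
def usable_candidates(candidates, block_x, block_y, bw, bh, area):
    """Keep only motion-vector candidates whose displaced bw*bh block lies
    inside the significant reference image area `area` = (x0, y0, x1, y1),
    inclusive, so the coded candidate index needs fewer values."""
    x0, y0, x1, y1 = area
    kept = []
    for name, (mvx, mvy) in candidates:
        rx, ry = block_x + mvx, block_y + mvy
        if rx >= x0 and ry >= y0 and rx + bw - 1 <= x1 and ry + bh - 1 <= y1:
            kept.append((name, (mvx, mvy)))
    return kept

cands = [("MV_A", (-40, 0)), ("MV_B", (4, 2)), ("MV_C", (0, -3))]
# 8x8 block at (32, 16); significant area covers x in [0, 127], y in [0, 63]
print(usable_candidates(cands, 32, 16, 8, 8, (0, 0, 127, 63)))
```

Here MV_A is dropped because its displaced block leaves the area, so only two index values remain to be coded, matching the MV_B / MV_C example in the text.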
- the reference image restriction flag 105 is multiplexed into the bitstream 30 as upper header syntax such as the sequence header; the same effect can be obtained even if the restriction corresponding to this flag is defined by a profile, level, or the like.
- (Ln+1, Mn+1) = (Ln/2, Mn/2)
- as shown in FIG. 17, one of the divisions shown in FIGS. 15 and 16 may be selected; when such selection is possible, a flag indicating which division is selected is encoded.
- AVC / H.264 of Non-Patent Document 1 such as 16 ⁇ 16 that is a single block can be connected horizontally, so that encoding that maintains compatibility with existing methods can be performed. Easy to do.
- the transform block unit of the transform / quantization unit and the inverse quantization / inverse transform unit may be uniquely determined by the transform processing unit, or may have a hierarchical structure as shown in FIG. In that case, a flag indicating whether to divide at each layer is encoded.
- the above division may be performed in units of partitions or encoded blocks.
- the above description assumes a square transform, but a non-square transform, such as a rectangular one, may also be used.
- the image encoding device, the image decoding device, the image encoding method, and the image decoding method according to the present invention perform highly efficient image encoding / decoding processing even in operation with high processing load such as high-resolution video. Therefore, it is suitable for use in an image encoding apparatus, an image decoding apparatus, an image encoding method, an image decoding method, and the like used for an image compression encoding technique, a compressed image data transmission technique, and the like.
- 2 block division unit 3 encoding control unit, 6 changeover switch, 8 intra prediction unit, 9 motion compensation prediction unit, 12 subtraction unit, 14 motion compensation prediction frame memory, 19 transform / quantization unit, 22 inverse quantization / inverse Conversion unit, 23 variable length coding unit, 25 addition unit, 27 loop filter unit, 28 intra prediction memory, 61 variable length decoding unit, 66 dequantization / inverse conversion unit, 68 changeover switch, 69 intra prediction unit, 70 Motion compensation unit, 73 addition unit, 75 motion compensation prediction frame memory, 77 intra prediction memory, 78 loop filter unit, 100 motion information generation unit, 101 motion information memory, 104 inter prediction image generation unit, 200 motion information generation unit, 201 motion information memory, 204 inter prediction image generation unit.
Abstract
Description
Embodiment 1.
The characteristic features of the encoding apparatus (and decoding apparatus) according to Embodiment 1 of the present invention will be described with reference to FIG. 1. The figure shows an example in which the motion vector search at encoding time is executed with a w*h area as a predetermined picture division unit, where w is the horizontal frame size and h is the number of vertical lines in the divided area. Assume that the w*h area is the area in which the reference image can be accessed as significant image data (hereinafter, the significant reference image area). Considering that the motion vector search should maximize the quality of the predicted image, it is ideally desirable to also allow cases in which part of the predicted image points outside the significant reference image area, as in FIG. 1(a). However, since data outside the significant reference image area does not exist for the circuit performing the motion vector search, in practice the search range must be forcibly narrowed, as in FIG. 1(b), so as to find motion vectors whose accesses are completed within the significant reference image area.
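The forced narrowing in FIG. 1(b) amounts to intersecting the nominal motion search window with the significant reference image area, so that every candidate block read stays inside the w*h region. A sketch under the assumption of a symmetric ±search_range window (the window shape is illustrative, not specified by the patent):

```python
def clamp_search_window(bx, by, bw, bh, search_range, w, h):
    """Return the inclusive ranges of candidate top-left positions (x, y)
    such that a bw*bh block read at (x, y) stays inside the significant
    reference image area of width w and height h."""
    x_min = max(bx - search_range, 0)
    y_min = max(by - search_range, 0)
    x_max = min(bx + search_range, w - bw)
    y_max = min(by + search_range, h - bh)
    return (x_min, x_max), (y_min, y_max)

# 16x16 block at (8, 0), +/-16 search, 1920-wide area of 64 lines
print(clamp_search_window(8, 0, 16, 16, 16, 1920, 64))
```

Each divided region can run this clamping independently, which is what allows the per-region motion search to be parallelized without ever touching data outside its own significant reference image area.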
The image encoding apparatus and image decoding apparatus according to Embodiment 1 are described below.
FIG. 3 shows the picture-level processing flow of the image encoding apparatus of FIG. 2. The operation of the image encoding apparatus according to Embodiment 1 is described below with reference to these figures. In the image encoding apparatus shown in FIG. 2, the encoding control unit 3 first determines the size of the maximum coding block used for encoding the picture to be encoded (the current picture) and the upper limit on the number of layers into which the maximum coding block is hierarchically divided (step S1 in FIG. 3). The maximum coding block size may, for example, be set to the same size for all pictures according to the resolution of the input video signal 1, or the difference in local motion complexity of the input video signal 1 may be quantified as a parameter so that a small size is used for pictures with intense motion and a large size for pictures with little motion. The upper limit on the number of division layers may, for example, be set deeper when the motion of the input video signal 1 is intense, so that finer motion can be detected, and shallower when there is little motion.
Hereafter, the coding block size 4 is defined as the size (Ln, Mn) of the luminance component of the coding block 5. Since quadtree division is performed, (Ln+1, Mn+1) = (Ln/2, Mn/2) always holds. For color video signals in which all color components have the same number of samples, such as RGB signals (4:4:4 format), the size of all color components is (Ln, Mn), while in the 4:2:0 format the coding block size of the corresponding chrominance components is (Ln/2, Mn/2). Hereafter, the coding block 5 of the n-th layer is denoted Bn, and the coding mode 7 selectable for Bn is denoted m(Bn). For a color video signal consisting of multiple color components, the coding mode m(Bn) 7 may be configured to use an individual mode for each color component; however, unless otherwise noted, the following description refers to the coding mode for the luminance component of a coding block in the YUV 4:2:0 format. The present invention is applicable to any video format, color component, and coding mode.
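The relation (Ln+1, Mn+1) = (Ln/2, Mn/2) determines every layer's block size from the maximum coding block alone. The sketch below enumerates those sizes and also lists the corresponding 4:2:0 chrominance size, which is half the luma size in each dimension; the function name and tuple layout are illustrative:

```python
def block_sizes(l0, m0, max_layers):
    """List (layer, luma size, 4:2:0 chroma size) for each quadtree layer n,
    starting from the maximum coding block (L0, M0); each layer halves the
    luma block in both dimensions, per (Ln+1, Mn+1) = (Ln/2, Mn/2)."""
    sizes = []
    ln, mn = l0, m0
    for n in range(max_layers + 1):
        sizes.append((n, (ln, mn), (ln // 2, mn // 2)))
        ln, mn = ln // 2, mn // 2
    return sizes

for n, luma, chroma in block_sizes(64, 64, 3):
    print(n, luma, chroma)
```

Starting from a 64×64 maximum coding block with three division layers, this yields luma blocks of 64×64 down to 8×8, which is why the upper limit on the number of layers directly bounds the finest motion granularity.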
The inter prediction parameters used for generating the inter predicted image include:
- mode information describing the partition division within the coding block Bn
- the motion vector of each partition
- when the motion compensated prediction frame memory 14 holds multiple reference images, reference image indication index information showing which reference image is used for prediction
- when there are multiple motion vector predictor candidates, index information showing which motion vector predictor is selected and used
- when there are multiple motion compensation interpolation filters, index information showing which filter is selected and used
- when the motion vector of the partition can indicate multiple pixel precisions (half pixel, 1/4 pixel, 1/8 pixel, etc.), selection information showing which pixel precision is used
and are multiplexed into the bitstream by the variable length encoding unit 23 so that exactly the same inter predicted image can be generated on the decoding device side. The detailed processing of the motion compensated prediction unit 9 is described later.
When the output destination is the intra prediction memory, it is subsequently determined whether all coding blocks in the picture have been processed; if processing of all coding blocks has not finished, the process moves on to the next coding block and the same encoding processing is repeated (step S12 in FIG. 3).
The variable length decoding unit 61 that has received the bitstream 30 decodes the division state of the maximum coding block included in the coding mode, in units of the determined maximum coding block. Based on the decoded division state, coding blocks are identified hierarchically (step S23 in FIG. 11).
On the other hand, when the reference image restriction flag 105 is OFF, there is no particular restriction on the usable range of the reference image, and the predicted image is obtained from the reference image by the same procedure as the method used in the motion information generation unit 100. As noted above, the inter predicted image 72 generated by the inter prediction image generation unit 204 must be data equivalent to the inter predicted image 17 obtained on the encoding device side; by introducing the reference image restriction flag 105, even when motion vector search is processed in parallel in units such as tiles in the encoding device, mismatches between the predicted images at encoding and decoding can be avoided, and stable and highly efficient coding can be performed.
Claims (4)
- In a moving image encoding apparatus that divides each picture of a moving image signal into coding blocks serving as predetermined coding units and performs compression encoding on each coding block using motion compensated prediction, the apparatus comprising:
a motion compensation unit that generates a predicted image for each motion compensated prediction unit area, which is the coding block or a unit into which it is divided, using a motion vector selected for each motion compensated prediction unit area; and
a variable length encoding unit that generates a bitstream by variable-length encoding compressed data obtained by compressing a difference image between the input signal corresponding to the predicted image and the predicted image, together with information on the motion vector, and that multiplexes into the bitstream a reference image restriction flag indicating whether or not the significant reference image area, which is the area on a reference image usable for the motion compensated prediction, is limited to a predetermined area,
wherein the motion compensation unit identifies the significant reference image area based on the reference image restriction flag and performs a predetermined extension process when the predicted image includes pixels outside the significant reference image area. - An image decoding apparatus comprising:
a variable length decoding unit that variable-length decodes, from encoded data multiplexed in a bitstream, compressed data relating to a coding block, a reference image restriction flag indicating whether or not the significant reference image area, which is the area on a reference image usable for motion compensated prediction, is limited to a predetermined area, and motion information that is motion vector information;
a motion compensated prediction unit that performs motion compensated prediction processing on the coding block based on the motion information to generate a predicted image; and
a decoded image generation unit that generates a decoded image by adding the predicted image to a pre-compression difference image generated from the compressed data relating to the coding block,
wherein, when generating the predicted image, the motion compensated prediction unit, based on the reference image restriction flag and using the motion information, performs a predetermined extension process to generate the predicted image when the predicted image includes pixels outside the significant reference image area. - In a moving image encoding method that divides each picture of a moving image signal into coding blocks serving as predetermined coding units and performs compression encoding on each coding block using motion compensated prediction, the method comprising:
a motion compensation step of generating a predicted image for each motion compensated prediction unit area, which is the coding block or a unit into which it is divided, using a motion vector selected for each motion compensated prediction unit area; and
a variable length encoding step of generating a bitstream by variable-length encoding compressed data obtained by compressing a difference image between the input signal corresponding to the predicted image and the predicted image, together with information on the motion vector, and of multiplexing into the bitstream a reference image restriction flag indicating whether or not the significant reference image area, which is the area on a reference image usable for the motion compensated prediction, is limited to a predetermined area,
wherein the motion compensation step identifies the significant reference image area based on the reference image restriction flag and performs a predetermined extension process when the predicted image includes pixels outside the significant reference image area. - An image decoding method comprising:
a variable length decoding step of variable-length decoding, from encoded data multiplexed in a bitstream, compressed data relating to a coding block, a reference image restriction flag indicating whether or not the significant reference image area, which is the area on a reference image usable for motion compensated prediction, is limited to a predetermined area, and motion information that is motion vector information;
a motion compensated prediction step of performing motion compensated prediction processing on the coding block based on the motion information to generate a predicted image; and
a decoded image generation step of generating a decoded image by adding the predicted image to a pre-compression difference image generated from the compressed data relating to the coding block,
wherein, when generating the predicted image, the motion compensated prediction step, based on the reference image restriction flag and using the motion information, performs a predetermined extension process to generate the predicted image when the predicted image includes pixels outside the significant reference image area.
Priority Applications (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP12804149.8A EP2680588B1 (en) | 2011-06-30 | 2012-06-11 | Image encoding apparatus, image decoding apparatus, image encoding method and image decoding method |
CN201280009905.4A CN103385004B (zh) | 2011-06-30 | 2012-06-11 | 图像编码装置、图像解码装置、图像编码方法以及图像解码方法 |
ES12804149T ES2862898T3 (es) | 2011-06-30 | 2012-06-11 | Aparato de codificación de imágenes, aparato de decodificación de imágenes, método de codificación de imágenes y método de decodificación de imágenes |
PL12804149T PL2680588T3 (pl) | 2011-06-30 | 2012-06-11 | Urządzenie do kodowania obrazu, urządzenie do dekodowania obrazu, sposób kodowania obrazu i sposób dekodowania obrazu |
US14/000,305 US9503718B2 (en) | 2011-06-30 | 2012-06-11 | Image coding device, image decoding device, image coding method, and image decoding method |
KR1020137025216A KR20130135925A (ko) | 2011-06-30 | 2012-06-11 | 화상 부호화 장치, 화상 복호 장치, 화상 부호화 방법 및 화상 복호 방법 |
JP2013522714A JP5711370B2 (ja) | 2011-06-30 | 2012-06-11 | 画像符号化装置、画像復号装置、画像符号化方法および画像復号方法 |
BR112013020878-3A BR112013020878B1 (pt) | 2011-06-30 | 2012-06-11 | Dispositivos e métodos de codificação e decodificação de imagem |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011-145572 | 2011-06-30 | ||
JP2011145572 | 2011-06-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013001730A1 true WO2013001730A1 (ja) | 2013-01-03 |
Family
ID=47423662
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/003785 WO2013001730A1 (ja) | 2011-06-30 | 2012-06-11 | 画像符号化装置、画像復号装置、画像符号化方法および画像復号方法 |
Country Status (9)
Country | Link |
---|---|
US (1) | US9503718B2 (ja) |
EP (1) | EP2680588B1 (ja) |
JP (4) | JP5711370B2 (ja) |
KR (1) | KR20130135925A (ja) |
CN (1) | CN103385004B (ja) |
BR (1) | BR112013020878B1 (ja) |
ES (1) | ES2862898T3 (ja) |
PL (1) | PL2680588T3 (ja) |
WO (1) | WO2013001730A1 (ja) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014107565A1 (en) * | 2013-01-04 | 2014-07-10 | Qualcomm Incorporated | Bitstream constraints and motion vector restriction for inter-view or inter-layer reference pictures |
JP2014527316A (ja) * | 2011-08-25 | 2014-10-09 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 符号化方法、符号化装置、復号方法、および復号装置 |
JP2015015666A (ja) * | 2013-07-08 | 2015-01-22 | ルネサスエレクトロニクス株式会社 | 動画像符号化装置およびその動作方法 |
CN104702954A (zh) * | 2013-12-05 | 2015-06-10 | 华为技术有限公司 | 视频编码方法及装置 |
CN105519117A (zh) * | 2013-09-06 | 2016-04-20 | 三菱电机株式会社 | 动态图像编码装置、动态图像转码装置、动态图像编码方法、动态图像转码方法以及动态图像流传输系统 |
JP2016534649A (ja) * | 2013-08-26 | 2016-11-04 | クゥアルコム・インコーポレイテッドQualcomm Incorporated | イントラブロックコピー実行時の領域決定 |
JPWO2014148310A1 (ja) * | 2013-03-21 | 2017-02-16 | ソニー株式会社 | 画像符号化装置および方法、並びに、画像復号装置および方法 |
JP2017513437A (ja) * | 2014-03-28 | 2017-05-25 | ソニー株式会社 | データ符号化及び復号化 |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5711370B2 (ja) * | 2011-06-30 | 2015-04-30 | 三菱電機株式会社 | 画像符号化装置、画像復号装置、画像符号化方法および画像復号方法 |
WO2015169230A1 (en) * | 2014-05-06 | 2015-11-12 | Mediatek Inc. | Video processing method for determining position of reference block of resized reference frame and related video processing apparatus |
US9626733B2 (en) * | 2014-11-24 | 2017-04-18 | Industrial Technology Research Institute | Data-processing apparatus and operation method thereof |
WO2017220164A1 (en) * | 2016-06-24 | 2017-12-28 | Huawei Technologies Co., Ltd. | Devices and methods for video coding using segmentation based partitioning of video coding blocks |
CN117176948A (zh) * | 2016-10-04 | 2023-12-05 | 有限公司B1影像技术研究所 | 图像编码/解码方法、记录介质和传输比特流的方法 |
EP4375917A3 (en) * | 2017-10-19 | 2024-10-09 | Panasonic Intellectual Property Corporation of America | Three-dimensional data encoding method, three-dimensional data decoding method, three-dimensional data encoding device, and three-dimensional data decoding device |
WO2020004349A1 (ja) * | 2018-06-29 | 2020-01-02 | シャープ株式会社 | 動画像符号化装置および動画像復号装置 |
JP7005480B2 (ja) * | 2018-12-27 | 2022-01-21 | Kddi株式会社 | 画像復号装置、画像符号化装置、プログラム及び画像処理システム |
MX2021011619A (es) | 2019-04-01 | 2021-10-13 | Beijing Bytedance Network Tech Co Ltd | Uso de filtros de interpolacion para la prediccion de vector de movimiento basada en historia. |
BR112022002480A2 (pt) | 2019-08-20 | 2022-04-26 | Beijing Bytedance Network Tech Co Ltd | Método para processamento de vídeo, aparelho em um sistema de vídeo, e, produto de programa de computador armazenado em uma mídia legível por computador não transitória |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001346217A (ja) * | 2000-05-30 | 2001-12-14 | Alcatel | 動きの予測によるセグメント画像の符号化 |
JP2004297566A (ja) * | 2003-03-27 | 2004-10-21 | Ntt Docomo Inc | 動画像符号化装置、動画像符号化方法、動画像符号化プログラム、動画像復号装置、動画像復号方法、及び動画像復号プログラム |
JP2007259149A (ja) * | 2006-03-23 | 2007-10-04 | Sanyo Electric Co Ltd | 符号化方法 |
WO2009037726A1 (ja) * | 2007-09-18 | 2009-03-26 | Fujitsu Limited | 動画像符号化装置および動画像復号装置 |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
SG116400A1 (en) | 1997-10-24 | 2005-11-28 | Matsushita Electric Ind Co Ltd | A method for computational graceful degradation inan audiovisual compression system. |
JP3544852B2 (ja) | 1998-03-12 | 2004-07-21 | 株式会社東芝 | 映像符号化装置 |
JP4511842B2 (ja) * | 2004-01-26 | 2010-07-28 | パナソニック株式会社 | 動きベクトル検出装置及び動画撮影装置 |
JP5115498B2 (ja) * | 2009-03-05 | 2013-01-09 | 富士通株式会社 | 画像符号化装置、画像符号化制御方法およびプログラム |
JPWO2012120840A1 (ja) * | 2011-03-07 | 2014-07-17 | パナソニック株式会社 | 画像復号方法、画像符号化方法、画像復号装置および画像符号化装置 |
JP5711370B2 (ja) * | 2011-06-30 | 2015-04-30 | 三菱電機株式会社 | 画像符号化装置、画像復号装置、画像符号化方法および画像復号方法 |
-
2012
- 2012-06-11 JP JP2013522714A patent/JP5711370B2/ja active Active
- 2012-06-11 BR BR112013020878-3A patent/BR112013020878B1/pt active IP Right Grant
- 2012-06-11 PL PL12804149T patent/PL2680588T3/pl unknown
- 2012-06-11 EP EP12804149.8A patent/EP2680588B1/en active Active
- 2012-06-11 KR KR1020137025216A patent/KR20130135925A/ko not_active Application Discontinuation
- 2012-06-11 WO PCT/JP2012/003785 patent/WO2013001730A1/ja active Application Filing
- 2012-06-11 CN CN201280009905.4A patent/CN103385004B/zh active Active
- 2012-06-11 ES ES12804149T patent/ES2862898T3/es active Active
- 2012-06-11 US US14/000,305 patent/US9503718B2/en active Active
-
2015
- 2015-03-05 JP JP2015043616A patent/JP6391500B2/ja active Active
-
2017
- 2017-04-07 JP JP2017076778A patent/JP6381724B2/ja active Active
-
2018
- 2018-07-31 JP JP2018143947A patent/JP6615287B2/ja active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001346217A (ja) * | 2000-05-30 | 2001-12-14 | Alcatel | 動きの予測によるセグメント画像の符号化 |
JP2004297566A (ja) * | 2003-03-27 | 2004-10-21 | Ntt Docomo Inc | 動画像符号化装置、動画像符号化方法、動画像符号化プログラム、動画像復号装置、動画像復号方法、及び動画像復号プログラム |
JP2007259149A (ja) * | 2006-03-23 | 2007-10-04 | Sanyo Electric Co Ltd | 符号化方法 |
WO2009037726A1 (ja) * | 2007-09-18 | 2009-03-26 | Fujitsu Limited | 動画像符号化装置および動画像復号装置 |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014527316A (ja) * | 2011-08-25 | 2014-10-09 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 符号化方法、符号化装置、復号方法、および復号装置 |
US10021414B2 (en) | 2013-01-04 | 2018-07-10 | Qualcomm Incorporated | Bitstream constraints and motion vector restriction for inter-view or inter-layer reference pictures |
WO2014107565A1 (en) * | 2013-01-04 | 2014-07-10 | Qualcomm Incorporated | Bitstream constraints and motion vector restriction for inter-view or inter-layer reference pictures |
CN104885458A (zh) * | 2013-01-04 | 2015-09-02 | 高通股份有限公司 | 用于视图间或层间参考图片的位流约束和运动向量限制 |
CN104885458B (zh) * | 2013-01-04 | 2019-05-28 | 高通股份有限公司 | 用于视图间或层间参考图片的位流约束和运动向量限制 |
JPWO2014148310A1 (ja) * | 2013-03-21 | 2017-02-16 | ソニー株式会社 | 画像符号化装置および方法、並びに、画像復号装置および方法 |
US12113976B2 (en) | 2013-03-21 | 2024-10-08 | Sony Corporation | Image encoding device and method and image decoding device and method |
JP2015015666A (ja) * | 2013-07-08 | 2015-01-22 | ルネサスエレクトロニクス株式会社 | 動画像符号化装置およびその動作方法 |
JP2016534649A (ja) * | 2013-08-26 | 2016-11-04 | クゥアルコム・インコーポレイテッドQualcomm Incorporated | イントラブロックコピー実行時の領域決定 |
CN105519117A (zh) * | 2013-09-06 | 2016-04-20 | 三菱电机株式会社 | 动态图像编码装置、动态图像转码装置、动态图像编码方法、动态图像转码方法以及动态图像流传输系统 |
EP3043560A4 (en) * | 2013-09-06 | 2017-03-01 | Mitsubishi Electric Corporation | Video encoding device, video transcoding device, video encoding method, video transcoding method and video stream transmission system |
CN104702954A (zh) * | 2013-12-05 | 2015-06-10 | 华为技术有限公司 | 视频编码方法及装置 |
CN104702954B (zh) * | 2013-12-05 | 2017-11-17 | 华为技术有限公司 | 视频编码方法及装置 |
JP2017513437A (ja) * | 2014-03-28 | 2017-05-25 | ソニー株式会社 | データ符号化及び復号化 |
Also Published As
Publication number | Publication date |
---|---|
JP6615287B2 (ja) | 2019-12-04 |
JP2015130689A (ja) | 2015-07-16 |
EP2680588A1 (en) | 2014-01-01 |
US9503718B2 (en) | 2016-11-22 |
JPWO2013001730A1 (ja) | 2015-02-23 |
CN103385004A (zh) | 2013-11-06 |
JP6381724B2 (ja) | 2018-08-29 |
BR112013020878A2 (pt) | 2016-09-27 |
JP2018186569A (ja) | 2018-11-22 |
EP2680588B1 (en) | 2021-02-17 |
BR112013020878B1 (pt) | 2022-05-10 |
PL2680588T3 (pl) | 2021-09-06 |
ES2862898T3 (es) | 2021-10-08 |
JP6391500B2 (ja) | 2018-09-19 |
EP2680588A4 (en) | 2015-04-29 |
JP5711370B2 (ja) | 2015-04-30 |
US20130343460A1 (en) | 2013-12-26 |
CN103385004B (zh) | 2016-12-28 |
KR20130135925A (ko) | 2013-12-11 |
JP2017121089A (ja) | 2017-07-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6615287B2 (ja) | 画像復号装置 | |
JP6863669B2 (ja) | 画像符号化装置、画像符号化方法、画像復号装置および画像復号方法 | |
JP6716836B2 (ja) | 動画像符号化データ | |
KR101728285B1 (ko) | 화상 부호화 장치, 화상 부호화 방법, 화상 복호 장치, 화상 복호 방법 및 기억 매체 | |
US11350120B2 (en) | Image coding device, image decoding device, image coding method, and image decoding method | |
WO2013065402A1 (ja) | 動画像符号化装置、動画像復号装置、動画像符号化方法及び動画像復号方法 | |
US20160050421A1 (en) | Color image encoding device, color image decoding device, color image encoding method, and color image decoding method | |
JP2012080213A (ja) | 動画像符号化装置、動画像復号装置、動画像符号化方法及び動画像復号方法 | |
US20150271502A1 (en) | Video encoding device, video decoding device, video encoding method, and video decoding method | |
WO2012176387A1 (ja) | 動画像符号化装置、動画像復号装置、動画像符号化方法及び動画像復号方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12804149 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2013522714 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14000305 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 20137025216 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012804149 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112013020878 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 112013020878 Country of ref document: BR Kind code of ref document: A2 Effective date: 20130815 |