WO2012043541A1 - Prediction vector generation method, image encoding method, image decoding method, prediction vector generation device, image encoding device, image decoding device, prediction vector generation program, image encoding program, and image decoding program - Google Patents


Info

Publication number
WO2012043541A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
block
prediction vector
target
target block
Prior art date
Application number
PCT/JP2011/072032
Other languages
English (en)
Japanese (ja)
Inventor
Takaya Yamamoto
Original Assignee
Sharp Kabushiki Kaisha
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sharp Kabushiki Kaisha
Publication of WO2012043541A1 publication Critical patent/WO2012043541A1/fr

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 - Motion estimation or motion compensation
    • H04N19/513 - Processing of motion vectors
    • H04N19/521 - Processing of motion vectors for estimating the reliability of the determined motion vectors or motion vector field, e.g. for smoothing the motion vector field or for correcting motion vectors

Definitions

  • the present invention relates to a prediction vector generation method, an image encoding method, an image decoding method, a prediction vector generation device, an image encoding device, an image decoding device, a prediction vector generation program, an image encoding program, and an image decoding program.
  • MPEG (Moving Picture Experts Group)
  • MPEG-4: a coding system that uses motion compensation interframe predictive coding
  • H.264: a coding system that uses the temporal correlation of moving pictures
  • in motion compensation interframe predictive coding, an image to be coded is divided into blocks and a motion vector is obtained for each block, thereby realizing efficient coding.
  • the median of the horizontal and vertical components of the motion vectors (mv_a, mv_b, mv_c) of the block adjacent above the encoding target block D (adjacent block A in the figure), the block adjacent on the upper right (adjacent block B in the figure), and the block adjacent to the left (adjacent block C in the figure) is used as the prediction vector, and the difference vector between the motion vector and the prediction vector is encoded.
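The median rule described above can be sketched as follows. This is an illustrative Python sketch; the function names are ours, not part of the specification, and only the component-wise median and the difference vector follow the text.

```python
# Component-wise median prediction of a motion vector from three adjacent
# blocks (mv_a, mv_b, mv_c), as described for the encoding target block D.
def median_prediction_vector(mv_a, mv_b, mv_c):
    """Median of horizontal and vertical components, taken separately."""
    def median3(a, b, c):
        return sorted([a, b, c])[1]
    return (median3(mv_a[0], mv_b[0], mv_c[0]),
            median3(mv_a[1], mv_b[1], mv_c[1]))

def difference_vector(mv, pv):
    """Difference vector that is actually encoded: motion vector minus PV."""
    return (mv[0] - pv[0], mv[1] - pv[1])

# Illustrative vectors for adjacent blocks A, B, C and the target block.
pv = median_prediction_vector((4, 1), (6, 3), (5, -2))   # -> (5, 1)
dv = difference_vector((5, 2), pv)                       # -> (0, 1)
```

Only the small difference vector dv needs to be encoded, which is the source of the coding-efficiency gain.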
  • MVC (Multiview Video Coding), the multi-view extension of H.264/AVC
  • since motion compensation interframe predictive coding and disparity compensation predictive coding exploit, respectively, correlation in the time direction and correlation between cameras, there is no correlation between a detected motion vector and a disparity vector. Therefore, when an adjacent block is encoded by a coding method different from that of the encoding target block, there is a problem that the motion vector or disparity vector of the adjacent block cannot be used for generating the prediction vector.
  • in Patent Document 1, when the coding method of an adjacent block differs from that of the encoding target block, the prediction vector is generated as follows: when the coding method of the encoding target block is motion compensation interframe predictive coding, the motion vector contained most in the region referenced by the disparity vector of the adjacent block is used; when the coding method of the encoding target block is disparity compensation predictive coding, the disparity vector contained most in the region referenced by the motion vector of the adjacent block is used. This improves the generation accuracy of the prediction vector.
  • the depth map is information representing the distance from the camera to the subject; it can be generated, for example, by a distance-measuring device installed in the vicinity of the camera.
  • alternatively, a depth map can be generated by analyzing images taken from cameras at a plurality of viewpoints.
  • FIG. 15 shows an overall view of the system in the new MPEG 3DV standard.
  • this new standard supports a plurality of viewpoints, i.e., two or more.
  • the cameras 602 and 604 shoot the subject 601 and output images, while the sensors 603 and 605, installed in the vicinity of each camera to measure the distance to the subject, generate and output depth maps.
  • the encoder 606 receives an image and a depth map as inputs, and encodes and outputs the image and the depth map using motion compensation interframe prediction encoding or disparity compensation prediction.
  • the decoder 607 receives the output result of the encoder 606 transmitted as input, decodes it, and outputs a decoded image and a decoded depth map.
  • the display unit 608 receives the decoded image and the decoded depth map as input, displays the decoded image, or displays the decoded image after performing processing using the depth map.
  • in Non-Patent Document 1 and Patent Document 1, when predicting a motion vector, a disparity vector, or the like, the object displayed in the target block may differ from the object displayed in a block adjacent to the target block. Since such objects may move in different directions and their distances from the camera may differ greatly, the difference between the prediction vector and the motion vector or disparity vector increases, causing the problem of decreased coding efficiency.
  • the present invention has been made in view of such circumstances, and an object thereof is to provide a prediction vector generation method, an image encoding method, an image decoding method, a prediction vector generation device, an image encoding device, an image decoding device, a prediction vector generation program, an image encoding program, and an image decoding program that exhibit excellent coding efficiency.
  • one aspect of the present invention is a prediction vector generation method in which an image to be encoded or decoded is divided into blocks, and an interframe motion predictive coding method or a disparity compensation predictive coding method is applied to each of the blocks to perform encoding or decoding; a predicted image of the target block, which is the block to be encoded or decoded, is generated based on a reference image and on reference information indicating the position of an area corresponding to the target block in the reference image; the method generates the prediction vector of the reference information used when encoding or decoding the image, and has a prediction vector generation step of generating the prediction vector of the target block using information representing a distance corresponding to the target image and the reference information of blocks adjacent to the target block.
  • another aspect of the present invention is the prediction vector generation method described above, wherein the prediction vector generation step includes an object estimation step of generating, based on the information representing the distance corresponding to the target image, information indicating the possibility that a block adjacent to the target block displays the same object as the target block.
  • another aspect of the present invention is the prediction vector generation method described above, wherein in the object estimation step, the information indicating the possibility is generated by taking the sum of the squared per-pixel differences between the information representing the distance corresponding to the target block and the information representing the distance corresponding to an adjacent block of the target block, both taken from the information representing the distance corresponding to the target image.
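The sum-of-squared-differences measure from this aspect can be sketched as follows. This is an illustrative Python sketch under our own assumptions about data layout (flattened depth blocks); a small sum indicates a high possibility that both blocks show the same object.

```python
# Sum of squared per-pixel depth differences between the target block and
# an adjacent block; smaller values mean the blocks likely share an object.
def depth_ssd(target_depth, adjacent_depth):
    assert len(target_depth) == len(adjacent_depth)
    return sum((t - a) ** 2 for t, a in zip(target_depth, adjacent_depth))

# Flattened 2x2 depth blocks with illustrative 8-bit depth values.
same_object = depth_ssd([120, 121, 119, 120], [121, 120, 120, 119])  # small
diff_object = depth_ssd([120, 121, 119, 120], [40, 42, 41, 40])      # large
```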
  • in another aspect, edge detection is performed on the distance information corresponding to an adjacent block of the target block, and the information indicating the possibility is generated based on the result of the edge detection. Specifically, the information indicating the possibility is generated based on the magnitude relationship between the information representing the distance corresponding to the adjacent block containing the detected edge and the information representing the distance corresponding to the neighboring blocks of that adjacent block.
  • another aspect of the present invention is the prediction vector generation method described above, wherein in the prediction vector calculation step, the reference information of the adjacent block whose indicated possibility is highest is used as the prediction vector.
  • another aspect is the prediction vector generation method described above, wherein in the prediction vector calculation step, the prediction vector is calculated as a weighted average of the reference information of the adjacent blocks, using the information indicating the possibility as the weights.
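The weighted-average variant can be sketched as follows. This is an illustrative Python sketch; the weights here are hand-picked for the example, and the conversion from a distance measure such as the SSD above into a weight is an assumption, not specified by this text.

```python
# Weighted average of adjacent-block reference vectors, with the
# "possibility" values serving as weights (rounded to integer components).
def weighted_average_pv(ref_vectors, weights):
    total = sum(weights)
    x = sum(w * v[0] for v, w in zip(ref_vectors, weights)) / total
    y = sum(w * v[1] for v, w in zip(ref_vectors, weights)) / total
    return (round(x), round(y))

# Three adjacent blocks; the first is far more likely to share the object
# with the target block, so it dominates the average.
pv = weighted_average_pv([(8, 2), (0, 0), (-4, 6)], [0.8, 0.1, 0.1])
```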
  • another aspect of the present invention is an image encoding method in which a target image to be encoded is divided into blocks; for each of the blocks, a reference image used for predicting the target block is selected from a plurality of already encoded images, a predicted image is generated using reference information designating an area corresponding to the target block in the reference image, and the difference between the predicted image and the target block is encoded; the method has a prediction vector generation step of generating a prediction vector of the target block using information representing a distance corresponding to the target image and reference information of a block adjacent to the target block.
  • another aspect of the present invention is an image decoding method in which the entire decoding target image is divided into blocks; for each of the blocks, a reference image used for predicting the target block is selected from a plurality of already decoded images, a predicted image is generated using reference information designating an area corresponding to the target block in the reference image, and an image is generated by decoding the difference between the predicted image and the target block; the method has a prediction vector generation step of generating a prediction vector of the target block using information representing a distance corresponding to the target image and reference information of a block adjacent to the target block.
  • another aspect of the present invention is a prediction vector generation program that causes the computer of a prediction vector generation device, in which an image to be encoded or decoded is divided into blocks, an interframe motion predictive coding method or a disparity compensation predictive coding method is applied to each of the blocks, and a predicted image of the target block, which is the block to be encoded or decoded, is generated based on a reference image and reference information indicating the position of a region corresponding to the target block in the reference image, to execute a prediction vector generation step of generating the prediction vector of the target block, used when encoding or decoding the image, from information representing a distance corresponding to the target image and reference information of a block adjacent to the target block.
  • another aspect of the present invention is an image encoding device that divides a target image to be encoded into blocks; for each of the blocks, it selects a reference image used for predicting the target block from a plurality of already encoded images, generates a predicted image using reference information designating an area corresponding to the target block in the reference image, and encodes the difference between the predicted image and the target block; the device generates a prediction vector of the target block using information representing a distance corresponding to the target image and reference information of a block adjacent to the target block.
  • another aspect of the present invention is an image encoding program that causes the computer of an image encoding device, which divides a target image to be encoded into blocks and, for each of the blocks, selects a reference image used for predicting the target block from a plurality of already encoded images, generates a predicted image using reference information designating an area corresponding to the target block in the reference image, and encodes the difference between the predicted image and the target block, to execute a prediction vector generation step of generating a prediction vector of the target block using information representing a distance corresponding to the target image and reference information of a block adjacent to the target block.
  • another aspect of the present invention is an image decoding device that divides the entire decoding target image into blocks; for each of the blocks, it selects a reference image used for predicting the target block from a plurality of already decoded images, generates a predicted image using reference information designating an area corresponding to the target block in the reference image, and generates an image by decoding the difference between the predicted image and the target block; the device generates a prediction vector of the target block using information representing a distance corresponding to the target image and reference information of a block adjacent to the target block.
  • another aspect of the present invention is an image decoding program that causes the computer of an image decoding device, which divides the entire decoding target image into blocks and, for each of the blocks, selects a reference image used for predicting the target block from a plurality of already decoded images, generates a predicted image using reference information designating an area corresponding to the target block in the reference image, and generates an image by decoding the difference between the predicted image and the target block, to execute a prediction vector generation step of generating a prediction vector of the target block using information representing a distance corresponding to the target image and reference information of a block adjacent to the target block.
  • FIG. 1 is a schematic block diagram illustrating a configuration of an image encoding device 100 according to the present embodiment.
  • the image encoding device 100 encodes (compresses) a two-viewpoint moving image for stereoscopic display.
  • an image encoding device 100 according to the present embodiment includes an image input unit 101, a block matching execution unit 102, a predicted image generation unit 103, a difference image encoding unit 104, a difference image decoding unit 105, a reference image memory 106, a reference information storage memory 107, a prediction vector generation unit 108, a difference reference information encoding unit 110, a reference image designation information storage memory 111, a reference image selection unit 112, a reference image designation information encoding unit 113, subtraction units 114 and 115, and an addition unit 116.
  • the image input unit 101 receives input of image data GD of an image to be encoded (target image).
  • the image to be encoded in the present embodiment is a two-viewpoint moving image for stereoscopic display.
  • the block matching execution unit 102 divides the image of the image data GD received by the image input unit 101 into blocks.
  • the block matching execution unit 102 refers to the reference image memory 106 and, for each of the blocks, selects a reference image to be used for prediction of the block from among the already encoded images and designates it to the reference image selection unit 112. Further, the block matching execution unit 102 performs, for each block, block matching that searches the selected reference image for the area corresponding to the block, and generates reference information for each block.
  • a vector from the block obtained by the division to the corresponding area is used as the reference information indicating the position of the corresponding area.
  • this vector is called a motion vector when the already encoded image is an image of the same viewpoint as the divided image at a different frame (time), and a disparity vector when the already encoded image is of a different viewpoint from the divided image.
  • in the present embodiment, the already encoded images that are candidates for searching for the corresponding area are an image of the same viewpoint as the divided image at a different time and an image of a different viewpoint at the same time; however, the candidates are not limited to these.
  • the predicted image generation unit 103 refers to the reference image memory 106 and generates a predicted image by arranging the image of the corresponding region, obtained by the block matching of the block matching execution unit 102, at the position of the block obtained by the original division.
  • the subtracting unit 115 generates a difference image by taking the difference between the pixel value of each pixel constituting the image received by the image input unit 101 and the pixel value of each pixel constituting the predicted image generated by the predicted image generation unit 103.
  • the difference image encoding unit 104 performs quantization, discrete cosine transform, and the like on the difference image generated by the subtraction unit 115 to generate encoded difference image encoded data DE.
  • the difference image decoding unit 105 decodes the encoded difference image.
  • the adding unit 116 adds the pixel value of each pixel constituting the predicted image generated by the predicted image generation unit 103 and the pixel value of each pixel constituting the difference image decoded by the difference image decoding unit 105 to obtain a reference image, which is stored in the reference image memory 106.
  • the reference information storage memory 107 stores the reference information of each block generated by the block matching execution unit 102.
  • the prediction vector generation unit 108 generates a prediction vector PV of the block using the portion of the depth map DM, which is information representing distance, corresponding to the block, and the reference information NR of blocks adjacent to the block stored in the reference information storage memory 107.
  • the adjacent blocks are the three blocks to the left, above, and on the upper right of the target block when encoding is performed in raster scan order, in which the image is scanned block row by block row from the upper left to the lower right.
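The neighbour selection under raster-scan order can be sketched as follows. This is an illustrative Python sketch; block coordinates and bounds checking are our assumptions, and only the left / upper / upper-right choice follows the text.

```python
# With raster-scan encoding (left-to-right, top-to-bottom), the candidate
# adjacent blocks of a target block at (row, col) are its left, upper, and
# upper-right neighbours, restricted to blocks that exist in the image.
def adjacent_blocks(row, col, rows, cols):
    candidates = [(row, col - 1),      # left
                  (row - 1, col),      # upper
                  (row - 1, col + 1)]  # upper-right
    return [(r, c) for r, c in candidates
            if 0 <= r < rows and 0 <= c < cols]

inner = adjacent_blocks(2, 2, 4, 4)     # all three neighbours available
top_left = adjacent_blocks(0, 0, 4, 4)  # no already-encoded neighbour
```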
  • the prediction vector generation unit 108 in the present embodiment also uses the information RA that specifies the reference image of each block, stored in the reference image designation information storage memory 111, when generating a prediction vector.
  • the reference image selection unit 112 outputs information specifying the reference image selected by the block matching execution unit 102 for each block to the reference image memory 106, the reference image specification information storage memory 111, and the reference image specification information encoding unit 113.
  • the reference image designation information encoding unit 113 encodes information specifying the reference image of each block received from the reference image selection unit 112, generates reference image designation information encoded data RAE, and outputs it.
  • the subtraction unit 114 takes the difference between the reference information of each block generated by the block matching execution unit 102 and the prediction vector of each block generated by the prediction vector generation unit 108, and generates difference reference information.
  • the difference reference information encoding unit 110 encodes the difference reference information generated by the subtraction unit 114 to generate difference reference information encoded data DRE.
  • FIG. 2 is a flowchart for explaining the operation of the image coding apparatus 100.
  • it is assumed that a plurality of images of the same viewpoint as the encoding target image, and images of different viewpoints at the same time position (time) as the encoding target image, have already been encoded, that some blocks of the encoding target image have already been encoded, and that the results are stored in the reference image memory 106, the reference information storage memory 107, and the reference image designation information storage memory 111.
  • image data GD of an image to be encoded is input from the image input unit 101 (A1).
  • the block matching execution unit 102 divides the input encoding target image into blocks. Encoding is performed for each block.
  • the image encoding device 100 repeatedly executes the following processing (steps A2 to A15) until all blocks in the encoding target image are encoded.
  • the block matching execution unit 102 sends information indicating the encoding mode to be applied to the encoding target block to the reference image selection unit 112, and the reference image selection unit 112 designates the necessary reference image to the reference image memory 106 based on that information.
  • the reference image memory 106 outputs the designated reference image to the block matching execution unit 102.
  • a reference image is an image that has already been encoded and decoded.
  • the block matching execution unit 102 receives the reference image as input and performs block matching in all coding modes (motion compensation interframe predictive coding and disparity compensation predictive coding) for each block (A3).
  • block matching is a process of searching for a corresponding area by computing the absolute difference between the luminance values of the encoding target block and an already encoded image area.
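The block-matching search can be sketched as follows. This is an illustrative Python sketch under our own assumptions (a square block, a fixed search range, and a full search that minimises the sum of absolute luminance differences); the specification only states that absolute differences of luminance values are computed.

```python
# Full-search block matching: for each candidate offset within the search
# range, accumulate the absolute luminance difference between the target
# block and the reference-image area, keeping the offset with minimal SAD.
def block_match(target, ref, bx, by, n, search):
    """Return the (dx, dy) offset with the smallest SAD."""
    best, best_off = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if not (0 <= by + dy <= len(ref) - n and
                    0 <= bx + dx <= len(ref[0]) - n):
                continue  # candidate area falls outside the reference image
            sad = sum(abs(target[j][i] - ref[by + dy + j][bx + dx + i])
                      for j in range(n) for i in range(n))
            if best is None or sad < best:
                best, best_off = sad, (dx, dy)
    return best_off

# Reference frame containing a 2x2 pattern; the target block is that
# pattern, so the best offset points from (0, 0) to the pattern at (1, 1).
ref = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 7, 6, 0],
       [0, 0, 0, 0]]
target = [[9, 8],
          [7, 6]]
offset = block_match(target, ref, 0, 0, 2, 2)
```

The resulting offset is exactly the motion vector (or disparity vector) that becomes the block's reference information.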
  • the block matching execution unit 102 determines the coding mode with the highest coding efficiency and outputs the reference image designation information, which indicates the reference image required for that coding mode, to the reference image selection unit 112 (A4).
  • the block matching execution unit 102 stores the reference information in the reference information storage memory 107 (A5), and the reference image selection unit 112 receives the reference image designation information output from the block matching execution unit 102. It is stored in the reference image designation information storage memory 111 (A6).
  • the prediction vector generation unit 108 generates a prediction vector (A7). A specific method for generating a prediction vector will be described in detail later.
  • the subtraction unit 114 takes the difference between the reference information and the prediction vector, and generates difference reference information (A8).
  • the reference image selection unit 112 outputs the reference image designated by the reference image designation information from the reference image memory 106 to the predicted image generation unit 103.
  • the predicted image generation unit 103 generates a predicted image from the received reference image and reference information.
  • the subtractor 115 takes the difference between the encoding target block and the generated predicted image, and generates a difference image (A9).
  • the reference image designation information encoding unit 113 encodes the reference image designation information and generates reference image designation information code data RAE as a result.
  • the difference reference information encoding unit 110 encodes the difference reference information and generates difference reference information encoded data DRE as a result.
  • the differential image encoding unit 104 encodes the differential image and generates differential image encoded data DE as a result (A10). The image encoding device 100 then outputs these three sets of encoded data.
  • the differential image decoding unit 105 decodes the encoded differential image (A12).
  • the adding unit 116 adds the decoded difference image and the predicted image to obtain a decoded image (A13), and stores the decoded image in the reference image memory 106 (A14). In step A11, if the encoding target image is not to be used as a reference image in later encoding (A11-No), steps A12 to A14 are skipped. If all blocks have not yet been processed, the process returns to step A3; if all blocks have been processed, the process ends (A15).
  • FIG. 3 is a schematic block diagram illustrating a configuration of the prediction vector generation unit 108 in the present embodiment.
  • the prediction vector generation unit 108 includes an inter-block correlation calculation unit 301 and a prediction vector calculation unit 109.
  • the prediction vector calculation unit 109 includes an adjacent block reference information determination unit 302, an adjacent block reference information storage memory 303, and a prediction vector setting unit 304.
  • the inter-block correlation calculation unit 301 calculates the correlation in the depth map DM between the encoding target block and each adjacent block.
  • this correlation is information indicating the possibility that the adjacent block displays the same object as the object displayed in the encoding target block. This utilizes the fact that the distance does not change significantly if the object displayed in the adjacent block is the same as the object displayed in the block to be encoded.
  • the prediction vector calculation unit 109 generates a prediction vector using the correlation calculated by the inter-block correlation calculation unit 301 and the reference information NR of the adjacent block.
  • the adjacent block reference information determination unit 302 receives the encoding target block, the reference image designation information RA of the adjacent blocks, and the reference information NR of the adjacent blocks. It then determines whether or not the reference image of the encoding target block and the reference image of each adjacent block are the same; if they are the same, it outputs the reference information of that adjacent block to the adjacent block reference information storage memory 303.
  • the adjacent block reference information storage memory 303 stores the reference information output by the adjacent block reference information determination unit 302.
  • the prediction vector setting unit 304 sets, as the prediction vector PV, the reference information of the adjacent block with the highest correlation among the adjacent blocks whose reference image is the same as that of the encoding target block.
  • FIG. 4 is a flowchart showing the prediction vector generation method; it details step A7 of FIG. 2. FIGS. 5 and 6 are conceptual explanatory diagrams of the prediction vector generation method according to the present invention.
  • FIG. 5 is a diagram illustrating an example 701 of an encoding target image.
  • the symbol O1 is the foreground subject, and the symbol O2 is the subject behind the subject O1. These subjects O1 and O2 are placed on the base O3.
  • Reference numeral 703 denotes a block to be encoded.
  • a reference numeral 704 is a block adjacent to the left of the encoding target block 703.
  • Reference numeral 705 denotes an adjacent block above the block 703 to be encoded.
  • Reference numeral 706 is an adjacent block on the upper right of the block 703 to be encoded.
  • FIG. 6 is a diagram illustrating a depth map 702 corresponding to an example 701 of an encoding target image.
  • the symbol OD1 is the subject O1 in the depth map.
  • the symbol OD2 is the subject O2 in the depth map.
  • the symbol OD3 is the base O3 in the depth map.
  • Reference numeral 707 denotes an area in the depth map corresponding to the encoding target block 703.
  • Reference numeral 708 denotes an area in the depth map corresponding to the left adjacent block 704 of the block 703 to be encoded.
  • Reference numeral 709 denotes a region in the depth map corresponding to the adjacent block 705 above the block 703 to be encoded.
  • Reference numeral 710 denotes an area in the depth map corresponding to the adjacent block 706 at the upper right of the block 703 to be encoded.
  • the prediction vector generation method will be described with reference to the flowchart of FIG. 4 and FIGS. 5 and 6.
  • the inter-block correlation calculation unit 301 receives an already encoded depth map 702 corresponding to the encoding target image 701 as an input (B1).
  • the reason why the encoded depth map 702 is used here is to match the generated prediction vectors by making the depth maps the same in the encoding device and the decoding device.
  • as the depth map encoding method, the depth maps arranged along the time axis may be regarded as a moving image and encoded with a conventional moving image coding method such as MPEG-2 or H.264/AVC.
  • information representing the correlation between the encoding target block 703 and the adjacent blocks 704, 705, and 706 is calculated using the depth map (step B2).
  • for example, the average depth value of the block 707 located at the same position as the encoding target block 703 in the depth map 702 is computed, and its absolute difference from the average depth value of each of the blocks 708, 709, and 710, located at the same positions as the adjacent blocks 704, 705, and 706, is calculated as the information indicating the correlation between the encoding target block and that adjacent block.
  • in this example, the absolute difference between the average depth values of block 707 and block 708, which lie on the same object, is the smallest, while the absolute difference from block 709, which is entirely contained in the background area, is the largest.
  • the inter-block correlation calculation unit 301 outputs inter-block correlation information ranking the blocks in increasing order of this absolute difference: block 708, block 710, then block 709. That is, it outputs information indicating that the adjacent block most likely to display the same object as the encoding target block is block 708, followed by block 710, and then block 709.
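The correlation computation of step B2 can be sketched as follows. This is an illustrative Python sketch; the depth values are invented for the example, and only the average-depth absolute-difference ranking follows the text.

```python
# Rank adjacent blocks by the absolute difference between their average
# depth and the target block's average depth (best correlation first).
def rank_by_depth_correlation(target_depths, adjacent):
    """adjacent maps a block name to its list of depth values."""
    avg = lambda xs: sum(xs) / len(xs)
    t = avg(target_depths)
    return sorted(adjacent, key=lambda name: abs(avg(adjacent[name]) - t))

block_707 = [200, 201, 199, 200]                  # target area (foreground)
ranking = rank_by_depth_correlation(block_707, {
    "708": [198, 199, 200, 199],                  # same object as target
    "709": [30, 31, 29, 30],                      # background area
    "710": [120, 118, 121, 119],                  # partly overlapping object
})
```

With these illustrative depths the ranking reproduces the order described in the text: block 708, then 710, then 709.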
  • The adjacent block reference information determination unit 302 receives as inputs the reference image designation information of the encoding target block 703 and of its adjacent blocks 704, 705, and 706, and the reference information of the adjacent blocks 704, 705, and 706. First, the adjacent block reference information determination unit 302 determines whether the reference image of the block 708, which has the highest rank in the inter-block correlation information, is the same as the reference image of the encoding target block 703 (step B3). If they are the same, the reference information of that adjacent block, here the block 708, is output and the process proceeds to step B6. If not, the process proceeds to step B4.
  • In step B4, the adjacent block reference information determination unit 302 determines whether the reference image of the block 710, which has the next highest rank in the inter-block correlation information, is the same as the reference image of the encoding target block 703. If they are the same, the reference information of that adjacent block, here the block 710, is output and the process proceeds to step B6. If not, the process proceeds to step B5. In step B5, the adjacent block reference information determination unit 302 determines whether the reference image of the block 709, which has the next highest (lowest) rank in the inter-block correlation information, is the same as the reference image of the encoding target block 703. If they are the same, the reference information of that adjacent block, here the block 709, is output and the process proceeds to step B6. If not, the process proceeds to step B7.
  • In step B6, the reference information storage memory 303 stores the reference information of the adjacent block output by the adjacent block reference information determination unit 302, and the process proceeds to step B8.
  • In step B7, the reference information storage memory 303 stores (0, 0), that is, a zero vector, as the reference information.
  • In step B8, the prediction vector setting unit 304 sets the reference information stored in the reference information storage memory 303 in step B6 or B7 as the prediction vector and outputs it.
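Steps B3 to B8 amount to scanning the adjacent blocks in the order given by the inter-block correlation information and taking the reference information of the first block whose reference image matches the target's, falling back to the zero vector. A minimal sketch under that reading (function and parameter names are assumptions):

```python
def select_prediction_vector(ranked_blocks, target_ref_image):
    """Steps B3-B8: return the reference information of the highest-ranked
    adjacent block whose reference image matches the encoding target
    block's; fall back to the zero vector (step B7).

    ranked_blocks: list of (ref_image_id, ref_info) pairs,
                   most correlated first (e.g. blocks 708, 710, 709).
    """
    for ref_image, ref_info in ranked_blocks:
        if ref_image == target_ref_image:   # steps B3 / B4 / B5
            return ref_info                  # stored in step B6
    return (0, 0)                            # step B7: zero vector
```

The early return reproduces the cascade of steps B3, B4, and B5: once a matching reference image is found, lower-ranked adjacent blocks are never examined.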
  • In this way, the reference information of the adjacent block whose depth value is closest to that of the encoding target block, that is, the adjacent block most likely to belong to the same object as the encoding target block, is selected as the prediction vector.
  • As a result, the accuracy of the prediction vector improves, and the encoding efficiency of the reference information improves.
  • FIG. 7 is a schematic block diagram illustrating the configuration of the image decoding device 200 according to this embodiment.
  • The image decoding device 200 includes a difference image decoding unit 201, a difference reference information decoding unit 202, a reference image designation information decoding unit 203, a predicted image generation unit 204, a reference image memory 205, a reference information storage memory 206, a prediction vector generation unit 108, a reference image designation information storage memory 209, and addition units 210 and 211.
  • Like the image encoding device 100, the image decoding device 200 includes a prediction vector generation unit 108.
  • FIG. 8 is a flowchart for explaining the operation of the image decoding apparatus 200 according to this embodiment.
  • The process executed when encoded data corresponding to a two-viewpoint moving image is input to the image decoding device 200 is described according to this flowchart.
  • The description assumes that a plurality of images of the same viewpoint as the decoding target image, and an image of another viewpoint at the same time as the decoding target image, have already been decoded, that the decoding target image itself has been decoded up to an intermediate block, and that the results are stored in the reference image memory 205, the reference information storage memory 206, and the reference image designation information storage memory 209.
  • First, the difference image encoded data DE, the reference image designation information encoded data DRE, and the difference reference information encoded data RAE are input to the difference image decoding unit 201, the reference image designation information decoding unit 203, and the difference reference information decoding unit 202, respectively.
  • the data is input in units corresponding to blocks in the image encoding device 100, and the image decoding device 200 performs decoding in the order of the input blocks.
  • the image decoding apparatus 200 repeatedly executes the following processing until all blocks in the decoding target image are decoded (steps C2 to C14).
  • the reference image designation information decoding unit 203 decodes the reference image designation information encoded data DRE, and acquires reference image designation information (step C3).
  • the reference image designation information decoding unit 203 stores the decoded reference image designation information in the reference image designation information storage memory 209 (step C4).
  • the prediction vector generation unit 108 performs the same processing as the prediction vector generation unit 108 of the image encoding device 100, and generates a prediction vector (step C5).
  • the difference reference information decoding unit 202 decodes the difference reference information encoded data RAE and acquires difference reference information (step C6).
  • the adding unit 211 obtains reference information by calculating the sum of the difference reference information decoded by the difference reference information decoding unit 202 and the prediction vector generated by the prediction vector generation unit 108 (step C7). For later decoding processing, the adding unit 211 outputs the acquired reference information to the reference information storage memory 206 and stores it (step C8).
  • the reference image memory 205 outputs the reference image to the predicted image generation unit 204 according to the reference image designation information decoded by the reference image designation information decoding unit 203.
  • the predicted image generation unit 204 generates a predicted image from the reference image output from the reference image memory 205 and the reference information acquired by the adding unit 211 (step C9).
  • the difference image decoding unit 201 decodes the difference image encoded data DE and acquires a difference image (step C10).
  • The adding unit 210 obtains a decoded image by calculating the sum of the difference image acquired by the difference image decoding unit 201 and the predicted image acquired by the predicted image generation unit 204 (step C11), and outputs it as decoded image data DD, the output of the image decoding device 200.
  • The adding unit 210 also stores the decoded image in the reference image memory 205 (steps C12 and C13). The process returns to step C3 and is repeated until decoding of all the blocks included in the image is completed. Note that the output of the adding unit 210 is output from the image decoding device 200 after all the images displayed temporally before that image have been output.
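Step C7 mirrors the encoder side: the encoder transmits only the difference between the reference information and the prediction vector, and the adding unit 211 restores the reference information by component-wise addition. A sketch under that reading (function names are illustrative, not from the patent):

```python
def difference_reference_info(ref_info, pred_vector):
    """Encoder side: only the difference between the reference information
    and the prediction vector is coded (difference reference information)."""
    return (ref_info[0] - pred_vector[0], ref_info[1] - pred_vector[1])

def reconstruct_reference_info(pred_vector, diff_ref_info):
    """Step C7 (adding unit 211): the decoder recovers the reference
    information by adding the decoded difference reference information to
    the locally generated prediction vector, component-wise."""
    return (pred_vector[0] + diff_ref_info[0], pred_vector[1] + diff_ref_info[1])
```

Because the decoder's prediction vector generation unit 108 performs the same processing as the encoder's (step C5), the two sides compute identical prediction vectors and the round trip is lossless.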
  • In this way, the reference information of the adjacent block whose depth value is closest to that of the encoding target block, that is, the adjacent block most likely to belong to the same object as the encoding target block, is selected as the prediction vector.
  • As a result, the accuracy of the prediction vector improves, and the encoding efficiency of the reference information improves.
  • the image encoding device 100a in this embodiment is different from the image encoding device 100 shown in FIG. 1 in that a prediction vector generation unit 108a is provided instead of the prediction vector generation unit 108.
  • the image decoding device 200a in the present embodiment is different from the image decoding device 200 shown in FIG. 7 in that a prediction vector generation unit 108a is provided instead of the prediction vector generation unit 108.
  • FIG. 9 is a schematic block diagram illustrating the configuration of the prediction vector generation unit 108a in the present embodiment.
  • The prediction vector generation unit 108a includes an edge detection unit 401, an inter-block correlation determination unit 402, and a prediction vector calculation unit 109a.
  • the prediction vector calculation unit 109a includes an adjacent block reference information determination unit 302, a prediction vector candidate determination unit 403, and a prediction vector generation unit 404 using a median value.
  • The edge detection unit 401 determines the correlation between the encoding target block and an adjacent block by performing edge detection on the area corresponding to the adjacent block in the depth map.
  • The inter-block correlation determination unit 402 determines the correlation between the encoding target block and an adjacent block based on the magnitude relationship between the depth values of the area corresponding to the adjacent block in the depth map and those of the area corresponding to a peripheral block of that adjacent block.
  • the adjacent block reference information determination unit 302 is the same as the adjacent block reference information determination unit 302 in FIG.
  • The prediction vector candidate determination unit 403 combines the edge detection result from the edge detection unit 401 and the determination result from the inter-block correlation determination unit 402 as information indicating the correlation between an adjacent block and the encoding target block, and, based on that information, determines reference information to serve as prediction vector candidates from the determination results of the adjacent block reference information determination unit 302.
  • FIG. 10 is a flowchart for explaining the operation of the prediction vector generation unit 108a in the present embodiment.
  • a method for generating a prediction vector of one coding target block will be described with reference to this flowchart and FIGS. 5 and 11.
  • a depth map 702 shown in FIG. 11 is the same as the depth map 702 shown in FIG. 6, and is a depth map corresponding to the image 701 shown in FIG.
  • Reference numeral 707 is an area corresponding to the encoding target block, and reference numerals 708, 709, and 710 are areas corresponding to adjacent blocks.
  • a rectangle 801 on the left side of the area 708 corresponding to the adjacent block, a rectangle 802 on the upper side of the area 709, and a rectangle 803 on the upper right of the area 710 are areas corresponding to the peripheral blocks of the adjacent block.
  • A peripheral block is a block located at a position symmetrical to the encoding target block with respect to the adjacent block.
  • First, the edge detection unit 401 and the inter-block correlation determination unit 402 receive the depth map 702 corresponding to the encoding target image 701 as an input (step D1). Note that step D1 may be performed once per frame instead of once per encoding target block. The edge detection unit 401 then acquires edge information of the depth map 702 as information indicating the correlation between the encoding target block 703 and each of the adjacent blocks 704, 705, and 706, and outputs the result (step D2).
  • As the edge detection method, a known method such as one using a Canny filter or edge detection by differentiation can be used.
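For illustration only, a differentiation-based stand-in for the edge test of step D2 might look like the following; the patent leaves the concrete detector open (a Canny filter, differentiation, and so on), so the first-difference threshold used here is an assumption:

```python
def block_contains_edge(depth_block, threshold):
    """Crude differentiation-based edge test: return True if any horizontal
    or vertical first difference of the depth values in the block exceeds
    `threshold`.

    depth_block: 2-D list of depth values for the area corresponding to an
    adjacent block in the depth map.
    """
    h, w = len(depth_block), len(depth_block[0])
    for y in range(h):
        for x in range(w):
            # Horizontal first difference.
            if x + 1 < w and abs(depth_block[y][x + 1] - depth_block[y][x]) > threshold:
                return True
            # Vertical first difference.
            if y + 1 < h and abs(depth_block[y + 1][x] - depth_block[y][x]) > threshold:
                return True
    return False
```

A depth discontinuity inside an adjacent block suggests an object boundary, which is why step D5 routes edge-containing blocks to the extra correlation check of step D6.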
  • In step D4, the adjacent block reference information determination unit 302 determines whether the reference image of the adjacent block is the same as the reference image of the encoding target block, as in the first embodiment. When they are determined not to be the same (D4-No), the process proceeds to step D7. When they are determined to be the same (D4-Yes), the process proceeds to step D5. In step D5, the prediction vector candidate determination unit 403 determines, based on the output of the edge detection unit 401, whether an edge is included in the adjacent block.
  • When it is determined that no edge is included (D5-No), the process proceeds to step D8, where the prediction vector candidate determination unit 403 sets the reference information of the adjacent block as a prediction vector candidate.
  • In step D9, if all the adjacent blocks have been processed, the process proceeds to step D10; if there is an unprocessed adjacent block, the process returns to step D4.
  • When it is determined in step D5 that an edge is included (D5-Yes), the process proceeds to step D6.
  • In step D6, the inter-block correlation determination unit 402 acquires information indicating the correlation with the adjacent block. Specifically, the correlation between the encoding target block and the adjacent block (block X) is determined using the following expression (1).
  • |Depth[encoding target block] − Depth[block X]| < |Depth[block X] − Depth[block X′]| … (1)
  • Here, Depth[block α] denotes the average of the depth-map values in the region corresponding to block α, and block X′ is a peripheral block of block X. That is, in the example of FIG. 11, Depth[encoding target block] is the average depth value of the region 707.
  • When block X is the adjacent block on the left, Depth[block X] is the average depth value of the region 708 and Depth[block X′] is the average depth value of the region 801.
  • When block X is the adjacent block above, Depth[block X] is the average depth value of the region 709 and Depth[block X′] is the average depth value of the region 802.
  • When block X is the adjacent block at the upper right, Depth[block X] is the average depth value of the region 710 and Depth[block X′] is the average depth value of the region 803.
  • That is, in the loop of steps D3 to D9, the inter-block correlation determination unit 402 compares the absolute difference between the average depth value of the region 707 corresponding to the encoding target block and the average depth value of the region (708, 709, or 710) corresponding to the adjacent block being processed with the absolute difference between the average depth value of that region and the average depth value of the region corresponding to the peripheral block of the adjacent block being processed. Based on this magnitude relationship, the inter-block correlation determination unit 402 determines whether there is a correlation between the encoding target block and the adjacent block.
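The comparison of expression (1) over the loop of steps D3 to D9 can be sketched as follows; whether the inequality is strict is an assumption, since the text only specifies a magnitude comparison:

```python
def is_correlated(depth_target, depth_adjacent, depth_peripheral):
    """Expression (1): the encoding target block and the adjacent block X
    are judged correlated when the target is closer in average depth to X
    than X's peripheral block X' is.

    Each argument is the average depth value of the corresponding region
    (e.g. 707 for the target, 708 for a left adjacent block X,
    801 for its peripheral block X').
    """
    return abs(depth_target - depth_adjacent) < abs(depth_adjacent - depth_peripheral)
```

Intuitively, if the adjacent block's depth is closer to the target's than to its own neighbour on the far side, the edge it contains probably separates it from the peripheral block, not from the target.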
  • If the result of the determination in step D6 is that there is a correlation (D6-Yes), the process proceeds to step D8 described above. On the other hand, if the result is that there is no correlation (D6-No), the process proceeds to step D7.
  • In step D7, the prediction vector candidate determination unit 403 transitions to step D9 without setting the reference information of the adjacent block as a prediction vector candidate.
  • In step D9, as described above, if all the adjacent blocks have been processed, the process proceeds to step D10; if there is an unprocessed adjacent block, the process returns to step D4.
  • In step D10, the median-based prediction vector generation unit 404 receives the zero to three pieces of reference information set as prediction vector candidates as input, and generates and outputs a prediction vector in the same manner as in H.264/AVC. Specifically, when three pieces of reference information are received, the median of the three is taken for each of the horizontal component and the vertical component, and the result is used as the prediction vector.
  • When two pieces of reference information are received, reference information whose horizontal component and vertical component are both 0 is added to bring the total to three, and the prediction vector is generated in the same manner as when three pieces of reference information are received.
  • When one piece of reference information is received, that reference information is used as the prediction vector; when there are none, the horizontal and vertical components of the prediction vector are set to 0.
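The candidate-count cases of step D10 can be sketched as follows (the function name is illustrative; the padding and component-wise median rules follow the description above):

```python
def median_prediction_vector(candidates):
    """Median-based prediction vector generation (H.264/AVC style):
    3 candidates -> component-wise median of the three;
    2 candidates -> pad with (0, 0) and take the median of three;
    1 candidate  -> that candidate itself;
    0 candidates -> the zero vector.

    candidates: list of (horizontal, vertical) reference-information vectors.
    """
    if not candidates:
        return (0, 0)
    if len(candidates) == 1:
        return candidates[0]
    if len(candidates) == 2:
        candidates = candidates + [(0, 0)]  # pad to three with a zero vector

    def median3(a, b, c):
        return sorted((a, b, c))[1]

    xs, ys = zip(*candidates)
    # The median is taken independently for the horizontal and vertical components.
    return (median3(*xs), median3(*ys))
```

Taking the median per component rather than per vector is the same convention H.264/AVC uses for its motion vector predictor.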
  • When it is determined in step D6 that there is a correlation, the process proceeds to step D8 and the reference information of the adjacent block is set as a prediction vector candidate. This is because, even if the adjacent block contains an edge, most of the adjacent block may still be included in the same object as the encoding target block, in which case generating the prediction vector using that reference information is more likely to improve the accuracy of the prediction vector.
  • In this way, for an adjacent block containing an edge, if the information indicating the correlation between the adjacent block and the encoding target block, which is the output of the inter-block correlation determination unit 402, indicates that there is a correlation, the reference information of the adjacent block is set as a prediction vector candidate; if it indicates that there is no correlation, the reference information of the adjacent block is not set as a prediction vector candidate.
  • the third embodiment of the present invention will be described below with reference to the drawings.
  • A method of generating a prediction vector by a weighted average, using information indicating the correlation between the encoding target block and the adjacent blocks as weights, is described below.
  • the image encoding device 100b according to this embodiment is different from the image encoding device 100 illustrated in FIG. 1 in that a prediction vector generation unit 108b is provided instead of the prediction vector generation unit 108.
  • the image decoding device 200b according to the present embodiment is different from the image decoding device 200 shown in FIG. 7 in that a prediction vector generation unit 108b is provided instead of the prediction vector generation unit 108.
  • FIG. 12 is a schematic block diagram showing the configuration of the prediction vector generation unit 108b in the present embodiment.
  • The prediction vector generation unit 108b generates a prediction vector by a weighted average using information indicating the correlation between the encoding target block and the adjacent blocks as weights.
  • the prediction vector generation unit 108b includes an inter-block correlation calculation unit 301 and a prediction vector calculation unit 109b.
  • the prediction vector calculation unit 109b includes an adjacent block reference information determination unit 302 and a prediction vector generation unit 502 using a weighted average.
  • the inter-block correlation calculation unit 301 and the adjacent block reference information determination unit 302 are the same as the inter-block correlation calculation unit 301 and the adjacent block reference information determination unit 302 in FIG.
  • The weighted-average-based prediction vector generation unit 502 calculates the weighted average of the reference information of the adjacent blocks using weights according to the correlation calculated by the inter-block correlation calculation unit 301, and sets the result as the prediction vector.
  • FIG. 13 is a flowchart for explaining the operation of the prediction vector generation unit 108b in the present embodiment.
  • the prediction vector generation method will be described with reference to this flowchart and FIG.
  • the inter-block correlation calculation unit 301 receives a depth map 702 corresponding to the image 701 as an input (step E1). Then, the inter-block correlation calculation unit 301 calculates and outputs information representing the correlation between the encoding target block 703 and the adjacent blocks 704, 705, and 706 using the depth map (step E2).
  • Specifically, the inter-block correlation calculation unit 301 calculates and outputs the absolute difference between the average depth value of the block 707, at the same position as the encoding target block on the depth map 702, and the average depth values of the adjacent blocks 708, 709, and 710. A squared error may be calculated and output instead of the absolute difference.
  • As in the first embodiment, the adjacent block reference information determination unit 302 determines, for each adjacent block, whether its reference image is the same as that of the encoding target block (step E4). If it is the same, the reference information of the adjacent block is set as a prediction vector candidate (step E5); if it is not, the reference information of the adjacent block is excluded from the prediction vector candidates (step E6).
  • The weighted-average-based prediction vector generation unit 502 receives, from the adjacent block reference information determination unit 302, the reference information of the adjacent blocks set as prediction vector candidates, and receives, from the inter-block correlation calculation unit 301, the information indicating the correlation between the adjacent blocks and the encoding target block. It then calculates, for each of the horizontal component and the vertical component, a weighted average of the reference information using the reciprocal of the correlation information as the weight, and sets the result as the prediction vector (step E8).
  • This makes it possible to obtain reference information that emphasizes the adjacent blocks likely to lie within the same object as the encoding target block, thereby improving the accuracy of the prediction vector and the encoding efficiency of the reference information.
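Step E8 can be sketched as follows; the `eps` guard against division by zero when the depth difference is exactly 0 is an added assumption, not part of the patent:

```python
def weighted_average_prediction_vector(candidates, correlations, eps=1e-6):
    """Step E8: weighted average of the candidate reference information,
    using the reciprocal of the correlation measure (the absolute depth
    difference computed in step E2) as the weight, per component.

    candidates:   list of (horizontal, vertical) reference-information vectors
    correlations: matching list of absolute depth differences
                  (smaller difference = stronger correlation = larger weight)
    eps:          guard against division by zero (an assumption).
    """
    weights = [1.0 / (c + eps) for c in correlations]
    total = sum(weights)
    x = sum(w * v[0] for w, v in zip(weights, candidates)) / total
    y = sum(w * v[1] for w, v in zip(weights, candidates)) / total
    return (x, y)
```

Because the weight is the reciprocal of the depth difference, adjacent blocks whose depth is close to the target's dominate the average, which is exactly the emphasis described above.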
  • In the above embodiments, the image encoding device and the image decoding device target two-viewpoint moving images, but they may instead target three-viewpoint moving images, one-viewpoint moving images, or multi-view still images.
  • Note that in the case of a one-viewpoint moving image, parallax compensation prediction cannot be selected as an encoding mode.
  • Likewise, in the case of a multi-view still image, inter-frame motion compensation prediction cannot be selected as an encoding mode.
  • The above processing related to image encoding and decoding can be realized as a transmission and storage device using hardware, and can also be realized by firmware stored in a ROM, flash memory, or the like, or by software on a computer.
  • The firmware program or software program can be recorded on a computer-readable recording medium, provided from a server through a wired or wireless network, or provided as a data broadcast of terrestrial or satellite digital broadcasting.
  • Although the embodiments of the present invention have been described in detail with reference to the drawings, the specific configuration is not limited to these embodiments, and designs and the like that do not depart from the gist of the present invention are also included within the scope of the claims.
  • A program for realizing the functions of the image encoding device or the image decoding device in each of the above embodiments, or a part of those functions, may be recorded on a computer-readable recording medium, and these functions may be realized by reading the program recorded on the recording medium into a computer system and executing it.
  • the “computer system” includes an OS and hardware such as peripheral devices.
  • the “computer system” includes a homepage providing environment (or display environment) if a WWW system is used.
  • The “computer-readable recording medium” refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or a storage device such as a hard disk incorporated in a computer system.
  • Furthermore, the “computer-readable recording medium” includes a medium that dynamically holds a program for a short time, such as a communication line used when the program is transmitted via a network such as the Internet or a communication line such as a telephone line, and a medium that holds a program for a certain period of time, such as a volatile memory inside a computer system serving as a server or a client in that case.
  • The program may be one for realizing a part of the functions described above, or one that can realize those functions in combination with a program already recorded in the computer system.
  • 201: difference image decoding unit, 202: difference reference information decoding unit, 203: reference image designation information decoding unit, 204: predicted image generation unit, 205: reference image memory, 206: reference information storage memory, 209: reference image designation information storage memory, 210, 211: addition unit, 301: inter-block correlation calculation unit, 302: adjacent block reference information determination unit, 303: adjacent block reference information storage memory, 304: prediction vector setting unit, 401: edge detection unit, 402: inter-block correlation determination unit, 403: prediction vector candidate determination unit, 404: median-based prediction vector generation unit, 502: weighted-average-based prediction vector generation unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention relates to a prediction vector generation method for reference information, used when an encoding or decoding target image is divided into blocks, an inter-image motion prediction coding technique or a parallax compensation prediction coding technique is applied to each block, a predicted image of a target block is generated on the basis of the reference image of the target block and reference information representing the position of a region corresponding to the target block within the reference image, and the image is encoded or decoded. The prediction vector generation method achieves higher coding efficiency by including a prediction vector generation step of generating prediction vectors for the target blocks using information representing distances corresponding to the target image and the reference information of blocks adjacent to the target blocks.
PCT/JP2011/072032 2010-09-30 2011-09-27 Procédé de génération de vecteurs de prédiction, procédé de codage d'image, procédé de décodage d'image, dispositif de génération de vecteurs de prédiction, dispositif de codage d'image, dispositif de décodage d'image, programme de génération de vecteurs de prédiction, programme de codage d'image et programme de décodage d'image WO2012043541A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010222189A JP4938884B2 (ja) 2010-09-30 2010-09-30 予測ベクトル生成方法、画像符号化方法、画像復号方法、予測ベクトル生成装置、画像符号化装置、画像復号装置、予測ベクトル生成プログラム、画像符号化プログラムおよび画像復号プログラム
JP2010-222189 2010-09-30

Publications (1)

Publication Number Publication Date
WO2012043541A1 true WO2012043541A1 (fr) 2012-04-05

Family

ID=45892981

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/072032 WO2012043541A1 (fr) 2010-09-30 2011-09-27 Procédé de génération de vecteurs de prédiction, procédé de codage d'image, procédé de décodage d'image, dispositif de génération de vecteurs de prédiction, dispositif de codage d'image, dispositif de décodage d'image, programme de génération de vecteurs de prédiction, programme de codage d'image et programme de décodage d'image

Country Status (2)

Country Link
JP (1) JP4938884B2 (fr)
WO (1) WO2012043541A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013157439A1 (fr) * 2012-04-17 2013-10-24 ソニー株式会社 Dispositif et procédé de décodage, dispositif et procédé de codage
JP2015516751A (ja) * 2012-04-13 2015-06-11 コーニンクレッカ フィリップス エヌ ヴェ 奥行きシグナリングデータ
CN104769947A (zh) * 2013-07-26 2015-07-08 北京大学深圳研究生院 一种基于p帧的多假设运动补偿编码方法

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9363500B2 (en) 2011-03-18 2016-06-07 Sony Corporation Image processing device, image processing method, and program
WO2012153440A1 (fr) * 2011-05-09 2012-11-15 シャープ株式会社 Procédé générateur de vecteur de prédiction, dispositif générateur de vecteur de prédiction, programme générateur de vecteur de prédiction, procédé de codage d'image, dispositif de codage d'image, programme de codage d'image, procédé de décodage d'image, dispositif de décodage d'image et programme de décodage d'image
JP2013110555A (ja) * 2011-11-21 2013-06-06 Sharp Corp 画像符号化装置、画像復号装置、並びにそれらの方法及びプログラム
US9736497B2 (en) * 2012-07-10 2017-08-15 Sharp Kabushiki Kaisha Prediction vector generation device, image encoding device, image decoding device, prediction vector generation method, and program
JP6000463B2 (ja) * 2012-09-21 2016-09-28 聯發科技股▲ふん▼有限公司Mediatek Inc. 3d映像符号化の仮想深度値の方法および装置
JP2014093602A (ja) * 2012-11-01 2014-05-19 Toshiba Corp 画像処理装置、画像処理方法、画像処理プログラム、および立体画像表示装置
JP6101067B2 (ja) * 2012-12-12 2017-03-22 日本放送協会 画像処理装置及び画像処理プログラム
JP6110724B2 (ja) * 2013-05-01 2017-04-05 日本放送協会 画像処理装置、符号化装置、及び符号化プログラム

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009013682A2 (fr) * 2007-07-26 2009-01-29 Koninklijke Philips Electronics N.V. Méthode et appareil pour la propagation d'informations liées à la profondeur
JP2009543508A (ja) * 2006-07-12 2009-12-03 エルジー エレクトロニクス インコーポレイティド 信号処理方法及び装置
WO2010064396A1 (fr) * 2008-12-03 2010-06-10 株式会社日立製作所 Procédé de décodage d'images animées et procédé de codage d'images animées

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3637226B2 (ja) * 1999-02-01 2005-04-13 株式会社東芝 動き検出方法、動き検出装置及び記録媒体
JP2001175863A (ja) * 1999-12-21 2001-06-29 Nippon Hoso Kyokai <Nhk> 多視点画像内挿方法および装置

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009543508A (ja) * 2006-07-12 2009-12-03 エルジー エレクトロニクス インコーポレイティド 信号処理方法及び装置
WO2009013682A2 (fr) * 2007-07-26 2009-01-29 Koninklijke Philips Electronics N.V. Méthode et appareil pour la propagation d'informations liées à la profondeur
WO2010064396A1 (fr) * 2008-12-03 2010-06-10 株式会社日立製作所 Procédé de décodage d'images animées et procédé de codage d'images animées

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHIN'YA SHIMIZU ET AL.: "Efficient Multi-view Video Coding using Multi-view Depth Map", THE JOURNAL OF THE INSTITUTE OF IMAGE INFORMATION AND TELEVISION ENGINEERS, vol. 63, no. 4, 1 April 2009 (2009-04-01), pages 524 - 532 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015516751A (ja) * 2012-04-13 2015-06-11 コーニンクレッカ フィリップス エヌ ヴェ 奥行きシグナリングデータ
WO2013157439A1 (fr) * 2012-04-17 2013-10-24 ソニー株式会社 Dispositif et procédé de décodage, dispositif et procédé de codage
CN104769947A (zh) * 2013-07-26 2015-07-08 北京大学深圳研究生院 一种基于p帧的多假设运动补偿编码方法
CN104769947B (zh) * 2013-07-26 2019-02-26 北京大学深圳研究生院 一种基于p帧的多假设运动补偿编码方法

Also Published As

Publication number Publication date
JP2012080242A (ja) 2012-04-19
JP4938884B2 (ja) 2012-05-23

Similar Documents

Publication Publication Date Title
JP4938884B2 (ja) 予測ベクトル生成方法、画像符号化方法、画像復号方法、予測ベクトル生成装置、画像符号化装置、画像復号装置、予測ベクトル生成プログラム、画像符号化プログラムおよび画像復号プログラム
CN106331703B (zh) 视频编码和解码方法、视频编码和解码装置
JP5970609B2 (ja) 3dビデオ符号化における統一された視差ベクトル導出の方法と装置
US9264691B2 (en) Method and system for backward 3D-view synthesis prediction using neighboring blocks
CN113491124A (zh) 基于dmvr的帧间预测方法和设备
US20140092210A1 (en) Method and System for Motion Field Backward Warping Using Neighboring Blocks in Videos
JP6307152B2 (ja) 画像符号化装置及び方法、画像復号装置及び方法、及び、それらのプログラム
JPWO2012147621A1 (ja) 符号化装置および符号化方法、並びに、復号装置および復号方法
JP6039178B2 (ja) 画像符号化装置、画像復号装置、並びにそれらの方法及びプログラム
CN114731428A (zh) 用于执行prof的图像编码/解码方法和装置及发送比特流的方法
KR20180037042A (ko) 모션 벡터 필드 코딩 방법 및 디코딩 방법, 및 코딩 및 디코딩 장치들
WO2012077634A9 (fr) Procédé de codage d'image à vues multiples, procédé de décodage d'image à vues multiples, dispositif de codage d'image à vues multiples, dispositif de décodage d'image à vues multiples, et programmes associés
CN111247805A (zh) 在图像编码系统中基于以子块为单元进行的运动预测的图像解码方法和设备
KR101598855B1 (ko) 입체영상 부호화 장치 및 방법
US20230199175A1 (en) Method and device for subpicture-based image encoding/decoding, and method for transmitting bitstream
CN114208171A (zh) 用于推导用于生成预测样本的权重索引信息的图像解码方法和装置
CN114080810A (zh) 基于帧间预测的图像编译方法和装置
JPWO2014168121A1 (ja) 画像符号化方法、画像復号方法、画像符号化装置、画像復号装置、画像符号化プログラム、および画像復号プログラム
US20220224912A1 (en) Image encoding/decoding method and device using affine tmvp, and method for transmitting bit stream
JP4944046B2 (ja) 映像符号化方法,復号方法,符号化装置,復号装置,それらのプログラムおよびコンピュータ読み取り可能な記録媒体
JPWO2015056712A1 (ja) 動画像符号化方法、動画像復号方法、動画像符号化装置、動画像復号装置、動画像符号化プログラム、及び動画像復号プログラム
JP2015128252A (ja) 予測画像生成方法、予測画像生成装置、予測画像生成プログラム及び記録媒体
WO2012108315A1 (fr) Procédé de génération d'informations prévues, procédé de codage d'image, procédé de décodage d'image, appareil de génération d'informations prévues, programme de génération d'informations prévues, appareil de codage d'image, programme de codage d'image, appareil de décodage d'image et programme de décodage d'image
CN114007078A (zh) 一种运动信息候选列表的构建方法、装置及其设备
JP2012124946A (ja) 予測ベクトル生成方法、画像符号化方法、画像復号方法、予測ベクトル生成装置、画像符号化装置、画像復号装置、予測ベクトル生成プログラム、画像符号化プログラムおよび画像復号プログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11829094

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11829094

Country of ref document: EP

Kind code of ref document: A1