WO2015060508A1 - Video encoding/decoding method and apparatus - Google Patents

Video encoding/decoding method and apparatus Download PDF

Info

Publication number
WO2015060508A1
Authority
WO
WIPO (PCT)
Prior art keywords
merge motion
motion candidate
candidate
merge
list
Prior art date
Application number
PCT/KR2014/003517
Other languages
English (en)
Korean (ko)
Inventor
방건
이광순
허남호
박광훈
허영수
김경용
이윤진
Original Assignee
한국전자통신연구원
경희대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국전자통신연구원, 경희대학교 산학협력단
Priority to EP14855443.9A (EP3062518A4)
Priority to US14/903,117 (US10080029B2)
Priority claimed from KR1020140048066A (KR102227279B1)
Publication of WO2015060508A1
Priority to US16/103,042 (US10412403B2)

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding

Definitions

  • the present invention relates to a video encoding / decoding method and apparatus, and more particularly, to a method and apparatus for constructing a merge motion candidate list for 3D video coding.
  • 3D video vividly provides a user with a three-dimensional effect as seen and felt in the real world through a three-dimensional display device.
  • Related standardization work on the 3D video standard is in progress in the Joint Collaborative Team on 3D Video Coding Extension Development (JCT-3V), a joint standardization group of ISO/IEC's Moving Picture Experts Group (MPEG) and ITU-T's Video Coding Experts Group (VCEG).
  • The 3D video standard under development includes standards for an advanced data format, and for the related technology, that can support the playback of autostereoscopic video as well as stereoscopic video using a texture view and its depth map.
  • the present invention provides an image encoding / decoding method and apparatus capable of improving image encoding / decoding efficiency.
  • the present invention provides a 3D video encoding / decoding method and apparatus capable of improving encoding / decoding efficiency.
  • the present invention provides a method and apparatus for constructing a merge motion candidate list in 3D video encoding / decoding.
  • According to one aspect of the present invention, a video decoding method for video including a plurality of views may include constructing a merge motion candidate list by deriving basic merge motion candidates for a current prediction unit (PU), and, when the current PU belongs to a depth map or a dependent view, deriving an extended merge motion candidate for the current PU and adding the extended merge motion candidate to the merge motion candidate list.
  • In this case, the extended merge motion candidate may be added to the merge motion candidate list when the extended merge motion candidate is not the same as a basic merge motion candidate already in the merge motion candidate list.
  • According to another aspect of the present invention, there is provided a video decoding apparatus for video including a plurality of views.
  • The video decoding apparatus includes a basic merge motion list construction module that derives basic merge motion candidates for a current prediction unit (PU) and constructs a merge motion candidate list, and an additional merge motion list construction module that, when the current PU belongs to a depth map or a dependent view, derives an extended merge motion candidate for the current PU and adds the extended merge motion candidate to the merge motion candidate list.
  • the additional merge motion list construction module may add the extended merge motion candidate to the merge motion candidate list when the extended merge motion candidate is not the same as the basic merge motion candidate in the merge motion candidate list.
  • According to yet another aspect of the present invention, a video encoding method for video including a plurality of views may include constructing a merge motion candidate list by deriving basic merge motion candidates for a current prediction unit (PU), and, when the current PU belongs to a depth map or a dependent view, deriving an extended merge motion candidate for the current PU and adding the extended merge motion candidate to the merge motion candidate list.
  • In this case, the extended merge motion candidate may be added to the merge motion candidate list when the extended merge motion candidate is not the same as a basic merge motion candidate already in the merge motion candidate list.
  • According to still another aspect of the present invention, there is provided a video encoding apparatus for video including a plurality of views.
  • The video encoding apparatus includes a basic merge motion list construction module that derives basic merge motion candidates for a current prediction unit (PU) and constructs a merge motion candidate list, and an additional merge motion list construction module that, when the current PU belongs to a depth map or a dependent view, derives an extended merge motion candidate for the current PU and adds the extended merge motion candidate to the merge motion candidate list.
  • the additional merge motion list construction module may add the extended merge motion candidate to the merge motion candidate list when the extended merge motion candidate is not the same as the basic merge motion candidate in the merge motion candidate list.
  • The modules used for encoding the normal image of the independent view (View 0), which provides backward compatibility, can be applied as they are to the normal images and the depth maps of the dependent views (View 1 and View 2), so implementation complexity can be reduced.
  • In addition, coding efficiency may be improved by additionally applying a partial encoder to the normal images and the depth maps of the dependent views (View 1 and View 2).
  • FIG. 1 is an example schematically showing the basic structure and data format of a 3D video system.
  • FIG. 2 is a diagram illustrating an example of an actual image and a depth map image of a “balloons” image.
  • FIG. 3 is an example illustrating the structure of inter-view prediction in a 3D video codec.
  • FIG. 4 illustrates an example of a process of encoding / decoding a texture view and a depth view in a 3D video encoder / decoder.
  • FIG. 5 shows an example of a prediction structure of a 3D video codec.
  • FIG. 6 is a schematic structural diagram of an encoder of a 3D video codec.
  • FIG. 7 is a diagram illustrating a merge motion method used in an HEVC-based 3D video codec (3D-HEVC).
  • FIG. 8 shows an example of neighboring blocks used to construct a merge motion list for a current block.
  • FIG. 9 is a diagram illustrating an example of a hardware implementation of a method of constructing a merge motion candidate list.
  • FIG. 10 is a diagram schematically illustrating a 3D video codec according to an embodiment of the present invention.
  • FIG. 11 is a conceptual diagram schematically illustrating a merge motion method according to an embodiment of the present invention.
  • FIG. 12 is a diagram illustrating an example of hardware implementation of the merge motion method of FIG. 11 according to an embodiment of the present invention.
  • FIG. 13 is a conceptual diagram illustrating a method of constructing a merge motion candidate list of FIGS. 11 and 12 according to an embodiment of the present invention.
  • FIG. 14 illustrates a method of constructing an extended merge motion candidate list according to an embodiment of the present invention.
  • FIG. 15 is a diagram for describing a method of constructing an extended merge motion candidate list according to another embodiment of the present invention.
  • FIG. 16 is a flowchart schematically illustrating a method of constructing a merge motion candidate list according to an embodiment of the present invention.
  • FIGS. 17A to 17F are flowcharts illustrating a method of adding an extended merge motion candidate to a merge motion candidate list according to an embodiment of the present invention.
  • FIG. 18 is a flowchart schematically illustrating a method of constructing a merge motion candidate list in video encoding / decoding including a plurality of viewpoints according to an embodiment of the present invention.
  • Terms such as first and second may be used to describe various components, but the components are not limited by these terms. The terms are used only to distinguish one component from another.
  • For example, a first component may be referred to as a second component, and similarly, a second component may also be referred to as a first component.
  • The components shown in the embodiments of the present invention are shown independently to represent different characteristic functions, which does not mean that each component is made of separate hardware or a single software unit.
  • That is, each component is listed separately for convenience of description, and at least two of the components may be combined into one component, or one component may be divided into a plurality of components that perform its function.
  • The integrated and separated embodiments of the components are also included in the scope of the present invention as long as they do not depart from the spirit of the present invention.
  • Some components may not be essential for performing the essential functions of the present invention, but may be optional components merely for improving performance.
  • The present invention can be implemented using only the components essential for realizing the essence of the present invention, excluding the components used merely for improving performance, and a structure including only the essential components, excluding the optional components used for improving performance, is also included in the scope of the present invention.
  • FIG. 1 is an example schematically showing the basic structure and data format of a 3D video system.
  • the 3D video system of FIG. 1 may be a basic 3D video system under consideration in the 3D video standard.
  • As shown in FIG. 1, a 3D video system may include a sender (transmitter) that generates multi-view video content and a receiver that decodes the received video content and provides multi-view video to a user.
  • the transmitter may generate video information using a stereo camera and a multiview camera, and generate a depth map or a depth view using the depth camera.
  • the transmitter may convert a 2D image into a 3D image using a converter.
  • the transmitter may generate image content of an N (N ⁇ 2) view using the generated video information, the depth map, and the like.
  • the image content of the N view may include video information of the N view, depth-map information thereof, and additional information related to a camera.
  • the video content of N views may be compressed using a multiview video encoding method in a 3D video encoder, and the compressed video content (bitstream) may be transmitted to a terminal on the receiving side through a network.
  • The receiving side may decode the received bitstream using a multi-view video decoding method in a video decoder (e.g., a 3D video decoder, a stereo video decoder, a 2D video decoder, etc.) to reconstruct the images of the N views.
  • The reconstructed N-view images may be used to generate virtual viewpoint images of N or more views through a depth-image-based rendering (DIBR) process.
  • The generated virtual viewpoint images of N or more views are reproduced on various stereoscopic display devices (for example, N-view displays, stereo displays, 2D displays, etc.) to provide the user with a stereoscopic image.
  • FIG. 2 is a diagram illustrating an example of an actual image and a depth map image of a “balloons” image.
  • The depth map is used to generate a virtual viewpoint image and represents, with a certain number of bits, the real-world distance between the camera and the real object (depth information corresponding to each pixel at the same resolution as the actual image).
  • FIG. 2 (a) shows “balloons” images being used in the 3D video coding standard of MPEG, which is an international standardization organization.
  • FIG. 2B illustrates a depth map image of the “balloons” image shown in FIG. 2A.
  • the depth map image illustrated in FIG. 2B expresses depth information displayed on the screen at 8 bits per pixel.
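  • As a small illustration of the 8-bit depth representation described above, the sketch below quantizes a real-world distance into an 8-bit depth value using a 1/Z mapping between a near and a far clipping plane. This particular mapping is a commonly used convention and an assumption made here for illustration; it is not a formula given in the text.

```python
# Quantize a real-world distance z into the 8 bits per pixel mentioned above.
# The 1/Z mapping with near/far clipping planes is an assumed convention.
def depth_to_8bit(z: float, z_near: float, z_far: float) -> int:
    """Map a distance z (z_near <= z <= z_far) to an 8-bit depth value."""
    v = 255.0 * ((1.0 / z - 1.0 / z_far) / (1.0 / z_near - 1.0 / z_far))
    return int(round(max(0.0, min(255.0, v))))

# Example: with z_near = 1.0 and z_far = 100.0 (meters), an object at 1 m
# maps to 255 (closest) and an object at 100 m maps to 0 (farthest).
print(depth_to_8bit(1.0, 1.0, 100.0), depth_to_8bit(100.0, 1.0, 100.0))
```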
  • A real image and its depth map may be encoded using, for example, H.264/AVC (MPEG-4 Part 10 Advanced Video Coding), or using the HEVC (High Efficiency Video Coding) international video standard jointly standardized by the Moving Picture Experts Group (MPEG) and the Video Coding Experts Group (VCEG).
  • The real image and its depth map may be images obtained not only from one camera but also from several cameras. Images obtained from multiple cameras may be encoded independently, using a general two-dimensional video codec. In addition, since images obtained from multiple cameras have correlation between viewpoints, they may be encoded using inter-view prediction to increase encoding efficiency.
  • FIG. 3 is an example illustrating the structure of inter-view prediction in a 3D video codec.
  • View 1 is an image obtained from a camera located on the left side with respect to View 0, and View 2 is an image obtained from a camera located on the right side with respect to View 0.
  • View 1 and View 2 perform inter-view prediction using View 0 as a reference image, so View 0 must be coded before View 1 and View 2.
  • view 0 is called an independent view because it may be independently encoded regardless of other views.
  • view 1 and view 2 are referred to as dependent views because they are encoded using view 0 as a reference image.
  • Independent viewpoint images may be encoded using a general two-dimensional video codec.
  • On the other hand, since a dependent view image needs inter-view prediction, it may be encoded using a 3D video codec that includes an inter-view prediction process.
  • the view 1 and the view 2 may be encoded using the depth map.
  • When a real image and its depth map are encoded, they may be encoded/decoded independently of each other.
  • Alternatively, when a real image and its depth map are encoded, they may be encoded/decoded with dependence on each other, as shown in FIG. 4.
  • FIG. 4 illustrates an example of a process of encoding / decoding a texture view and a depth view in a 3D video encoder / decoder.
  • Referring to FIG. 4, the 3D video encoder may include a real image encoder for encoding a texture view and a depth map encoder for encoding a depth view.
  • the real image encoder may encode the real image using a depth map that is already encoded by the depth map encoder.
  • the depth map encoder may encode the depth map by using the real image that is already encoded by the real image encoder.
  • the 3D video decoder may include a real image decoder for decoding an actual image and a depth map decoder for decoding a depth map.
  • the real image decoder may decode the real image using the depth map already decoded by the depth map decoder.
  • the depth map decoder may decode the depth map using the real image that is already decoded by the real image decoder.
  • FIG. 5 shows an example of a prediction structure of a 3D video codec.
  • FIG. 5 is a diagram illustrating an encoding prediction structure for encoding a real image obtained from three cameras and a depth map of the real image for convenience of description.
  • Three real images acquired by the three cameras are denoted T0, T1, and T2 according to viewpoint,
  • three depth maps of the same position as the actual image are represented by D0, D1, and D2 according to viewpoints.
  • T0 and D0 are images acquired at View 0,
  • T1 and D1 are images acquired at View 1
  • T2 and D2 are images acquired at View 2.
  • Each picture is divided into an I picture (Intra Picture), a P picture (Uni-prediction Picture), and a B picture (Bi-prediction Picture) according to an encoding type, and may be encoded according to an encoding type of each picture.
  • the I picture encodes the image itself without inter-picture prediction
  • the P picture predicts and encodes the picture using the reference picture only in the forward direction
  • The B picture encodes the picture using inter prediction with reference pictures in both the forward and backward directions.
  • Arrows in FIG. 5 indicate prediction directions. That is, the real image and its depth map may be encoded / decoded depending on the prediction direction.
  • Temporal prediction is a prediction method using temporal correlation within the same viewpoint
  • inter-view prediction is a prediction method using inter-view correlation at adjacent viewpoints. Such temporal prediction and inter-view prediction may be mixed with each other in a picture.
  • the current block refers to a block in which the current prediction is performed in the real image.
  • The motion information may mean only a motion vector, or may include a motion vector, a reference picture number, and whether unidirectional prediction, bidirectional prediction, inter-view prediction, temporal prediction, or another type of prediction is used.
  • FIG. 6 is a schematic structural diagram of an encoder of a 3D video codec.
  • the 3D video codec 600 receives and encodes different viewpoint images (eg, view 0, view 1, and view 2) as inputs.
  • the encoded integrated bitstream may be output.
  • the images may include not only a texture view but also a depth view.
  • the 3D video codec 600 may encode input images by different encoders according to view information (View ID information).
  • the image of view 0 may be encoded by the existing 2D video codec for backward compatibility, and thus may be encoded by the base layer encoder 610.
  • The images of View 1 and View 2 must be encoded with a 3D video codec that includes an inter-view prediction algorithm and an algorithm using the correlation between the general image and the depth map, and thus may be encoded by the enhancement layer encoder 620 (View 1/View 2 encoder).
  • Since the depth map may be encoded using the already encoded information of the general image, it may also be encoded by the enhancement layer encoder 620. Therefore, a more complicated encoder is required when encoding the images of View 1 and View 2 than when encoding the base-layer View 0, and a more complex encoder is required when encoding depth maps than when encoding normal images.
  • a merge or merge motion method is used as one of encoding methods of motion information used for inter prediction during image encoding / decoding.
  • the enhancement layer uses an improved merge motion method by modifying the merge motion method in the base layer.
  • FIG. 7 is a diagram illustrating a merge motion method used in an HEVC-based 3D video codec (3D-HEVC).
  • As shown in FIG. 7, a merge motion configuration method 710 for View 0 and a merge motion configuration method 720 for the other remaining views are performed separately from each other.
  • The 3D-HEVC codec 700 selects one of the merge motion configuration method 710 for View 0 and the merge motion configuration method 720 for the other views (View 1 and View 2), based on information on whether the input image is a normal image or a depth map image (Texture/Depth information) and the view information (ViewID information) of the input image.
  • the 3D-HEVC 700 may output a merge motion candidate list for the current PU using the selected merge motion configuration method.
  • the current PU refers to a current block in which prediction within the current image is performed to encode / decode the current image.
  • the general image for view 0 constructs a merge motion candidate list using a merge motion configuration method for the base layer for backward compatibility.
  • the general image and the depth map of the view 1 and the view 2 constitute a merge motion candidate list using the merge motion configuration method for the enhancement layer.
  • The merge motion construction method for the enhancement layer is obtained by adding new candidates to, or modifying the candidate list order of, the merge motion construction method for the base layer. That is, as shown in FIG. 7, the merge motion configuration method for the enhancement layer (the other views (View 1 and View 2) and the depth maps) includes the merge motion configuration method for the base layer.
  • the merge motion configuration method for the enhancement layer is more complicated than the merge motion configuration for the base layer, and the computational complexity is large.
  • FIG. 8 shows an example of neighboring blocks used to construct a merge motion list for a current block.
  • the merge motion method refers to a method of using motion information of a neighboring block of the current block as motion information (for example, a motion vector, a reference picture list, a reference picture index, etc.) of the current block (current PU).
  • As shown in FIG. 8, a merge motion candidate list for the current block is constructed based on the motion information of neighboring blocks.
  • The neighboring blocks include the blocks A, B, C, D, and E, which are spatially adjacent to the current block, and a co-located candidate block (H or M) that corresponds to the current block temporally.
  • The co-located candidate block refers to a block at the corresponding position in a co-located picture that corresponds in time to the current picture containing the current block. If the H block in the co-located picture is available, the H block is determined as the co-located candidate block; if the H block is not available, the M block in the co-located picture is determined as the co-located candidate block.
  • It is determined whether the motion information of the neighboring blocks A, B, C, D, and E and of the co-located candidate block (H or M) can be used as merge motion candidates for constructing the merge motion candidate list of the current block, and the motion information of each available block is determined as a merge motion candidate.
  • the merge motion candidate may be added to the merge motion candidate list.
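  • As an illustration of the list construction just described, the following Python sketch assembles a basic merge motion candidate list from the spatially adjacent blocks A to E and the co-located blocks H/M of FIG. 8. The Candidate structure, the dictionary of neighbors, and the maximum of five candidates are simplified assumptions for illustration; this is not the normative HEVC/3D-HEVC derivation process.

```python
# A minimal sketch of basic merge motion candidate list construction.
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Candidate:
    mv: tuple        # motion vector (x, y)
    ref_idx: int     # reference picture index

def build_basic_merge_list(spatial_neighbors: Dict[str, Optional[Candidate]],
                           col_h: Optional[Candidate],
                           col_m: Optional[Candidate],
                           max_candidates: int = 5) -> List[Candidate]:
    """spatial_neighbors maps the positions 'A'..'E' of FIG. 8 to a
    Candidate, or to None when the corresponding block is unavailable."""
    merge_list: List[Candidate] = []

    # Spatial candidates: check availability in a fixed order (here A..E).
    for pos in ('A', 'B', 'C', 'D', 'E'):
        cand = spatial_neighbors.get(pos)
        if cand is not None and cand not in merge_list:
            merge_list.append(cand)

    # Temporal candidate: use block H if available, otherwise block M of the
    # co-located picture, as described for FIG. 8.
    temporal = col_h if col_h is not None else col_m
    if temporal is not None and temporal not in merge_list:
        merge_list.append(temporal)

    return merge_list[:max_candidates]
```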
  • FIG. 9 is a diagram illustrating an example of a hardware implementation of a method of constructing a merge motion candidate list.
  • As shown in FIG. 9, the input parameters for constructing the merge motion list used for the general image of View 0 and the input parameters for constructing the merge motion list used for the general images and depth maps of View 1 and View 2 are the same. The only difference is that additional input parameters ("Additional Motion F" and "Additional Motion G") are added for constructing the merge motion list used for the general images and depth maps of View 1 and View 2.
  • However, the parts that construct the merge motion candidate list change because of the added motion information. That is, in order to include the added motion information in the merge motion candidate list (so as to increase the encoding efficiency), a new merge motion list construction module must be implemented for the general images and depth maps of View 1 and View 2, which can increase the implementation complexity of the hardware.
  • The present invention aims to reduce the implementation complexity and the computational cost of the encoding algorithm and the video codec for the enhancement layer (for example, the general images and the depth maps of View 1 and View 2).
  • To this end, the "merge motion candidate list construction" module for the base layer (the general image of View 0), which may already be implemented in the form of a hardware chip, is reused as it is, and only a module for the enhancement layer (for example, View 1 and View 2) is added, so the complexity of the hardware implementation can be reduced.
  • In addition, when a consumer who has an encoder/decoder (specifically, a "merge motion candidate list construction" module) for the base layer used for two-dimensional video service wants to receive three-dimensional video service, attaching only an additional module (specifically, a "merge motion candidate list construction" module for the enhancement layer) enables easy 3D video service.
  • FIG. 10 is a diagram schematically illustrating a 3D video codec according to an embodiment of the present invention.
  • the 3D video codec 1000 receives and encodes different viewpoint images (eg, view 0, view 1, and view 2) as inputs.
  • one encoded bitstream may be output.
  • the images may include not only a texture view but also a depth view.
  • the images may include an image of an independent view that may be independently encoded regardless of another viewpoint, and an image of a dependent view that is encoded using an image of an independent viewpoint as a reference image.
  • view 0 may be an independent view
  • view 1 and view 2 may be dependent views encoded with reference to view 0.
  • the 3D video codec 1000 may include an encoder 1010 capable of encoding a general image and a depth map for all views (eg, view 0, view 1, and view 2).
  • The encoder 1010 capable of encoding a general image and a depth map for all viewpoints may be an MPEG-1, MPEG-2, MPEG-4 Part 2 Visual, H.264/AVC, VC-1, AVS, KTA, or HEVC (H.265/HEVC) encoder, or the like.
  • the 3D video codec 1000 may include a partial encoder 1020 to increase encoding efficiency with respect to the general image and the depth map for the dependent view instead of the independent view.
  • the partial encoder 1020 may encode a general image and a depth map of views 1 and 2, or may encode depth maps of all views.
  • the 3D video codec 1000 may include a multiplexer 1030 for multiplexing the images encoded by the encoders 1010 and 1020.
  • The multiplexer 1030 may multiplex the bitstream of the general image of View 0 and the bitstreams of the general images and depth maps of the other views (View 1 and View 2) to output a single bitstream.
  • In the 3D video codec 1000, the module 1010 used for encoding the general image of the independent view (e.g., View 0), which provides backward compatibility, may be applied as it is to the general images and the depth maps of the dependent views (e.g., View 1 and View 2), thereby reducing implementation complexity.
  • Furthermore, the 3D video codec 1000 according to an exemplary embodiment of the present invention may improve coding efficiency by additionally applying the partial encoder 1020 to the general images and depth maps of the dependent views (e.g., View 1 and View 2).
  • the 3D video codec described with reference to FIG. 10 may be applied to the entire encoding / decoding process, and may be applied to each step of encoding / decoding.
  • FIG. 11 is a conceptual diagram schematically illustrating a merge motion method according to an embodiment of the present invention.
  • the merge motion method illustrated in FIG. 11 may be performed by an HEVC-based 3D video codec (3D-HEVC), and 3D-HEVC may be implemented based on the 3D video codec of FIG. 10 described above.
  • In the merge motion method, a spatial merge motion candidate and a temporal merge motion candidate are derived for the current PU, additional merge motion candidates may be further derived based on information about the current PU (e.g., the view information of the current PU, the image type information of the current PU, etc.), and a merge candidate list for the current PU may be constructed based on the derived merge motion candidates.
  • Referring to FIG. 11, the inputs are the current PU information (or current image information), information on whether the current PU image is a normal image or a depth map image (Texture/Depth information), and the view information of the current PU (ViewID information), and the output is a merge motion candidate list for the current PU.
  • a step of “constructing a basic merge motion list” 1110 is basically performed on the current PU to output a “basic merge motion candidate list”.
  • the “basic merge motion list construction” 1110 may use the merge motion candidate list construction method in HEVC as it is.
  • In addition, an "additional merge motion list construction" step 1120 may be additionally performed.
  • In this step, the input is the "basic merge motion candidate list" output by the "basic merge motion list construction" step 1110, and the output is an extended merge motion candidate list.
  • the “additional merge motion list construction” step 1120 may be performed on the general image and the depth maps for the dependent viewpoints (eg, View 1 and View 2).
  • FIG. 12 is a diagram illustrating an example of hardware implementation of the merge motion method of FIG. 11 according to an embodiment of the present invention.
  • Referring to FIG. 12, an apparatus that performs the merge motion method according to an embodiment of the present invention (hereinafter referred to as a merge motion apparatus) 1200 may include a basic merge motion list construction module 1210 and an additional merge motion list construction module 1220.
  • Inputs of the merge motion device 1200 are spatial merge motion candidates, temporal merge motion candidates, and additional merge motion candidates.
  • the output of the merge motion device 1200 is a basic merge motion candidate list in the case of a normal image for an independent view, and an extended merge motion candidate list in the case of a normal image and a depth map for a dependent view.
  • the independent view refers to a view that can be encoded independently regardless of other views, and may be a base view.
  • the dependent view refers to a view that is encoded by referring to an independent view.
  • the independent view may be a view 0, and the dependent view is described as including a view 1 and a view 2.
  • the basic merge motion list construction module 1210 may construct a basic merge motion candidate list by deriving a spatial merge motion candidate and a temporal merge motion candidate for the current PU.
  • The spatial merge motion candidates may be derived from the neighboring blocks A, B, C, D, and E that are spatially adjacent to the current PU, as shown in FIG. 8.
  • The basic merge motion list construction module 1210 determines whether the neighboring blocks A, B, C, D, and E are available and determines the motion information of the available neighboring blocks as spatial merge motion candidates for the current PU. The availability of the neighboring blocks (A, B, C, D, E) may be checked in a predetermined order or in an arbitrary order; for example, it may proceed in the order A, B, C, D, E.
  • The temporal merge motion candidate may be derived from a co-located block (col block) (H or M) in a co-located picture (col picture) with respect to the current PU.
  • the block H of the same position may be a PU block located at the bottom right based on the block X ′ of the position corresponding to the current PU in the picture of the same position.
  • the block M of the same position may be a PU block located at the center of the X ′ block based on the block X ′ of the position corresponding to the current PU in the picture of the same position.
  • the basic merge motion list construction module 1210 may determine whether the blocks H and M at the same location are available and determine the motion information of the blocks at the same location as the temporal merge motion candidate for the current PU. In this case, the order of determining availability of the blocks H and M in the same position may be in the order of H blocks, M blocks, or vice versa.
  • The additional merge motion list construction module 1220 may derive additional merge motion candidates and construct an extended merge motion candidate list based on information on whether the current PU image is a normal image or a depth map image (Texture/Depth information) and the view information (ViewID information) of the current PU.
  • In other words, the additional merge motion list construction module 1220 may additionally perform the process of constructing the merge motion candidate list for the general images and the depth maps of the dependent views for the current PU.
  • the inputs of the additional merge motion list constructing module 1220 are the basic merge motion candidate list configured by the basic merge motion list constructing module 1210 and the additional merge motion candidates F and G.
  • the output of the additional merge motion list construction module 1220 is an extended merge motion candidate list.
  • Accordingly, the merge motion apparatus can reduce the implementation complexity of the hardware by implementing only an additional partial module without implementing an entirely new module. That is, the "merge motion candidate list construction" module for the base layer (for example, the general image of View 0), which may already be implemented in the form of a hardware chip, is reused as it is, and only the module for the enhancement layer (for example, View 1 and View 2) is added, so the complexity of the hardware implementation can be reduced.
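  • The two-module structure of FIG. 12 can be sketched as follows: the basic merge motion list construction module is reused unchanged, and the additional merge motion list construction module only post-processes its output for depth maps and dependent views. The function build_basic_merge_list() comes from the earlier sketch; the flags and the simple "append if new" behaviour of the additional step are assumptions, with the detailed insertion rules following in FIGS. 14 and 15.

```python
# A minimal sketch of the two-stage construction of FIG. 12.
def build_merge_list(spatial_neighbors, col_h, col_m,
                     additional_candidates, is_depth, is_dependent_view):
    # Step 1: base-layer module, reused as it is (see the earlier sketch).
    merge_list = build_basic_merge_list(spatial_neighbors, col_h, col_m)

    # Step 2: enhancement-layer module, run only for depth maps or dependent
    # views (View 1 / View 2); independent-view texture PUs keep the basic
    # list untouched.
    if is_depth or is_dependent_view:
        for cand in additional_candidates:
            if cand is not None and cand not in merge_list:
                merge_list.append(cand)
    return merge_list
```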
  • FIG. 13 is a conceptual diagram illustrating a method of constructing a merge motion candidate list of FIGS. 11 and 12 according to an embodiment of the present invention.
  • the method of constructing the merge motion candidate list of FIG. 13 may be performed by the 3D video codec of FIG. 10 or the merge motion apparatus of FIG. 12.
  • Referring to FIG. 13, the inputs are the current PU information, information on whether the current PU image is a normal image or a depth map image (Texture/Depth information), and the view information (ViewID information) of the current PU, and the output is a merge motion candidate list for the current PU.
  • a basic merge motion candidate list 1310 is configured for the current PU.
  • the basic merge motion candidate list may use the merge motion candidate list construction method in HEVC as it is, and may be configured based on the spatial merge motion candidate and the temporal merge motion candidate for the current PU as described above.
  • an extended merge motion candidate list 1320 is configured based on information on whether the current PU image is a normal image or a depth map image (Texture / Depth information) and view information on the current PU image (ViewID information).
  • the extended merge motion candidate list may be configured for general image and depth maps for dependent viewpoints (eg, view 1 and view 2).
  • In this case, additional merge motion candidates may be added to construct the extended merge motion candidate list.
  • When the current PU is a general image of the independent view, the basic merge motion candidate list may be output; otherwise, when the current PU is a general image or a depth map of a dependent view (e.g., View 1 and View 2), the extended merge motion candidate list may be output.
  • the number of candidates in the extended merge motion candidate list may be larger than the number of candidates in the basic merge motion candidate list.
  • FIG. 14 illustrates a method of constructing an extended merge motion candidate list according to an embodiment of the present invention.
  • As shown in FIG. 14, an additional merge motion candidate (e.g., motion information F), which is additional motion information, may be inserted into the item corresponding to the first index (or an arbitrary position) of the extended merge motion candidate list.
  • In this case, the additional merge motion candidate (e.g., motion information F) and the first merge motion candidate (e.g., motion information A) of the basic merge motion candidate list are compared with each other, and if the two candidates are not the same, the additional merge motion candidate (e.g., motion information F) may be inserted into the first item of the extended merge motion candidate list, and vice versa. For example, when comparing the motion information of the two candidates (e.g., motion information F and A), if the difference between the motion vectors of the two candidates is within an arbitrary threshold, the additional merge motion candidate (e.g., motion information F) may not be inserted into the extended merge motion candidate list, and vice versa. Alternatively, when the reference pictures of the two candidates are not the same, the additional merge motion candidate (e.g., motion information F) may be inserted into the extended merge motion candidate list, and vice versa.
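  • The comparison described above can be sketched as a small helper that treats two candidates as "the same" when they use the same reference picture and their motion vectors differ by no more than a threshold. The threshold value and the Candidate fields are illustrative assumptions.

```python
# A minimal sketch of the pruning comparison used before inserting an
# additional merge motion candidate (e.g., motion information F).
def candidates_similar(a, b, mv_threshold: int = 1) -> bool:
    if a is None or b is None:
        return False
    if a.ref_idx != b.ref_idx:            # different reference pictures
        return False
    dx = abs(a.mv[0] - b.mv[0])
    dy = abs(a.mv[1] - b.mv[1])
    return dx <= mv_threshold and dy <= mv_threshold
```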
  • FIG. 15 is a diagram for describing a method of constructing an extended merge motion candidate list according to another embodiment of the present invention.
  • As shown in FIG. 15, an additional merge motion candidate (e.g., motion information F), which is additional motion information, may be inserted into the first item of the extended merge motion candidate list, and another additional merge motion candidate (e.g., motion information G), which is also additional motion information, may be inserted into the third item (or an item corresponding to an arbitrary position) of the extended merge motion candidate list.
  • In this case, the corresponding original items (the first item and the third item) of the basic merge motion candidate list and the additional merge motion candidates are compared with each other, and if the two compared candidates (e.g., motion information A and F, or motion information C and G) are not the same, the additional merge motion candidates may be inserted into the first and third items of the extended merge motion candidate list, and vice versa. As in FIG. 14, when the compared motion information differs, the additional merge motion candidates may be inserted into the extended merge motion candidate list, and vice versa.
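  • Putting the FIG. 14 and FIG. 15 embodiments together, the sketch below inserts additional motion information F as the first item and G as the third item of the extended list, each only when it is not "the same" as the basic candidate it is compared against. The fixed positions 0 and 2, the 6-entry maximum, and the candidates_similar() helper from the previous sketch are assumptions for illustration.

```python
# A minimal sketch of extended merge motion candidate list construction.
def build_extended_merge_list(basic_list, additional_candidates,
                              max_candidates: int = 6):
    cand_f = additional_candidates.get('F')
    cand_g = additional_candidates.get('G')
    first_basic = basic_list[0] if len(basic_list) > 0 else None
    third_basic = basic_list[2] if len(basic_list) > 2 else None

    extended = list(basic_list)
    # FIG. 14: insert F as the first item when it differs from the first
    # basic candidate (motion information A).
    if cand_f is not None and not candidates_similar(cand_f, first_basic):
        extended.insert(0, cand_f)
    # FIG. 15: additionally insert G as the third item when it differs from
    # the basic candidate originally at that position (motion information C).
    if cand_g is not None and not candidates_similar(cand_g, third_basic):
        extended.insert(2, cand_g)
    return extended[:max_candidates]
```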
  • Meanwhile, the basic encoder (or basic module) may be applied not only to the general image of View 0 but also to the general images and depth maps of View 1 and View 2.
  • In this case, only the basic encoder may be applied to small blocks of high complexity (e.g., 8x8 units or an arbitrary block size).
  • That is, data of the small block size or below may be encoded using only the basic encoder (or basic module), and data larger than the small block size may be encoded using the basic encoder (or basic module) together with the partial encoder (or extension module).
  • Here, the basic encoder (or basic module) may perform the "basic merge motion list construction" step of FIGS. 11 and 13, and the partial encoder (or extension module) may perform the "additional merge motion list construction" step of FIGS. 11 and 13.
  • FIG. 16 is a flowchart schematically illustrating a method of constructing a merge motion candidate list according to an embodiment of the present invention.
  • The method of FIG. 16 may be performed by the apparatus shown in FIGS. 10 and 12 described above, or may be applied to 3D-HEVC.
  • For convenience of explanation, the method of FIG. 16 is described as being performed by the merge motion apparatus.
  • the merge motion apparatus adds basic merge motion candidates to the merge motion candidate list for the current PU (S1600).
  • the basic merge motion candidates may include a spatial merge motion candidate and a temporal merge motion candidate for the current PU as described above, and may be candidates for a general image of an independent view.
  • Next, the merge motion apparatus determines whether the current picture including the current PU is a depth map or belongs to a dependent view (S1610).
  • If so, the merge motion apparatus adds extended merge motion candidates to the merge motion candidate list (S1620).
  • the extended merge motion candidates may be candidates for a depth map or an image (normal image and depth map) of a dependent view.
  • Tables 1 to 6 are based on the draft text of the Joint Collaborative Team on 3D Video Coding Extension Development (JCT-3V) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, in which 3D video coding is currently being standardized jointly by the Moving Picture Experts Group (MPEG) and the Video Coding Experts Group (VCEG).
  • Table 1 shows an example of the input and output of the existing process that includes the addition of the extended merge motion candidates,
  • and Table 2 shows an example of the input and output of the process that includes the addition of the extended merge motion candidates according to an embodiment of the present invention.
  • In Table 2, a merge motion candidate list (mergeCandList) and flags (availableFlagN) indicating whether the basic merge motion candidates have been added are used as additional inputs.
  • Here, N is one of A1, B1, B0, A0, and B2, denoting the candidates at the left, above, above-right, bottom-left, and above-left positions, respectively.
  • In the merge motion candidate list, the basic merge motion candidates are stored in order according to the conventional method; for example, the left candidate, the above candidate, the above-right candidate, the bottom-left candidate, the above-left candidate, the temporal (prediction) candidate, combined bi-predictive candidates, and zero-motion candidates may be stored in that order.
  • the output is a merge motion candidate list in which additional work on extended merge motion candidates is completed.
  • Table 3 shows an existing extended merge motion candidate addition process
  • Table 4 shows an extended merge motion candidate addition process according to an embodiment of the present invention.
  • Table 4 operates on a list to which the basic merge motion candidates have already been added, and thus performs only the processing for the extended merge motion candidates. Therefore, the processing for the merge motion candidates used in HEVC, which is duplicated in the existing 3D-HEVC, can be omitted so that it is not implemented twice.
  • FIGS. 17A, 17B, 17C, 17D, 17E, and 17F are flowcharts illustrating a method of adding extended merge motion candidates to a merge motion candidate list according to an embodiment of the present invention.
  • FIGS. 17A to 17F are constructed based on the process of adding the extended merge motion candidates of Table 4 described above.
  • the method of FIGS. 17A to 17F may be performed by the apparatus shown in FIGS. 10 and 12 described above, or may be applied to 3D-HEVC.
  • The flag iv_mv_pred_flag[nuh_layer_id] indicates whether the current PU can attempt inter-view prediction. If the flag iv_mv_pred_flag[nuh_layer_id] is 1, an inter-view merge candidate (IvMC), an inter-view disparity merge candidate (IvDC), a shifted inter-view merge candidate (IvMCShift), and a shifted inter-view disparity merge candidate (IvDCShift) may be derived; whether each is available is stored in the corresponding flag, and, if available, its motion information is derived.
  • The flag view_synthesis_pred_flag[nuh_layer_id] indicates whether the current PU can attempt view synthesis prediction. If the flag view_synthesis_pred_flag[nuh_layer_id] is 1, whether the view synthesis merge candidate is available is stored in the flag availableFlagVSP, and, if available, its motion information is derived.
  • The flag mpi_flag[nuh_layer_id] indicates whether the current PU belongs to a depth map and can attempt motion prediction from the corresponding texture block. If the flag mpi_flag[nuh_layer_id] is 1, whether the texture merge candidate is available is stored in the flag availableFlagT, and, if available, its motion information is derived.
  • Next, the merge motion candidate list consisting only of the basic merge motion candidates and the VSP flag (mergeCandIsVspFlag) for each candidate are reorganized as follows.
  • numMergeCand is the total number of merge motion candidates
  • numA1B1B0 is the number of candidates corresponding to the left, above, and above-right positions among the basic merge motion candidates,
  • and numA0B2 is the number of candidates corresponding to the bottom-left and above-left positions among the basic merge motion candidates.
  • Here, VSP stands for view synthesis prediction.
  • c. It is determined whether the above candidate B1 is available. If the above candidate is available, numA1B1B0 is increased by one. In addition, whether the above candidate used VSP is stored as a flag.
  • d. It is determined whether the above-right candidate B0 is available. If the above-right candidate is available, numA1B1B0 is increased by one. In addition, whether the above-right candidate used VSP is stored as a flag.
  • f. It is determined whether the above-left candidate B2 is available. If the above-left candidate is available, numA0B2 is increased by one. In addition, whether the above-left candidate used VSP is stored as a flag.
  • pruneFlagA1 is set to 1.
  • pruneFlagB1 is set to one.
  • If pruneFlagA1 and pruneFlagB1 are both 0, a new space is created at the numMergeCand position in the list. Here, creating a new space means moving all values from the numMergeCand position in the list one space to the right.
  • pruneFlagA1 is set to 1.
  • pruneFlagB1 is set to one.
  • pruneFlagT is set to 1.
  • If pruneFlagA1, pruneFlagB1, and pruneFlagT are all 0, a new space is created at the numMergeCand position in the list, and addIvMC is set to 1.
  • If addIvMC is 1, the first value in the list is set as the texture candidate, and numMergeCand is increased by 1.
  • Next, the motion information of the disparity merge candidate (IvDC) is compared with the available left and above candidates. If the motion information of the IvDC differs from that of the left candidate and the above candidate, a new space is created at the numMergeCand position of the merge list and the disparity merge candidate (IvDC) is added.
  • If numMergeCand is less than 5 + the number of additional merge candidates (NumExtraMergeCand), the view synthesis merge candidate is added to the list, and numMergeCand3DV and numMergeCand are each increased by 1.
  • If the inter-view merge candidate (IvMC) is available, the inter-view merge candidate is compared with the shifted inter-view merge candidate (IvMCShift); if they are different, a new space is created at the numMergeCand position in the list and the shifted inter-view merge candidate is added.
  • When comparing the inter-view merge candidate and the disparity merge candidate with the existing candidates in the list, the complexity may be reduced by comparing them with only some of the existing candidates rather than with all candidates in the list; for example, only the left candidate and the above candidate may be used for the comparison.
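  • The complexity-reduction idea above can be sketched as a pruning check that compares a new inter-view candidate only against the left and above candidates instead of the whole list. The candidate objects and the candidates_similar() helper are the illustrative structures used in the earlier sketches.

```python
# A minimal sketch of limited pruning for IvMC / IvDC insertion.
def should_add_interview_candidate(new_cand, left_cand, above_cand) -> bool:
    if new_cand is None:
        return False
    # Compare only against the left (A1) and above (B1) candidates, not
    # against every candidate already in the list.
    if candidates_similar(new_cand, left_cand):
        return False
    if candidates_similar(new_cand, above_cand):
        return False
    return True
```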
  • Table 5 shows an example of a process of deriving an existing mixed bidirectional prediction candidate
  • Table 6 shows an example of reusing a process of deriving a HEVC mixed bidirectional prediction candidate in 3D-HEVC according to an embodiment of the present invention.
  • Table 7 shows the results of the coding efficiency and coding time comparison between the existing method (method of Table 5) and the method proposed in the present invention (method of Table 6).
  • The comparison result shows that the proposed method has a bitrate increase of less than 0.1% compared to the conventional method, that is, nearly the same coding efficiency.
  • The above-described methods may be used with High Efficiency Video Coding (HEVC), which is currently being jointly standardized by the Moving Picture Experts Group (MPEG) and the Video Coding Experts Group (VCEG). Therefore, the application range of the above-described methods may vary according to the block size, the coding unit (CU) depth, or the transform unit (TU) depth, as shown in the example of Table 8.
  • The variable that determines the application range (i.e., the size or depth information) may be used as follows:
  • (Method A) applying only to depths greater than or equal to a given depth,
  • (Method B) applying only to depths less than or equal to a given depth, or
  • (Method C) applying only to the given depth.
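  • A sketch of Methods A, B, and C is given below: whether the proposed processing is applied to a CU is decided from its depth and a signalled depth value. The method selector argument is an assumption made for illustration; the text only describes the three behaviours.

```python
# A minimal sketch of depth-dependent application (Methods A, B, C above).
def reuse_applies(cu_depth: int, signalled_depth: int, method: str) -> bool:
    if method == 'A':     # apply only at the given depth or deeper
        return cu_depth >= signalled_depth
    if method == 'B':     # apply only at the given depth or shallower
        return cu_depth <= signalled_depth
    if method == 'C':     # apply only at exactly the given depth
        return cu_depth == signalled_depth
    raise ValueError('unknown method')
```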
  • The case in which the methods of the present invention are not applied may be indicated using an arbitrary flag, or may be expressed by signaling, as the CU depth value indicating the application range, a value one greater than the maximum CU depth value.
  • whether the above-described methods of the present invention are applied may be included in the bitstream and signaled.
  • the information on whether the above-described methods of the present invention are applied may be signaled by being included in a syntax of a sequence parameter set (SPS), a picture parameter set (PPS), and a slice header.
  • Table 9 shows an example of a method of signaling using SPS whether or not the above-described methods of the present invention are applied.
  • Table 10 shows an example of a method of signaling using PPS whether or not the above-described methods of the present invention are applied.
  • Table 11 shows an example of a method of signaling using a slice header whether or not the above-described methods of the present invention are applied.
  • Table 12 shows another example of a method of signaling using a slice header whether or not the above-described methods of the present invention are applied.
  • “reuse_enabled_flag” indicates whether or not the above-described methods of the present invention are applied.
  • “reuse_enabled_flag” becomes “1” when the above-described methods of the present invention are applied, and “reuse_enabled_flag” becomes “0” when the above-described methods of the present invention are not applied. The reverse is also possible.
  • "reuse_disabled_info" is a syntax element that is signaled when the above-described methods of the present invention are applied (or when "reuse_enabled_flag" is true), and it indicates whether the above-described methods of the present invention are applied according to the CU depth (or the CU size, the size of a sub-block of the CU, the macroblock size, or the block size).
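  • A rough sketch of how the two syntax elements above could be parsed is shown below. The BitReader interface (read_flag/read_ue) is a hypothetical helper, not a real library API; only the names "reuse_enabled_flag" and "reuse_disabled_info" come from the text, and the exact descriptors would follow the tables (Tables 9 to 12).

```python
# A minimal sketch of parsing the signalling described above (SPS, PPS, or
# slice header). The reader object is a hypothetical bitstream reader.
def parse_reuse_syntax(reader):
    syntax = {}
    syntax['reuse_enabled_flag'] = reader.read_flag()        # u(1)
    if syntax['reuse_enabled_flag']:
        # Depth (or size) information controlling where the methods apply.
        syntax['reuse_disabled_info'] = reader.read_ue()      # ue(v)
    return syntax
```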
  • Depending on the embodiment, the methods of the present specification may be applied only to P pictures (or frames), or may be applied only to B pictures (or frames).
  • the above-described methods of the present invention can be applied not only to the 3D video codec but also to the scalable video codec.
  • the encoding / decoding module used in the base layer of the scalable video codec may be applied to the enhancement layer as it is, and then the enhancement layer may be encoded / decoded using the partial encoding / decoding module.
  • For example, the basic merge motion list module used in the base layer of the scalable video codec may be applied to the enhancement layer to construct the basic merge motion candidate list, and then the additional merge motion list module may be used additionally so that an "extended merge motion candidate list" for the enhancement layer can be constructed.
  • FIG. 18 is a flowchart schematically illustrating a method of constructing a merge motion candidate list in video encoding / decoding including a plurality of viewpoints according to an embodiment of the present invention.
  • the method of FIG. 18 may be performed by the apparatus shown in FIGS. 10 and 12 described above, or may be applied to 3D-HEVC. For convenience of explanation, the method of FIG. 18 is described as being performed by the merge motion device.
  • the merge motion apparatus derives a basic merge motion candidate for the current PU and constructs a merge motion candidate list based on the derived basic merge motion candidate (S1800).
  • the basic merge motion candidate may include a spatial merge motion candidate and a temporal merge motion candidate for the current PU.
  • Specifically, the merge motion apparatus may derive a spatial merge motion candidate from at least one of a left block, an above block, an above-right block, a bottom-left block, and an above-left block that are located spatially adjacent to the current PU.
  • The merge motion apparatus may also derive a temporal merge motion candidate from a co-located block (e.g., the bottom-right block or the center block) in a co-located picture for the current PU.
  • the merge motion apparatus may configure the merge motion candidate list based on availability of the spatial merge motion candidate and the temporal merge motion candidate.
  • the merge motion device derives an extended merge motion candidate for the current PU (S1810).
  • the extended merge motion candidate refers to a merge motion candidate used for prediction of the dependent view image or the depth map image.
  • the extended merge motion candidate may include at least one of an inter-view merge candidate (IvMC), a view synthesis prediction merge candidate, and a texture merge candidate.
  • For example, an inter-view merge candidate (IvMC), an inter-view disparity merge candidate (IvDC), a shifted inter-view merge candidate (IvMCShift), and a shifted inter-view disparity merge candidate (IvDCShift) may be derived depending on whether the current PU performs inter-view prediction.
  • a view synthesis merging candidate may be derived according to whether the current PU performs view synthesis prediction.
  • a texture merge candidate may be derived according to whether the depth map of the current PU performs motion prediction from the texture block.
  • the merge motion apparatus may finally reconstruct the merge motion candidate list by adding the derived extended merge motion candidate to the merge motion candidate list (S1820).
  • When the extended merge motion candidate is not the same as a candidate already in the merge motion candidate list, the merge motion apparatus adds the extended merge motion candidate to the merge motion candidate list.
  • In this case, the extended merge motion candidate may be added at an arbitrary position in the merge motion candidate list (e.g., as the first item of the list).
  • For example, when the current PU belongs to a depth map, a texture merge candidate may be derived,
  • and the texture merge candidate may be added as the first item of the merge motion list.
  • Likewise, when the current PU belongs to a dependent view, an inter-view merge candidate may be derived,
  • and the inter-view merge candidate may be added as the first item of the merge motion list.
  • In addition, when the current PU performs view synthesis prediction, a view synthesis merge candidate may be derived.
  • When the sum of the number of extended merge motion candidates already added to the merge motion candidate list and the number of basic merge motion candidates is smaller than the maximum number of candidates of the merge motion candidate list, the derived view synthesis merge candidate may be added to the merge motion candidate list.
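  • Steps S1800 to S1820 of FIG. 18 can be sketched as follows: the basic list is built first, extended candidates are then derived according to the PU type, and each extended candidate is added only when it is not already in the list and the maximum list size is not exceeded. The candidate names, the insertion positions, the 6-entry maximum, and the candidates_similar() helper from the earlier sketch are assumptions for illustration.

```python
# A minimal sketch of the flow of FIG. 18 (S1800 to S1820).
def construct_merge_list_fig18(basic_list, is_depth, is_dependent_view,
                               texture_cand=None, ivmc_cand=None,
                               vsp_cand=None, max_candidates=6):
    merge_list = list(basic_list)                          # S1800

    def try_add(cand, position=None):
        if cand is None or any(candidates_similar(cand, c) for c in merge_list):
            return
        if len(merge_list) >= max_candidates:
            return
        merge_list.insert(len(merge_list) if position is None else position, cand)

    if is_depth:                                           # S1810 (depth map PU)
        try_add(texture_cand, position=0)                  # texture candidate first
    elif is_dependent_view:                                # S1810 (dependent view PU)
        try_add(ivmc_cand, position=0)                     # inter-view candidate first
    try_add(vsp_cand)                                      # S1820: VSP if room remains

    return merge_list[:max_candidates]                     # S1820: final list
```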
  • the motion information on the current PU may be obtained based on the merge motion candidate list described above, and the prediction sample value of the current PU may be obtained by performing prediction on the current PU using the motion information.
  • the encoder may obtain a residual sample value of the current PU based on the predicted sample value of the current PU, transform, quantize, and entropy encode the residual sample value and transmit the same to the decoder.
  • the decoder may obtain a reconstructed sample value of the current PU based on the predicted sample value of the current PU and the residual sample value of the current PU transmitted by the encoder.
  • The methods above are described based on flowcharts as a series of steps or blocks, but the present invention is not limited to the order of the steps, and certain steps may occur in a different order from, or simultaneously with, other steps described above. Also, one of ordinary skill in the art will understand that the steps shown in the flowcharts are not exclusive, that other steps may be included, or that one or more steps in the flowcharts may be deleted without affecting the scope of the present invention.
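
A minimal sketch of the basic merge motion candidate list construction described above (spatial candidates from the left, above, above-right, bottom-left, and above-left neighbours, followed by a temporal candidate from the co-located picture). The Candidate type, the availability checks, and the duplicate-motion pruning are illustrative assumptions and do not reproduce the reference software.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Candidate:
        name: str      # e.g. "left", "above", "above-right", "bottom-left", "above-left", "col"
        motion: tuple  # placeholder for (mv_x, mv_y, ref_idx)

    def build_basic_merge_list(spatial: List[Optional[Candidate]],
                               temporal: Optional[Candidate],
                               max_num_cand: int) -> List[Candidate]:
        """Collect available spatial neighbours first, then the temporal candidate
        from the co-located picture, skipping unavailable blocks and duplicate motion."""
        merge_list: List[Candidate] = []
        for cand in spatial:
            if cand is None:                                  # neighbour unavailable (e.g. intra, outside picture)
                continue
            if any(cand.motion == c.motion for c in merge_list):
                continue                                      # simple redundancy pruning
            merge_list.append(cand)
            if len(merge_list) == max_num_cand:
                return merge_list
        if temporal is not None and len(merge_list) < max_num_cand:
            merge_list.append(temporal)                       # co-located bottom-right or center block
        return merge_list

    # Example: only the left and above neighbours are available.
    basic = build_basic_merge_list(
        [Candidate("left", (1, 0, 0)), Candidate("above", (2, -1, 0)), None, None, None],
        Candidate("col", (0, 0, 1)),
        max_num_cand=5)
    print([c.name for c in basic])   # -> ['left', 'above', 'col']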
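
A similarly hedged sketch of how the extended merge motion candidates described above might be placed in the list: a texture merge candidate (for a depth-map PU) or an inter-view merge candidate (for a dependent-view PU) inserted as the first item, and a view synthesis prediction candidate appended only while the list has not reached its maximum size. The function name, argument names, and the choice between the texture and inter-view candidates are assumptions made for illustration only.

    from typing import Any, List, Optional

    def add_extended_candidates(merge_list: List[Any],
                                texture_cand: Optional[Any],
                                inter_view_cand: Optional[Any],
                                vsp_cand: Optional[Any],
                                max_num_cand: int) -> List[Any]:
        """Insert extended merge candidates for a depth-map or dependent-view PU."""
        extended = list(merge_list)                 # keep the basic list unchanged
        # A texture merge candidate (depth-map PU) or an inter-view merge candidate
        # (dependent-view PU) may be placed as the first item of the list.
        first = texture_cand if texture_cand is not None else inter_view_cand
        if first is not None:
            extended.insert(0, first)
        # The view synthesis prediction candidate is appended only if the number of
        # candidates already in the list is smaller than the allowed maximum.
        if vsp_cand is not None and len(extended) < max_num_cand:
            extended.append(vsp_cand)
        return extended[:max_num_cand]              # never exceed the maximum candidate count

    # Example: a dependent-view PU with an inter-view candidate and a VSP candidate.
    print(add_extended_candidates(["A1", "B1", "Col"], None, "IvMC", "VSP", max_num_cand=6))
    # -> ['IvMC', 'A1', 'B1', 'Col', 'VSP']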
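
Finally, a toy illustration of the sample-level relationship between prediction, residual, and reconstruction mentioned above; the use of plain integer lists in place of transform and quantization, and the clipping range, are simplifications.

    def reconstruct_samples(pred: list, resid: list, bit_depth: int = 8) -> list:
        """Decoder-side reconstruction: add the residual to the prediction and clip
        to the valid sample range (transform/quantization steps are omitted here)."""
        max_val = (1 << bit_depth) - 1
        return [min(max(p + r, 0), max_val) for p, r in zip(pred, resid)]

    # Example: a prediction row and its decoded residual row.
    print(reconstruct_samples([120, 130, 255], [3, -5, 4]))  # -> [123, 125, 255]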

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention relates to a video encoding/decoding method and apparatus involving a plurality of views. The video decoding method using the plurality of views comprises the steps of: deriving a basic merge motion candidate for a current prediction unit (PU) to configure a merge motion candidate list; deriving an extended merge motion candidate for the current PU when the current PU corresponds to a depth map or a dependent view; and adding the extended merge motion candidate to the merge motion candidate list.
PCT/KR2014/003517 2013-10-24 2014-04-22 Procédé et appareil de codage/décodage vidéo WO2015060508A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP14855443.9A EP3062518A4 (fr) 2013-10-24 2014-04-22 Procédé et appareil de codage/décodage vidéo
US14/903,117 US10080029B2 (en) 2013-10-24 2014-04-22 Video encoding/decoding method and apparatus
US16/103,042 US10412403B2 (en) 2013-10-24 2018-08-14 Video encoding/decoding method and apparatus

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
KR20130126852 2013-10-24
KR10-2013-0126852 2013-10-24
KR10-2013-0146600 2013-11-28
KR20130146600 2013-11-28
KR1020140048066A KR102227279B1 (ko) 2013-10-24 2014-04-22 비디오 부호화/복호화 방법 및 장치
KR10-2014-0048066 2014-04-22

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US14/903,117 A-371-Of-International US10080029B2 (en) 2013-10-24 2014-04-22 Video encoding/decoding method and apparatus
US16/103,042 Continuation US10412403B2 (en) 2013-10-24 2018-08-14 Video encoding/decoding method and apparatus

Publications (1)

Publication Number Publication Date
WO2015060508A1 true WO2015060508A1 (fr) 2015-04-30

Family

ID=52993076

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2014/003517 WO2015060508A1 (fr) 2013-10-24 2014-04-22 Procédé et appareil de codage/décodage vidéo

Country Status (1)

Country Link
WO (1) WO2015060508A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120080122A (ko) * 2011-01-06 2012-07-16 삼성전자주식회사 경쟁 기반의 다시점 비디오 부호화/복호화 장치 및 방법
WO2012171442A1 (fr) * 2011-06-15 2012-12-20 Mediatek Inc. Procédé et appareil de prédiction et de compensation de vecteurs de mouvement et de disparité pour codage vidéo 3d
JP2013034187A (ja) * 2011-06-30 2013-02-14 Jvc Kenwood Corp 画像復号装置、画像復号方法および画像復号プログラム
KR20130048122A (ko) * 2011-10-26 2013-05-09 경희대학교 산학협력단 움직임 후보 리스트 생성 방법 및 그를 이용한 부호화 장치
KR20130085382A (ko) * 2012-01-19 2013-07-29 한국전자통신연구원 영상 부호화/복호화 방법 및 장치


Similar Documents

Publication Publication Date Title
WO2017034089A1 (fr) Procédé de traitement d'image basé sur un mode d'inter-prédiction et appareil associé
WO2017090988A1 (fr) Procédé de codage/décodage vidéo à points de vue multiples
WO2020166897A1 (fr) Procédé et dispositif d'inter-prédiction sur la base d'un dmvr
WO2015002460A1 (fr) Procédé de codage et de décodage de vidéo comprenant une pluralité de couches
WO2020180155A1 (fr) Procédé et appareil de traitement de signal vidéo
WO2020004990A1 (fr) Procédé de traitement d'image sur la base d'un mode de prédiction inter et dispositif correspondant
WO2014058216A1 (fr) Procédé et appareil de décodage de données vidéo
WO2019194514A1 (fr) Procédé de traitement d'image fondé sur un mode de prédiction inter et dispositif associé
WO2020180129A1 (fr) Procédé et dispositif de traitement de signal vidéo destiné à une prédiction inter
WO2018105759A1 (fr) Procédé de codage/décodage d'image et appareil associé
WO2019216714A1 (fr) Procédé de traitement d'image fondé sur un mode de prédiction inter et appareil correspondant
WO2019235822A1 (fr) Procédé et dispositif de traitement de signal vidéo à l'aide de prédiction de mouvement affine
WO2021194307A1 (fr) Procédé et appareil de codage/décodage d'image basés sur une compensation de mouvement enveloppante, et support d'enregistrement stockant un train de bits
WO2016056779A1 (fr) Procédé et dispositif pour traiter un paramètre de caméra dans un codage de vidéo tridimensionnelle (3d)
WO2021006579A1 (fr) Procédé et dispositif de codage/décodage de vidéo permettant de dériver un indice de poids pour une prédiction bidirectionnelle de candidat de fusion et procédé de transmission de flux binaire
WO2020256454A1 (fr) Procédé de décodage d'image permettant de réaliser une inter-prédiction lorsque le mode de prédiction du bloc actuel ne peut finalement pas être sélectionné et dispositif associé
WO2014014276A1 (fr) Procédé de filtrage en boucle et appareil associé
WO2021194308A1 (fr) Procédé et dispositif de codage/décodage d'image basés sur une compensation de mouvement enveloppante, et support d'enregistrement stockant un flux binaire
WO2020009447A1 (fr) Procédé de traitement d'images sur la base d'un mode d'inter-prédiction et dispositif associé
WO2021141477A1 (fr) Procédé et appareil de codage/décodage d'image, et procédé de transmission de flux binaire à l'aide d'un ensemble de paramètres de séquence comprenant des informations concernant le nombre maximal de candidats à la fusion
WO2019216736A1 (fr) Procédé de traitement d'image fondé sur un mode de prédiction inter et appareil correspondant
WO2021112633A1 (fr) Procédé et appareil de codage/décodage d'image sur la base d'un en-tête d'image comprenant des informations relatives à une image co-localisée, et procédé de transmission de flux binaire
WO2020180153A1 (fr) Procédé et appareil de traitement de signal vidéo pour inter-prédiction
WO2021015512A1 (fr) Procédé et appareil de codage/décodage d'images utilisant une ibc, et procédé de transmission d'un flux binaire
WO2021049865A1 (fr) Procédé et dispositif de codage/décodage d'image pour réaliser un bdof, et procédé de transmission de flux binaire

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14855443

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 14903117

Country of ref document: US

REEP Request for entry into the european phase

Ref document number: 2014855443

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2014855443

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE