WO2020140915A1 - Video processing method and device - Google Patents

Video processing method and device

Info

Publication number
WO2020140915A1
WO2020140915A1 (international application PCT/CN2019/130869)
Authority
WO
WIPO (PCT)
Prior art keywords
block
motion vector
current block
candidate
type
Prior art date
Application number
PCT/CN2019/130869
Other languages
English (en)
French (fr)
Inventor
Zheng Xiaozhen (郑萧桢)
Meng Xuewei (孟学苇)
Wang Suhong (王苏红)
Ma Siwei (马思伟)
Wang Shanshe (王苫社)
Original Assignee
SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Peking University (北京大学)
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司) and Peking University (北京大学)
Priority to CN201980009160.3A priority Critical patent/CN111630860A/zh
Publication of WO2020140915A1 publication Critical patent/WO2020140915A1/zh

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/105: Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N 19/107: Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • H04N 19/109: Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
    • H04N 19/139: Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • H04N 19/146: Data rate or code amount at the encoder output
    • H04N 19/172: Adaptive coding where the coding unit is an image region, the region being a picture, frame or field
    • H04N 19/176: Adaptive coding where the coding unit is an image region, the region being a block, e.g. a macroblock
    • H04N 19/503: Predictive coding involving temporal prediction
    • H04N 19/513: Processing of motion vectors
    • H04N 19/52: Processing of motion vectors by predictive encoding
    • H04N 19/527: Global motion vector estimation
    • H04N 19/54: Motion estimation other than block-based, using feature points or meshes
    • H04N 19/577: Motion compensation with bidirectional frame interpolation, i.e. using B-pictures

Definitions

  • the present application relates to the field of video encoding and decoding, and more specifically, to a video processing method and device.
  • A prediction block refers to a basic unit used for prediction within a frame of an image. In some standards, the prediction block is also called a prediction unit (PU). In some video standards, each of the image blocks into which a frame is first divided is called a coding tree unit (CTU); each coding tree unit may contain one coding unit (CU) or be further divided into multiple coding units.
  • Prediction refers to finding image data similar to the prediction block, also referred to as the reference block of the prediction block. By encoding/compressing the difference between the prediction block and its reference block, redundant information in encoding/compression is reduced.
  • the difference between the prediction block and the reference block may be a residual obtained by subtracting corresponding pixel values of the prediction block and the reference block.
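The residual described above can be sketched as a pixel-wise subtraction (an illustrative sketch only; the block shapes and pixel values below are hypothetical, and real codecs operate on fixed block sizes with clipping and transforms applied afterwards):

```python
def residual(pred_block, ref_block):
    """Pixel-wise difference between a prediction block and its reference block."""
    return [[p - r for p, r in zip(prow, rrow)]
            for prow, rrow in zip(pred_block, ref_block)]

pred = [[10, 12], [14, 16]]
ref  = [[ 9, 12], [15, 16]]
print(residual(pred, ref))  # [[1, 0], [-1, 0]]
```

Only this residual (plus the motion information needed to locate the reference block) is encoded, which is where the compression gain comes from.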
  • the prediction includes intra prediction and inter prediction. Intra prediction refers to searching for the reference block of the prediction block in the frame where the prediction block is located, and inter prediction refers to searching for the reference block of the prediction block in a frame other than the frame where the prediction block is located.
  • a motion vector candidate list is constructed, and the current image block is predicted according to the candidate motion vector selected in the motion vector candidate list.
  • Both intra prediction and inter prediction have multiple modes, and correspondingly the motion vector candidate list also has multiple modes. This means that corresponding software and hardware resources are required to support the motion vector candidate list in each mode, which lowers resource utilization.
  • the present application provides a video processing method and device, which can improve resource utilization.
  • In a first aspect, a video processing method is provided, including: acquiring motion vectors of first-type candidate blocks of a current block according to a first-type candidate block acquisition order, and adding them to a motion vector candidate list in a first-type prediction mode, where the first-type prediction mode performs intra prediction on the current block based on motion information of coded blocks in the current frame, and the first acquired of the first-type candidate blocks is the neighboring block above the current block in the current frame; and predicting the current block according to the motion vector candidate list of the current block.
  • In a second aspect, a video processing apparatus is provided, including: a memory for storing code; and a processor for executing the code stored in the memory to perform the following operations: acquiring motion vectors of first-type candidate blocks of a current block according to a first-type candidate block acquisition order, and adding them to a motion vector candidate list in a first-type prediction mode, where the first-type prediction mode performs intra prediction on the current block based on motion information of coded blocks in the current frame, and the first acquired of the first-type candidate blocks is the neighboring block above the current block in the current frame; and predicting the current block according to the motion vector candidate list of the current block.
  • a computer-readable storage medium is provided on which instructions for performing the method of the first aspect are stored.
  • a computer program product including instructions for performing the method of the first aspect.
  • FIG. 1 is a frame diagram of video encoding according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a prediction method according to an embodiment of the present application.
  • FIG. 3 is a flowchart of constructing an affine merge candidate list.
  • FIG. 4 is a schematic diagram of surrounding blocks of the current block based on the inter prediction mode.
  • FIG. 5 is a schematic diagram of surrounding blocks of the current block based on the IBC mode.
  • FIG. 6 is a schematic flowchart of a video processing method provided by an embodiment of the present application.
  • Fig. 7 is a flowchart of an implementation process of ATMVP.
  • FIG. 8 is an exemplary diagram of a manner of acquiring motion information of sub-blocks of the current block.
  • FIG. 9 is a schematic flowchart of a video processing method according to another embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a video processing device provided by an embodiment of the present application.
  • This application can be applied to a variety of video coding standards, such as H.264, high efficiency video coding (HEVC), versatile video coding (VVC), the audio video coding standard (AVS), AVS+, AVS2, and AVS3.
  • the video encoding process mainly includes prediction, transformation, quantization, entropy encoding, loop filtering and other parts.
  • Prediction is an important part of mainstream video coding technology. Prediction can be divided into intra prediction and inter prediction. Inter prediction can be achieved through motion compensation. The following is an example of the motion compensation process.
  • the coding region may also be called a coding tree unit (CTU).
  • the size of the CTU may be, for example, 64 ⁇ 64 or 128 ⁇ 128 (units are pixels, and the units will be omitted for similar descriptions hereinafter).
  • Each CTU can be divided into square or rectangular image blocks.
  • the image block may also be referred to as a coding unit (CU).
  • the current CU to be encoded will be referred to as a current block.
  • Inter prediction may include forward prediction, backward prediction, bi-prediction, and so on.
  • forward prediction is to use the previous reconstructed frame (which may be called a historical frame) of the current frame (for example, the frame labeled t as shown in FIG. 2) to predict the current frame.
  • Backward prediction is to use the frame after the current frame (which may be called a future frame) to predict the current frame.
  • Bi-prediction can be bidirectional prediction, using both "historical frames" (for example, the frames labeled t-2 and t-1 in FIG. 2) and "future frames" (for example, the frames labeled t+1 and t+2 in FIG. 2) to predict the current frame.
  • Bi-prediction can also be prediction in the same direction, for example, using two "historical frames” to predict the current frame, or using two "future frames” to predict the current frame.
  • a similar block of the current block can be found from the reference frame (which may be a reconstructed frame near the time domain) as the prediction block of the current block.
  • the relative displacement between the current block and the similar block is called a motion vector (motion vector, MV).
  • the process of finding a similar block in the reference frame as the prediction block of the current block is motion compensation.
  • inter prediction modes in HEVC include inter mode (also called AMVP mode), merge mode and skip mode.
  • In inter mode, the MV candidate list of neighboring blocks (spatial or temporal) needs to be obtained, and one MV in the MV candidate list is determined as the motion vector prediction (MVP) of the current block.
  • The MVP determines the starting point of motion estimation, and a motion search is performed near that starting point to obtain the optimal motion vector (MV); the prediction block is then determined in the reference image by this MV. The difference between the MV and the MVP is the motion vector difference (MVD). Inter mode therefore needs to transmit both the MVP index and the MVD in the code stream.
  • In merge mode, the MV candidate list of neighboring blocks (spatial or temporal) also needs to be obtained, and one MV in the MV candidate list is determined as the MVP of the current block. Therefore, for merge mode, only the MVP index needs to be transmitted in the code stream; the MVD does not need to be transmitted. That is, the MVP in merge mode is the MV of the current block.
  • The skip mode is a special case of the merge mode: it only needs to transmit the MVP index, and does not need to transmit the MVD or the residual.
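The relationship between the MVP, the MVD, and the final MV in these three modes can be sketched as follows (an illustrative simplification of the decoder-side reconstruction; the candidate values and function name are hypothetical):

```python
def reconstruct_mv(mode, mv_candidates, mvp_index, mvd=(0, 0)):
    """Recover the current block's MV from the signalled syntax.

    inter (AMVP) mode: MV = MVP + MVD (both MVP index and MVD are signalled).
    merge/skip modes:  MV = MVP directly (only the MVP index is signalled).
    """
    mvp = mv_candidates[mvp_index]
    if mode == "inter":
        return (mvp[0] + mvd[0], mvp[1] + mvd[1])
    elif mode in ("merge", "skip"):
        return mvp
    raise ValueError(f"unknown mode: {mode}")

candidates = [(4, -2), (0, 1)]
print(reconstruct_mv("inter", candidates, 0, mvd=(1, 1)))  # (5, -1)
print(reconstruct_mv("merge", candidates, 1))              # (0, 1)
```

Skip mode additionally omits the residual, so the decoder copies the motion-compensated prediction block directly.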
  • The inter prediction method also introduces alternative/advanced temporal motion vector prediction (ATMVP) technology.
  • the current block is divided into multiple sub-blocks, and the motion information of the sub-blocks is calculated.
  • the ATMVP technology aims to introduce motion vector prediction at the sub-block level to improve the overall encoding performance of the video.
  • The traditional motion vector uses a simple translation model, that is, the motion vector of the current block represents the relative displacement between the current block and the reference block. This type of motion vector has difficulty accurately describing more complex motion in the video, such as zooming, rotation, and perspective changes.
  • The affine model uses the motion vectors of two or three control points (CPs) of the current block to describe the affine motion field of the current block.
  • the two control points may be, for example, the upper left corner point and the upper right corner point of the current block; for example, the three control points may be the upper left corner point, the upper right corner point, and the lower left corner point of the current block.
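As an illustration of the two-control-point case, the commonly used four-parameter affine model derives the MV at any position inside the block from the top-left and top-right control-point MVs. The following floating-point sketch reflects common practice rather than text from this patent, and real codecs use fixed-point arithmetic at the sub-block level:

```python
def affine_mv(cpmv0, cpmv1, w, x, y):
    """Four-parameter affine model: MV at position (x, y) of a block of
    width w, given the top-left (cpmv0) and top-right (cpmv1) control-point
    MVs. A rotation/zoom is encoded by the difference between the two CPMVs."""
    ax = (cpmv1[0] - cpmv0[0]) / w
    ay = (cpmv1[1] - cpmv0[1]) / w
    return (ax * x - ay * y + cpmv0[0],
            ay * x + ax * y + cpmv0[1])

# Pure translation: equal control points give the same MV at every position.
print(affine_mv((2.0, 3.0), (2.0, 3.0), 16, 8, 8))  # (2.0, 3.0)
```

With unequal control points the derived MVs vary across the block, which is what lets the affine model describe zoom and rotation that a single translational MV cannot.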
  • the affine model is combined with the merge mode mentioned above to form the affine merge mode.
  • The ordinary merge mode motion vector candidate list (recorded as the merge candidate list) records the MVP of an image block, while the affine merge mode motion vector candidate list (recorded as the affine merge candidate list) records control point motion vector predictions (CPMVPs). Similar to the ordinary merge mode, the affine merge mode does not need to add an MVD to the code stream, but directly uses the CPMVP as the CPMV of the current block.
  • Intra block copy (IBC) technology can include IBC merge mode and IBC inter mode. These two modes are similar to the ordinary merge mode; the difference is that the MVs in the MV candidate lists obtained in IBC merge and IBC inter mode are the MVs of reconstructed blocks in the current frame. That is to say, the prediction block in IBC is a reconstructed block in the current frame.
  • The lengths of the MV candidate list of IBC merge and the MV candidate list of IBC inter are different, and may be predefined.
  • the construction process of the MV candidate list is one of the important processes of the inter prediction mode and the IBC mode.
  • Figure 3 shows a possible construction process of the affine merge candidate list.
  • Step S110: Insert ATMVP into the affine merge candidate list of the current block.
  • ATMVP contains the motion information of the sub-blocks of the current block.
  • The affine merge candidate list thus includes the motion information of the sub-blocks of the current block, so that the affine merge mode can perform motion compensation at the sub-block level, thereby improving the overall encoding performance of the video.
  • step S110 will be described in detail below in conjunction with FIG. 7 and will not be described in detail here.
  • the motion information includes a combination of one or more of the following information: motion vector; motion vector difference; reference frame index value; reference direction of inter prediction; information of image block using intra coding or inter coding; image block The division mode.
  • Step S120: Insert inherited affine candidates into the affine merge candidate list.
  • Step S130: Judge whether the number of affine candidates in the affine merge candidate list is less than a preset value.
  • If the number of affine candidates in the affine merge candidate list has reached the preset value, the process in FIG. 3 ends; if it is less than the preset value, continue to step S140.
  • Step S140: Insert constructed affine candidates into the affine merge candidate list.
  • Step S150: Judge whether the number of affine candidates in the affine merge candidate list is less than the preset value.
  • If the number of affine candidates in the affine merge candidate list has reached the preset value, the process in FIG. 3 ends; if it is less than the preset value, continue to step S160.
  • Step S160: Insert a zero vector into the affine merge candidate list.
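The S110-S160 flow above can be sketched as a simple list-building routine (an illustrative sketch only; the candidate representations and function name are hypothetical, and real encoders also apply pruning of duplicate candidates):

```python
def build_affine_merge_list(atmvp, inherited, constructed, max_len):
    """Sketch of the FIG. 3 flow: S110 insert ATMVP, S120 insert inherited
    affine candidates, S130/S140 top up with constructed candidates while
    the list is short, S150/S160 pad with zero-vector candidates until
    the preset length is reached."""
    cand_list = []
    if atmvp is not None:                      # S110 (may be skipped in some embodiments)
        cand_list.append(atmvp)
    cand_list.extend(inherited)                # S120
    for c in constructed:                      # S130 -> S140
        if len(cand_list) >= max_len:
            break
        cand_list.append(c)
    while len(cand_list) < max_len:            # S150 -> S160
        cand_list.append("zero_mv")
    return cand_list[:max_len]

print(build_affine_merge_list("atmvp", ["inh0"], ["con0"], 5))
# ['atmvp', 'inh0', 'con0', 'zero_mv', 'zero_mv']
```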
  • In some embodiments, the construction process shown in FIG. 3 may not include step S110.
  • the construction process of the MV candidate list in the IBC mode is similar to the construction process shown in FIG. 3, the difference is that the scanning order and length of the MV candidate list are different.
  • As shown in FIG. 4, the scanning order of the MV candidate list is A0->B0.
  • The MV candidate list in the IBC mode includes only spatial candidate blocks, and does not include temporal candidate blocks.
  • After the spatial candidate blocks, history-based motion vector prediction (HMVP) candidates may follow.
  • The length of the MV candidate list in IBC merge mode and IBC inter mode is a preset value; for example, the length of the MV candidate list in IBC merge mode is 6, and the length of the MV candidate list in IBC inter mode is 2. If the number of MVs from the spatial candidate blocks does not reach the preset value, other content may be filled in after them, for example, zero padding.
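That IBC list construction (spatial candidates, then HMVP candidates, then zero padding up to the preset length) can be sketched as follows; the function name is hypothetical, and the preset lengths 6 and 2 are the examples given above:

```python
def build_ibc_list(spatial_mvs, hmvp_mvs, preset_len):
    """Sketch of IBC MV candidate list construction: spatial candidates
    first, then HMVP candidates, then zero padding to the preset length."""
    cand = (spatial_mvs + hmvp_mvs)[:preset_len]
    cand += [(0, 0)] * (preset_len - len(cand))  # zero padding
    return cand

print(build_ibc_list([(3, 0)], [(1, 2)], 6))  # IBC merge, length 6
# [(3, 0), (1, 2), (0, 0), (0, 0), (0, 0), (0, 0)]
print(build_ibc_list([(3, 0)], [(1, 2), (5, 5)], 2))  # IBC inter, length 2
# [(3, 0), (1, 2)]
```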
  • each construction method requires corresponding hardware and software resources, which reduces resource utilization.
  • this application proposes a video processing method and device, which can reduce the waste of software and hardware resources to a certain extent and improve system performance.
  • FIG. 6 is a schematic flowchart of a video processing method provided by an embodiment of the present application. The method of FIG. 6 can be applied to the encoding side and also to the decoding side.
  • Step S210: According to the first-type candidate block acquisition order, the motion vectors of the first-type candidate blocks of the current block are acquired and added to the motion vector candidate list in the first-type prediction mode. The first-type prediction mode performs intra prediction on the current block based on the motion information of coded blocks in the current frame, where the first acquired of the first-type candidate blocks is the neighboring block above the current block in the current frame.
  • step S220 the current block is predicted according to the motion vector candidate list of the current block.
  • the current block may also be referred to as a current CU.
  • the first-type prediction mode performs intra prediction on the current block based on the motion information of the coded block in the current frame.
  • the first-type prediction mode may be an IBC mode, for example, the IBC merge mode or IBC inter mode.
  • the first type candidate block is a candidate block used for prediction based on the first type prediction mode, the first type candidate block includes at least one candidate block, and the at least one candidate block is the current block in the current Adjacent blocks in the frame.
  • The embodiments of the present application do not limit the sizes of the adjacent block and the current block. For example, the adjacent block may be the same size as the current block (e.g., both are 64×64), or it may be smaller than the current block (e.g., the current block is 64×64 and the adjacent block is 16×16).
  • The first-type candidate block acquisition order is the candidate block acquisition order corresponding to the first-type prediction mode. Based on this order, the neighboring block above the current block in the current frame is acquired first; as shown in FIG. 5, the first candidate block acquired is the neighboring block B0.
  • The second candidate block acquired among the first-type candidate blocks may be the neighboring block to the left of the current block in the current frame. That is, as shown in FIG. 5, the MVs of the neighboring blocks of the current block can be sequentially acquired in the order B0->A0 and added to the MV candidate list for the first-type prediction mode.
  • If the list does not reach the preset length, the MV candidate list can be filled with zero vectors to reach the preset length.
  • The number of first-type candidate blocks is not limited; for example, it may be 2, or more, for example, 4 or 5.
  • The first-type candidate block acquisition order may be: the neighboring block above the current block in the current frame, the neighboring block to its left, the neighboring block at its upper right, the neighboring block at its lower left, and the neighboring block at its upper left. That is, as shown in FIG. 4, the MV of each neighboring block is sequentially obtained in the order B0->A0->B1->A1->B2.
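Scanning neighbouring positions in a fixed order can be sketched as follows. The position labels B0/A0/B1/A1/B2 come from FIG. 4 and FIG. 5; the availability check (skipping positions that are not yet coded) is an assumption based on common practice, not text from this patent:

```python
# Scan orders taken from the description above.
SCAN_ORDER_FIRST_TYPE  = ["B0", "A0"]                    # e.g. IBC mode
SCAN_ORDER_SECOND_TYPE = ["B0", "A0", "B1", "A1", "B2"]  # e.g. inter mode

def scan_candidates(available_mvs, order):
    """Collect MVs of neighbouring blocks in the given scan order,
    skipping positions whose MV is unavailable."""
    return [available_mvs[pos] for pos in order if pos in available_mvs]

mvs = {"B0": (1, 0), "A1": (0, 2)}
print(scan_candidates(mvs, SCAN_ORDER_FIRST_TYPE))   # [(1, 0)]
print(scan_candidates(mvs, SCAN_ORDER_SECOND_TYPE))  # [(1, 0), (0, 2)]
```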
  • In some embodiments, the motion vectors of the second-type candidate blocks of the current block may be acquired according to a second-type candidate block acquisition order and added to the motion vector candidate list in the second-type prediction mode.
  • The second-type prediction mode performs inter prediction on the current block based on motion information in a motion vector candidate list. The first N candidate blocks of the first-type candidate blocks and the first N candidate blocks of the second-type candidate blocks are respectively the same, where N is greater than or equal to 1.
  • That is, the i-th candidate block of the first-type candidate blocks is the same as the i-th candidate block of the second-type candidate blocks, where 1 ≤ i ≤ N, N ≤ P and N ≤ Q, P is the number of first-type candidate blocks, Q is the number of second-type candidate blocks, and P and Q are positive integers.
  • the candidate block acquisition order corresponding to the first-type prediction mode and the second-type prediction mode is at least partially the same.
  • the same part of the candidate block acquisition order of the first prediction mode and the second prediction mode can be implemented with the same software and hardware resources, which is beneficial to improve the resource utilization rate.
  • the number Q of candidate blocks of the second type is greater than the number of candidate blocks P of the first type.
  • the number P of candidate blocks of the first type is 2, and the number Q of candidate blocks of the second type is 4 or 5.
  • the N may be the number of candidate blocks of the first type.
  • For example, the first two candidate blocks of the first-type candidate blocks and of the second-type candidate blocks are the same: the first two candidate blocks of the first-type candidate blocks include the neighboring block above the current block in the current frame and the neighboring block to its left, and the first two candidate blocks of the second-type candidate blocks include the same neighboring block above the current block and the same neighboring block to its left.
  • The first N candidate blocks of the first-type and second-type candidate blocks being "the same" only means that they occupy the same positions relative to the current block; it does not mean that the sizes and contents of the first N candidate blocks are the same.
  • the order of acquiring the first type candidate blocks may be B0->A0
  • the order of acquiring the second type candidate blocks may be B0->A0->B1->A1->B2.
  • the order of acquiring the candidate blocks of the first type may be A0->B0
  • the order of acquiring the candidate blocks of the second type may be A0->B0>B1->A1->B2.
  • the number of candidate blocks of the second type is equal to the number of candidate blocks of the first type.
  • the number of candidate blocks of the first type is 5, and the number of candidate blocks of the second type is 5.
  • the number of candidate blocks of the first type is 2, and the number of candidate blocks of the second type is 2.
  • the order of acquiring the first type candidate blocks may be B0->A0->B1->A1->B2, and the order of acquiring the second type candidate blocks may be B0->A0->B1->A1->B2.
  • the order of acquiring the first-type candidate blocks may be A0->B0->B1->A1->B2, and the order of acquiring the second-type candidate blocks may be A0->B0->B1->A1->B2.
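As an illustration of the shared acquisition-order prefix described above, the sketch below builds both candidate lists with one scanning routine. The position names (B0, A0, B1, A1, B2) follow the figures referenced in the text; the availability map, MV values, and function names are invented for this example and are not part of any standard.

```python
from typing import Callable, List, Optional, Tuple

# Hypothetical candidate positions relative to the current block, named as in
# FIG. 4/5 of the text: B0 (above), A0 (left), B1 (above-right), A1 (below-left),
# B2 (above-left). Real codecs derive these from block coordinates.
INTER_ORDER = ["B0", "A0", "B1", "A1", "B2"]   # second-type (inter) order, Q = 5
IBC_ORDER = INTER_ORDER[:2]                    # first-type (IBC) order, P = 2

def build_candidate_list(order: List[str],
                         get_mv: Callable[[str], Optional[Tuple[int, int]]]
                         ) -> List[Tuple[int, int]]:
    """Scan neighbor positions in the given order and collect available MVs.
    Because IBC_ORDER is a prefix of INTER_ORDER, the scan of the first N
    positions is identical for both modes and can share one implementation."""
    mv_list: List[Tuple[int, int]] = []
    for pos in order:
        mv = get_mv(pos)            # None if the neighbor is unavailable
        if mv is not None and mv not in mv_list:
            mv_list.append(mv)
    return mv_list

# Toy availability map standing in for real neighbor lookup.
neighbors = {"B0": (4, -2), "A0": (-3, 0), "B1": (4, -2), "A1": None, "B2": (1, 1)}
ibc_list = build_candidate_list(IBC_ORDER, neighbors.get)
inter_list = build_candidate_list(INTER_ORDER, neighbors.get)
print(ibc_list)    # [(4, -2), (-3, 0)]
print(inter_list)  # [(4, -2), (-3, 0), (1, 1)]
```

Only one scanning routine is needed for both modes; the duplicate MV at B1 and the unavailable block at A1 are skipped in both cases.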
  • the step S220 may include:
  • the current block is predicted according to the motion vector candidate list in the first-type prediction mode of the current block and/or the motion vector candidate list in the second-type prediction mode.
  • step S220 may be performed with reference to related technologies, which is not limited in the embodiments of the present application.
  • the first-type prediction mode includes the IBC merge mode and/or the IBC inter mode, or may also include other prediction modes that perform prediction based on motion information of encoded blocks in the current frame
  • the second type of prediction mode includes at least one of the following: ATMVP mode, AMVP mode, merge mode, and affine mode, or may also include other inter prediction modes, which are not limited in the embodiments of the present application.
  • the following describes the construction process of the MV candidate list by taking the ATMVP mode as an example in conjunction with FIG. 7.
  • the method of inserting ATMVP into the affine merge candidate list of the current block described below may not be limited to the embodiment shown in FIG. 3 above.
  • the implementation of the ATMVP technique, that is, the manner of acquiring the motion information of the sub-blocks of the current block, can be roughly divided into two steps: steps S310 and S320.
  • in step S310, the corresponding block of the current block in the reference frame is determined.
  • the frame of the current frame (the frame where the current block is located) used for acquiring motion information is called the co-located frame (co-located picture).
  • the co-located frame of the current frame is set when the slice is initialized.
  • the first reference frame list may be a forward reference frame list or a reference frame list that includes the first group of reference frames.
  • the first group of reference frames includes reference frames in time sequence before and after the current frame.
  • the first frame in the first reference frame list of the current block is usually set as the co-located frame of the current frame.
  • the corresponding block of the current block in the reference frame is determined by a temporal motion vector (tempMV). Therefore, in order to obtain the corresponding block of the current block in the reference frame, the time-domain motion vector needs to be derived first.
  • forward prediction and bidirectional prediction are taken as examples to explain the derivation process of the time-domain motion vector.
  • the number of reference frame lists (also referred to as reference lists or reference image lists) of the current block is 1.
  • the reference frame list of the current block may be called a first reference frame list (reference list 0).
  • the first reference frame list may be a forward reference frame list.
  • the co-located frame of the current frame is usually set as the first frame in the first reference frame list.
  • one implementation is to first scan the motion vector candidate list of the current block (the motion vector candidate list can be constructed based on the motion vectors of the image blocks at four spatially neighboring positions) and take the first candidate motion vector in the motion vector candidate list as the initial time-domain motion vector. Then, the first reference frame list of the current block is scanned. If the reference frame of the first candidate motion vector is the same as the co-located frame of the current frame, the first candidate motion vector can be used as the time-domain motion vector; if the reference frame of the first candidate motion vector is different from the co-located frame of the current frame, the time-domain motion vector can be set to the 0 vector, and scanning stops. In this implementation, a motion vector candidate list needs to be constructed to obtain the first candidate motion vector in the list.
  • the motion vector of a neighboring block in a certain space domain of the current block can be directly used as the initial time domain motion vector. If the reference frame of the motion vector of the adjacent block in the space domain is the same as the co-located frame of the current frame, it can be used as the time domain motion vector; otherwise, the time domain motion vector can be set to 0 vector and the scanning is stopped.
  • the adjacent block in the spatial domain may be any one of the encoded blocks around the current block, for example, it may be fixed to the left block of the current block, or fixed to the upper block of the current block, or fixed to the upper left block of the current block, etc.
  • the number of reference frame lists of the current block is 2, which includes the first reference frame list (reference list 0) and the second reference frame list (reference list 1).
  • the first reference frame list may be a forward reference frame list
  • the second reference frame list may be a backward reference frame list.
  • one implementation is to first scan the current motion vector candidate list and take the first candidate motion vector in the motion vector candidate list as the initial time-domain motion vector. Then, one reference frame list in the current reference direction of the current block is scanned first (it may be the first reference frame list or the second reference frame list); if the reference frame of the first candidate motion vector is the same as the co-located frame of the current frame, the first candidate motion vector can be used as the time-domain motion vector; if the reference frame of the first candidate motion vector is different from the co-located frame of the current frame, scanning continues with the reference frame list in the other reference direction of the current block.
  • both the first reference frame list and the second reference frame list may include reference frames before and after the current frame in chronological order.
  • the bidirectional prediction refers to selecting reference frames with different reference directions from the first reference frame list and the second reference frame list. In this implementation, deriving the temp MV of ATMVP in bidirectional prediction still requires constructing a motion vector candidate list.
  • the motion vector of a neighboring block in a certain space domain of the current block can be directly used as the initial time domain motion vector.
  • first, one reference frame list in the current reference direction of the current block is scanned (it may be the first reference frame list or the second reference frame list); if the reference frame of the spatially neighboring block's motion vector in this reference direction is the same as the co-located frame of the current frame, that motion vector can be used as the time-domain motion vector.
  • if the reference frame of the spatially neighboring block's motion vector in this reference direction is different from the co-located frame of the current frame, scanning continues with the reference frame list in the other reference direction of the current block.
  • if the reference frame of the spatially neighboring block's motion vector in the other reference frame list is the same as the co-located frame of the current frame, that motion vector can be used as the time-domain motion vector;
  • if the reference frame of the spatially neighboring block's motion vector is also different from the co-located frame of the current frame, the time-domain motion vector can be set to the 0 vector, and scanning stops.
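The two-list fallback scan described above can be sketched as follows. This is a simplified model, not the normative derivation: reference frames are plain integers, and `ref_frame_of` is a hypothetical map from reference direction (0 or 1) to the reference frame that the spatially neighboring block's motion vector points to in that direction.

```python
from typing import Dict, Optional, Tuple

MV = Tuple[int, int]

def derive_temp_mv_two_lists(neighbor_mv: Optional[MV],
                             ref_frame_of: Dict[int, int],
                             colocated: int,
                             scan_list1_first: bool) -> MV:
    """Baseline (FIG. 7-style) derivation: compare the neighbor MV's reference
    frame against the co-located frame in one reference direction and, on a
    mismatch, in the other direction; fall back to the 0 vector."""
    if neighbor_mv is None:
        return (0, 0)
    directions = [1, 0] if scan_list1_first else [0, 1]
    for d in directions:
        if ref_frame_of.get(d) == colocated:
            return neighbor_mv
    return (0, 0)  # both scans failed -- the redundant worst case noted below

matched = derive_temp_mv_two_lists((2, 1), {0: 8, 1: 16},
                                   colocated=16, scan_list1_first=False)
print(matched)  # (2, 1): list 0 misses, list 1 matches on the second scan
missed = derive_temp_mv_two_lists((2, 1), {0: 8, 1: 24},
                                  colocated=16, scan_list1_first=False)
print(missed)   # (0, 0): both scans fail, so the second scan was redundant
```

The `missed` case is exactly the worst case the text goes on to describe: both lists are scanned and no eligible time-domain motion vector is found.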
  • the adjacent block in the spatial domain may be any one of the encoded blocks around the current block, for example, fixed to the left block of the current block, or fixed to the upper block of the current block, or fixed to the upper left block of the current block, etc.
  • the scanning order of the first reference frame list and the second reference frame list can be determined according to the following rules:
  • if the current frame adopts the low-delay coding mode and the co-located frame of the current frame is the first frame in the second reference frame list, the second reference frame list is scanned first; otherwise, the first reference frame list is scanned first.
  • the current frame adopting the low delay coding mode may indicate that the playback order of every reference frame of the current frame in the video sequence is before the current frame; the co-located frame of the current frame being set as the first frame in the second reference frame list may indicate that the quantization step of the first slice of the first reference frame list of the current frame is smaller than the quantization step of the first slice of the second reference frame list.
  • the time-domain motion vector can be used to find the corresponding block of the current block in the reference frame.
  • step S320 according to the corresponding block of the current block, the motion information of the sub-block of the current block is acquired.
  • the current block may be divided into multiple sub-blocks, and then the motion information of each sub-block is determined from the corresponding block. It is worth noting that, for each sub-block, the motion information can be determined from the smallest motion information storage unit of the corresponding block in which it is located.
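A minimal sketch of this sub-block lookup, under assumed parameters (8×8 sub-blocks and an 8×8 minimum motion-information storage grid; actual granularities are codec-specific, and all names here are illustrative):

```python
# Step S320 sketch: fetch each sub-block's MV from the co-located frame's
# motion storage grid, offset by the derived temporal MV.
SUB = 8  # sub-block size and motion storage granularity (an assumption)

def subblock_motion(block_x, block_y, block_w, block_h, temp_mv, mv_grid):
    """For each sub-block of the current block, offset its center by the
    temporal MV to land in the corresponding block of the co-located frame,
    then read the MV stored for the smallest storage unit covering that point."""
    out = {}
    for sy in range(0, block_h, SUB):
        for sx in range(0, block_w, SUB):
            cx = block_x + sx + SUB // 2 + temp_mv[0]
            cy = block_y + sy + SUB // 2 + temp_mv[1]
            out[(sx, sy)] = mv_grid[cy // SUB][cx // SUB]  # smallest storage unit
    return out

# Toy motion grid for the co-located frame: grid[gy][gx] stores MV (gx, gy).
grid = [[(gx, gy) for gx in range(8)] for gy in range(8)]
mvs = subblock_motion(16, 16, 16, 16, (8, 0), grid)
print(mvs[(0, 0)], mvs[(8, 8)])  # (3, 2) (4, 3)
```

Each of the four 8×8 sub-blocks of the 16×16 block ends up with its own motion vector, which is what enables sub-block-level motion compensation.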
  • the worst case is that, during the process of deriving the time-domain motion vector, both reference frame lists are scanned and an eligible time-domain motion vector still cannot be derived; in this case, scanning the two reference frame lists is redundant.
  • the reference frames in the first reference frame list and the second reference frame list will overlap to a certain degree; therefore, in the process of acquiring the time-domain motion vector, there will be redundant operations in the scanning of the two reference frame lists.
  • the method shown in FIG. 6 further includes the following steps:
  • a reference frame list of the current block is obtained.
  • the reference frame list of the current block includes a first reference frame list and a second reference frame list.
  • the reference frame list of the current block includes a first reference frame list and a second reference frame list, indicating that the current block is to perform bidirectional prediction between frames.
  • the first reference frame list may be a forward reference frame list, or may be a reference frame list containing a first group of reference frames.
  • the first group of reference frames includes reference frames in time sequence before and after the current frame.
  • the second reference frame list may be a backward reference frame list, or a reference frame list containing a second group of reference frames, the second group of reference frames including the time sequence before the current frame And subsequent reference frames.
  • both the first reference frame list and the second reference frame list may include reference frames before and after the current frame in chronological order.
  • the bidirectional prediction may refer to the first reference frame list and Reference frames with different reference directions are selected in the second reference frame list.
  • step S520 the target reference frame list is determined according to the reference frame list of the current block.
  • the target reference frame list is one of the first reference frame list and the second reference frame list.
  • the target reference frame list can be selected randomly or according to certain rules. For example, it can be selected according to the following rule: if the current frame where the current block is located adopts the low-delay encoding mode and the co-located frame of the current frame is the first frame in the second reference frame list, the second reference frame list is determined as the target reference frame list; and/or, if the current frame where the current block is located does not adopt the low-delay encoding mode or the co-located frame of the current frame is not the first frame in the second reference frame list, the first reference frame list is determined as the target reference frame list.
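The selection rule above can be sketched as a small function. Reference frames are plain integers here, and the signature is illustrative rather than part of any standard:

```python
def select_target_ref_list(low_delay: bool,
                           colocated_frame: int,
                           ref_list0: list, ref_list1: list) -> list:
    """Step S520 sketch. Rule from the text: use reference list 1 only when
    the current frame is coded in low-delay mode AND the co-located frame is
    the first frame of reference list 1; otherwise use reference list 0."""
    if low_delay and ref_list1 and ref_list1[0] == colocated_frame:
        return ref_list1
    return ref_list0

print(select_target_ref_list(True, 16, [8, 4], [16, 24]))   # [16, 24]
print(select_target_ref_list(False, 16, [8, 4], [16, 24]))  # [8, 4]
```

Either branch yields exactly one list, so the later derivation never needs to touch the other list.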
  • the acquiring the motion vector of the second type candidate block of the current block into the motion vector candidate list in the second type prediction mode may include the following steps S530, S540, and S550.
  • step S530 the time domain motion vector of the current block is determined according to the target reference frame list of the current block.
  • the embodiment of the present application determines the time domain motion vector of the current block according to one reference frame list in the first reference frame list and the second reference frame list. In other words, regardless of whether the time-domain motion vector can be derived from the target reference frame list, the scanning is stopped after the target reference frame list is scanned. In other words, the time-domain motion vector of the current block can be determined only from the target reference frame list.
  • specifically, the first candidate motion vector can be selected from the motion vector candidate list of the current block (the motion vector candidate list can be constructed based on the motion vectors of image blocks at four spatially neighboring positions), and the reference frame of the first candidate motion vector is looked up in the target reference frame list;
  • if the reference frame of the first candidate motion vector is different from the co-located frame of the current frame, scanning is also stopped, instead of continuing to scan the other reference frame list of the current block as described in the embodiment of FIG. 7; in this case, the 0 vector can be used as the time-domain motion vector of the current block.
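A sketch of this single-list derivation, contrasting with the FIG. 7 baseline that falls back to the other reference frame list. The flat arguments (the first candidate MV, the reference frame it points to within the target list, and the co-located frame) are a simplification of real reference indices:

```python
from typing import Optional, Tuple

MV = Tuple[int, int]

def derive_temp_mv_single_list(first_candidate_mv: Optional[MV],
                               candidate_ref_frame: Optional[int],
                               colocated_frame: int) -> MV:
    """Step S530 sketch: only the target reference frame list is consulted.
    If the first candidate MV's reference frame matches the co-located frame,
    use it; otherwise stop immediately and use the 0 vector -- the other
    reference frame list is never scanned."""
    if first_candidate_mv is not None and candidate_ref_frame == colocated_frame:
        return first_candidate_mv
    return (0, 0)

print(derive_temp_mv_single_list((3, -1), 16, 16))  # (3, -1): match, MV reused
print(derive_temp_mv_single_list((3, -1), 8, 16))   # (0, 0): mismatch, stop here
```

Compared with the two-list scan, at most one comparison is performed here, which is the simplification the embodiment claims.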
  • step S540 the motion information of the sub-block of the current block is determined according to the time-domain motion vector.
  • the corresponding block of the current block in the reference frame may be determined according to the time-domain motion vector.
  • the motion information of the sub-block of the current block may be determined according to the corresponding block of the current block in the reference frame.
  • the motion information includes a combination of one or more of the following information: motion vector; motion vector difference; reference frame index value; reference direction of inter prediction; information of image block using intra coding or inter coding; image block The division mode.
  • Step S540 can be implemented with reference to step S320 in the foregoing, which will not be described in detail here.
  • step S550 the motion information of the sub-block of the current block is added to the motion vector candidate list in the second-type prediction mode to perform inter prediction on the current block according to the motion vector candidate list.
  • step S550 may include: performing inter prediction according to the motion information of the sub-block of the current block in units of the sub-block of the current block.
  • the motion information of the sub-blocks of the current block can be inserted as an ATMVP candidate into the affine merge candidate list of the current block, and then a complete affine merge candidate list can be constructed as shown in steps S120 to S160 in FIG. 3. Then, the candidate motion vectors in the affine merge candidate list can be used to perform inter prediction on the current block to determine the optimal candidate motion vector.
  • step S550 can be performed with reference to related technologies, which is not limited in the embodiments of the present application.
  • the embodiment of the present application can simplify the operation of the codec by limiting the number of reference frame lists that need to be scanned in the bidirectional prediction process.
  • when the method of FIG. 9 is applied to the encoding end and the decoding end respectively, the inter prediction process for the current block described in step S550 may differ.
  • for example, at the encoding end, performing inter prediction on the current block may include: determining the prediction block of the current block; and calculating the residual block of the current block according to the original block and the prediction block of the current block.
  • for example, at the decoding end, performing inter prediction on the current block may include: determining the prediction block and the residual block of the current block; and calculating the reconstructed block of the current block according to the prediction block and the residual block of the current block.
  • FIG. 10 is a schematic structural diagram of a video processing device provided by an embodiment of the present application.
  • the device 60 of FIG. 10 includes a memory 62 and a processor 64.
  • the memory 62 may be used to store codes.
  • the processor 64 may be used to execute the code stored in the memory to perform the following operations:
  • the motion vectors of the first-type candidate blocks of the current block are acquired and added to the motion vector candidate list in the first-type prediction mode; the first-type prediction mode performs intra prediction on the current block based on the motion information of the encoded blocks in the current frame, where the first acquired first-type candidate block is the neighboring block above the current block in the current frame;
  • the current block is predicted according to the motion vector candidate list of the current block.
  • the second acquired first-type candidate block is the neighboring block to the left of the current block in the current frame.
  • processor 64 is also used to:
  • the motion vectors of the second-type candidate blocks of the current block are acquired and added to the motion vector candidate list in the second-type prediction mode; the second-type prediction mode performs inter prediction on the current block based on motion information in the motion vector candidate list;
  • the predicting the current block according to the motion vector candidate list of the current block includes:
  • the current block is predicted according to the motion vector candidate list in the first-type prediction mode of the current block and/or the motion vector candidate list in the second-type prediction mode.
  • the i-th candidate block of the first-type candidate blocks and the i-th candidate block of the second-type candidate blocks are the same, where 1 ≤ i ≤ N, N ≤ P and N ≤ Q, P is the number of first-type candidate blocks, and Q is the number of second-type candidate blocks.
  • the first N candidate blocks of the first-type candidate block include an adjacent block of the current block above the current frame and an adjacent block of the current block on the left side of the current frame;
  • the first N candidate blocks of the second type of candidate blocks include the adjacent block of the current block above the current frame and the adjacent block of the current block on the left side of the current frame.
  • the number of candidate blocks of the second type is greater than the number of candidate blocks of the first type.
  • the first-type prediction mode includes the intra block copy (IBC) merge mode and/or the IBC inter mode
  • the second-type prediction mode includes at least one of the following: the alternative/advanced temporal motion vector prediction (ATMVP) mode, the advanced motion vector prediction (AMVP) mode, the merge mode, and the affine mode.
  • the processor 64 is further configured to:
  • obtaining a reference frame list of the current block, where the reference frame list of the current block includes a first reference frame list and a second reference frame list;
  • Adding the motion vector of the second type candidate block of the current block to the motion vector candidate list in the second type prediction mode includes:
  • the determining the time domain motion vector of the current block according to the target reference frame list of the current block includes:
  • the motion vector of the adjacent block in the spatial domain is determined to be the temporal motion vector.
  • the position of the spatially neighboring block at the specific position of the current block is the same as the position of the first neighboring block acquired for the motion vector candidate list in the first-type prediction mode.
  • the temporal motion vector is determined to be a 0 vector.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a dedicated computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, integrating one or more available media.
  • the available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., digital video disc (DVD)), or semiconductor media (e.g., solid state disk (SSD)), etc.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Provided are a video processing method and device capable of improving resource utilization. The method includes: acquiring, according to a first-type candidate block acquisition order, motion vectors of first-type candidate blocks of a current block and adding them to a motion vector candidate list in a first-type prediction mode, where the first-type prediction mode performs intra prediction on the current block based on motion information of encoded blocks in the current frame, and the first acquired first-type candidate block is the neighboring block above the current block in the current frame; and predicting the current block according to the motion vector candidate list of the current block.

Description

Video processing method and device
This application claims priority to the PCT patent application filed with the Chinese Patent Office on January 3, 2019, with application number PCT/CN2019/070306 and entitled "Video processing method and device", the entire content of which is incorporated herein by reference.
Copyright notice
The disclosure of this patent document contains material subject to copyright protection. The copyright belongs to the copyright owner. The copyright owner has no objection to the reproduction by anyone of this patent document or this patent disclosure as it appears in the official records and files of the Patent and Trademark Office.
Technical field
This application relates to the field of video encoding and decoding, and more specifically, to a video processing method and device.
Background
In video encoding and decoding, the prediction step is used to reduce redundant information in an image. A prediction block is the basic unit used for prediction in a frame of image; in some standards, the prediction block is also called a prediction unit (PU). In some video standards, each of the image blocks into which a frame of image is first divided is called a coding tree unit (CTU); each coding tree unit may contain one coding unit (CU) or be further divided into multiple coding units.
Prediction refers to finding image data similar to the prediction block, also called the reference block of the prediction block. Redundant information in encoding/compression is reduced by encoding/compressing the difference between the prediction block and its reference block. The difference between the prediction block and the reference block may be a residual obtained by subtracting the corresponding pixel values of the reference block from those of the prediction block. Prediction includes intra prediction and inter prediction. Intra prediction searches for the reference block of the prediction block within the frame where the prediction block is located; inter prediction searches for the reference block of the prediction block in frames other than the frame where the prediction block is located.
Before the current image block is predicted, a motion vector candidate list is constructed, and the current image block is predicted according to a candidate motion vector selected from the motion vector candidate list. Both intra prediction and inter prediction have multiple modes, and correspondingly the motion vector candidate list also has multiple modes. This means that corresponding software and hardware resources are needed to support the motion vector candidate lists of these multiple modes, which reduces resource utilization.
Therefore, how to construct a motion vector candidate list so as to improve resource utilization is a problem that urgently needs to be solved.
Summary
This application provides a video processing method and device that can improve resource utilization.
In a first aspect, a video processing method is provided, including: acquiring, according to a first-type candidate block acquisition order, motion vectors of first-type candidate blocks of a current block and adding them to a motion vector candidate list in a first-type prediction mode, where the first-type prediction mode performs intra prediction on the current block based on motion information of encoded blocks in the current frame, and the first acquired first-type candidate block is the neighboring block above the current block in the current frame; and predicting the current block according to the motion vector candidate list of the current block.
In a second aspect, a video processing device is provided, including: a memory for storing code; and a processor for executing the code stored in the memory to perform the following operations: acquiring, according to a first-type candidate block acquisition order, motion vectors of first-type candidate blocks of a current block and adding them to a motion vector candidate list in a first-type prediction mode, where the first-type prediction mode performs intra prediction on the current block based on motion information of encoded blocks in the current frame, and the first acquired first-type candidate block is the neighboring block above the current block in the current frame; and predicting the current block according to the motion vector candidate list of the current block.
In a third aspect, a computer-readable storage medium is provided, storing instructions for performing the method in the first aspect.
In a fourth aspect, a computer program product is provided, containing instructions for performing the method in the first aspect.
By setting the acquisition order of the MV candidate list of the current block in the first-type prediction mode, resource utilization can be improved.
Brief description of the drawings
FIG. 1 is a framework diagram of video encoding according to an embodiment of this application.
FIG. 2 is a schematic diagram of prediction manners according to an embodiment of this application.
FIG. 3 is a flowchart of constructing an affine merge candidate list.
FIG. 4 is a schematic diagram of the neighboring blocks of a current block in an inter prediction mode.
FIG. 5 is a schematic diagram of the neighboring blocks of a current block in the IBC mode.
FIG. 6 is a schematic flowchart of a video processing method provided by an embodiment of this application.
FIG. 7 is a flowchart of the implementation process of ATMVP.
FIG. 8 is an example diagram of a manner of acquiring the motion information of the sub-blocks of a current block.
FIG. 9 is a schematic flowchart of a video processing method according to another embodiment of this application.
FIG. 10 is a schematic structural diagram of a video processing device provided by an embodiment of this application.
Detailed description
This application is applicable to various video coding standards, such as H.264, high efficiency video coding (HEVC), versatile video coding (VVC), the audio video coding standard (AVS), AVS+, AVS2, and AVS3.
As shown in FIG. 1, the video encoding process mainly includes prediction, transform, quantization, entropy coding, and in-loop filtering. Prediction is an important component of mainstream video coding technology. Prediction can be divided into intra prediction and inter prediction. Inter prediction can be realized through motion compensation. The motion compensation process is illustrated below with an example.
For example, a frame of image may first be divided into one or more coding regions. Such a coding region may also be called a coding tree unit (CTU). The size of a CTU may be, for example, 64×64 or 128×128 (the unit is pixels; the unit is omitted in similar descriptions below). Each CTU may be divided into square or rectangular image blocks. Such an image block may also be called a coding unit (CU); hereinafter, the current CU to be encoded is referred to as the current block.
Inter prediction may include forward prediction, backward prediction, bi-prediction, and so on.
Forward prediction uses a previous reconstructed frame (which may be called a historical frame) of the current frame (for example, the frame labeled t in FIG. 2) to predict the current frame. Backward prediction uses a frame after the current frame (which may be called a future frame) to predict the current frame. Bi-prediction may be bidirectional prediction, using both "historical frames" (for example, the frames labeled t-2 and t-1 in FIG. 2) and "future frames" (for example, the frames labeled t+2 and t+1 in FIG. 2) to predict the current frame. Bi-prediction may also be prediction in the same direction, for example, using two "historical frames" to predict the current frame, or using two "future frames" to predict the current frame.
When performing inter prediction on the current block, a block similar to the current block may be found in a reference frame (which may be a reconstructed frame nearby in the time domain) as the prediction block of the current block. The relative displacement between the current block and the similar block is called a motion vector (MV). The process of finding a similar block in the reference frame as the prediction block of the current block is motion compensation.
Several typical inter prediction modes and intra prediction modes are introduced below.
1. Inter prediction modes in HEVC include the inter mode (also called the AMVP mode), the merge mode, and the skip mode.
For the inter mode, an MV candidate list of neighboring blocks (spatial or temporal) needs to be acquired, and one MV in the MV candidate list is determined as the motion vector prediction (MVP) of the current block. After the MVP is obtained, the starting point of motion estimation can be determined according to the MVP, and a motion search is performed near the starting point; after the search is completed, the optimal motion vector (MV) is obtained. The MV determines the position of the prediction block (PU, also called the reference frame) in the reference image; subtracting the current block from the prediction block yields the residual, and subtracting the MVP from the MV yields the motion vector difference (MVD). In the inter mode, the index of the MVP and the MVD need to be transmitted in the bitstream.
In the merge mode, an MV candidate list of neighboring blocks (spatial or temporal) needs to be acquired, and one MV in the MV candidate list is determined as the MVP of the current block. Therefore, for the merge mode, only the index of the MVP needs to be transmitted in the bitstream, and the MVD does not need to be transmitted; that is, the MVP in the merge mode is the MV of the current block.
The skip mode is a special case of the merge mode: only the index of the MVP needs to be transmitted, and neither the MVD nor the residual needs to be transmitted.
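The signaling difference between the three modes can be illustrated with a toy decoder-side reconstruction; the MV values and function name are invented for the example:

```python
# Illustration of what each HEVC inter mode transmits, per the text above.
def reconstruct_mv(mode: str, mvp: tuple, mvd: tuple = (0, 0)) -> tuple:
    """In inter (AMVP) mode the decoder adds the signalled MVD to the MVP;
    in merge/skip modes no MVD is sent, so the MVP is used as the MV directly."""
    if mode == "inter":  # bitstream carries MVP index + MVD (+ residual)
        return (mvp[0] + mvd[0], mvp[1] + mvd[1])
    if mode in ("merge", "skip"):  # bitstream carries only the MVP index
        return mvp                 # (skip additionally sends no residual)
    raise ValueError(mode)

print(reconstruct_mv("inter", (5, -2), (1, 1)))  # (6, -1)
print(reconstruct_mv("merge", (5, -2)))          # (5, -2)
```

The progression inter → merge → skip trades search freedom for bitrate: each step drops one signalled element (first the MVD, then the residual).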
With the development of coding technology, the alternative/advanced temporal motion vector prediction (ATMVP) technique was introduced into inter prediction. In the ATMVP technique, the current block is divided into multiple sub-blocks, and the motion information of the sub-blocks is calculated. The ATMVP technique aims to introduce sub-block-level motion vector prediction to improve the overall coding performance of video.
Traditional motion vectors use a simple translational model; that is, the motion vector of the current block represents the relative displacement between the current block and the reference block. This type of motion vector has difficulty accurately describing more complex motion in video, such as scaling, rotation, and perspective. To describe more complex motion, the affine model was introduced in the relevant codec standards. The affine model uses the motion vectors of two or three control points (CPs) of the current block to describe the affine motion field of the current block. The two control points may be, for example, the top-left corner point and the top-right corner point of the current block; the three control points may be, for example, the top-left, top-right, and bottom-left corner points of the current block.
Combining the affine model with the merge mode mentioned above forms the affine merge mode. The motion vector candidate list of the ordinary merge mode (denoted the merge candidate list) records the MVPs of image blocks, while the motion vector candidate list of the affine merge mode (denoted the affine merge candidate list) records control point motion vector predictions (CPMVPs). Similar to the ordinary merge mode, the affine merge mode does not need to add an MVD to the bitstream; instead, the CPMVP is directly used as the CPMV of the current block.
2. Intra prediction technology
Screen images are common in various scenarios such as desktop collaboration, desktop sharing, second screens, and cloud gaming. For screen images containing text, graphics, and the like, many repeated textures exist within the same frame, that is, there is strong spatial correlation; if blocks already encoded in the current frame are referenced when encoding the current block, coding efficiency can be greatly improved. Therefore, HEVC introduced an intra block copy (IBC) technique, in which a reconstructed block in the current frame is used as the prediction block; the IBC technique can be regarded as motion compensation within the current encoded image. The principle of the IBC technique is similar to that of inter prediction, except that the prediction block of the IBC technique is generated from a reconstructed block in the current frame; therefore, the IBC technique can be regarded as a special inter prediction technique, or a special intra prediction technique.
The IBC technique may include the IBC merge mode and the IBC inter mode. These two modes are similar in principle to the ordinary merge mode; the difference is that the MVs in the MV candidate lists acquired in the IBC merge and IBC inter modes are MVs of reconstructed blocks in the current frame, that is, the prediction block of IBC is a reconstructed block in the current frame. The lengths of the MV candidate list of IBC merge and the MV candidate list of IBC inter are different and may specifically be predefined.
As can be seen from the above description, the construction process of the MV candidate list is one of the important processes of the inter prediction modes and the IBC mode.
The construction of the MV candidate list in the affine merge mode is taken as an example to illustrate the construction process of the MV candidate list of an inter prediction mode. FIG. 3 shows a possible way of constructing the affine merge candidate list.
Step S110: insert ATMVP into the affine merge candidate list of the current block.
ATMVP contains the motion information of the sub-blocks of the current block. In other words, when the ATMVP technique is used, the motion information of the sub-blocks of the current block is inserted into the affine merge candidate list, so that the affine merge mode can perform motion compensation at the sub-block level, thereby improving the overall coding performance of the video. The implementation of step S110 is described in detail below in conjunction with FIG. 7 and is not elaborated here.
The motion information includes a combination of one or more of the following: a motion vector; a motion vector difference; a reference frame index value; a reference direction of inter prediction; information on whether an image block uses intra coding or inter coding; and the division mode of an image block.
Step S120: insert inherited affine candidates into the affine merge candidate list.
For example, as shown in FIG. 4, the neighboring blocks of the current block may be scanned in the order B0->A0->B1->A1->B2, and the CPMVs of the neighboring blocks that use the affine merge mode are taken as affine candidates of the current block and inserted into the affine merge candidate list of the current block.
Step S130: determine whether the number of affine candidates in the affine merge candidate list is less than a preset value.
If the number of affine candidates in the affine merge candidate list has reached the preset value, the process of FIG. 3 ends; if the number of affine candidates in the affine merge candidate list is less than the preset value, step S140 continues.
Step S140: insert constructed affine candidates into the affine merge candidate list.
For example, the motion information of the neighboring blocks of the current block may be combined to construct new affine candidates, and the constructed affine candidates are inserted into the affine merge candidate list.
Step S150: determine whether the number of affine candidates in the affine merge candidate list is less than the preset value.
If the number of affine candidates in the affine merge candidate list has reached the preset value, the process of FIG. 3 ends; if the number of affine candidates in the affine merge candidate list is less than the preset value, step S160 continues.
In step S160, 0 vectors are inserted into the affine merge candidate list.
In other words, the affine merge candidate list is padded with 0 vectors so that it reaches the preset value.
It should be understood that, in other scenarios, if the ATMVP implementation is not used, the construction process shown in FIG. 3 may not include step S110.
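The FIG. 3 flow (steps S110 to S160) can be condensed into the following sketch. The candidate values and the preset list length are illustrative, and the actual CPMVP derivation is omitted:

```python
# Condensed sketch of the affine merge candidate list construction in FIG. 3.
def build_affine_merge_list(atmvp, inherited, constructed, preset_len=5):
    """S110: insert ATMVP; S120: inherited affine candidates; S130/S150: stop
    once the list is full; S140: constructed candidates; S160: pad with 0."""
    lst = []
    if atmvp is not None:                      # S110 (optional in some scenarios)
        lst.append(atmvp)
    for cand in inherited:                     # S120, scan order B0->A0->B1->A1->B2
        if len(lst) >= preset_len:             # S130
            return lst
        lst.append(cand)
    for cand in constructed:                   # S140
        if len(lst) >= preset_len:             # S150
            return lst
        lst.append(cand)
    while len(lst) < preset_len:               # S160: pad with 0 vectors
        lst.append(0)
    return lst

print(build_affine_merge_list("ATMVP", ["inh1", "inh2"], ["con1"]))
# ['ATMVP', 'inh1', 'inh2', 'con1', 0]
```

Passing `atmvp=None` models the scenario mentioned above in which step S110 is skipped; the list is then filled entirely by steps S120 to S160.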
The construction process of the MV candidate list in the IBC mode is similar to the construction process illustrated in FIG. 3, except that the scanning order and length of the MV candidate list are different. Taking the IBC merge mode as an example, as shown in FIG. 4, the scanning order of the MV candidate list is A0->B0.
It should be understood that, based on the principle of the IBC mode, the MV candidate list in the IBC mode includes only spatial candidate blocks and does not include temporal candidate blocks; optionally, history-based MVP (History-based Motion Vector Prediction) candidates may follow the spatial candidate blocks.
Optionally, the lengths of the MV candidate lists of the IBC merge mode and the IBC inter mode are preset values; for example, the length of the MV candidate list in the IBC merge mode is 6, and the length of the MV candidate list in the IBC inter mode is 2. If the preset value is not reached after the MVs of the spatial candidate blocks are filled in, other content may be filled in afterwards, for example, zero padding.
That is to say, there are multiple ways of constructing the motion vector candidate list, and each construction way requires corresponding software and hardware resources, which reduces resource utilization.
In view of the above problem, this application proposes a video processing method and device, which can reduce the waste of software and hardware resources to a certain extent and improve system performance.
The embodiments of this application are described in detail below in conjunction with FIG. 6.
FIG. 6 is a schematic flowchart of a video processing method provided by an embodiment of this application. The method of FIG. 6 is applicable to both the encoding end and the decoding end.
In step S210, according to a first-type candidate block acquisition order, motion vectors of first-type candidate blocks of the current block are acquired and added to a motion vector candidate list in a first-type prediction mode, where the first-type prediction mode performs intra prediction on the current block based on motion information of encoded blocks in the current frame, and the first acquired first-type candidate block is the neighboring block above the current block in the current frame;
In step S220, the current block is predicted according to the motion vector candidate list of the current block.
Optionally, the current block may also be called the current CU. The first-type prediction mode performs intra prediction on the current block based on motion information of encoded blocks in the current frame; as an example, the first-type prediction mode may be the IBC mode, for example, the IBC merge mode or the IBC inter mode.
The first-type candidate blocks are the candidate blocks used for prediction based on the first-type prediction mode. The first-type candidate blocks include at least one candidate block, and the at least one candidate block is a neighboring block of the current block in the current frame.
It should be understood that the embodiments of this application do not limit the sizes of the neighboring block and the current block. For example, the neighboring block may be the same size as the current block, for example, both the current block and the neighboring block are 64*64; or the neighboring block may be smaller than the current block, for example, the size of the current block is 64*64 and the size of the neighboring block is 16*16.
The first-type candidate block acquisition order is the candidate block acquisition order corresponding to the first-type prediction mode. Based on the first-type candidate block acquisition order, the neighboring block above the current block in the current frame is acquired first; as shown in FIG. 5, the first acquired candidate block is the neighboring block B0.
When the first-type candidate blocks include multiple candidate blocks, the second acquired candidate block among the first-type candidate blocks may be the neighboring block to the left of the current block in the current frame; that is, as shown in FIG. 5, the MVs of the neighboring blocks of the current block may be acquired in the order B0->A0 and added to the MV candidate list corresponding to the first-type prediction mode.
Optionally, if the preset length of the MV candidate list has not been reached after all the first-type candidate blocks have been added to the MV candidate list, in one implementation, the MV candidate list may be padded with 0 vectors until it reaches the preset length.
It should be understood that the embodiments of this application do not limit the number of first-type candidate blocks in the first-type prediction mode, which may be, for example, 2 or more, for example, 4 or 5.
When the number of first-type candidate blocks is 5, as an example, the acquisition order of the first-type candidate blocks may be: the neighboring block above the current block in the current frame, the neighboring block to the left of the current block in the current frame, the neighboring block at the upper right of the current block in the current frame, the neighboring block at the lower left of the current block in the current frame, and the neighboring block at the upper left of the current block in the current frame. That is, as shown in FIG. 4, the MV of each neighboring block is acquired in the order B0->A0->B1->A1->B2.
在本申请一些实施例中,对于第二类预测模式,可以根据第二类候选块获取顺序,获取所述当前块的第二类候选块的运动矢量加入第二类预测模式下的运动矢量候选列表。
在一些实施例中,所述第二类预测模式基于运动矢量候选列表中的运动信息对所述当前块进行帧间预测,所述第一类候选块的前N个候选块和所述第二类候选块的前N个候选块分别相同,所述N大于或等于1。
即,所述第一类候选块的第i个候选块和所述第二类候选块的第i个候选块相同,其中,1≤i≤N,N≤P且N≤Q,P为所述第一类候选块的数量,Q为所述第二类候选块的数量,P,Q为正整数。
换句话说,第一类预测模式和所述第二类预测模式对应的候选块获取顺序至少部分相同。这样,所述第一类预测模式和所述第二预测模式的候选块获取顺序中相同的部分可以采用相同的软硬件资源实现,有利于提升资源利用率。
可选地,在一些实施例中,所述第二类候选块的数量Q大于所述第一类候选块P的数量。作为一个示例,所述第一类候选块的数量P为2,所述第二类候选块的数量Q为4或5。此情况下,所述N可以为所述第一类候选块 的数量。
在一些实施例中,所述第一类候选块和所述第二类候选块中的前两个候选块相同,例如,所述第一类候选块中的前两个候选块包括所述当前块在当前帧的上方的相邻块和所述当前块在当前帧的左侧的相邻块,所述第二类候选块的前两个候选块包括所述当前块在当前帧的上方的相邻块和所述当前块在当前帧的左侧的相邻块。
应理解,所述第一类候选块和所述第二类候选块中的前N个候选块相同仅表示所述第一类候选块和所述第二类候选块中的前N个候选块相对于所述当前块的位置相同,并不表示该前N个候选块的大小和内容相同。
In one implementation, the first-type candidate block acquisition order may be B0->A0, and the second-type candidate block acquisition order may be B0->A0->B1->A1->B2.
In another implementation, the first-type candidate block acquisition order may be A0->B0, and the second-type candidate block acquisition order may be A0->B0->B1->A1->B2.
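The shared-prefix relationship between the two acquisition orders can be sketched as follows; a single function serving both modes is one way to picture the resource reuse the text describes. The mode names `"first_type"` and `"second_type"` are illustrative labels, not standard terminology.

```python
def acquisition_order(mode):
    """Return the candidate acquisition order for a prediction mode.
    The shared prefix means both modes can reuse the same fetch logic
    (the same 'software and hardware resources' in the text's terms)."""
    shared_prefix = ["B0", "A0"]               # first N candidates, identical
    if mode == "first_type":                   # e.g. IBC merge / IBC inter
        return shared_prefix
    if mode == "second_type":                  # e.g. merge / AMVP / affine
        return shared_prefix + ["B1", "A1", "B2"]
    raise ValueError("unknown mode: %s" % mode)

first_order = acquisition_order("first_type")
second_order = acquisition_order("second_type")
```

The second-type order is a strict extension of the first-type order, so the prefix fetch can be implemented once.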
Optionally, in other embodiments, the number of second-type candidate blocks equals the number of first-type candidate blocks. As one example, the number of first-type candidate blocks is 5 and the number of second-type candidate blocks is 5. As another example, the number of first-type candidate blocks is 2 and the number of second-type candidate blocks is 2.
In one implementation, the first-type candidate block acquisition order may be B0->A0->B1->A1->B2, and the second-type candidate block acquisition order may be B0->A0->B1->A1->B2.
In another implementation, the first-type candidate block acquisition order may be A0->B0->B1->A1->B2, and the second-type candidate block acquisition order may be A0->B0->B1->A1->B2.
Further, in some embodiments, step S220 may include:
predicting the current block according to the motion vector candidate list under the first-type prediction mode and/or the motion vector candidate list under the second-type prediction mode of the current block.
The detailed implementation of step S220 may follow the related art, and the embodiments of the present application are not limited in this respect.
Optionally, in some embodiments, the first-type prediction mode includes the IBC merge mode and/or the IBC inter mode, or may include other prediction modes that perform prediction based on motion information of already-coded blocks within the current frame; the second-type prediction mode includes at least one of the following: the ATMVP mode, the AMVP mode, the merge mode, and the affine mode, or may include other inter prediction modes, which is not limited in the embodiments of the present application.
Taking the ATMVP mode as an example, the construction process of the MV candidate list is illustrated below with reference to FIG. 7. In some examples, the method of inserting the ATMVP into the affine merge candidate list of the current block described below need not be limited to the embodiment shown in FIG. 3.
As shown in FIG. 7, the implementation of the ATMVP technique, i.e., the way in which the motion information of the sub-blocks of the current block is acquired, can be roughly divided into two steps: steps S310 and S320.
In step S310, a corresponding block of the current block in a reference frame is determined.
In the current ATMVP technique, the frame of the current frame (the frame in which the current block is located) that is used for acquiring motion information is called the co-located picture. The co-located picture of the current frame is set at slice initialization. Taking forward prediction as an example, the first reference frame list may be a forward reference frame list, or a reference frame list containing a first group of reference frames, the first group including reference frames that precede and follow the current frame in temporal order. At slice initialization, the first frame in the first reference frame list of the current block is usually set as the co-located picture of the current frame.
The corresponding block of the current block in the reference frame is determined via a temporal motion vector (temp MV). Therefore, to obtain the corresponding block of the current block in the reference frame, this temporal motion vector needs to be derived first. The derivation of the temporal motion vector is described below, taking forward prediction and bi-prediction respectively as examples.
For forward prediction, the number of reference frame lists (also called reference lists or reference picture lists) of the current block is 1. The reference frame list of the current block may be called the first reference frame list (reference list 0). In one scenario, this first reference frame list may be a forward reference frame list. The co-located picture of the current frame is usually set to the first frame in the first reference frame list.
In the process of deriving the temporal motion vector, one implementation is: first scan the motion vector candidate list of the current block (which may be constructed based on the motion vectors of image blocks at 4 spatially neighboring positions) and take the first candidate motion vector in the list as the initial temporal motion vector. Then scan the first reference frame list of the current block: if the reference frame of the first candidate motion vector is the same as the co-located picture of the current frame, the first candidate motion vector may be taken as the temporal motion vector; if the reference frame of the first candidate motion vector differs from the co-located picture of the current frame, the temporal motion vector may be set to the zero vector and the scan stopped. In this implementation, a motion vector candidate list needs to be constructed in order to obtain the first candidate motion vector in the list.
In another implementation, the motion vector of a particular spatially neighboring block of the current block may be taken directly as the initial temporal motion vector. If the reference frame of that spatially neighboring block's motion vector is the same as the co-located picture of the current frame, it may be taken as the temporal motion vector; otherwise, the temporal motion vector may be set to the zero vector and the scan stopped. Here, the spatially neighboring block may be any one of the already-coded blocks around the current block; for example, it may be fixed as the left block of the current block, or fixed as the above block of the current block, or fixed as the upper-left block of the current block.
For bi-prediction, the number of reference frame lists of the current block is 2, i.e., the first reference frame list (reference list 0) and the second reference frame list (reference list 1). In one scenario, the first reference frame list may be a forward reference frame list and the second reference frame list may be a backward reference frame list.
In the process of deriving the temporal motion vector, one implementation is: first scan the current motion vector candidate list and take the first candidate motion vector in the list as the initial temporal motion vector. Then scan one reference frame list in the current reference direction of the current block (which may be the first reference frame list or the second reference frame list): if the reference frame of the first candidate motion vector is the same as the co-located picture of the current frame, the first candidate motion vector may be taken as the temporal motion vector; if the reference frame of the first candidate motion vector differs from the co-located picture of the current frame, continue scanning the reference frame list in the other reference direction of the current block. Likewise, if the reference frame of the first candidate motion vector in that other reference frame list is the same as the co-located picture of the current frame, the first candidate motion vector may be taken as the temporal motion vector; if it differs from the co-located picture of the current frame, the temporal motion vector may be set to the zero vector and the scan stopped. Note that in some other scenarios, both the first and second reference frame lists may contain reference frames that precede and follow the current frame in temporal order; the bi-prediction referred to here means that reference frames of different reference directions are selected from the first and second reference frame lists. In this implementation, deriving the temp MV of ATMVP in bi-prediction still requires constructing a motion vector candidate list.
In another implementation, the motion vector of a particular spatially neighboring block of the current block may be taken directly as the initial temporal motion vector. For bi-prediction, first scan one reference frame list in the current reference direction of the current block (which may be the first reference frame list or the second reference frame list): if the reference frame of the spatially neighboring block's motion vector in that reference direction is the same as the co-located picture of the current frame, it may be taken as the temporal motion vector. Optionally, if the reference frame of the spatially neighboring block's motion vector in that reference direction differs from the co-located picture of the current frame, continue scanning the reference frame list in the other reference direction of the current block. Likewise, if the reference frame of the spatially neighboring block's motion vector in that other reference frame list is the same as the co-located picture of the current frame, the spatially neighboring block's motion vector may be taken as the temporal motion vector; if the reference frame of the spatially neighboring block's motion vector differs from the co-located picture of the current frame, the temporal motion vector may be set to the zero vector and the scan stopped. Here, the spatially neighboring block may be any one of the already-coded blocks around the current block, e.g., fixed as the left block of the current block, or fixed as the above block of the current block, or fixed as the upper-left block of the current block.
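The two-list scan for the spatial-neighbour variant just described can be sketched as follows. The per-list `(reference_frame, mv)` dictionary for the neighbour and the frame labels are illustrative assumptions; the control flow (accept the MV only when its reference frame equals the co-located picture, fall through to the other list, else zero vector) follows the text.

```python
ZERO_MV = (0, 0)

def derive_temporal_mv_biprediction(neighbor_refs, colocated, scan_order):
    """Derive the temporal MV for bi-prediction from one fixed spatial
    neighbour: check the neighbour's reference frame in each list of
    scan_order, accept its MV when that reference frame equals the
    co-located picture, and otherwise fall back to the zero vector."""
    for ref_list in scan_order:
        ref_frame, mv = neighbor_refs.get(ref_list, (None, None))
        if ref_frame == colocated:
            return mv
    return ZERO_MV  # neither list matched: stop and use the zero vector

# Hypothetical neighbour: list-0 MV refers to frame8, list-1 MV to frame4.
neighbor = {"list0": ("frame8", (5, -2)), "list1": ("frame4", (1, 1))}
```

Scanning list 0 first, a co-located picture of frame4 falls through to list 1 and picks up `(1, 1)`.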
For bi-prediction, the scanning order of the first reference frame list and the second reference frame list may be determined by the following rule:
when the current frame uses the low-delay coding mode and the co-located picture of the current frame is set to the first frame in the second reference frame list, scan the second reference frame list first; otherwise, scan the first reference frame list first.
Here, the current frame using the low-delay coding mode may indicate that all reference frames of the current frame precede the current frame in the playback order of the video sequence; the co-located picture of the current frame being set to the first frame in the second reference frame list may indicate that the quantization step of the first slice of the first reference frame list of the current frame is smaller than the quantization step of the first slice of the second reference frame list.
After the temporal motion vector has been derived, it can be used to find the corresponding block of the current block in the reference frame.
In step S320, the motion information of the sub-blocks of the current block is acquired according to the corresponding block of the current block.
As shown in FIG. 8, the current block may be divided into multiple sub-blocks, and the motion information of each sub-block is then determined in the corresponding block. Notably, for each sub-block, the motion information of the corresponding block may be determined by the smallest motion-information storage unit in which it is located.
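The per-sub-block fetch in step S320 can be sketched as follows. The 8×8 sub-block size, the dictionary keyed by grid position standing in for the corresponding block's motion-storage units, and the zero-MV default are all illustrative assumptions; the text only fixes that each sub-block reads motion information from the smallest storage unit covering its co-located position.

```python
def subblock_motion(corresponding_block_mvs, block_w, block_h, sub=8):
    """Split the current block into sub x sub sub-blocks and read each
    sub-block's motion info from the co-located position inside the
    corresponding block (modelled here as a dict of grid positions)."""
    motions = {}
    for y in range(0, block_h, sub):
        for x in range(0, block_w, sub):
            # each sub-block inherits the motion info stored for the
            # smallest motion-storage unit covering its position
            motions[(x, y)] = corresponding_block_mvs.get((x, y), (0, 0))
    return motions

# Hypothetical 16x16 corresponding block with per-unit motion info.
grid = {(0, 0): (2, 1), (8, 0): (2, 1), (0, 8): (-1, 0), (8, 8): (0, 3)}
sub_motions = subblock_motion(grid, 16, 16)
```

A 16×16 block at 8×8 granularity yields four sub-blocks, each carrying its own motion vector.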
As can be seen from the ATMVP implementation process described above, for bi-prediction the worst case is that both reference frame lists are scanned during the derivation of the temporal motion vector and still no qualifying temporal motion vector is derived; in that case, scanning both reference frame lists is redundant.
Moreover, in bi-prediction, if the coding mode of the current frame is the low-delay mode (low delay B) or the random access mode, the reference frames in the first and second reference frame lists overlap to some extent; therefore, in the process of obtaining the temporal motion vector, scanning both reference frame lists involves redundant operations.
Further, before the acquiring the motion vectors of the second-type candidate blocks of the current block and adding them to the motion vector candidate list under the second-type prediction mode, the method of FIG. 6 further includes the following steps:
Referring to FIG. 9, in step S510, a reference frame list of the current block is obtained, the reference frame list of the current block including a first reference frame list and a second reference frame list.
The reference frame list of the current block including a first reference frame list and a second reference frame list indicates that bi-directional inter prediction is to be performed on the current block.
Optionally, the first reference frame list may be a forward reference frame list, or a reference frame list containing a first group of reference frames, the first group including reference frames that precede and follow the current frame in temporal order.
Optionally, the second reference frame list may be a backward reference frame list, or a reference frame list containing a second group of reference frames, the second group including reference frames that precede and follow the current frame in temporal order.
Note that in some scenarios, both the first and second reference frame lists may contain reference frames that precede and follow the current frame in temporal order; the bi-prediction referred to here may mean that reference frames of different reference directions are selected from the first and second reference frame lists.
In step S520, a target reference frame list is determined according to the reference frame list of the current block.
The target reference frame list is one of the first reference frame list and the second reference frame list. The target reference frame list may be selected at random or according to a certain rule, for example, the following rule: if the current frame in which the current block is located uses the low-delay coding mode and the co-located picture of the current frame is the first frame in the second reference frame list, determine the second reference frame list as the target reference frame list; and/or, if the current frame in which the current block is located does not use the low-delay coding mode or the co-located picture of the current frame is not the first frame in the second reference frame list, determine the first reference frame list as the target reference frame list.
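The selection rule in step S520 can be sketched as a two-condition check; the boolean-flag interface and the string return values are illustrative assumptions, while the rule itself (list 1 only when both conditions hold) follows the text.

```python
def select_target_ref_list(low_delay, colocated_is_first_of_list1):
    """Select the target reference frame list per the rule above: pick
    reference list 1 only when the current frame uses low-delay coding
    AND its co-located picture is the first frame of list 1; in every
    other case pick reference list 0."""
    if low_delay and colocated_is_first_of_list1:
        return "reference list 1"
    return "reference list 0"
```

Only one of the four condition combinations selects list 1, which is what makes the subsequent single-list scan well defined.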
Further, the acquiring the motion vectors of the second-type candidate blocks of the current block and adding them to the motion vector candidate list under the second-type prediction mode may include the following steps S530, S540, and S550.
In step S530, the temporal motion vector of the current block is determined according to the target reference frame list of the current block.
In the bi-prediction process, the embodiments of the present application determine the temporal motion vector of the current block according to one of the first reference frame list and the second reference frame list. In other words, regardless of whether a temporal motion vector can be derived from the target reference frame list, scanning stops once the target reference frame list has been scanned. Put differently, the temporal motion vector of the current block may be determined solely according to the target reference frame list.
For example, the first candidate motion vector may first be selected from the current motion vector candidate list (which may be constructed based on the motion vectors of image blocks at 4 spatially neighboring positions); the reference frame of the first candidate motion vector is then looked up in the target reference frame list; when the reference frame of the first candidate motion vector is the same as the co-located picture of the current block, the first candidate motion vector may be determined as the temporal motion vector; when the reference frame of the first candidate motion vector differs from the co-located picture of the current block, scanning also stops, instead of continuing to scan the other reference frame list of the current block as described in the embodiment of FIG. 7; in this case, the zero vector may be taken as the temporal motion vector of the current block.
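The single-list derivation just described can be sketched as follows. Candidates are modelled as `(mv, reference_index)` pairs and the target list as a list of frame labels; both representations and the labels are illustrative assumptions. The key point from the text is that only the target list is consulted and scanning never falls through to the other list.

```python
def temporal_mv_single_scan(candidate_list, target_list_refs, colocated):
    """Derive the temporal MV by scanning ONLY the target reference frame
    list: take the first candidate MV and keep it only when its reference
    frame (looked up in the target list) is the co-located picture.
    Otherwise return the zero vector; no second list is ever scanned."""
    if not candidate_list:
        return (0, 0)
    first_mv, ref_idx = candidate_list[0]
    if target_list_refs[ref_idx] == colocated:
        return first_mv
    return (0, 0)   # stop here, unlike the two-list scheme of FIG. 7

# Hypothetical candidate list: one MV referring to index 0 of the target list.
cands = [((3, 1), 0)]
```

Compared with the two-list scan, the worst case here is a single lookup rather than two full list scans.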
In step S540, the motion information of the sub-blocks of the current block is determined according to the temporal motion vector.
For example, the corresponding block of the current block in the reference frame may be determined according to the temporal motion vector; then the motion information of the sub-blocks of the current block may be determined according to the corresponding block of the current block in the reference frame. The motion information includes a combination of one or more of the following: motion vector; motion vector difference; reference frame index; inter-prediction reference direction; information on whether the image block is intra-coded or inter-coded; partition mode of the image block. Step S540 may be implemented with reference to step S320 above and is not detailed here.
In step S550, the motion information of the sub-blocks of the current block is added to the motion vector candidate list under the second-type prediction mode, so as to perform inter prediction on the current block according to the motion vector candidate list.
As an example, step S550 may include: performing inter prediction according to the motion information of the sub-blocks of the current block on a per-sub-block basis.
For example, as shown in FIG. 3, the motion information of the sub-blocks of the current block may be inserted as ATMVP into the affine merge candidates list of the current block, and the complete affine merge candidates list may then be constructed following steps S120 to S160 in FIG. 3. Next, the candidate motion vectors in that affine merge candidates list may be used to perform inter prediction on the current block to determine the optimal candidate motion vector. The detailed implementation of step S550 may follow the related art, and the embodiments of the present application are not limited in this respect.
By limiting the number of reference frame lists that need to be scanned in the bi-prediction process, the embodiments of the present application can simplify operations at the encoding and decoding sides.
It can be understood that, when the method of FIG. 9 is applied to the encoding side and to the decoding side respectively, the inter-prediction process on the current block described in step S550 differs. For example, when the method of FIG. 9 is applied to the encoding side, performing inter prediction on the current block may include: determining a prediction block of the current block; and calculating a residual block of the current block according to the original block and the prediction block of the current block. As another example, when the method of FIG. 9 is applied to the decoding side, performing inter prediction on the current block may include: determining a prediction block and a residual block of the current block; and calculating a reconstructed block of the current block according to the prediction block and the residual block of the current block.
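The encoder/decoder asymmetry just described reduces to a residual round-trip, which can be sketched as follows. Blocks are modelled as flat lists of sample values for brevity; real codecs operate on 2-D arrays with transform and quantization in between, which are omitted here.

```python
def encode_residual(original, predicted):
    """Encoder side: residual = original - prediction (element-wise)."""
    return [o - p for o, p in zip(original, predicted)]

def decode_reconstruct(predicted, residual):
    """Decoder side: reconstruction = prediction + residual."""
    return [p + r for p, r in zip(predicted, residual)]

# Hypothetical 1x3 block of samples and its prediction.
orig = [10, 12, 9]
pred = [9, 12, 11]
res = encode_residual(orig, pred)
```

Absent quantization loss, decoding the residual against the same prediction reproduces the original block exactly.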
The method embodiments of the present application have been described in detail above with reference to FIGS. 1 to 9; the apparatus embodiments of the present application are described in detail below with reference to FIG. 10. It should be understood that the descriptions of the method embodiments and of the apparatus embodiments correspond to each other; therefore, for parts not described in detail, reference may be made to the preceding method embodiments.
FIG. 10 is a schematic structural diagram of a video processing apparatus provided by an embodiment of the present application. The apparatus 60 of FIG. 10 includes: a memory 62 and a processor 64.
The memory 62 may be used to store code.
The processor 64 may be used to execute the code stored in the memory to perform the following operations:
acquiring, according to a first-type candidate block acquisition order, motion vectors of first-type candidate blocks of a current block and adding them to a motion vector candidate list under a first-type prediction mode, the first-type prediction mode performing intra prediction on the current block based on motion information of already-coded blocks within the current frame, wherein the first acquired first-type candidate block is the neighboring block above the current block in the current frame;
predicting the current block according to the motion vector candidate list of the current block.
Optionally, the second acquired first-type candidate block is the neighboring block to the left of the current block in the current frame.
Optionally, the processor 64 is further configured to:
acquire, according to a second-type candidate block acquisition order, motion vectors of second-type candidate blocks of the current block and add them to a motion vector candidate list under a second-type prediction mode, the second-type prediction mode performing inter prediction on the current block based on motion information in a motion vector candidate list;
the predicting the current block according to the motion vector candidate list of the current block includes:
predicting the current block according to the motion vector candidate list under the first-type prediction mode and/or the motion vector candidate list under the second-type prediction mode of the current block.
Optionally, the i-th first-type candidate block and the i-th second-type candidate block are the same, where 1≤i≤N, N≤P and N≤Q, P being the number of first-type candidate blocks and Q being the number of second-type candidate blocks.
Optionally, the first N first-type candidate blocks include the neighboring block above the current block in the current frame and the neighboring block to the left of the current block in the current frame;
the first N second-type candidate blocks include the neighboring block above the current block in the current frame and the neighboring block to the left of the current block in the current frame.
Optionally, the number of second-type candidate blocks is greater than the number of first-type candidate blocks.
Optionally, the first-type prediction mode includes the intra block copy (IBC) merge mode and/or the IBC inter mode, and the second-type prediction mode includes at least one of the following: the alternative/advanced temporal motion vector prediction (ATMVP) mode, the alternative/advanced motion vector prediction (AMVP) mode, the merge mode, and the affine mode.
Optionally, before the acquiring the motion vectors of the second-type candidate blocks of the current block and adding them to the motion vector candidate list under the second-type prediction mode, the processor 64 is further configured to:
obtain a reference frame list of the current block, the reference frame list of the current block including a first reference frame list and a second reference frame list;
determine a target reference frame list according to the reference frame list of the current block, the target reference frame list being one of the first reference frame list and the second reference frame list;
the acquiring the motion vectors of the second-type candidate blocks of the current block and adding them to the motion vector candidate list under the second-type prediction mode includes:
determining a temporal motion vector of the current block according to the target reference frame list of the current block;
determining motion information of sub-blocks of the current block according to the temporal motion vector;
adding the motion information of the sub-blocks of the current block to the motion vector candidate list under the second-type prediction mode.
Optionally, the determining a temporal motion vector of the current block according to the target reference frame list of the current block includes:
determining the motion vector of a spatially neighboring block at a specific position of the current block;
when the reference frame of the motion vector of the spatially neighboring block is the same as the co-located picture of the current block, determining the motion vector of the spatially neighboring block as the temporal motion vector.
Optionally, the position of the spatially neighboring block at the specific position of the current block is the same as the position of the neighboring block whose motion vector is first acquired for the motion vector candidate list under the first-type prediction mode.
Optionally, when the reference frame of the spatially neighboring block differs from the co-located picture of the current block, the temporal motion vector is determined as the zero vector.
In the above embodiments, implementation may be wholly or partially by software, hardware, firmware, or any other combination thereof. When implemented in software, implementation may be wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present invention are wholly or partially produced. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., digital video disc (DVD)), a semiconductor medium (e.g., solid state disk (SSD)), or the like.
Persons of ordinary skill in the art will appreciate that the units and algorithm steps of each example described in connection with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functions using different methods for each particular application, but such implementations should not be considered beyond the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units is merely a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit.
The above are only specific implementations of the present application, but the scope of protection of the present application is not limited thereto. Any changes or substitutions readily conceivable by persons skilled in the art within the technical scope disclosed in the present application shall be covered by the scope of protection of the present application. Therefore, the scope of protection of the present application shall be subject to the scope of protection of the claims.

Claims (22)

  1. A video processing method, comprising:
    acquiring, according to a first-type candidate block acquisition order, motion vectors of first-type candidate blocks of a current block and adding them to a motion vector candidate list under a first-type prediction mode, the first-type prediction mode performing intra prediction on the current block based on motion information of already-coded blocks within a current frame, wherein the motion vector first acquired for the motion vector candidate list under the first-type prediction mode is the motion vector of a neighboring block above the current block in the current frame;
    predicting the current block according to the motion vector candidate list of the current block.
  2. The method according to claim 1, wherein the motion vector second acquired for the motion vector candidate list under the first-type prediction mode is the motion vector of a neighboring block to the left of the current block in the current frame.
  3. The method according to claim 1 or 2, further comprising:
    acquiring, according to a second-type candidate block acquisition order, motion vectors of second-type candidate blocks of the current block and adding them to a motion vector candidate list under a second-type prediction mode, the second-type prediction mode performing inter prediction on the current block based on motion information in a motion vector candidate list;
    wherein the predicting the current block according to the motion vector candidate list of the current block comprises:
    predicting the current block according to the motion vector candidate list under the first-type prediction mode and/or the motion vector candidate list under the second-type prediction mode of the current block.
  4. The method according to claim 3, wherein the i-th first-type candidate block and the i-th second-type candidate block are the same, where 1≤i≤N, N≤P and N≤Q, P being the number of first-type candidate blocks and Q being the number of second-type candidate blocks.
  5. The method according to claim 3 or 4, wherein
    the first N first-type candidate blocks include the neighboring block above the current block in the current frame and the neighboring block to the left of the current block in the current frame;
    the first N second-type candidate blocks include the neighboring block above the current block in the current frame and the neighboring block to the left of the current block in the current frame.
  6. The method according to any one of claims 3 to 5, wherein
    the number of second-type candidate blocks is greater than the number of first-type candidate blocks.
  7. The method according to any one of claims 3 to 6, wherein the first-type prediction mode includes the intra block copy (IBC) merge mode and/or the IBC inter mode, and the second-type prediction mode includes at least one of the following: the alternative/advanced temporal motion vector prediction (ATMVP) mode, the alternative/advanced motion vector prediction (AMVP) mode, the merge mode, and the affine mode.
  8. The method according to claim 3, wherein
    before the acquiring motion vectors of second-type candidate blocks of the current block and adding them to the motion vector candidate list under the second-type prediction mode, the method further comprises:
    obtaining a reference frame list of the current block, the reference frame list of the current block including a first reference frame list and a second reference frame list;
    determining a target reference frame list according to the reference frame list of the current block, the target reference frame list being one of the first reference frame list and the second reference frame list;
    wherein the acquiring motion vectors of second-type candidate blocks of the current block and adding them to the motion vector candidate list under the second-type prediction mode comprises:
    determining a temporal motion vector of the current block according to the target reference frame list of the current block;
    determining motion information of sub-blocks of the current block according to the temporal motion vector;
    adding the motion information of the sub-blocks of the current block to the motion vector candidate list under the second-type prediction mode.
  9. The method according to claim 8, wherein the determining a temporal motion vector of the current block according to the target reference frame list of the current block comprises:
    determining the motion vector of a spatially neighboring block at a specific position of the current block;
    when the reference frame of the motion vector of the spatially neighboring block is the same as the co-located picture of the current block, determining the motion vector of the spatially neighboring block as the temporal motion vector.
  10. The method according to claim 9, wherein the position of the spatially neighboring block at the specific position of the current block is the same as the position of the neighboring block whose motion vector is first acquired for the motion vector candidate list under the first-type prediction mode.
  11. The method according to claim 9, wherein, when the reference frame of the spatially neighboring block differs from the co-located picture of the current block, the temporal motion vector is determined as the zero vector.
  12. A video processing apparatus, comprising:
    a memory for storing code;
    a processor for executing the code stored in the memory to perform the following operations:
    acquiring, according to a first-type candidate block acquisition order, motion vectors of first-type candidate blocks of a current block and adding them to a motion vector candidate list under a first-type prediction mode, the first-type prediction mode performing intra prediction on the current block based on motion information of already-coded blocks within a current frame, wherein the first acquired first-type candidate block is a neighboring block above the current block in the current frame;
    predicting the current block according to the motion vector candidate list of the current block.
  13. The apparatus according to claim 12, wherein the second acquired first-type candidate block is a neighboring block to the left of the current block in the current frame.
  14. The apparatus according to claim 12 or 13, wherein the processor is further configured to:
    acquire, according to a second-type candidate block acquisition order, motion vectors of second-type candidate blocks of the current block and add them to a motion vector candidate list under a second-type prediction mode, the second-type prediction mode performing inter prediction on the current block based on motion information in a motion vector candidate list;
    wherein the predicting the current block according to the motion vector candidate list of the current block comprises:
    predicting the current block according to the motion vector candidate list under the first-type prediction mode and/or the motion vector candidate list under the second-type prediction mode of the current block.
  15. The apparatus according to claim 14, wherein the i-th first-type candidate block and the i-th second-type candidate block are the same, where 1≤i≤N, N≤P and N≤Q, P being the number of first-type candidate blocks and Q being the number of second-type candidate blocks.
  16. The apparatus according to claim 14 or 15, wherein
    the first N first-type candidate blocks include the neighboring block above the current block in the current frame and the neighboring block to the left of the current block in the current frame;
    the first N second-type candidate blocks include the neighboring block above the current block in the current frame and the neighboring block to the left of the current block in the current frame.
  17. The apparatus according to any one of claims 14 to 16, wherein
    the number of second-type candidate blocks is greater than the number of first-type candidate blocks.
  18. The apparatus according to any one of claims 14 to 17, wherein the first-type prediction mode includes the intra block copy (IBC) merge mode and/or the IBC inter mode, and the second-type prediction mode includes at least one of the following: the alternative/advanced temporal motion vector prediction (ATMVP) mode, the alternative/advanced motion vector prediction (AMVP) mode, the merge mode, and the affine mode.
  19. The apparatus according to claim 14, wherein, before the acquiring motion vectors of second-type candidate blocks of the current block and adding them to the motion vector candidate list under the second-type prediction mode, the processor is further configured to:
    obtain a reference frame list of the current block, the reference frame list of the current block including a first reference frame list and a second reference frame list;
    determine a target reference frame list according to the reference frame list of the current block, the target reference frame list being one of the first reference frame list and the second reference frame list;
    wherein the acquiring motion vectors of second-type candidate blocks of the current block and adding them to the motion vector candidate list under the second-type prediction mode comprises:
    determining a temporal motion vector of the current block according to the target reference frame list of the current block;
    determining motion information of sub-blocks of the current block according to the temporal motion vector;
    adding the motion information of the sub-blocks of the current block to the motion vector candidate list under the second-type prediction mode.
  20. The apparatus according to claim 19, wherein the determining a temporal motion vector of the current block according to the target reference frame list of the current block comprises:
    determining the motion vector of a spatially neighboring block at a specific position of the current block;
    when the reference frame of the motion vector of the spatially neighboring block is the same as the co-located picture of the current block, determining the motion vector of the spatially neighboring block as the temporal motion vector.
  21. The apparatus according to claim 20, wherein the position of the spatially neighboring block at the specific position of the current block is the same as the position of the neighboring block whose motion vector is first acquired for the motion vector candidate list under the first-type prediction mode.
  22. The apparatus according to claim 20, wherein, when the reference frame of the spatially neighboring block differs from the co-located picture of the current block, the temporal motion vector is determined as the zero vector.
PCT/CN2019/130869 2019-01-03 2019-12-31 视频处理方法和装置 WO2020140915A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201980009160.3A CN111630860A (zh) 2019-01-03 2019-12-31 视频处理方法和装置

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CNPCT/CN2019/070306 2019-01-03
PCT/CN2019/070306 WO2020140242A1 (zh) 2019-01-03 2019-01-03 视频处理方法和装置

Publications (1)

Publication Number Publication Date
WO2020140915A1 true WO2020140915A1 (zh) 2020-07-09

Family

ID=70562433

Family Applications (3)

Application Number Title Priority Date Filing Date
PCT/CN2019/070306 WO2020140242A1 (zh) 2019-01-03 2019-01-03 视频处理方法和装置
PCT/CN2019/130881 WO2020140916A1 (zh) 2019-01-03 2019-12-31 视频处理方法和装置
PCT/CN2019/130869 WO2020140915A1 (zh) 2019-01-03 2019-12-31 视频处理方法和装置

Family Applications Before (2)

Application Number Title Priority Date Filing Date
PCT/CN2019/070306 WO2020140242A1 (zh) 2019-01-03 2019-01-03 视频处理方法和装置
PCT/CN2019/130881 WO2020140916A1 (zh) 2019-01-03 2019-12-31 视频处理方法和装置

Country Status (6)

Country Link
US (1) US20210337232A1 (zh)
EP (1) EP3908002A4 (zh)
JP (2) JP7328337B2 (zh)
KR (1) KR20210094089A (zh)
CN (7) CN111164976A (zh)
WO (3) WO2020140242A1 (zh)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106713910B (zh) * 2011-06-14 2019-12-10 三星电子株式会社 对图像进行解码的设备
SG11202007843YA (en) 2018-03-19 2020-10-29 Qualcomm Inc Improvements to advanced temporal motion vector prediction
CN111953997B (zh) * 2019-05-15 2024-08-09 华为技术有限公司 候选运动矢量列表获取方法、装置及编解码器
CN117395397A (zh) 2019-06-04 2024-01-12 北京字节跳动网络技术有限公司 使用临近块信息的运动候选列表构建
KR20220016839A (ko) 2019-06-04 2022-02-10 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 기하학적 분할 모드 코딩을 갖는 모션 후보 리스트
KR102662603B1 (ko) 2019-06-06 2024-04-30 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 비디오 코딩을 위한 움직임 후보 리스트 구성
WO2021008511A1 (en) 2019-07-14 2021-01-21 Beijing Bytedance Network Technology Co., Ltd. Geometric partition mode candidate list construction in video coding
CN114450959B (zh) 2019-09-28 2024-08-02 北京字节跳动网络技术有限公司 视频编解码中的几何分割模式
CN114007078B (zh) * 2020-07-03 2022-12-23 杭州海康威视数字技术股份有限公司 一种运动信息候选列表的构建方法、装置及其设备
CN114222134A (zh) * 2021-12-24 2022-03-22 杭州未名信科科技有限公司 视频数据的帧间预测方法、装置及电子设备

Citations (5)

Publication number Priority date Publication date Assignee Title
CN102685477A (zh) * 2011-03-10 2012-09-19 华为技术有限公司 获取用于合并模式的图像块的方法和设备
CN102946536A (zh) * 2012-10-09 2013-02-27 华为技术有限公司 候选矢量列表构建的方法及装置
CN103338372A (zh) * 2013-06-15 2013-10-02 浙江大学 一种视频处理方法及装置
WO2017176092A1 (ko) * 2016-04-08 2017-10-12 한국전자통신연구원 움직임 예측 정보를 유도하는 방법 및 장치
WO2017204532A1 (ko) * 2016-05-24 2017-11-30 한국전자통신연구원 영상 부호화/복호화 방법 및 이를 위한 기록 매체

Family Cites Families (31)

Publication number Priority date Publication date Assignee Title
JPH11298902A (ja) * 1998-04-08 1999-10-29 Sony Corp 画像符号化装置および方法
KR100506864B1 (ko) * 2002-10-04 2005-08-05 엘지전자 주식회사 모션벡터 결정방법
CN1870748A (zh) * 2005-04-27 2006-11-29 王云川 因特网协议电视
WO2007029914A1 (en) * 2005-07-19 2007-03-15 Samsung Eletronics Co., Ltd. Video encoding/decoding method and apparatus in temporal direct mode in hierarchica structure
JP2011077722A (ja) * 2009-09-29 2011-04-14 Victor Co Of Japan Ltd 画像復号装置、画像復号方法およびそのプログラム
US9137544B2 (en) * 2010-11-29 2015-09-15 Mediatek Inc. Method and apparatus for derivation of mv/mvp candidate for inter/skip/merge modes
MX2014000159A (es) * 2011-07-02 2014-02-19 Samsung Electronics Co Ltd Metodo y aparato para la codificacion de video, y metodo y aparato para la decodificacion de video acompañada por inter prediccion utilizando imagen co-localizada.
US9083983B2 (en) * 2011-10-04 2015-07-14 Qualcomm Incorporated Motion vector predictor candidate clipping removal for video coding
BR112014025617A2 (pt) * 2012-04-15 2017-09-19 Samsung Electronics Co Ltd método para determinar uma imagem de referência para previsão inter, e aparelho para determinar uma imagem de referência
CN103533376B (zh) * 2012-07-02 2017-04-12 华为技术有限公司 帧间预测编码运动信息的处理方法、装置和编解码系统
US10785501B2 (en) * 2012-11-27 2020-09-22 Squid Design Systems Pvt Ltd System and method of performing motion estimation in multiple reference frame
CN104427345B (zh) * 2013-09-11 2019-01-08 华为技术有限公司 运动矢量的获取方法、获取装置、视频编解码器及其方法
CN106416243B (zh) * 2014-02-21 2019-05-03 联发科技(新加坡)私人有限公司 利用基于帧内图像区块复制预测的视频编码方法
WO2015143603A1 (en) * 2014-03-24 2015-10-01 Mediatek Singapore Pte. Ltd. An improved method for temporal motion vector prediction in video coding
US9854237B2 (en) * 2014-10-14 2017-12-26 Qualcomm Incorporated AMVP and merge candidate list derivation for intra BC and inter prediction unification
US11477477B2 (en) * 2015-01-26 2022-10-18 Qualcomm Incorporated Sub-prediction unit based advanced temporal motion vector prediction
CN104717513B (zh) * 2015-03-31 2018-02-09 北京奇艺世纪科技有限公司 一种双向帧间预测方法及装置
CN104811729B (zh) * 2015-04-23 2017-11-10 湖南大目信息科技有限公司 一种视频多参考帧编码方法
US10271064B2 (en) * 2015-06-11 2019-04-23 Qualcomm Incorporated Sub-prediction unit motion vector prediction using spatial and/or temporal motion information
CN108432250A (zh) * 2016-01-07 2018-08-21 联发科技股份有限公司 用于视频编解码的仿射帧间预测的方法及装置
WO2017131908A1 (en) * 2016-01-29 2017-08-03 Google Inc. Dynamic reference motion vector coding mode
US9866862B2 (en) * 2016-03-18 2018-01-09 Google Llc Motion vector reference selection through reference frame buffer tracking
ES2711189R1 (es) * 2016-04-06 2020-02-04 Kt Corp Metodo y aparato para procesar senales de video
WO2018066874A1 (ko) * 2016-10-06 2018-04-12 세종대학교 산학협력단 비디오 신호의 복호화 방법 및 이의 장치
US10602180B2 (en) * 2017-06-13 2020-03-24 Qualcomm Incorporated Motion vector prediction
CN109089119B (zh) * 2017-06-13 2021-08-13 浙江大学 一种运动矢量预测的方法及设备
WO2019223790A1 (en) * 2018-05-25 2019-11-28 Mediatek Inc. Method and apparatus of affine mode motion-vector prediction derivation for video coding system
CN117812256A (zh) * 2018-07-02 2024-04-02 Lg电子株式会社 图像解码设备、图像编码设备和发送设备
US10944984B2 (en) * 2018-08-28 2021-03-09 Qualcomm Incorporated Affine motion prediction
KR102651158B1 (ko) * 2018-09-20 2024-03-26 한국전자통신연구원 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
JP7212161B2 (ja) * 2018-11-29 2023-01-24 北京字節跳動網絡技術有限公司 イントラブロックコピーモードとインター予測ツールとの間の相互作用

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN102685477A (zh) * 2011-03-10 2012-09-19 华为技术有限公司 获取用于合并模式的图像块的方法和设备
CN102946536A (zh) * 2012-10-09 2013-02-27 华为技术有限公司 候选矢量列表构建的方法及装置
CN103338372A (zh) * 2013-06-15 2013-10-02 浙江大学 一种视频处理方法及装置
WO2017176092A1 (ko) * 2016-04-08 2017-10-12 한국전자통신연구원 움직임 예측 정보를 유도하는 방법 및 장치
WO2017204532A1 (ko) * 2016-05-24 2017-11-30 한국전자통신연구원 영상 부호화/복호화 방법 및 이를 위한 기록 매체

Also Published As

Publication number Publication date
JP2023139221A (ja) 2023-10-03
CN113453015A (zh) 2021-09-28
CN116866605A (zh) 2023-10-10
WO2020140916A1 (zh) 2020-07-09
CN111630861A (zh) 2020-09-04
US20210337232A1 (en) 2021-10-28
EP3908002A4 (en) 2022-04-20
CN113453015B (zh) 2022-10-25
JP7328337B2 (ja) 2023-08-16
CN113507612A (zh) 2021-10-15
JP2022515807A (ja) 2022-02-22
CN113507612B (zh) 2023-05-12
CN113194314A (zh) 2021-07-30
EP3908002A1 (en) 2021-11-10
CN111630861B (zh) 2021-08-24
CN111164976A (zh) 2020-05-15
CN111630860A (zh) 2020-09-04
CN113194314B (zh) 2022-10-25
WO2020140242A1 (zh) 2020-07-09
KR20210094089A (ko) 2021-07-28

Similar Documents

Publication Publication Date Title
WO2020140915A1 (zh) 视频处理方法和装置
US11178419B2 (en) Picture prediction method and related apparatus
WO2020140331A1 (zh) 视频图像处理方法与装置
US11102501B2 (en) Motion vector field coding and decoding method, coding apparatus, and decoding apparatus
US8311106B2 (en) Method of encoding and decoding motion picture frames
US8229233B2 (en) Method and apparatus for estimating and compensating spatiotemporal motion of image
JP7520931B2 (ja) 双方向インター予測の方法および装置
US20220232208A1 (en) Displacement vector prediction method and apparatus in video encoding and decoding and device
BR122021006509A2 (pt) método de decodificação de imagem com base na predição de movimento afim e dispositivo usando lista de candidatos à fusão afins no sistema de codificação de imagem
WO2019037533A1 (zh) 一种处理视频数据的方法和装置
US20220224912A1 (en) Image encoding/decoding method and device using affine tmvp, and method for transmitting bit stream
WO2020258024A1 (zh) 视频处理方法和装置
CN111357288B (zh) 视频图像处理方法与装置
WO2021134631A1 (zh) 视频处理的方法与装置
CN113852811A (zh) 基于cu相关性的帧间预测快速方法、系统及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19907173

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19907173

Country of ref document: EP

Kind code of ref document: A1