CN113329225A - Video coding and decoding method and device - Google Patents

Video coding and decoding method and device

Info

Publication number
CN113329225A
CN113329225A (application CN202010474296.6A)
Authority
CN
China
Prior art keywords
motion information
coding unit
block
surrounding
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010474296.6A
Other languages
Chinese (zh)
Inventor
吕卓逸
朴银姬
周川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Samsung Telecom R&D Center
Beijing Samsung Telecommunications Technology Research Co Ltd
Samsung Electronics Co Ltd
Original Assignee
Beijing Samsung Telecommunications Technology Research Co Ltd
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Samsung Telecommunications Technology Research Co Ltd, Samsung Electronics Co Ltd filed Critical Beijing Samsung Telecommunications Technology Research Co Ltd
Publication of CN113329225A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102: Adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/13: Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H04N19/134: Adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146: Data rate or code amount at the encoder output
    • H04N19/147: Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N19/169: Adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: The coding unit being an image region, e.g. an object
    • H04N19/176: The region being a block, e.g. a macroblock
    • H04N19/184: The coding unit being bits, e.g. of the compressed video stream
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/587: Predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234: Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343: Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402: Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display

Abstract

The present disclosure provides a video encoding and decoding method and device, wherein the decoding steps include: acquiring, from the code stream, the type of the coding unit currently to be decoded; if the coding unit currently to be decoded is in skip mode or direct mode, padding surrounding block motion information according to the width and height information of the coding unit, and deriving the number of valid angular prediction modes; acquiring the subtype index of the coding unit and other inter prediction mode flags from the code stream, and determining the subtype of the coding unit from this information together with the derived number of valid angular prediction modes; if the subtype of the coding unit is the angular prediction mode of skip mode or of direct mode, deriving the angular motion information, i.e., the motion information of each sub-block in the coding unit currently to be decoded; and obtaining the prediction value of each sub-block according to the motion information of each sub-block in the coding unit currently to be decoded.

Description

Video coding and decoding method and device
Technical Field
The present disclosure relates to video coding and decoding techniques, and more particularly, to inter prediction techniques in video coding and decoding.
Background
In existing video coding standards (e.g., HEVC, AVS2, AVS3, VVC), each picture in a video sequence is divided into a plurality of coding blocks, and each coding block is encoded and decoded. The specific encoding steps are prediction, transformation, quantization and entropy coding. Prediction obtains the prediction value of the current coding block using the spatial correlation between the current coding block and already-reconstructed coding blocks in the current picture, or the temporal correlation between the current coding block and reconstructed blocks in already-coded pictures. The encoder then entropy-encodes the finally selected prediction mode information together with the residual information, obtained by transforming and quantizing the difference between the prediction value and the original value of each coding block, and writes them into the code stream. The specific decoding steps include: the decoding end obtains the prediction information of the current block to be decoded from the code stream and obtains a prediction value according to the prediction information, using the same prediction method as the encoding end; it obtains the residual information of the current block to be decoded from the code stream and performs inverse quantization and inverse transformation to obtain a residual value; and it adds the residual value to the prediction value to obtain the reconstruction value of the current block to be decoded.
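The final decoding step above (prediction value plus residual value) can be sketched as follows. This is a minimal illustration only: the flat quantization step and the omission of the inverse transform are simplifying assumptions, not part of any of the named standards.

```python
# Minimal sketch of the last decoding steps described above: inverse-quantize
# the residual and add it to the prediction to reconstruct the block.
# The flat quantization step and the omitted inverse transform are
# simplifying assumptions for illustration.

def dequantize(levels, qstep):
    """Inverse quantization: scale transmitted levels back to residual values."""
    return [lvl * qstep for lvl in levels]

def reconstruct(prediction, levels, qstep):
    """Reconstruction value = prediction value + residual value."""
    residual = dequantize(levels, qstep)
    return [p + r for p, r in zip(prediction, residual)]

pred = [100, 102, 98, 101]   # prediction values for a 4-sample block
levels = [1, 0, -1, 2]       # quantized residual levels from the code stream
recon = reconstruct(pred, levels, qstep=4)
print(recon)  # [104, 102, 94, 109]
```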
The AVS3 standard defines six coding unit partitioning modes. For the different partitions, inter prediction may be inaccurate, reducing coding performance.
Disclosure of Invention
Aspects of the present disclosure are to address at least the above problems and/or disadvantages and to provide at least the advantages described below. Accordingly, aspects of the present disclosure provide an angular prediction mode motion information padding method that pads different numbers of neighboring block motion information on the left and upper sides of the current coding unit according to its shape (i.e., width and height information). This reduces the redundancy caused by padding repeated motion information on the short side, increases the diversity of the motion information padded on the long side, improves inter prediction accuracy, and improves coding performance and efficiency.
According to an aspect of the present disclosure, a method of deriving angular prediction motion information at a decoding end includes: filling surrounding block motion information according to the width and height information of the current coding unit to be decoded; and deriving angular prediction motion information from the filled surrounding block motion information.
According to an aspect of the present disclosure, padding surrounding block motion information according to aspect information of a coding unit to be currently decoded includes: the method includes determining the number of peripheral blocks filled in a horizontal direction and the number of peripheral blocks filled in a vertical direction of a coding unit to be currently decoded, respectively, according to a size relationship of a width and a height of the coding unit to be currently decoded, and filling peripheral block motion information according to the determined number of peripheral blocks to be filled.
According to an aspect of the present disclosure, padding surrounding block motion information according to aspect information of a coding unit to be currently decoded further comprises: traversing the surrounding blocks, and if the motion information of the currently traversed surrounding block is not available, padding with the motion information of other surrounding blocks, wherein available means that the block is within the current image, the block is already decoded, and the block is in inter prediction mode.
According to an aspect of the disclosure, deriving the angular prediction motion information from the padded surrounding block motion information comprises: obtaining surrounding block motion information corresponding to an angular prediction mode; and determining the number of effective angle prediction modes based on the judgment condition.
According to an aspect of the disclosure, the angle prediction mode includes: horizontal, vertical, horizontal up, horizontal down, and vertical right.
According to an aspect of the disclosure, wherein deriving the angular prediction motion information from the padded surrounding block motion information further comprises: modifying the determined valid angular prediction mode information according to the aspect information of the coding unit to be currently decoded, wherein the valid angular prediction mode information includes at least one of: the number of effective angle prediction modes, the effectiveness identification of the angle prediction modes and the auxiliary information identification of the angle prediction modes.
According to an aspect of the disclosure, the determination condition includes: judging whether the surrounding block motion information corresponding to each angular prediction mode is pairwise identical; or judging whether the motion information of the surrounding blocks at equally spaced positions corresponding to each angular prediction mode is identical; or judging whether the motion information of the surrounding blocks at arbitrarily spaced positions corresponding to each angular prediction mode is identical.
According to an aspect of the disclosure, deriving the angular prediction motion information from the padded surrounding block motion information further comprises: and determining the subtype of the current coding unit to be decoded based on the type, subtype index and other inter-frame prediction mode marks of the current coding unit to be decoded, which are acquired from the code stream, and the derived number of effective angle prediction modes.
According to an aspect of the disclosure, deriving the angular prediction motion information from the padded surrounding block motion information further comprises: predicting motion information according to the subtype of the coding unit currently to be decoded or the auxiliary information identification of the angular prediction mode.
According to an aspect of the present disclosure, a method of deriving angular prediction motion information at an encoding end includes: filling surrounding block motion information according to the width and height information of the current coding unit to be coded; and deriving angular prediction motion information from the filled surrounding block motion information.
According to an aspect of the present disclosure, padding surrounding block motion information according to aspect information of a coding unit to be currently coded includes: the method includes determining the number of peripheral blocks filled in a horizontal direction and the number of peripheral blocks filled in a vertical direction of a coding unit to be currently encoded, respectively, according to a size relationship of a width and a height of the coding unit to be currently encoded, and filling peripheral block motion information according to the determined number of peripheral blocks to be filled.
According to an aspect of the present disclosure, padding surrounding block motion information according to the aspect information of the coding unit to be currently coded further includes: traversing surrounding blocks, and if motion information of the currently traversed surrounding block is not available, padding with motion information of other surrounding blocks, wherein available means that the block is within the current image, the block is already encoded, and the block is in inter prediction mode.
According to an aspect of the disclosure, deriving the angular prediction motion information from the padded surrounding block motion information comprises: obtaining surrounding block motion information corresponding to an angular prediction mode; and determining the number of effective angle prediction modes based on the judgment condition.
According to an aspect of the disclosure, the angle prediction mode includes: horizontal, vertical, horizontal up, horizontal down, and vertical right.

According to an aspect of the disclosure, the determination condition includes: judging whether the surrounding block motion information corresponding to each angular prediction mode is pairwise identical; or judging whether the motion information of the surrounding blocks at equally spaced positions corresponding to each angular prediction mode is identical; or judging whether the motion information of the surrounding blocks at arbitrarily spaced positions corresponding to each angular prediction mode is identical.
According to an aspect of the disclosure, deriving the angular prediction motion information from the padded surrounding block motion information further comprises: deriving a prediction value of the coding unit to be coded based on the number of the effective angular prediction modes and angular prediction motion information corresponding to the effective angular prediction modes.
According to an aspect of the present disclosure, an encoding and decoding apparatus includes: a memory configured to store program instructions; a transceiver configured to receive and transmit a code stream; and a processor configured to execute instructions stored in the memory to perform the above-described method.
According to an aspect of the present disclosure, an encoding apparatus includes: a memory configured to store program instructions; a transceiver configured to transmit a code stream; and a processor configured to execute instructions stored in the memory to perform the above-described encoding method.
According to an aspect of the present disclosure, a decoding apparatus includes: a memory configured to store program instructions; a transceiver configured to receive a code stream; and a processor configured to execute instructions stored in the memory to perform the above decoding method.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
fig. 1 shows a division manner of a coding unit defined in the AVS3 standard;
fig. 2 illustrates a manner of padding neighboring block motion information of a current coding unit according to an embodiment of the present disclosure;
FIG. 3 illustrates construction of an angle prediction mode according to an embodiment of the present disclosure;
FIG. 4 illustrates a manner of motion information filling based on angular prediction mode according to an embodiment of the present disclosure;
fig. 5 illustrates a method of decoding-side motion information angle prediction according to an embodiment of the present disclosure;
fig. 6 illustrates a decoding apparatus according to an embodiment of the present disclosure;
fig. 7 illustrates a method of encoding-side motion information angle prediction according to an embodiment of the present disclosure; and
fig. 8 illustrates an encoding apparatus according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items. Further, the terms "first" and "second" used herein may describe various constituent elements, but they should not limit the corresponding constituent elements.
It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Fig. 1 shows the partitioning modes of the coding unit defined in the AVS3 standard. As shown in fig. 1, coding unit partitioning in AVS3 includes the following six types: 1) no partitioning; 2) quad-tree partitioning (into four square blocks); 3) horizontal binary-tree partitioning (into upper and lower blocks); 4) vertical binary-tree partitioning (into left and right blocks); 5) horizontal extended (I-shaped) partitioning; and 6) vertical extended (I-shaped) partitioning. For the case where the coding unit is square (i.e., the width and height are the same), the existing method of padding the motion information of surrounding neighboring blocks poses no problem. However, when the width and height of the coding unit differ, the existing method pads the same number of neighboring block motion informations on the upper side and the left side of the current coding unit. This causes the following problems: on the one hand, more repeated motion information is padded along the shorter edge, causing redundancy in motion information storage; on the other hand, the motion information padded along the longer edge is not diverse and rich enough and cannot capture more possible motion, so inter prediction becomes inaccurate and coding performance is reduced.
In inter prediction, the inter prediction modes that each prediction unit may select include the normal motion vector prediction mode, skip mode, and direct mode. If the current prediction unit is in the normal motion vector prediction mode, motion information is derived as follows: 1) derive a motion vector prediction value, decode the motion vector difference information and the motion vector basic unit to obtain the motion vector difference, and add the prediction value and the difference to obtain the motion vector value; 2) decode the reference picture information, and obtain the prediction value of the prediction unit from the position in the reference picture pointed to by the motion vector; 3) decode the residual information, perform inverse quantization and inverse transformation on it to obtain a residual value, and add the residual value to the prediction value to obtain the reconstruction value of the current prediction unit.
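Step 1 of the normal motion vector prediction mode can be sketched as below. The `derive_mv` helper and the integer basic unit are illustrative assumptions; the standard's actual syntax elements and rounding are more involved.

```python
# Sketch of motion vector derivation in normal motion vector prediction mode:
# the motion vector equals the prediction value plus the decoded difference,
# where the difference is scaled by the decoded motion vector basic unit
# (resolution). The integer basic unit here is an illustrative assumption.

def derive_mv(mvp, mvd, basic_unit):
    """Motion vector = prediction value + (difference * basic unit)."""
    return (mvp[0] + mvd[0] * basic_unit, mvp[1] + mvd[1] * basic_unit)

mv = derive_mv(mvp=(12, -4), mvd=(3, 1), basic_unit=1)
print(mv)  # (15, -3)
```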
If the current prediction unit is in skip mode or direct mode, motion information can be derived as follows: based on the temporally co-located block, spatially neighboring blocks and the like, the motion information stored by the optimal block is used as the motion information of the current prediction unit, where the motion information includes: motion vector, reference index and prediction direction.
A new derivation method of motion information is defined in the AVS3 standard for skip mode and direct mode: the motion information angular prediction mode. The angular prediction mode sets the motion information of each sub-block in the current coding unit along a certain angle; its specific implementation is described in detail below with reference to figs. 2 to 4.
1. The motion information of neighboring blocks of the current coding unit is padded.
Fig. 2 illustrates the manner of padding the neighboring block motion information of the current coding unit. As shown in fig. 2, the same number of block motion informations is padded on the upper and left sides of the current coding unit, giving upper-side padding B1-BN and D1-DN and left-side padding A1-AN and C1-CN. A block available for padding must satisfy the following conditions: a. the block is inside the picture; b. the block has been encoded or decoded; c. the block is in inter prediction mode. All neighboring blocks A1-AN, B1-BN, C1-CN and D1-DN are traversed; if the motion information of a traversed block is unavailable, it is padded with the motion information of the nearest previous available block, until the traversal is finished. The above neighboring blocks are blocks of size 4×4 in AVS3.
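The traversal-and-fill rule above can be sketched as follows. Motion information is reduced to a simple tuple or `None`, and the `pad_neighbors` helper is an illustrative stand-in for the standard's actual padding procedure.

```python
# Sketch of the neighboring-block padding rule described above: traverse the
# neighbors in a fixed order and, where a block's motion information is
# unavailable (None), fill it with that of the nearest previous available
# block. Motion information is reduced to a (dx, dy) tuple for illustration.

def pad_neighbors(motion_infos):
    """Replace each unavailable (None) entry with the nearest previous
    available motion information; leading None entries stay unfilled here."""
    padded = list(motion_infos)
    last_available = None
    for i, mi in enumerate(padded):
        if mi is None:
            if last_available is not None:
                padded[i] = last_available
        else:
            last_available = mi
    return padded

# A1..A4 with A2 and A4 unavailable (e.g. intra-coded or outside the picture):
print(pad_neighbors([(1, 0), None, (2, -1), None]))
# [(1, 0), (1, 0), (2, -1), (2, -1)]
```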
2. And constructing an angle prediction mode.
Fig. 3 shows the construction of the angular prediction modes. Five angular prediction modes are constructed as shown in fig. 3: Horizontal, Vertical, Horizontal_Up, Horizontal_Down, and Vertical_Right. From these, the neighboring sub-block motion information lists corresponding to the five angular prediction modes are obtained.
3. And (5) checking the motion information.
The motion information in the neighboring sub-block motion information lists corresponding to the five angular prediction modes is duplicate-checked separately. If different motion information exists in a list, the corresponding angular prediction mode is available; otherwise, that angular prediction mode is not available.
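The duplicate check above amounts to asking whether a mode's list contains at least two distinct entries; a sketch, with motion information again reduced to tuples for illustration:

```python
# Sketch of the duplicate check above: an angular prediction mode is available
# only if its neighboring sub-block motion information list contains at least
# two different entries; a list of identical entries offers nothing new.

def mode_available(motion_list):
    """True if the list holds at least two distinct motion informations."""
    return len(set(motion_list)) > 1

print(mode_available([(1, 0), (1, 0), (2, 3)]))  # True  -> mode available
print(mode_available([(1, 0), (1, 0), (1, 0)]))  # False -> mode unavailable
```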
4. And filling motion information.
Fig. 4 shows the manner of filling motion information according to the angular prediction mode. For each available angular prediction mode, motion information is filled into each sub-block inside the current coding unit along the corresponding angle as that sub-block's motion information. As shown in fig. 4, for the horizontal angular prediction mode, sub-blocks sub1 and sub2 are filled with the motion information of A1, and sub-blocks sub3 and sub4 are filled with the motion information of A3.
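The horizontal-mode filling can be sketched as below. The `fill_horizontal` helper, the row-to-neighbor mapping (index 2*row, inferred from the A1/A3 example), and the string motion informations are illustrative assumptions, not the standard's exact derivation.

```python
# Sketch of horizontal angular filling, matching the fig. 4 example: each
# sub-block row of the current coding unit copies the motion information of a
# left neighboring block. The row -> neighbor index mapping (A1 for the top
# row, A3 for the bottom row, i.e. index 2*row) is taken from the example.

def fill_horizontal(left_neighbors, rows, cols):
    """Return a rows x cols grid of sub-block motion information filled
    horizontally from the left neighbor column."""
    return [[left_neighbors[2 * r] for _ in range(cols)] for r in range(rows)]

A = ["A1", "A2", "A3", "A4"]  # left neighbor motion information placeholders
grid = fill_horizontal(A, rows=2, cols=2)
print(grid)  # [['A1', 'A1'], ['A3', 'A3']]
```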
According to an embodiment of the present disclosure, there is provided a decoding method. Described in detail below with reference to fig. 5.
Fig. 5 illustrates a method of decoding-side motion information angle prediction according to an embodiment of the present disclosure. As shown in fig. 5, the method includes the following steps.
Step 101: the type of the current coding unit is obtained.
Step 102: if the type of the current coding unit is a direct mode or a skip mode, the number of valid angular prediction modes is derived.
Specifically, let the width of the current coding unit be W and its height H, let the number of surrounding blocks padded on the upper side of the current coding unit be m, and let the number padded on the left side be n.
- Filling the surrounding block motion information according to the width and height information. One possible decision condition is: when W = H, m = n (m > 0, n > 0); when W > H, m > n (m > 0, n ≥ 0); when W < H, m < n (m ≥ 0, n > 0). Another possible decision condition is: when W/H > 2, m > n (m > 0, n ≥ 0); when W/H < 1/2, m < n (m ≥ 0, n > 0); otherwise, m = n (m > 0, n > 0). Other decision conditions are also possible and are not limited herein.
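One way the second decision condition above could pick concrete counts is sketched below. The baseline of one block per 4 samples and the doubling on the long side are illustrative assumptions; the text deliberately leaves the exact counts open.

```python
# Sketch of a ratio-based decision condition for the padding counts: choose
# the number of upper padded blocks m and left padded blocks n from the width
# W and height H. The concrete counts (one block per 4 samples, doubled on
# the long side) are illustrative assumptions, not mandated by the text.

def padding_counts(W, H):
    """Return (m, n): more upper blocks for wide units, more left blocks for
    tall units, equal counts otherwise."""
    m, n = W // 4, H // 4          # one block per 4 samples as a baseline
    if W > 2 * H:
        m *= 2                     # wide unit: enrich the long (upper) side
    elif H > 2 * W:
        n *= 2                     # tall unit: enrich the long (left) side
    return m, n

print(padding_counts(32, 8))   # (16, 2)  -> m > n for a wide unit
print(padding_counts(8, 32))   # (2, 16)  -> m < n for a tall unit
print(padding_counts(16, 16))  # (4, 4)   -> m == n for a square unit
```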
The specific positions of the neighboring blocks are not limited herein but should be consistent with the encoding end. They may be blocks at equally spaced positions in the column immediately to the left of the current coding unit and in the row immediately above it; or blocks at non-equally spaced positions in columns not immediately to the left of the current coding unit and rows not immediately above it. The surrounding blocks are traversed in a certain order; the specific order is not limited herein but should be consistent with the encoding end, and may be clockwise, counterclockwise, or another order. During traversal, if the motion information of the currently traversed block is unavailable, it is padded with the motion information of another block; which block's motion information is used for padding is not limited herein but should be consistent with the encoding end, and may be the nearest previous available block or another block. A block being available means: the block is within the current picture, the block has been decoded, and the block is in inter prediction mode.
- Deriving the number of angular prediction modes: obtain the surrounding block motion information corresponding to each angular prediction mode, judge whether the corresponding surrounding block motion information is the same according to a set rule, and thereby derive the number of valid angular prediction modes. The specific judgment rule is not limited herein but should be consistent with the encoding end; it may be: a. judge whether all surrounding block motion informations corresponding to each angular prediction mode are pairwise identical; if blocks with different motion information exist, the angular prediction mode is valid and the number of valid angular prediction modes is increased by 1; b. select several blocks at equally spaced positions from the surrounding blocks corresponding to each angular prediction mode and judge whether their motion information is the same; if blocks with different motion information exist, the angular prediction mode is valid and the count is increased by 1; c. select several blocks at arbitrarily spaced positions from the surrounding blocks corresponding to each angular prediction mode and judge whether their motion information is the same; if blocks with different motion information exist, the angular prediction mode is valid and the count is increased by 1.
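Judgment rule (a) can be sketched as below. The per-mode lists are illustrative stand-ins for the padded surrounding blocks, and the dictionary-of-lists layout is an assumption for the sketch.

```python
# Sketch of judgment rule (a): for each of the five angular prediction modes,
# compare all corresponding surrounding-block motion informations pairwise;
# if any two differ, the mode is valid and the count increases by 1.
# The per-mode lists here are illustrative stand-ins for the padded neighbors.

def count_valid_modes(mode_lists):
    """Number of angular prediction modes whose surrounding-block motion
    information is not all identical."""
    return sum(1 for lst in mode_lists.values() if len(set(lst)) > 1)

mode_lists = {
    "Horizontal":      [(1, 0), (1, 0)],          # all identical -> invalid
    "Vertical":        [(0, 2), (0, 3)],          # differs       -> valid
    "Horizontal_Up":   [(1, 0), (2, 0), (1, 0)],  # differs       -> valid
    "Horizontal_Down": [(5, 5), (5, 5)],          # all identical -> invalid
    "Vertical_Right":  [(0, 0), (9, 9)],          # differs       -> valid
}
print(count_valid_modes(mode_lists))  # 3
```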
Step 103: obtain the subtype index and other inter prediction mode flags of the current coding unit, and determine the subtype of the current coding unit according to the number of valid angular prediction modes derived in step 102, the type of the coding unit, the subtype index of the coding unit, and the other inter prediction mode flags;
step 104: the prediction value of the current coding unit is obtained according to a subtype of the current coding unit, and angular motion information is derived if the subtype of the current coding unit is an angular prediction mode of a skip mode or a direct mode. Specifically, the angular mode of the current coding unit is obtained according to the valid angular mode information obtained in step 102 and the subtype index of the current coding unit, and a prediction unit sub-block motion information array, that is, the motion information of each sub-block in the current coding unit, is derived according to the angular mode.
Step 105: obtain the predicted value of each sub-block according to the motion information of each sub-block in the current coding unit.
Step 106: obtain the residual information of the current coding unit, and obtain the residual value after inverse quantization and inverse transformation.
Step 107: add the predicted value and the residual value of the current coding unit to obtain the reconstructed value of the current coding unit.
With the decoding method described in the above embodiment, motion information padding may be performed according to the angular prediction mode to pad different numbers of neighboring block motion information on the left and upper sides of the current coding unit according to the shape (i.e., width and height information) of the current coding unit. Therefore, redundancy caused by filling repeated motion information in the short side is reduced, diversity of filling motion information in the long side is increased, inter-frame prediction accuracy is improved, and coding performance and efficiency are improved.
According to an embodiment of the present disclosure, there is provided a decoding apparatus performing the decoding method illustrated in fig. 5. This is described in detail below with reference to fig. 6.
Fig. 6 illustrates a decoding apparatus according to an embodiment of the present disclosure. The decoding apparatus 600 includes: memory 601, transceiver 602, and processor 603. The decoding apparatus is configured to perform decoding according to the method shown in fig. 5.
The transceiver 602 is configured to receive a current coding unit type from the codestream.
If the type of the current coding unit is direct mode or skip mode, the processor 603 is configured to derive the number of valid angular prediction modes.
The transceiver 602 is configured to receive the subtype index and other inter prediction mode flags of the current coding unit from the codestream, and the processor 603 is configured to determine the subtype of the current coding unit according to the number of valid angular prediction modes, the type of coding unit, the subtype index of the coding unit, and the other inter prediction mode flags derived in step 102.
The processor 603 is configured to obtain a prediction value of the current coding unit according to a subtype of the current coding unit, and derive angular motion information, i.e., motion information of each sub-block in the current coding unit, if the subtype of the current coding unit is an angular prediction mode of a skip mode or a direct mode; and obtaining the predicted value of each sub-block according to the motion information of each sub-block in the current coding unit.
The processor 603 is configured to obtain residual information of the current coding unit, and obtain a residual value after inverse quantization and inverse transformation.
The processor 603 is configured to add the prediction value and the residual value of the current coding unit to obtain a reconstructed value of the current coding unit.
With the decoding apparatus described in the above embodiment, motion information padding may be performed according to the angular prediction mode to pad different numbers of neighboring block motion information on the left and upper sides of the current coding unit according to the shape (i.e., width and height information) of the current coding unit. Therefore, redundancy caused by filling repeated motion information in the short side is reduced, diversity of filling motion information in the long side is increased, inter-frame prediction accuracy is improved, and coding performance and efficiency are improved.
Further, although one transceiver and one processor are shown in fig. 6, the present disclosure is not limited thereto. The number of processors and transceivers shown is merely an example, which may be one or more; and the processing described above may be performed by one or more processors or transceivers.
According to another embodiment of the present disclosure, there is provided an encoding method. This is described in detail below with reference to fig. 7.
Fig. 7 illustrates a method of encoding-side motion information angle prediction according to an embodiment of the present disclosure. As shown in fig. 7, the method includes the following steps.
Step 201: obtain a prediction value for the current unit to be coded using an angular prediction mode of the skip mode or the direct mode, specifically:
(1) derive the number of valid angular prediction modes;
specifically, let the width of the current coding unit be W and the height be H, the number of surrounding blocks padded above the current coding unit be m, and the number padded to the left be n.
- Padding the surrounding block motion information according to the width and height information: one possible decision condition is: when W = H, m = n (m > 0, n > 0); when W > H, m > n (m > 0, n ≥ 0); when W < H, m < n (m ≥ 0, n > 0). Another possible decision condition is: when W/H > 2, m > n (m > 0, n ≥ 0); when W/H < 1/2, m < n (m ≥ 0, n > 0); otherwise, m = n (m > 0, n > 0). Other decision conditions are also possible and are not limited herein.
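The second decision condition above might be sketched as follows; the concrete block counts returned are illustrative assumptions, since the text only constrains the relative sizes of m and n.

```python
def padding_counts(w, h, base=4):
    """Return (m, n): the number of surrounding blocks padded above (m)
    and to the left (n) of a w x h coding unit, following the ratio-based
    condition: W/H > 2 gives m > n, W/H < 1/2 gives m < n, else m = n."""
    if w > 2 * h:        # W/H > 2: pad more blocks along the wide top edge
        return 2 * base, base
    if 2 * w < h:        # W/H < 1/2: pad more blocks along the tall left edge
        return base, 2 * base
    return base, base    # otherwise pad equally (m = n, both > 0)
```

Adapting the padding counts to the block shape is what avoids filling the short side with repeated motion information.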
The specific positions of the surrounding blocks are not limited herein, as long as they are consistent between the encoding end and the decoding end; they may be blocks at equally spaced positions in the column immediately adjacent to the left of the current coding unit and in the row immediately adjacent to the top of the current coding unit, or blocks at non-equally spaced positions, or blocks in columns to the left of and rows above the current coding unit that are not immediately adjacent to it. The surrounding blocks are traversed in a certain order; the specific order is not limited herein, as long as it is consistent between the encoding end and the decoding end, and may be clockwise, counterclockwise, or another order. During traversal, if the motion information of the currently traversed block is not available, it is padded with the motion information of another block; which block's motion information is used for padding is not limited herein, as long as it is consistent between the encoding end and the decoding end, and may be the nearest previously traversed available block or another block. A block is considered available if it lies within the current picture, has been decoded, and is coded in an inter prediction mode.
- Deriving the number of valid angular prediction modes: obtain the motion information of the surrounding blocks corresponding to each angular prediction mode, and determine, according to a set rule, whether the motion information of the corresponding surrounding blocks is the same, thereby deriving the number of valid angular prediction modes. The specific judgment rule is not limited herein, as long as it is consistent between the encoding end and the decoding end, and may be: a. determine whether the motion information of all surrounding blocks corresponding to each angular prediction mode is pairwise identical; if blocks with different motion information exist, the angular prediction mode is valid and the number of valid angular prediction modes is incremented by 1; b. select several blocks at equally spaced positions from the surrounding blocks corresponding to each angular prediction mode and determine whether their motion information is the same; if blocks with different motion information exist, the angular prediction mode is valid and the number of valid angular prediction modes is incremented by 1; c. select several blocks at arbitrarily spaced positions from the surrounding blocks corresponding to each angular prediction mode and determine whether their motion information is the same; if blocks with different motion information exist, the angular prediction mode is valid and the number of valid angular prediction modes is incremented by 1.
(2) obtain the prediction value of the current coding unit under each valid angular prediction mode; specifically, derive the prediction unit sub-block motion information array, that is, the motion information of each sub-block in the current coding unit, and obtain the predicted value of each sub-block according to the motion information of each sub-block in the current coding unit.
Step 202: obtain the residual values of the current coding unit under each valid angular prediction mode, and calculate the rate-distortion cost.
Step 203: select the prediction mode with the smallest rate-distortion cost as the final prediction mode of the current coding unit.
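Steps 202 and 203 amount to a minimum search over rate-distortion costs; a minimal sketch, assuming a caller-supplied cost function (a real encoder would compute distortion plus lambda times rate for each mode):

```python
def select_best_mode(valid_modes, rd_cost):
    """Return the prediction mode with the smallest rate-distortion cost.
    rd_cost(mode) is assumed to return the cost of coding the current
    unit with that mode."""
    best_mode, best_cost = None, float("inf")
    for mode in valid_modes:
        cost = rd_cost(mode)
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode
```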
Step 204: entropy-encode the final prediction mode information of the current coding unit, the type of the coding unit, the subtype index of the coding unit, the residual information obtained after transforming and quantizing the residual value, and the like, and write them into the code stream.
With the encoding method described in the above embodiment, motion information padding may be performed according to the angular prediction mode to pad different numbers of neighboring block motion information on the left and upper sides of the current coding unit according to the shape (i.e., width and height information) of the current coding unit. Therefore, redundancy caused by filling repeated motion information in the short side is reduced, diversity of filling motion information in the long side is increased, inter-frame prediction accuracy is improved, and coding performance and efficiency are improved.
According to another embodiment of the present disclosure, there is provided an encoding apparatus performing the encoding method shown in fig. 7. This is described in detail below with reference to fig. 8.
Fig. 8 illustrates an encoding apparatus according to an embodiment of the present disclosure. The encoding apparatus 800 includes: a memory 801, a transceiver 802, and a processor 803. The encoding apparatus is configured to perform encoding according to the method shown in fig. 7.
The processor 803 is configured to obtain a prediction value for the current unit to be encoded in an angular prediction mode of either a skip mode or a direct mode.
The processor 803 is configured to obtain residual values of the current coding unit in each of the valid angular prediction modes and to calculate a rate-distortion cost.
The processor 803 is configured to select the corresponding prediction mode of the current coding unit having the smallest rate-distortion cost as the final prediction mode of the current coding unit.
The processor 803 is configured to entropy-encode the final prediction mode information of the current coding unit, the type of the coding unit, the subtype index of the coding unit, the residual information obtained after transforming and quantizing the residual value, and the like, and write them into the code stream.
With the encoding apparatus described in the above embodiment, motion information padding may be performed according to the angular prediction mode to pad different numbers of neighboring block motion information on the left and upper sides of the current coding unit according to the shape (i.e., width and height information) of the current coding unit. Therefore, redundancy caused by filling repeated motion information in the short side is reduced, diversity of filling motion information in the long side is increased, inter-frame prediction accuracy is improved, and coding performance and efficiency are improved.
Further, although one transceiver and one processor are shown in fig. 8, the present disclosure is not limited thereto. The number of processors and transceivers shown is merely an example, which may be one or more; and the processing described above may be performed by one or more processors or transceivers.
A method for decoding-side motion information angle prediction according to an embodiment of the present disclosure includes the following steps.
Step 901: the type of the current coding unit is obtained.
Step 902: if the type of the current coding unit is a direct mode or a skip mode, the number of valid angular prediction modes is derived.
Specifically, let the width of the current coding unit be W and the height be H. If W is less than a first threshold, or H is less than the first threshold, or both W and H are equal to the first threshold, the number of valid angular prediction modes of the current coding unit is set to 0 and the process proceeds to step 903; the first threshold may be 8 or another value. Otherwise, the following steps are performed:
1. Let the number of surrounding blocks padded above the current coding unit be m and the number padded to the left be n; m may be equal to, smaller than, or larger than n, which is not limited herein.
2. Derive valid angular prediction mode information: obtain the motion information of the surrounding blocks corresponding to each angular prediction mode, determine whether the motion information of the corresponding surrounding blocks is the same according to a set rule, thereby deriving the number of valid angular prediction modes, and set the side information flag of each valid angular prediction mode to 2 and that of each invalid angular prediction mode to 0.
The side information flag of an angular prediction mode indicates which surrounding block index values are used to derive the motion information of the current sub-block in that angular prediction mode. Its value may be 2, 1, or 0, which is not limited herein, and may be modified according to the width and height information of the current coding unit. The side information flag can be used in motion information prediction, and using it can improve the accuracy of motion information prediction.
3. Modify the derived valid angular prediction mode information according to the width and height information.
One possible method is:
(1) when W and/or H is less than or equal to a second threshold, perform the following steps; otherwise perform (2): if the horizontal-down mode is invalid, determine whether the vertical-right mode is valid; if it is, set the horizontal-down mode to valid and set its side information flag to 1. If the vertical-right mode is invalid, determine whether the horizontal-down mode is valid; if it is, set the vertical-right mode to valid and set its side information flag to 1;
the second threshold may be 32, 16, or 64, which is not limited herein.
(2) when W is greater than H, determine whether the vertical-right mode is valid; if it is, set the horizontal-down mode to valid and set its side information flag to 1. When H is greater than W, determine whether the horizontal-down mode is valid; if it is, set the vertical-right mode to valid and set its side information flag to 1.
Another possible method is:
(1) when W × H (that is, the area of the coding unit) is less than or equal to a third threshold, perform the following steps; otherwise perform (2): if the horizontal-down mode is invalid, determine whether the vertical-right mode is valid; if it is, set the horizontal-down mode to valid and set its side information flag to 1. If the vertical-right mode is invalid, determine whether the horizontal-down mode is valid; if it is, set the vertical-right mode to valid and set its side information flag to 1;
the third threshold may be 512, 128, or 256, which is not limited herein.
(2) when W is greater than H, determine whether the vertical-right mode is valid; if it is, set the horizontal-down mode to valid and set its side information flag to 1. When H is greater than W, determine whether the horizontal-down mode is valid; if it is, set the vertical-right mode to valid and set its side information flag to 1.
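The first method above (the one gated by the second threshold) can be sketched as follows; the flag encoding follows the text (2 = valid, 0 = invalid, 1 = made valid with borrowed side information), while the dictionary representation and the reading that "valid" here means a flag of 2 are assumptions.

```python
SECOND_THRESHOLD = 32  # the text allows 32, 16, or 64

def modify_modes(flags, w, h):
    """flags holds the side information flags for 'horizontal_down' and
    'vertical_right'; modified in place per the first method and returned."""
    if w <= SECOND_THRESHOLD or h <= SECOND_THRESHOLD:
        # step (1): each invalid mode borrows validity from the other
        if flags["horizontal_down"] == 0 and flags["vertical_right"] == 2:
            flags["horizontal_down"] = 1
        if flags["vertical_right"] == 0 and flags["horizontal_down"] == 2:
            flags["vertical_right"] = 1
    else:
        # step (2): shape-dependent borrowing for large units
        if w > h and flags["vertical_right"] == 2:
            flags["horizontal_down"] = 1
        elif h > w and flags["horizontal_down"] == 2:
            flags["vertical_right"] = 1
    return flags
```

The second method differs only in its gate, comparing the area W × H against the third threshold instead of comparing W and H individually against the second threshold.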
The specific positions of the surrounding blocks are not limited herein, as long as they are consistent between the encoding end and the decoding end; they may be blocks at equally spaced positions in the column immediately adjacent to the left of the current coding unit and in the row immediately adjacent to the top of the current coding unit, or blocks at non-equally spaced positions, or blocks in columns to the left of and rows above the current coding unit that are not immediately adjacent to it. The surrounding blocks are traversed in a certain order; the specific order is not limited herein, as long as it is consistent between the encoding end and the decoding end, and may be clockwise, counterclockwise, or another order. During traversal, if the motion information of the currently traversed block is not available, it is padded with the motion information of another block; which block's motion information is used for padding is not limited herein, as long as it is consistent between the encoding end and the decoding end, and may be the nearest previously traversed available block, another block, or a zero motion vector. A block is considered available if it lies within the current picture, has been decoded, and is coded in an inter prediction mode.
Step 903: obtain the subtype index and other inter prediction mode flags of the current coding unit, and determine the subtype of the current coding unit according to the number of valid angular prediction modes derived in step 902, the type of the coding unit, the subtype index of the coding unit, and the other inter prediction mode flags;
step 904: the prediction value of the current coding unit is obtained according to a subtype of the current coding unit, and angular motion information is derived if the subtype of the current coding unit is an angular prediction mode of a skip mode or a direct mode. Specifically, the angular mode of the current coding unit is obtained according to the valid angular mode information obtained in step 902 and the subtype index of the current coding unit, and the motion information array of the prediction unit subblock, i.e. the motion information of each subblock in the current coding unit, is derived according to the angular mode.
Specifically, let i be the index of the column where the current sub-block is located, j the index of the row where the current sub-block is located, and >> the right-shift operation:
if the current coding unit is in the angular prediction horizontal mode, the motion information of the surrounding block with index value (W>>2)+(H>>2)-1-2×(j>>1) is used as the motion information of the current sub-block;
if the current coding unit is in the angular prediction vertical mode, the motion information of the surrounding block with index value (W>>2)+(H>>2)+1+2×(i>>1) is used as the motion information of the current sub-block;
if the current coding unit is in the angular prediction horizontal-up mode, the motion information of the surrounding block with index value (W>>2)+(H>>2)-2×(j>>1)+2×(i>>1) is used as the motion information of the current sub-block;
if the current coding unit is in the angular prediction horizontal-down mode: when the side information flag of the horizontal-down mode is 1, the motion information of the surrounding block with index value (W>>2)+(H>>2)+2×(j>>1)+2×(i>>1)+1 is used as the motion information of the current sub-block; otherwise, the motion information of the surrounding block with index value (W>>2)+(H>>2)-2×(j>>1)-2×(i>>1)-3 is used;
if the current coding unit is in the angular prediction vertical-right mode: when the side information flag of the vertical-right mode is 1, the motion information of the surrounding block with index value (W>>2)+(H>>2)-2×(j>>1)-2×(i>>1)-1 is used as the motion information of the current sub-block; otherwise, the motion information of the surrounding block with index value (W>>2)+(H>>2)+2×(j>>1)+2×(i>>1)+3 is used.
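The index formulas above translate directly into code; a sketch, where the mode names and the side information flag argument are taken from the text, and w and h are the coding unit width W and height H:

```python
def surrounding_block_index(mode, w, h, i, j, side_flag=2):
    """Index of the surrounding block whose motion information the
    sub-block at column i, row j of a w x h coding unit reuses,
    per the formulas in the text."""
    base = (w >> 2) + (h >> 2)
    if mode == "horizontal":
        return base - 1 - 2 * (j >> 1)
    if mode == "vertical":
        return base + 1 + 2 * (i >> 1)
    if mode == "horizontal_up":
        return base - 2 * (j >> 1) + 2 * (i >> 1)
    if mode == "horizontal_down":
        if side_flag == 1:
            return base + 2 * (j >> 1) + 2 * (i >> 1) + 1
        return base - 2 * (j >> 1) - 2 * (i >> 1) - 3
    if mode == "vertical_right":
        if side_flag == 1:
            return base - 2 * (j >> 1) - 2 * (i >> 1) - 1
        return base + 2 * (j >> 1) + 2 * (i >> 1) + 3
    raise ValueError("unknown angular prediction mode: " + mode)
```

Note how the side information flag of 1 flips the horizontal-down and vertical-right modes between the left-column and top-row block ranges.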
Step 905: obtain the predicted value of each sub-block according to the motion information of each sub-block in the current coding unit.
Step 906: obtain the residual information of the current coding unit, and obtain the residual value after inverse quantization and inverse transformation.
Step 907: add the predicted value and the residual value of the current coding unit to obtain the reconstructed value of the current coding unit.
By the decoding method described in the above embodiment, the diversity of long-side motion information in the inter-frame angle prediction mode can be increased, the inter-frame prediction accuracy can be improved, and the encoding performance and efficiency can be improved.
A method for decoding-side motion information angle prediction according to an embodiment of the present disclosure includes the following steps.
Step 1001: the type of the current coding unit is obtained.
Step 1002: if the type of the current coding unit is a direct mode or a skip mode, the number of valid angular prediction modes is derived.
Specifically, let the width of the current coding unit be W and the height be H. If W is less than a first threshold, or H is less than the first threshold, or both W and H are equal to the first threshold, the number of valid angular prediction modes of the current coding unit is set to 0 and the process proceeds to step 1003; the first threshold may be 8 or another value. Otherwise, the following steps are performed:
1. Let the number of surrounding blocks padded above the current coding unit be m and the number padded to the left be n; m may be equal to, smaller than, or larger than n, which is not limited herein.
2. Derive valid angular prediction mode information: obtain the motion information of the surrounding blocks corresponding to each angular prediction mode, determine whether the motion information of the corresponding surrounding blocks is the same according to a set rule, thereby deriving the number of valid angular prediction modes, and set the side information flag of each valid angular prediction mode to 2 and that of each invalid angular prediction mode to 0;
the specific positions of the adjacent blocks are not limited herein but should be consistent with the encoding end, and may be a block at a column and at an equal interval position, which is immediately adjacent to the current encoding unit on the left, and a block at an equal interval position, which is immediately adjacent to a row and at an equal interval position, which is immediately adjacent to the current encoding unit on the top; or blocks in non-equally spaced positions that are not immediately adjacent to the left columns of the current coding unit, but are not immediately adjacent to the top rows of the current coding unit. The surrounding blocks are traversed according to a certain sequence, the specific sequence is not limited herein but should be consistent with the encoding end, and may be a clockwise sequence, a counterclockwise sequence, or other sequences. During traversal, if the motion information of the currently traversed block is not available, the motion information of other blocks is used for padding, and the particular padding using the motion information of which block is not limited herein but should be consistent with the encoding end, which may be the previous available block nearest to the currently traversed block, or other blocks, or may be set to be a zero motion vector. By whether a block is available is meant: the block is within the current picture, the block is decoded, and the block is inter prediction mode.
Step 1003: obtain the subtype index and other inter prediction mode flags of the current coding unit, and determine the subtype of the current coding unit according to the number of valid angular prediction modes derived in step 1002, the type of the coding unit, the subtype index of the coding unit, and the other inter prediction mode flags;
step 1004: the prediction value of the current coding unit is obtained according to a subtype of the current coding unit, and angular motion information is derived if the subtype of the current coding unit is an angular prediction mode of a skip mode or a direct mode. Specifically, the angular mode of the current coding unit is obtained according to the valid angular mode information obtained in step 1002 and the subtype index of the current coding unit, and a prediction unit sub-block motion information array, that is, the motion information of each sub-block in the current coding unit, is derived according to the angular mode.
Specifically, let i be the index of the column where the current sub-block is located, j the index of the row where the current sub-block is located, and >> the right-shift operation:
if the current coding unit is in the angular prediction horizontal mode, the motion information of the surrounding block with index value (W>>2)+(H>>2)-1-2×(j>>1) is used as the motion information of the current sub-block;
if the current coding unit is in the angular prediction vertical mode, the motion information of the surrounding block with index value (W>>2)+(H>>2)+1+2×(i>>1) is used as the motion information of the current sub-block;
if the current coding unit is in the angular prediction horizontal-up mode, the motion information of the surrounding block with index value (W>>2)+(H>>2)-2×(j>>1)+2×(i>>1) is used as the motion information of the current sub-block;
if the current coding unit is in the angular prediction horizontal-down mode: when the side information flag of the vertical-right mode is 2, W is greater than H, and H is greater than a fourth threshold, the motion information of the surrounding block with index value (W>>2)+(H>>2)+2×(j>>1)+2×(i>>1)+1 is used as the motion information of the current sub-block; otherwise, the motion information of the surrounding block with index value (W>>2)+(H>>2)-2×(j>>1)-2×(i>>1)-3 is used;
if the current coding unit is in the angular prediction vertical-right mode: when the side information flag of the horizontal-down mode is 2, H is greater than W, and W is greater than the fourth threshold, the motion information of the surrounding block with index value (W>>2)+(H>>2)-2×(j>>1)-2×(i>>1)-1 is used as the motion information of the current sub-block; otherwise, the motion information of the surrounding block with index value (W>>2)+(H>>2)+2×(j>>1)+2×(i>>1)+3 is used.
The fourth threshold may be 16, 32, or other values, without limitation.
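In this embodiment the index formulas themselves are unchanged; only the branch condition for the horizontal-down and vertical-right modes differs. A sketch of that condition, where the flag dictionary and the chosen threshold value are assumptions:

```python
FOURTH_THRESHOLD = 16  # the text allows 16, 32, or other values

def use_alternate_index(mode, w, h, flags):
    """True when the '+...+1' / '-...-1' surrounding-block index formulas
    apply, per the conditions stated in the text."""
    if mode == "horizontal_down":
        # checks the vertical-right mode's side information flag, as stated
        return flags["vertical_right"] == 2 and w > h and h > FOURTH_THRESHOLD
    if mode == "vertical_right":
        # checks the horizontal-down mode's side information flag, as stated
        return flags["horizontal_down"] == 2 and h > w and w > FOURTH_THRESHOLD
    return False
```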
Step 1005: obtain the predicted value of each sub-block according to the motion information of each sub-block in the current coding unit.
Step 1006: obtain the residual information of the current coding unit, and obtain the residual value after inverse quantization and inverse transformation.
Step 1007: add the predicted value and the residual value of the current coding unit to obtain the reconstructed value of the current coding unit.
By the decoding method described in the above embodiment, the diversity of long-side motion information in the inter-frame angle prediction mode can be increased, the inter-frame prediction accuracy can be improved, and the encoding performance and efficiency can be improved.
The above description is only exemplary of the present disclosure and should not be taken as limiting the disclosure, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.
One skilled in the art will appreciate that the present disclosure includes apparatus related to performing one or more of the operations described in the present disclosure. These devices may be specially designed and manufactured for the required purposes, or they may comprise known devices in general-purpose computers. These devices have stored therein computer programs that are selectively activated or reconfigured. Such a computer program may be stored in a device (e.g., computer) readable medium, including, but not limited to, any type of disk including floppy disks, hard disks, optical disks, CD-ROMs, and magnetic-optical disks, ROMs (Read-Only memories), RAMs (random access memories), EPROMs (Erasable Programmable Read-Only memories), EEPROMs (Electrically Erasable Programmable Read-Only memories), flash memories, magnetic cards, or optical cards, or any type of media suitable for storing electronic instructions, and each coupled to a bus. That is, a readable medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer).
It will be understood by those within the art that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. Those skilled in the art will appreciate that the computer program instructions may be implemented by a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, implement the aspects specified in the block or blocks of the block diagrams and/or flowchart illustrations of the present disclosure.
Those skilled in the art will appreciate that the various operations, methods, steps, acts, or solutions discussed in the present disclosure may be interchanged, modified, combined, or eliminated. Further, other steps, measures, or schemes in the various operations, methods, or flows discussed in this disclosure may also be alternated, altered, rearranged, decomposed, combined, or deleted. Further, prior-art steps, measures, or schemes corresponding to the various operations, methods, or procedures disclosed in the present disclosure may also be alternated, modified, rearranged, decomposed, combined, or deleted.
The foregoing describes only some embodiments of the present disclosure, and it should be noted that those skilled in the art can make various modifications and refinements without departing from the principle of the present disclosure, and such modifications and refinements should also be regarded as falling within the protection scope of the present disclosure.

Claims (15)

1. A method of deriving angular prediction motion information at a decoding end, comprising:
padding surrounding block motion information according to the width and height information of the coding unit currently to be decoded; and
deriving angular prediction motion information from the padded surrounding block motion information.
2. The method of claim 1, wherein padding the surrounding block motion information according to the width and height information of the coding unit currently to be decoded comprises:
determining the number of surrounding blocks to be padded in the horizontal direction and the number of surrounding blocks to be padded in the vertical direction of the coding unit currently to be decoded, respectively, according to the relative magnitudes of the width and the height of the coding unit currently to be decoded; and
padding the surrounding block motion information according to the determined numbers of surrounding blocks to be padded.
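For illustration only (not part of the claims), the width/height-dependent padding counts of claim 2 might be sketched as follows. The 4x4 block granularity and the specific rule for extending the longer side are assumptions for the sketch, not the patented rule; `padded_block_counts` is a hypothetical helper name.

```python
def padded_block_counts(width, height, min_block=4):
    """Illustrative sketch: derive how many surrounding blocks to pad
    horizontally and vertically from the relative magnitudes of the
    coding unit's width and height (4x4 granularity assumed)."""
    horiz = width // min_block   # blocks along the top row
    vert = height // min_block   # blocks along the left column
    if width >= height:
        horiz += height // min_block  # assumed: extend the above-right region
    else:
        vert += width // min_block    # assumed: extend the below-left region
    return horiz, vert
```

For example, a 16x8 coding unit would yield more horizontally padded blocks than vertically padded ones under this assumed rule.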
3. The method of claim 2, wherein padding the surrounding block motion information according to the width and height information of the coding unit currently to be decoded further comprises:
traversing the surrounding blocks; and
if the motion information of a currently traversed surrounding block is not available, padding it with the motion information of another surrounding block,
wherein available means that the block is within the current picture, the block has been decoded, and the block is in an inter prediction mode.
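A minimal sketch of the traversal-and-fill step of claim 3, assuming unavailable blocks borrow the motion information of the nearest previously available block (the choice of which "other surrounding block" to borrow from is an assumption; the function name is hypothetical):

```python
def pad_motion_info(surrounding, is_available):
    """Traverse surrounding blocks in order; where a block's motion
    information is unavailable, copy from the most recent available
    block (leading gaps are seeded from the first available one)."""
    padded = list(surrounding)
    last = None
    # Seed with the first available entry so leading gaps can be filled.
    for i, mv in enumerate(padded):
        if is_available[i]:
            last = mv
            break
    for i in range(len(padded)):
        if is_available[i]:
            last = padded[i]
        elif last is not None:
            padded[i] = last
    return padded
```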
4. The method of claim 1, wherein deriving the angular prediction motion information from the padded surrounding block motion information comprises:
obtaining the surrounding block motion information corresponding to each angular prediction mode; and
determining the number of valid angular prediction modes based on a judgment condition, wherein the angular prediction modes comprise: a horizontal mode, a vertical mode, a horizontal-up mode, a horizontal-down mode, and a vertical-right mode.
5. The method of claim 4, wherein deriving the angular prediction motion information from the padded surrounding block motion information further comprises:
modifying the determined valid angular prediction mode information according to the width and height information of the coding unit currently to be decoded,
wherein the valid angular prediction mode information comprises at least one of: the number of valid angular prediction modes, a validity flag of an angular prediction mode, and an auxiliary information flag of an angular prediction mode.
6. The method of claim 4, wherein the judgment condition comprises:
judging whether the surrounding block motion information corresponding to each angular prediction mode is pairwise identical; or
judging whether the surrounding block motion information at equally spaced positions corresponding to each angular prediction mode is identical; or
judging whether the surrounding block motion information at arbitrarily spaced positions corresponding to each angular prediction mode is identical.
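One plausible reading of the judgment condition of claim 6, sketched for illustration: a mode is counted as valid only when the motion information sampled from its surrounding blocks is not all identical (with `step` controlling the sampling interval, `step=1` corresponding to the pairwise check). Whether "all identical" marks a mode invalid or valid is an assumption here, as are the function names.

```python
def mode_is_valid(motion_infos, step=1):
    """Assumed condition: a mode is valid if the motion information
    sampled every `step` positions is not all identical."""
    sampled = motion_infos[::step]
    return any(m != sampled[0] for m in sampled[1:])

def count_valid_modes(per_mode_infos, step=1):
    """per_mode_infos: one list of surrounding-block motion info per
    angular prediction mode; returns the number of valid modes."""
    return sum(mode_is_valid(infos, step) for infos in per_mode_infos)
```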
7. The method of claim 4 or 5, wherein deriving the angular prediction motion information from the padded surrounding block motion information comprises:
determining a subtype of the coding unit currently to be decoded based on the type, the subtype index, and other inter prediction mode flags of the coding unit currently to be decoded, which are obtained from the bitstream, and on the derived number of valid angular prediction modes.
8. The method of claim 7, wherein deriving the angular prediction motion information from the padded surrounding block motion information further comprises:
predicting motion information according to the subtype of the coding unit currently to be decoded or the auxiliary information flag of an angular prediction mode.
9. A method of deriving angular prediction motion information at an encoding end, comprising:
padding surrounding block motion information according to the width and height information of the coding unit currently to be encoded; and
deriving angular prediction motion information from the padded surrounding block motion information.
10. The method of claim 9, wherein padding the surrounding block motion information according to the width and height information of the coding unit currently to be encoded comprises:
determining the number of surrounding blocks to be padded in the horizontal direction and the number of surrounding blocks to be padded in the vertical direction of the coding unit currently to be encoded, respectively, according to the relative magnitudes of the width and the height of the coding unit currently to be encoded; and
padding the surrounding block motion information according to the determined numbers of surrounding blocks to be padded.
11. The method of claim 9, wherein padding the surrounding block motion information according to the width and height information of the coding unit currently to be encoded further comprises:
traversing the surrounding blocks; and
if the motion information of a currently traversed surrounding block is not available, padding it with the motion information of another surrounding block,
wherein available means that the block is within the current picture, the block has been encoded, and the block is in an inter prediction mode.
12. The method of claim 9, wherein deriving the angular prediction motion information from the padded surrounding block motion information comprises:
obtaining the surrounding block motion information corresponding to each angular prediction mode; and
determining the number of valid angular prediction modes based on a judgment condition, wherein the angular prediction modes comprise: a horizontal mode, a vertical mode, a horizontal-up mode, a horizontal-down mode, and a vertical-right mode.
13. The method of claim 12, wherein the judgment condition comprises:
judging whether the surrounding block motion information corresponding to each angular prediction mode is pairwise identical; or
judging whether the surrounding block motion information at equally spaced positions corresponding to each angular prediction mode is identical; or
judging whether the surrounding block motion information at arbitrarily spaced positions corresponding to each angular prediction mode is identical.
14. The method of claim 9, wherein deriving the angular prediction motion information from the padded surrounding block motion information further comprises:
deriving a prediction value of the coding unit to be encoded based on the number of valid angular prediction modes and the angular prediction motion information corresponding to the valid angular prediction modes.
15. A computer-implemented apparatus, comprising:
a memory configured to store program instructions;
a transceiver configured to receive and transmit a bitstream; and
a processor configured to execute the instructions stored in the memory to perform the method of any one of claims 1 to 8 or claims 9 to 14.
CN202010474296.6A 2020-02-28 2020-05-29 Video coding and decoding method and device Pending CN113329225A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010130216 2020-02-28
CN2020101302165 2020-02-28

Publications (1)

Publication Number Publication Date
CN113329225A true CN113329225A (en) 2021-08-31

Family

ID=77412996

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010474296.6A Pending CN113329225A (en) 2020-02-28 2020-05-29 Video coding and decoding method and device

Country Status (1)

Country Link
CN (1) CN113329225A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013155424A1 (en) * 2012-04-12 2013-10-17 Qualcomm Incorporated Common motion information candidate list construction process
WO2016078511A1 (en) * 2014-11-18 2016-05-26 Mediatek Inc. Method of bi-prediction video coding based on motion vectors from uni-prediction and merge candidate
KR20160143583A (en) * 2015-06-05 2016-12-14 인텔렉추얼디스커버리 주식회사 Method for selecting motion vector candidate and method for encoding/decoding image using the same
CN106937123A (en) * 2010-10-28 2017-07-07 韩国电子通信研究院 Video decoding apparatus and video encoding/decoding method
US20180332312A1 (en) * 2017-05-09 2018-11-15 Futurewei Technologies, Inc. Devices And Methods For Video Processing
CN109314790A (en) * 2016-05-23 2019-02-05 佳稳电子有限公司 Image treatment method, the image-decoding using it and coding method
CN110213588A (en) * 2019-06-25 2019-09-06 浙江大华技术股份有限公司 Airspace Candidate Motion information acquisition method, device, codec and storage device
CN110225346A (en) * 2018-12-28 2019-09-10 杭州海康威视数字技术股份有限公司 A kind of decoding method and its equipment
WO2020017873A1 (en) * 2018-07-16 2020-01-23 한국전자통신연구원 Image encoding/decoding method and apparatus, and recording medium in which bitstream is stored


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
SHINOBU KUDO; MASAKI KITAHARA; ATSUSHI SHIMIZU: "Motion vector prediction methods considering prediction continuity in HEVC", 2016 Picture Coding Symposium (PCS), 30 April 2017 (2017-04-30) *
General Administration of Quality Supervision, Inspection and Quarantine of the People's Republic of China; Standardization Administration of China: "GB/T 20090.2-2006, Information technology - Advanced coding of audio and video - Part 2: Video", 16 February 2006 *
BAO Xiang: "Research on intra prediction and motion estimation in AVS", Wanfang Master's Dissertation Database, 28 April 2012 (2012-04-28) *
CHEN Chaofeng: "Research on video coding based on AVS fast motion estimation", China Master's Theses Full-text Database (electronic journal), 15 December 2011 (2011-12-15) *

Similar Documents

Publication Publication Date Title
KR102114715B1 (en) Image coding/decoding method and related apparatus
ES2593702T3 (en) Procedure and apparatus for video coding, and procedure and apparatus for video decoding
ES2548227T3 (en) Image decoding method
CN102595116B (en) Encoding and decoding methods and devices for multiple image block division ways
ES2599848T3 (en) Video encoding and decoding
TWI474721B Methods for encoding a macroblock and predicting a macroblock, apparatuses for generating an encoded macroblock and predicting a macroblock, and computer program product for predicting a macroblock
ES2575381T3 (en) Intra-prediction decoding device
BRPI1012755B1 (en) Method and device for encoding at least one image or a sequence of at least one image, method and device for decoding a data stream representing at least one image or a sequence of at least one image, data stream and computer readable memory
KR101128580B1 (en) Apparatus and method for predicting most probable mode in intra prediction system
CN111031319A (en) Local illumination compensation prediction method, terminal equipment and computer storage medium
TWI723365B (en) Apparatus and method for encoding and decoding a picture using picture boundary handling
CN109510987B (en) Method and device for determining coding tree node division mode and coding equipment
KR20170069917A (en) Method and apparatus for encoding and decoding an information indicating a prediction method of an intra skip mode
WO2012097746A1 (en) Coding-decoding method and device
KR20110111339A (en) Apparatus and method for predicting most probable mode in intra prediction system
CN113329225A (en) Video coding and decoding method and device
WO2012095037A1 (en) Encoding and decoding method and device
CN114598878A (en) Prediction mode decoding and encoding method and device
CN112135147A (en) Encoding method, decoding method and device
ES2875586T3 (en) Image encoding and decoding procedure, encoding and decoding device and corresponding computer programs
CN112073733A (en) Video coding and decoding method and device based on motion vector angle prediction
CN103402094A (en) Method and system for predicted encoding and decoding in chrominance frame of transform domain
ES2604611T3 (en) Procedure and apparatus for decoding a video signal
KR100778471B1 (en) Method for encoding or decoding of video signal
WO2023044916A1 (en) Intra prediction method, encoder, decoder, and encoding and decoding system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination