CN112073733A - Video coding and decoding method and device based on motion vector angle prediction

Video coding and decoding method and device based on motion vector angle prediction

Info

Publication number
CN112073733A
CN112073733A (application number CN201910501806.1A)
Authority
CN
China
Prior art keywords
mode
current coding
sub
coding block
mvap
Prior art date
Legal status
Pending
Application number
CN201910501806.1A
Other languages
Chinese (zh)
Inventor
欧阳晓
王凡
朴银姬
吕卓逸
Current Assignee
Beijing Samsung Telecom R&D Center
Beijing Samsung Telecommunications Technology Research Co Ltd
Samsung Electronics Co Ltd
Original Assignee
Beijing Samsung Telecommunications Technology Research Co Ltd
Samsung Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Samsung Telecommunications Technology Research Co Ltd, Samsung Electronics Co Ltd filed Critical Beijing Samsung Telecommunications Technology Research Co Ltd
Priority to CN201910501806.1A
Publication of CN112073733A
Legal status: Pending

Classifications

    • H04N19/503: predictive coding involving temporal prediction
    • H04N19/103: selection of coding mode or of prediction mode
    • H04N19/176: adaptive coding where the coding unit is an image region, the region being a block (e.g. a macroblock)
    • H04N19/513: motion estimation or motion compensation; processing of motion vectors
    • H04N19/593: predictive coding involving spatial prediction techniques


Abstract

A video coding and decoding method and device based on motion vector angle prediction (MVAP). The method comprises: determining a candidate mode list of the MVAP of a current coding block according to the prediction modes of peripheral sub-blocks of the current coding block, wherein the candidate mode list comprises a plurality of MVAP modes; determining one of the plurality of MVAP modes as the MVAP mode of the current coding block; and writing mode information corresponding to the determined MVAP mode into a code stream.

Description

Video coding and decoding method and device based on motion vector angle prediction
Technical Field
The present invention relates to a Motion Vector Angle Prediction (MVAP)-based video coding and decoding method and apparatus, and more particularly, to a coding and decoding method and apparatus for candidate mode selection and context model selection based on MVAP.
Background
In natural video, consecutive images are highly similar, so video compression commonly uses inter-frame prediction to remove this redundancy between pictures. In inter prediction, the current image to be encoded is predicted from already encoded images, and only the prediction error is transmitted to the decoder. Since the prediction error carries much less information than the current picture itself, compression is achieved. In practice, for each image block the encoder searches the previously encoded reference images for a reference block that matches it as closely as possible, so as to minimize the prediction error. The positional difference between the current block and its reference block is called the motion vector; motion vector information must also be transmitted in the code stream so that the decoder knows which block serves as the reference for the current block.
As prediction techniques have improved, the prediction residual has kept shrinking while motion information such as motion vectors takes up an ever larger share of the code stream. To reduce the overhead of transmitting motion information, the skip mode, direct mode, merge mode, and similar modes have been proposed. In these modes, no motion information is transmitted in the code stream; the encoder and decoder derive the motion information (such as the motion vector) of the current image block directly from the motion information of previously encoded image blocks according to a fixed rule. Although these modes avoid transmitting motion information, the derivation rule is fixed and the motion information must be derived from previously encoded blocks, so when the motion of the current image block differs greatly from that of the previous blocks, the derived motion information is not suitable for the current block.
In recent years, a prediction technique called motion vector angle prediction (MVAP) has appeared. MVAP copies the motion information of blocks surrounding the current coding block into each sub-block of the current coding unit according to several specific rules. MVAP defines five prediction modes: a horizontal mode, a horizontal-up mode, a horizontal-down mode, a vertical mode, and a vertical-right mode. According to the selected prediction mode, the encoder and decoder copy the motion information of neighboring sub-blocks to each sub-block of the current coding unit and then predict each sub-block. Because MVAP assigns different motion information to each sub-block according to a fixed rule, the motion information that must be transmitted in the code stream is reduced and coding performance is improved.
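The following is a minimal sketch, not the AVS3 reference implementation, of how such angular copying might distribute neighbouring motion vectors over the sub-blocks of the current coding unit. The function name mvap_copy_motion, the mode constants, and the index arithmetic are illustrative assumptions rather than the normative mapping.

```python
# Simplified sketch of MVAP-style angular copying; the projection rules below
# are approximations chosen for illustration only.
HORIZONTAL, VERTICAL, HOR_UP, HOR_DOWN, VER_RIGHT = range(5)

def mvap_copy_motion(mode, left_mvs, above_mvs, w_sub, h_sub):
    """Return an h_sub x w_sub grid of motion vectors for the current block.

    left_mvs  : motion vectors of the left neighbouring sub-block column
    above_mvs : motion vectors of the above neighbouring sub-block row
    """
    grid = [[None] * w_sub for _ in range(h_sub)]
    for y in range(h_sub):
        for x in range(w_sub):
            if mode == HORIZONTAL:            # copy straight from the left column
                grid[y][x] = left_mvs[y]
            elif mode == VERTICAL:            # copy straight from the above row
                grid[y][x] = above_mvs[x]
            elif mode == HOR_UP:              # diagonal projection towards the upper-left
                idx = y - x
                grid[y][x] = left_mvs[idx] if idx >= 0 else above_mvs[-idx - 1]
            elif mode == HOR_DOWN:            # projection towards the lower-left
                grid[y][x] = left_mvs[min(y + x + 1, len(left_mvs) - 1)]
            elif mode == VER_RIGHT:           # projection towards the upper-right
                grid[y][x] = above_mvs[min(x + y + 1, len(above_mvs) - 1)]
    return grid
```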
Although MVAP assigns different motion information to each sub-block of the current coding block, its allocation rule does not take into account the influence that the prediction modes of the blocks surrounding the current coding block have on the current coding block, which is a limitation.
In AVS3, on the other hand, MVAP is used in the skip mode and the direct mode. However, because the candidate modes in the MVAP candidate list are uncertain (e.g., the number of candidate modes and the mode types are uncertain), the MVAP candidate list must be reconstructed during decoding, which degrades the decoding performance of the decoder.
Disclosure of Invention
Aspects of the present disclosure are to address at least the above problems and/or disadvantages and to provide at least the advantages described below. Accordingly, an object of the present invention is to provide an MVAP-based video coding and decoding method and apparatus that take into account the influence of the prediction modes of the peripheral blocks of the current coding block on the coding and decoding of the current coding block, and that design a coding scheme better suited to MVAP according to the characteristics of the MVAP technique, thereby improving coding and decoding performance.
An aspect of the present invention is to provide a video encoding method based on motion vector angle prediction (MVAP), which may include: determining a candidate mode list of the MVAP of a current coding block according to the prediction modes of peripheral sub-blocks of the current coding block, wherein the candidate mode list comprises a plurality of MVAP modes; determining one of the plurality of MVAP modes as the MVAP mode of the current coding block; and writing mode information corresponding to the determined MVAP mode into a code stream.
Alternatively, the step of determining the candidate mode list of the MVAP of the current coding block may include: when the ratio (α_ca + β_ca)/γ_c is greater than a preset threshold, reducing the number of candidate modes in the candidate mode list to a preset number or reordering the priorities of the candidate modes in the candidate mode list, where α_ca is the number of sub-blocks whose prediction mode is an intra prediction mode among the upper sub-blocks of the current coding block, β_ca is the number of unavailable sub-blocks among the upper sub-blocks of the current coding block, γ_c is the number of upper sub-blocks of the current coding block, and an unavailable sub-block is a sub-block that has not been encoded.
Alternatively, the step of determining the candidate mode list of the MVAP of the current coding block may include: when the ratio (α_cl + β_cl)/γ_c is greater than a preset threshold, reducing the number of candidate modes in the candidate mode list to a preset number or reordering the priorities of the candidate modes in the candidate mode list, where α_cl is the number of sub-blocks whose prediction mode is an intra prediction mode among the left sub-blocks of the current coding block, β_cl is the number of unavailable sub-blocks among the left sub-blocks of the current coding block, γ_c is the number of left sub-blocks of the current coding block, and an unavailable sub-block is a sub-block that has not been encoded.
Optionally, the step of determining one of the plurality of MVAP modes as the MVAP mode of the current coding block may include: separately calculating the coding cost of encoding the current coding block in each of the plurality of MVAP modes; and determining the one MVAP mode based on the calculated coding costs.
Optionally, the mode information corresponding to the one MVAP mode is an index value of the one MVAP mode in the candidate mode list.
Another aspect of the present disclosure is to provide a video decoding method based on motion vector angle prediction (MVAP), which may include: parsing the code stream to obtain the MVAP mode information of the current coding block; determining a candidate mode list of the MVAP of the current coding block according to the prediction modes of peripheral sub-blocks of the current coding block, wherein the candidate mode list comprises a plurality of MVAP modes; and determining the motion information of the current coding block according to the candidate mode list and the parsed MVAP mode information of the current coding block.
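As a rough illustration of the decoding flow just described, the sketch below parses an MVAP mode index, rebuilds the candidate list from the neighbouring prediction modes using the same rule as the encoder, and selects the mode. The three callables are hypothetical stand-ins for the parsing, list-construction, and motion-derivation steps and are passed in so the sketch stays self-contained.

```python
def decode_mvap_block(read_mode_index, build_candidate_list, derive_motion, neighbour_modes):
    """Decoder-side outline: parse index -> rebuild candidate list -> pick mode -> derive motion."""
    mode_index = read_mode_index()                           # MVAP mode information parsed from the code stream
    candidate_list = build_candidate_list(neighbour_modes)   # same construction rule as the encoder
    mvap_mode = candidate_list[mode_index]                   # candidate whose index matches the parsed info
    return derive_motion(mvap_mode)                          # motion information of each sub-block
```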
Optionally, the MVAP mode information is an index value of the MVAP mode in the candidate mode list.
Alternatively, the step of determining the candidate mode list of the MVAP of the current coding block according to the prediction modes of the peripheral sub-blocks of the current coding block may include: when the ratio (α_da + β_da)/γ_d is greater than a preset threshold, reducing the number of candidate modes in the candidate mode list to a preset number or reordering the priorities of the candidate modes in the candidate mode list, where α_da is the number of sub-blocks whose prediction mode is an intra prediction mode among the upper sub-blocks of the current coding block, β_da is the number of unavailable sub-blocks among the upper sub-blocks of the current coding block, γ_d is the number of upper sub-blocks of the current coding block, and an unavailable sub-block is a sub-block that has not been decoded.
Alternatively, the step of determining the candidate mode list of the MVAP of the current coding block may include: when the ratio (α_dl + β_dl)/γ_d is greater than a preset threshold, reducing the number of candidate modes in the candidate mode list to a preset number or reordering the priorities of the candidate modes in the candidate mode list, where α_dl is the number of sub-blocks whose prediction mode is an intra prediction mode among the left sub-blocks of the current coding block, β_dl is the number of unavailable sub-blocks among the left sub-blocks of the current coding block, γ_d is the number of left sub-blocks of the current coding block, and an unavailable sub-block is a sub-block that has not been decoded.
Optionally, the step of determining the motion information of the current coding block according to the candidate mode list and the parsed MVAP mode information of the current coding block may include: determining the MVAP mode in the candidate mode list whose mode information matches the parsed MVAP mode information as the MVAP mode of the current coding block, and acquiring the motion information of the current coding block based on the determined MVAP mode of the current coding block.
Another aspect of the present disclosure is to provide a video encoding method based on motion vector angle prediction (MVAP), which may include: determining, according to the prediction modes of the peripheral sub-blocks of the current coding block, whether to encode the current coding block through MVAP; and in response to determining not to encode the current coding block in the MVAP mode, determining not to transmit an MVAP identifier in the code stream, wherein the identifier indicates whether the current coding block is encoded through MVAP.
Alternatively, the step of determining whether to encode the current coding block in the MVAP mode according to the prediction modes of the peripheral sub-blocks of the current coding block may include: when the ratio (α_c + β_c)/γ_c is greater than a preset threshold, determining not to encode the current coding block through MVAP, where α_c is the number of sub-blocks whose prediction mode is an intra prediction mode among the peripheral sub-blocks of the current coding block, β_c is the number of unavailable sub-blocks among the peripheral sub-blocks of the current coding block, γ_c is the number of peripheral sub-blocks of the current coding block, and an unavailable sub-block is a sub-block that has not been encoded.
Another aspect of the present disclosure is to provide a video decoding method based on motion vector angle prediction (MVAP), which may include: determining, according to the prediction modes of the peripheral sub-blocks of the current coding block, whether to parse the MVAP identifier of the current coding block from the code stream, wherein the identifier indicates whether the current coding block is encoded through MVAP; and in response to determining not to parse the MVAP identifier of the current coding block from the code stream, determining not to decode the current coding block through MVAP.
Optionally, the step of determining whether to parse the MVAP identifier of the current coding block from the code stream according to the prediction modes of the peripheral sub-blocks of the current coding block may include: when the ratio (α_c + β_c)/γ_c is greater than a preset threshold, determining not to parse the MVAP identifier of the current coding block from the code stream, where α_c is the number of sub-blocks whose prediction mode is an intra prediction mode among the peripheral sub-blocks of the current coding block, β_c is the number of unavailable sub-blocks among the peripheral sub-blocks of the current coding block, γ_c is the number of peripheral sub-blocks of the current coding block, and an unavailable sub-block is a sub-block that has not been decoded.
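A minimal sketch of the gating rule shared by the encoder and decoder follows: the MVAP identifier is neither written nor parsed when intra-coded and unavailable peripheral sub-blocks dominate. The function name, the INTRA/INTER/UNAVAILABLE labels, and the threshold value are assumptions for illustration only.

```python
INTRA, INTER, UNAVAILABLE = "intra", "inter", "unavailable"

def mvap_flag_is_signalled(peripheral_modes, threshold=0.5):
    """Return False when (alpha_c + beta_c) / gamma_c exceeds the threshold.

    peripheral_modes: prediction modes of the peripheral (upper and left) sub-blocks.
    alpha_c: number of intra-predicted peripheral sub-blocks.
    beta_c : number of unavailable (not yet coded/decoded) peripheral sub-blocks.
    gamma_c: total number of peripheral sub-blocks.  The threshold is a placeholder.
    """
    gamma_c = len(peripheral_modes)
    if gamma_c == 0:
        return False
    alpha_c = sum(1 for m in peripheral_modes if m == INTRA)
    beta_c = sum(1 for m in peripheral_modes if m == UNAVAILABLE)
    return (alpha_c + beta_c) / gamma_c <= threshold

# Encoder: if not mvap_flag_is_signalled(...), do not write the MVAP identifier.
# Decoder: if not mvap_flag_is_signalled(...), do not parse it and do not decode with MVAP.
```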
An aspect of the present disclosure is to provide a video encoding method based on the skip mode or the direct mode, which may include: classifying the skip mode or the direct mode into a plurality of coding modes, wherein each of the plurality of coding modes includes a plurality of sub-modes; determining one sub-mode of one of the plurality of coding modes as the optimal coding mode of the current coding block; and writing the coding mode information corresponding to that coding mode and the sub-mode information corresponding to that sub-mode into the code stream.
Optionally, the step of classifying the skip mode or the direct mode into a plurality of coding modes comprises classifying the skip mode or the direct mode in one of three classification manners. In the first classification manner, the coding modes are: an advanced motion vector expression mode, an affine skip mode, a spatial and temporal skip mode, a history-based motion vector prediction mode, and a motion vector angle prediction mode, wherein the spatial and temporal skip mode includes the spatial skip mode and the temporal skip mode. In the second classification manner, the coding modes are: an advanced motion vector expression mode, a sub-block mode, and a spatial, temporal and history skip mode, wherein the sub-block mode includes the affine motion mode and the motion vector angle prediction mode, which have different candidate mode lists, and the spatial, temporal and history skip mode includes the spatial skip mode, the temporal skip mode and the history skip mode, which cover spatial motion vectors, temporal motion vectors and history-based motion vectors and share one candidate motion vector list. In the third classification manner, the coding modes are: an advanced motion vector expression mode, a sub-block mode, and a spatial, temporal and history skip mode, wherein the sub-block mode includes the affine motion mode and the motion vector angle prediction mode, which share one candidate mode list, and the spatial, temporal and history skip mode includes the spatial skip mode, the temporal skip mode and the history skip mode, which cover spatial motion vectors, temporal motion vectors and history-based motion vectors and share one candidate motion vector list.
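To make the two-level signalling concrete, the dictionary below sketches the second classification manner as a coding-mode to sub-mode hierarchy: a coding-mode index is written first and a sub-mode index within that coding mode second. The shortened labels, the placeholder sub-modes under the advanced motion vector expression mode, and the helper name are illustrative assumptions.

```python
# Illustrative two-level hierarchy for the second classification manner.
SKIP_DIRECT_MODES = {
    "advanced_motion_vector_expression": ["umve_candidate_0", "umve_candidate_1"],   # placeholder sub-modes
    "subblock": ["affine", "motion_vector_angle_prediction"],                        # separate candidate lists
    "spatial_temporal_history_skip": ["spatial", "temporal", "history_based"],       # shared candidate MV list
}

def encode_mode_choice(coding_mode, sub_mode):
    """Return the (coding-mode index, sub-mode index) pair to be written to the code stream."""
    coding_modes = list(SKIP_DIRECT_MODES)
    mode_idx = coding_modes.index(coding_mode)
    sub_idx = SKIP_DIRECT_MODES[coding_mode].index(sub_mode)
    return mode_idx, sub_idx

# Example: encode_mode_choice("subblock", "motion_vector_angle_prediction") -> (1, 1)
```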
Optionally, the encoding mode information and the sub-mode information are an index value corresponding to the one encoding mode and an index value corresponding to the one sub-mode, respectively.
Another aspect of the present disclosure is to provide a video decoding method based on the skip mode or the direct mode, which may include: parsing, from the code stream, the coding mode information of the current coding block and the sub-mode information of that coding mode; classifying the skip mode or the direct mode into a plurality of coding modes, wherein each of the plurality of coding modes includes a plurality of sub-modes; determining the coding mode corresponding to the coding mode information among the coding modes obtained by the classification as the coding mode of the current coding block, and determining the sub-mode corresponding to the sub-mode information within the determined coding mode as the sub-mode of the coding mode of the current coding block; and determining the motion information of the current coding block based on the determined coding mode and sub-mode of the current coding block.
Optionally, the encoding mode information and the sub-mode information are an index value corresponding to the one encoding mode and an index value corresponding to the one sub-mode, respectively.
Another aspect of the present disclosure is to provide a video encoding method for context model selection based on motion vector angle prediction, which may include: binarizing mode information corresponding to an MVAP mode according to the prediction modes of the peripheral sub-blocks of the current coding block, wherein the first binary symbol of the binarized mode information indicates whether the MVAP mode is a horizontal-class mode or a vertical-class mode, and the context model corresponding to the first binary symbol indicates the probability of occurrence of the horizontal-class mode or the vertical-class mode; and writing the binarized mode information corresponding to the MVAP mode of the current coding block into the code stream according to context models including the above context model.
Alternatively, binarizing the mode information corresponding to the MVAP mode according to the prediction modes of the peripheral sub-blocks of the current coding block may include: binarizing the mode information corresponding to the MVAP prediction mode when the ratio (α_ca + β_ca)/γ_c is greater than a preset threshold, where α_ca is the number of sub-blocks whose prediction mode is an intra prediction mode among the upper sub-blocks of the current coding block, β_ca is the number of unavailable sub-blocks among the upper sub-blocks, and γ_c is the number of upper sub-blocks of the current coding block, or when the ratio (α_cl + β_cl)/γ_c is greater than the preset threshold, where α_cl is the number of sub-blocks whose prediction mode is an intra prediction mode among the left sub-blocks of the current coding block, β_cl is the number of unavailable sub-blocks among the left sub-blocks, and γ_c is the number of left sub-blocks of the current coding block.
Another aspect of the present disclosure is to provide a video decoding method for context model selection based on motion vector angle prediction, which may include: determining a context model according to the prediction modes of the peripheral sub-blocks of the current coding block, wherein the context model corresponding to the first binary symbol indicates the probability of occurrence of the horizontal-class mode or the vertical-class mode; and parsing, from the code stream, the mode information corresponding to the MVAP mode of the current coding block according to the determined context model.
Alternatively, the step of determining a context model according to the prediction modes of the peripheral sub-blocks of the current coding block may include: determining the probability, indicated by the context model corresponding to the first binary symbol, of occurrence of the horizontal-class mode or the vertical-class mode when the ratio (α_ca + β_ca)/γ_c is greater than a preset threshold or when the ratio (α_cl + β_cl)/γ_c is greater than the preset threshold, where α_ca and β_ca are respectively the numbers of intra-predicted and unavailable sub-blocks among the upper sub-blocks of the current coding block, α_cl and β_cl are respectively the numbers of intra-predicted and unavailable sub-blocks among the left sub-blocks of the current coding block, and γ_c is the number of upper (or left) sub-blocks of the current coding block.
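A simplified sketch of how the first binary symbol of the MVAP mode information could select a context model from these neighbourhood statistics is given below. The context indices, the threshold, and the horizontal/vertical class split are illustrative assumptions, not the normative entropy-coding design.

```python
INTRA, UNAVAILABLE = "intra", "unavailable"

def intra_or_unavailable_ratio(sub_block_modes):
    """(alpha + beta) / gamma over one row or column of neighbouring sub-blocks."""
    if not sub_block_modes:
        return 1.0
    flagged = sum(1 for m in sub_block_modes if m in (INTRA, UNAVAILABLE))
    return flagged / len(sub_block_modes)

def first_bin_context(above_modes, left_modes, threshold=0.5):
    """Choose a context index for the first bin (horizontal class vs. vertical class).

    Context 1: the above neighbours are mostly intra/unavailable, so a horizontal-class
    mode is more probable; context 2: the left neighbours are mostly intra/unavailable,
    so a vertical-class mode is more probable; context 0: the default model.
    All index values are placeholders.
    """
    if intra_or_unavailable_ratio(above_modes) > threshold:
        return 1
    if intra_or_unavailable_ratio(left_modes) > threshold:
        return 2
    return 0
```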
Another aspect of the present disclosure is to provide a video encoding device based on motion vector angle prediction (MVAP), which may include: a candidate list determining unit configured to determine a candidate mode list of the MVAP of a current coding block according to the prediction modes of peripheral sub-blocks of the current coding block, wherein the candidate mode list includes a plurality of MVAP modes; an MVAP mode determining unit configured to determine one of the plurality of MVAP modes as the MVAP mode of the current coding block; and a writing unit configured to write mode information corresponding to the determined MVAP mode into the code stream.
Alternatively, the candidate list determining unit may be configured to: when the ratio (α_ca + β_ca)/γ_c is greater than a preset threshold, reduce the number of candidate modes in the candidate mode list to a preset number or reorder the priorities of the candidate modes in the candidate mode list, where α_ca is the number of sub-blocks whose prediction mode is an intra prediction mode among the upper sub-blocks of the current coding block, β_ca is the number of unavailable sub-blocks among the upper sub-blocks, γ_c is the number of upper sub-blocks of the current coding block, and an unavailable sub-block is a sub-block that has not been encoded.
Alternatively, the candidate list determining unit may be configured to: when the ratio (α_cl + β_cl)/γ_c is greater than a preset threshold, reduce the number of candidate modes in the candidate mode list to a preset number or reorder the priorities of the candidate modes in the candidate mode list, where α_cl is the number of sub-blocks whose prediction mode is an intra prediction mode among the left sub-blocks of the current coding block, β_cl is the number of unavailable sub-blocks among the left sub-blocks, γ_c is the number of left sub-blocks of the current coding block, and an unavailable sub-block is a sub-block that has not been encoded.
Alternatively, the MVAP mode determining unit may be configured to: separately calculate the coding cost of encoding the current coding block in each of the plurality of MVAP modes; and determine the one MVAP mode based on the calculated coding costs.
Optionally, the mode information corresponding to the one MVAP mode is an index value of the one MVAP mode in the candidate mode list.
Another aspect of the present disclosure is to provide a video decoding device based on motion vector angle prediction (MVAP), which may include: a code stream parsing unit configured to parse the code stream to obtain the MVAP mode information of the current coding block; a candidate list determining unit configured to determine a candidate mode list of the MVAP of the current coding block according to the prediction modes of peripheral sub-blocks of the current coding block, wherein the candidate mode list includes a plurality of MVAP modes; and a motion information determining unit configured to determine the motion information of the current coding block according to the candidate mode list and the parsed MVAP mode information of the current coding block.
Optionally, the MVAP mode information is an index value of the MVAP mode in the candidate mode list.
Alternatively, the candidate list determining unit may be configured to: when the ratio (α_da + β_da)/γ_d is greater than a preset threshold, reduce the number of candidate modes in the candidate mode list to a preset number or reorder the priorities of the candidate modes in the candidate mode list, where α_da is the number of sub-blocks whose prediction mode is an intra prediction mode among the upper sub-blocks of the current coding block, β_da is the number of unavailable sub-blocks among the upper sub-blocks, γ_d is the number of upper sub-blocks of the current coding block, and an unavailable sub-block is a sub-block that has not been decoded.
Alternatively, the candidate list determining unit may be configured to: when the ratio (α_dl + β_dl)/γ_d is greater than a preset threshold, reduce the number of candidate modes in the candidate mode list to a preset number or reorder the priorities of the candidate modes in the candidate mode list, where α_dl is the number of sub-blocks whose prediction mode is an intra prediction mode among the left sub-blocks of the current coding block, β_dl is the number of unavailable sub-blocks among the left sub-blocks, γ_d is the number of left sub-blocks of the current coding block, and an unavailable sub-block is a sub-block that has not been decoded.
Alternatively, the motion information determining unit may be configured to: determine the MVAP mode in the candidate mode list whose mode information matches the parsed MVAP mode information as the MVAP mode of the current coding block, and acquire the motion information of the current coding block based on the determined MVAP mode of the current coding block.
Another aspect of the present disclosure is to provide a video encoding device based on motion vector angle prediction (MVAP), which may include: an encoding determining unit configured to determine, according to the prediction modes of the peripheral sub-blocks of the current coding block, whether to encode the current coding block through MVAP; and a code stream transmission determining unit configured to, in response to determining not to encode the current coding block in the MVAP mode, determine not to transmit an MVAP identifier in the code stream, wherein the identifier indicates whether the current coding block is encoded through MVAP.
Alternatively, the encoding determining unit may be configured to: when the ratio (α_c + β_c)/γ_c is greater than a preset threshold, determine not to encode the current coding block through MVAP, where α_c is the number of sub-blocks whose prediction mode is an intra prediction mode among the peripheral sub-blocks of the current coding block, β_c is the number of unavailable sub-blocks among the peripheral sub-blocks, γ_c is the number of peripheral sub-blocks of the current coding block, and an unavailable sub-block is a sub-block that has not been encoded.
Another aspect of the present disclosure is to provide a video decoding device based on motion vector angle prediction (MVAP), which may include: a code stream parsing determining unit configured to determine, according to the prediction modes of the peripheral sub-blocks of the current coding block, whether to parse the MVAP identifier of the current coding block from the code stream, wherein the identifier indicates whether the current coding block is encoded through MVAP; and a decoding determining unit configured to, in response to determining not to parse the MVAP identifier of the current coding block from the code stream, determine not to decode the current coding block through MVAP.
Optionally, the code stream parsing determining unit may be configured to: when the ratio (α_c + β_c)/γ_c is greater than a preset threshold, determine not to parse the MVAP identifier of the current coding block from the code stream, where α_c is the number of sub-blocks whose prediction mode is an intra prediction mode among the peripheral sub-blocks of the current coding block, β_c is the number of unavailable sub-blocks among the peripheral sub-blocks, γ_c is the number of peripheral sub-blocks of the current coding block, and an unavailable sub-block is a sub-block that has not been decoded.
Another aspect of the present disclosure is to provide a skip mode or direct mode-based video encoding device, which may include: a mode classification unit configured to classify a skip mode or a direct mode into a plurality of encoding modes, wherein each of the plurality of encoding modes includes a plurality of sub-modes; an optimal encoding mode determining unit configured to determine one sub-mode of one of the plurality of encoding modes as an optimal encoding mode of a current encoding block; a code stream writing unit configured to write the encoding mode information corresponding to the one encoding mode and the sub-mode information corresponding to the one sub-mode into the code stream.
Alternatively, the mode classifying unit may be configured to classify the skip mode or the direct mode into the plurality of coding modes in one of three classification manners. In the first classification manner, the coding modes are: an advanced motion vector expression mode, an affine skip mode, a spatial and temporal skip mode, a history-based motion vector prediction mode, and a motion vector angle prediction mode, wherein the spatial and temporal skip mode includes the spatial skip mode and the temporal skip mode. In the second classification manner, the coding modes are: an advanced motion vector expression mode, a sub-block mode, and a spatial, temporal and history skip mode, wherein the sub-block mode includes the affine motion mode and the motion vector angle prediction mode, which have different candidate mode lists, and the spatial, temporal and history skip mode includes the spatial skip mode, the temporal skip mode and the history skip mode, which cover spatial motion vectors, temporal motion vectors and history-based motion vectors and share one candidate motion vector list. In the third classification manner, the coding modes are: an advanced motion vector expression mode, a sub-block mode, and a spatial, temporal and history skip mode, wherein the sub-block mode includes the affine motion mode and the motion vector angle prediction mode, which share one candidate mode list, and the spatial, temporal and history skip mode includes the spatial skip mode, the temporal skip mode and the history skip mode, which cover spatial motion vectors, temporal motion vectors and history-based motion vectors and share one candidate motion vector list.
Optionally, the encoding mode information and the sub-mode information are an index value corresponding to the one encoding mode and an index value corresponding to the one sub-mode, respectively.
Another aspect of the present disclosure is to provide a video decoding device based on the skip mode or the direct mode, which may include: a code stream parsing unit configured to parse, from the code stream, the coding mode information of the current coding block and the sub-mode information of that coding mode; a mode classifying unit configured to classify the skip mode or the direct mode into a plurality of coding modes, wherein each of the plurality of coding modes includes a plurality of sub-modes; a mode determining unit configured to determine the coding mode corresponding to the coding mode information among the coding modes obtained by the classification as the coding mode of the current coding block, and to determine the sub-mode corresponding to the sub-mode information within the determined coding mode as the sub-mode of the coding mode of the current coding block; and a motion information determining unit configured to determine the motion information of the current coding block based on the determined coding mode and sub-mode of the current coding block.
Optionally, the encoding mode information and the sub-mode information are an index value corresponding to the one encoding mode and an index value corresponding to the one sub-mode, respectively.
Another aspect of the present disclosure is to provide a video encoding device for context model selection based on motion vector angle prediction, which may include: a binarization unit configured to binarize mode information corresponding to an MVAP mode according to the prediction modes of the peripheral sub-blocks of the current coding block, wherein the first binary symbol of the binarized mode information indicates whether the MVAP mode is a horizontal-class mode or a vertical-class mode, and the context model corresponding to the first binary symbol indicates the probability of occurrence of the horizontal-class mode or the vertical-class mode; and a code stream writing unit configured to write the binarized mode information corresponding to the MVAP mode of the current coding block into the code stream according to context models including the above context model.
Alternatively, the binarization unit may be configured to: binarize the mode information corresponding to the MVAP prediction mode when the ratio (α_ca + β_ca)/γ_c is greater than a preset threshold, where α_ca is the number of sub-blocks whose prediction mode is an intra prediction mode among the upper sub-blocks of the current coding block, β_ca is the number of unavailable sub-blocks among the upper sub-blocks, and γ_c is the number of upper sub-blocks of the current coding block, or when the ratio (α_cl + β_cl)/γ_c is greater than the preset threshold, where α_cl is the number of sub-blocks whose prediction mode is an intra prediction mode among the left sub-blocks of the current coding block, β_cl is the number of unavailable sub-blocks among the left sub-blocks, and γ_c is the number of left sub-blocks of the current coding block.
Another aspect of the present disclosure is to provide a video decoding device for context model selection based on motion vector angle prediction, which may include: a context model determining unit configured to determine a context model according to the prediction modes of the peripheral sub-blocks of the current coding block, wherein the context model corresponding to the first binary symbol indicates the probability of occurrence of the horizontal-class mode or the vertical-class mode; and a code stream parsing unit configured to parse, from the code stream, the mode information corresponding to the MVAP mode of the current coding block according to the determined context model.
Optionally, the context model determining unit may be configured to: determine the probability, indicated by the context model corresponding to the first binary symbol, of occurrence of the horizontal-class mode or the vertical-class mode when the ratio (α_ca + β_ca)/γ_c is greater than a preset threshold or when the ratio (α_cl + β_cl)/γ_c is greater than the preset threshold, where α_ca and β_ca are respectively the numbers of intra-predicted and unavailable sub-blocks among the upper sub-blocks of the current coding block, α_cl and β_cl are respectively the numbers of intra-predicted and unavailable sub-blocks among the left sub-blocks of the current coding block, and γ_c is the number of upper (or left) sub-blocks of the current coding block.
Another aspect of the present disclosure is to provide a computer-readable storage medium, wherein a computer program is stored thereon, which when executed, can implement the video coding and decoding method as described above.
Another aspect of the present disclosure is to provide a coding and decoding apparatus, which may include: a processor; a memory storing a computer program which, when executed by the processor, implements the video codec method as described above.
The invention optimizes the candidate list of the motion vector angle prediction mode of the current coding block, as well as the context model, according to the prediction modes of the peripheral sub-blocks of the current coding block, reducing the cost introduced by each mode in the candidate list and improving coding performance.
In addition, compared with directly coding each MVAP candidate index value in the skip mode and the direct mode, the proposed scheme designs a coding mode better suited to MVAP according to the characteristics of the MVAP technique. It reduces the number of times the MVAP candidate list must be constructed, shortens the pipeline length in hardware implementations, and effectively improves the decoding performance of the decoder, while the coding mode better suited to MVAP also improves coding performance.
Additional aspects and/or advantages of the present general inventive concept will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the general inventive concept.
Drawings
The above and other aspects, features and advantages of particular embodiments of the present disclosure will become more apparent from the following description taken in conjunction with the accompanying drawings in which:
fig. 1 is a schematic diagram illustrating MVAP-based video coding in the prior art;
fig. 2 is a flowchart illustrating an MVAP-based video encoding method according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram illustrating determining a candidate mode list of a current coding block according to prediction modes of peripheral sub-blocks according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating reordering of candidate patterns in a candidate pattern list according to prediction patterns of peripheral sub-blocks according to an embodiment of the present disclosure;
fig. 5 is a diagram illustrating determining a candidate mode list of a current coding block according to prediction modes of peripheral sub-blocks of the current coding block according to an embodiment of the present disclosure;
fig. 6 illustrates a schematic diagram of reordering candidate modes in a candidate mode list according to prediction modes of peripheral sub-blocks according to an embodiment of the present disclosure;
fig. 7 is a flowchart illustrating an MVAP-based video decoding method according to an embodiment of the present disclosure;
fig. 8 is a flowchart illustrating an MVAP-based video encoding method according to an embodiment of the present disclosure;
fig. 9 is a flowchart illustrating an MVAP-based video decoding method according to an embodiment of the present disclosure;
fig. 10 is a flowchart illustrating a skip mode or direct mode based encoding method according to an embodiment of the present disclosure;
fig. 11 is a flowchart illustrating a skip mode or direct mode based decoding method according to an embodiment of the present disclosure;
fig. 12 is a flowchart illustrating an MVAP-based encoding method for context model selection according to an embodiment of the present disclosure;
fig. 13 is a flowchart illustrating an MVAP-based decoding method for context model selection according to an embodiment of the present disclosure;
fig. 14 is a block diagram illustrating an MVAP-based video encoding device according to an embodiment of the present disclosure;
fig. 15 is a block diagram illustrating an MVAP-based video decoding device according to an embodiment of the present disclosure;
fig. 16 is a block diagram illustrating an MVAP-based video encoding device according to an embodiment of the present disclosure;
fig. 17 is a block diagram illustrating an MVAP-based video decoding device according to an embodiment of the present disclosure;
fig. 18 is a block diagram illustrating an encoding apparatus based on a skip mode or a direct mode according to an embodiment of the present disclosure;
fig. 19 is a block diagram illustrating a skip mode or direct mode based decoding apparatus according to an embodiment of the present disclosure;
fig. 20 illustrates a block diagram of an encoding apparatus for context model selection based on motion vector angle prediction according to an embodiment of the present disclosure;
fig. 21 is a block diagram illustrating a decoding apparatus for context model selection based on motion vector angle prediction according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout. The embodiments are described below in order to explain the present invention by referring to the figures.
Fig. 1 is a diagram illustrating a related art MVAP-based video encoding method.
Referring to fig. 1, in the conventional MVAP technology, a candidate mode list of a current coding block generally includes 5 prediction modes, i.e., a horizontal mode, a vertical mode, a horizontal up mode, a horizontal down mode, and a vertical right prediction mode, which may respectively correspond to index values 0, 1, 2, 3, and 4. The motion information of the neighboring sub-blocks may be copied to each sub-block of the current coding unit according to the selected prediction mode, and then prediction and the like may be performed on each sub-block. The MVAP technology allocates different motion information to each subblock according to a fixed rule, so that the motion information required to be transmitted in a code stream is saved, and the coding performance is improved.
When determining which prediction mode in the candidate mode list is used for the current coding block, the coding cost of the current coding block needs to be calculated for each prediction mode in the 5 prediction modes, and the prediction mode corresponding to the minimum coding cost is determined as the prediction mode of the current coding block by comparing the calculated coding costs. Since the existing MVAP technology does not consider the influence of the prediction modes of the peripheral sub-blocks of the current coding block on the current coding block, each of the 5 prediction modes has to be traversed to determine which prediction mode has the smallest coding cost.
Fig. 2 is a flowchart illustrating an MVAP-based video encoding method according to an embodiment of the present disclosure.
In step S20, a candidate mode list of the MVAP of the current coding block is determined according to the prediction modes of the neighboring subblocks of the current coding block.
As an example, when the ratio (α_ca + β_ca)/γ_c is greater than a preset threshold, where α_ca is the number of sub-blocks whose prediction mode is an intra prediction mode among the upper sub-blocks of the current coding block, β_ca is the number of unavailable sub-blocks among the upper sub-blocks of the current coding block, and γ_c is the number of upper sub-blocks of the current coding block, the number of candidate modes in the candidate mode list may be reduced from the original 5 to 3; for example, the candidate mode list may be reduced to include only the following three modes: the horizontal mode, the horizontal-up mode, and the horizontal-down mode, where the order of the three modes in the candidate mode list is not limited and an unavailable sub-block is a sub-block that has not been encoded.
Alternatively, the number of candidate modes in the candidate mode list may be reduced from the original 5 to some other number.
Fig. 3 is a schematic diagram illustrating determining a candidate mode list of a current coding block according to prediction modes of peripheral sub-blocks according to an embodiment of the present disclosure.
Referring to fig. 3, when the ratio (α_ca + β_ca)/γ_c is greater than a preset threshold (for example, when the upper sub-blocks are unavailable or their prediction mode is intra prediction), where α_ca is the number of sub-blocks whose prediction mode is an intra prediction mode among the upper sub-blocks of the current coding block, β_ca is the number of unavailable sub-blocks among the upper sub-blocks, and γ_c is the number of upper sub-blocks of the current coding block, the candidate modes in the candidate mode list are determined, in order, as: the horizontal mode, the horizontal-up mode, and the horizontal-down mode, which may correspond to index values 0, 1, and 2, respectively.
In the case where only three prediction modes are included in the candidate mode list, the coding cost of each prediction mode may be calculated by traversing only three prediction modes in the candidate mode list to determine the prediction mode of the current coding block.
As another example, when the ratio (α_ca + β_ca)/γ_c is greater than the preset threshold, where α_ca is the number of sub-blocks whose prediction mode is an intra prediction mode among the upper sub-blocks of the current coding block, β_ca is the number of unavailable sub-blocks among the upper sub-blocks, and γ_c is the number of upper sub-blocks of the current coding block, the priorities of the candidate modes in the candidate mode list may be reordered; in this case the number of prediction modes in the candidate mode list does not change, only their order. As an example, the candidate mode list may be ordered in one of three orders: horizontal mode, horizontal-up mode, horizontal-down mode, vertical mode, and vertical-right mode; horizontal mode, horizontal-down mode, horizontal-up mode, vertical mode, and vertical-right mode; or horizontal-down mode, horizontal-up mode, vertical mode, and vertical-right mode.
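The sketch below illustrates, under assumed mode names, threshold, and default order, how an encoder could shrink the candidate list to the three horizontal-class modes when the upper neighbours are dominated by intra or unavailable sub-blocks; the helper name build_mvap_candidate_list is an assumption for this sketch.

```python
HORIZONTAL, VERTICAL, HOR_UP, HOR_DOWN, VER_RIGHT = (
    "horizontal", "vertical", "horizontal_up", "horizontal_down", "vertical_right")
DEFAULT_LIST = [HORIZONTAL, VERTICAL, HOR_UP, HOR_DOWN, VER_RIGHT]
INTRA, UNAVAILABLE = "intra", "unavailable"

def build_mvap_candidate_list(above_modes, threshold=0.5):
    """Reduce the list to three horizontal-class modes when the upper neighbours
    give (alpha_ca + beta_ca) / gamma_c > threshold; otherwise keep the default list."""
    gamma_c = len(above_modes)
    alpha_ca = sum(1 for m in above_modes if m == INTRA)
    beta_ca = sum(1 for m in above_modes if m == UNAVAILABLE)
    if gamma_c and (alpha_ca + beta_ca) / gamma_c > threshold:
        return [HORIZONTAL, HOR_UP, HOR_DOWN]   # index values 0, 1, 2
    return list(DEFAULT_LIST)
```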
Fig. 4 is a schematic diagram illustrating reordering of candidate patterns in a candidate pattern list according to prediction patterns of peripheral sub-blocks according to an embodiment of the present disclosure.
Referring to fig. 4, because the number of sub-blocks whose prediction mode is an inter prediction mode among the upper sub-blocks of the current coding block is small, that is, the proportion of inter-predicted sub-blocks among the upper sub-blocks is too small, the ratio (α_ca + β_ca)/γ_c is greater than the preset threshold, where α_ca is the number of sub-blocks whose prediction mode is an intra prediction mode among the upper sub-blocks of the current coding block, β_ca is the number of unavailable sub-blocks among the upper sub-blocks, and γ_c is the number of upper sub-blocks of the current coding block. The candidate modes in the candidate mode list may then be reordered as: horizontal mode, horizontal-up mode, horizontal-down mode, vertical mode, and vertical-right mode, corresponding to index values 0, 1, 2, 3, and 4, respectively.
As another example, when the ratio (α_cl + β_cl)/γ_c is greater than a preset threshold, where α_cl is the number of sub-blocks whose prediction mode is an intra prediction mode among the left sub-blocks of the current coding block, β_cl is the number of unavailable sub-blocks among the left sub-blocks of the current coding block, and γ_c is the number of left sub-blocks of the current coding block, the number of candidate modes in the candidate mode list is reduced from five to three; for example, the candidate mode list may be determined to include only the following three prediction modes: the vertical mode, the vertical-right mode, and the horizontal-up mode, where the order of the three prediction modes is not limited and an unavailable sub-block is a sub-block that has not been encoded.
Alternatively, the number of candidate modes in the candidate mode list may be reduced from the original five to some other number.
Fig. 5 is a schematic diagram illustrating determining a candidate mode list of a current coding block according to prediction modes of peripheral sub-blocks of the current coding block according to an embodiment of the present disclosure.
Referring to fig. 5, when the ratio (α_cl + β_cl)/γ_c is greater than a preset threshold (e.g., when the left sub-blocks are unavailable or their prediction mode is intra prediction), where α_cl is the number of sub-blocks whose prediction mode is an intra prediction mode among the left sub-blocks of the current coding block, β_cl is the number of unavailable sub-blocks among the left sub-blocks, and γ_c is the number of left sub-blocks of the current coding block, the candidate mode list of the current coding block may be determined to include only: the vertical mode, the vertical-right mode, and the horizontal-up mode, corresponding to index values 0, 1, and 2, respectively. In this case, the coding cost of each prediction mode may be calculated by traversing only the three prediction modes in the candidate mode list to determine the prediction mode of the current coding block.
As another example, when the ratio (α_cl+β_cl)/γ_c of the sum of the number α_cl of sub-blocks whose prediction mode is an intra prediction mode among the left sub-blocks of the current coding block and the number β_cl of unavailable sub-blocks among the left sub-blocks of the current coding block to the number γ_c of left sub-blocks of the current coding block is greater than the preset threshold, the priorities of the candidate modes in the candidate mode list may be reordered; in this case the number of prediction modes in the candidate mode list is not changed, only their order. As an example, the candidate modes in the candidate mode list may be ordered into one of three orders: vertical mode, horizontal up mode, vertical right mode, horizontal mode, and horizontal down mode; vertical mode, vertical right mode, horizontal up mode, horizontal mode, and horizontal down mode; or vertical right mode, horizontal up mode, vertical mode, horizontal mode, and horizontal down mode.
Fig. 6 illustrates a schematic diagram of reordering candidate modes in a candidate mode list according to prediction modes of peripheral sub-blocks according to an embodiment of the present disclosure.
Referring to fig. 6, because the number of sub-blocks whose prediction mode is an inter prediction mode among the left sub-blocks of the current coding block is small, that is, the proportion of inter-predicted sub-blocks among the left sub-blocks is too small, the ratio (α_cl+β_cl)/γ_c of the sum of the number α_cl of intra-predicted sub-blocks and the number β_cl of unavailable sub-blocks among the left sub-blocks of the current coding block to the number γ_c of left sub-blocks of the current coding block is greater than the preset threshold, and the candidate mode list of the current coding block may be reordered as: vertical mode, vertical right mode, horizontal up mode, horizontal mode, and horizontal down mode, corresponding to index values 0, 1, 2, 3, and 4, respectively.
Returning again to fig. 1, in step S21, one candidate mode in the candidate mode list is determined as the prediction mode of the current coding block.
Specifically, as described above, the number of candidate modes in the candidate mode list may be changed or the order of the candidate modes in the candidate mode list may be changed to determine the candidate mode list of the current encoding block.
Optionally, the coding cost of coding the current coding block in each of the multiple MVAP modes may be calculated separately, and the one MVAP mode may be determined based on the calculated coding costs.
As an example, all modes in the candidate mode list may be traversed in this step: for each mode in the list, the motion information of each sub-block of the current coding block is obtained, the coding cost under that mode is then obtained through motion compensation, and finally the prediction mode of the current coding block is determined by comparing the coding costs of the modes. For example, when a horizontal mode is included in the candidate mode list and the calculated coding cost in the horizontal mode is the smallest, the horizontal mode is determined as the prediction mode of the current coding block.
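A minimal sketch of this traversal is given below; derive_subblock_motion() and motion_compensation_cost() are hypothetical stand-ins for the per-sub-block motion derivation and the rate-distortion measurement, which this sketch does not model.

```python
# Minimal sketch of selecting the MVAP mode by traversing the candidate list.

def derive_subblock_motion(block, mode):
    # Placeholder: a real codec derives one motion vector per sub-block from
    # the neighbouring motion information along the angle given by 'mode'.
    return [(len(mode) % 3, 0)] * block["num_subblocks"]

def motion_compensation_cost(block, motion_field):
    # Placeholder: a real codec measures rate-distortion cost after
    # motion-compensated prediction of each sub-block.
    return sum(abs(mv[0]) + abs(mv[1]) for mv in motion_field) + block["base_cost"]

def select_mvap_mode(block, candidate_modes):
    best_mode, best_cost = None, float("inf")
    for mode in candidate_modes:
        motion_field = derive_subblock_motion(block, mode)
        cost = motion_compensation_cost(block, motion_field)
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode, best_cost

block = {"num_subblocks": 16, "base_cost": 10}
print(select_mvap_mode(block, ["horizontal", "horizontal_up", "horizontal_down"]))
```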
In step S22, mode information corresponding to the determined prediction mode of the current coding block is written into the code stream.
As an example, the determined index value of the prediction mode of the current coding block in the candidate list may be used as the mode information of the prediction mode of the current coding block, for example, when the horizontal mode is determined as the prediction mode of the current coding block, if the index value of the horizontal mode in the candidate list is 1, the index value 1 is used as the mode information corresponding to the prediction mode of the current coding block, and the index value is written into the code stream. That is, an index value corresponding to a prediction mode of the current encoding block in the candidate mode list may be used as the mode information corresponding to the prediction mode of the current encoding block.
Fig. 7 is a flowchart illustrating an MVAP-based video decoding method according to an embodiment of the present disclosure.
In step S30, the MVAP mode information of the current coding block is parsed from the code stream.
As an example, the MVAP mode information of the current coding block obtained by parsing may be an index value corresponding to the MVAP mode of the current coding block, which is written into the code stream at the coding stage.
In step S31, a candidate mode list of the MVAP of the current coding block is determined according to the prediction modes of the neighboring subblocks of the current coding block, wherein the candidate mode list includes a plurality of MVAP modes.
As an example, when the ratio (α_da+β_da)/γ_d of the sum of the number α_da of sub-blocks whose prediction mode is an intra prediction mode among the upper sub-blocks of the current coding block and the number β_da of unavailable sub-blocks among the upper sub-blocks of the current coding block to the number γ_d of upper sub-blocks of the current coding block is greater than a preset threshold (e.g., when the upper sub-blocks are unavailable sub-blocks or the prediction mode of the upper sub-blocks is intra prediction), the number of modes in the candidate mode list may be reduced from 5 to 3, which may correspond to index values 0, 1, and 2, respectively; for example, the candidate mode list may be reduced to include only the following three modes: horizontal mode, horizontal up mode, and horizontal down mode, where the order of the three modes in the candidate mode list is not limited, and where an unavailable sub-block represents a sub-block that has not been decoded.
Alternatively, the number of modes in the candidate mode list may be reduced from the original 5 to another number.
As an example, when the ratio (α_da+β_da)/γ_d of the sum of the number α_da of sub-blocks whose prediction mode is an intra prediction mode among the upper sub-blocks of the current coding block and the number β_da of unavailable sub-blocks among the upper sub-blocks of the current coding block to the number γ_d of upper sub-blocks of the current coding block is greater than the preset threshold (for example, when the number of sub-blocks whose prediction mode is an inter prediction mode among the upper sub-blocks of the current coding block is small, that is, the proportion of inter-predicted sub-blocks among the upper sub-blocks is too small), the priorities of the candidate modes in the candidate mode list may be reordered; in this case the number of prediction modes in the candidate mode list is not changed, only their order. As an example, the candidate mode list may be ordered in one of three orders: horizontal mode, horizontal up mode, horizontal down mode, vertical mode, and vertical right mode; horizontal mode, horizontal down mode, horizontal up mode, vertical mode, and vertical right mode; or horizontal down mode, horizontal up mode, horizontal mode, vertical mode, and vertical right mode.
As an example, when the ratio (α_dl+β_dl)/γ_d of the sum of the number α_dl of sub-blocks whose prediction mode is an intra prediction mode among the left sub-blocks of the current coding block and the number β_dl of unavailable sub-blocks among the left sub-blocks of the current coding block to the number γ_d of left sub-blocks of the current coding block is greater than a preset threshold (e.g., when the left sub-blocks are unavailable sub-blocks or the prediction mode of the left sub-blocks is intra prediction), the number of candidate modes in the candidate mode list is reduced from five to three; for example, the candidate mode list may be determined to include only the following three prediction modes: vertical mode, vertical right mode, and horizontal up mode, which may correspond to index values 0, 1, and 2, respectively, where the order of the three prediction modes is not limited, and where an unavailable sub-block represents a sub-block that has not been decoded.
Alternatively, the number of candidate modes in the candidate mode list may be reduced from the original five to another number.
As an example, when the ratio (α_dl+β_dl)/γ_d of the sum of the number α_dl of sub-blocks whose prediction mode is an intra prediction mode among the left sub-blocks of the current coding block and the number β_dl of unavailable sub-blocks among the left sub-blocks of the current coding block to the number γ_d of left sub-blocks of the current coding block is greater than the preset threshold (for example, when the number of sub-blocks whose prediction mode is an inter prediction mode among the left sub-blocks is small, that is, the proportion of inter-predicted sub-blocks among the left sub-blocks is too small), the priorities of the candidate modes in the candidate mode list may be reordered; in this case the number of prediction modes in the candidate mode list is not changed, only their order. As an example, the candidate modes in the candidate mode list may be ordered into one of three orders: vertical mode, horizontal up mode, vertical right mode, horizontal mode, and horizontal down mode; vertical mode, vertical right mode, horizontal up mode, horizontal mode, and horizontal down mode; or vertical right mode, horizontal up mode, vertical mode, horizontal mode, and horizontal down mode.
In step S32, the motion information of the current coding block is determined according to the parsed MVAP mode information of the current coding block and the determined candidate mode list of the MVAP of the current coding block.
As an example, when the MVAP mode information of the current coding block is represented by an index value, the mode in the candidate mode list whose index value is the same as the parsed index value of the current coding block is determined as the MVAP mode of the current coding block, and after the MVAP mode of the current coding block is determined, the current coding block is decoded.
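A minimal decoder-side sketch of this index lookup is shown below; the candidate list is assumed to have been constructed exactly as at the encoder, and the mode labels are illustrative.

```python
# Minimal sketch of the decoder-side mapping from a parsed index value to an
# MVAP mode.  The list construction mirrors the encoder; names are illustrative.

def mvap_mode_from_index(candidate_modes, parsed_index):
    if not 0 <= parsed_index < len(candidate_modes):
        raise ValueError("index outside the candidate mode list")
    return candidate_modes[parsed_index]

# Example: the list was reduced to three modes and the bitstream carried index 1.
candidates = ["vertical", "vertical_right", "horizontal_up"]
print(mvap_mode_from_index(candidates, 1))  # -> "vertical_right"
```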
As described above, the present invention can derive, from the prediction mode information of the peripheral sub-blocks of the current coding block, the candidate mode list best suited to the MVAP of the current coding block (a list with a determined number of modes in a determined order), so that the more likely candidate modes can be written into the code stream with shorter code words, which, according to the entropy theorem, effectively improves coding efficiency; at the decoding end, because fewer candidate modes need to be constructed, the complexity of constructing the candidate list at the decoding end is effectively reduced.
Fig. 8 is a flowchart illustrating an MVAP-based video encoding method according to an embodiment of the present disclosure.
In step S80, it is determined whether the current coding block is encoded by the MVAP according to the prediction modes of the peripheral subblocks of the current coding block.
As an example, when the ratio (α_c+β_c)/γ_c of the sum of the number α_c of sub-blocks whose prediction mode is an intra prediction mode among the peripheral sub-blocks of the current coding block and the number β_c of unavailable sub-blocks among the peripheral sub-blocks of the current coding block to the number γ_c of peripheral sub-blocks of the current coding block is greater than a preset threshold, it is determined that the current coding block is not encoded through MVAP, where an unavailable sub-block represents a sub-block that has not been encoded; otherwise, the current coding block needs to be encoded through MVAP.
In step S81, in response to determining that the current coding block is not coded in the MVAP mode, determining that an identifier of the MVAP is not transmitted in the code stream, wherein the identifier indicates whether the current coding block is coded by the MVAP; otherwise, the identification of the MVAP needs to be transmitted. When the identification of the MVAP is not transmitted, the motion information required to be transmitted in the code stream can be saved.
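The decision of steps S80 and S81 can be sketched as follows; the peripheral sub-block representation and the threshold value are assumptions made for illustration.

```python
# Minimal sketch of the encoder decision on whether the MVAP identifier is
# transmitted, based on the peripheral sub-blocks of the current coding block.

def mvap_flag_is_transmitted(peripheral_subblocks, threshold=0.5):
    """peripheral_subblocks: list with entries 'intra', 'inter', or None (unavailable)."""
    gamma_c = len(peripheral_subblocks)
    alpha_c = sum(1 for s in peripheral_subblocks if s == "intra")
    beta_c = sum(1 for s in peripheral_subblocks if s is None)
    if gamma_c and (alpha_c + beta_c) / gamma_c > threshold:
        return False   # MVAP is not used, so no identifier is written.
    return True        # Otherwise the MVAP identifier is written to the stream.

print(mvap_flag_is_transmitted(["intra", None, None, "inter"]))  # -> False
```

The decoder applies the same rule to decide whether the identifier is parsed from the code stream, as described with reference to fig. 9.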
Fig. 9 is a flowchart illustrating an MVAP-based video decoding method according to an embodiment of the present disclosure.
In step S90, it may be determined whether to parse an identifier of the MVAP of the current coding block from the codestream according to a prediction mode of peripheral subblocks of the current coding block, wherein the identifier indicates whether the current coding block is encoded by the MVAP.
As an example, when the ratio (α_c+β_c)/γ_c of the sum of the number α_c of sub-blocks whose prediction mode is an intra prediction mode among the peripheral sub-blocks of the current coding block and the number β_c of unavailable sub-blocks among the peripheral sub-blocks of the current coding block to the number γ_c of peripheral sub-blocks of the current coding block is greater than a third preset threshold, it is determined that the MVAP identifier of the current coding block is not parsed from the code stream, where an unavailable sub-block represents a sub-block that has not been decoded; otherwise, the MVAP identifier of the current coding block needs to be parsed from the code stream.
In step S91, in response to determining that the identifier of the MVAP of the current coding block is not parsed from the code stream, determining not to decode the current coding block through the MVAP; otherwise, the current coding block needs to be decoded by the MVAP.
Fig. 10 is a flowchart illustrating a skip mode or direct mode based video encoding method according to an embodiment of the present disclosure.
In step S100, the skip mode or the direct mode is classified into a plurality of coding modes.
As an example, the skip mode or the direct mode may be classified into:
an advanced motion vector expression mode, an affine skip mode, a spatial and temporal skip mode, a history-based motion vector prediction mode, and a motion vector angle prediction mode, wherein the spatial and temporal skip mode includes a spatial skip mode and a temporal skip mode (that is, the spatial skip mode and the temporal skip mode are one mode type), and wherein each prediction mode may include a plurality of sub-modes.
Alternatively, the skip mode or the direct mode may be classified into:
advanced motion vector expression mode, sub-block mode, and spatial, temporal, and history skip mode, where the spatial, temporal, and history skip mode includes the spatial skip mode, the temporal skip mode, and the history skip mode (that is, the spatial skip mode, the temporal skip mode, and the history skip mode are treated as one mode type); the sub-block mode includes the affine mode and the motion vector angle prediction mode, and the affine motion mode and the motion vector angle prediction mode have different candidate mode lists; the spatial, temporal, and history skip mode uses spatial motion vectors, temporal motion vectors, and history-based motion vectors, and the spatial skip mode, the temporal skip mode, and the history skip mode share one candidate motion vector list; each prediction mode may include a plurality of sub-modes.
Alternatively, the skip mode or the direct mode may be classified into:
advanced motion vector expression mode, sub-block mode, and spatial, temporal, and history skip mode, where the spatial, temporal, and history skip mode includes the spatial skip mode, the temporal skip mode, and the history skip mode; the sub-block mode includes the affine motion mode and the motion vector angle prediction mode, and the affine motion mode and the motion vector angle prediction mode share one candidate mode list; the spatial, temporal, and history skip mode uses spatial motion vectors, temporal motion vectors, and history-based motion vectors, and the spatial skip mode, the temporal skip mode, and the history skip mode share one candidate motion vector list; each prediction mode may include a plurality of sub-modes.
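One way to hold such a classification in memory is a simple mapping from mode type to its sub-modes, as in the sketch below (the labels are illustrative, and the sub-modes listed for the advanced motion vector expression mode are assumed rather than normative).

```python
# Minimal sketch of one possible grouping of skip/direct coding modes into
# mode types and sub-modes (roughly the second classification described above).
# The strings are illustrative labels, not normative syntax elements.

SKIP_DIRECT_CLASSIFICATION = {
    "advanced_motion_vector_expression": ["umve_candidate_0", "umve_candidate_1"],
    "subblock": ["affine", "motion_vector_angle_prediction"],
    "spatial_temporal_history": ["spatial", "temporal", "history_based"],
}

def submodes_of(mode_type):
    return SKIP_DIRECT_CLASSIFICATION[mode_type]

print(submodes_of("subblock"))
```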
In step S101, one sub-mode of one of the plurality of coding modes is determined as an optimal coding mode of a current coding block.
As an example, all modes in the skip mode or the direct mode may be traversed, and the coding costs of all modes are obtained, and the optimal mode is obtained by comparing the coding costs.
Specifically, for each sub-mode in each coding mode type, the coding cost of the current coding block is determined, and the sub-mode in the coding mode type with the smallest coding cost is determined as the optimal coding mode of the current coding block. For example, when it is determined that the horizontal mode in the MVAP mode has the minimum coding cost, the horizontal mode in the MVAP mode is taken as the optimal coding mode of the current coding block.
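A minimal sketch of this two-level search is shown below; coding_cost() is a hypothetical placeholder for the real rate-distortion measurement.

```python
# Minimal sketch of picking the optimal (mode type, sub-mode) pair for a
# skip/direct coded block by exhaustive cost comparison.

def coding_cost(block, mode_type, sub_mode):
    # Placeholder: a real encoder would measure rate-distortion cost here.
    return len(mode_type) + len(sub_mode) * 0.5

def best_skip_direct_mode(block, classification):
    best = None
    for mode_type, sub_modes in classification.items():
        for sub_mode in sub_modes:
            cost = coding_cost(block, mode_type, sub_mode)
            if best is None or cost < best[2]:
                best = (mode_type, sub_mode, cost)
    return best

classification = {
    "subblock": ["affine", "motion_vector_angle_prediction"],
    "spatial_temporal_history": ["spatial", "temporal", "history_based"],
}
print(best_skip_direct_mode("block_0", classification))
```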
In step S102, the encoding mode information corresponding to the one encoding mode and the sub-mode information corresponding to the one sub-mode are written into the code stream.
Specifically, for example, if it is determined that a horizontal mode in the MVAP mode has the minimum coding cost, mode information corresponding to the MVAP mode and mode information corresponding to the horizontal mode are written into the codestream.
As an example, mode information corresponding to the MVAP mode and mode information corresponding to the horizontal mode may be written into the codestream in various existing methods, for example, the mode information corresponding to the MVAP mode and the mode information corresponding to the horizontal mode may be an index value corresponding to the MVAP mode and an index value corresponding to the horizontal mode, respectively.
Fig. 11 is a flowchart illustrating a skip mode or direct mode based video decoding method according to an embodiment of the present disclosure.
In step S110, the coding mode information of the coding block corresponding to the current coding block and the sub-mode information of the coding mode may be parsed from the code stream.
In step S111, the skip mode or the direct mode may be classified into a plurality of encoding modes, wherein each of the plurality of encoding modes includes a plurality of sub-modes.
The classification may be the same as the classification described with reference to fig. 10, and will not be described herein again.
In step S112, a coding mode corresponding to the coding mode information among the coding modes obtained by the classification may be determined as a coding mode of the current coding block, and a sub-mode corresponding to the sub-mode information among the determined coding modes of the current coding block may be determined as a sub-mode of the coding mode of the current coding block.
Optionally, the encoding mode information and the sub-mode information are an index value corresponding to the one encoding mode and an index value corresponding to the one sub-mode, respectively.
As an example, if the mode information is an index value, the classified coding mode whose index value is the same as the parsed index value of the coding mode of the current coding block is determined as the coding mode of the current coding block, and the sub-mode whose index value is the same as the parsed sub-mode index value is determined as the sub-mode of the current coding block.
In step S113, motion information of the current coding block is determined based on the determined coding mode of the current coding block and the sub-mode. Since it is the prior art to determine the motion information of the coding block based on the determined coding mode and sub-mode, it is not described herein again.
Because the number of motion vector angle prediction modes in the existing standard is not fixed and MVAP shares a candidate mode list with the history-based motion vector prediction technique, in the worst case the decoder must construct the motion vector angle prediction candidate mode list merely to determine whether the coding block uses the motion vector angle prediction technique or the history-based motion vector prediction technique at all.
Fig. 12 is a flowchart illustrating an encoding method for context model selection based on motion vector angle prediction according to an embodiment of the present disclosure.
In step S120, mode information corresponding to the MVAP mode is binarized according to the prediction mode of the peripheral sub-blocks of the current coding block.
As an example, when the ratio (α_ca+β_ca)/γ_c of the sum of the number α_ca of sub-blocks whose prediction mode is an intra prediction mode among the upper sub-blocks of the current coding block and the number β_ca of unavailable sub-blocks among the upper sub-blocks of the current coding block to the number γ_c of upper sub-blocks of the current coding block is greater than a preset threshold, or when the ratio (α_cl+β_cl)/γ_c of the sum of the number α_cl of sub-blocks whose prediction mode is an intra prediction mode among the left sub-blocks of the current coding block and the number β_cl of unavailable sub-blocks among the left sub-blocks of the current coding block to the number γ_c of left sub-blocks of the current coding block is greater than the preset threshold, the mode information corresponding to the MVAP mode is binarized.
The first binary symbol of the binarized mode information indicates whether the MVAP mode of the current coding block is a horizontal class mode (horizontal mode, horizontal down mode, or horizontal up mode) or a vertical class mode (vertical mode or vertical right mode), and the context model corresponding to the first binary symbol indicates the probability of occurrence of the horizontal class mode or the vertical class mode.
Optionally, the context model of other binary symbols than the first binary symbol is not limited.
As an example, a first binary symbol of 0 indicates that the prediction mode is a horizontal class mode, and accordingly a first binary symbol of 1 indicates that the prediction mode is a vertical class mode. The context model corresponding to the first binary symbol therefore indicates the probability of occurrence of the horizontal or vertical class mode.
In step S121, mode information corresponding to the MVAP mode of the current coding block is written into the code stream according to a context model including a context model corresponding to the first binary symbol.
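The sketch below illustrates one possible binarization of this kind together with a simple adaptive probability estimate acting as the context model of the first binary symbol; the bins after the first symbol and the probability update rule are assumptions, not the normative entropy-coding design.

```python
# Minimal sketch: the first binary symbol separates horizontal-class from
# vertical-class MVAP modes, and a running estimate of P(first bin == 0)
# plays the role of the context model for that symbol.

HORIZONTAL_CLASS = {"horizontal", "horizontal_up", "horizontal_down"}
VERTICAL_CLASS = {"vertical", "vertical_right"}

def binarize_mvap_mode(mode):
    first_bin = 0 if mode in HORIZONTAL_CLASS else 1
    members = sorted(HORIZONTAL_CLASS if first_bin == 0 else VERTICAL_CLASS)
    # Remaining bins: unary code of the position within its class (assumption).
    position = members.index(mode)
    return [first_bin] + [1] * position + ([0] if position < len(members) - 1 else [])

class FirstBinContext:
    """Running estimate of P(first bin == 0), i.e. P(horizontal-class mode)."""
    def __init__(self):
        self.zeros, self.total = 1, 2   # simple Laplace-style initialization
    def probability_horizontal(self):
        return self.zeros / self.total
    def update(self, first_bin):
        self.zeros += (first_bin == 0)
        self.total += 1

ctx = FirstBinContext()
bins = binarize_mvap_mode("horizontal_up")
ctx.update(bins[0])
print(bins, ctx.probability_horizontal())
```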
Fig. 13 is a flowchart illustrating a decoding method for context model selection based on motion vector angle prediction according to an embodiment of the present disclosure.
In step S130, a context model is determined according to the prediction modes of the peripheral sub-blocks of the current coding block.
As an example, when the ratio (α_ca+β_ca)/γ_c of the sum of the number α_ca of sub-blocks whose prediction mode is an intra prediction mode among the upper sub-blocks of the current coding block and the number β_ca of unavailable sub-blocks among the upper sub-blocks of the current coding block to the number γ_c of upper sub-blocks of the current coding block is greater than a preset threshold, or when the ratio (α_cl+β_cl)/γ_c of the sum of the number α_cl of sub-blocks whose prediction mode is an intra prediction mode among the left sub-blocks of the current coding block and the number β_cl of unavailable sub-blocks among the left sub-blocks of the current coding block to the number γ_c of left sub-blocks of the current coding block is greater than the preset threshold, the context model is determined, where the context model corresponding to the first binary symbol among the determined context models indicates the probability of occurrence of a horizontal class mode (horizontal mode, horizontal down mode, or horizontal up mode) or a vertical class mode (vertical mode or vertical right mode).
Optionally, the context model of other binary symbols than the first binary symbol is not limited.
In step S131, the mode information corresponding to the MVAP mode of the current coding block is parsed from the code stream according to the determined context model.
The encoding/decoding method according to the exemplary embodiments of the present invention has been described above with reference to fig. 1 to 13. Hereinafter, an encoding/decoding apparatus according to exemplary embodiments of the present invention will be described with reference to fig. 14 to 21.
Fig. 14 is a block diagram illustrating an MVAP-based video encoding device 1400 according to an embodiment of the present disclosure.
Referring to fig. 14, the video encoding apparatus 1400 may include a candidate list determination unit 1401, an MVAP mode determination unit 1402, and a writing unit 1403.
As an example, the candidate list determination unit 1401 may determine a candidate mode list of the MVAP of the current coding block according to the prediction modes of the peripheral sub-blocks of the current coding block.
As an example, the candidate list determination unit 1401 may be configured to: when the ratio (α_ca+β_ca)/γ_c of the sum of the number α_ca of sub-blocks whose prediction mode is an intra prediction mode among the upper sub-blocks of the current coding block and the number β_ca of unavailable sub-blocks among the upper sub-blocks of the current coding block to the number γ_c of upper sub-blocks of the current coding block is greater than a preset threshold (for example, when the upper sub-blocks are unavailable sub-blocks or the prediction mode of the upper sub-blocks is intra prediction), reduce the number of candidate modes in the candidate mode list from 5 to 3; for example, the candidate mode list is reduced to include only the following three modes: horizontal mode, horizontal up mode, and horizontal down mode, where the order of the three modes in the candidate mode list is not limited, and where an unavailable sub-block represents a sub-block that has not been encoded.
Alternatively, the candidate list determination unit 1401 may also reduce the number of candidate modes in the candidate mode list from the original 5 to another number.
As another example, the candidate list determination unit 1401 may, when the ratio (α_ca+β_ca)/γ_c of the sum of the number α_ca of sub-blocks whose prediction mode is an intra prediction mode among the upper sub-blocks of the current coding block and the number β_ca of unavailable sub-blocks among the upper sub-blocks of the current coding block to the number γ_c of upper sub-blocks of the current coding block is greater than a preset threshold (for example, when the number of inter-predicted sub-blocks among the upper sub-blocks of the current coding block is small, that is, the proportion of inter-predicted sub-blocks among the upper sub-blocks to the total number of upper sub-blocks is too small), reorder the priorities of the candidate modes in the candidate mode list; in this case the number of prediction modes in the candidate mode list is not changed. As an example, the candidate mode list may be ordered in one of three orders: horizontal mode, horizontal up mode, horizontal down mode, vertical mode, and vertical right mode; horizontal mode, horizontal down mode, horizontal up mode, vertical mode, and vertical right mode; or horizontal down mode, horizontal up mode, horizontal mode, vertical mode, and vertical right mode.
As an example, the candidate list determination unit 1401 may, when the ratio (α_cl+β_cl)/γ_c of the sum of the number α_cl of sub-blocks whose prediction mode is an intra prediction mode among the left sub-blocks of the current coding block and the number β_cl of unavailable sub-blocks among the left sub-blocks of the current coding block to the number γ_c of left sub-blocks of the current coding block is greater than a preset threshold (for example, when the left sub-blocks are unavailable sub-blocks or the prediction mode of the left sub-blocks is intra prediction), reduce the number of candidate modes in the candidate mode list from five to three; for example, the candidate mode list may be determined to include only the following three prediction modes: vertical mode, vertical right mode, and horizontal up mode, where the order of the three prediction modes is not limited, and where an unavailable sub-block represents a sub-block that has not been encoded.
As another example, the candidate list determination unit 1401 may, when the ratio (α_cl+β_cl)/γ_c of the sum of the number α_cl of sub-blocks whose prediction mode is an intra prediction mode among the left sub-blocks of the current coding block and the number β_cl of unavailable sub-blocks among the left sub-blocks of the current coding block to the number γ_c of left sub-blocks of the current coding block is greater than a preset threshold (for example, when the proportion of inter-predicted sub-blocks among the left sub-blocks to the total number of left sub-blocks is too small), reorder the priorities of the candidate modes in the candidate mode list; in this case the number of prediction modes in the candidate mode list is not changed, only their order. As an example, the candidate modes in the candidate mode list may be ordered into one of three orders: vertical mode, horizontal up mode, vertical right mode, horizontal mode, and horizontal down mode; vertical mode, vertical right mode, horizontal up mode, horizontal mode, and horizontal down mode; or vertical right mode, horizontal up mode, vertical mode, horizontal mode, and horizontal down mode.
As an example, the MVAP mode determination unit 1402 may determine one candidate mode in the candidate mode list as a prediction mode of the current coding block.
Specifically, as described above, the candidate list determination unit 1401 may change the number of candidate modes in the candidate mode list or change the order of candidate modes in the candidate mode list to determine the candidate mode list of the current encoding block.
Alternatively, the MVAP mode determining unit 1402 may calculate a coding cost of a current coding block to be coded in each of the multiple MVAP modes, respectively; and determining the one MVAP mode based on the calculated coding cost.
As an example, the MVAP mode determining unit 1402 may traverse all modes in the candidate mode list, obtain motion information of each sub-block of the current coding block for the modes in the list, obtain a coding cost in the mode through motion compensation, and finally determine the prediction mode of the current coding block by comparing the coding costs in the modes. For example, when a horizontal mode is included in the candidate mode list and the calculated coding cost in the horizontal mode is the smallest, the horizontal mode is determined as the prediction mode of the current coding block.
The writing unit 1403 may write mode information corresponding to the determined prediction mode of the current coding block into the code stream.
As an example, the determined index value of the prediction mode of the current coding block in the candidate list may be used as the mode information of the prediction mode of the current coding block. For example, when the MVAP mode determination unit 1402 determines the horizontal mode as the prediction mode of the current coding block, if the index value of the horizontal mode in the candidate list is 1, the index value 1 is used as the mode information corresponding to the prediction mode of the current coding block, and the index value is written into the code stream by the writing unit 1403. That is, the index value corresponding to the prediction mode of the current coding block in the candidate mode list may be used as the mode information corresponding to the prediction mode of the current coding block.
Fig. 15 is a block diagram illustrating an MVAP-based video decoding device 1500 according to an embodiment of the present disclosure.
Referring to fig. 15, the video decoding apparatus 1500 may include a codestream parsing unit 1501, a candidate list determination unit 1502, and a motion information determination unit 1503.
As an example, the codestream parsing unit 1501 may parse the MVAP mode information of the current coding block from the codestream.
As an example, the MVAP mode information of the current coding block obtained by parsing may be an index value corresponding to the MVAP mode of the current coding block, which is written into the code stream at the coding stage.
As an example, the candidate list determination unit 1502 may be configured to: when the ratio (α_da+β_da)/γ_d of the sum of the number α_da of sub-blocks whose prediction mode is an intra prediction mode among the upper sub-blocks of the current coding block and the number β_da of unavailable sub-blocks among the upper sub-blocks of the current coding block to the number γ_d of upper sub-blocks of the current coding block is greater than a preset threshold (for example, when the upper sub-blocks are unavailable sub-blocks or the prediction mode of the upper sub-blocks is intra prediction), reduce the number of modes in the candidate mode list from the original 5 to 3; for example, the candidate mode list may be reduced to include only the following three modes: horizontal mode, horizontal up mode, and horizontal down mode, which may correspond to index values 0, 1, and 2, respectively, where the order of the three modes in the candidate mode list is not limited, and where an unavailable sub-block represents a sub-block that has not been decoded.
Alternatively, the candidate list determination unit 1502 may also reduce the number of modes in the candidate mode list from the original 5 to another number.
As an example, the candidate list determination unit 1502 may be configured to: when the ratio (α_da+β_da)/γ_d of the sum of the number α_da of sub-blocks whose prediction mode is an intra prediction mode among the upper sub-blocks of the current coding block and the number β_da of unavailable sub-blocks among the upper sub-blocks of the current coding block to the number γ_d of upper sub-blocks of the current coding block is greater than the preset threshold (for example, when the proportion of inter-predicted sub-blocks among the upper sub-blocks to the total number of upper sub-blocks is too small), reorder the priorities of the candidate modes in the candidate mode list; in this case the number of prediction modes in the candidate mode list is not changed, only their order. As an example, the candidate mode list may be ordered in one of three orders: horizontal mode, horizontal up mode, horizontal down mode, vertical mode, and vertical right mode; horizontal mode, horizontal down mode, horizontal up mode, vertical mode, and vertical right mode; or horizontal down mode, horizontal up mode, horizontal mode, vertical mode, and vertical right mode.
As an example, the candidate list determination unit 1502 may be configured to: when the ratio (α_dl+β_dl)/γ_d of the sum of the number α_dl of sub-blocks whose prediction mode is an intra prediction mode among the left sub-blocks of the current coding block and the number β_dl of unavailable sub-blocks among the left sub-blocks of the current coding block to the number γ_d of left sub-blocks of the current coding block is greater than a preset threshold (for example, when the left sub-blocks are unavailable sub-blocks or the prediction mode of the left sub-blocks is intra prediction), reduce the number of candidate modes in the candidate mode list from five to three; for example, the candidate mode list may be determined to include only the following three prediction modes: vertical mode, vertical right mode, and horizontal up mode, which may correspond to index values 0, 1, and 2, respectively, where the order of the three prediction modes is not limited, and where an unavailable sub-block represents a sub-block that has not been decoded.
Alternatively, the candidate list determination unit 1502 may also reduce the number of candidate modes in the candidate mode list from the original five to another number.
As an example, the candidate list determination unit 1502 may be configured to: when the ratio (α_dl+β_dl)/γ_d of the sum of the number α_dl of sub-blocks whose prediction mode is an intra prediction mode among the left sub-blocks of the current coding block and the number β_dl of unavailable sub-blocks among the left sub-blocks of the current coding block to the number γ_d of left sub-blocks of the current coding block is greater than a preset threshold (for example, when the proportion of inter-predicted sub-blocks among the left sub-blocks to the total number of left sub-blocks is too small), reorder the priorities of the candidate modes in the candidate mode list; in this case the number of prediction modes in the candidate mode list is not changed, only their order. As an example, the candidate modes in the candidate mode list may be ordered into one of three orders: vertical mode, horizontal up mode, vertical right mode, horizontal mode, and horizontal down mode; vertical mode, vertical right mode, horizontal up mode, horizontal mode, and horizontal down mode; or vertical right mode, horizontal up mode, vertical mode, horizontal mode, and horizontal down mode.
The motion information determination unit 1503 may determine the motion information of the current coding block according to the analyzed MVAP mode information of the current coding block and the determined candidate mode list of the MVAP of the current coding block.
Fig. 16 is a block diagram illustrating an MVAP-based video encoding device 1600 according to an embodiment of the present disclosure.
Referring to fig. 16, the video encoding device 1600 may include an encoding determination unit 1601, a codestream transmission determination unit 1602.
As an example, the encoding determining unit 1601 may be configured to determine whether to encode the current coding block through the MVAP according to a prediction mode of a peripheral sub-block of the current coding block.
As an example, the encoding determination unit 1601 may be configured to: when the ratio (α_c+β_c)/γ_c of the sum of the number α_c of sub-blocks whose prediction mode is an intra prediction mode among the peripheral sub-blocks of the current coding block and the number β_c of unavailable sub-blocks among the peripheral sub-blocks of the current coding block to the number γ_c of peripheral sub-blocks of the current coding block is greater than a preset threshold, determine that the current coding block is not encoded through MVAP, where an unavailable sub-block represents a sub-block that has not been encoded; otherwise, determine that the current coding block needs to be encoded through MVAP.
The codestream transmission determination unit 1602 may be configured to: in response to determining that the current coding block is not coded in the MVAP mode, determining that an identifier of the MVAP is not transmitted in the code stream, wherein the identifier indicates whether the current coding block is coded through the MVAP; and in response to determining to encode the current coding block in the MVAP mode, determining to transmit an identification of the MVAP in the code stream. When the MVAP identification is determined not to be transmitted in the code stream, the motion information required to be transmitted in the code stream can be saved.
Fig. 17 is a block diagram illustrating an MVAP-based video decoding apparatus 1700 according to an embodiment of the present disclosure.
Referring to fig. 17, the video decoding apparatus 1700 may include a stream parsing determination unit 1701, a decoding determination unit 1702.
As an example, the code stream parsing determining unit 1701 may be configured to determine whether to parse an identification of an MVAP of a current coding block from a code stream according to prediction modes of peripheral sub-blocks of the current coding block, wherein the identification indicates whether the current coding block is encoded by the MVAP.
As an example, the code stream parsing determination unit 1701 may be configured to: when the ratio (α_c+β_c)/γ_c of the sum of the number α_c of sub-blocks whose prediction mode is an intra prediction mode among the peripheral sub-blocks of the current coding block and the number β_c of unavailable sub-blocks among the peripheral sub-blocks of the current coding block to the number γ_c of peripheral sub-blocks of the current coding block is greater than a preset threshold, determine that the MVAP identifier of the current coding block is not parsed from the code stream, where an unavailable sub-block represents a sub-block that has not been decoded; otherwise, determine that the MVAP identifier of the current coding block needs to be parsed from the code stream.
The decoding determination unit 1702 may be configured to: in response to determining that the identification of the MVAP of the current coding block is not analyzed from the code stream, determining that the current coding block is not decoded by the MVAP; otherwise, determining that the current coding block needs to be decoded through the MVAP.
Fig. 18 is a block diagram illustrating a skip mode or direct mode-based video encoding apparatus 1800 according to an embodiment of the present disclosure.
Referring to fig. 18, the encoding apparatus 1800 may include a mode classification unit 1801, an optimal encoding mode determination unit 1802, and a stream writing unit 1803.
As an example, the classifying unit 1801 may be configured to classify the skip mode or the direct mode into a plurality of coding modes, wherein each of the plurality of coding modes includes a plurality of sub-modes.
As an example, the classification unit 1801 may be configured to: the skip mode or the direct mode is classified into a plurality of coding modes in one of three classification manners:
the first classification mode is as follows: advanced motion vector expression mode, affine skip mode, spatial and temporal skip mode, history-based motion vector prediction mode, and motion vector angle prediction mode, wherein the spatial and temporal skip mode includes spatial skip mode and temporal skip mode (that is, spatial skip mode and temporal skip mode are taken as one mode type), wherein each prediction mode may include a plurality of sub-modes;
the second classification mode is as follows: advanced motion vector expression mode, sub-block mode, and spatial, temporal, and history skip mode, where the spatial, temporal, and history skip mode includes the spatial skip mode, the temporal skip mode, and the history skip mode (that is, the spatial skip mode, the temporal skip mode, and the history skip mode are treated as one mode type); the sub-block mode includes the affine mode and the motion vector angle prediction mode, and the affine motion mode and the motion vector angle prediction mode have different candidate mode lists; the spatial, temporal, and history skip mode uses spatial motion vectors, temporal motion vectors, and history-based motion vectors, and the spatial skip mode, the temporal skip mode, and the history skip mode share one candidate motion vector list; each prediction mode may include a plurality of sub-modes;
the third classification mode is as follows: advanced motion vector expression mode, sub-block mode, and spatial, temporal, and history skip mode, where the spatial, temporal, and history skip mode includes the spatial skip mode, the temporal skip mode, and the history skip mode; the sub-block mode includes the affine motion mode and the motion vector angle prediction mode, and the affine motion mode and the motion vector angle prediction mode share one candidate mode list; the spatial, temporal, and history skip mode uses spatial motion vectors, temporal motion vectors, and history-based motion vectors, and the spatial skip mode, the temporal skip mode, and the history skip mode share one candidate motion vector list; each prediction mode may include a plurality of sub-modes.
As an example, the optimal encoding mode determination unit 1802 may be configured to determine one sub-mode of one of the plurality of encoding modes as the optimal encoding mode of the current encoding block.
As an example, the code stream writing unit 1803 may be configured to write the encoding mode information corresponding to the one encoding mode and the sub-mode information corresponding to the one sub-mode into the code stream.
As an example, the encoding mode information and the sub mode information are an index value corresponding to the one encoding mode and an index value corresponding to the one sub mode, respectively.
Fig. 19 is a block diagram illustrating a skip mode or direct mode based decoding apparatus 1900 according to an embodiment of the present disclosure.
Referring to fig. 19, the decoding apparatus 1900 may include a codestream parsing unit 1901, a mode classification unit 1902, a mode determination unit 1903, and a motion information determination unit 1904.
As an example, the code stream parsing unit 1901 may be configured to parse, from the code stream, encoding mode information of a coding block corresponding to a current coding block and sub-mode information of the encoding mode.
As an example, the mode classification unit 1902 may be configured to classify the skip mode or the direct mode into a plurality of encoding modes, wherein each of the plurality of encoding modes includes a plurality of sub-modes.
As an example, the mode determining unit 1903 may be configured to determine, as the coding mode of the current coding block, a coding mode corresponding to the coding mode information among the coding modes obtained by the classification, and determine, as the sub-mode of the coding mode of the current coding block, a sub-mode corresponding to the sub-mode information among the determined coding modes of the current coding block.
As an example, the motion information determining unit 1904 may be configured to determine the motion information of the current coding block based on the determined coding mode of the current coding block and the sub-mode.
Alternatively, the encoding mode information and the sub mode information may be an index value corresponding to the one encoding mode and an index value corresponding to the one sub mode, respectively.
Fig. 20 illustrates a block diagram of an encoding apparatus 2000 for context model selection based on motion vector angle prediction according to an embodiment of the present disclosure.
Referring to fig. 20, the encoding apparatus 2000 may include a binarization unit 2001, a code stream writing unit 2002.
As an example, the binarization unit 2001 may be configured to binarize mode information corresponding to the MVAP mode according to a prediction mode of a peripheral sub-block of the current coding block, wherein a first binary symbol of the binarized mode information indicates whether the MVAP mode is a horizontal type mode or a vertical type mode, and a context model corresponding to the first binary symbol indicates a probability of occurrence of the horizontal type mode or the vertical type mode.
As an example, the binarization unit 2001 may be configured to: when the ratio (α_ca+β_ca)/γ_c of the sum of the number α_ca of sub-blocks whose prediction mode is an intra prediction mode among the upper sub-blocks of the current coding block and the number β_ca of unavailable sub-blocks among the upper sub-blocks of the current coding block to the number γ_c of upper sub-blocks of the current coding block is greater than a preset threshold, or when the ratio (α_cl+β_cl)/γ_c of the sum of the number α_cl of sub-blocks whose prediction mode is an intra prediction mode among the left sub-blocks of the current coding block and the number β_cl of unavailable sub-blocks among the left sub-blocks of the current coding block to the number γ_c of left sub-blocks of the current coding block is greater than the preset threshold, binarize the mode information corresponding to the MVAP prediction mode.
As an example, the codestream writing unit 2002 may be configured to write mode information corresponding to the MVAP mode of the current coding block into the codestream according to a context model including the context model (i.e., the context model corresponding to the first binary symbol).
Fig. 21 is a block diagram illustrating a decoding apparatus 2100 for context model selection based on motion vector angle prediction according to an embodiment of the present disclosure.
Referring to fig. 21, the decoding apparatus 2100 may include a context model determining unit 2101, a codestream parsing unit 2102.
As an example, the context model determining unit 2101 may be configured to determine a context model according to prediction modes of surrounding sub-blocks of the current coding block, wherein a context model corresponding to a first binary symbol in the determined context models indicates a probability of occurrence of a horizontal class mode or a vertical class mode.
The context model determination unit 2101 may be configured to: when the ratio (α_ca+β_ca)/γ_c of the sum of the number α_ca of sub-blocks whose prediction mode is an intra prediction mode among the upper sub-blocks of the current coding block and the number β_ca of unavailable sub-blocks among the upper sub-blocks of the current coding block to the number γ_c of upper sub-blocks of the current coding block is greater than a preset threshold, or when the ratio (α_cl+β_cl)/γ_c of the sum of the number α_cl of sub-blocks whose prediction mode is an intra prediction mode among the left sub-blocks of the current coding block and the number β_cl of unavailable sub-blocks among the left sub-blocks of the current coding block to the number γ_c of left sub-blocks of the current coding block is greater than the preset threshold, determine that the context model corresponding to the first binary symbol indicates the probability of occurrence of the horizontal class mode or the vertical class mode.
As an example, the code stream parsing unit 2102 may be configured to parse mode information corresponding to an MVAP mode of a current coding block from the code stream according to the context model.
It should be understood that each unit in the codec device according to the exemplary embodiment of the present invention may be implemented as a hardware component and/or a software component. The individual units may be implemented, for example, using Field Programmable Gate Arrays (FPGAs) or Application Specific Integrated Circuits (ASICs), depending on the processing performed by the individual units as defined by the skilled person.
Exemplary embodiments of the present invention provide a computer-readable storage medium storing a computer program that, when executed by a processor, implements a codec method as in the above exemplary embodiments. The computer readable storage medium is any data storage device that can store data which can be read by a computer system. Examples of computer-readable storage media include: read-only memory, random access memory, read-only optical disks, magnetic tapes, floppy disks, optical data storage devices, and carrier waves (such as data transmission through the internet via wired or wireless transmission paths).
The encoding and decoding apparatus according to an exemplary embodiment of the present invention may include: a processor and a memory, wherein the memory stores a computer program which, when executed by the processor, implements the codec method as described in the above exemplary embodiments.
Although a few exemplary embodiments of the present invention have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.

Claims (20)

1. A video coding method for predicting MVAP based on motion vector angles, the method comprising:
determining a candidate mode list of MVAP of a current coding block according to the prediction mode of peripheral sub-blocks of the current coding block, wherein the candidate mode list comprises a plurality of MVAP modes;
determining one MVAP mode in the multiple MVAP modes as the MVAP mode of the current coding block;
and writing the mode information corresponding to the MVAP mode into a code stream.
2. The method of claim 1, wherein the determining the candidate mode list of the MVAP of the current coding block comprises:
when the ratio (α_ca+β_ca)/γ_c of the sum of the number α_ca of sub-blocks whose prediction mode is an intra prediction mode among the upper sub-blocks of the current coding block and the number β_ca of unavailable sub-blocks among the upper sub-blocks of the current coding block to the number γ_c of upper sub-blocks of the current coding block is greater than a preset threshold, reducing the number of the candidate modes in the candidate mode list to a preset number or reordering the priorities of the candidate modes in the candidate mode list, wherein an unavailable sub-block represents a sub-block that is not encoded.
3. The method of claim 1, wherein the determining the candidate mode list of the MVAP of the current coding block comprises:
when the ratio (α_cl+β_cl)/γ_c of the sum of the number α_cl of sub-blocks whose prediction mode is an intra prediction mode among the left sub-blocks of the current coding block and the number β_cl of unavailable sub-blocks among the left sub-blocks of the current coding block to the number γ_c of left sub-blocks of the current coding block is greater than a preset threshold, reducing the number of the candidate modes in the candidate mode list to a preset number or reordering the priorities of the candidate modes in the candidate mode list, wherein an unavailable sub-block represents a sub-block that is not encoded.
4. The method of claim 1, wherein determining one of the plurality of MVAP modes as the MVAP mode of the current coding block comprises:
respectively calculating the coding cost of the current coding block for coding in each MVAP mode in the multiple MVAP modes;
determining the one MVAP mode based on the calculated coding cost.
5. A video decoding method for predicting MVAP based on motion vector angles, the method comprising:
analyzing the code stream to obtain MVAP mode information of the current coding block;
determining a candidate mode list of MVAP of a current coding block according to the prediction mode of peripheral sub-blocks of the current coding block, wherein the candidate mode list comprises a plurality of MVAP modes;
and determining the motion information of the current coding block according to the candidate mode list and the analyzed MVAP mode information of the current coding block.
6. The method of claim 5, wherein the determining the candidate mode list of the MVAP of the current coding block according to the prediction modes of the peripheral sub-blocks of the current coding block comprises:
when the ratio (α_da + β_da)/γ_d is greater than a predetermined threshold, reducing the number of candidate modes in the candidate mode list to a predetermined number, or reordering the priorities of the candidate modes in the candidate mode list, where α_da is the number of sub-blocks whose prediction mode is an intra prediction mode among the upper sub-blocks of the current coding block, β_da is the number of unavailable sub-blocks among the upper sub-blocks of the current coding block, γ_d is the number of upper sub-blocks of the current coding block, and an unavailable sub-block denotes a sub-block that has not yet been decoded.
7. The method of claim 5, wherein the determining the candidate mode list of the MVAP of the current coding block comprises:
when the ratio (α_dl + β_dl)/γ_d is greater than a predetermined threshold, reducing the number of candidate modes in the candidate mode list to a predetermined number, or reordering the priorities of the candidate modes in the candidate mode list, where α_dl is the number of sub-blocks whose prediction mode is an intra prediction mode among the left sub-blocks of the current coding block, β_dl is the number of unavailable sub-blocks among the left sub-blocks of the current coding block, γ_d is the number of left sub-blocks of the current coding block, and an unavailable sub-block denotes a sub-block that has not yet been decoded.
8. The method of claim 5, wherein the determining motion information of the current coding block according to the candidate mode list and the parsed MVAP mode information of the current coding block comprises:
determining, as the MVAP mode of the current coding block, the MVAP mode in the candidate mode list whose mode information is the same as the parsed MVAP mode information, and acquiring the motion information of the current coding block based on the determined MVAP mode of the current coding block.
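A minimal, non-normative sketch of the decoder-side matching step of claim 8, assuming that the parsed MVAP mode information is compared against a mode-information field stored with each candidate; the structure fields and the angle-based derivation hinted at in the comments are illustrative.

```cpp
#include <algorithm>
#include <stdexcept>
#include <vector>

// Illustrative candidate entry; field names are not taken from the claims.
struct MvapCandidate {
    int modeInfo;  // mode information signalled in the code stream
    int angleIdx;  // angle used to project neighbouring motion vectors
};

// Non-normative sketch of claim 8: pick from the candidate list the MVAP mode
// whose mode information equals the parsed value, then use it to derive the
// motion information of the current coding block.
int SelectDecodedMvapMode(const std::vector<MvapCandidate>& list, int parsedModeInfo) {
    auto it = std::find_if(list.begin(), list.end(),
                           [parsedModeInfo](const MvapCandidate& c) {
                               return c.modeInfo == parsedModeInfo;
                           });
    if (it == list.end())
        throw std::runtime_error("parsed MVAP mode information not in candidate list");
    return it->angleIdx;  // each sub-block's motion information would then be filled
                          // by projecting neighbouring motion vectors along this angle
}
```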
9. A video encoding method based on motion vector angle prediction (MVAP), the method comprising:
determining, according to the prediction modes of the peripheral sub-blocks of a current coding block, whether to encode the current coding block through MVAP;
and in response to determining not to encode the current coding block in the MVAP mode, determining not to transmit an identifier of MVAP in the code stream, wherein the identifier indicates whether the current coding block is encoded through MVAP.
10. The method of claim 9, wherein the determining whether to encode the current coding block through MVAP according to the prediction modes of the peripheral sub-blocks of the current coding block comprises: when the ratio (α_c + β_c)/γ_c is greater than a predetermined threshold, determining not to encode the current coding block through MVAP, where α_c is the number of sub-blocks whose prediction mode is an intra prediction mode among the peripheral sub-blocks of the current coding block, β_c is the number of unavailable sub-blocks among the peripheral sub-blocks of the current coding block, γ_c is the number of peripheral sub-blocks of the current coding block, and an unavailable sub-block denotes a sub-block that has not yet been encoded.
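A non-normative sketch of the encoder-side decision of claims 9 and 10; the enum, function name and threshold parameter are illustrative assumptions.

```cpp
#include <vector>

// Illustrative neighbour states; the names are not taken from the claims.
enum class NeighbourMode { Inter, Intra, Unavailable };

// Non-normative sketch of claims 9 and 10: the encoder skips MVAP entirely
// (and does not write the MVAP identifier) when too many peripheral sub-blocks
// are intra-coded or unavailable, since MVAP then has little usable
// neighbouring motion information.
bool ShouldSignalMvapFlag(const std::vector<NeighbourMode>& peripheral, double threshold) {
    if (peripheral.empty()) return false;
    int alpha = 0, beta = 0;
    for (NeighbourMode m : peripheral) {
        if (m == NeighbourMode::Intra) ++alpha;        // alpha: intra-coded neighbours
        if (m == NeighbourMode::Unavailable) ++beta;   // beta: not-yet-encoded neighbours
    }
    const double ratio = static_cast<double>(alpha + beta) / peripheral.size();  // (alpha+beta)/gamma
    // Ratio above the threshold: do not encode with MVAP and do not transmit the identifier.
    return ratio <= threshold;
}
```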
11. A video decoding method based on motion vector angle prediction (MVAP), the method comprising:
determining, according to the prediction modes of the peripheral sub-blocks of a current coding block, whether to parse an identifier of MVAP of the current coding block from a code stream, wherein the identifier indicates whether the current coding block is encoded through MVAP;
and in response to determining not to parse the identifier of MVAP of the current coding block from the code stream, determining not to decode the current coding block through MVAP.
12. The method of claim 11, wherein the determining whether to parse the identifier of MVAP of the current coding block from the code stream according to the prediction modes of the peripheral sub-blocks of the current coding block comprises:
when the ratio (α_c + β_c)/γ_c is greater than a predetermined threshold, determining not to parse the identifier of MVAP of the current coding block from the code stream, where α_c is the number of sub-blocks whose prediction mode is an intra prediction mode among the peripheral sub-blocks of the current coding block, β_c is the number of unavailable sub-blocks among the peripheral sub-blocks of the current coding block, γ_c is the number of peripheral sub-blocks of the current coding block, and an unavailable sub-block denotes a sub-block that has not yet been decoded.
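The decoder-side counterpart of claims 11 and 12 can be sketched as follows; the bit-reader abstraction is an assumption introduced for the example and does not correspond to any specific bitstream API.

```cpp
#include <vector>

// Illustrative neighbour states; the names are not taken from the claims.
enum class DecNeighbourMode { Inter, Intra, Unavailable };

// Non-normative sketch of claims 11 and 12: the decoder evaluates the same
// neighbour statistic as the encoder and, when the ratio exceeds the threshold,
// neither parses the MVAP identifier from the code stream nor decodes the
// block with MVAP. Returns the value of the identifier (false if absent).
template <typename BitReader>
bool ParseMvapFlagIfPresent(const std::vector<DecNeighbourMode>& peripheral,
                            double threshold, BitReader readBit) {
    int alpha = 0, beta = 0;
    for (DecNeighbourMode m : peripheral) {
        if (m == DecNeighbourMode::Intra) ++alpha;       // alpha: intra-coded neighbours
        if (m == DecNeighbourMode::Unavailable) ++beta;  // beta: not-yet-decoded neighbours
    }
    const bool skipParsing =
        !peripheral.empty() &&
        static_cast<double>(alpha + beta) / peripheral.size() > threshold;
    if (skipParsing) return false;  // identifier absent from the stream: MVAP not used
    return readBit() != 0;          // otherwise the identifier is read as usual
}
```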
13. A method of video encoding based on skip mode or direct mode, the method comprising:
classifying a skip mode or a direct mode into a plurality of coding modes, wherein each of the plurality of coding modes includes a plurality of sub-modes;
determining one sub-mode of one of the plurality of coding modes as the optimal coding mode of the current coding block;
and writing the coding mode information corresponding to the coding mode and the sub-mode information corresponding to the sub-mode into the code stream.
14. The method of claim 13, wherein the classifying the skip mode or the direct mode into the plurality of coding modes comprises classifying the skip mode or the direct mode in one of the following three classification manners:
the first classification manner: an advanced motion vector expression mode, an affine skip mode, a spatial and temporal skip mode, a history-based motion vector prediction mode, and a motion vector angle prediction mode, wherein the spatial and temporal skip mode includes a spatial skip mode and a temporal skip mode;
the second classification manner: an advanced motion vector expression mode, a sub-block mode, and a spatial, temporal and history skip mode, wherein the sub-block mode includes an affine motion mode and a motion vector angle prediction mode that have separate candidate mode lists, the spatial, temporal and history skip mode includes a spatial skip mode, a temporal skip mode and a history-based skip mode whose candidates are spatial motion vectors, temporal motion vectors and history-based motion vectors, respectively, and the spatial skip mode, the temporal skip mode and the history-based skip mode share one candidate motion vector list;
the third classification manner: an advanced motion vector expression mode, a sub-block mode, and a spatial, temporal and history skip mode, wherein the sub-block mode includes an affine motion mode and a motion vector angle prediction mode that share one candidate mode list, the spatial, temporal and history skip mode includes a spatial skip mode, a temporal skip mode and a history-based skip mode whose candidates are spatial motion vectors, temporal motion vectors and history-based motion vectors, respectively, and the spatial skip mode, the temporal skip mode and the history-based skip mode share one candidate motion vector list.
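For illustration, the two-level organisation of claims 13 to 15 can be represented by a small table keyed by coding mode; the grouping follows the first classification manner, and the sub-mode names are invented placeholders rather than terms from the claims.

```cpp
#include <string>
#include <utility>
#include <vector>

// Non-normative sketch of claims 13 to 15: the skip/direct syntax is organised
// as a two-level table (coding mode -> sub-modes), and the encoder writes an
// index for the chosen coding mode followed by an index for the chosen sub-mode.
using ModeTable = std::vector<std::pair<std::string, std::vector<std::string>>>;

ModeTable FirstClassificationManner() {
    return {
        {"advanced_motion_vector_expression", {"umve_cand_0", "umve_cand_1"}},
        {"affine_skip",                       {"affine_cand_0", "affine_cand_1"}},
        {"spatial_temporal_skip",             {"spatial_skip", "temporal_skip"}},
        {"history_based_mvp",                 {"hmvp_cand_0", "hmvp_cand_1"}},
        {"motion_vector_angle_prediction",    {"horizontal", "vertical"}},
    };
}

// Indices written to the code stream for the selected coding mode and sub-mode.
struct SkipDirectSyntax {
    int codingModeIdx;  // index of the coding mode within the classification
    int subModeIdx;     // index of the sub-mode within that coding mode
};
```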
15. The method of claim 13, wherein the coding mode information and the sub-mode information are an index value corresponding to the determined coding mode and an index value corresponding to the determined sub-mode, respectively.
16. A method of video decoding based on skip mode or direct mode, the method comprising:
parsing, from a code stream, coding mode information of a current coding block and sub-mode information of the coding mode;
classifying a skip mode or a direct mode into a plurality of coding modes, wherein each of the plurality of coding modes includes a plurality of sub-modes;
determining a coding mode corresponding to the coding mode information in the coding modes obtained by classification as a coding mode of a current coding block, and determining a sub-mode corresponding to the sub-mode information in the determined coding mode of the current coding block as a sub-mode of the coding mode of the current coding block;
and determining the motion information of the current coding block based on the determined coding mode and the sub-mode of the current coding block.
17. The method of claim 16, wherein the coding mode information and the sub-mode information are an index value corresponding to the coding mode of the current coding block and an index value corresponding to the sub-mode of the coding mode, respectively.
18. A video encoding method with context model selection based on motion vector angle prediction (MVAP), the method comprising:
when the ratio (α_ca + β_ca)/γ_c is greater than a predetermined threshold or the ratio (α_cl + β_cl)/γ_c is greater than a predetermined threshold, binarizing the mode information corresponding to the MVAP mode of the current coding block, where α_ca is the number of sub-blocks whose prediction mode is an intra prediction mode among the upper sub-blocks of the current coding block, β_ca is the number of unavailable sub-blocks among the upper sub-blocks of the current coding block, α_cl is the number of sub-blocks whose prediction mode is an intra prediction mode among the left sub-blocks of the current coding block, β_cl is the number of unavailable sub-blocks among the left sub-blocks of the current coding block, and γ_c is the number of upper (or left, respectively) sub-blocks of the current coding block, wherein a first binary symbol of the binarized mode information indicates whether the MVAP mode is a horizontal mode or a vertical mode, and a context model corresponding to the first binary symbol indicates the probability of occurrence of the horizontal mode or the vertical mode;
and writing the binarized mode information corresponding to the MVAP mode of the current coding block into the code stream according to the context model.
19. A video decoding method with context model selection based on motion vector angle prediction (MVAP), the method comprising:
when the ratio (α_ca + β_ca)/γ_c is greater than a predetermined threshold or the ratio (α_cl + β_cl)/γ_c is greater than a predetermined threshold, determining a context model corresponding to a first binary symbol of the binarized mode information of the MVAP mode of the current coding block, where α_ca is the number of sub-blocks whose prediction mode is an intra prediction mode among the upper sub-blocks of the current coding block, β_ca is the number of unavailable sub-blocks among the upper sub-blocks of the current coding block, α_cl is the number of sub-blocks whose prediction mode is an intra prediction mode among the left sub-blocks of the current coding block, β_cl is the number of unavailable sub-blocks among the left sub-blocks of the current coding block, and γ_c is the number of upper (or left, respectively) sub-blocks of the current coding block, wherein the first binary symbol indicates whether the MVAP mode is a horizontal mode or a vertical mode, and the context model indicates the probability of occurrence of the horizontal mode or the vertical mode;
and parsing the mode information corresponding to the MVAP mode of the current coding block from the code stream according to the determined context model.
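A non-normative sketch of the context selection of claims 18 and 19, assuming a simple two-context arrangement for the first binary symbol; the actual number of context models and their initialisation are not specified by the claims.

```cpp
#include <vector>

// Minimal context model: a running estimate of the probability that the bin is 1.
struct ContextModel {
    double pOne = 0.5;
};

// Non-normative sketch of claims 18 and 19: when the neighbour statistic
// exceeds the threshold on the upper or the left side, a dedicated context
// model is used for the first bin of the binarized MVAP mode information,
// which distinguishes horizontal from vertical modes.
// contexts must hold at least two models: [0] default, [1] conditioned on
// "many intra-coded or unavailable neighbours".
ContextModel& SelectFirstBinContext(std::vector<ContextModel>& contexts,
                                    double ratioAbove, double ratioLeft,
                                    double threshold) {
    const bool conditioned = (ratioAbove > threshold) || (ratioLeft > threshold);
    return contexts[conditioned ? 1 : 0];
}
```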
20. A computer-readable storage medium, in which a computer program is stored which, when executed, implements the method of any of claims 1 to 19.
CN201910501806.1A 2019-06-11 2019-06-11 Video coding and decoding method and device based on motion vector angle prediction Pending CN112073733A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910501806.1A CN112073733A (en) 2019-06-11 2019-06-11 Video coding and decoding method and device based on motion vector angle prediction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910501806.1A CN112073733A (en) 2019-06-11 2019-06-11 Video coding and decoding method and device based on motion vector angle prediction

Publications (1)

Publication Number Publication Date
CN112073733A true CN112073733A (en) 2020-12-11

Family

ID=73658540

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910501806.1A Pending CN112073733A (en) 2019-06-11 2019-06-11 Video coding and decoding method and device based on motion vector angle prediction

Country Status (1)

Country Link
CN (1) CN112073733A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115361550A (en) * 2021-02-22 2022-11-18 北京达佳互联信息技术有限公司 Improved overlapped block motion compensation for inter prediction
CN113709456A (en) * 2021-06-30 2021-11-26 杭州海康威视数字技术股份有限公司 Decoding method, device, equipment and machine readable storage medium
CN113794877A (en) * 2021-06-30 2021-12-14 杭州海康威视数字技术股份有限公司 Decoding method, encoding method, device, equipment and machine readable storage medium
CN114650418A (en) * 2021-06-30 2022-06-21 杭州海康威视数字技术股份有限公司 Decoding method, encoding method, device and equipment
CN113709456B (en) * 2021-06-30 2022-11-25 杭州海康威视数字技术股份有限公司 Decoding method, device, equipment and machine readable storage medium
CN113794877B (en) * 2021-06-30 2022-11-25 杭州海康威视数字技术股份有限公司 Decoding method, encoding method, device, equipment and machine readable storage medium
WO2023273802A1 (en) * 2021-06-30 2023-01-05 杭州海康威视数字技术股份有限公司 Decoding method and apparatus, coding method and apparatus, device, and storage medium
CN114650418B (en) * 2021-06-30 2023-01-24 杭州海康威视数字技术股份有限公司 Decoding method, encoding method, device and equipment
TWI806650B (en) * 2021-06-30 2023-06-21 大陸商杭州海康威視數字技術股份有限公司 Decoding methods, encoding methods, and apparatuses, devices and storage media thereof

Similar Documents

Publication Publication Date Title
US11553185B2 (en) Method and apparatus for processing a video signal
RU2709158C1 (en) Encoding and decoding of video with high resistance to errors
JP7351485B2 (en) Image encoding method and device, image decoding method and device, and program
JP6728249B2 (en) Image coding supporting block division and block integration
KR101208863B1 (en) Selecting encoding types and predictive modes for encoding video data
TW202017369A (en) Extended reference intra-picture prediction
CN101283600A (en) Reference image selection method and device
JP2023052767A (en) Video processing method and encoder
KR102267770B1 (en) Method and device for determining a set of modifiable elements in a group of pictures
CN110024397B (en) Method and apparatus for encoding video
JP6212890B2 (en) Moving picture coding apparatus, moving picture coding method, and moving picture coding program
CN113647105A (en) Inter prediction for exponential partitions
CN112073733A (en) Video coding and decoding method and device based on motion vector angle prediction
JP7448558B2 (en) Methods and devices for image encoding and decoding
CN114339236B (en) Prediction mode decoding method, electronic device and machine-readable storage medium
CN114339224B (en) Image enhancement method, device and machine-readable storage medium
CN105306953A (en) Image coding method and device
CN115955572A (en) Encoding method, decoding method, electronic device, and computer-readable storage medium
CN113056913A (en) Method and system for constructing merge candidate list including adding non-adjacent diagonal space merge candidates

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination