CN112055202B - Inter-frame prediction method, video coding method, electronic device and storage medium


Info

Publication number
CN112055202B
CN112055202B (application CN202010852587.4A)
Authority
CN
China
Prior art keywords
motion vector
block
sub-block
control point
inter prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010852587.4A
Other languages
Chinese (zh)
Other versions
CN112055202A (en)
Inventor
方瑞东
曾飞洋
张政腾
江东
林聚财
陈瑶
粘春湄
殷俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202010852587.4A
Publication of CN112055202A
Application granted
Publication of CN112055202B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/103 Selection of coding mode or of prediction mode
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N 19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N 19/51 Motion estimation or motion compensation
    • H04N 19/513 Processing of motion vectors

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application discloses an inter-frame prediction method, a video coding method, an electronic device, and a storage medium. In the inter-frame prediction method, after the current block is divided into a plurality of sub-blocks and a first preset number of control points are determined, a first candidate list is constructed from the motion vectors of adjacent encoded blocks of the current block. When the first number of motion vector candidate groups in the first candidate list is smaller than a second preset number, a second candidate list is constructed from the spatio-temporal domain; the second candidate list contains a second number of motion vector candidate groups, and the sum of the second number and the first number is less than or equal to the second preset number. The final motion vector of each sub-block is then obtained using the first candidate list and the second candidate list. Since only the product of the width and the height of the current block must be greater than or equal to a preset threshold, the condition for enabling the affine prediction mode is relaxed, more current blocks can be encoded with the efficient affine prediction mode, and coding efficiency is improved.

Description

Inter-frame prediction method, video coding method, electronic device and storage medium
Technical Field
The present application relates to the field of video coding technologies, and in particular, to an inter-frame prediction method, a video coding method, an electronic device, and a storage medium.
Background
Video is formed by the sequential playing of many still images, each of which can be viewed as a frame. Because the values of corresponding pixels in adjacent frames are usually close and colors do not change abruptly, this temporal correlation can be exploited for compression; this is inter-frame prediction. In short, inter prediction searches a reference frame image of the current block for the block that best matches the current block and uses it to predict the current block. An efficient inter-frame prediction mode is the affine prediction mode; however, it can only be enabled when the width and the height of the current block to be encoded are each greater than or equal to preset values, so some current blocks cannot enable the affine prediction mode to obtain higher encoding efficiency.
Disclosure of Invention
The technical problem mainly solved by the present application is to provide an inter-frame prediction method, a video coding method, an electronic device, and a storage medium that relax the condition for enabling the affine prediction mode, allow more current blocks to be encoded with the efficient affine prediction mode, and improve coding efficiency.
In order to solve the technical problem, the application adopts a technical scheme that:
provided is an inter prediction method including:
dividing a current block into a plurality of sub-blocks with the same size, and determining a first preset number of sub-blocks in the sub-blocks as control points; wherein the product of the width and the height of the current block is greater than or equal to a preset threshold;
constructing a first candidate list from motion vectors of neighboring encoded blocks of the current block, wherein the first candidate list comprises a first number of motion vector candidate sets comprising motion vectors of at least two of the control points;
constructing a second candidate list from a spatio-temporal domain in response to the first number being less than a second preset number, wherein the second candidate list comprises a second number of the motion vector candidate sets and a sum of the second number and the first number is less than or equal to the second preset number;
determining a final motion vector for each of the sub-blocks using the first candidate list and the second candidate list.
In order to solve the above technical problem, another technical solution adopted by the present application is:
there is provided a video encoding method including:
obtaining a final motion vector of each sub-block in a current block, wherein the final motion vector of each sub-block is obtained by using the inter-frame prediction method according to the above technical solution;
obtaining pixel values of the current block based on the final motion vector of each sub-block to encode the current block.
In order to solve the above technical problem, another technical solution adopted by the present application is:
there is provided an electronic device comprising a memory and a processor coupled to each other, the memory storing program instructions, and the processor being configured to execute the program instructions to implement the inter-frame prediction method according to the above technical solution or to implement the video coding method according to the above technical solution.
In order to solve the above technical problem, another technical solution adopted by the present application is:
there is provided a computer readable storage medium having stored thereon program instructions executable by a processor to implement an inter prediction method as described in the above technical solution or to implement a video coding method as described in the above technical solution.
The beneficial effects of this application are as follows. Different from the prior art, the inter-frame prediction method provided by the application divides the current block into a plurality of sub-blocks, determines a first preset number of the sub-blocks as control points, and constructs a first candidate list from the motion vectors of adjacent encoded blocks of the current block. When the first number of motion vector candidate groups in the first candidate list is smaller than a second preset number, a second candidate list is constructed from the spatio-temporal domain; the second candidate list contains a second number of motion vector candidate groups, and the sum of the second number and the first number is less than or equal to the second preset number. The final motion vector of each sub-block is then obtained using the first candidate list and the second candidate list. Since only the product of the width and the height of the current block must be greater than or equal to a preset threshold, the condition for enabling the affine prediction mode is relaxed, more current blocks can be encoded with the efficient affine prediction mode, and coding efficiency is improved.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort. Wherein:
FIG. 1 is a flowchart illustrating an embodiment of an inter prediction method according to the present application;
FIG. 2a is a diagram of an embodiment of dividing a current block into a plurality of sub-blocks;
FIG. 2b is a diagram of a current block and adjacent encoded blocks;
FIG. 3 is a flowchart illustrating an embodiment of step S13 in FIG. 1;
FIG. 4 is a diagram illustrating scaling of motion vectors;
FIG. 5 is a flowchart illustrating an embodiment of a step before step S22 in FIG. 3;
FIG. 6 is a flowchart illustrating an embodiment of step S32 in FIG. 5;
FIG. 7 is a schematic flow chart illustrating another embodiment of the step before step S22 in FIG. 3;
FIG. 8 is a flowchart illustrating an embodiment of step S42 in FIG. 7;
FIG. 9 is a schematic flow chart illustrating another embodiment of the step before step S22 in FIG. 3;
FIG. 10 is a flowchart illustrating an embodiment of step S52 in FIG. 9;
FIG. 11 is a flowchart illustrating an embodiment of a step before step S14 in FIG. 1;
FIG. 12 is a flowchart illustrating an embodiment of step S62 in FIG. 11;
FIG. 13 is a flowchart illustrating an embodiment of a video encoding method of the present application;
FIG. 14 is a block diagram of an embodiment of an inter prediction apparatus;
FIG. 15 is a block diagram of an embodiment of a video encoding apparatus;
FIG. 16 is a schematic structural diagram of an embodiment of an electronic device according to the present application;
FIG. 17 is a schematic structural diagram of an embodiment of a computer-readable storage medium according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an embodiment of an inter-frame prediction method according to the present application, the inter-frame prediction method including the following steps:
step S11, dividing the current block into a plurality of sub-blocks with the same size, and determining a first preset number of sub-blocks in the plurality of sub-blocks as control points; wherein the product of the width and the height of the current block is greater than or equal to a preset threshold.
For inter prediction of the current block, the reference block that best matches the current block is searched in a reference frame image of the current block, yielding the displacement between the positions of the current block and the reference block, i.e., the Motion Vector (MV) of the current block, which is then used to perform motion compensation on the current block. Block-based motion compensation assumes that all pixels within a prediction unit follow the same translational motion model by sharing the same motion vector MV; however, a translational motion model cannot capture rotation, zooming, and deformation, which is why the affine prediction mode was developed. The affine prediction mode first divides the current block into a plurality of sub-blocks of the same size, such as 4×4 or 8×8 sub-blocks, as shown in fig. 2a, which is a schematic diagram of an embodiment of dividing the current block into a plurality of sub-blocks; specifically, a 16×16 current block is divided into sixteen 4×4 sub-blocks. A different final motion vector is then generated for each sub-block, and motion compensation is performed separately for each, so as to model scenes with rotation, zooming, and deformation. The process of generating a different final motion vector for each sub-block is described below.
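To make the sub-block layout concrete, the following minimal Python sketch (not taken from the patent; the helper names and the default 4×4 sub-block size are illustrative assumptions) splits a block into equally sized sub-blocks and marks the four corner sub-blocks that serve as control points:

```python
# Illustrative sketch: dividing a current block into equally sized sub-blocks
# and marking the four corner sub-blocks used as control points.

def split_into_subblocks(block_w, block_h, sub=4):
    """Return (col, row) indices of all sub x sub sub-blocks of the current block."""
    return [(x, y) for y in range(block_h // sub) for x in range(block_w // sub)]

def corner_control_points(block_w, block_h, sub=4):
    """LT, RT, LB, RB sub-blocks used as the four control points."""
    last_x, last_y = block_w // sub - 1, block_h // sub - 1
    return {"LT": (0, 0), "RT": (last_x, 0), "LB": (0, last_y), "RB": (last_x, last_y)}

# A 16x16 block yields sixteen 4x4 sub-blocks, as in FIG. 2a:
assert len(split_into_subblocks(16, 16)) == 16
```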
It can be seen that the affine prediction mode is an inter prediction mode with higher encoding efficiency. In the prior art, however, the condition for enabling the affine prediction mode is that the width and the height of the current block are each greater than or equal to preset values, for example width w ≥ 16 and height h ≥ 16, so that some current blocks (for example, a block with w = 32 and h = 8) cannot use the affine prediction mode to obtain higher encoding efficiency. To address this, the present application relaxes the enabling condition for affine prediction to require that the product of the width and height of the current block be greater than or equal to a preset threshold; for example, with the preset threshold set to 256, affine prediction is enabled when w × h ≥ 256. In other embodiments, the preset threshold may be set to other values; here, the preset threshold is defined to be greater than or equal to 256.
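As a hedged illustration of the two enabling conditions compared above (the threshold value 256 follows the example in the text; the function names are assumptions):

```python
# Sketch of the prior-art condition versus the relaxed condition proposed here.

def affine_enabled_old(w, h, min_side=16):
    return w >= min_side and h >= min_side   # prior art: each side large enough

def affine_enabled_relaxed(w, h, threshold=256):
    return w * h >= threshold                # proposed: area large enough

# A 32x8 block is rejected by the old rule but accepted by the relaxed one:
assert not affine_enabled_old(32, 8) and affine_enabled_relaxed(32, 8)
```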
To obtain the final motion vectors of the sub-blocks, the initial motion vectors of a first preset number of Control Points (CPs) must first be obtained. Referring to fig. 2a, in this embodiment the first preset number is four, i.e., the control points comprise a first, a second, a third, and a fourth control point, which are respectively the upper-Left (LT), upper-Right (RT), lower-Left (LB), and lower-Right (RB) sub-blocks of the current block. The first initial motion vector of the first control point is defined as the motion vector LTMV of the upper-left sub-block, the second initial motion vector of the second control point as the motion vector RTMV of the upper-right sub-block, the third initial motion vector of the third control point as the motion vector LBMV of the lower-left sub-block, and the fourth initial motion vector of the fourth control point as the motion vector RBMV of the lower-right sub-block. After the LTMV, RTMV, LBMV, and RBMV are obtained, the first motion vector v0 of the first control point, the second motion vector v1 of the second control point, and the third motion vector v2 of the third control point must be obtained, as described below.
In step S12, a first candidate list is constructed based on motion vectors of neighboring encoded blocks of the current block, wherein the first candidate list includes a first number of motion vector candidate groups, and the motion vector candidate groups include motion vectors of at least two control points.
The first motion vector v0 of the first control point, the second motion vector v1 of the second control point, and the third motion vector v2 of the third control point mentioned above are obtained in order to construct a plurality of motion vector candidate groups (CPMV groups), from which an optimal group is then screened out for accurate inter-frame prediction. A motion vector candidate group comprises either the first motion vector v0 and the second motion vector v1, or the first motion vector v0, the second motion vector v1, and the third motion vector v2 of the third control point. The 4-parameter affine prediction mode applies when a motion vector candidate group includes two motion vectors, and the 6-parameter affine prediction mode applies when it includes three.
Referring to fig. 2b in conjunction with fig. 2a, fig. 2b is a diagram illustrating the current block and its neighboring encoded blocks. A first number of motion vector candidate groups can be constructed from the neighboring encoded blocks (F, G, C, A, and D) of the current block to form the first candidate list, where the neighboring encoded blocks were encoded using the affine prediction mode. Specifically, the availability of the neighboring encoded blocks is checked, duplicate available neighboring encoded blocks are removed, and motion vector candidate groups are derived from the available neighboring encoded blocks. The specific procedure for constructing the first candidate list is well known in the prior art and is not described again here.
Step S13, in response to the first number being smaller than a second preset number, constructing a second candidate list from the temporal-spatial domain, wherein the second candidate list includes a second number of motion vector candidate sets, and a sum of the second number and the first number is smaller than or equal to the second preset number.
The second preset number is the maximum number of motion vector candidate groups that the affine prediction mode can store; for example, with a second preset number of 5, the affine prediction mode constructs a candidate list storing at most 5 motion vector candidate groups, from which the best group is then selected for inter-frame prediction.
After the first candidate list is constructed, this embodiment judges whether the first number is smaller than the second preset number. If so, there is still room for more motion vector candidate groups, and a second candidate list is constructed from the spatio-temporal domain. The second candidate list comprises a second number of motion vector candidate groups, further increasing the total number of candidate groups. For example, if the second preset number is 5 and 3 CPMV groups are constructed in step S12, then at most 2 CPMV groups are constructed in step S13, so that the candidate list of the affine prediction mode is filled as much as possible. If the sum of the second number and the first number is still less than the second preset number after the second candidate list is constructed, zero-vector candidate groups are filled in until the list is full. The specific process of constructing the second candidate list from the spatio-temporal domain is described below; a compact sketch of the overall filling logic follows this paragraph.
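A minimal sketch of this filling logic, assuming the construction of the two lists is abstracted into the helper callables shown (the names and candidate-group representation are illustrative, not the patent's):

```python
MAX_CANDIDATES = 5  # the "second preset number" in the text

def build_candidate_lists(from_neighbors, from_spatio_temporal):
    first_list = from_neighbors()                        # step S12
    second_list = []
    if len(first_list) < MAX_CANDIDATES:                 # step S13
        room = MAX_CANDIDATES - len(first_list)
        second_list = from_spatio_temporal()[:room]
    merged = first_list + second_list
    while len(merged) < MAX_CANDIDATES:                  # pad with zero-vector groups
        merged.append({"v0": (0, 0), "v1": (0, 0), "v2": (0, 0)})
    return merged
```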
In step S14, a final motion vector of each sub-block is determined using the first candidate list and the second candidate list.
After the first candidate list and the second candidate list are constructed, an optimal motion vector candidate group can be screened out according to the minimum rate-distortion cost criterion, and the final motion vector of each sub-block is computed by weighting the motion vectors of the control points in the optimal group; motion compensation of the current block is then completed and the inter-frame prediction value is obtained. That is, the final motion vector of each sub-block determines the reference region that the sub-block points to in the reference frame, and the pixel values of the sub-block are filled with the pixel values of that reference region, thereby obtaining the prediction value of the current block. The weighting formula is well known in the prior art and is not described here.
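The patent defers to the well-known weighting formula; purely for illustration, the sketch below uses the common 6-parameter affine form that derives a sub-block's motion vector from v0, v1, and v2 at the sub-block position. This form is an assumption here, not a formula quoted from this application:

```python
# Common 6-parameter affine derivation of a sub-block MV (assumed form).

def subblock_mv(v0, v1, v2, x, y, w, h):
    """Motion vector at sub-block position (x, y) inside a w x h current block."""
    mvx = v0[0] + (v1[0] - v0[0]) * x / w + (v2[0] - v0[0]) * y / h
    mvy = v0[1] + (v1[1] - v0[1]) * x / w + (v2[1] - v0[1]) * y / h
    return (mvx, mvy)
```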
The method and the device have the advantages that the condition of starting the affine prediction mode is relaxed, more current blocks can be coded by starting the high-efficiency affine prediction mode, and the coding efficiency is improved.
In some embodiments, referring to fig. 3, fig. 3 is a flowchart illustrating an embodiment of step S13 in fig. 1, that is, a second candidate list may be constructed from the space-time domain by:
step S21, obtaining a first initial motion vector of a first control point, a second initial motion vector of a second control point and a third initial motion vector of a third control point from the spatial domain adjacent coded blocks; and the spatial domain adjacent coded blocks are not coded by adopting an intra-frame prediction mode.
Continuing with fig. 2b, the spatial-domain neighboring encoded blocks of the current block are A, B, D, G, C, and F. The first initial motion vector LTMV takes the first available motion vector in the order A, B, D; for example, if neighboring block A has no available motion vector and neighboring block B does, the motion vector of neighboring block B is used as the LTMV. The second initial motion vector RTMV takes the first available motion vector in the order G, C. The third initial motion vector LBMV can only be obtained from neighboring block F.
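A sketch of this first-available selection order (mv_of is an assumed lookup that returns None when a neighboring block has no usable motion vector):

```python
def first_available(neighbors, mv_of):
    for n in neighbors:
        mv = mv_of(n)
        if mv is not None:
            return mv
    return None

def spatial_initial_mvs(mv_of):
    return {"LTMV": first_available(["A", "B", "D"], mv_of),
            "RTMV": first_available(["G", "C"], mv_of),
            "LBMV": first_available(["F"], mv_of)}
```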
In step S22, a fourth initial motion vector of the fourth control point is obtained from the time domain.
When the fourth initial motion vector RBMV is obtained, it is first judged whether the co-located sub-block of the RB sub-block has an available motion vector; if so, the motion vector of the co-located sub-block is scaled and used as the RBMV. Specifically, referring to fig. 4, fig. 4 is a schematic diagram illustrating the scaling of motion vectors. In fig. 4, the RB sub-block is the lower-right sub-block of the current block in the current picture; the co-located picture refers to a previous or subsequent frame of the current picture; the co-located block refers to the block in the co-located picture having the same coordinates as the current block; and the co-located sub-block is the lower-right sub-block of the co-located block, having the same coordinates as the RB sub-block. td denotes the POC distance between the current picture and the reference picture of the current picture, and tb denotes the POC distance between the co-located picture and the reference picture of the co-located picture. POC refers to the Picture Order Count, and a POC distance is the difference between two POC values.
If an available motion vector col_RB_scuMV exists in the co-located sub-block of the fourth control point (the RB sub-block), the fourth initial motion vector RBMV is calculated according to the following equation (1), applied to both components of the motion vector:
RBMV = (td / tb) × col_RB_scuMV    (1)
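A hedged sketch of this POC-distance scaling; the same scaling, with the respective co-located sub-block motion vectors, is reused for formulas (2) to (4) below. Real codecs implement it with fixed-point arithmetic and clipping, which is omitted here:

```python
def scale_temporal_mv(col_mv, td, tb):
    """Scale a co-located sub-block MV by the ratio of POC distances td/tb."""
    if tb == 0:
        return None  # degenerate case, assumed unusable
    return (col_mv[0] * td / tb, col_mv[1] * td / tb)
```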
in step S23, a second candidate list is constructed according to the first initial motion vector, the third initial motion vector and the fourth initial motion vector.
The motion vector candidate groups in the second candidate list comprise the motion vectors of two (v0 and v1) or three (v0, v1, and v2) control points, which are calculated from two or three of the four initial motion vectors LTMV, RTMV, LBMV, and RBMV of the four control points, respectively. The present application provides 6 formula groups, listed in Table 1; a formula group can be used only when all initial motion vectors it involves exist and their corresponding reference frame indexes are identical, where the reference frame index is the POC of the reference frame.
TABLE 1: Formula groups for constructing the second candidate list
Group 0: v0 = LTMV, v1 = RTMV, v2 = LBMV
Group 1: v0 = LTMV, v1 = RTMV, v2 = RBMV + LTMV - RTMV
Group 2: v0 = LTMV, v1 = RBMV + LTMV - LBMV, v2 = LBMV
Group 3: v0 = RTMV + LBMV - RBMV, v1 = RTMV, v2 = LBMV
Group 4: v0 = LTMV, v1 = RTMV
Group 5: v0 = LTMV, v1 = (v1x, v1y)
wherein v1x = LTMVx + ((LBMVx - LTMVx) << (Log2[w] - Log2[h])) and v1y = LTMVy - ((LBMVy - LTMVy) << (Log2[w] - Log2[h])); w and h denote the width and height of the current block, respectively, and the subscripts x and y denote the first and second components of the corresponding motion vector.
The constructible CPMV groups are filled into the second candidate list in the order of group 0 to group 5, and filling stops once the second number equals the second preset number minus the first number. If, after groups 0 to 5 have been tried, the sum of the second number and the first number is still less than the second preset number, the list is filled up with zero-vector groups.
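Under the assumption that availability plus matching reference indexes is abstracted into a predicate has(), the filling order of Table 1 can be sketched as follows (group 5, whose v1 is derived from LTMV and LBMV with a log2 shift, is omitted for brevity):

```python
def vsub(a, b): return (a[0] - b[0], a[1] - b[1])
def vadd(a, b): return (a[0] + b[0], a[1] + b[1])

def second_list_groups(LTMV, RTMV, LBMV, RBMV, has, room):
    # has(*mvs) is an assumed predicate: all MVs exist and share a reference index.
    groups = []
    if has(LTMV, RTMV, LBMV):
        groups.append((LTMV, RTMV, LBMV))                          # group 0
    if has(LTMV, RTMV, RBMV):
        groups.append((LTMV, RTMV, vsub(vadd(RBMV, LTMV), RTMV)))  # group 1
    if has(LTMV, LBMV, RBMV):
        groups.append((LTMV, vsub(vadd(RBMV, LTMV), LBMV), LBMV))  # group 2
    if has(RTMV, LBMV, RBMV):
        groups.append((vsub(vadd(RTMV, LBMV), RBMV), RTMV, LBMV))  # group 3
    if has(LTMV, RTMV):
        groups.append((LTMV, RTMV))                                # group 4
    return groups[:room]   # stop once the list would exceed the preset number
```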
In this embodiment, the second candidate list is constructed from the spatio-temporal domain, yielding more motion vector candidate groups; further screening can therefore be performed among more non-zero candidate groups, the final motion vector of each sub-block is more accurate, and the inter-frame prediction result is more accurate.
In some embodiments, referring to fig. 5, fig. 5 is a flowchart illustrating an embodiment of a step before step S22 in fig. 3, that is, before the step of obtaining the fourth initial motion vector of the fourth control point from the time domain, the method may include the following steps:
in step S31, it is determined whether or not the first initial motion vector is acquired from the spatial domain adjacent coded block.
Referring to fig. 2b, in step S21 the first initial motion vector LTMV takes the first available motion vector in the order A, B, D. Acquisition of the LTMV from the spatial domain may fail, so it is first judged whether the LTMV was acquired successfully. If the LTMV is obtained from the spatial-domain neighboring encoded blocks, step S22 is performed directly, as shown in step S33 in fig. 5.
In step S32, if not, a first initial motion vector is acquired from the time domain.
If the LTMV cannot be obtained from the spatial domain neighboring coded blocks, it is obtained from the time domain.
Specifically, referring to fig. 6 in conjunction with fig. 4 and fig. 5, fig. 6 is a schematic flowchart of an embodiment of step S32 in fig. 5, that is, a first initial motion vector may be obtained from the time domain through the following steps:
in step S321, it is determined whether a motion vector of the first parity sub-block in the first parity image of the first control point exists.
In step S322, if there is, the motion vector of the first co-located sub-block is scaled to be the first initial motion vector.
When the first initial motion vector LTMV is obtained from the temporal domain, it is first judged whether the co-located sub-block of the LT sub-block has an available motion vector; if so, the motion vector of the co-located sub-block is scaled and used as the LTMV. The first co-located sub-block is the LT sub-block of the co-located block of the current block. The motion vector col_lt_scuMV of the first co-located sub-block is scaled by the following formula (2), with the parameters defined as in fig. 4 and step S22 above:
LTMV = (td / tb) × col_lt_scuMV    (2)
of course, if the motion vector of the first co-located sub-block does not exist, step S22 is directly performed, as shown in step S323 in fig. 6.
In this embodiment, when acquisition of the first initial motion vector from the spatial domain fails, acquisition from the temporal domain is attempted, so that the first control point is more likely to obtain a first initial motion vector; more motion vector candidate groups can then be constructed for further screening, which further improves coding efficiency.
In some embodiments, referring to fig. 7, fig. 7 is a flowchart illustrating a step before step S22 in fig. 3 according to another embodiment, that is, before the step of obtaining the fourth initial motion vector of the fourth control point from the time domain, the method may further include the following steps:
in step S41, it is determined whether or not the second initial motion vector is acquired from the spatial domain adjacent coded block.
Referring to fig. 2b, in step S21 the second initial motion vector RTMV takes the first available motion vector in the order G, C. Acquisition of the RTMV from the spatial domain may fail, so it is first judged whether the RTMV was acquired successfully. If the RTMV is obtained from the spatial-domain neighboring encoded blocks, step S22 is performed directly, as shown in step S43 in fig. 7.
In step S42, if not, a second initial motion vector is acquired from the time domain.
If the RTMV cannot be obtained from the spatial domain neighboring coded blocks, the RTMV is obtained from the time domain.
Specifically, referring to fig. 8 in conjunction with fig. 4 and fig. 7, fig. 8 is a flowchart illustrating an embodiment of step S42 in fig. 7, that is, a second initial motion vector may be obtained from the time domain by the following steps:
in step S421, it is determined whether the motion vector of the second parity sub-block in the second parity image of the second control point exists.
In step S422, if the motion vector of the second co-located sub-block exists, the scaled motion vector of the second co-located sub-block is used as a second initial motion vector.
When the second initial motion vector RTMV is obtained from the temporal domain, it is first judged whether the co-located sub-block of the RT sub-block has an available motion vector; if so, the motion vector of the co-located sub-block is scaled and used as the RTMV. The second co-located sub-block is the RT sub-block of the co-located block of the current block. The motion vector col_rt_scuMV of the second co-located sub-block is scaled by the following formula (3), with the parameters defined as in fig. 4 and step S22 above:
RTMV = (td / tb) × col_rt_scuMV    (3)
of course, if the motion vector of the second co-located sub-block does not exist, step S22 is directly performed, as shown in step S423 in fig. 8.
In this embodiment, when acquisition of the second initial motion vector from the spatial domain fails, acquisition from the temporal domain is attempted, so that the second control point is more likely to obtain a second initial motion vector; more motion vector candidate groups can then be constructed for further screening, which further improves coding efficiency.
In some embodiments, referring to fig. 9, fig. 9 is a flowchart illustrating a step before step S22 in fig. 3 according to another embodiment, that is, before the step of obtaining the fourth initial motion vector of the fourth control point from the time domain, the method may further include the following steps:
in step S51, it is determined whether or not the third initial motion vector is acquired from the spatial domain adjacent coded block.
Referring to fig. 2b, in step S21 the third initial motion vector LBMV can only be obtained from neighboring block F. Acquisition of the LBMV from the spatial domain may fail, so it is first judged whether the LBMV was acquired successfully. If the LBMV is acquired from the spatial-domain neighboring encoded block, step S22 is performed directly, as shown in step S53 in fig. 9.
In step S52, if not, a third initial motion vector is acquired from the time domain.
If the LBMV cannot be obtained from the spatial domain neighboring coded blocks, the LBMV is obtained from the time domain.
Specifically, referring to fig. 10 in conjunction with fig. 4 and fig. 9, fig. 10 is a flowchart illustrating an embodiment of step S52 in fig. 9, that is, a third initial motion vector may be obtained from the time domain through the following steps:
in step S521, it is determined whether a motion vector of a third parity sub-block in a third parity image of a third control point exists.
In step S522, if the motion vector of the third co-located sub-block exists, the scaled motion vector of the third co-located sub-block is used as a third initial motion vector.
When the third initial motion vector LBMV is obtained from the temporal domain, it is first judged whether the co-located sub-block of the LB sub-block has an available motion vector; if so, the motion vector of the co-located sub-block is scaled and used as the LBMV. The third co-located sub-block is the LB sub-block of the co-located block of the current block. The motion vector col_lb_scuMV of the third co-located sub-block is scaled by the following formula (4), with the parameters defined as in fig. 4 and step S22 above:
LBMV = (td / tb) × col_lb_scuMV    (4)
of course, if the motion vector of the third co-located sub-block does not exist, step S22 is directly performed, as shown in step S523 in fig. 10.
In this embodiment, when acquisition of the third initial motion vector from the spatial domain fails, it is obtained from the temporal domain, so that the third control point is more likely to obtain a third initial motion vector; more motion vector candidate groups can then be constructed for further screening, thereby further improving coding efficiency.
In some embodiments, referring to fig. 11, fig. 11 is a flowchart illustrating an embodiment of a step before step S14 in fig. 1, that is, before the step of determining the final motion vector of each sub-block by using the first candidate list and the second candidate list, the following steps may be included:
in step S61, a motion vector candidate group including only the first motion vector and the second motion vector is filtered out.
A motion vector candidate group includes the motion vectors of two (v0 and v1) or three (v0, v1, and v2) control points, which are tried for the 4-parameter and 6-parameter affine modes, respectively. This embodiment expands each motion vector candidate group that includes only the first motion vector v0 and the second motion vector v1 so that the 6-parameter affine mode also applies, i.e., a v2 is constructed for the groups that include only v0 and v1; to do so, these groups must first be filtered out.
In step S62, a third motion vector is constructed for the screened candidate set of motion vectors.
After the motion vector candidate groups including only v0 and v1 are screened out, a third motion vector v2 is constructed for the CPMV groups, and the specific construction process is described below.
In this embodiment, a third motion vector is constructed for each motion vector candidate group that contains only two control-point motion vectors, so that the final motion vector of each sub-block of the current block to be encoded is obtained through the 6-parameter affine mode; the final motion vectors are therefore more accurate, further improving coding efficiency.
In some embodiments, referring to fig. 12, fig. 12 is a flowchart illustrating an embodiment of step S62 in fig. 11, that is, a third motion vector can be constructed for the selected motion vector candidate set by the following steps:
in step S71, it is determined whether a motion vector of the left adjacent encoded sub-block of the third control point exists.
Referring to fig. 4, when the motion vector candidate set only includes v0 and v1, it is first determined whether the motion vector of the left neighboring encoded sub-block (F sub-block) of the third control point exists.
In step S72, if the motion vector of the left neighboring encoded sub-block of the third control point exists, the motion vector of the left neighboring encoded sub-block is taken as the third motion vector.
If the motion vector of the F sub-block in fig. 4 exists, it is taken as the third motion vector v2.
In step S73, if the motion vector of the left neighboring encoded sub-block of the third control point does not exist, it is determined whether the motion vector of the third collocated sub-block in the third collocated image of the third control point exists.
If the motion vector of the F sub-block in fig. 4 does not exist, it is determined whether the motion vector of the third co-located sub-block in the third co-located image of the third control point exists, which can be referred to the above step S521.
In step S74, if the motion vector of the third co-located sub-block exists in the third co-located image of the third control point, the motion vector of the third co-located sub-block is scaled to be the third motion vector.
If the motion vector of the third co-located sub-block exists, the scaled motion vector is used as the third motion vector v2; for the scaling process, see step S522 above.
In step S75, if the motion vector of the third collocated sub-block in the third collocated image of the third control point does not exist, a third motion vector is determined according to the first motion vector and the second motion vector.
If the motion vector of the above third co-located sub-block does not exist, the third motion vector v2 is determined from v0 and v1; specifically, the first component v2x and the second component v2y of the third motion vector are calculated using the following formula (5):
v2x = v0x - m × (v1y - v0y)
v2y = v0y + m × (v1x - v0x)    (5)
wherein v0x and v0y are respectively the first and second components of the first motion vector, v1x and v1y are respectively the first and second components of the second motion vector, and m is the ratio of the height to the width of the current block, i.e., m = h / w.
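Steps S71 to S75, including formula (5) as reconstructed above, can be sketched as follows; mv_left_F and mv_colocated_lb are assumed lookups that return None when the respective motion vector does not exist:

```python
def construct_v2(v0, v1, w, h, mv_left_F, mv_colocated_lb, td, tb):
    mv = mv_left_F()                 # S71/S72: left neighbour F of the LB corner
    if mv is not None:
        return mv
    col = mv_colocated_lb()          # S73/S74: temporal co-located sub-block
    if col is not None and tb != 0:
        return (col[0] * td / tb, col[1] * td / tb)
    m = h / w                        # S75: derive from v0 and v1, formula (5)
    return (v0[0] - m * (v1[1] - v0[1]),
            v0[1] + m * (v1[0] - v0[0]))
```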
In this embodiment, a third motion vector is constructed for each motion vector candidate group that contains only two control-point motion vectors, so that the final motion vector of each sub-block of the current block to be encoded is obtained through the 6-parameter affine mode; the final motion vectors are therefore more accurate, further improving coding efficiency.
In addition, the present application also provides a video encoding method, please refer to fig. 13, where fig. 13 is a schematic flowchart of an embodiment of the video encoding method of the present application, and the video encoding method includes the following steps:
in step S81, a final motion vector of each sub-block in the current block is obtained.
In this embodiment, the final motion vector of each sub-block of the current block is obtained by using the inter-frame prediction method described in any of the above embodiments, which may be referred to in detail in any of the above embodiments, and is not described herein again.
In step S82, the pixel values of the current block are obtained based on the final motion vector of each sub-block to encode the current block.
Specifically, after the final motion vector of each sub-block is obtained, motion compensation is performed on each sub-block and the pixel values of each sub-block are obtained from its reference region, yielding the pixel values of the current block; the current block is then encoded according to these pixel values.
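A minimal sketch of this per-sub-block motion compensation, assuming integer-pel motion vectors and a 2-D array of samples (fractional-pel interpolation and boundary clipping are omitted for brevity):

```python
def motion_compensate(ref_frame, subblocks, sub=4):
    """Copy each sub-block's prediction from the reference region its final MV points to."""
    pred = {}
    for (x, y), (mvx, mvy) in subblocks.items():          # sub-block index -> final MV
        px, py = x * sub + int(mvx), y * sub + int(mvy)   # integer-pel position
        pred[(x, y)] = [row[px:px + sub] for row in ref_frame[py:py + sub]]
    return pred
```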
This embodiment relaxes the condition for enabling the affine prediction mode, allows more current blocks to be encoded with the efficient affine prediction mode, and improves coding efficiency. Moreover, when acquisition of an initial motion vector from the spatial domain fails, the initial motion vector is acquired from the temporal domain, so that a control point is more likely to obtain an initial motion vector and more motion vector candidate groups can be constructed for further screening, which further improves coding efficiency. Furthermore, a third motion vector is constructed for each motion vector candidate group that contains only 2 control-point motion vectors, so that the final motion vector of each sub-block of the current block to be encoded is obtained through the 6-parameter affine mode; the final motion vectors are therefore more accurate, further improving coding efficiency.
In addition, the present application further provides an inter-frame prediction apparatus, shown in fig. 14, which is a schematic structural diagram of an embodiment of the inter-frame prediction apparatus. The inter-prediction apparatus 1400 includes a sub-block partitioning module 1410, a candidate list construction module 1420, and a motion vector determination module 1430.
The subblock dividing module 1410 is configured to divide the current block into a plurality of subblocks with the same size, and determine a first preset number of subblocks in the plurality of subblocks as control points; wherein the product of the width and the height of the current block is greater than or equal to a preset threshold.
The candidate list construction module 1420 is configured to construct a first candidate list from the motion vectors of neighboring encoded blocks of the current block, wherein the first candidate list comprises a first number of motion vector candidate groups, each comprising the motion vectors of at least two control points.
Specifically, the candidate list construction module 1420 checks the availability of the neighboring encoded blocks, removes duplicate available neighboring encoded blocks, and derives motion vector candidate groups from the available neighboring encoded blocks to construct the first candidate list.
When the first number is less than a second preset number, the candidate list construction module 1420 is further configured to construct a second candidate list from the spatio-temporal domain, wherein the second candidate list includes a second number of motion vector candidate groups, and the sum of the second number and the first number is less than or equal to the second preset number.
Specifically, the candidate list construction module 1420 first obtains a first initial motion vector of the first control point, a second initial motion vector of the second control point, and a third initial motion vector of the third control point from the spatial-domain neighboring encoded blocks, where the spatial-domain neighboring encoded blocks are not encoded in an intra-frame prediction mode. When the first initial motion vector cannot be obtained from the spatial-domain neighboring encoded blocks, the candidate list construction module 1420 obtains it from the temporal domain; specifically, the motion vector of the first co-located sub-block in the first co-located image of the first control point is scaled and used as the first initial motion vector. When the second initial motion vector cannot be obtained from the spatial-domain neighboring encoded blocks, the candidate list construction module 1420 obtains it from the temporal domain; specifically, the motion vector of the second co-located sub-block in the second co-located image of the second control point is scaled and used as the second initial motion vector. When the third initial motion vector cannot be obtained from the spatial-domain neighboring encoded blocks, the candidate list construction module 1420 obtains it from the temporal domain; specifically, the motion vector of the third co-located sub-block in the third co-located image of the third control point is scaled and used as the third initial motion vector.
Second, the candidate list construction module 1420 obtains a fourth initial motion vector of the fourth control point from the temporal domain; specifically, the motion vector of the co-located sub-block of the fourth control point is scaled and used as the fourth initial motion vector.
Finally, the candidate list construction module 1420 constructs the second candidate list from the first initial motion vector, the third initial motion vector, and the fourth initial motion vector.
The motion vector determination module 1430 is configured to determine the final motion vector of each sub-block using the first candidate list and the second candidate list. Before that, the motion vector determination module 1430 is further configured to filter out, from the first candidate list and the second candidate list, the motion vector candidate groups that include only the first motion vector and the second motion vector, and to construct a third motion vector for each filtered-out group.
Specifically, the motion vector determination module 1430 judges whether the motion vector of the left neighboring encoded sub-block of the third control point exists and, if so, takes it as the third motion vector. If the motion vector of the left neighboring encoded sub-block of the third control point does not exist, the motion vector determination module 1430 judges whether the motion vector of the third co-located sub-block in the third co-located image of the third control point exists and, if so, scales it and uses it as the third motion vector. If the motion vector of the third co-located sub-block does not exist, the motion vector determination module 1430 determines the third motion vector from the first motion vector and the second motion vector.
Then, the motion vector determination module 1430 screens out an optimal motion vector candidate group from the first candidate list and the second candidate list according to the minimum rate-distortion cost criterion, and calculates the final motion vector of each sub-block by weighting the motion vectors of the control points in the optimal motion vector candidate group, thereby completing inter-frame prediction. This embodiment can improve the encoding efficiency of inter prediction.
In addition, the present application further provides a video encoding apparatus, please refer to fig. 15, where fig. 15 is a schematic structural diagram of an embodiment of the video encoding apparatus. The video encoding apparatus 1500 includes a motion vector obtaining module 1510 and a video encoding module 1520, the motion vector obtaining module 1510 is configured to obtain a final motion vector of each sub-block of the current block; the video encoding module 1520 is configured to encode the current block using the final motion vector of each sub-block; the final motion vector of each sub-block is obtained by using the inter-frame prediction apparatus in the foregoing embodiment, which may specifically refer to the foregoing embodiment and is not described herein again. This embodiment can improve the encoding efficiency of inter prediction.
Referring to fig. 16, fig. 16 is a schematic structural diagram of an embodiment of an electronic device 1600 according to the present application. The electronic device 1600 includes a memory 1610 and a processor 1620 coupled to each other, where the memory 1610 stores program instructions and the processor 1620 is configured to execute the program instructions to implement the inter-frame prediction method or the video coding method according to the above embodiments. Specifically, the electronic device may include, but is not limited to, a server, a microcomputer, a tablet computer, a mobile phone, and the like; no limitation is imposed here.
Specifically, the processor 1620 is configured to control itself and the memory 1610 to implement the steps in the inter-frame prediction method according to any of the above embodiments or implement the steps in the video coding method according to any of the above embodiments, which may specifically refer to the above embodiments and is not described herein again. Processor 1620 may also be referred to as a CPU (Central Processing Unit). Processor 1620 may be an integrated circuit chip having signal processing capabilities. The Processor 1620 may also be a general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. In addition, the processor 1620 may be commonly implemented by a plurality of integrated circuit chips.
This embodiment can improve the encoding efficiency of inter prediction.
Referring to fig. 17, fig. 17 is a schematic structural diagram of an embodiment of a computer-readable storage medium of the present application, where the storage medium 1700 has stored thereon program instructions 1711, and the program instructions 1711 can be executed by a processor to implement the inter-frame prediction method according to the above embodiments or implement the video coding method according to the above embodiments. For details, reference may be made to the above embodiments, which are not described herein again. This embodiment can improve the encoding efficiency of inter prediction.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (15)

1. An inter-frame prediction method, comprising:
dividing a current block into a plurality of sub-blocks with the same size, and determining a first preset number of sub-blocks in the sub-blocks as control points; wherein the product of the width and the height of the current block is greater than or equal to a preset threshold; the control points comprise a first control point, a second control point, a third control point and a fourth control point which are respectively an upper left sub-block, an upper right sub-block, a lower left sub-block and a lower right sub-block of the current block;
constructing a first candidate list from motion vectors of neighboring encoded blocks of the current block, wherein the first candidate list comprises a first number of motion vector candidate sets, each comprising motion vectors of at least two of the control points; wherein a motion vector candidate set includes a first motion vector of the first control point and a second motion vector of the second control point, or includes the first motion vector, the second motion vector, and a third motion vector of the third control point;
constructing a second candidate list from a spatio-temporal domain in response to the first number being less than a second preset number, wherein the second candidate list comprises a second number of the motion vector candidate sets and a sum of the second number and the first number is less than or equal to the second preset number;
filtering out the motion vector candidate group including only the first motion vector and the second motion vector, and constructing the third motion vector for the filtered out motion vector candidate group;
determining a final motion vector for each of the sub-blocks using the first candidate list and the second candidate list.
2. The inter-prediction method of claim 1, wherein the step of constructing the second candidate list from a space-time domain comprises:
acquiring a first initial motion vector of the first control point, a second initial motion vector of the second control point and a third initial motion vector of the third control point from the spatial domain adjacent coded blocks; wherein the spatial domain adjacent coded blocks are not coded by adopting an intra-frame prediction mode;
acquiring a fourth initial motion vector of the fourth control point from the time domain;
constructing the second candidate list according to the first initial motion vector, the third initial motion vector, and the fourth initial motion vector.
3. The inter-prediction method according to claim 2, wherein the step of obtaining the fourth initial motion vector of the fourth control point from the time domain is preceded by:
judging whether the first initial motion vector is obtained from the space domain adjacent coded block or not;
and if not, acquiring the first initial motion vector from a time domain.
4. The inter-prediction method of claim 3, wherein the step of obtaining the first initial motion vector from the time domain comprises:
judging whether the motion vector of a first co-located sub-block in a first co-located image of the first control point exists or not;
and if so, scaling the motion vector of the first co-located sub-block to be the first initial motion vector.
5. The inter-prediction method according to claim 2, wherein the step of obtaining the fourth initial motion vector of the fourth control point from the time domain is preceded by:
judging whether the second initial motion vector is obtained from the space-domain adjacent coded block or not;
and if not, acquiring the second initial motion vector from the time domain.
6. The inter-prediction method of claim 5, wherein the step of acquiring the second initial motion vector from the time domain comprises:
determining whether a motion vector of a second co-located sub-block of the second control point in a second co-located image exists;
and if so, scaling the motion vector of the second co-located sub-block to obtain the second initial motion vector.
7. The inter-prediction method of claim 2, wherein before the step of acquiring the fourth initial motion vector of the fourth control point from the time domain, the method further comprises:
determining whether the third initial motion vector can be acquired from the spatially adjacent coded blocks;
and if not, acquiring the third initial motion vector from the time domain.
8. The inter-prediction method of claim 7, wherein the step of acquiring the third initial motion vector from the time domain comprises:
determining whether a motion vector of a third co-located sub-block of the third control point in a third co-located image exists;
and if so, scaling the motion vector of the third co-located sub-block to obtain the third initial motion vector.
9. The inter-prediction method of claim 1, wherein the step of constructing the third motion vector for each selected motion vector candidate group comprises:
determining whether a motion vector of a left adjacent coded sub-block of the third control point exists;
if the motion vector of the left adjacent coded sub-block exists, taking the motion vector of the left adjacent coded sub-block as the third motion vector;
if the motion vector of the left adjacent coded sub-block does not exist, determining whether a motion vector of a third co-located sub-block of the third control point in a third co-located image exists;
if the motion vector of the third co-located sub-block exists, scaling the motion vector of the third co-located sub-block to obtain the third motion vector;
and if the motion vector of the third co-located sub-block does not exist, determining the third motion vector according to the first motion vector and the second motion vector.
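The claim-9 fallback chain for the third motion vector can be sketched as below, with the final derivation step passed in as a callable (the claim-10 formula implemented after the next claim can serve as derive_fn); left_mv, colocated_mv, and scale_fn are hypothetical placeholders for the availability checks of claim 9:

```python
def construct_third_mv(left_mv, colocated_mv, scale_fn, derive_fn, v0, v1):
    """Left adjacent coded sub-block -> scaled co-located MV -> derived from
    the first and second control-point motion vectors."""
    if left_mv is not None:
        return left_mv                  # first choice: left neighbor's MV
    if colocated_mv is not None:
        return scale_fn(colocated_mv)   # second choice: scaled co-located MV
    return derive_fn(v0, v1)            # last resort: derive from v0 and v1
```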
10. The inter-prediction method of claim 9, wherein the step of determining the third motion vector according to the first motion vector and the second motion vector comprises:
calculating a first component v2x and a second component v2y of the third motion vector using the following formulas:

v2x = v0x − m × (v1y − v0y)
v2y = v0y + m × (v1x − v0x)

wherein v0x and v0y are respectively a first component and a second component of the first motion vector, v1x and v1y are respectively a first component and a second component of the second motion vector, and m is the ratio of the height to the width of the current block.
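A direct transcription of the formulas above; note that the published formula image was not recoverable, so the form used here is the standard 4-parameter affine derivation with m = h / w, stated as an assumption:

```python
def derive_third_mv(v0, v1, w, h):
    """Derive the lower-left control-point MV v2 from the upper-left MV v0
    and the upper-right MV v1. v0 and v1 are (x, y) tuples; w and h are the
    current block's width and height in pixels."""
    m = h / w                            # assumed meaning of m in claim 10
    v2x = v0[0] - m * (v1[1] - v0[1])
    v2y = v0[1] + m * (v1[0] - v0[0])
    return (v2x, v2y)
```

For example, a 16×8 block with v0 = (4, 2) and v1 = (6, 2) gives m = 0.5 and v2 = (4.0, 3.0).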
11. The inter-prediction method of claim 1, wherein the preset threshold is greater than or equal to 256.
12. The inter-prediction method of claim 1, wherein the second preset number is a maximum number of motion vector candidate groups that can be stored in an affine prediction mode.
13. A video encoding method, comprising:
obtaining a final motion vector of each sub-block in a current block, wherein the final motion vector of each sub-block is obtained by using the inter-prediction method according to any one of claims 1 to 12; and
obtaining pixel values of the current block based on the final motion vector of each sub-block to encode the current block.
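A sketch of the motion-compensation step implied by claim 13, assuming integer-pel motion vectors, 4×4 sub-blocks, and in-bounds reference positions for brevity; a real encoder would interpolate fractional positions and follow with residual coding:

```python
import numpy as np

def motion_compensate(ref_frame, bx, by, bw, bh, sub_mvs, sub=4):
    """Build the current block's predicted pixels: for each sub-block, copy
    the reference-frame region displaced by that sub-block's final MV.
    sub_mvs maps (column, row) sub-block coordinates to (mvx, mvy)."""
    pred = np.zeros((bh, bw), dtype=ref_frame.dtype)
    for (sx, sy), (mvx, mvy) in sub_mvs.items():
        y0, x0 = by + sy * sub + mvy, bx + sx * sub + mvx
        pred[sy * sub:(sy + 1) * sub, sx * sub:(sx + 1) * sub] = \
            ref_frame[y0:y0 + sub, x0:x0 + sub]
    return pred
```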
14. An electronic device, comprising a memory and a processor coupled to each other, wherein the memory stores program instructions, and the processor is configured to execute the program instructions to implement the inter-prediction method according to any one of claims 1 to 12 or the video encoding method according to claim 13.
15. A computer-readable storage medium, characterized in that the storage medium stores program instructions executable by a processor to implement the inter-prediction method according to any one of claims 1 to 12 or the video encoding method according to claim 13.
CN202010852587.4A 2020-08-21 2020-08-21 Inter-frame prediction method, video coding method, electronic device and storage medium Active CN112055202B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010852587.4A CN112055202B (en) 2020-08-21 2020-08-21 Inter-frame prediction method, video coding method, electronic device and storage medium


Publications (2)

Publication Number Publication Date
CN112055202A CN112055202A (en) 2020-12-08
CN112055202B (en) 2021-11-16

Family

ID=73599657

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010852587.4A Active CN112055202B (en) 2020-08-21 2020-08-21 Inter-frame prediction method, video coding method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN112055202B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110876065A (en) * 2018-08-29 2020-03-10 华为技术有限公司 Construction method of candidate motion information list, and inter-frame prediction method and device
CN111066324A (en) * 2017-08-03 2020-04-24 Lg 电子株式会社 Method and apparatus for processing video signal using affine prediction
CN111432219A (en) * 2019-01-09 2020-07-17 华为技术有限公司 Inter-frame prediction method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11736713B2 (en) * 2018-11-14 2023-08-22 Tencent America LLC Constraint on affine model motion vector


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Huanbang Chen, "Affine SKIP and MERGE Modes for Video Coding," 2015 IEEE 17th International Workshop on Multimedia Signal Processing (MMSP), 2015-12-03, full text. *
Huanbang Chen, "CE4: Affine motion compensation with fixed sub-block size (Test 1.1)," Joint Video Experts Team (JVET), 2018-07-03, full text. *
Zhou Yun et al., "Research on Key Technologies of Inter-frame Prediction in H.266/VVC Video Coding," Radio and Television Technology, No. 7, 2020-07-15, full text. *

Also Published As

Publication number Publication date
CN112055202A (en) 2020-12-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant