CN112055203B - Inter-frame prediction method, video coding method and related devices - Google Patents

Inter-frame prediction method, video coding method and related devices

Info

Publication number
CN112055203B
CN112055203B
Authority
CN
China
Prior art keywords
motion information
block
sub-block
candidate
list
Prior art date
Legal status
Active
Application number
CN202010853191.1A
Other languages
Chinese (zh)
Other versions
CN112055203A (en)
Inventor
陈瑶
粘春湄
张雪
江东
方瑞东
林聚财
殷俊
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202010853191.1A
Publication of CN112055203A
Application granted
Publication of CN112055203B
Legal status: Active


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/103 Adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding: selection of coding mode or of prediction mode
    • H04N19/176 Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/513 Predictive coding involving temporal prediction: motion estimation or motion compensation; processing of motion vectors


Abstract

The application discloses an inter-frame prediction method, a video coding method, and related devices. The method comprises: determining a weight array of the current block in each original prediction mode; dividing the current block into a plurality of first sub-blocks, and constructing a unidirectional motion information candidate list of the current block based on the temporal motion information of at least two first sub-blocks; calculating coding costs based on the weight array, and selecting a plurality of groups of motion information with the smallest costs from the unidirectional motion information candidate list as a plurality of groups of first candidate motion information; and selecting a final prediction mode from the original prediction modes based on the plurality of groups of first candidate motion information. With the technical scheme provided by the application, a prediction mode with better prediction accuracy can be selected, further improving the accuracy of inter-frame prediction.

Description

Inter-frame prediction method, video coding method and related devices
Technical Field
The present disclosure relates to the field of video encoding and decoding technologies, and in particular, to an inter-frame prediction method, a video encoding method, and related devices.
Background
Because the amount of video image data is large, the data usually needs to be encoded and compressed. The compressed data is called a video code stream, which is transmitted to a user terminal over a wired or wireless network and then decoded for viewing.
The overall video coding flow comprises prediction, transform, quantization, entropy coding and other processes, where prediction is divided into two parts: intra prediction and inter prediction. Inter prediction uses the temporal correlation between image frames to compress images. During long-term research and development, the inventors of the present application found that current inter-frame prediction methods have certain limitations, which affect the accuracy of inter-frame prediction to some extent.
Disclosure of Invention
The main technical problem to be solved by the present application is to provide an inter-frame prediction method, a video coding method and related devices that can select a prediction mode with better prediction accuracy, thereby improving the accuracy of inter-frame prediction.
In order to solve the technical problems, one technical scheme adopted by the application is as follows: there is provided an inter prediction method, the method comprising:
determining a weight array of the current block in each original prediction mode;
dividing the current block into a plurality of first sub-blocks, and constructing a unidirectional motion information candidate list of the current block based on time domain motion information of at least two first sub-blocks;
calculating coding cost based on the weight array, and selecting a plurality of groups of motion information with the minimum cost from the unidirectional motion information candidate list as a plurality of groups of first candidate motion information;
and selecting a final prediction mode from the original prediction modes based on the plurality of groups of first candidate motion information.
In order to solve the technical problems, another technical scheme adopted by the application is as follows: there is provided a video encoding method, the method comprising:
determining a final prediction mode of the current block based on the inter prediction method as described above;
and determining a final prediction value of the current block based on the final prediction mode, and encoding the current block based on the final prediction value.
In order to solve the technical problem, another technical scheme adopted by the application is as follows: a video encoding system is provided, the video encoding system comprising a memory and a processor; the memory has stored therein a computer program for execution by the processor to perform the steps of the method as described above.
In order to solve the technical problem, another technical scheme adopted by the application is as follows: there is provided a readable storage medium storing a computer program executable by a processor for implementing a method as described above.
The beneficial effects of the present application are as follows: compared with the prior art, constructing the unidirectional motion information candidate list based on at least two pieces of temporal motion information of the current block yields a list that reflects the motion state of the current block more accurately. Coding costs are then calculated based on the weight array, the groups of motion information with the smallest coding costs are selected from the unidirectional motion information candidate list as the groups of first candidate motion information, and a final prediction mode is selected from the original prediction modes based on these groups of first candidate motion information, thereby further improving the accuracy of inter-frame prediction.
Drawings
FIG. 1 is a schematic diagram of a weight array in an embodiment of an inter prediction method according to the present application;
FIG. 2 is a schematic diagram of the angles supported by the AWP in an embodiment of an inter prediction method of the present application;
FIG. 3 is a schematic diagram of an angle partition supported by an AWP according to another embodiment of an inter prediction method of the present application;
FIG. 4 is a schematic diagram of reference weight configuration in an inter prediction method according to the present application;
FIG. 5 is a flowchart illustrating an inter prediction method according to an embodiment of the present disclosure;
FIG. 6 is a block diagram illustrating a co-located block in an embodiment of an inter prediction method according to the present application;
FIG. 7 is a block diagram illustrating the partitioning of a current block in another embodiment of an inter prediction method according to the present application;
FIG. 8 is a flowchart illustrating an inter prediction method according to another embodiment of the present application;
FIG. 9 is a schematic diagram of neighboring prediction blocks in an embodiment of an inter prediction method according to the present application;
FIG. 10 is a flowchart illustrating an inter prediction method according to an embodiment of the present disclosure;
FIG. 11 is a flowchart illustrating an inter prediction method according to another embodiment of the present application;
FIG. 12 is a flowchart illustrating an inter prediction method according to an embodiment of the present disclosure;
FIG. 13 is a schematic diagram of the segmentation of the current block of the AWP of the present application;
FIG. 14 is a flowchart illustrating an embodiment of a video encoding method according to the present application;
FIG. 15 is a schematic diagram illustrating an embodiment of a video encoding system according to the present application;
FIG. 16 is a schematic diagram illustrating the structure of an embodiment of a readable storage medium according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein are for purposes of illustration only and are not limiting. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
In the description of the present application, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In the field of video transmission, because the amount of video image data is large, the main function of video coding is to compress video pixel data into a video code stream, thereby reducing the amount of video data, which reduces the network bandwidth required during video transmission as well as the storage space. The video pixel data includes at least RGB data and YUV data.
The video coding flow mainly comprises video acquisition, prediction, transform and quantization, and entropy coding. Prediction comprises intra prediction and inter prediction; that is, prediction removes the spatial and temporal redundancy of video images.
Because the luminance and chrominance values of pixels in temporally adjacent frames are close, there is strong correlation between them. Based on this correlation, inter prediction searches the reference frame for the matching block closest to the current block, for example by motion search, and records the motion information between the current block and the matching block. The motion information includes a motion vector (MV) and a reference frame index; in other embodiments the motion information may include other types of information, which is not limited herein. After the motion information is obtained, it is encoded and transmitted to the decoding end. At the decoding end, once the MV of the current block is parsed from the corresponding syntax elements, the decoder can find the matching block of the current block in the reference frame and copy its pixel values to the current block as the inter prediction value of the current block.
In the application of existing coding technology, inter-frame angular weighted prediction is mainly applied in AVS3 to obtain inter-frame predicted pixel values. Angular weighted prediction (AWP) is a new prediction mode under the merge mode, supporting coding block sizes from 8x8 to 64x64. The mode borrows the idea of intra angular prediction: a reference weight value is first set for the peripheral positions of the current block (both integer-pixel and sub-pixel positions), the weight corresponding to each pixel position is then obtained using an angle, and the resulting weight arrays are used to weight two different inter prediction values; the two weight arrays are denoted weight0 and weight1, respectively. A sketch of this weighting follows.
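As an illustration, the following C sketch shows how two inter prediction values might be combined with such a weight array. The 3-bit weight range [0, 8] matches the Clip3(0, 8, ...) derivation given later in this description, while the function and parameter names are illustrative assumptions rather than part of any standard implementation.

#include <stdint.h>

/* Illustrative sketch: blend two inter prediction values with an AWP
 * weight array.  Per-pixel weights are assumed to lie in [0, 8], so
 * weight1 = 8 - weight0 and the weighted sum is renormalized with a
 * rounding offset of 4 before the shift by 3. */
static void awp_blend(const uint8_t *pred0, const uint8_t *pred1,
                      const uint8_t *weight0, uint8_t *dst,
                      int width, int height)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int i  = y * width + x;
            int w0 = weight0[i];   /* per-pixel weight from weight0 */
            dst[i] = (uint8_t)((pred0[i] * w0 + pred1[i] * (8 - w0) + 4) >> 3);
        }
    }
}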
Referring to fig. 1 and fig. 2, fig. 1 is a schematic view of a weight array in an embodiment of an inter-frame prediction method of the present application, and fig. 2 is a schematic view of the angles supported by AWP in an embodiment of an inter-frame prediction method of the present application. The current AWP supports the 8 angles illustrated in fig. 2, and each angle supports 7 reference weight configurations; traversing the angles and reference weight configurations yields 56 prediction modes, so 56 AWP prediction modes are available for each class of block sizes. In other embodiments, the supported prediction angles may be set according to actual needs; correspondingly, once the supported angles change, the total number of prediction modes configurable from the angles and weights changes as well, which is not limited herein.
As illustrated in fig. 2, when AWP supports the 8 angles described above, the supported prediction angles correspond to the horizontal direction, the vertical direction, and slopes with absolute values of 1, 2 and 1/2 (five cases in all).
It will be appreciated that in other embodiments the number of prediction angles may be reduced based on the texture of the image or of the image block. For example, in one embodiment the number of angles is 6 and excludes the diagonal angles of the current block, i.e., diagonal angles 0 and 4 illustrated in fig. 2 are removed. With angles 0 and 4 removed, the supported prediction angles correspond to the horizontal direction, the vertical direction, and slopes with absolute values of 2 and 1/2 (four cases in all). In other embodiments, other prediction angles that do not affect image reconstruction may be removed according to actual needs, such as angles far from the horizontal or vertical directions, or supported angles may be added according to actual needs; these options are not listed one by one here. In the current embodiment, by removing some angles that do not affect prediction accuracy, the computational complexity of inter prediction can be reduced while the basic performance remains unchanged, thereby improving the response speed of inter prediction.
Further, referring to fig. 3, fig. 3 is a schematic diagram illustrating an angle partition supported by AWP in another embodiment of an inter prediction method according to the present application. In the current embodiment, the supported angles can be divided into 4 partitions, as illustrated in fig. 3, and according to the region where the angles are located, the supported angles are divided into 4 angle partitions: angle partition 0, angle partition 1, angle partition 2 and angle partition 3, wherein angle partition 0 includes angle 0 and angle 1, angle partition 1 includes angle 2 and angle 3, angle partition 2 includes angle 4 and angle 5, and angle partition 3 includes angle 6 and angle 7.
Further, when the supported angles are reduced according to actual requirements, the angle partitioning may be kept unchanged: as still illustrated in fig. 3, the angles near the lower-left diagonal form one partition, the angles near the horizontal direction form one partition, the angles near the upper-left diagonal form one partition, and the angles near the vertical direction form one partition. It will be appreciated that when the number of angles changes, the partitioning may also be redone in a new manner, which is not detailed here.
Still further, in other embodiments, the supported angle categories may be reduced in a uniform or non-uniform manner. After the angles are reduced, the remaining angles may be partitioned in the original manner, or a new manner may be adopted to obtain different angle partitions.
After the angles are reduced, the number of prediction modes on which UMVE offset is performed for the motion information in subsequent motion compensation can be reduced correspondingly; for example, UMVE offset originally performed on 42 prediction modes can be reduced to 35 or 28 prediction modes, improving the response speed of inter prediction while maintaining prediction accuracy.
Correspondingly, after the angles are reduced, the ordering of the angle modes can be adaptively modified according to actual requirements; for example, the angle modes may be ordered in accordance with the aspect ratio of the current coding block. The inter-frame prediction method provided by the application therefore comprises: ordering the angle modes of the current block according to the aspect ratio of the current block.
For example: for coding blocks with an aspect ratio of 8:1 or 4:1, the horizontal angle mode and the nearby angle modes are ordered first, followed by the vertical angle mode and the nearby angle modes; for coding blocks with an aspect ratio of 1:8 or 1:4, the vertical angle mode and the nearby angle modes are ordered first, followed by the horizontal angle mode and the nearby angle modes; for an aspect ratio of 1:2 or 2:1, the vertical angle mode and the nearby angle modes are likewise ordered first, followed by the horizontal group (see the sketch below). It will be appreciated that in other embodiments the arrangement of the angle modes may be adaptively adjusted in other orders when the aspect ratios of the coding blocks differ. In the present embodiment, compared with the prior art, ordering the angle modes differently for coding blocks of different aspect ratios reduces the bit overhead of transmitting the prediction mode and further improves the response speed of inter prediction.
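A hypothetical C sketch of such aspect-ratio-driven ordering follows. The enum values and the threshold test are assumptions for illustration; the text above only specifies which direction group is ordered first for each listed aspect ratio, and square blocks, which are not specified, default here to the vertical-first branch.

typedef enum { ORDER_HORIZONTAL_FIRST, ORDER_VERTICAL_FIRST } AngleOrder;

/* Wide blocks (8:1, 4:1) order the horizontal angle group first;
 * tall (1:8, 1:4) and mildly rectangular (2:1, 1:2) blocks order the
 * vertical group first.  The 1:1 case is an assumed default. */
static AngleOrder select_angle_order(int width, int height)
{
    if (width >= 4 * height)        /* 4:1 and 8:1 coding blocks */
        return ORDER_HORIZONTAL_FIRST;
    return ORDER_VERTICAL_FIRST;    /* 1:8, 1:4, 1:2, 2:1 (and assumed 1:1) */
}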
The reference weight configurations supported by AWP may include the 7 configurations shown in fig. 4; fig. 4 is a schematic diagram of the reference weight configuration in an inter prediction method of the present application. A reference weight configuration is a distribution function of reference weight values obtained from a reference weight index: as shown in fig. 4, a non-strictly monotonically increasing function is assigned using the 8 point positions along the effective length of the reference weights (indicated by the black arrows in fig. 4) as reference points, where the effective length of the reference weights is calculated from the prediction angle and the current block size.
Traversing the 8 angles and 7 reference weight configurations yields the 56 original prediction modes of AWP, as enumerated in the sketch below.
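As a minimal illustration, the 56 original prediction modes are simply the Cartesian product of the 8 angles and the 7 reference weight configurations; the type and function names in this C sketch are illustrative.

typedef struct { int angle; int weight_pos; } AwpMode;

/* Enumerate the 56 original AWP modes as (angle, weight position)
 * pairs; the mode index is angle * 7 + pos. */
static void enumerate_awp_modes(AwpMode modes[56])
{
    for (int angle = 0; angle < 8; angle++)     /* 8 supported angles */
        for (int pos = 0; pos < 7; pos++)       /* 7 weight positions */
            modes[angle * 7 + pos] = (AwpMode){ angle, pos };
}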
Referring to fig. 5, fig. 5 is a flowchart illustrating an inter prediction method according to an embodiment of the present application.
S510: a weight array of the current block in each original prediction mode is determined.
Before step S510 is performed, all original prediction modes are determined by first traversing the currently supported angle and reference weight configurations, and then determining the weight array of the current block in all original prediction modes. In the technical scheme provided by the application, all prediction modes determined based on angle and reference weight configurations are defined as original prediction modes, a selected prediction mode for inter-frame prediction is defined as a final prediction mode, and a part of prediction modes selected from all original prediction modes in the process of selecting the final prediction mode are defined as candidate prediction modes.
Further, the weight array of the current block in each original prediction mode can be obtained by deriving the weight array pixel by pixel based on each original prediction mode. The process of pixel-by-pixel derivation is as follows:
the above diagram illustrates an example of dividing the angle into 4 partitions, and the formulas derived from the luminance pixel-by-pixel weights are slightly different due to the difference in the areas where the angles are located. The weights may be derived pixel by pixel based on a formula derived from pixel by pixel weights corresponding to the region where the angle is located in the prediction mode of the current luminance block to obtain a weight array of the current luminance block. Let the block size of the current luminance block be MxN, where M is wide, N is high, X is log2 (absolute value of the weight prediction angle slope), and Y is the weight prediction position.
i) For angle 0 and angle 1, located in angle partition 0, the pixel-by-pixel luminance weights are derived by formulas [1]-[3] below:
[1] Calculate the effective length ValidLength of the reference weights:
ValidLength = (N + (M >> X)) << 1
[2] Set the reference weight values ReferenceWeights[x], where x ∈ [0, ValidLength - 1]:
FirstPos = (ValidLength >> 1) - 6 + Y * ((ValidLength - 1) >> 3)
ReferenceWeights[x] = Clip3(0, 8, x - FirstPos)
[3] Derive the weights SampleWeight[x][y] pixel by pixel:
SampleWeight[x][y] = ReferenceWeights[(y << 1) + ((x << 1) >> X)]
ii) For angle 2 and angle 3, located in angle partition 1, the pixel-by-pixel luminance weights are derived as follows:
[1] Calculate the effective length ValidLength of the reference weights:
ValidLength = (N + (M >> X)) << 1
[2] Set the reference weight values ReferenceWeights[x], where x ∈ [0, ValidLength - 1]:
FirstPos = (ValidLength >> 1) - 4 + Y * ((ValidLength - 1) >> 3) - ((M << 1) >> X)
ReferenceWeights[x] = Clip3(0, 8, x - FirstPos)
[3] Derive the weights SampleWeight[x][y] pixel by pixel:
SampleWeight[x][y] = ReferenceWeights[(y << 1) - ((x << 1) >> X)]
iii) For angle 4 and angle 5, located in angle partition 2, the pixel-by-pixel luminance weights are derived as follows:
[1] Calculate the effective length ValidLength of the reference weights:
ValidLength = (M + (N >> X)) << 1
[2] Set the reference weight values ReferenceWeights[x], where x ∈ [0, ValidLength - 1]:
FirstPos = (ValidLength >> 1) - 4 + Y * ((ValidLength - 1) >> 3) - ((N << 1) >> X)
ReferenceWeights[x] = Clip3(0, 8, x - FirstPos)
[3] Derive the weights SampleWeight[x][y] pixel by pixel:
SampleWeight[x][y] = ReferenceWeights[(x << 1) - ((y << 1) >> X)]
iv) For angle 6 and angle 7, located in angle partition 3, the pixel-by-pixel luminance weights are derived as follows:
[1] Calculate the effective length ValidLength of the reference weights:
ValidLength = (M + (N >> X)) << 1
[2] Set the reference weight values ReferenceWeights[x], where x ∈ [0, ValidLength - 1]:
FirstPos = (ValidLength >> 1) - 6 + Y * ((ValidLength - 1) >> 3)
ReferenceWeights[x] = Clip3(0, 8, x - FirstPos)
[3] Derive the weights SampleWeight[x][y] pixel by pixel:
SampleWeight[x][y] = ReferenceWeights[(x << 1) + ((y << 1) >> X)]
The pixel-by-pixel weights of a chroma block are derived as follows: for the current chroma block, the top-left weight of each 2x2 group in the weight array of the corresponding luma block is taken directly. With the block size of the current block being MxN, where M is the width and N is the height, x of the current chroma block ranges from 0 to (M/2 - 1) and y of the current chroma block ranges from 0 to (N/2 - 1).
The formula for deriving the pixel-by-pixel chroma weights is: SampleWeightChroma[x][y] = SampleWeight[x << 1][y << 1]. A sketch of these derivations follows.
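The following C sketch combines the angle partition 0 luma derivation above with the chroma rule. It is a sketch under stated assumptions: the [y][x] indexing and the buffer sizes (blocks up to 64x64, so ValidLength never exceeds 256) are illustrative choices, not part of the formulas themselves.

#include <stdint.h>

static int clip3(int lo, int hi, int v)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* Angle partition 0 derivation for an MxN luma block.  X is
 * log2(|slope of the weight prediction angle|) and Y is the weight
 * prediction position, as defined above. */
static void derive_weights_partition0(int M, int N, int X, int Y,
                                      uint8_t sample_weight[][64])
{
    int valid_length = (N + (M >> X)) << 1;
    int first_pos = (valid_length >> 1) - 6 + Y * ((valid_length - 1) >> 3);

    uint8_t ref_weights[256];              /* x in [0, ValidLength - 1] */
    for (int x = 0; x < valid_length; x++)
        ref_weights[x] = (uint8_t)clip3(0, 8, x - first_pos);

    for (int y = 0; y < N; y++)
        for (int x = 0; x < M; x++)
            sample_weight[y][x] = ref_weights[(y << 1) + ((x << 1) >> X)];
}

/* Chroma reuses the top-left weight of each 2x2 luma group:
 * SampleWeightChroma[x][y] = SampleWeight[x << 1][y << 1]. */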
S520: dividing the current block into a plurality of first sub-blocks, and constructing a unidirectional motion information candidate list of the current block based on the temporal motion information of at least two first sub-blocks.
Dividing the current block into a plurality of first sub-blocks, and then constructing a unidirectional motion information candidate list of the current block based on the time domain motion information of at least two first sub-blocks obtained by the division.
Further, in an embodiment, the current block is split evenly, both horizontally and vertically, into four first sub-blocks, and a unidirectional motion information candidate list of the current block is then constructed based on the temporal motion information of at least two first sub-blocks. The number of motion information entries that the unidirectional motion information candidate list can hold, i.e., the length of the list, may be set to 5, or alternatively to 4 or 8, which is not limited herein.
After the current block is divided into a plurality of first sub-blocks, the time domain motion information corresponding to each first sub-block is further acquired, and then motion compensation and subsequent cost calculation are carried out according to the obtained first sub-blocks and the time domain motion information corresponding to the first sub-blocks.
Correspondingly, in an embodiment, when the current block is split evenly into 4 first sub-blocks, the temporal motion information of each first sub-block is acquired and prediction is performed in units of first sub-blocks. The temporal motion information of a first sub-block is acquired as follows:
First, the spatial location of the time-domain co-located block corresponding to the current first sub-block is determined.
Let bx, by be the position coordinates of the current first sub-block within the whole frame, in units of scu (small coding unit), and determine the spatial position of the co-located block using a mask, where mask = (-1) ^ 3 and "^" denotes bitwise exclusive OR; after -1 and 3 are XORed, the two lowest bits of the mask are 0. An scu is a 4x4 CU (coding unit).
Then the coordinates (xpos, ypos) of the spatial position of the co-located block (in units of the first sub-block) are:
xpos=(bx&mask)+2;
ypos=(by&mask)+2;
The possible values of xpos are bx-1, bx, bx+1 and bx+2; the possible values of ypos are by-1, by, by+1 and by+2. A current first sub-block at a given coordinate position corresponds to exactly one temporal co-located block; the ranges of temporal co-located blocks corresponding to current blocks at different coordinates are shown in fig. 6, and the position derivation is sketched below.
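A minimal C sketch of this position derivation follows; the function and parameter names are illustrative.

/* Locate the temporal co-located block of the current first sub-block.
 * (bx, by) are coordinates in scu (4x4 CU) units; -1 ^ 3 (bitwise XOR)
 * yields a mask whose two lowest bits are 0, so (bx & mask) snaps bx
 * down to a multiple of 4. */
static void colocated_pos(int bx, int by, int *xpos, int *ypos)
{
    const int mask = -1 ^ 3;
    *xpos = (bx & mask) + 2;   /* one of bx - 1, bx, bx + 1, bx + 2 */
    *ypos = (by & mask) + 2;   /* one of by - 1, by, by + 1, by + 2 */
}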
Referring to fig. 6, fig. 6 is a schematic diagram of a co-located block in an embodiment of an inter prediction method of the present application. The dots in fig. 6 mark the coordinates of all possible co-located blocks of the first sub-block cur, and each small square represents one scu. The 4 first sub-blocks are traversed in the above manner of acquiring motion information: taking the scus at the four corners of the current block as references, the temporal block position corresponding to each first sub-block is obtained from the mask and the coordinates of that first sub-block, yielding the corresponding 4 temporal MVs. Each first sub-block then takes, as its temporal MV, the temporal MV corresponding to the corner scu that it contains.
Referring to fig. 7, fig. 7 is a schematic diagram illustrating a partition of a current block in another embodiment of an inter prediction method according to the present application.
As illustrated in fig. 7, the temporal MV corresponding to scu1 is taken as the temporal MV of first sub-block 1, the temporal MV corresponding to scu2 as that of first sub-block 2, the temporal MV corresponding to scu3 as that of first sub-block 3, and the temporal MV corresponding to scu4 as that of first sub-block 4. If a corner cannot provide a valid temporal MV, the first sub-block corresponding to that corner directly takes the TMVP (temporal motion vector prediction) of the current block as its temporal MV, as sketched below. Finally, the MVPs in the candidate list are traversed, and when the original TMVP is reached, motion compensation is performed on the 4 first sub-blocks separately.
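The corner-to-sub-block assignment with TMVP fallback can be sketched in C as below; all names are illustrative, and the validity flags are assumed to come from the co-located MV fetch described above.

typedef struct { int x, y; } Mv;

/* Each of the 4 first sub-blocks takes the temporal MV fetched at its
 * corner scu; if that corner yields no valid temporal MV, the
 * block-level TMVP of the current block is used instead. */
static void assign_subblock_mvs(const Mv corner_mv[4], const int corner_valid[4],
                                Mv block_tmvp, Mv sub_mv[4])
{
    for (int i = 0; i < 4; i++)
        sub_mv[i] = corner_valid[i] ? corner_mv[i] : block_tmvp;
}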
The execution order of step S510 and step S520 is not limited, and step S510 and step S520 may be executed simultaneously, step S510 may be executed first, step S520 may be executed second, or step S520 may be executed first, and step S510 may be executed second.
S530: and calculating coding cost based on the weight array, and selecting a plurality of groups of motion information with the minimum cost from the unidirectional motion information candidate list as a plurality of groups of first candidate motion information.
After the weight array of the current block in each original prediction mode is determined and the unidirectional motion information candidate list of the current block is constructed, the coding cost of each piece of motion information in the unidirectional motion information candidate list under each original prediction mode is calculated based on the weight array. The coding costs under each original prediction mode are then sorted, and the several groups of motion information with the smallest coding costs are selected from the unidirectional motion information candidate list as the groups of first candidate motion information of each original prediction mode. The first candidate motion information is the motion information selected from the unidirectional motion information candidate list for choosing the final prediction mode.
Further, in an embodiment, step S530 is to select two sets of motion information with the minimum cost from the unidirectional motion information candidate list as the first candidate motion information of each original prediction mode based on the encoding cost.
Further, when there are multiple original prediction modes and the unidirectional motion information candidate list contains multiple pieces of motion information, step S530 calculates the coding cost of each piece of motion information under each original prediction mode based on the weight array corresponding to that mode, sorts the coding costs of the pieces of motion information under each original prediction mode, and selects the several groups with the smallest coding costs in each prediction mode as the groups of first candidate information. In an embodiment, the unidirectional motion information candidate list includes 5 groups of motion information V, W, X, Y and Z, and there are 56 original prediction modes, numbered 1 to 56. The coding cost of each of the 5 groups is obtained under each original prediction mode, the costs are sorted, and the groups with the smallest costs are taken as the groups of first candidate motion information; for example, the coding costs of V, W, X, Y and Z under an original prediction mode are sorted, and the two groups with the smallest coding costs are selected as the first candidate motion information.
Further, in an embodiment, when there are 56 original prediction modes and the unidirectional motion information candidate list includes 5 groups of motion information, step S530 may take the 2 groups of motion information with the smallest coding cost under each original prediction mode as the first candidate motion information, as sketched below.
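A C sketch of selecting the two smallest-cost entries for one original prediction mode follows; the list length of 5 matches the embodiment above, and the costs are assumed to have been computed already from the weight array.

#include <float.h>

/* Return (via best/second) the indices of the two smallest costs in
 * the 5-entry unidirectional motion information candidate list. */
static void pick_two_min_cost(const double cost[5], int *best, int *second)
{
    double c0 = DBL_MAX, c1 = DBL_MAX;
    *best = *second = -1;
    for (int i = 0; i < 5; i++) {
        if (cost[i] < c0) {            /* new smallest: demote the old one */
            c1 = c0; *second = *best;
            c0 = cost[i]; *best = i;
        } else if (cost[i] < c1) {     /* new second smallest */
            c1 = cost[i]; *second = i;
        }
    }
}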
S540: and selecting a final prediction mode from the original prediction modes based on the plurality of groups of first candidate motion information.
After the groups of first candidate motion information are determined, motion compensation is performed on each group to obtain the first predicted values of the first candidate motion information under each prediction mode. The coding cost corresponding to the first candidate motion information under each original prediction mode is then calculated based on the obtained first predicted values, and the original prediction mode with the smallest coding cost is selected as the final prediction mode. Once the final prediction mode is determined, the final predicted value of the current block may be further determined, based on the weight array and the final prediction mode, for encoding and subsequent decoding.
In the technical scheme provided by the application, the weight array of the current block in each original prediction mode is determined; the current block is divided into a plurality of first sub-blocks, and a unidirectional motion information candidate list of the current block is constructed based on the temporal motion information of at least two first sub-blocks; coding costs are calculated based on the weight array, and the groups of motion information with the smallest coding costs are selected from the unidirectional motion information candidate list as the groups of first candidate motion information; a final prediction mode is then selected from the original prediction modes based on these groups of first candidate motion information.
Referring to fig. 8, fig. 8 is a flowchart illustrating an inter prediction method according to another embodiment of the present application. In the current embodiment, the inter prediction method provided in the present application includes:
S81: A weight array of the current block in each original prediction mode is determined.
Step S81 is the same as step S510, and please refer to the description of the corresponding parts above, and the detailed description is omitted here.
In the present embodiment, before the step S520 divides the current block into a plurality of first sub-blocks and constructs the unidirectional motion information candidate list of the current block based on the temporal motion information of at least two first sub-blocks, the method provided in the present application further includes steps S82 to S84.
S82: Judging whether a target neighboring prediction block corresponding to the current block exists.
When the unidirectional motion information candidate list is constructed, a candidate motion information list is constructed first; the candidate motion information list and the unidirectional motion information candidate list hold the same number of motion information entries. The candidate motion information list is constructed as follows: the available neighboring prediction blocks of the current block are fetched first, and unidirectional motion information is then derived from the motion information of these spatial neighboring blocks and filled into the candidate motion information list. The candidate motion information list is set in advance to the same length as the unidirectional motion information candidate list, so that when it holds fewer than the set number of entries, the temporal motion information of the sub-blocks obtained by division is further added to it.
A target neighboring prediction block is a neighboring prediction block that adopts an inter prediction mode. Step S82 judges whether a neighboring prediction block that corresponds to the current block and adopts an inter prediction mode exists; if so, the following step S83 is performed. Otherwise, if it is judged that no neighboring prediction block corresponding to the current block exists, and/or that none of the neighboring prediction blocks adopts an inter prediction mode, it is judged that no target neighboring prediction block corresponding to the current block exists, and step S85 below and the subsequent steps are performed directly, so that the temporal motion information of at least one first sub-block is filled into the candidate motion information list until the number of entries reaches the preset number, completing the construction of the candidate motion information list of the current block.
S83: If it is judged that target neighboring prediction blocks corresponding to the current block exist, fetching all target neighboring prediction blocks of the current block and duplicate-checking their motion information to determine the available neighboring prediction blocks.
If it is judged that target neighboring prediction blocks corresponding to the current block exist, all of them are fetched, and the motion information of all target neighboring prediction blocks is then duplicate-checked to determine whether each is an available neighboring prediction block. An available neighboring prediction block is a target neighboring prediction block whose motion information differs from the motion information already in the candidate motion information list.
Further, in an embodiment, the step of duplicate-checking the motion information of the target neighboring prediction blocks to determine the available neighboring prediction blocks further comprises: duplicate-checking the motion information of each target neighboring prediction block with a full check. In the full check, the motion information of each target neighboring prediction block is compared with every entry previously filled into the candidate motion information list, so as to avoid filling in motion information identical to what the candidate motion information list of the current block already contains.
S84: motion information of the available neighboring prediction blocks is added to the candidate motion information list.
After the available neighboring prediction blocks are determined through the above judgment and duplicate check, their motion information is added to the candidate motion information list. Specifically, the motion information of the available neighboring prediction blocks is filled into the candidate motion information list sequentially, in the order of judgment, until the list reaches the set length or the motion information of all available neighboring prediction blocks has been added.
Further, referring to fig. 9, fig. 9 is a schematic diagram of an adjacent prediction block in an embodiment of an inter prediction method according to the present application. As illustrated in fig. 9, each current block may correspond to a plurality of neighboring prediction blocks.
In an embodiment, when a current block corresponds to a plurality of neighboring prediction blocks, the neighboring prediction blocks of the current block are fetched one by one in a set order, and each is judged as to whether it adopts an inter prediction mode; if so, it is judged to be a target neighboring prediction block. It is then judged whether the motion information of the current target neighboring prediction block is the same as motion information previously filled into the candidate motion information list; if not, the current target neighboring prediction block is judged to be an available neighboring prediction block. Otherwise, the next neighboring prediction block is judged in the same way, and so on, until all neighboring prediction blocks corresponding to the current block have been judged.
In another embodiment, when a current block corresponds to a plurality of neighboring prediction blocks, all neighboring prediction blocks of the current block are fetched in a set order, each is judged as to whether it adopts an inter prediction mode, and those that do are taken as target neighboring prediction blocks. Whether each target neighboring prediction block is an available neighboring prediction block is then judged in the set order, and blocks judged available are filled into the candidate motion information list. It should be noted that each target neighboring prediction block judged later must be compared with all motion information previously filled into the candidate motion information list, to determine whether its motion information duplicates what has already been written, thereby avoiding filling the list with identical motion information.
As illustrated in fig. 9, the neighboring prediction blocks of the current block include F, G, C, A and D (B is skipped). In one embodiment, the flow of judging whether F, G, C, A and D are available neighboring prediction blocks is as follows:
first, it is determined whether F, G, C, A and D are available target neighboring prediction blocks, and the flow is as follows:
i) If F exists and adopts an inter prediction mode, F is "available"; otherwise, F is "unavailable".
j) If G exists and adopts an inter prediction mode, G is "available"; otherwise, G is "unavailable".
k) If C exists and adopts an inter prediction mode, C is "available"; otherwise, C is "unavailable".
l) If A exists and adopts an inter prediction mode, A is "available"; otherwise, A is "unavailable".
m) If D exists and adopts an inter prediction mode, D is "available"; otherwise, D is "unavailable".
Secondly, the motion information of each available target neighboring prediction block is duplicate-checked, and the motion information of the blocks that pass the check is filled into the candidate motion information list. The duplicate checking process is as follows:
(1) First judge the availability of F: if F is available, fill F into the candidate motion information list; otherwise, if F is unavailable, go to the next step (2);
(2) Judge the availability of G: if G is unavailable, set G as unavailable and go to the next step (3); otherwise, further judge whether F is available: if F is unavailable, set G as available, and G may be added to the candidate motion information list;
if F is available, compare whether the MVs of G and F repeat: if not, set G as available; otherwise G is unavailable;
(3) Judge the availability of C: if C is unavailable, set C as unavailable and go to the next step (4); otherwise, if C is available, further judge whether G is available: if G is unavailable, set C as available and add it to the candidate motion information list;
if G is available, compare whether the MVs of C and G repeat: if not, set C as available; otherwise C is unavailable;
(4) Judge the availability of A: if A is unavailable, set A as unavailable and go to the next step (5); otherwise, if A is available, further judge whether F is available: if F is unavailable, set A as available and add it to the candidate motion information list;
if F is available, compare whether the MVs of A and F repeat: if not, set A as available; otherwise A is unavailable;
(5) Judge the availability of D: if D is unavailable, set D as unavailable and the availability judgment ends; otherwise, if D is available, further judge whether A is available: if A is unavailable, initialize the MV of A as unavailable; otherwise obtain its MV and judge whether the MVs of D and A repeat.
Likewise judge whether G is available: if G is unavailable, initialize the MV of G as unavailable; otherwise obtain its MV and judge whether the MVs of D and G repeat. Whether D is available is decided by the following condition 1 and condition 2:
Condition 1: A is unavailable, or A is available and the MVs of D and A do not repeat;
Condition 2: G is unavailable, or G is available and the MVs of D and G do not repeat;
If both conditions hold simultaneously, D is finally available; otherwise, D is unavailable.
In another embodiment, the flow of judging whether F, G, C, A and D are available neighboring prediction blocks is as follows (a consolidated sketch in C follows this list):
(1) Judge the availability of F: if F is available, fill F into the candidate motion information list; otherwise, go to the next step (2).
(2) Judge the availability of G: if G is unavailable, set G as unavailable and go to the next step (3); otherwise, further judge whether F is available: if F is unavailable, set G as available, and G may be added to the candidate motion information list;
otherwise, if F is available, compare whether the MVs of F and G repeat: if not, set G as available; otherwise G is unavailable.
(3) Judge the availability of C: if C is unavailable, set C as unavailable and go to the next step (4); otherwise, if C is available, further judge whether G is available and, if so, whether the MVs of C and G repeat;
likewise judge whether F is available and, if so, whether the MVs of C and F repeat;
Condition 1: G is unavailable, or G is available and the MVs of C and G do not repeat;
Condition 2: F is unavailable, or F is available and the MVs of C and F do not repeat;
If both conditions hold simultaneously, C is finally available; otherwise, C is unavailable.
(4) Judge the availability of A: if A is unavailable, go to the next step (5); otherwise, if A is available, check whether the MVs of A repeat with those of F, G and C respectively; only when the MV of A differs from all of them is A available.
(5) Judge the availability of D: if D is unavailable, the judgment ends; otherwise, check whether the MVs of D repeat with those of F, G, C and A respectively; only when the MV of D differs from all of them is D available.
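The following C sketch consolidates this second flow. mv_equal is an assumed helper that compares two motion information entries (here by MV and the POC of the referenced frame, anticipating the POC discussion below); comparing each block against every earlier block that is still available reproduces steps (2) to (5).

#include <stdbool.h>

typedef struct { int mv_x, mv_y, ref_poc; } MvEntry;   /* illustrative */

static bool mv_equal(const MvEntry *a, const MvEntry *b)
{
    return a->mv_x == b->mv_x && a->mv_y == b->mv_y &&
           a->ref_poc == b->ref_poc;
}

enum { F, G, C, A, D, N_NEIGH };   /* fixed checking order */

static void dedup_neighbours(bool avail[N_NEIGH], const MvEntry mv[N_NEIGH])
{
    for (int i = G; i <= D; i++) {         /* F is kept as-is */
        if (!avail[i])
            continue;
        for (int j = F; j < i; j++) {      /* earlier, still-available blocks */
            if (avail[j] && mv_equal(&mv[i], &mv[j])) {
                avail[i] = false;          /* duplicate motion information */
                break;
            }
        }
    }
}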
Further, in an embodiment, the existing duplicate check compares the reference frame index together with the motion vector; however, two different reference frame indices may correspond to the same POC, so duplicate motion information can slip through. To avoid this problem, the technical scheme provided by the application duplicate-checks the motion information of the target neighboring prediction blocks by picture order count (POC) instead of by the original reference-frame-index comparison. Duplicate checking by POC improves the accuracy of the check and better avoids filling identical motion information into the candidate motion information list, thereby determining a more accurate candidate motion information list; a more accurate unidirectional motion information candidate list is then determined from it, improving the accuracy of inter prediction. The flow of POC-based duplicate checking is described in the embodiment corresponding to fig. 10, and a sketch follows.
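A minimal sketch of such a POC-based duplicate check follows; the MotionInfo layout and the ref_to_poc lookup table (mapping a reference index to the POC of the referenced picture) are assumptions for illustration.

#include <stdbool.h>

typedef struct { int mv_x, mv_y, ref_idx; } MotionInfo;

/* Two entries are duplicates when their MVs match and their reference
 * indices resolve to the same POC, even if the indices differ. */
static bool same_motion_poc(const MotionInfo *a, const MotionInfo *b,
                            const int *ref_to_poc)
{
    return a->mv_x == b->mv_x && a->mv_y == b->mv_y &&
           ref_to_poc[a->ref_idx] == ref_to_poc[b->ref_idx];
}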
After the motion information of the available neighboring prediction blocks is added to the candidate motion information list, if the number of entries in the list has reached the preset number, step S85 below is skipped and steps S86 to S87 are performed directly. If the number of entries has not yet reached the preset number, step S85 below is further performed so that the candidate motion information list reaches the preset length.
The above-mentioned step S520 of constructing a unidirectional motion information candidate list of the current block based on the temporal motion information of at least two first sub-blocks further comprises step S85 and step S86.
S85: and adding the time domain motion information of different first sub-blocks into the candidate motion information list in sequence according to the preset position sequence of the first sub-blocks until the number of the motion information of the candidate motion information list reaches the preset number.
If the number of the motion information in the candidate motion information list does not reach the preset number, adding the time domain motion information of different first sub-blocks into the candidate motion information list according to the preset position sequence of the first sub-blocks and the position sequence.
Before adding the temporal motion information of the different first sub-blocks to the candidate motion information list, the temporal motion information of the first sub-blocks added to the candidate motion information list is further checked to avoid filling the candidate motion information list with the same motion information. Wherein, check the weight and include: judging whether the time domain motion information of the first sub-block is the same as the motion information in the candidate motion information list, if so, if the motion information of the current first sub-block is judged to be repeated, the time domain motion information of the current first sub-block is not filled in the candidate motion information list, and judging whether the time domain motion information of the next first sub-block is repeated; otherwise, if the time domain motion information of the first sub-block is judged to be different from the motion information in the candidate motion information list, the motion information of the first sub-block is judged not to be repeatedly writable in the candidate motion information list.
In an embodiment, before adding the temporal motion information of the first sub-block to the candidate motion information list, the method provided by the present application further includes:
judging whether motion information identical to the temporal motion information of the current first sub-block exists in the candidate motion information list; if not, adding the temporal motion information of the first sub-block to the candidate motion information list; if so, not adding it, and continuing to judge the next first sub-block until the number of motion information entries in the candidate motion information list reaches the preset number and/or all first sub-blocks have been traversed. Whether two pieces of motion information are the same can be judged by comparing their corresponding POCs.
Before step S85 is executed, the method includes: dividing the current block into a plurality of first sub-blocks.
Further, in step S85, adding the temporal motion information of different first sub-blocks to the candidate motion information list in sequence according to the preset position order of the first sub-blocks includes: after the temporal motion information of all first sub-blocks has been added, if the number of entries in the candidate motion information list is still smaller than the preset number, generating at least one piece of new motion information based on the motion information already in the list, so that the number of entries reaches the preset number.
In an embodiment, generating at least one piece of new motion information based on the motion information in the candidate motion information list includes: selecting the first motion information in the candidate motion information list, applying different scalings to the selected motion information, and adding the scaled motion information to the candidate motion information list until the number of pieces of motion information in the candidate motion information list reaches the preset number.
In another embodiment, generating at least one piece of new motion information based on the motion information in the candidate motion information list includes: starting from the first motion information in the candidate motion information list, scaling the pieces of motion information in sequence, and adding the scaled motion information to the candidate motion information list until the number of pieces of motion information in the candidate motion information list reaches the preset number.
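Under the same assumed tuple representation as above, this sequential-scaling padding might be sketched as follows; the scaling factor of 2 and the unchanged reference POC are placeholders, since the application does not fix the scaling rule.

    def pad_list_by_scaling(cand_list, preset_num, factor=2):
        # Walk the existing entries from the first one onward and append a
        # scaled copy of each until the preset number is reached.
        src = 0
        while cand_list and len(cand_list) < preset_num:
            ref_poc, (mv_x, mv_y) = cand_list[src % len(cand_list)]
            cand_list.append((ref_poc, (mv_x * factor, mv_y * factor)))
            src += 1
        return cand_list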
Further, the current block comprises four first sub-blocks arranged in a 2×2 grid (the shape of the Chinese character 'tian', 田), and the preset position order of the first sub-blocks is, in sequence, the upper left corner, the upper right corner, the lower left corner and the lower right corner. It will be appreciated that in other embodiments the preset position order of the first sub-blocks may follow other orders, which are not listed herein.
Further, in an embodiment, after the temporal motion information of different first sub-blocks is added to the candidate motion information list according to the preset position order of the first sub-blocks, the method provided in the present application further includes: constructing a unidirectional motion information candidate list based on the motion information in the candidate motion information list.
Further, the step of constructing a unidirectional motion information candidate list based on the motion information in the candidate motion information list includes the content described in step S86.
S86: selecting forward motion information or backward motion information from the motion information in the candidate motion information list, and filling it into the corresponding position of the unidirectional motion information candidate list.
Wherein the motion information comprises forward motion information and/or backward motion information.
Specifically, the forward or backward motion information of each candidate in the candidate motion information list is selected according to the parity of its position and put into the unidirectional motion information candidate list. That is, at the 1st, 3rd and 5th positions only the forward motion information is filled in, and if no forward motion information exists, the backward motion information is filled in instead; at the remaining positions only the backward motion information is filled in, and similarly, if no backward motion information exists, the forward motion information is filled in.
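A minimal sketch of this parity rule follows; representing each candidate as a dict with optional 'fwd' and 'bwd' entries is an assumption made here for illustration.

    def build_unidirectional_list(cand_list):
        uni = []
        for idx, cand in enumerate(cand_list):
            fwd, bwd = cand.get("fwd"), cand.get("bwd")
            if idx % 2 == 0:   # 1st, 3rd, 5th, ... positions prefer forward
                uni.append(fwd if fwd is not None else bwd)
            else:              # remaining positions prefer backward
                uni.append(bwd if bwd is not None else fwd)
        return uni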
S87: calculating coding cost based on the weight array, and selecting a plurality of groups of motion information with the minimum cost from the unidirectional motion information candidate list as a plurality of groups of first candidate motion information;
S88: selecting a final prediction mode from the original prediction modes based on the plurality of groups of first candidate motion information.
Step S87 and step S88 in the present embodiment are the same as step S530 and step S540 described above, and specific reference may be made to the description of the corresponding parts above, which are not repeated here.
Referring to fig. 10, fig. 10 is a flowchart illustrating an inter prediction method according to an embodiment of the present application.
In the current embodiment, the above-mentioned duplicate checking of the motion information of the target neighboring prediction block by using the full duplicate-checking method further includes:
S101: acquiring the reference frame image sequence number corresponding to the motion information in the target adjacent prediction block.
Each piece of motion information corresponds to a reference frame image sequence number. The image sequence number identifies the position of an image in the image sequence; in the technical solution provided by the present application, the image sequence number is unique, i.e., one image sequence number corresponds to exactly one frame of image.
Similarly, before step S102, an image sequence number corresponding to each piece of motion information in the candidate motion information list is also acquired.
S102: judging whether the reference frame image sequence number corresponding to the current target adjacent prediction block is the same as the reference frame image sequence number corresponding to any one piece of motion information in the candidate motion information list, and judging whether the motion vector corresponding to the current target adjacent prediction block is the same as the motion vector corresponding to any one piece of motion information in the candidate motion information list.
After the reference frame image sequence number corresponding to the motion information of the target adjacent prediction block and the image sequence number corresponding to each piece of motion information in the candidate motion information list are obtained, it is further judged whether the reference frame image sequence number corresponding to the current target adjacent prediction block is the same as the reference frame image sequence number corresponding to any one piece of motion information in the candidate motion information list.
S103: if it is judged that the reference frame image sequence number corresponding to the current target adjacent prediction block is the same as the reference frame image sequence number corresponding to any one piece of motion information in the candidate motion information list, and that the motion vector corresponding to the current target adjacent prediction block is the same as the motion vector corresponding to the same piece of motion information, the target adjacent prediction block is judged to be an unavailable adjacent prediction block.
That is, if step S102 judges that the reference frame image sequence number corresponding to the current target adjacent prediction block is the same as the reference frame image sequence number corresponding to some piece of motion information in the candidate motion information list, and that the corresponding motion vectors are also the same, the current target adjacent prediction block is determined to be an unavailable adjacent prediction block. If the reference frame image sequence number corresponding to the current target adjacent prediction block differs from the reference frame image sequence numbers corresponding to all motion information in the candidate motion information list, and/or the motion vector corresponding to the current target adjacent prediction block differs from the motion vectors corresponding to all motion information in the candidate motion information list, the current target adjacent prediction block is determined to be an available adjacent prediction block. In the current embodiment, by incorporating the image sequence number into the duplicate check, whether the motion information of the current target adjacent prediction block repeats the motion information in the candidate motion information list can be judged more accurately, so that a more accurate candidate motion information list is obtained and, in turn, a unidirectional motion information candidate list that more accurately reflects the motion state of the current block is constructed.
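The availability rule of steps S102-S103 can be sketched as follows; the dict keys 'ref_poc' and 'mv' are illustrative assumptions, not terms of the application.

    def is_available_neighbor(neigh, cand_list):
        # Unavailable only when both the reference POC and the motion vector
        # coincide with some entry already in the candidate list.
        for cand in cand_list:
            if neigh["ref_poc"] == cand["ref_poc"] and neigh["mv"] == cand["mv"]:
                return False  # duplicate found: unavailable neighboring block
        return True           # differs from every entry: available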
Further, the reference frame list includes a first direction list and a second direction list. In an embodiment, the first direction is the forward direction and the second direction is the backward direction.
In an embodiment, the step of duplicate checking the motion information of the target adjacent prediction block further includes: if the reference frame of the current target adjacent prediction block in the first direction is unavailable, and the reference frame of a candidate motion information in the candidate motion information list in the second direction is unavailable, further judging whether the motion information of the current target adjacent prediction block in the second direction is the same as the motion information of the current candidate motion information in the candidate motion information list in the first direction.
If the reference frame image sequence number corresponding to the motion information of the current target adjacent prediction block is the same as the reference frame image sequence number corresponding to the current candidate motion information, and the motion vectors are also the same, the motion information of the target adjacent prediction block repeats the current candidate motion information; otherwise, it is judged not to repeat the current candidate motion information, and it is continuously judged whether it repeats the next candidate motion information. If the motion information of the target adjacent prediction block is judged to repeat none of the candidate motion information in the candidate motion information list, the current target adjacent prediction block is judged to be an available adjacent prediction block; otherwise, i.e., if its motion information repeats any one candidate motion information in the candidate motion information list, the current target adjacent prediction block is judged to be an unavailable adjacent prediction block. In the current embodiment, a piece of motion information contained in the candidate motion information list is referred to as candidate motion information.
For example, suppose the POC-based duplicate check is adopted and the reference frame list includes a first direction list and a second direction list (denoted by L0 and L1, respectively). When it is determined that the adjacent prediction block A has no available reference frame in the L0 direction, and the candidate motion information B in the candidate motion information list has no available reference frame in the L1 direction, it is judged whether the motion information of A in the L1 direction is the same as the motion information of B in the L0 direction; if the POCs are the same and the MVs are the same in both the x and y directions, the motion information of A and B repeats, and only one of A and B is retained.
In another embodiment, the step of duplicate checking the motion information of the target adjacent prediction block by using the full duplicate-checking method further includes: if the reference frames of the current target adjacent prediction block in both the first direction and the second direction are judged to be available, and the reference frames of a candidate motion information in the candidate motion information list in both the first direction and the second direction are judged to be available, further judging whether the motion information of the target adjacent prediction block in the first direction is identical to the motion information of the current candidate motion information in the second direction, and whether the motion information of the target adjacent prediction block in the second direction is identical to the motion information of the current candidate motion information in the first direction.
If the motion information of the target adjacent prediction block in the first direction is the same as the motion information of the current candidate motion information in the second direction, and the motion information of the target adjacent prediction block in the second direction is the same as the motion information of the current candidate motion information in the first direction, the motion information of the target adjacent prediction block is judged to repeat the current candidate motion information; otherwise, it is judged not to repeat the current candidate motion information, and it is continuously judged whether it repeats the next candidate motion information. If the motion information of the target adjacent prediction block repeats none of the candidate motion information in the candidate motion information list, the current target adjacent prediction block is judged to be an available adjacent prediction block; otherwise, it is judged to be an unavailable adjacent prediction block.
For example, with the POC-based duplicate check, if A and B are both judged to be available in the L0 and L1 directions, it is judged whether the motion information of A in L0 is the same as that of B in L1, and whether the motion information of A in L1 is the same as that of B in L0; if both comparisons are the same, the motion information of A and B repeats.
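Both cross-direction cases can be sketched together as follows; representing each side as a dict mapping 'L0'/'L1' to a (ref_poc, (mv_x, mv_y)) tuple, or to None when the reference frame is unavailable, is an assumption introduced here.

    def repeats_cross_direction(a, b):
        # First embodiment: A unavailable in L0 and B unavailable in L1 --
        # compare A's L1 motion information against B's L0.
        if a["L0"] is None and b["L1"] is None:
            return a["L1"] == b["L0"]
        # Second embodiment: both directions available on both sides --
        # compare crosswise in both directions.
        if all(v is not None for v in (a["L0"], a["L1"], b["L0"], b["L1"])):
            return a["L0"] == b["L1"] and a["L1"] == b["L0"]
        return False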
Referring to fig. 11, fig. 11 is a flowchart of an inter prediction method according to another embodiment of the present application. In the current embodiment, the calculating the coding cost based on the weight array in the step S530 further includes:
S1101: performing motion compensation on each piece of motion information in the unidirectional motion information candidate list respectively, to obtain a first prediction value corresponding to each piece of motion information.
After the unidirectional motion information candidate list is constructed, motion compensation is carried out on the current block by utilizing each piece of motion information in the unidirectional motion information candidate list, and then a first predicted value corresponding to each piece of motion information is obtained.
Further, in the technical solution provided in the present application, a plurality of prediction modes may be determined using the angle and reference weight configurations; in step S1101, a first prediction value corresponding to each piece of motion information in the unidirectional motion information candidate list is then determined under each prediction mode.
S1102: and calculating and obtaining the coding cost corresponding to each piece of motion information based on the first predicted value.
And calculating the coding cost corresponding to each piece of motion information based on the obtained first predicted value.
When a plurality of first predicted values corresponding to each set of motion information are calculated in step S1101, step S1102 may respectively calculate coding costs corresponding to each motion information in each prediction mode based on each first predicted value.
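As an illustration of how the weight array can enter the cost, the following sketch blends two unidirectional predictions with the AWP weights and measures the distortion against the source block. The maximum weight of 8 and the integer rounding are assumptions, and a real RDCost would additionally include a bit-rate term.

    def awp_weighted_sad(src, pred0, pred1, weights, w_max=8):
        # src, pred0, pred1 and weights are equally sized 2-D lists.
        cost = 0
        for s_row, p0_row, p1_row, w_row in zip(src, pred0, pred1, weights):
            for s, p0, p1, w in zip(s_row, p0_row, p1_row, w_row):
                # Blend the two predictions under the per-pixel AWP weight.
                blended = (w * p0 + (w_max - w) * p1 + w_max // 2) // w_max
                cost += abs(s - blended)  # SAD of the weighted prediction
        return cost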
Further, in an embodiment, the ultimate motion vector expression (UMVE) technique may also be introduced into AWP. For example, when the angle and reference weight configurations are used to determine 56 prediction modes and each unidirectional motion information candidate list includes 5 pieces of motion information, a UMVE offset is performed (the UMVE offset covers 4 directions by 5 steps, i.e., 20 offset results); the MV subjected to the UMVE offset is compared with the MV not subjected to the offset, and whether the offset MV or the original MV is used as the final MV is determined according to the comparison of the coding costs. The flow is as follows:
(i) Performing motion compensation on all motion information in the unidirectional motion information candidate list to obtain the first predicted values, and calculating the sum of absolute differences (SAD: Sum of Absolute Difference) of the pixels between the motion information with UMVE offset and the motion information without UMVE offset.
(j) In the 56 prediction modes, calculating the weighted RDCost of all motion information in the unidirectional motion information candidate list without UMVE offset, sorting the results, and then selecting the two groups of motion information with the smallest RDCost, cost0 and cost1, as the first candidate motion information in each prediction mode.
(k) Selecting a part of all the prediction modes for UMVE offset of the motion information, calculating the weighted RDCost of all motion information in the unidirectional motion information candidate lists under the selected original prediction modes with UMVE offset, sorting the results, and selecting the two groups of motion information with the smallest RDCost, cost0 and cost1, as the first candidate motion information in each prediction mode.
It should be noted that, unlike the preceding step (j), step (k) does not traverse all the original prediction modes but selects a part of them for UMVE offset; the selection follows a) and b) below:
a) If the current block has been visited, i.e., AWP was executed before, UMVE offset is applied only to the original prediction modes (at most 7) that participated in the previous reselection;
b) If the current block has not been visited, i.e., AWP has not been executed, the costs (cost0) of the first motion information in the 56 modes are sorted, and the 42 original prediction modes with the smallest costs are selected for UMVE offset.
Further, after the first candidate motion information is determined, a final prediction mode is selected from the original prediction modes based further on the plurality of sets of first candidate motion information. In the current embodiment, the process of selecting the final prediction mode based on the plurality of sets of first candidate motion information is as follows:
All the original prediction modes in step (k) (both with and without UMVE offset) enter the RDO (Rate Distortion Optimization) rough selection stage, where the SATD (Sum of Absolute Transformed Differences) is used to calculate the cost of each group of first candidate motion information in each prediction mode entering the rough selection stage, and the results are sorted. The 7 prediction modes with the smallest cost (each prediction mode comprising two groups of first candidate motion information) are selected as candidate prediction modes and enter the final fine selection stage. In the current step, the indices of the two groups of first candidate motion information cannot be the same, and if a UMVE offset is present, the UMVE indices cannot be the same either.
(6) After the 7 prediction modes with the minimum coding cost are obtained, interpolation, residual calculation, transform, quantization, inverse transform and inverse quantization are further performed for each first candidate motion information to obtain reconstructed pixels; the RDCost (based on the SSE) is obtained through the RDO process, the RDCosts of the first candidate motion information under the 7 candidate prediction modes are compared, and the mode with the smallest RDCost is taken as the final prediction mode.
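For illustration, the 4-directions-by-5-steps offset pattern of the UMVE stage described above can be sketched as follows; the concrete step sizes are placeholders, since the text only fixes their number.

    def umve_offsets(mv, steps=(1, 2, 4, 8, 16)):
        # 4 directions x 5 steps = 20 offset candidates around the input MV.
        mv_x, mv_y = mv
        offsets = []
        for s in steps:
            offsets += [(mv_x + s, mv_y), (mv_x - s, mv_y),
                        (mv_x, mv_y + s), (mv_x, mv_y - s)]
        return offsets  # len(offsets) == 20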
Further, the step of performing motion compensation on each motion information in the unidirectional motion information candidate list to obtain a first prediction value corresponding to each motion information respectively further includes: and respectively performing motion compensation on each first sub-block by utilizing the time domain motion information of a plurality of first sub-blocks included in the current block, and further obtaining a corresponding first predicted value of the current block based on the motion compensation result of each first sub-block.
Referring to fig. 12, fig. 12 is a flowchart of an inter prediction method according to an embodiment of the present application. In the current embodiment, the above-described step divides the current block into a plurality of first sub-blocks, including step S1201.
S1201: dividing the current block evenly in a cross pattern to obtain four first sub-blocks.
The current block is divided evenly in a cross pattern to obtain four first sub-blocks of equal area. The four first sub-blocks are arranged in a 2×2 grid (the shape of the Chinese character 'tian', 田), and the preset position order of the first sub-blocks is, in sequence, the upper left corner, the upper right corner, the lower left corner and the lower right corner.
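A minimal sketch of this cross division, describing a block by its top-left corner and size (names assumed for illustration):

    def cross_divide(x, y, w, h):
        # Four equal first sub-blocks in a 2x2 layout, returned in the preset
        # order: upper-left, upper-right, lower-left, lower-right.
        half_w, half_h = w // 2, h // 2
        return [(x, y, half_w, half_h), (x + half_w, y, half_w, half_h),
                (x, y + half_h, half_w, half_h),
                (x + half_w, y + half_h, half_w, half_h)]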
S1202: dividing the current block according to the dividing mode corresponding to each original prediction mode to obtain two second sub-blocks.
In the present embodiment, a corresponding partition manner is preset for each original prediction mode; whichever original prediction mode is selected, the current block is divided using the partition manner corresponding to that original prediction mode, so as to obtain two second sub-blocks.
The step of performing motion compensation on each piece of motion information in the unidirectional motion information candidate list to obtain the first prediction value corresponding to each piece of motion information further includes the following step:
S1203: selecting the temporal motion information of the first sub-block corresponding to the second sub-block, and performing motion compensation on the second sub-block to obtain the first predicted value corresponding to the current block.
In the present embodiment, the first sub-block corresponding to a second sub-block is determined according to the boundary between the two second sub-blocks, which may also be understood as being determined by the dividing direction in which the current block is partitioned.
Fig. 13 is a schematic diagram of the partitioning of a current block by AWP in the present application, showing the partition structures obtained when the current block is divided into two second sub-blocks under different original prediction modes.
In an embodiment, when dividing the current block according to the partition mode corresponding to each original prediction mode to obtain two second sub-blocks distributed left and right or up and down, the step of selecting the time domain motion information of the first sub-block corresponding to the second sub-block, and performing motion compensation on the second sub-block further includes: the motion information of the first sub-block at the upper left corner is selected to perform motion compensation on the second sub-block distributed at the upper side or the left side, and the motion information of the first sub-block at the lower right corner is selected to perform motion compensation on the second sub-block distributed at the lower side or the right side.
In another embodiment, when dividing the current block according to the partition mode corresponding to each original prediction mode to obtain two second sub-blocks distributed left and right or up and down, the step of selecting the time domain motion information of the first sub-block corresponding to the second sub-block, and performing motion compensation on the second sub-block further includes: the motion information of the first sub-block at the lower left corner is selected to perform motion compensation on the second sub-block distributed at the left side or the lower side, and the motion information of the first sub-block at the upper right corner is selected to perform motion compensation on the second sub-block distributed at the right side or the upper side.
In another embodiment, when the current block is divided according to the partition mode corresponding to the original prediction mode and the partition lines for dividing the two second sub-blocks are not parallel to the diagonal line, the horizontal line and the vertical line, the motion information of the first sub-block close to the center of gravity of each second sub-block is selected to perform motion compensation on the second sub-block.
In still another embodiment, when the current block is divided according to the partition mode corresponding to the original prediction mode and the partition line for dividing the two second sub-blocks is parallel to any one diagonal line, the step of selecting the temporal motion information of the first sub-block corresponding to the second sub-block, and performing motion compensation on the second sub-block further includes: and selecting the motion information of the first sub-blocks distributed along the diagonal line intersecting with the dividing line to correspondingly perform motion compensation on the two second sub-blocks.
With reference to fig. 13, when the block is divided as in column 4 of fig. 13, two second sub-blocks distributed up and down are obtained. For column 4, the tmvp of the first sub-blocks at the upper left corner and the lower right corner is adopted: the current block is divided into an upper and a lower second sub-block, the upper second sub-block uses the tmvp of the upper-left first sub-block, and the lower second sub-block uses the tmvp of the lower-right first sub-block for motion compensation.
For columns 5-7, the tmvp of the first sub-blocks at the lower left corner and the upper right corner is adopted: the current block is divided into a left and a right second sub-block, the left second sub-block uses the tmvp of the lower-left first sub-block, and the right second sub-block uses the tmvp of the upper-right first sub-block for motion compensation.
In another embodiment, the temporal motion information (tmvp) of the upper-left and lower-right first sub-blocks is used for the original prediction modes of columns 1, 2, 3 and the last column in fig. 13: the current block is divided into a left and a right second sub-block, the left second sub-block is motion-compensated with the tmvp of the upper-left first sub-block, and the right second sub-block with the tmvp of the lower-right first sub-block.
For the original prediction modes of columns 4, 5, 6 and 7, the tmvp of the upper-right and lower-left first sub-blocks is adopted: the current block is divided into a left and a right second sub-block, the left second sub-block is motion-compensated with the tmvp of the upper-right first sub-block, and the right second sub-block with the tmvp of the lower-left first sub-block.
In another embodiment, for the original prediction modes of the 1st and the last column, the tmvp of the upper-left and lower-right first sub-blocks is used: the current block is divided into a left and a right second sub-block, the left second sub-block is motion-compensated with the tmvp of the upper-left first sub-block, and the right second sub-block with the tmvp of the lower-right first sub-block.
For columns 2 and 3, the tmvp of the upper-left and lower-right first sub-blocks is adopted: the current block is divided into an upper and a lower second sub-block, the upper second sub-block is motion-compensated with the tmvp of the upper-left first sub-block, and the lower second sub-block with the tmvp of the lower-right first sub-block.
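One of the mappings above (the upper or left second sub-block borrows the upper-left tmvp, the lower or right one borrows the lower-right tmvp) can be sketched as follows; the dictionary keys and position labels are assumptions introduced here for illustration.

    def pick_subblock_tmvp(second_pos, tmvp):
        # tmvp maps 'TL', 'TR', 'BL', 'BR' to the temporal motion information
        # of the four first sub-blocks; second_pos identifies the second
        # sub-block as 'up', 'down', 'left' or 'right'.
        if second_pos in ("up", "left"):
            return tmvp["TL"]
        return tmvp["BR"]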
The technical solution provided in the current embodiment mainly combines the idea of subTMVP: taking the different prediction modes into account, AWP can select the TMVP of sub-blocks at different positions to perform motion compensation on the current block, i.e., block-level motion compensation is performed on the current block with TMVPs from different positions, thereby obtaining more accurate prediction values.
In other embodiments, the subTMVP technique may be used in place of the original TMVP while still occupying only one position in the candidate list. The current block may be correspondingly divided into four first sub-blocks and the motion information of the four first sub-blocks stored; during motion compensation, subTMVP performs motion compensation on the four first sub-blocks separately to obtain their respective predicted values, and the predicted value of the current block is then obtained based on the predicted values of the four first sub-blocks. It can be appreciated that in other embodiments the motion information of the first sub-block may be selected in other manners to perform motion compensation on the second sub-block, which are not listed here; as long as the current block is divided according to the partition manner corresponding to each original prediction mode to obtain two second sub-blocks, and the temporal motion information of the first sub-block corresponding to each second sub-block is selected to perform motion compensation on that second sub-block to obtain the first predicted value corresponding to the current block, the scheme falls within the protection scope of the present application.
In an embodiment, the method provided by the present application further comprises: determining all prediction modes by traversing the angle and reference weight configurations, wherein the angle dimension is 7 and does not include the diagonal angles of the current block.
In an embodiment, the method provided by the present application further comprises: ordering the angle modes of the current block in combination with the ratio of the width to the height of the current block; reference may be made to the above description of the corresponding parts of figs. 2 to 3.
Referring to fig. 14, fig. 14 is a flowchart of an embodiment of a video encoding method according to the present application. The method provided by the application comprises the following steps:
s1410: a final prediction mode of the current block is determined.
Wherein the final prediction mode is determined according to the method as described in any one of the embodiments of fig. 1 to 13 and corresponding thereto.
S1420: a final prediction value of the current block is determined based on the final prediction mode, and the current block is encoded based on the final prediction value.
After determining the final prediction mode according to the method as described in any one of the embodiments of fig. 1 to 13 and corresponding thereto, a final prediction value of the current block is further determined based on the determined final prediction mode, and the current block is encoded based on the final prediction value.
Wherein encoding the current block based on the final prediction value of the current block includes: an index of one motion information in the unidirectional motion information candidate list is encoded.
Further, the method provided by the application further comprises the following steps: the texture direction of the current block is determined.
After the texture direction of the current block is determined, the prediction modes are reordered based on the texture direction of the current block. Specifically, all prediction modes may be reordered starting from the prediction mode corresponding to the angle that is the same as or closest to the texture direction. Still further, encoding the current block based on the predicted value of the current block includes: encoding the index of the final prediction mode of the current block after the prediction modes are reordered.
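As a sketch of such a reordering, assuming each prediction mode carries its partition angle in degrees (an attribute assumed here, not defined by the application):

    def reorder_modes_by_texture(modes, texture_angle):
        # Partition directions repeat every 180 degrees, so the angular
        # distance is taken modulo 180.
        def angular_dist(a, b):
            d = abs(a - b) % 180.0
            return min(d, 180.0 - d)
        return sorted(modes, key=lambda m: angular_dist(m["angle"], texture_angle))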
Referring to fig. 15, fig. 15 is a schematic structural diagram of a video coding system according to an embodiment of the present application. The video coding system includes a memory and a processor; the memory stores a computer program, and the processor is configured to execute the computer program to implement the method according to any one of the embodiments shown in fig. 1 to 14 and corresponding to the respective drawings.
The memory 1502 includes a local storage (not shown) and stores a computer program that, when executed, implements the methods described in any of the embodiments of fig. 1-14 and corresponding thereto.
The processor 1501 is coupled to the memory 1502, the processor 1501 being configured to execute a computer program to perform the method as described above in any one of fig. 1 to 14 and their corresponding embodiments.
Referring to fig. 16, fig. 16 is a schematic structural diagram of an embodiment of a readable storage medium according to the present application. The readable storage medium 1600 stores a computer program 1601 executable by a processor, the computer program 1601 being configured to implement the method described in any one of figs. 1 to 14 and their corresponding embodiments. Specifically, the storage medium 1600 may be one of a memory, a personal computer, a server, a network device, or a USB disk, which is not limited herein.
The foregoing description is only of embodiments of the present application, and is not intended to limit the scope of the patent application, and all equivalent structures or equivalent processes using the descriptions and the contents of the present application or other related technical fields are included in the scope of the patent application.

Claims (21)

1. An inter prediction method, the method comprising:
determining a weight array of the current block in each original prediction mode;
Dividing the current block into a plurality of first sub-blocks, and constructing a unidirectional motion information candidate list of the current block based on time domain motion information of at least two first sub-blocks;
calculating coding cost of each motion information in the unidirectional motion information candidate list under each original prediction mode based on the weight array, and selecting a plurality of groups of motion information with the minimum cost in each original prediction mode from the unidirectional motion information candidate list as a plurality of groups of first candidate motion information;
performing motion compensation on each of the plurality of groups of the first candidate motion information respectively, to obtain a first predicted value of the first candidate motion information in each prediction mode;
and calculating the corresponding coding cost of the first candidate motion information under each original prediction mode based on the first prediction value, and selecting the original prediction mode corresponding to the minimum coding cost as a final prediction mode.
2. The method according to claim 1, wherein before the dividing the current block into a plurality of first sub-blocks and constructing the unidirectional motion information candidate list of the current block based on temporal motion information of at least two of the first sub-blocks, the method further comprises:
Judging whether a target adjacent prediction block corresponding to the current block exists or not, wherein the target adjacent prediction block is an adjacent prediction block adopting an inter prediction mode;
if yes, acquiring all target adjacent prediction blocks of the current block, and duplicate checking the motion information of the target adjacent prediction blocks to determine available adjacent prediction blocks;
and adding the motion information of the available adjacent prediction blocks into a candidate motion information list.
3. The method according to claim 2, wherein constructing the unidirectional motion information candidate list of the current block based on temporal motion information of at least two of the first sub-blocks, comprises:
sequentially adding the time domain motion information of different first sub-blocks into the candidate motion information list according to the preset position sequence of the first sub-blocks until the number of the motion information of the candidate motion information list reaches the preset number;
and selecting forward motion information or backward motion information from the motion information in the candidate motion information list, and correspondingly filling the same position of the unidirectional motion information candidate list, wherein the motion information comprises forward motion information and/or backward motion information.
4. The method for inter prediction according to claim 3, wherein,
the current block comprises four first sub-blocks arranged in a 2×2 grid (the shape of the Chinese character 'tian', 田), and the preset position order of the first sub-blocks is, in sequence, the upper left corner, the upper right corner, the lower left corner and the lower right corner.
5. The method according to claim 3, wherein sequentially adding temporal motion information of different first sub-blocks to the candidate motion information list according to a preset position order of the first sub-blocks, comprises:
after all the time domain motion information of the first sub-block is added into the candidate motion information list, if the number of the motion information of the candidate motion information list is smaller than the preset number, generating at least one new motion information based on the motion information in the candidate motion information list, so that the number of the motion information of the candidate motion information list reaches the preset number.
6. The method of inter prediction according to claim 5, wherein the generating at least one new motion information based on the motion information in the candidate motion information list comprises:
and starting from the first motion information in the candidate motion information list, scaling the motion information sequentially, and adding the scaled motion information to the candidate motion information list until the number of the motion information in the candidate motion information list reaches the preset number.
7. The method of inter prediction according to claim 3, wherein before adding the temporal motion information of the first sub-block to the candidate motion information list, the method further comprises:
judging whether motion information which is the same as the time domain motion information of the current first sub-block exists in the candidate motion information list;
if not, adding the time domain motion information of the first sub-block into the candidate motion information list;
and if so, not adding the time domain motion information of the first sub-block into the candidate motion information list.
8. The method according to claim 2, wherein the duplicate checking of the motion information of the target adjacent prediction block to determine an available adjacent prediction block further comprises:
duplicate checking the motion information of the target adjacent prediction block by using a full duplicate-checking method to determine the available adjacent prediction block.
9. The method according to claim 8, wherein the duplicate checking of the motion information of the target adjacent prediction block by using the full duplicate-checking method further comprises:
acquiring a reference frame image sequence number corresponding to motion information in the target adjacent prediction block;
Judging whether the reference frame image sequence number corresponding to the current target adjacent prediction block is the same as the reference frame image sequence number corresponding to any one piece of motion information in the candidate motion information list; judging whether the motion vector corresponding to the current target adjacent prediction block is the same as the motion vector corresponding to any one motion information in the candidate motion information list;
if the reference frame image sequence number corresponding to the current target adjacent prediction block is judged to be the same as the reference frame image sequence number corresponding to any one piece of motion information in the candidate motion information list, and the motion vector corresponding to the current target adjacent prediction block is judged to be the same as the motion vector corresponding to any one piece of motion information in the candidate motion information list, judging that the target adjacent prediction block is an unavailable adjacent prediction block;
and otherwise, judging the target adjacent prediction block as an available adjacent prediction block.
10. The method of inter prediction according to claim 8, wherein the reference frame list includes a first direction list and a second direction list;
and the duplicate checking of the motion information of the target adjacent prediction block by using the full duplicate-checking method comprises the following steps:
if the reference frame of the current target adjacent prediction block in the first direction is not available, and the reference frame of the candidate motion information in the candidate motion information list in the second direction is not available, further judging whether the motion information of the current target adjacent prediction block in the second direction is the same as the motion information of the current candidate motion information in the candidate motion information list in the first direction;
and if the reference frame image sequence number corresponding to the motion information of the current target adjacent prediction block is the same as the reference frame image sequence number corresponding to the current candidate motion information and the motion vectors are the same, judging that the motion information of the target adjacent prediction block is repeated with the current candidate motion information; otherwise, judging that the motion information of the target adjacent prediction block is not repeated with the current candidate motion information, and continuously judging whether the motion information of the target adjacent prediction block is repeated with the next candidate motion information.
11. The method of inter prediction according to claim 8, wherein the reference frame list includes a first direction list and a second direction list;
and the duplicate checking of the motion information of the target adjacent prediction block by using the full duplicate-checking method comprises the following steps:
if the reference frames of the current target adjacent prediction block in both the first direction and the second direction are judged to be available, and the reference frames of the candidate motion information in the candidate motion information list in both the first direction and the second direction are judged to be available, further judging whether the motion information of the target adjacent prediction block in the first direction is identical to the motion information of the current candidate motion information in the candidate motion information list in the second direction, and judging whether the motion information of the target adjacent prediction block in the second direction is identical to the motion information of the current candidate motion information in the first direction;
and if the motion information of the target adjacent prediction block in the first direction is the same as the motion information of the current candidate motion information in the second direction, and the motion information of the target adjacent prediction block in the second direction is the same as the motion information of the current candidate motion information in the first direction, judging that the motion information of the target adjacent prediction block is repeated with the current candidate motion information; otherwise, judging that they are not repeated, and continuously judging whether the motion information of the target adjacent prediction block is repeated with the next candidate motion information.
12. The method according to claim 1, wherein said calculating, based on the weight array, coding costs of each motion information in the unidirectional motion information candidate list in each of the original prediction modes includes:
Respectively performing motion compensation on each piece of motion information in the unidirectional motion information candidate list to obtain respective corresponding first predicted values;
and calculating and obtaining the coding cost of each motion information in each original prediction mode based on the first prediction value.
13. The method according to claim 12, wherein motion compensating each motion information in the unidirectional motion information candidate list to obtain a respective first prediction value, further comprises:
and respectively performing motion compensation on each first sub-block by using time domain motion information of a plurality of first sub-blocks included in the current block to obtain the corresponding first predicted value of the current block.
14. The method of inter prediction according to claim 12, wherein,
the dividing the current block into a plurality of first sub-blocks includes: performing cross average division on the current block to obtain four first sub-blocks;
after the current block is divided into the plurality of first sub-blocks, the method further comprises: dividing the current block according to the dividing mode corresponding to each original prediction mode to obtain two second sub-blocks;
the performing of motion compensation on each piece of motion information in the unidirectional motion information candidate list to obtain the first prediction value corresponding to each piece of motion information further comprises:
and selecting time domain motion information of a first sub-block corresponding to the second sub-block, and performing motion compensation on the second sub-block to obtain a first predicted value corresponding to the current block.
15. The method according to claim 14, wherein when dividing the current block according to the partition mode corresponding to each of the original prediction modes to obtain two second sub-blocks distributed left and right or up and down, selecting the time domain motion information of the first sub-block corresponding to the second sub-block, and performing motion compensation on the second sub-block further comprises:
and selecting the motion information of the first sub-block at the upper left corner to perform motion compensation on the second sub-block distributed at the upper side or the left side, and selecting the motion information of the first sub-block at the lower right corner to perform motion compensation on the second sub-block distributed at the lower side or the right side.
16. The method according to claim 14, wherein when dividing the current block according to the partition mode corresponding to each of the original prediction modes to obtain two second sub-blocks distributed left and right or up and down, selecting the time domain motion information of the first sub-block corresponding to the second sub-block, and performing motion compensation on the second sub-block further comprises:
And selecting the motion information of the first sub-block at the lower left corner to perform motion compensation on the second sub-block distributed at the left side or the lower side, and selecting the motion information of the first sub-block at the upper right corner to perform motion compensation on the second sub-block distributed at the right side or the upper side.
17. The inter prediction method according to claim 1, characterized in that the method further comprises: determining all original prediction modes by traversing the angle and reference weight configurations, wherein the angle dimension is 6 and does not include the diagonal direction angles of the current block.
18. The inter prediction method according to claim 1, characterized in that the method further comprises:
and sequencing the angle modes of the current block correspondingly by combining the ratio of the width to the height of the current block.
19. A method of video encoding, the method comprising:
determining a final prediction mode of the current block based on the method of any one of claims 1-18;
and determining a final prediction value of the current block based on the final prediction mode, and encoding the current block based on the final prediction value.
20. A video coding system, the video coding system comprising a memory and a processor; stored in the memory is a computer program, the processor being adapted to execute the computer program to carry out the steps of the method according to any one of claims 1-18.
21. A readable storage medium, characterized in that it stores a computer program executable by a processor for implementing the steps of the method according to any one of claims 1-18.
CN202010853191.1A 2020-08-22 2020-08-22 Inter-frame prediction method, video coding method and related devices Active CN112055203B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010853191.1A CN112055203B (en) 2020-08-22 2020-08-22 Inter-frame prediction method, video coding method and related devices

Publications (2)

Publication Number Publication Date
CN112055203A CN112055203A (en) 2020-12-08
CN112055203B (en) 2024-04-12

Family

ID=73599838

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010853191.1A Active CN112055203B (en) 2020-08-22 2020-08-22 Inter-frame prediction method, video coding method and related devices

Country Status (1)

Country Link
CN (1) CN112055203B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117499628A (en) * 2020-03-26 2024-02-02 阿里巴巴(中国)有限公司 Method and apparatus for encoding or decoding video
CN114640848B (en) * 2021-04-13 2023-04-28 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment thereof
WO2023123495A1 (en) * 2021-12-31 2023-07-06 Oppo广东移动通信有限公司 Prediction method and apparatus, device, system, and storage medium
CN114885164B (en) * 2022-07-12 2022-09-30 深圳比特微电子科技有限公司 Method and device for determining intra-frame prediction mode, electronic equipment and storage medium
WO2024050723A1 (en) * 2022-09-07 2024-03-14 Oppo广东移动通信有限公司 Image prediction method and apparatus, and computer readable storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1527607A (zh) * 2003-01-14 2004-09-08 Samsung Electronics Co Ltd Method and apparatus for coding and or decoding moving image
CN105009590A (en) * 2013-03-15 2015-10-28 高通股份有限公司 Device and method for scalable coding of video information
CN108141604A (en) * 2015-06-05 2018-06-08 英迪股份有限公司 Image coding and decoding method and image decoding apparatus
CN110383695A (en) * 2017-03-03 2019-10-25 西斯维尔科技有限公司 Method and apparatus for being coded and decoded to digital picture or video flowing
CN111567045A (en) * 2017-10-10 2020-08-21 韩国电子通信研究院 Method and apparatus for using inter prediction information
CN111418205A (en) * 2018-11-06 2020-07-14 北京字节跳动网络技术有限公司 Motion candidates for inter prediction
CN110225346A (en) * 2018-12-28 2019-09-10 杭州海康威视数字技术股份有限公司 A kind of decoding method and its equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhou Yun et al., "Research on key technologies of inter-frame prediction for H.266/VVC video coding" (H.266/VVC视频编码帧间预测关键技术研究), Radio & TV Broadcast Engineering (广播与电视技术), full text *
Yoshitaka Kidani et al., "Non-CE4: On merge list generation for geometric partitioning," JVET meeting, full text *

Also Published As

Publication number Publication date
CN112055203A (en) 2020-12-08

Similar Documents

Publication Publication Date Title
CN112055203B (en) Inter-frame prediction method, video coding method and related devices
KR102081213B1 (en) Image prediction method and related device
CN110249628B (en) Video encoder and decoder for predictive partitioning
CN112087629B (en) Image prediction method, device and computer readable storage medium
US9451266B2 (en) Optimal intra prediction in block-based video coding to calculate minimal activity direction based on texture gradient distribution
US8711939B2 (en) Method and apparatus for encoding and decoding video based on first sub-pixel unit and second sub-pixel unit
KR101208863B1 (en) Selecting encoding types and predictive modes for encoding video data
WO2017005146A1 (en) Video encoding and decoding method and device
CN111741297B (en) Inter-frame prediction method, video coding method and related devices
CN107810632B (en) Intra prediction processor with reduced cost block segmentation and refined intra mode selection
CN102036067A (en) Moving image encoding apparatus and control method thereof
CN111818342B (en) Inter-frame prediction method and prediction device
CN108989799B (en) Method and device for selecting reference frame of coding unit and electronic equipment
US11838499B2 (en) Encoding/decoding method and apparatus for coding unit partitioning
CN111263144B (en) Motion information determination method and equipment
CN112565768B (en) Inter-frame prediction method, encoding and decoding system and computer readable storage medium
CN110719467A (en) Prediction method of chrominance block, encoder and storage medium
KR101842551B1 (en) Method for deciding motion partition mode and encoder
CN113873257B (en) Method, device and equipment for constructing motion information candidate list
CN110730344B (en) Video coding method and device and computer storage medium
CN113794883B (en) Encoding and decoding method, device and equipment
CN112449181A (en) Encoding and decoding method, device and equipment
CN111669592B (en) Encoding and decoding method, device and equipment
CN107426573B (en) Self-adaptive rapid prediction unit partitioning method and device based on motion homogeneity
CN116980590A (en) Adaptive selection of IBC reference regions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant