CN112055203A - Inter-frame prediction method, video coding method and related devices thereof

Inter-frame prediction method, video coding method and related devices thereof

Info

Publication number
CN112055203A
Authority
CN
China
Prior art keywords
motion information
block
sub-block
prediction
candidate
Legal status
Granted
Application number
CN202010853191.1A
Other languages
Chinese (zh)
Other versions
CN112055203B (en)
Inventor
陈瑶
粘春湄
张雪
江东
方瑞东
林聚财
殷俊
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202010853191.1A
Publication of CN112055203A
Application granted
Publication of CN112055203B
Status: Active

Classifications

    • H: Electricity; H04: Electric communication technique; H04N: Pictorial communication, e.g. television
    • H04N19/103: Methods or arrangements for coding digital video signals using adaptive coding; selection of coding mode or of prediction mode
    • H04N19/176: Adaptive coding characterised by the coding unit, the unit being an image region that is a block, e.g. a macroblock
    • H04N19/513: Predictive coding involving temporal prediction; motion estimation or motion compensation; processing of motion vectors

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application discloses an inter-frame prediction method, a video coding method and related devices thereof. The method comprises the following steps: determining a weight array of the current block in each original prediction mode; dividing the current block into a plurality of first sub-blocks, and constructing a unidirectional motion information candidate list of the current block based on the temporal motion information of at least two first sub-blocks; calculating coding costs based on the weight array, and selecting multiple groups of motion information with the minimum coding cost from the unidirectional motion information candidate list as multiple groups of first candidate motion information; and selecting a final prediction mode from the original prediction modes based on the multiple groups of first candidate motion information. Through this technical scheme, a prediction mode with better prediction accuracy can be selected, further improving the accuracy of inter-frame prediction.

Description

Inter-frame prediction method, video coding method and related devices thereof
Technical Field
The present application relates to the field of video encoding and decoding technologies, and in particular, to an inter-frame prediction method, a video encoding method, and related apparatuses.
Background
Because the data volume of video images is large, video image data usually needs to be encoded and compressed. The compressed data is called a video code stream, which is transmitted to the user end through a wired or wireless network and then decoded for viewing.
The whole video coding flow comprises prediction, transformation, quantization, coding and other processes. Prediction is divided into intra-frame prediction and inter-frame prediction. Inter-frame prediction uses the temporal correlation between image frames to compress images. In the long-term research and development process, the inventors of the present application found that the current inter-frame prediction method has certain limitations, which affect the accuracy of inter-frame prediction to a certain extent.
Disclosure of Invention
The technical problem mainly solved by the application is to provide an inter-frame prediction method, a video coding method and a related device thereof, which can select a prediction mode with better prediction accuracy so as to improve the accuracy of inter-frame prediction.
In order to solve the technical problem, the application adopts a technical scheme that: there is provided an inter prediction method, the method including:
determining a weight array of the current block in each original prediction mode;
dividing the current block into a plurality of first sub-blocks, and constructing a unidirectional motion information candidate list of the current block based on time domain motion information of at least two first sub-blocks;
calculating coding costs based on the weight array, and selecting multiple groups of motion information with the minimum coding cost from the unidirectional motion information candidate list as multiple groups of first candidate motion information;
selecting a final prediction mode among the original prediction modes based on a plurality of sets of the first candidate motion information.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided a video encoding method, the method comprising:
determining a final prediction mode of the current block based on the inter prediction method as described above;
determining a final prediction value of the current block based on the final prediction mode, and encoding the current block based on the final prediction value.
In order to solve the above technical problem, the present application adopts another technical solution: a video encoding system is provided, the video encoding system comprising a memory and a processor; the memory has stored therein a computer program for execution by the processor to implement the steps of the method as described above.
In order to solve the above technical problem, the present application adopts another technical solution: there is provided a readable storage medium storing a computer program executable by a processor for implementing the method as described above.
The beneficial effects of this application are: different from the prior art, the technical scheme provided by the application determines a weight array of the current block in each original prediction mode; divides the current block into a plurality of first sub-blocks and constructs a unidirectional motion information candidate list of the current block based on the temporal motion information of at least two first sub-blocks; calculates coding costs based on the weight array and selects multiple groups of motion information with the minimum coding cost from the unidirectional motion information candidate list as multiple groups of first candidate motion information; and selects the final prediction mode from the original prediction modes based on the multiple groups of first candidate motion information. A unidirectional motion information candidate list that reflects the motion state of the current block more accurately is thereby constructed. Compared with the prior art, by selecting the groups of motion information with the minimum coding cost from this more accurate unidirectional motion information candidate list, a prediction mode with better accuracy can be selected, further improving the accuracy of inter-frame prediction.
Drawings
FIG. 1 is a diagram illustrating a weight array according to an embodiment of an inter-frame prediction method of the present application;
FIG. 2 is a schematic diagram illustrating angles supported by AWP in an embodiment of an inter-frame prediction method of the present application;
FIG. 3 is a schematic diagram of angular partitions supported by AWP in another embodiment of an inter-frame prediction method of the present application;
FIG. 4 is a schematic diagram of reference weight configuration in an inter-frame prediction method according to the present application;
FIG. 5 is a flowchart illustrating an inter-frame prediction method according to an embodiment of the present disclosure;
FIG. 6 is a diagram illustrating a co-located block in an embodiment of an inter prediction method according to the present application;
FIG. 7 is a diagram illustrating a division of a current block in another embodiment of an inter prediction method according to the present application;
FIG. 8 is a flowchart illustrating an inter-frame prediction method according to another embodiment of the present disclosure;
FIG. 9 is a diagram illustrating neighboring prediction blocks in an embodiment of an inter-frame prediction method according to the present application;
FIG. 10 is a flowchart illustrating an inter-frame prediction method according to an embodiment of the present application;
FIG. 11 is a flowchart illustrating a method of inter-frame prediction according to another embodiment of the present application;
FIG. 12 is a flowchart illustrating an inter-frame prediction method according to an embodiment of the present disclosure;
FIG. 13 is a schematic representation of the partitioning of a current block of AWP in the present application;
FIG. 14 is a flowchart illustrating a video encoding method according to an embodiment of the present application;
FIG. 15 is a block diagram illustrating an embodiment of a video coding system according to the present application;
fig. 16 is a schematic structural diagram of an embodiment of a readable storage medium according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly specifically limited otherwise. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In the field of video transmission, because the data volume of video images is large, the main function of video coding is to compress video pixel data into a video code stream, thereby reducing the data volume of the video, lowering the network bandwidth required to transmit it, and also reducing the storage space required. The video pixel data at least comprises RGB data and YUV data.
The video coding process mainly comprises video acquisition, prediction, transformation and quantization, and entropy coding. Prediction includes intra-frame prediction and inter-frame prediction, which remove the spatial and temporal redundancy of the video image respectively.
Because the luminance and chrominance signal values of pixels in temporally adjacent frames are close and strongly correlated, inter-frame prediction exploits this correlation: it searches the reference frame, by methods such as motion search, for the matching block closest to the current block, and records the motion information between the current block and the matching block. The motion information includes a motion vector (MV) and a reference frame index; in other embodiments, the motion information may also include other types of information, which is not limited herein. After the motion information is obtained, it is encoded and transmitted to the decoding end. At the decoding end, once the MV of the current block is parsed from the corresponding syntax elements, the decoder can find the matching block of the current block in the reference frame and copy the pixel values of the matching block to the current block as the inter-frame prediction value of the current block.
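For concreteness, the motion information described above can be represented by a small structure. A minimal C sketch follows; the field names are illustrative, not taken from any codec specification.

typedef struct {
    int mv_x, mv_y;  /* motion vector (MV) from the current block to its matching block */
    int ref_idx;     /* reference frame index */
} MotionInfo;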
Among existing coding technologies, inter-frame angular weighted prediction is mainly applied in AVS3 to obtain inter-frame prediction pixel values. Angular Weighted Prediction (AWP) is a new prediction mode under the merge mode, and the supported coding block sizes range from 8x8 to 64x64. The prediction mode borrows the idea of intra angular prediction: reference weight values are first set for the peripheral positions of the current block (including integer-pixel and sub-pixel positions), then the weight value corresponding to each pixel position is obtained using an angle, and the prediction values from two different image frames are weighted through the finally obtained weight arrays, denoted weight0 and weight1 respectively.
Please refer to fig. 1 and fig. 2 together, in which fig. 1 is a schematic diagram of a weight array in an embodiment of an inter-frame prediction method of the present application, and fig. 2 is a schematic diagram of the angles supported by AWP in an embodiment of an inter-frame prediction method of the present application. The current AWP supports the 8 angles illustrated in fig. 2, and each angle supports 7 reference weight configurations; traversing the angles and reference weight configurations yields 56 configurable prediction modes, so 56 prediction modes can exist in AWP for each block size. In other embodiments, the supported prediction angles may be set according to actual needs; correspondingly, after the supported angles are changed, the total number of prediction modes configurable from the angles and weights also changes, which is not limited herein.
As illustrated in fig. 2, when AWP supports the 8 angles described above, the absolute values of the slopes of the supported prediction angles include five values: horizontal, vertical, 1, 2, and 1/2.
It is understood that in other embodiments, the number of prediction angles may be correspondingly reduced according to the texture of the image or the image block, for example to 6 in one embodiment, excluding the diagonal angles of the current block, i.e. removing the diagonal angles 0 and 4 illustrated in fig. 2. When angles 0 and 4 are removed, the absolute values of the slopes of the supported prediction angles are four values: horizontal, vertical, 2 and 1/2. In other embodiments, other prediction angles that do not affect image reconstruction may be removed according to actual requirements, such as some angles far from the horizontal or vertical direction, or supported angles may be added according to actual requirements; these are not specifically listed here. In the current embodiment, by removing angles that do not affect the prediction accuracy, the complexity of inter-frame prediction computation can be reduced while the basic performance remains unchanged, and the response speed of inter-frame prediction can be further improved.
Further, please refer to fig. 3, wherein fig. 3 is a schematic diagram of the angle partitions supported by AWP in another embodiment of an inter-frame prediction method of the present application. In the current embodiment, the supported angles can be divided into 4 partitions. As illustrated in fig. 3, according to the regions in which the angles lie, the angles are divided into 4 angle partitions: angle partition 0, angle partition 1, angle partition 2 and angle partition 3, where angle partition 0 includes angle 0 and angle 1, angle partition 1 includes angle 2 and angle 3, angle partition 2 includes angle 4 and angle 5, and angle partition 3 includes angle 6 and angle 7.
Furthermore, when the supported angles are reduced according to actual requirements, the angle partitions can be kept unchanged; that is, as still illustrated in fig. 3, the angle toward the lower-left diagonal forms one partition, the angles close to the horizontal direction form one partition, the angle close to the upper-left diagonal forms one partition, and the angles close to the vertical direction form one partition. It is understood that when the number of angles changes, the partitioning may be performed in a new manner, which is not specifically described herein.
Still further, in other embodiments, the number of supported angles may also be reduced in a uniform or non-uniform manner. After the angles are reduced, the remaining angles may be partitioned in the original manner to obtain the angle partitions, or partitioned in a new manner to obtain different angle partitions.
After the angles are reduced, the number of prediction modes on which UMVE offsets are performed on motion information in subsequent motion compensation may also be correspondingly reduced; for example, the original UMVE offset over 42 prediction modes may be reduced to 35 or 28 prediction modes, improving the response speed of inter-frame prediction while ensuring the prediction accuracy.
Correspondingly, after the angles are reduced, the ordering of the angle modes can be adaptively modified according to actual needs. For example, the angle modes may be ordered in combination with the aspect ratio of the current coding block. Correspondingly, the inter-frame prediction method provided by the application comprises: ordering the angle modes of the current block in combination with the aspect ratio of the current block.
For example: for coding blocks with an aspect ratio of 8:1 or 4:1, the angle modes in and near the horizontal direction are ordered first, followed by the angle modes in and near the vertical direction; for coding blocks with an aspect ratio of 1:8 or 1:4, the angle modes in and near the vertical direction are ordered first, followed by the angle modes in and near the horizontal direction; for coding blocks with an aspect ratio of 1:2 or 2:1, the angle modes in and near the vertical direction are ordered first, followed by the angle modes in and near the horizontal direction. It is understood that in other embodiments, when the aspect ratios of the coding blocks differ, the ordering of the angle modes may also be adaptively adjusted to other orders. In the current embodiment, compared with the prior art, adopting different orderings for coding blocks with different aspect ratios can reduce the bit overhead caused by transmitting the prediction mode, further improving the response speed of inter-frame prediction. A code sketch of such an ordering follows.
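As an illustration only, the rule above could be sketched as follows in C; the bucketing of the angle modes into horizontal-like and vertical-like groups is an assumption, not a quotation from the source.

#include <string.h>

/* Hypothetical sketch of aspect-ratio-driven angle mode ordering.
 * horiz_modes / vert_modes are assumed bucketings of the AWP angle modes. */
void order_angle_modes(int width, int height,
                       const int *horiz_modes, int n_horiz,
                       const int *vert_modes, int n_vert,
                       int *ordered /* size n_horiz + n_vert */) {
    if (width >= 4 * height) {
        /* 8:1 and 4:1 blocks: horizontal-like modes first. */
        memcpy(ordered, horiz_modes, n_horiz * sizeof(int));
        memcpy(ordered + n_horiz, vert_modes, n_vert * sizeof(int));
    } else {
        /* 1:8, 1:4 and 2:1/1:2 blocks: vertical-like modes first. */
        memcpy(ordered, vert_modes, n_vert * sizeof(int));
        memcpy(ordered + n_vert, horiz_modes, n_horiz * sizeof(int));
    }
}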
The reference weight configurations supported by AWP may include the 7 configurations shown in fig. 4, where fig. 4 is a schematic view of the reference weight configuration in an inter-frame prediction method according to the present application. A reference weight configuration can be regarded as a distribution function mapping reference weight index values to reference weight values; as shown in fig. 4, it is a non-strictly monotonically increasing function anchored at reference points located at one-eighth multiples of the effective reference weight length (indicated by the black arrows in fig. 4), where the effective reference weight length is calculated from the prediction angle and the current block size.
Through the 8 angles and 7 reference weight configurations, the 56 original prediction modes of AWP can be configured.
Referring to fig. 5, fig. 5 is a flowchart illustrating an embodiment of an inter prediction method according to the present application.
S510: a weight array of the current block in each original prediction mode is determined.
Before step S510 is performed, all original prediction modes are first determined by traversing the currently supported angle and reference weight configurations, and then the weight array of the current block in each of these original prediction modes is determined. In the technical solution provided by the application, all prediction modes determined based on the angle and reference weight configurations are defined as original prediction modes; the prediction mode finally selected for inter-frame prediction is defined as the final prediction mode; and the prediction modes selected from all original prediction modes during the selection of the final prediction mode are defined as candidate prediction modes.
Further, the weights of the current block may be derived pixel by pixel based on each original prediction mode, so as to obtain the weight array of the current block in each original prediction mode. The flow of pixel-by-pixel derivation is as follows:
Taking the division of the angles into the 4 partitions illustrated above as an example, the formula for deriving the luma pixel weights differs slightly depending on the partition in which the angle lies. The weights may be derived pixel by pixel using the formula corresponding to the partition of the angle used by the prediction mode of the current luma block, to obtain the weight array of the current luma block. Let the block size of the current luma block be MxN, where M is the width and N is the height; let X be log2 of the absolute value of the slope of the weight prediction angle, and let Y be the weight prediction position.
i) The formulas for deriving the pixel-by-pixel weights of luma blocks for angle 0 and angle 1 in angle partition 0 are shown in [1] to [3] below:
[1] computing a reference weight effective length ValidLength
ValidLength=(N+(M>>X))<<1
[2] Setting a reference weight value ReferenceWeights [ x ], wherein x belongs to [0, ValidLength-1]
FirstPos=(ValidLength>>1)-6+Y*((ValidLength-1)>>3)
ReferenceWeights[x]=Clip3(0,8,x-FirstPos)
[3] Pixel-by-pixel derivation of weights SampleWeight [ x ] [ y ]
SampleWeight[x][y]=ReferenceWeights[(y<<1)+((x<<1)>>X)]
ii) The formulas for deriving the pixel-by-pixel weights of luma blocks for angle 2 and angle 3 in angle partition 1 are as follows:
[1] computing a reference weight effective length ValidLength
ValidLength=(N+(M>>X))<<1
[2] Setting a reference weight value ReferenceWeights [ x ], wherein x belongs to [0, ValidLength-1]
FirstPos=(ValidLength>>1)-4+Y*((ValidLength-1)>>3)-((M<<1)>>X)
ReferenceWeights[x]=Clip3(0,8,x-FirstPos)
[3] Pixel-by-pixel derivation of weights SampleWeight [ x ] [ y ]
SampleWeight[x][y]=ReferenceWeights[(y<<1)-((x<<1)>>X)]
iii) The formulas for deriving the pixel-by-pixel weights of luma blocks for angle 4 and angle 5 in angle partition 2 are as follows:
[1] computing a reference weight effective length ValidLength
ValidLength=(M+(N>>X))<<1
[2] Setting a reference weight value ReferenceWeights [ x ], wherein x belongs to [0, ValidLength-1]
FirstPos=(ValidLength>>1)-4+Y*((ValidLength-1)>>3)-((N<<1)>>X)
ReferenceWeights[x]=Clip3(0,8,x-FirstPos)
[3] Pixel-by-pixel derivation of weights SampleWeight [ x ] [ y ]
SampleWeight[x][y]=ReferenceWeights[(x<<1)-((y<<1)>>X)]
iv) The formulas for deriving the pixel-by-pixel weights of luma blocks for angle 6 and angle 7 in angle partition 3 are as follows:
[1] computing a reference weight effective length ValidLength
ValidLength=(M+(N>>X))<<1
[2] Setting a reference weight value ReferenceWeights [ x ], wherein x belongs to [0, ValidLength-1]
FirstPos=(ValidLength>>1)-6+Y*((ValidLength-1)>>3)
ReferenceWeights[x]=Clip3(0,8,x-FirstPos)
[3] Pixel-by-pixel derivation of weights SampleWeight [ x ] [ y ]
SampleWeight[x][y]=ReferenceWeights[(x<<1)+((y<<1)>>X)]
The flow of pixel-by-pixel weight derivation for chroma blocks is as follows: for the current chroma block, the weight at the top-left position of each 2x2 unit of the weight array of the corresponding luma block may be used directly. Let the block size of the current block be MxN, where M is the width and N is the height; then x of the current chroma block ranges over 0 to (M/2-1), and y of the current chroma block ranges over 0 to (N/2-1).
The formula for deriving the pixel-by-pixel weight of the chroma block is: SampleWeightChroma[x][y] = SampleWeight[x>>1][y>>1].
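The formulas above map directly to code. The following C sketch (an illustration, not reference software) implements the derivation for angle partition 0 together with the chroma subsampling; the other partitions differ only in the ValidLength/FirstPos constants and the reference index expression. The buffer layout and the MAX_BLK bound are assumptions.

#include <stdint.h>

#define MAX_BLK 64  /* AWP blocks range from 8x8 to 64x64 */

static int clip3(int lo, int hi, int v) { return v < lo ? lo : (v > hi ? hi : v); }

/* Pixel-by-pixel weights for angle partition 0 (angles 0 and 1).
 * M, N: block width/height; X: log2 of the absolute slope of the
 * prediction angle; Y: weight prediction position. */
void derive_weights_partition0(int M, int N, int X, int Y,
                               uint8_t weight[MAX_BLK][MAX_BLK]) {
    int valid_length = (N + (M >> X)) << 1;                        /* [1] */
    int first_pos = (valid_length >> 1) - 6
                  + Y * ((valid_length - 1) >> 3);                 /* [2] */
    for (int x = 0; x < M; x++)
        for (int y = 0; y < N; y++) {
            /* [3], with ReferenceWeights[i] = Clip3(0, 8, i - FirstPos)
             * substituted directly instead of precomputing the table. */
            int i = (y << 1) + ((x << 1) >> X);
            weight[x][y] = (uint8_t)clip3(0, 8, i - first_pos);
        }
}

/* Chroma weights are subsampled from the luma weight array:
 * SampleWeightChroma[x][y] = SampleWeight[x>>1][y>>1]. */
static uint8_t chroma_weight(uint8_t weight[MAX_BLK][MAX_BLK], int x, int y) {
    return weight[x >> 1][y >> 1];
}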
S520: the current block is divided into a plurality of first sub-blocks, and a unidirectional motion information candidate list of the current block is constructed based on temporal motion information of at least two first sub-blocks.
The current block is divided into a plurality of first sub-blocks, and then a unidirectional motion information candidate list of the current block is constructed based on the time domain motion information of at least two divided first sub-blocks.
Further, in an embodiment, the current block is partitioned evenly in a cross shape to obtain four first sub-blocks, and a unidirectional motion information candidate list of the current block is then constructed based on the temporal motion information of at least two first sub-blocks. The number of motion information entries that the unidirectional motion information candidate list can hold is preset; that is, the length of the unidirectional motion information candidate list is preset. For example, the length may be set to 5, or to 4 or 8, which is not limited herein.
After the current block is divided into a plurality of first sub-blocks, time domain motion information corresponding to each first sub-block is further acquired, and then motion compensation and subsequent cost calculation are performed according to the acquired first sub-blocks and the time domain motion information corresponding to the first sub-blocks.
Correspondingly, in an embodiment, when the current block is divided evenly in a cross shape into 4 first sub-blocks, the temporal motion information of each first sub-block is obtained, and prediction is performed with the first sub-block as the unit. The temporal motion information of a first sub-block is acquired as follows:
First, the spatial position of the temporal co-located block corresponding to the current first sub-block is determined.
Let (bx, by) be the position coordinates, in SCU (smallest coding unit) units, of the current first sub-block within the whole frame, and determine the spatial position of the co-located block using a mask, where mask = (-1) ^ 3. Here '^' denotes bitwise exclusive OR; after -1 is XOR-ed with 3 bit by bit, the last two bits of mask are 0. An SCU is a CU (coding unit) of size 4x4.
Then, the coordinates (xpos, ypos) of the spatial position of the co-located block (in SCU units) are set as:
xpos=(bx&mask)+2;
ypos=(by&mask)+2;
where xpos may take values in the range bx-1, bx, bx+1, bx+2, and ypos may take values in the range by-1, by, by+1, by+2. A first sub-block at a given coordinate position corresponds to exactly one temporal co-located block, and the range of the temporal co-located blocks corresponding to current blocks at different coordinates is shown in fig. 6.
Referring to fig. 6, fig. 6 is a schematic diagram of co-located blocks in an embodiment of an inter prediction method according to the present application. The positions of the small dots in fig. 6 are the coordinates of all possible co-located blocks of the first sub-block cur, and each small square represents an SCU. The 4 first sub-blocks are traversed in the motion information acquisition manner above: taking the SCUs at the four corners of the current block as references, the temporal block position corresponding to each corner is obtained using the mask and the sub-block coordinates, yielding 4 corresponding temporal MVs. Then, for each first sub-block, check which corner SCU the current first sub-block contains, and take the temporal MV corresponding to that corner SCU as the temporal MV of the first sub-block.
Referring to FIG. 7, FIG. 7 is a block diagram illustrating a partition of a current block according to another embodiment of an inter prediction method of the present application.
As illustrated in fig. 7, the temporal MV corresponding to scu1 is the temporal MV of first sub-block 1, the temporal MV corresponding to scu2 is the temporal MV of first sub-block 2, the temporal MV corresponding to scu3 is the temporal MV of first sub-block 3, and the temporal MV corresponding to scu4 is the temporal MV of first sub-block 4. If a corner fails to yield a valid temporal MV, the TMVP (temporal motion vector prediction) of the current block is taken directly as the temporal MV of the first sub-block corresponding to that corner. Finally, the MVPs in the candidate list are traversed, and when the original TMVP is traversed, motion compensation is performed on the 4 first sub-blocks respectively. A code sketch of this derivation follows.
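A minimal C sketch of the derivation above, in SCU (4x4) units; get_colocated_mv and get_block_tmvp are illustrative helpers, not names from the source.

typedef struct { int x, y, valid; } Mv;

extern Mv get_colocated_mv(int xpos, int ypos);  /* MV of the temporal co-located block */
extern Mv get_block_tmvp(void);                  /* TMVP of the whole current block */

Mv temporal_mv_for_corner(int bx, int by) {
    int mask = (-1) ^ 3;            /* bitwise XOR: ...11111100, last two bits 0 */
    int xpos = (bx & mask) + 2;     /* lands in bx-1 .. bx+2 */
    int ypos = (by & mask) + 2;     /* lands in by-1 .. by+2 */
    Mv mv = get_colocated_mv(xpos, ypos);
    if (!mv.valid)
        mv = get_block_tmvp();      /* fallback when the corner yields no valid MV */
    return mv;
}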
It should be noted that the execution order of step S510 and step S520 is not limited herein: step S510 and step S520 may be executed simultaneously, step S510 may be executed before step S520, or step S520 may be executed before step S510.
S530: calculating the coding cost based on the weight array, and selecting multiple groups of motion information with the minimum coding cost from the unidirectional motion information candidate list as multiple groups of first candidate motion information.
After the weight array of the current block in each original prediction mode is determined and the unidirectional motion information candidate list of the current block is constructed, the coding cost of each piece of motion information in the unidirectional motion information candidate list under each original prediction mode is further calculated based on the weight arrays. The coding costs obtained under each original prediction mode are then sorted, and the multiple groups of motion information with the minimum coding cost are selected from the unidirectional motion information candidate list as the multiple groups of first candidate motion information of each original prediction mode. The first candidate motion information is motion information selected from the unidirectional motion information candidate list for use in selecting the final prediction mode.
Further, in an embodiment, step S530 selects, based on the coding costs, the two groups of motion information with the minimum coding cost from the unidirectional motion information candidate list as the first candidate motion information of each original prediction mode.
Further, when there are multiple original prediction modes and the unidirectional motion information candidate list includes multiple pieces of motion information, step S530 calculates the coding cost of each piece of motion information under each original prediction mode based on the weight array corresponding to that mode, sorts the coding costs corresponding to the motion information under each original prediction mode, and selects the multiple groups of motion information with the minimum coding cost under each prediction mode as the multiple groups of first candidate information. In an embodiment, the unidirectional motion information candidate list includes 5 groups of motion information V, W, X, Y and Z, and there are 56 original prediction modes in total, numbered 1 to 56. The coding cost of each of the 5 groups of motion information under each original prediction mode is obtained, and the costs are sorted to select the groups of motion information with the minimum cost as the groups of first candidate motion information; for example, the coding costs of V, W, X, Y and Z under an original prediction mode are sorted, and the two groups with the smallest coding cost are selected as the first candidate motion information.
Further, in an embodiment, when there are 56 original prediction modes and the unidirectional motion information candidate list includes 5 groups of motion information, the 2 groups of motion information with the smallest coding cost under each original prediction mode may be taken as the first candidate motion information in step S530, as sketched below.
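A C sketch of this selection, assuming coding_cost is some cost measure of a candidate under a mode's weight array (e.g. a weighted distortion plus rate term); the function and the constants are illustrative.

#include <limits.h>

#define NUM_MODES 56   /* 8 angles x 7 reference weight configurations */
#define LIST_LEN   5   /* preset length of the unidirectional list */

extern long coding_cost(int mode, int cand);  /* cost of candidate under this mode */

/* Keep the two lowest-cost candidates per mode as the first candidate
 * motion information, as described for step S530. */
void select_first_candidates(int best[NUM_MODES][2]) {
    for (int m = 0; m < NUM_MODES; m++) {
        long c0 = LONG_MAX, c1 = LONG_MAX;
        best[m][0] = best[m][1] = -1;
        for (int cand = 0; cand < LIST_LEN; cand++) {
            long c = coding_cost(m, cand);
            if (c < c0) {
                c1 = c0; best[m][1] = best[m][0];
                c0 = c;  best[m][0] = cand;
            } else if (c < c1) {
                c1 = c;  best[m][1] = cand;
            }
        }
    }
}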
S540: based on the plurality of sets of first candidate motion information, a final prediction mode is selected from the original prediction modes.
After the multiple groups of first candidate motion information are determined, motion compensation is performed on each group to obtain the first prediction value of each group of first candidate motion information under each prediction mode. Then, based on the obtained first prediction values, the coding cost of the first candidate motion information under each original prediction mode is obtained, and the original prediction mode with the minimum coding cost is selected as the final prediction mode. After the final prediction mode is determined, the final prediction value of the current block may further be determined based on the weight array and the final prediction mode, for encoding and subsequent decoding.
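A sketch of this last step in C. The per-pixel blend (p0*w + p1*(8-w) + 4) >> 3 is an assumption consistent with the [0, 8] weight range derived above, not a formula quoted from the source; mode_cost is likewise an illustrative helper.

#include <stdint.h>
#include <limits.h>

/* Blend the two motion-compensated predictions of one AWP mode using
 * the per-pixel weight w in [0, 8] (assumed rounding). */
static uint8_t blend_pixel(uint8_t p0, uint8_t p1, uint8_t w) {
    return (uint8_t)((p0 * w + p1 * (8 - w) + 4) >> 3);
}

extern long mode_cost(int mode);  /* cost of a mode with its first candidate motion information */

/* Pick the original prediction mode with the minimum coding cost (S540). */
int select_final_mode(int num_modes) {
    int best_mode = 0;
    long best_cost = LONG_MAX;
    for (int m = 0; m < num_modes; m++) {
        long c = mode_cost(m);
        if (c < best_cost) { best_cost = c; best_mode = m; }
    }
    return best_mode;
}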
Compared with the prior art, the technical scheme provided by the application determines the weight array of the current block in each original prediction mode; divides the current block into a plurality of first sub-blocks and constructs the unidirectional motion information candidate list of the current block based on the temporal motion information of at least two first sub-blocks; calculates coding costs based on the weight array and selects multiple groups of motion information with the minimum coding cost from the unidirectional motion information candidate list as multiple groups of first candidate motion information; and selects the final prediction mode from the original prediction modes based on the multiple groups of first candidate motion information. Because the list is built from the temporal motion information of at least two first sub-blocks obtained by dividing the current block, the resulting unidirectional motion information candidate list reflects the motion state of the current block more accurately; a prediction mode with better accuracy can then be selected on the basis of this more accurate list, further improving the accuracy of inter-frame prediction.
Referring to fig. 8, fig. 8 is a flowchart illustrating another embodiment of an inter prediction method according to the present application. In the current embodiment, the inter prediction method provided by the present application includes:
S81: A weight array of the current block in each original prediction mode is determined.
Step S81 is the same as step S510, and please refer to the above description of the corresponding parts, which is not repeated herein.
In the current embodiment, before the step S520 divides the current block into a plurality of first sub-blocks and constructs the uni-directional motion information candidate list of the current block based on temporal motion information of at least two first sub-blocks, the method provided in the present application further includes steps S82 to S84.
S82: it is determined whether there is a target neighboring prediction block corresponding to the current block.
When the unidirectional motion information candidate list is constructed, a candidate motion information list is first constructed, where the candidate motion information list and the unidirectional motion information candidate list hold the same number of motion information entries. The candidate motion information list is constructed as follows: first, the available neighboring prediction blocks of the current block are taken out; then unidirectional motion information is derived from the motion information of these spatial neighboring blocks and filled into the candidate motion information list. The length of the candidate motion information list is preset to equal that of the unidirectional motion information candidate list, so when the candidate motion information list holds fewer entries than the set number, the temporal motion information of the sub-blocks obtained by division is further added to it.
Here, a target neighboring prediction block is a neighboring prediction block coded in an inter prediction mode. Step S82 judges whether the current block has a neighboring prediction block that adopts an inter prediction mode, and performs the following step S83 when the judgment is yes. Otherwise, if it is judged that the current block has no neighboring prediction block, and/or that none of its neighboring prediction blocks adopts an inter prediction mode, it is determined that no target neighboring prediction block corresponding to the current block exists, and the following step S85 and subsequent steps are performed directly: the temporal motion information of at least one first sub-block is filled into the candidate motion information list until the number of motion information entries in the list reaches the preset number, completing the construction of the candidate motion information list of the current block.
S83: if it is judged that target neighboring prediction blocks corresponding to the current block exist, obtaining all target neighboring prediction blocks of the current block, and checking the motion information of the target neighboring prediction blocks for duplicates to determine the available neighboring prediction blocks.
If it is judged that target neighboring prediction blocks corresponding to the current block exist, all target neighboring prediction blocks of the current block are taken out. The motion information of all target neighboring prediction blocks is then checked for duplicates to determine whether each target neighboring prediction block is an available neighboring prediction block. An available neighboring prediction block is a target neighboring prediction block whose motion information differs from the motion information already in the candidate motion information list.
Further, in an embodiment, the step of checking the motion information of the target neighboring prediction blocks for duplicates to determine the available neighboring prediction blocks further comprises: checking the motion information of each target neighboring prediction block against all existing entries to determine the available neighboring prediction blocks. When each target neighboring prediction block is checked for duplicates, its motion information is compared with every piece of motion information previously filled into the candidate motion information list, so as to avoid filling in motion information identical to motion information already in the candidate motion information list of the current block.
S84: the motion information of the available neighboring prediction blocks is added to the candidate motion information list.
When an available neighboring prediction block has been determined through the above judgment and duplicate checking, its motion information is added to the candidate motion information list. Specifically, the motion information of the available neighboring prediction blocks is filled into the candidate motion information list in the order of judgment, until the candidate motion information list reaches the set length or the motion information of all available neighboring prediction blocks has been added.
Further, please refer to fig. 9, fig. 9 is a schematic diagram of an adjacent prediction block in an embodiment of an inter prediction method according to the present application. As illustrated in fig. 9, each current block may correspond to a plurality of neighboring prediction blocks.
In an embodiment, when the current block corresponds to a plurality of neighboring prediction blocks, the neighboring prediction blocks of the current block are taken out in a set order, and each is judged as to whether it is a neighboring prediction block that adopts an inter prediction mode; if so, it is judged to be a target neighboring prediction block. It is then judged whether the motion information of the current target neighboring prediction block is the same as the motion information previously filled into the candidate motion information list; if not, the current target neighboring prediction block is judged to be an available neighboring prediction block; otherwise the next neighboring prediction block is judged, looping in turn until all neighboring prediction blocks corresponding to the current block have been judged.
In another embodiment, when the current block corresponds to a plurality of neighboring prediction blocks, all neighboring prediction blocks of the current block are taken out in a set order, each is judged as to whether it adopts an inter prediction mode, and those that do are taken as target neighboring prediction blocks. Whether each target neighboring prediction block is an available neighboring prediction block is then judged in the set order, and when a target neighboring prediction block is judged available, it is filled into the candidate motion information list. It should be noted that a target neighboring prediction block judged later needs to be compared with all motion information previously filled into the candidate motion information list, to determine whether its motion information is the same as motion information already written to the list, so as to avoid filling duplicate motion information into the candidate motion information list.
As illustrated in fig. 9, the neighboring prediction blocks of the current block include F, G, C, A, and D (B is skipped). In one embodiment, the flow for determining whether F, G, C, A and D are available neighboring prediction blocks is as follows:
First, determine whether F, G, C, A and D are available target neighboring prediction blocks, as follows:
i) If F exists and adopts an inter prediction mode, F is "available"; otherwise, F is "unavailable".
j) If G exists and adopts an inter prediction mode, G is "available"; otherwise, G is "unavailable".
k) If C exists and adopts an inter prediction mode, C is "available"; otherwise, C is "unavailable".
l) If A exists and adopts an inter prediction mode, A is "available"; otherwise, A is "unavailable".
m) If D exists and adopts an inter prediction mode, D is "available"; otherwise, D is "unavailable".
Secondly, the motion information of each available target neighboring prediction block is checked for duplicates, and the motion information of the available neighboring prediction blocks that pass the duplicate check is filled into the candidate motion information list. The duplicate checking process is as follows:
(1) First judge the availability of F: if F is available, fill F into the candidate motion information list; otherwise, if F is unavailable, go to the next step (2);
(2) Judge the availability of G: if G is unavailable, set G as unavailable and go to the next step (3); otherwise, if G is available, further judge whether F is available; if F is unavailable, set G as available and add it to the candidate motion information list;
otherwise, when F is available, compare whether the MVs of F and G repeat; if they do not repeat, set G as available, otherwise G is unavailable;
(3) Judge the availability of C: if C is unavailable, set C as unavailable and go to the next step (4); otherwise, if C is available, further judge whether G is available; if G is unavailable, set C as available and add it to the candidate motion information list;
if G is available, compare whether the MVs of C and G repeat; if they do not repeat, set C as available, otherwise C is unavailable;
(4) Judge the availability of A: if A is unavailable, set A as unavailable and go to the next step (5); otherwise, if A is available, further judge whether F is available; if F is unavailable, set A as available and add it to the candidate motion information list;
if F is available, compare whether the MVs of A and F repeat; if they do not repeat, set A as available, otherwise A is unavailable;
(5) Judge the availability of D: if D is unavailable, set D as unavailable and end the availability judgment; otherwise, if D is available, further judge whether A is available; if A is unavailable, initialize the mv of A as unavailable; otherwise, obtain the mvs of D and A and judge whether they repeat;
then judge whether G is available; if G is unavailable, initialize the mv of G as unavailable; otherwise, obtain the mvs of D and G and judge whether they repeat. Specifically, whether the mv of D repeats is judged according to the following condition one and condition two:
Condition one: A is unavailable, or A is available and the mv of D does not repeat the mv of A;
Condition two: G is unavailable, or G is available and the mv of D does not repeat the mv of G;
If both conditions are satisfied simultaneously, D is finally available; otherwise D is unavailable.
In another embodiment, the flow for determining whether F, G, C, A and D are available neighboring prediction blocks is as follows (a code sketch follows the list):
(1) Judge the availability of F: if F is available, fill F into the candidate motion information list; if F is unavailable, go to the next step (2).
(2) Judge the availability of G: if G is unavailable, set G as unavailable and go to the next step (3); otherwise, if G is available, further judge whether F is available; if F is unavailable, set G as available and add it to the candidate motion information list;
otherwise, when F is available, compare whether the MVs of F and G repeat; if they do not repeat, set G as available, otherwise G is unavailable.
(3) Judge the availability of C: if C is unavailable, set C as unavailable and go to the next step (4); otherwise, if C is available, further judge whether G is available, and if G is available, judge whether the MVs of C and G repeat;
then judge whether F is available, and if F is available, judge whether the MVs of C and F repeat;
Condition one: G is unavailable, or G is available and the mvs of C and G do not repeat;
Condition two: F is unavailable, or F is available and the mvs of C and F do not repeat;
If both conditions are satisfied simultaneously, C is finally available; otherwise C is unavailable.
(4) Judge the availability of A: if A is unavailable, go to the next step (5); otherwise, if A is available, check whether the MV of A repeats the MVs of F, G and C respectively; A is available only if its MV differs from all of them.
(5) Judge the availability of D: if D is unavailable, end the judgment; otherwise, further check whether the MV of D repeats the MVs of F, G, C and A respectively; D is available only if its MV differs from all of them.
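A condensed C sketch of the second flow above: each of F, G, C, A, D is kept only if it exists, is inter-coded, and its motion information differs from every previously accepted neighbor. The structure and helper names are illustrative.

#include <stdbool.h>

typedef struct { int mv_x, mv_y, ref_idx; } MotionInfo;

typedef struct {
    bool exists;    /* the neighbor block exists */
    bool is_inter;  /* the neighbor is coded in an inter prediction mode */
    MotionInfo mi;
} Neighbor;

extern bool same_motion(const MotionInfo *a, const MotionInfo *b);  /* e.g. POC-based, see below */

/* Collect available neighbors in the order F, G, C, A, D and return
 * how many were filled into the candidate motion information list. */
int collect_available(const Neighbor nb[5], MotionInfo out[5]) {
    int n = 0;
    for (int i = 0; i < 5; i++) {
        if (!nb[i].exists || !nb[i].is_inter)
            continue;
        bool dup = false;
        for (int j = 0; j < n && !dup; j++)
            dup = same_motion(&nb[i].mi, &out[j]);
        if (!dup)
            out[n++] = nb[i].mi;
    }
    return n;
}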
Further, in an embodiment, because the conventional method compares the reference frame index and the motion vector when checking for duplicates, motion information may still be duplicated when the reference frame indices differ but the POCs are the same. To avoid this problem, the technical solution provided by the application further checks the motion information of the target neighboring prediction blocks for duplicates using a picture order count duplicate-checking method (POC duplicate-checking method) in place of the original reference-frame-index method. POC-based checking improves the accuracy of duplicate detection and thus better avoids filling identical motion information into the candidate motion information list, yielding a more accurate candidate motion information list; a more accurate unidirectional motion information candidate list can then be determined from it, improving the accuracy of inter-frame prediction. The detailed flow of the POC duplicate-checking method is shown in the embodiment corresponding to fig. 10 below.
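A minimal sketch of the POC-based comparison: two pieces of motion information count as duplicates when their MVs match and their reference frames have the same picture order count, even if the reference frame indices differ. ref_idx_to_poc is an illustrative lookup, not a name from the source.

typedef struct { int mv_x, mv_y, ref_idx; } MotionInfo;

extern int ref_idx_to_poc(int ref_idx);  /* POC of the frame a reference index points to */

int same_motion_poc(const MotionInfo *a, const MotionInfo *b) {
    return a->mv_x == b->mv_x && a->mv_y == b->mv_y &&
           ref_idx_to_poc(a->ref_idx) == ref_idx_to_poc(b->ref_idx);
}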
After the motion information of the available neighboring prediction blocks is added to the candidate motion information list, if the number of motion information entries in the candidate motion information list has reached the preset number, the following step S85 is skipped and step S86 is performed directly. If the number of motion information entries in the candidate motion information list has not reached the preset number, the following step S85 is further performed so that the candidate motion information list reaches the preset length.
The above-mentioned constructing the uni-directional motion information candidate list of the current block based on the temporal motion information of the at least two first sub-blocks in step S520 further includes step S85 and step S86.
S85: sequentially adding the temporal motion information of different first sub-blocks to the candidate motion information list in the preset position order of the first sub-blocks, until the number of motion information entries in the candidate motion information list reaches the preset number.
If the number of motion information entries in the candidate motion information list has not reached the preset number, the temporal motion information of different first sub-blocks is added to the candidate motion information list in the preset position order of the first sub-blocks.
Before the temporal motion information of a first sub-block is added to the candidate motion information list, it is further checked for duplicates against the list, so as to avoid filling identical motion information into the candidate motion information list. The duplicate check comprises: judging whether the temporal motion information of the first sub-block is the same as motion information already in the candidate motion information list; if so, the motion information of the current first sub-block is judged to be a duplicate, is not filled into the candidate motion information list, and the temporal motion information of the next first sub-block is checked for duplicates; otherwise, if the temporal motion information of the first sub-block differs from the motion information in the candidate motion information list, it is judged not to be a duplicate and is written into the candidate motion information list.
In an embodiment, before adding the temporal motion information of the first sub-block to the candidate motion information list, the method provided by the present application further includes:
judging whether motion information identical to the temporal motion information of the current first sub-block exists in the candidate motion information list; if no such motion information exists, adding the temporal motion information of the first sub-block to the candidate motion information list; if such motion information exists, not adding the temporal motion information of the first sub-block, and continuing to judge the next first sub-block, until the number of motion information entries in the candidate motion information list reaches the preset number and/or all first sub-blocks have been traversed. Whether two pieces of motion information are the same can be determined by comparing their corresponding POCs.
Before step S85 is executed, the method includes: dividing the current block into a plurality of first sub-blocks.
Further, after step S85 of sequentially adding the temporal motion information of the different first sub-blocks to the candidate motion information list according to the preset position order of the first sub-blocks, the method includes: after the temporal motion information of all the first sub-blocks has been added to the candidate motion information list, if the number of motion information entries in the list is still smaller than the preset number, generating at least one piece of new motion information based on the motion information already in the list, so that the number of motion information entries reaches the preset number.
In one embodiment, generating at least one piece of new motion information based on the motion information in the candidate motion information list includes: selecting the first motion information in the candidate motion information list, scaling the selected motion information by different factors, and adding the scaled motion information to the candidate motion information list until the number of motion information entries in the list reaches the preset number.
In another embodiment, generating at least one piece of new motion information based on the motion information in the candidate motion information list includes: scaling the motion information entry by entry, starting from the first motion information in the candidate motion information list, and adding the scaled motion information to the list until the number of motion information entries reaches the preset number.
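A minimal sketch of this padding-by-scaling step follows. Each entry is assumed to be a (ref_poc, (mv_x, mv_y)) tuple, and the scale factor of 2 is an assumed example, since the source does not specify the scaling values:

```python
def fill_by_scaling(candidate_list, preset_number):
    """Pad the candidate motion information list by scaling existing motion
    vectors, starting from the first entry, until it holds preset_number
    entries (or every entry has been tried once)."""
    src = 0
    while len(candidate_list) < preset_number and src < len(candidate_list):
        ref_poc, (mx, my) = candidate_list[src]
        scaled = (ref_poc, (mx * 2, my * 2))      # assumed example factor
        if scaled not in candidate_list:          # duplicate check, as above
            candidate_list.append(scaled)
        src += 1
```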
Furthermore, the current block includes four first sub-blocks arranged in a 2×2 grid, like the Chinese character '田', and the preset position order of the first sub-blocks is: upper left corner, upper right corner, lower left corner, lower right corner. It can be understood that in other embodiments the preset position order of the first sub-blocks may follow other orders, which are not listed here one by one.
Further, in an embodiment, after sequentially adding the temporal motion information of different first sub-blocks to the candidate motion information list according to a preset position order of the first sub-blocks, the method provided in the present application further includes: and constructing a unidirectional motion information candidate list based on the motion information in the candidate motion information list.
Further, the step of constructing a uni-directional motion information candidate list based on the motion information in the candidate motion information list includes the contents described in step S86.
S86: select the forward motion information or the backward motion information of each entry in the candidate motion information list, and fill it into the corresponding position of the unidirectional motion information candidate list.
Wherein the motion information comprises forward motion information and/or backward motion information.
Specifically, the forward or backward motion information of each candidate in the candidate motion information list is selected according to the parity of its list position and put into the unidirectional motion information candidate list, which is thereby constructed. That is, at positions 1, 3 and 5 of the candidate list only the forward motion information is taken and filled into the list, and if the forward motion information does not exist, the backward motion information is filled in instead; at the remaining positions only the backward motion information is taken, and similarly, if the backward motion information does not exist, the forward motion information is filled in instead.
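As an illustration, this parity rule can be sketched as follows, assuming each candidate entry is a (forward, backward) pair in which either element may be None:

```python
def build_unidirectional_list(candidate_list):
    """Odd 1-based positions (1, 3, 5, ...) prefer the forward motion
    information, the remaining positions prefer the backward, and each
    position falls back to the opposite direction when its preferred
    motion information is absent."""
    uni_list = []
    for pos, (fwd, bwd) in enumerate(candidate_list, start=1):
        if pos % 2 == 1:
            uni_list.append(fwd if fwd is not None else bwd)
        else:
            uni_list.append(bwd if bwd is not None else fwd)
    return uni_list
```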
S87: calculate the coding cost based on the weight array, and select multiple groups of motion information with the minimum coding cost from the unidirectional motion information candidate list as multiple groups of first candidate motion information;
S88: based on the multiple groups of first candidate motion information, select a final prediction mode from the original prediction modes.
Step S87 and step S88 in the current embodiment are the same as step S530 and step S540 described above, and may specifically refer to the description of the corresponding parts above, which is not described herein again.
Referring to fig. 10, fig. 10 is a flowchart illustrating an embodiment of an inter prediction method according to the present application.
In the present embodiment, performing the duplicate check on the motion information of the target neighboring prediction block by using the full duplicate checking method further includes:
S101: acquire the reference frame image sequence number corresponding to the motion information in the target neighboring prediction block.
Each piece of motion information corresponds to a reference frame image sequence number (POC). The image sequence number identifies the position of an image within the image sequence; in the technical solution provided by the present application, image sequence numbers are unique, and one image sequence number corresponds to exactly one frame of image.
Similarly, before step S102, the image sequence number corresponding to each piece of motion information in the candidate motion information list is also acquired.
S102: judge whether the reference frame image sequence number corresponding to the current target neighboring prediction block is the same as the reference frame image sequence number corresponding to any motion information in the candidate motion information list, and whether the motion vector corresponding to the current target neighboring prediction block is the same as the motion vector corresponding to that motion information.
That is, after the reference frame image sequence number corresponding to the motion information of the target neighboring prediction block and the image sequence number corresponding to each motion information in the candidate motion information list have been acquired, it is further judged whether the reference frame image sequence number corresponding to the current target neighboring prediction block is the same as the reference frame image sequence number corresponding to any motion information in the candidate motion information list.
S103: if the reference frame image sequence number corresponding to the current target neighboring prediction block is judged to be the same as the reference frame image sequence number corresponding to some motion information in the candidate motion information list, and the motion vector corresponding to the current target neighboring prediction block is judged to be the same as the motion vector corresponding to that motion information, the target neighboring prediction block is judged to be an unavailable neighboring prediction block.
In other words, if step S102 determines that both the reference frame image sequence number and the motion vector of the current target neighboring prediction block match those of some motion information in the candidate motion information list, the current target neighboring prediction block is judged to be an unavailable neighboring prediction block. If the reference frame image sequence number of the current target neighboring prediction block differs from the reference frame image sequence numbers of all motion information in the candidate motion information list, and/or its motion vector differs from the motion vectors of all motion information in the list, the current target neighboring prediction block is judged to be an available neighboring prediction block. In the current embodiment, by incorporating the image-sequence-number duplicate check, it can be judged more accurately whether the motion information of the current target neighboring prediction block duplicates motion information already in the candidate motion information list, so that a more accurate candidate motion information list is obtained and a unidirectional motion information candidate list that reflects the motion state of the current block more faithfully is constructed.
Further, the reference frame list includes a first-direction list and a second-direction list. In some embodiments, the first direction is forward and the second direction is backward.
In an embodiment, performing the duplicate check on the motion information of the target neighboring prediction block by using the full duplicate checking method further includes: if it is judged that the reference frame of the current target neighboring prediction block in the first direction is unavailable and the reference frame of a candidate motion information in the candidate motion information list in the second direction is unavailable, further judging whether the motion information of the current target neighboring prediction block in the second direction is the same as the motion information of that candidate motion information in the first direction.
If the reference frame image sequence number corresponding to the motion information of the current target neighboring prediction block is the same as the reference frame image sequence number corresponding to the current candidate motion information, and the corresponding motion vectors are also the same, the motion information of the target neighboring prediction block duplicates the current candidate motion information; otherwise it is judged not to duplicate the current candidate motion information, and it is then judged whether it duplicates the next candidate motion information. If the motion information of the target neighboring prediction block is judged not to duplicate any candidate motion information in the candidate motion information list, the current target neighboring prediction block is judged to be an available neighboring prediction block; conversely, if it duplicates any one candidate motion information in the list, the current target neighboring prediction block is an unavailable neighboring prediction block. In the present embodiment, the motion information contained in the candidate motion information list is referred to as candidate motion information.
For example, if the POC duplicate checking method is adopted and the reference frame list includes a first-direction list and a second-direction list (denoted L0 and L1 respectively), then when it is determined that the reference frame of neighboring prediction block A in the L0 direction is unavailable and the reference frame of motion information B (or neighboring prediction block B) in the candidate motion information list in the L1 direction is unavailable, it is necessary to judge whether the motion information of A in the L1 direction is the same as that of B in the L0 direction. If the POCs are the same and the MVs are the same in both the x and y directions, the motion information of A and B is duplicated, and only one of A and B is retained.
In another embodiment, performing the duplicate check on the motion information of the target neighboring prediction block by using the full duplicate checking method further includes: if the reference frames of the current target neighboring prediction block in both the first direction and the second direction are available, and the reference frames of a candidate motion information in the candidate motion information list in both the first direction and the second direction are available, further judging whether the motion information of the target neighboring prediction block in the first direction is the same as the motion information of the current candidate motion information in the second direction, and whether the motion information of the target neighboring prediction block in the second direction is the same as the motion information of the current candidate motion information in the first direction.
If the motion information of the target neighboring prediction block in the first direction is the same as the motion information of the current candidate motion information in the second direction, and the motion information of the target neighboring prediction block in the second direction is the same as the motion information of the current candidate motion information in the first direction, the motion information of the target neighboring prediction block is judged to duplicate the current candidate motion information; otherwise it is judged not to duplicate the current candidate motion information, and it is then judged whether it duplicates the next candidate motion information. If the motion information of the target neighboring prediction block does not duplicate any candidate motion information in the candidate motion information list, the current target neighboring prediction block is judged to be an available neighboring prediction block; conversely, if it duplicates any one candidate motion information, the current target neighboring prediction block is an unavailable neighboring prediction block.
For example, if the POC duplicate checking method is adopted, then when it is determined that A and B are both available in the L0 and L1 directions, it is judged whether the motion information of A in L0 is the same as that of B in L1 and whether the motion information of A in L1 is the same as that of B in L0; if both pairs are the same, the motion information of A and B is duplicated.
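The two cross-direction cases above can be sketched as follows; the BiMotion type and the tuple layout of each direction's motion information are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Motion = Tuple[int, Tuple[int, int]]      # (reference POC, (mv_x, mv_y))

@dataclass
class BiMotion:
    l0: Optional[Motion]                  # None when the L0 reference is unavailable
    l1: Optional[Motion]                  # None when the L1 reference is unavailable

def cross_direction_duplicate(a: BiMotion, b: BiMotion) -> bool:
    def same(x: Optional[Motion], y: Optional[Motion]) -> bool:
        return x is not None and y is not None and x == y
    # First case: A has no L0 reference and B has no L1 reference,
    # so A's L1 motion information is compared with B's L0.
    if a.l0 is None and b.l1 is None:
        return same(a.l1, b.l0)
    # Second case: both directions are available on both sides,
    # and both cross pairs must match for A and B to be duplicates.
    if None not in (a.l0, a.l1, b.l0, b.l1):
        return same(a.l0, b.l1) and same(a.l1, b.l0)
    return False
```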
Referring to fig. 11, fig. 11 is a flowchart illustrating an inter-frame prediction method according to another embodiment of the present application. In the present embodiment, calculating the coding cost based on the weight array in step S530 further includes:
S1101: perform motion compensation with each piece of motion information in the unidirectional motion information candidate list, respectively, to obtain the corresponding first predicted values.
After the unidirectional motion information candidate list is constructed, motion compensation is performed on the current block with each motion information in the unidirectional motion information candidate list, thereby obtaining the first predicted value corresponding to each motion information.
Further, in the technical solution provided in the present application, a plurality of prediction modes may be determined using the angle and reference weight configurations; in step S1101, the first predicted value corresponding to each motion information in the unidirectional motion information candidate list is then determined under each prediction mode.
S1102: calculate the coding cost corresponding to each motion information based on the first predicted value.
That is, the coding cost corresponding to each motion information is calculated on the basis of the obtained first predicted value.
When step S1101 yields multiple first predicted values for each group of motion information, one per prediction mode, step S1102 obtains, based on each first predicted value, the coding cost corresponding to each motion information under each prediction mode.
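One plausible formulation of such a weighted cost is sketched below; the exact weighted-cost definition used by the codec may differ, and weight_scale is an assumed normalisation:

```python
import numpy as np

def weighted_sad(orig, pred, weights, weight_scale=8):
    """Weighted distortion of one first predicted value under one prediction
    mode: orig and pred are the original and predicted sample arrays, and
    weights is the mode's weight array, so samples that the mode assigns to
    this motion partition contribute proportionally more to the cost."""
    diff = np.abs(orig.astype(np.int64) - pred.astype(np.int64))
    return int((diff * weights.astype(np.int64)).sum() // weight_scale)
```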
Further, in an embodiment, an advanced motion vector expression (UMVE) technique may also be introduced into the AWP. For example, when 56 prediction modes can be determined using the angle and reference weight configurations, and each unidirectional motion information candidate list includes 5 pieces of motion information, UMVE offsetting is performed on the eligible motion information in the unidirectional motion information candidate list, taken as base_mv (the UMVE offsetting covers 4 directions × 5 steps, i.e. 20 offset results in total); the coding costs of the UMVE-offset mv and the un-offset mv are then compared, and whether the offset mv or the un-offset mv is used as the final mv is decided according to the comparison result. The process is as follows:
(i) Perform motion compensation with all motion information in the unidirectional motion information candidate list to obtain the first predicted values, and calculate the sum of absolute differences (SAD) of the pixel differences for the motion information with and without UMVE offset.
(j) Under the 56 prediction modes, calculate and sort the weighted RDCost of all motion information in the unidirectional motion information candidate list without UMVE offset, and select the two groups of motion information with the smallest RDCost, cost0 and cost1, as the first candidate motion information under each prediction mode.
(k) Select a subset of all the prediction modes for UMVE offsetting of the motion information; under each selected original prediction mode, calculate and sort the weighted RDCost of all motion information in the unidirectional motion information candidate lists with UMVE offset, and select the two groups of motion information with the smallest RDCost, cost0 and cost1, as the first candidate motion information under each prediction mode.
It should be noted that, unlike step (j), step (k) does not traverse all original prediction modes but selects a part of them for UMVE offsetting. The selection manners a) and b) are as follows, and a sketch of the offset generation is given after this list:
a) if the current block has been visited, i.e. AWP was performed previously, UMVE offsetting is applied only to the original prediction modes (up to 7) selected in the previous pass;
b) if the current block has not been visited, i.e. AWP has not been performed, the costs (cost0) of the first motion information under the 56 modes are sorted, and the 42 original prediction modes with the lowest cost are selected for UMVE offsetting.
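A sketch of the offset generation follows. The source only states 4 directions × 5 steps, so the direction table and step sizes below are assumptions for illustration:

```python
UMVE_DIRECTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # +x, -x, +y, -y
UMVE_STEPS = [1, 2, 4, 8, 16]                          # assumed step sizes

def umve_offsets(base_mv):
    """Return the 20 offset candidates (4 directions x 5 steps) for base_mv."""
    mx, my = base_mv
    return [(mx + dx * s, my + dy * s)
            for dx, dy in UMVE_DIRECTIONS
            for s in UMVE_STEPS]
```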
Further, after the first candidate motion information is determined, a final prediction mode is selected from the original prediction modes based on the plurality of sets of first candidate motion information. In the current embodiment, the process of selecting the final prediction mode based on the plurality of sets of first candidate motion information is as follows:
all the original prediction modes (including UMVE offset and no UMVE offset) in the step (k) enter into a RDO (Rate Distortion optimization) rough selection stage, and the cost of each group of first candidate motion information under each prediction mode entering into the RDO rough selection stage is calculated and ranked by using SATD (sum of Absolute Transformed difference). And 7 least-costly prediction modes (each prediction mode comprises two groups of first candidate motion information) are selected as candidate prediction modes to enter a final fine selection stage. However, the indexes of the two sets of first candidate motion information in the current step cannot be the same, and if there is a UMVE offset, the indexes of the UMVE cannot be the same.
Finally, after the 7 prediction modes with the minimum coding cost are obtained, interpolation, residual derivation, transform and quantization, and inverse transform and inverse quantization are further performed on each first candidate motion information to obtain reconstructed pixels; the RDCost (based on SSE) is obtained through the RDO process, the RDCost values of the first candidate motion information under the 7 candidate prediction modes are compared, and the mode with the minimum RDCost is taken as the final prediction mode.
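Put together, the two-stage selection can be sketched as follows, where satd_cost and sse_rdcost stand in for the coarse and fine cost evaluations described above:

```python
def select_final_mode(candidates, satd_cost, sse_rdcost, keep=7):
    """candidates are (prediction mode, first candidate motion information)
    pairs. The rough stage keeps the `keep` cheapest candidates under the
    SATD-based cost; the fine stage returns the survivor with the smallest
    SSE-based RDCost."""
    coarse = sorted(candidates, key=satd_cost)[:keep]
    return min(coarse, key=sse_rdcost)
```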
Further, performing motion compensation with each motion information in the unidirectional motion information candidate list to obtain the corresponding first predicted value may also include: performing motion compensation on each first sub-block with the temporal motion information of the plurality of first sub-blocks included in the current block, and then obtaining the corresponding first predicted value of the current block based on the motion compensation result of each first sub-block.
Referring to fig. 12, fig. 12 is a flowchart illustrating an embodiment of an inter prediction method according to the present application. In the current embodiment, the above-described step of dividing the current block into a plurality of first sub-blocks includes step S1201.
S1201: perform a cross-shaped equal division on the current block to obtain four first sub-blocks.
That is, the current block is divided equally along a cross through its centre, yielding four first sub-blocks of equal area. The four first sub-blocks are arranged in a 2×2 grid, like the Chinese character '田', and the preset position order of the first sub-blocks is: upper left corner, upper right corner, lower left corner, lower right corner.
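A minimal sketch of this division, returning the four first sub-blocks as (x, y, width, height) rectangles in the preset order:

```python
def cross_divide(x, y, width, height):
    """Divide the current block at its centre into four equal first
    sub-blocks: upper-left, upper-right, lower-left, lower-right."""
    hw, hh = width // 2, height // 2
    return [(x,      y,      hw, hh),   # upper left
            (x + hw, y,      hw, hh),   # upper right
            (x,      y + hh, hw, hh),   # lower left
            (x + hw, y + hh, hw, hh)]   # lower right
```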
S1202: divide the current block according to the partition manner corresponding to each original prediction mode to obtain two second sub-blocks.
In the current embodiment, a corresponding partition manner is preset for each original prediction mode; when any original prediction mode is selected, the partition manner corresponding to the selected original prediction mode is used to divide the current block, thereby obtaining two second sub-blocks.
The above-mentioned step of performing motion compensation with each motion information in the unidirectional motion information candidate list to obtain the corresponding first predicted value further includes:
S1203: select the temporal motion information of the first sub-block corresponding to each second sub-block, and perform motion compensation on that second sub-block to obtain the first predicted value corresponding to the current block.
In the current embodiment, the first sub-block corresponding to a second sub-block is determined according to the boundary between the two second sub-blocks, which can also be understood as the division direction along which the second sub-blocks are obtained from the current block.
Referring to fig. 13, fig. 13 is a schematic diagram of the division of the current block in AWP in the present application, showing how the current block is partitioned into two second sub-blocks under different original prediction modes.
In an embodiment, when the current block is divided according to the partition manner corresponding to each original prediction mode to obtain two second sub-blocks distributed left-and-right or up-and-down, the step of selecting the temporal motion information of the first sub-block corresponding to the second sub-block to perform motion compensation on the second sub-block further includes: selecting the motion information of the first sub-block at the upper left corner to perform motion compensation on the second sub-block distributed at the upper side or the left side, and selecting the motion information of the first sub-block at the lower right corner to perform motion compensation on the second sub-block distributed at the lower side or the right side.
In another embodiment, when the current block is divided according to the partition mode corresponding to each original prediction mode to obtain two second sub-blocks distributed left and right or up and down, the step of selecting the temporal motion information of the first sub-block corresponding to the second sub-block further includes: and selecting the motion information of the first sub-block at the lower left corner to perform motion compensation on the second sub-block distributed on the left side or the lower side, and selecting the motion information of the first sub-block at the upper right corner to perform motion compensation on the second sub-block distributed on the right side or the upper side.
In another embodiment, when the current block is divided according to the partition manner corresponding to the original prediction mode and the partition line separating the two second sub-blocks is parallel to neither a diagonal, a horizontal line nor a vertical line, the motion information of the first sub-block closest to the centre of gravity of each second sub-block is selected to perform motion compensation on that second sub-block.
In another embodiment, when the current block is divided according to the partition manner corresponding to the original prediction mode and the partition line separating the two second sub-blocks is parallel to one of the diagonals, the step of selecting the temporal motion information of the first sub-block corresponding to the second sub-block and performing motion compensation on the second sub-block further includes: selecting the motion information of the first sub-blocks distributed along the diagonal intersecting the partition line to perform motion compensation on the two second sub-blocks correspondingly.
With reference to fig. 13, when the division is performed in the manner of the 4th column in fig. 13, two second sub-blocks distributed up and down are obtained. For the 4th column, the tmvp of the first sub-blocks at the upper left corner and the lower right corner is adopted: the current block is divided into an upper and a lower second sub-block, the upper second sub-block is motion-compensated with the tmvp of the upper-left first sub-block and the lower second sub-block with the tmvp of the lower-right first sub-block.
For the 5th to 7th columns, the tmvp of the first sub-blocks at the lower left corner and the upper right corner is adopted: the current block is divided into a left and a right second sub-block, the left second sub-block is motion-compensated with the tmvp of the lower-left first sub-block and the right second sub-block with the tmvp of the upper-right first sub-block.
In another embodiment, for the original prediction modes of the 1st, 2nd, 3rd and last columns in fig. 13, the temporal motion information (tmvp) of the upper-left and lower-right first sub-blocks is used: the current block is divided into a left and a right second sub-block, the left second sub-block is motion-compensated with the tmvp of the upper-left first sub-block and the right second sub-block with the tmvp of the lower-right first sub-block.
For the original prediction modes of the 4th, 5th, 6th and 7th columns, the tmvp of the upper-right and lower-left first sub-blocks is adopted: the current block is divided into a left and a right second sub-block, the left second sub-block is motion-compensated with the tmvp of the upper-right first sub-block and the right second sub-block with the tmvp of the lower-left first sub-block.
In another embodiment, for the original prediction modes of the 1st and last columns, the tmvp of the upper-left and lower-right first sub-blocks is used: the current block is divided into a left and a right second sub-block, the left second sub-block is motion-compensated with the tmvp of the upper-left first sub-block and the right second sub-block with the tmvp of the lower-right first sub-block.
For the 2nd and 3rd columns, the tmvp of the upper-left and lower-right first sub-blocks is likewise used: the current block is divided into an upper and a lower second sub-block, the upper second sub-block is motion-compensated with the tmvp of the upper-left first sub-block and the lower second sub-block with the tmvp of the lower-right first sub-block.
The technical solution provided in the current embodiment mainly incorporates the idea of sub-TMVP: taking the different prediction modes into account, the AWP can select the TMVP of sub-blocks at different positions to perform motion compensation on the current block, that is, block-wise motion compensation is performed on the current block with TMVPs from different positions, so as to obtain a more accurate predicted value.
In other embodiments, the sub-TMVP technique may also be employed to replace the original TMVP while still occupying only one position in the candidate list. The current block can correspondingly be divided into four first sub-blocks, i.e. the motion information of the four first sub-blocks can be stored; during motion compensation, sub-TMVP performs motion compensation on the four first sub-blocks separately to obtain their respective predicted values, and the predicted value of the current block is then obtained from the predicted values of the four first sub-blocks, as sketched below. It can be understood that in other embodiments motion compensation may also be performed on the second sub-blocks with the motion information of the first sub-blocks in other manners, which are not listed here one by one; as long as the current block is divided according to the partition manner corresponding to each original prediction mode to obtain two second sub-blocks, and the temporal motion information of the first sub-block corresponding to each second sub-block is selected to perform motion compensation on that second sub-block to obtain the first predicted value corresponding to the current block, the scheme may be considered to fall within the protection scope of the present application.
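A sketch of this per-sub-block compensation and stitching follows; tmvp_of and motion_compensate are assumed helpers supplied by the surrounding encoder, and coordinates are relative to the block origin:

```python
import numpy as np

def subtmvp_predict(block_w, block_h, tmvp_of, motion_compensate):
    """Split the block into four equal first sub-blocks, motion-compensate
    each with its own temporal motion information, and stitch the partial
    predictions into the block's predicted value."""
    hw, hh = block_w // 2, block_h // 2
    sub_blocks = [(0, 0), (hw, 0), (0, hh), (hw, hh)]   # UL, UR, LL, LR origins
    pred = np.zeros((block_h, block_w), dtype=np.int16)
    for x, y in sub_blocks:
        mv = tmvp_of(x, y, hw, hh)                      # temporal MV of this sub-block
        pred[y:y + hh, x:x + hw] = motion_compensate(x, y, hw, hh, mv)
    return pred
```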
In an embodiment, the method provided by the present application further includes: determining all prediction modes by traversing the angle and reference weight configurations, wherein there are 7 angle dimensions and the diagonal direction angles of the current block are excluded.
In an embodiment, the method provided by the present application further includes: ordering the angle modes of the current block according to the ratio of the width to the height of the current block, as described with reference to the corresponding portions of fig. 2 to fig. 3 above.
Referring to fig. 14, fig. 14 is a flowchart illustrating a video encoding method according to an embodiment of the present application. The method provided by the application comprises the following steps:
S1410: determine the final prediction mode of the current block.
Wherein the final prediction mode is determined according to the method as described in any one of the embodiments of fig. 1 to 13 and corresponding embodiments thereof.
S1420: determine the final predicted value of the current block based on the final prediction mode, and encode the current block based on the final predicted value.
That is, after the final prediction mode is determined according to the method described in any one of the embodiments of fig. 1 to 13, the final predicted value of the current block is determined based on the determined final prediction mode, and the current block is encoded based on that final predicted value.
Wherein encoding the current block based on the final prediction value of the current block comprises: an index of one motion information in the unidirectional motion information candidate list is encoded.
Further, the method provided by the present application further includes: the texture direction of the current block is determined.
After the texture direction of the current block is determined, the prediction modes are reordered based on the texture direction of the current block. Specifically, all prediction modes may be reordered starting from the prediction mode corresponding to the angle that is the same as, or closest to, the texture direction; a sketch of this reordering is given below. Further, encoding the current block based on the predicted value of the current block includes: encoding the index of the prediction mode of the current block after the reordering.
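The reordering can be sketched as follows, assuming each mode's angle is known in degrees and angles are compared modulo 180:

```python
def reorder_modes_by_texture(modes, mode_angle, texture_angle):
    """Sort prediction modes so the mode whose angle matches, or is closest
    to, the block's texture direction comes first. mode_angle maps a mode
    index to its angle in degrees."""
    def angular_distance(m):
        d = abs(mode_angle[m] - texture_angle) % 180.0
        return min(d, 180.0 - d)
    return sorted(modes, key=angular_distance)
```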
Please refer to fig. 15, fig. 15 is a schematic structural diagram of an embodiment of a video coding system according to the present application. The video coding system comprises a memory and a processor; the memory stores a computer program, and the processor is configured to execute the computer program to implement the method according to any one of the embodiments shown in fig. 1 to 14 and corresponding figures.
The memory 1502 includes a local storage (not shown) and stores a computer program, which can implement the method described in any of the embodiments of fig. 1-14 and corresponding embodiments thereof when executed.
The processor 1501 is coupled to the memory 1502, and the processor 1501 is configured to execute a computer program to perform the method described in any of the embodiments of fig. 1-14 and their counterparts above.
Referring to fig. 16, fig. 16 is a schematic structural diagram of an embodiment of a readable storage medium according to the present application. The readable storage medium 1600 stores a computer program 1601 executable by a processor, the computer program 1601 being used to implement the method described in any one of the embodiments of fig. 1 to 14 and their corresponding embodiments. Specifically, the storage medium 1600 may be a memory, a personal computer, a server, a network device or a USB flash drive, among others, which is not limited in any way herein.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (21)

1. A method of inter-prediction, the method comprising:
determining a weight array of the current block in each original prediction mode;
dividing the current block into a plurality of first sub-blocks, and constructing a unidirectional motion information candidate list of the current block based on time domain motion information of at least two first sub-blocks;
calculating coding cost based on the weight array, and selecting multiple groups of motion information with the minimum coding cost from the unidirectional motion information candidate list as multiple groups of first candidate motion information;
selecting a final prediction mode among the original prediction modes based on a plurality of sets of the first candidate motion information.
2. The inter-prediction method of claim 1, wherein before the dividing the current block into a plurality of first sub-blocks and constructing the uni-directional motion information candidate list for the current block based on temporal motion information of at least two of the first sub-blocks, the method further comprises:
judging whether a target adjacent prediction block corresponding to the current block exists or not, wherein the target adjacent prediction block is an adjacent prediction block adopting an inter-frame prediction mode;
if yes, acquiring all target adjacent prediction blocks of the current block, and carrying out duplicate checking on motion information of the target adjacent prediction blocks to determine available adjacent prediction blocks;
adding motion information of the available neighboring prediction blocks to a candidate motion information list.
3. The inter-prediction method of claim 2, wherein the constructing the uni-directional motion information candidate list for the current block based on the temporal motion information of at least two of the first sub-blocks comprises:
according to the preset position sequence of the first sub-blocks, sequentially adding the time domain motion information of different first sub-blocks into the candidate motion information list until the number of the motion information of the candidate motion information list reaches the preset number;
and selecting forward motion information or backward motion information from the motion information in the candidate motion information list, and correspondingly filling the forward motion information or the backward motion information into the same position of the unidirectional motion information candidate list, wherein the motion information comprises the forward motion information and/or the backward motion information.
4. The inter-prediction method according to claim 3,
the current block comprises four first sub-blocks arranged in a 2×2 grid, like the Chinese character '田', and the preset positions of the first sub-blocks are sequentially an upper left corner, an upper right corner, a lower left corner and a lower right corner.
5. The inter-prediction method of claim 3, wherein the sequentially adding temporal motion information of different first sub-blocks into the candidate motion information list according to the predetermined position order of the first sub-block comprises:
after all the time domain motion information of the first sub-block is added into the candidate motion information list, if the number of the motion information of the candidate motion information list is smaller than the preset number, at least one new motion information is generated based on the motion information in the candidate motion information list, so that the number of the motion information of the candidate motion information list reaches the preset number.
6. The inter-prediction method of claim 5, wherein the generating at least one new motion information based on the motion information in the candidate motion information list comprises:
and zooming the motion information in sequence from the first motion information in the candidate motion information list, and adding the zoomed motion information into the candidate motion information list until the number of the motion information in the candidate motion information list reaches the preset number.
7. The inter-prediction method of claim 3, wherein before adding the temporal motion information of the first sub-block to the candidate motion information list, the method further comprises:
judging whether motion information which is the same as the time domain motion information of the current first sub-block exists in the candidate motion information list or not;
if the motion information does not exist, adding the time domain motion information of the first sub-block into the candidate motion information list;
and if so, not adding the time domain motion information of the first sub-block into the candidate motion information list.
8. The inter-prediction method of claim 2, wherein the performing a duplicate check on the motion information of the target neighbor prediction block to determine an available neighbor prediction block, further comprises:
motion information of the target neighbor prediction block is double-checked using a full-double-check method to determine available neighbor prediction blocks.
9. The inter-prediction method of claim 8, wherein the performing a full-duplication check on the motion information of the target neighboring prediction block further comprises:
acquiring a reference frame image sequence number corresponding to the motion information in the target adjacent prediction block;
judging whether the reference frame image sequence number corresponding to the current target adjacent prediction block is the same as the reference frame image sequence number corresponding to any motion information in the candidate motion information list or not; judging whether a motion vector corresponding to a current target adjacent prediction block is the same as a motion vector corresponding to any motion information in the candidate motion information list;
if the reference frame image sequence number corresponding to the current target adjacent prediction block is judged to be the same as the reference frame image sequence number corresponding to any motion information in the candidate motion information list, and the motion vector corresponding to the current target adjacent prediction block is judged to be the same as the motion vector corresponding to any motion information in the candidate motion information list, judging that the target adjacent prediction block is an unavailable adjacent prediction block;
otherwise, the target adjacent prediction block is judged to be an available adjacent prediction block.
10. The inter-prediction method of claim 8, wherein the reference frame list comprises a first directional list and a second directional list;
the performing duplicate checking on the motion information of the target adjacent prediction block by using a full duplicate checking method further comprises:
if the reference frame of the current target adjacent prediction block in the first direction is judged to be unavailable, and the reference frame of the candidate motion information in the candidate motion information list in the second direction is not available, further judging whether the motion information of the current target adjacent prediction block in the second direction is the same as the motion information of the current candidate motion information in the candidate motion information list in the first direction;
if the reference frame image sequence number corresponding to the motion information of the current target adjacent prediction block is the same as the reference frame image sequence number corresponding to the current candidate motion information and the motion vector is correspondingly the same, the motion information of the target adjacent prediction block is repeated with the current candidate motion information, otherwise, the motion information of the target adjacent prediction block is judged not to be repeated with the current candidate motion information, and whether the motion information of the target adjacent prediction block is repeated with the next candidate motion information is continuously judged.
11. The inter-prediction method of claim 8, wherein the reference frame list comprises a first directional list and a second directional list;
the performing duplicate checking on the motion information of the target adjacent prediction block by using a full duplicate checking method further comprises:
if the reference frames of the current target adjacent prediction block in the first direction and the second direction are available and the reference frames of the candidate motion information in the candidate motion information list in the first direction and the second direction are available, further judging whether the motion information of the target adjacent prediction block in the first direction is the same as the motion information of the current candidate motion information in the second direction or not and judging whether the motion information of the target adjacent prediction block in the second direction is the same as the motion information of the current candidate motion information in the first direction or not;
if both are the same, judging that the motion information of the target adjacent prediction block is repeated with the current candidate motion information; otherwise, judging that the motion information of the target adjacent prediction block is not repeated with the current candidate motion information, and continuing to judge whether the motion information of the target adjacent prediction block is repeated with the next candidate motion information.
12. The inter-prediction method of claim 1, wherein the calculating the coding cost based on the weight array comprises:
respectively carrying out motion compensation on each piece of motion information in the unidirectional motion information candidate list to obtain a corresponding first predicted value;
and calculating and solving the coding cost corresponding to each motion information based on the first predicted value.
13. The inter-prediction method according to claim 12, wherein performing motion compensation on each motion information in the uni-directional motion information candidate list to obtain a corresponding first prediction value respectively, further comprises:
and respectively performing motion compensation on each first sub-block by using the time domain motion information of the plurality of first sub-blocks included in the current block to obtain the corresponding first predicted value of the current block.
14. The inter-prediction method of claim 12,
the dividing the current block into a plurality of first sub-blocks, comprising: performing cross average division on the current block to obtain four first sub-blocks;
after the dividing the current block into a plurality of first sub-blocks, further comprising: dividing the current block according to the division mode corresponding to each original prediction mode to obtain two second sub-blocks;
the performing motion compensation on each motion information in the unidirectional motion information candidate list to obtain a corresponding first prediction value further includes:
and selecting time domain motion information of a first sub-block corresponding to the second sub-block, and performing motion compensation on the second sub-block to obtain a first predicted value corresponding to the current block.
15. The inter-frame prediction method of claim 14, wherein when the current block is divided according to the partition manner corresponding to each original prediction mode to obtain two second sub-blocks distributed left and right or up and down, the selecting temporal motion information of the first sub-block corresponding to the second sub-block to perform motion compensation on the second sub-block further comprises:
and selecting the motion information of the first sub-block at the upper left corner to perform motion compensation on the second sub-block distributed at the upper side or the left side, and selecting the motion information of the first sub-block at the lower right corner to perform motion compensation on the second sub-block distributed at the lower side or the right side.
16. The inter-frame prediction method of claim 14, wherein when the current block is divided according to the partition manner corresponding to each original prediction mode to obtain two second sub-blocks distributed left and right or up and down, the selecting temporal motion information of the first sub-block corresponding to the second sub-block to perform motion compensation on the second sub-block further comprises:
and selecting the motion information of the first sub-block at the lower left corner to perform motion compensation on the second sub-block distributed on the left side or the lower side, and selecting the motion information of the first sub-block at the upper right corner to perform motion compensation on the second sub-block distributed on the right side or the upper side.
17. The inter-prediction method of claim 1, further comprising: and determining all original prediction modes by traversing the configuration of angles and reference weights, wherein the angular dimensions are 6 and the diagonal direction angle of the current block is not included.
18. The inter-prediction method of claim 1, further comprising:
and correspondingly sorting the angle modes of the current block by combining the ratio of the width to the height of the current block.
19. A method of video encoding, the method comprising:
determining a final prediction mode of the current block based on the method of any one of claims 1-18;
determining a final prediction value of the current block based on the final prediction mode, and encoding the current block based on the final prediction value.
20. A video encoding system, comprising a memory and a processor; the memory has stored therein a computer program for execution by the processor to perform the steps of the method according to any one of claims 1-18.
21. A readable storage medium, characterized in that the readable storage medium stores a computer program executable by a processor for implementing the steps of the method according to any one of claims 1-18.
CN202010853191.1A 2020-08-22 2020-08-22 Inter-frame prediction method, video coding method and related devices Active CN112055203B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010853191.1A CN112055203B (en) 2020-08-22 2020-08-22 Inter-frame prediction method, video coding method and related devices

Publications (2)

Publication Number Publication Date
CN112055203A true CN112055203A (en) 2020-12-08
CN112055203B CN112055203B (en) 2024-04-12

Family

ID=73599838

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010853191.1A Active CN112055203B (en) 2020-08-22 2020-08-22 Inter-frame prediction method, video coding method and related devices

Country Status (1)

Country Link
CN (1) CN112055203B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1527607A (en) * 2003-01-14 2004-09-08 ���ǵ�����ʽ���� Method and apparatus for coding and or decoding moving image
CN105009590A (en) * 2013-03-15 2015-10-28 高通股份有限公司 Device and method for scalable coding of video information
CN108141604A (en) * 2015-06-05 2018-06-08 英迪股份有限公司 Image coding and decoding method and image decoding apparatus
CN110383695A (en) * 2017-03-03 2019-10-25 西斯维尔科技有限公司 Method and apparatus for being coded and decoded to digital picture or video flowing
CN111567045A (en) * 2017-10-10 2020-08-21 韩国电子通信研究院 Method and apparatus for using inter prediction information
CN111418205A (en) * 2018-11-06 2020-07-14 北京字节跳动网络技术有限公司 Motion candidates for inter prediction
CN110225346A (en) * 2018-12-28 2019-09-10 杭州海康威视数字技术股份有限公司 A kind of decoding method and its equipment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YOSHITAKA KIDANI, ET AL: "Non-CE4: On merge list generation for geometric partitioning", 《JVET会议》 *
周芸等: "H.266/VVC视频编码帧间预测关键技术研究", 《广播与电视技术》 *
王秋月: "视频编码的帧间预测及率失真优化技术研究", 《中国优秀硕士学位论文全文数据库 (信息科技辑)》 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210306656A1 (en) * 2020-03-26 2021-09-30 Alibaba Group Holding Limited Method and apparatus for encoding or decoding video
US11706439B2 (en) * 2020-03-26 2023-07-18 Alibaba Group Holding Limited Method and apparatus for encoding or decoding video
US20230319303A1 (en) * 2020-03-26 2023-10-05 Alibaba Group Holding Limited Method and apparatus for encoding or decoding video
CN113794881A (en) * 2021-04-13 2021-12-14 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment
WO2023123736A1 (en) * 2021-12-31 2023-07-06 Oppo广东移动通信有限公司 Communication method, apparatus, device, system, and storage medium
CN114885164A (en) * 2022-07-12 2022-08-09 深圳比特微电子科技有限公司 Method and device for determining intra-frame prediction mode, electronic equipment and storage medium
WO2024050723A1 (en) * 2022-09-07 2024-03-14 Oppo广东移动通信有限公司 Image prediction method and apparatus, and computer readable storage medium

Also Published As

Publication number Publication date
CN112055203B (en) 2024-04-12

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant