CN110933423A - Inter-frame prediction method and device - Google Patents
Inter-frame prediction method and device
- Publication number: CN110933423A
- Application number: CN201811102838.6A
- Authority: CN (China)
- Prior art keywords: mhp, target, image block, current image, candidate
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Classifications
All classes fall under H04N19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals (H—Electricity; H04—Electric communication technique; H04N—Pictorial communication, e.g. television):
- H04N19/139 — Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
- H04N19/159 — Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
- H04N19/167 — Position within a video image, e.g. region of interest [ROI]
- H04N19/176 — The coding unit being an image region, the region being a block, e.g. a macroblock
- H04N19/182 — The coding unit being a pixel
Abstract
The application provides an inter-frame prediction method. In this method, the encoding end or the decoding end determines a target multi-hypothesis prediction (MHP) position set according to the motion information of the current image block, and then derives the target inter-frame prediction pixels of the current image block from the candidate inter-frame prediction pixels corresponding to each target MHP position in that set, so that the resulting target inter-frame prediction pixels of the current image block form the optimal prediction pixel block. Even when the video image suffers from afterimages (ghosting) and/or blurred edges, the target inter-frame prediction pixels remain close to the original pixels of the current image block, improving inter-frame prediction accuracy and coding performance.
Description
Technical Field
The present application relates to video image technology, and more particularly, to inter-frame prediction methods and apparatus.
Background
Inter-frame prediction performs predictive coding using the reconstructed pixel values of temporally adjacent blocks of an image block in a video image. It is widely used in fields such as broadcast television, video conferencing, video telephony, and high-definition television.
At present, when inter-frame prediction is performed, the inter-frame prediction pixel blocks of many image blocks in a video image are not the optimal inter-frame prediction pixel blocks, owing to factors such as afterimages (ghosting) and/or blurred edges in the video image, which reduces inter-frame prediction accuracy and coding performance.
Disclosure of Invention
The application provides an inter-frame prediction method and device that derive inter-frame prediction pixels under the guidance of the motion information of the current image block.
The technical scheme provided by the application comprises the following steps:
the application provides a first inter-frame prediction method, which is applied to an encoding end and comprises the following steps:
determining a target multi-hypothesis prediction MHP position set according to the motion information of the current image block;
determining candidate inter-frame prediction pixels corresponding to all target MHP positions in a target MHP position set;
and determining a target inter-frame prediction pixel of the current image block according to each candidate inter-frame prediction pixel.
As an embodiment, the determining a target multi-hypothesis prediction MHP position set according to the motion information of the current image block includes:
determining a candidate MHP position set according to the motion information of the current image block;
selecting at least two candidate MHP positions from the candidate MHP position set;
and forming the selected candidate MHP positions into the target MHP position set.
As one embodiment, the motion information includes: a motion vector;
the determining the candidate MHP position set according to the motion information of the current image block includes:
and selecting, according to the direction of the motion vector of the current image block, at least two positions whose distance from the end point of the motion vector satisfies a preset condition as candidate MHP positions, to form the candidate MHP position set.
As an embodiment, the selecting, according to the direction of the motion vector of the current image block, at least two positions whose distances from the end point of the motion vector meet a preset condition as candidate MHP positions to form the candidate MHP position set includes:
determining a target area in a target coordinate system according to the direction of a motion vector of a current image block, wherein the target area is an area matched with the direction of the motion vector in the target coordinate system;
and taking at least two positions in the target area as candidate MHP positions according to the preset condition to form the candidate MHP position set.
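The selection above can be sketched as follows. The concrete preset distance condition is not specified here, so the radius threshold and the integer-grid sampling in this sketch are assumptions for illustration only:

```python
import math

def candidate_mhp_positions(mv, max_dist=1.0):
    """Sketch: collect integer-grid positions around the motion-vector
    endpoint whose distance from the endpoint satisfies a preset
    condition (here assumed to be 0 < distance <= max_dist)."""
    ex, ey = mv  # endpoint of the motion vector
    candidates = []
    r = math.ceil(max_dist)
    for dx in range(-r, r + 1):
        for dy in range(-r, r + 1):
            if dx == 0 and dy == 0:
                continue  # the endpoint itself is not an offset candidate
            if math.hypot(dx, dy) <= max_dist:
                candidates.append((ex + dx, ey + dy))
    return candidates
```

With `mv = (3, 4)` and the default threshold, this yields the four axis-adjacent positions around the endpoint; a larger threshold admits diagonal neighbours as well.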
As one embodiment, the target coordinate system is a coordinate system with the first specified point as an origin;
the first specified point is the end point of the motion vector or other points in the direction of the motion vector except the end point.
As an embodiment, the motion information further comprises: a first reference frame index corresponding to a reference frame adopted by the current image block;
each candidate MHP position in the set of candidate MHP positions corresponds to a second reference frame index;
the target coordinate system is a coordinate system with the first specified point as an origin when the first reference frame index is the same as the second reference frame index.
When the first reference frame index is different from the second reference frame index, the target coordinate system is a coordinate system with a second designated point as an origin, the second designated point being a termination point of a target motion vector or other points in the direction of the motion vector except the termination point of the target motion vector;
the target motion vector is the motion vector obtained by multiplying the motion vector of the current image block by L, where L is the ratio of L1 to L2, L1 is the temporal span between the reference frame corresponding to the second reference frame index and the current frame where the current image block is located, and L2 is the temporal span between the reference frame corresponding to the first reference frame index and the current frame.
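The scaling of the target motion vector can be illustrated with a minimal sketch; rounding the scaled components to integer precision is an assumption, since the description above does not fix a precision:

```python
def scale_motion_vector(mv, span_l1, span_l2):
    """Scale the current block's motion vector when a candidate MHP
    position refers to a different reference frame.

    mv      -- (mvx, mvy) motion vector of the current image block
    span_l1 -- L1: temporal span between the reference frame of the
               second reference frame index and the current frame
    span_l2 -- L2: temporal span between the reference frame of the
               first reference frame index and the current frame
    Returns mv * L with L = L1 / L2 (integer rounding assumed)."""
    l = span_l1 / span_l2
    return (round(mv[0] * l), round(mv[1] * l))
```

For example, doubling the temporal span (L1 = 2, L2 = 1) doubles both components of the vector.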
In an example, the first reference frame index and the second reference frame index are indexes of reference frames in a reference frame list recorded by the encoding end.
In one example, the determining the target area in the target coordinate system according to the direction of the motion vector of the current image block includes:
if the product of the transverse component and the longitudinal component in the motion vector of the current image block is greater than 0, determining a transverse coordinate axis, a longitudinal coordinate axis, a first coordinate quadrant area and a third coordinate quadrant area in the target coordinate system as the target area;
and if the product of the transverse component and the longitudinal component in the motion vector of the current image block is less than 0, determining a transverse coordinate axis, a longitudinal coordinate axis, a second coordinate quadrant area and a fourth coordinate quadrant area in the target coordinate system as the target area.
In one example, the determining the target area in the target coordinate system according to the motion vector of the current image block includes:
if the transverse component in the motion vector of the current image block is 0, determining a longitudinal coordinate axis in the target coordinate system as the target area;
and if the longitudinal component in the motion vector of the current image block is 0, determining a transverse coordinate axis in the target coordinate system as the target area.
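A minimal sketch of the target-area rules above. The region names (`quadrant_1`, `x_axis`, etc.) are illustrative labels, and the behaviour for an all-zero motion vector is an assumed fallback not covered by the description:

```python
def target_area(mv):
    """Select axes/quadrant regions of the target coordinate system
    according to the signs of the motion-vector components, following
    the rules described above (transverse = x, longitudinal = y)."""
    mvx, mvy = mv
    if mvx == 0 and mvy == 0:
        return {"x_axis", "y_axis"}  # assumed fallback, not specified above
    if mvx == 0:
        return {"y_axis"}            # vector is purely vertical
    if mvy == 0:
        return {"x_axis"}            # vector is purely horizontal
    if mvx * mvy > 0:
        return {"x_axis", "y_axis", "quadrant_1", "quadrant_3"}
    return {"x_axis", "y_axis", "quadrant_2", "quadrant_4"}
```

A vector pointing into the first quadrant thus selects quadrants 1 and 3 plus both axes, matching the sign rule in the text.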
As one embodiment, the determining the target inter prediction pixel of the current image block according to each candidate inter prediction pixel comprises:
performing a weighting operation on each candidate inter-frame prediction pixel according to its weighting coefficient;
and determining a weighting operation result as a target inter-frame prediction pixel of the current image block.
In one example, the method further comprises:
storing at least one target MHP position of the current image block to provide an inter-frame prediction reference for other image blocks adjacent to the current image block; or,
and storing a calculation result calculated according to at least one target MHP position of the current image block so as to provide inter-frame prediction reference for other image blocks adjacent to the current image block.
In one example, the method further comprises:
and sending a coded bit stream carrying MHP position policy information to the decoding end, wherein the MHP position policy information is used to provide the decoding end with a policy for determining target MHP positions.
Wherein the MHP position policy information comprises first indication information used for indicating an available candidate MHP position set; or the MHP position policy information comprises second indication information used for indicating an available target MHP position; or the MHP position policy information comprises third indication information, the third indication information comprising a motion information offset to indicate that the target MHP position is determined according to the motion information of the current image block and the motion information offset.
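The three mutually exclusive forms of MHP position policy information can be modelled as a tagged union. The class and field names below are hypothetical illustrations, not bitstream syntax taken from this application:

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

@dataclass
class FirstIndication:
    """Indicates an available candidate MHP position set."""
    available_set_ids: List[int]

@dataclass
class SecondIndication:
    """Indicates available target MHP positions."""
    available_positions: List[Tuple[int, int]]

@dataclass
class ThirdIndication:
    """Carries a motion-information offset; the target MHP position is
    derived from the current block's motion information plus this offset."""
    motion_offset: Tuple[int, int]

# exactly one of the three forms is carried in the coded bit stream
MhpPositionPolicy = Union[FirstIndication, SecondIndication, ThirdIndication]
```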
The present application provides a second inter-frame prediction method, which is applied to a decoding end, and includes:
determining a target MHP position set corresponding to the current image block according to the motion information of the current image block and the MHP position policy information carried by the coded bit stream received from the encoding end;
determining candidate inter-frame prediction pixels corresponding to all target MHP positions in a target MHP position set;
and determining a target inter-frame prediction pixel of the current image block according to each candidate inter-frame prediction pixel.
In one example, the MHP position policy information includes first indication information indicating a set of available candidate MHP positions;
the determining a target MHP position set corresponding to the current image block according to the motion information of the current image block and the MHP position policy information carried by the coded bit stream received from the encoding end comprises:
determining a candidate MHP position set according to the motion information of the current image block;
and determining a target MHP position set from all determined candidate MHP position sets according to the available candidate MHP position sets indicated by the first indication information.
In one example, the MHP position policy information includes second indication information indicating an available target MHP position;
the determining a target MHP position set corresponding to the current image block according to the motion information of the current image block and the MHP position policy information carried by the coded bit stream received from the encoding end comprises:
determining a candidate MHP position set according to the motion information of the current image block;
and selecting target MHP positions from the candidate MHP position set according to the available target MHP positions indicated by the second indication information and forming the target MHP position set.
In one example, the MHP position policy information includes third indication information, where the third indication information includes a motion information offset to indicate that a target MHP position is determined according to the motion information of the current image block and the motion information offset;
the determining a target MHP position set corresponding to the current image block according to the motion information of the current image block and the MHP position policy information carried by the coded bit stream received from the encoding end comprises:
determining the position of a target MHP according to the motion information offset indicated by the third indication information and the motion information of the current image block;
and forming the determined target MHP positions into the target MHP position set.
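For the third indication information, applying the signalled motion information offset can be sketched as follows; the additive combination of offset and motion vector is an assumption consistent with the description above:

```python
def target_mhp_positions_from_offset(mv, motion_offsets):
    """Decoder-side sketch for the third indication information: each
    signalled motion-information offset, added to the current block's
    motion vector, yields one target MHP position; together they form
    the target MHP position set."""
    mvx, mvy = mv
    return [(mvx + ox, mvy + oy) for (ox, oy) in motion_offsets]
```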
The present application provides a third inter-frame prediction method, which is applied to a decoding end, and includes:
determining a target multi-hypothesis prediction MHP position set corresponding to the current image block according to the motion information of the current image block;
determining candidate inter-frame prediction pixels corresponding to all target MHP positions in a target MHP position set;
and determining a target inter-frame prediction pixel of the current image block according to each candidate inter-frame prediction pixel.
In one example, the determining a target MHP position set corresponding to a current image block according to motion information of the current image block includes:
determining a candidate MHP position set according to the motion information of the current image block;
selecting at least two candidate MHP positions from the candidate MHP position set;
and forming the selected candidate MHP positions into the target MHP position set.
In one example, the motion information includes: a motion vector;
the determining the candidate MHP position set according to the motion information of the current image block includes:
and selecting at least two positions of which the distance from the motion vector to the end point of the motion vector meets a preset condition as candidate MHP positions to form the candidate MHP position set according to the direction of the motion vector of the current image block.
In one example, the selecting, according to the direction of the motion vector of the current image block, at least two positions whose distances from the end point of the motion vector satisfy a preset condition as candidate MHP positions to form the candidate MHP position set includes:
determining a target area in a target coordinate system according to the direction of a motion vector of a current image block, wherein the target area is an area matched with the direction of the motion vector in the target coordinate system;
and taking at least two positions in the target area as candidate MHP positions according to the preset condition to form the candidate MHP position set.
In one example, the target coordinate system is a coordinate system with the first specified point as an origin;
the first specified point is the end point of the motion vector or other points in the direction of the motion vector except the end point.
In one example, the motion information further includes: a first reference frame index corresponding to a reference frame adopted by the current image block;
each candidate MHP position in the set of candidate MHP positions corresponds to a second reference frame index.
In one example, the target coordinate system is a coordinate system with a first specified point as an origin when the first reference frame index is the same as the second reference frame index.
In one example, when the first reference frame index is different from the second reference frame index, the target coordinate system is a coordinate system with a second designated point as an origin, the second designated point being an end point of a target motion vector or a point other than the end point of the target motion vector in the direction of the motion vector;
the target motion vector is the motion vector obtained by multiplying the motion vector of the current image block by L, where L is the ratio of L1 to L2, L1 is the temporal span between the reference frame corresponding to the second reference frame index and the current frame where the current image block is located, and L2 is the temporal span between the reference frame corresponding to the first reference frame index and the current frame.
In an example, the first reference frame index and the second reference frame index are indexes of reference frames in a reference frame list recorded by the encoding end.
In one example, the determining the target area in the target coordinate system according to the motion vector of the current image block includes:
if the product of the transverse component and the longitudinal component in the motion vector of the current image block is greater than 0, determining a transverse coordinate axis, a longitudinal coordinate axis, a first coordinate quadrant area and a third coordinate quadrant area in the target coordinate system as the target area;
and if the product of the transverse component and the longitudinal component in the motion vector of the current image block is less than 0, determining a transverse coordinate axis, a longitudinal coordinate axis, a second coordinate quadrant area and a fourth coordinate quadrant area in the target coordinate system as the target area.
In one example, the determining the target area in the target coordinate system according to the motion vector of the current image block includes:
if the transverse component in the motion vector of the current image block is 0, determining a longitudinal coordinate axis in the target coordinate system as the target area;
and if the longitudinal component in the motion vector of the current image block is 0, determining a transverse coordinate axis in the target coordinate system as the target area.
As an embodiment, the determining a target multi-hypothesis prediction MHP position set corresponding to the current image block according to the motion information of the current image block includes:
and determining a target MHP position set corresponding to the current image block according to the motion information of the current image block and the MHP position policy information carried by the coded bit stream received from the encoding end.
In one example, the MHP position policy information includes first indication information indicating a set of available candidate MHP positions;
the determining a target MHP position set corresponding to the current image block according to the motion information of the current image block and the MHP position policy information carried by the coded bit stream received from the encoding end comprises:
determining all candidate MHP position sets according to the motion information of the current image block;
and determining a target MHP position set from all candidate MHP position sets according to the available candidate MHP position sets indicated by the first indication information.
In one example, the MHP position policy information includes second indication information indicating an available target MHP position;
the determining a target MHP position set corresponding to the current image block according to the motion information of the current image block and the MHP position policy information carried by the coded bit stream received from the encoding end comprises:
determining all candidate MHP position sets according to the motion information of the current image block;
and selecting target MHP positions from all candidate MHP position sets according to the available target MHP positions indicated by the second indication information to form the target MHP position set.
In one example, the MHP position policy information includes third indication information, where the third indication information includes a motion information offset to indicate that a target MHP position is determined according to the motion information of the current image block and the motion information offset;
the determining a target MHP position set corresponding to the current image block according to the motion information of the current image block and the MHP position policy information carried by the coded bit stream received from the encoding end comprises:
determining the position of a target MHP according to the motion information offset indicated by the third indication information and the motion information of the current image block;
and forming the determined target MHP positions into the target MHP position set.
As one embodiment, the determining the target inter prediction pixel of the current image block according to each candidate inter prediction pixel comprises:
performing a weighting operation on each candidate inter-frame prediction pixel according to its weighting coefficient;
and determining a weighting operation result as a target inter-frame prediction pixel of the current image block.
As an embodiment, the method further comprises:
storing at least one target MHP position of the current image block to provide an inter-frame prediction reference for other image blocks adjacent to the current image block; or,
and storing a calculation result calculated according to at least one target MHP position of the current image block so as to provide inter-frame prediction reference for other image blocks adjacent to the current image block.
The embodiment of the present application provides a coding end device, including: a machine-readable storage medium and a processor;
wherein the machine-readable storage medium is to store machine-readable instructions;
the processor is configured to read the machine-readable instruction and execute the instruction to implement the first inter-frame prediction method.
The embodiment of the present application provides a first decoding-side device, including: a machine-readable storage medium and a processor;
wherein the machine-readable storage medium is to store machine-readable instructions;
the processor is configured to read the machine-readable instruction and execute the instruction to implement the second inter-frame prediction method.
The embodiment of the present application provides a second decoding-side device, including: a machine-readable storage medium and a processor;
wherein the machine-readable storage medium is to store machine-readable instructions;
the processor is configured to read the machine-readable instruction and execute the instruction to implement the third inter-frame prediction method.
According to the technical solutions above, the encoding end or the decoding end determines the target MHP position set according to the motion information of the current image block, and then derives the target inter-frame prediction pixels of the current image block from the candidate inter-frame prediction pixels corresponding to each target MHP position in that set, so that the resulting target inter-frame prediction pixels of the current image block form the optimal prediction pixel block. Even when the video image suffers from afterimages (ghosting) and/or blurred edges, the target inter-frame prediction pixels remain close to the original pixels of the current image block, improving inter-frame prediction accuracy and coding performance.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of an inter-frame prediction method provided in embodiment 1 of the present application;
Fig. 2 is a flowchart of an implementation of step 101 provided in embodiment 1 of the present application;
Fig. 3 is a flowchart of an implementation of step 201 provided in embodiment 1 of the present application;
Fig. 4 is a flowchart of an implementation of step 302 provided in embodiment 1 of the present application;
Figs. 5a to 5d are schematic views of target areas provided in embodiment 1 of the present application;
Fig. 6 is a flowchart of another implementation of step 201 provided in embodiment 1 of the present application;
Fig. 7 is a schematic view of a target area provided in embodiment 1 of the present application;
Fig. 8 is a flowchart of an inter-frame prediction method provided in embodiment 2 of the present application;
Fig. 9 is a flowchart of an inter-frame prediction method provided in embodiment 3 of the present application;
Fig. 10 is a structural diagram of an encoding-end device provided in the present application;
Fig. 11 is a schematic hardware structure diagram of an encoding-end device provided in the present application;
Fig. 12 is a structural diagram of a decoding-end device provided in the present application;
Fig. 13 is a schematic hardware structure diagram of the decoding-end device shown in fig. 12;
Fig. 14 is another structural diagram of a decoding-end device provided in the present application;
Fig. 15 is a schematic hardware structure diagram of the decoding-end device shown in fig. 14.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Example 1:
Embodiment 1 is described as applied to the encoding end.
Referring to fig. 1, fig. 1 is a flowchart of the inter-frame prediction method provided in embodiment 1 of the present application. The method is applied to the encoding end and may comprise the following steps:

Step 101: determining a target multi-hypothesis prediction (MHP) position set according to the motion information of the current image block.
In this application, the size of the current image block is not limited; it may be the largest coding tree unit (CTU) specified in a coding standard, or larger or smaller than a CTU, which is not specifically limited in this application.
In this application, the motion information of the current image block may refer to motion-related coding information, such as a motion vector (MV) and a reference frame index, and is not limited in this application. Here, the motion vector represents the relative displacement between the current image block and its best matching block in the reference image. The reference frame index is the number of the reference frame, in a preset reference frame list, adopted by the current image block.
In the inter-frame prediction method and device of this application, the target MHP position set used for inter-frame prediction is derived from the motion information of the current image block, and inter-frame prediction is then performed based on that set. Compared with determining the target MHP position set in a fixed manner, this improves inter-frame prediction accuracy. There are many ways to implement step 101 (determining the target MHP position set according to the motion information of the current image block); the flow shown in fig. 2 illustrates one of them and is not repeated here.
In the present application, a candidate inter-frame prediction pixel refers to a pixel derived from a reference frame of the current image block, and serves as the basis for obtaining the target inter-frame prediction pixel of the current image block; see step 103 for details.
As an embodiment, a target MHP position may be represented by a motion vector, or by a motion vector and a reference frame index. The reference frame index here indicates which reference frame in the preset reference frame list the target MHP position uses. In step 102, when a target MHP position is represented by a motion vector only, the motion vector of the target MHP position by default points to the same reference frame as the current image block, so as to obtain the candidate inter-frame prediction pixel corresponding to each target MHP position. When a target MHP position is represented by a motion vector and a reference frame index, the candidate inter-frame prediction pixel corresponding to each target MHP position is obtained according to the motion vector of that position and the reference frame pointed to by the motion vector.
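As an illustrative sketch of how a candidate inter-frame prediction pixel block is fetched from a reference frame for a given target MHP position (integer-pel only; sub-pel interpolation, boundary padding, and all names here are assumptions for illustration, not the application's normative procedure):

```python
def fetch_candidate_pixels(ref_frame, block_x, block_y, w, h, mv):
    """Fetch the candidate inter-frame prediction pixels for a block at
    (block_x, block_y) of size w x h, displaced by motion vector
    mv = (dx, dy) in the reference frame (a 2D list of pixel values).

    Integer-pel displacement only; sub-pel interpolation and edge
    padding are omitted for brevity.
    """
    dx, dy = mv
    return [row[block_x + dx : block_x + dx + w]
            for row in ref_frame[block_y + dy : block_y + dy + h]]
```

Each target MHP position in the set yields one such candidate block; when the position also carries a reference frame index, `ref_frame` is the frame selected by that index.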
And 103, determining a target inter-frame prediction pixel of the current image block according to each candidate inter-frame prediction pixel.
As an embodiment, in step 103, determining the target inter-frame prediction pixel of the current image block according to each candidate inter-frame prediction pixel may include: performing a weighting operation on the candidate inter-frame prediction pixels according to the weighting coefficient of each candidate inter-frame prediction pixel, and determining the result of the weighting operation as the target inter-frame prediction pixel of the current image block.
In one example, the weighting coefficients of the candidate inter-frame prediction pixels may all be the same. In that case, performing the weighting operation on the candidate prediction pixels according to their weighting coefficients is equivalent to averaging the candidate inter-frame prediction pixels.
In another example, the weighting coefficients of the candidate inter-frame prediction pixels may differ; for example, the weighting coefficients may be set according to the distance between the target MHP position corresponding to each candidate inter-frame prediction pixel and the current image block. One possible setting is: the closer the distance, the larger the weighting coefficient. In particular, when the motion information of the current image block is itself one of the positions in the target MHP position set, the candidate inter-frame prediction pixel corresponding to that target MHP position (which is essentially the motion information of the current image block) has the largest weighting coefficient.
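The weighting operation described above can be sketched as follows; this is an illustrative example only (the helper name, weight normalization, and rounding choice are assumptions, not the application's normative behavior):

```python
def weighted_prediction(candidates, weights):
    """Combine candidate inter-frame prediction pixel blocks into one
    target prediction block by a per-candidate weighted sum.

    candidates: list of equal-sized pixel blocks (lists of rows of ints)
    weights:    one non-negative weight per candidate; they are
                normalized here, so equal weights reduce to averaging.
    """
    total = sum(weights)
    rows, cols = len(candidates[0]), len(candidates[0][0])
    target = [[0.0] * cols for _ in range(rows)]
    for block, w in zip(candidates, weights):
        for r in range(rows):
            for c in range(cols):
                target[r][c] += block[r][c] * w / total
    # Round back to integer pixel values.
    return [[int(round(v)) for v in row] for row in target]
```

With equal weights this is plain averaging; with distance-based weights, the candidate whose target MHP position is closest to the current image block dominates the result.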
Thus, the flow shown in fig. 1 is completed.
As can be seen from the flow shown in fig. 1, in the present application, the encoding end determines the target MHP position set according to the motion information of the current image block, and then derives the target inter-frame prediction pixel of the current image block from the candidate inter-frame prediction pixels corresponding to the target MHP positions in that set, so that the finally obtained target inter-frame prediction pixel of the current image block is the best prediction pixel block. Even if the video image suffers from ghosting and/or blurred edges, the finally obtained target inter-frame prediction pixel of the current image block remains close to the original pixels of the current image block and is the best prediction pixel block, which improves inter-frame prediction accuracy and coding performance.
Referring to fig. 2, fig. 2 is a flowchart of an embodiment implementation of step 101 provided in embodiment 1 of the present application. As shown in fig. 2, the process may include the following steps:
Here, the motion information of the current image block may be motion-related encoding information.
As an embodiment, the motion information of the current image block may be the Motion Vector (MV) of the current image block. Fig. 3 illustrates how to determine the candidate MHP position set according to the motion information of the current image block when that motion information is the motion vector of the current image block; details are given below.
As another embodiment, the motion information of the current image block may be the motion vector of the current image block together with the reference frame index (denoted the first reference frame index) of the reference frame adopted by the current image block. The first reference frame index here is the index of a reference frame in the reference frame list of the current frame recorded by the encoding end. Fig. 6 illustrates determining the candidate MHP position set when the motion information of the current image block consists of a motion vector and the first reference frame index; details are given below.
At step 202, at least two candidate MHP positions are selected from the candidate MHP position set.
As an embodiment, in step 202, at least two candidate MHP positions may be selected at random from the candidate MHP position set; alternatively, the candidate MHP positions in the set may be prioritized according to a preset requirement or condition, and at least two candidate MHP positions selected in descending order of priority, for example with a smaller distance corresponding to a higher priority, and so on.
As another embodiment, in step 202, at least two candidate MHP positions that satisfy a requirement may be selected from the candidate MHP position set according to actual needs. The requirements include, but are not limited to, an external command, a service requirement, the distance to the current image block satisfying a set distance, and the like.
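The distance-based priority selection described above can be sketched as follows (a minimal illustration assuming the priority anchor is the end point of the current block's motion vector; the function name and Euclidean metric are assumptions):

```python
import math

def select_target_positions(candidates, anchor, k=2):
    """Select k candidate MHP positions from the candidate MHP position
    set to form the target MHP position set, ranked by distance to an
    anchor point (e.g. the end point of the current block's motion
    vector): the smaller the distance, the higher the priority.
    """
    ranked = sorted(candidates,
                    key=lambda p: math.hypot(p[0] - anchor[0],
                                             p[1] - anchor[1]))
    return ranked[:k]
```

Random selection, as also permitted by step 202, would simply replace the sort with a random sample of size k.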
And step 203, forming the selected candidate MHP positions into the target MHP position set.
Finally, the determination of the target MHP position set according to the motion information of the current image block in step 101 is realized through the process shown in fig. 2.
Referring to fig. 3, fig. 3 is a flowchart for implementing step 201 provided in embodiment 1 of the present application. As shown in fig. 3, the process may include the following steps:
Through the flow shown in fig. 3, when the motion information includes the motion vector, the candidate MHP position set of the current image block is constrained according to the motion vector of the current image block: each candidate MHP position in the constrained set lies along the direction of the motion vector and satisfies a preset condition. Compared with the existing method of selecting several position points around the current image block for inter-frame prediction, subsequent inter-frame prediction based on this candidate MHP position set is more accurate.
It should be noted that there are many ways to implement step 302, such as the following.
In one example, step 302 may first select candidate MHP positions based on the offset of the motion vector in the lateral direction and/or in the longitudinal direction. When a position is selected based on the lateral offset of the motion vector, it often happens that, if the position is to lie along the direction of the motion vector, the value of its longitudinal component is not an integer; a similar situation occurs when a position is selected based on the longitudinal offset. In such cases, the embodiment of the present application may round up or round down. For example, suppose the end point of the motion vector is (1, 2) and a position is selected based on a longitudinal offset of +1 along the direction of the motion vector, i.e., the longitudinal component of the selectable position is 3. If the position were to lie exactly along the direction of the motion vector, its lateral component would be 1.5; but the lateral and longitudinal components of a pixel position must be integers. Thus 1.5 may be rounded down to 1, selecting the position (1, 3), or rounded up to 2, selecting the position (2, 3).
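A minimal sketch of this rounding step, assuming the motion vector starts at the origin and a non-zero longitudinal component (the function name and the floor/ceil switch are illustrative, not defined by the application):

```python
import math

def position_along_mv(mv, dy, round_up=False):
    """Step dy along the direction of motion vector mv = (a, b), from
    its end point, and return an integer pixel position.

    The longitudinal component of the new position is b + dy; staying
    exactly on the MV direction would make the lateral component
    a + dy * a / b, which may be fractional, so it is rounded down
    (or up, if round_up is set). Assumes b != 0.
    """
    a, b = mv
    x = a + dy * a / b          # exact lateral component on the MV line
    x = math.ceil(x) if round_up else math.floor(x)
    return (x, b + dy)
```

Running the document's own example, mv = (1, 2) with a longitudinal offset of +1, yields (1, 3) when rounding down and (2, 3) when rounding up.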
In another example, step 302 may be implemented as follows: at least two positions that lie along the direction of the motion vector and satisfy a preset condition are taken at random as candidate MHP positions to form the candidate MHP position set. For example, if the number N of candidate MHP positions in the set is 2, the set may include the motion vector of the current image block (denoted MV1) and a first candidate MV generated from MV1 by an offset p1 in the lateral direction and an offset p2 in the longitudinal direction. Alternatively, when N is 2, the set may include a second candidate MV generated from MV1 by an offset p3 in the lateral direction and an offset p4 in the longitudinal direction, and a third candidate MV generated from MV1 by an offset p5 in the lateral direction and an offset p6 in the longitudinal direction. When N is 3, the set may include MV1, a fourth candidate MV, and a fifth candidate MV, where the fourth and fifth candidate MVs may be symmetric or asymmetric about MV1: the fourth candidate MV is generated from MV1 by an offset p7 in the lateral direction and an offset p8 in the longitudinal direction, the fifth candidate MV by an offset p9 in the lateral direction and an offset p10 in the longitudinal direction, and so on. A lateral offset may be the same as or different from a longitudinal offset; the lateral offset may be in the positive or negative lateral direction, and the longitudinal offset in the positive or negative longitudinal direction; the offset directions are ultimately determined according to the principle that the resulting position is closest to the end point of MV1.
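The offset-based construction above can be sketched as follows (the offset values and N are illustrative parameters chosen by the caller, not values fixed by the application):

```python
def build_candidate_set(mv1, offsets, include_mv1=True):
    """Build a candidate MHP position set from MV1 and a list of
    (lateral, longitudinal) offsets applied to MV1's end point.
    Each offset component may be positive or negative, matching the
    symmetric or asymmetric cases described above.
    """
    a, b = mv1
    candidates = [mv1] if include_mv1 else []
    candidates += [(a + dx, b + dy) for dx, dy in offsets]
    return candidates
```

For N = 3 with symmetric offsets, `build_candidate_set((4, 4), [(1, 1), (-1, -1)])` yields MV1 plus one candidate on each side of it along the MV direction.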
In yet another example, step 302 may be implemented by coordinate changes. Fig. 4 illustrates how step 302 is implemented by coordinate change, which is not described herein for the sake of brevity.
As an embodiment, in this step 302, the preset condition may be set according to actual requirements, for example, the distance from the ending point of the motion vector is less than a preset distance value, and the preset distance value is set to determine the candidate MHP position within a range close to the ending point of the motion vector.
The flow shown in fig. 3 is completed.
How to determine the candidate MHP position set according to the motion information of the current image block when the motion information of the current image block includes a motion vector is realized through the process shown in fig. 3.
It should be noted that, when the motion information of the current image block shown in fig. 3 includes a motion vector, how to determine the candidate MHP position set according to the motion information of the current image block is merely an example and is not limited.
Referring to fig. 4, fig. 4 is a flowchart of an implementation of step 302 provided in embodiment 1 of the present application. As shown in fig. 4, the process may include the following steps:
The target area here is an area in the target coordinate system that matches the direction of the motion vector.
In one example, the target coordinate system differs from the coordinate system in which the motion vector of the current image block is located; it is a coordinate system with the first designated point as its origin. When the motion vector of the current image block is a zero motion vector, the first designated point may be the origin of the coordinate system in which the motion vector is located; when the motion vector is not a zero motion vector, the first designated point may not be that origin. It should be noted that "first designated point" is named only for convenience of description and is not limiting.
As an embodiment, the first designated point here is the end point of the motion vector, or another point along the direction of the motion vector other than the end point.
In step 401, the target area is determined in the target coordinate system according to the direction of the motion vector of the current image block, i.e., according to the values of its lateral component and longitudinal component:
- If the product of the lateral component and the longitudinal component of the motion vector is greater than 0 (i.e., the motion vector points into the first or third coordinate quadrant of its own coordinate system), the lateral coordinate axis, the longitudinal coordinate axis, the first coordinate quadrant, and the third coordinate quadrant of the target coordinate system (which is not the coordinate system of the motion vector) are determined as the target area.
- If the product of the lateral component and the longitudinal component is less than 0 (i.e., the motion vector points into the second or fourth coordinate quadrant of its own coordinate system), the lateral coordinate axis, the longitudinal coordinate axis, the second coordinate quadrant, and the fourth coordinate quadrant of the target coordinate system are determined as the target area.
- If the lateral component is 0 (i.e., the motion vector points along the longitudinal coordinate axis of its own coordinate system), the longitudinal coordinate axis of the target coordinate system is determined as the target area.
- If the longitudinal component is 0 (i.e., the motion vector points along the lateral coordinate axis of its own coordinate system), the lateral coordinate axis of the target coordinate system is determined as the target area.
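The four cases of step 401 can be sketched as a simple classifier (the string labels are illustrative; the zero-motion-vector case is not defined by step 401 and is marked as an assumption):

```python
def target_area(mv):
    """Classify the target area in the target coordinate system from
    the direction of motion vector mv = (a, b), per step 401.
    'x-axis'/'y-axis' denote the lateral/longitudinal coordinate axes;
    quadrants are numbered 1..4.
    """
    a, b = mv
    if a * b > 0:
        return ['x-axis', 'y-axis', 'quadrant 1', 'quadrant 3']
    if a * b < 0:
        return ['x-axis', 'y-axis', 'quadrant 2', 'quadrant 4']
    if a == 0 and b != 0:
        return ['y-axis']
    if b == 0 and a != 0:
        return ['x-axis']
    return []  # zero motion vector: not covered by step 401 (assumption)
```

Examples 1 to 4 below correspond to the four return branches in order.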
The flow shown in fig. 4 is completed.
The flow shown in fig. 4 is described below through 4 examples:
example 1:
In example 1, it is assumed that the motion vector of the current image block is denoted MV1, where the lateral component of MV1 is a and the longitudinal component is b.
In example 1, according to the flow shown in fig. 4, assume that the origin of the target coordinate system is at the end point (a, b) of MV1, and that the candidate MHP position set is represented by motion vectors. When ab is greater than 0:
if the number N of candidate MHP positions in the set of candidate MHP positions is 2, the origin, B1 point (k1, k2) in the target coordinate system shown in fig. 5a can be taken as the candidate MHP positions, that is, the set of candidate MHP positions is: { (0,0), (k1, k2) } in the target coordinate system. And the coordinate system (marked as the original coordinate system) where the motion vector applied to the current image block is located, the candidate MHP position set is transformed into: { (a, b), (a + k1, b + k2) } of the original coordinate system.
Still taking the number N of candidate MHP positions in the candidate MHP position set as 2, B2 points (-k3, -k4), B1 points (k1, k2) in the target coordinate system shown in fig. 5a may also be taken as candidate MHP positions, that is, the candidate MHP position set is: { (-k3, -k4), (k1, k2) } in the target coordinate system. And the coordinate system (marked as the original coordinate system) where the motion vector applied to the current image block is located, the candidate MHP position set is transformed into: { (a-k3, b-k4), (a + k1, b + k2) } of the original coordinate system.
Taking the number N of candidate MHP positions in the candidate MHP position set as 3 as an example, B2 points (-k3, -k4), the origin, and B1 points (k1, k2) in the target coordinate system shown in fig. 5a may also be used as candidate MHP positions, that is, the candidate MHP position set is: { (-k3, -k4), (0,0), (k1, k2) } in the target coordinate system. And the coordinate system (marked as the original coordinate system) where the motion vector applied to the current image block is located, the candidate MHP position set is transformed into: { (a-k3, b-k4), (a, b), (a + k1, b + k2) } of the original coordinate system.
The number N of candidate MHP positions in the set of candidate MHP positions may also be other values, which is similar to the case where N is 2 or 3, and is not described in detail here.
In the above, k1 to k4 are integers, which may be equal or different, and may be chosen according to the principle of being closest to the end point of MV1.
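The transform from the target coordinate system back to the original coordinate system in examples 1 to 4 is simply a translation by the origin of the target coordinate system (the end point of MV1); a sketch, with illustrative names:

```python
def to_original_coords(candidates, origin):
    """Translate candidate MHP positions expressed in the target
    coordinate system into the original coordinate system, where
    `origin` is the target system's origin (e.g. the end point (a, b)
    of MV1) expressed in original coordinates.
    """
    ox, oy = origin
    return [(x + ox, y + oy) for x, y in candidates]
```

For instance, with origin (a, b), the target-system set { (-k3, -k4), (0, 0), (k1, k2) } maps to { (a - k3, b - k4), (a, b), (a + k1, b + k2) }, matching the N = 3 case above.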
Thus, the description of example 1 is completed.
Example 2:
In example 2, it is assumed that the motion vector of the current image block is denoted MV1, where the lateral component of MV1 is a and the longitudinal component is b.
In example 2, according to the flow shown in fig. 4, assume that the origin of the target coordinate system is at the end point (a, b) of MV1, and that the candidate MHP position set is represented by motion vectors. When ab is less than 0:
if the number N of candidate MHP positions in the set of candidate MHP positions is 2, the origin, C1 point (-k5, k6) in the target coordinate system shown in fig. 5b can be taken as the candidate MHP positions, that is, the set of candidate MHP positions is: { (0,0), (-k5, k6) } in the target coordinate system. And the coordinate system (marked as the original coordinate system) where the motion vector applied to the current image block is located, the candidate MHP position set is transformed into: { (a, b), (a-k5, b + k6) } of the original coordinate system.
Still taking the number N of candidate MHP positions in the candidate MHP position set as 2, C1 points (-k5, k6) and C2 points (k7, -k8) in the target coordinate system shown in fig. 5b may also be taken as candidate MHP positions, that is, the candidate MHP position set is: { (-k5, k6), (k7, -k8) } in the target coordinate system. And the coordinate system (marked as the original coordinate system) where the motion vector applied to the current image block is located, the candidate MHP position set is transformed into: { (a-k5, b + k6), (a + k7, b-k 8) } of the original coordinate system.
Taking the number N of candidate MHP positions in the candidate MHP position set as 3 as an example, the C1 point (-k5, k6), the origin, and the C2 points (k7, -k8) in the target coordinate system shown in fig. 5b may also be used as candidate MHP positions, that is, the candidate MHP position set is: { (-k5, k6), (0,0), (k7, -k8) } in the target coordinate system. And the coordinate system (marked as the original coordinate system) where the motion vector applied to the current image block is located, the candidate MHP position set is transformed into: { (a-k5, b + k6), (a, b), (a + k7, b-k 8) } of the original coordinate system.
The number N of candidate MHP positions in the set of candidate MHP positions may also be other values, which is similar to the case where N is 2 or 3, and is not described in detail here.
In the above, k5 to k8 are integers, which may be equal or different, and may be chosen according to the principle of being closest to the end point of MV1.
Thus, the description of example 2 is completed.
Example 3:
In example 3, it is assumed that the motion vector of the current image block is MV1, where the lateral component of MV1 is 0 and the longitudinal component is b, with b ≠ 0.
In example 3, according to the flow shown in fig. 4, assume that the origin of the target coordinate system is at the end point (0, b) of MV1, and that the candidate MHP position set is represented by motion vectors. Then:
if the number N of candidate MHP positions in the set of candidate MHP positions is 2, the origin, D1 point (0, k9) in the target coordinate system shown in fig. 5c may be taken as the candidate MHP positions, that is, the set of candidate MHP positions is: { (0,0), (0, k9) } in the target coordinate system. And the coordinate system (marked as the original coordinate system) where the motion vector applied to the current image block is located, the candidate MHP position set is transformed into: of the original coordinate system { (a, b), (a, b + k9) }.
Still taking the number N of candidate MHP positions in the set of candidate MHP positions as 2, D1 points (0, k9) and D2 points (0, -k10) in the target coordinate system shown in fig. 5c may also be used as candidate MHP positions, that is, the set of candidate MHP positions is: { (0, k9), (0, -k10) } in the target coordinate system. And the coordinate system (marked as the original coordinate system) where the motion vector applied to the current image block is located, the candidate MHP position set is transformed into: { (a, b + k9), (a, b-k 10) } of the original coordinate system.
Taking the number N of candidate MHP positions in the candidate MHP position set as 3 as an example, the D1 point (0, k9), the origin, and the D2 point (0, -k10) in the target coordinate system shown in fig. 5c may also be used as candidate MHP positions, that is, the candidate MHP position set is: { (0, k9), (0,0), (0, -k10) } in the target coordinate system. And the coordinate system (marked as the original coordinate system) where the motion vector applied to the current image block is located, the candidate MHP position set is transformed into: { (a, b + k9), (a, b), (a, b-k 10) } of the original coordinate system.
The number N of candidate MHP positions in the set of candidate MHP positions may also be other values, which is similar to the case where N is 2 or 3, and is not described in detail here.
In the above, k9 and k10 are integers, which may be equal or different, and may be chosen according to the principle of being closest to the end point of MV1.
Thus, the description of example 3 is completed.
Example 4:
In example 4, it is assumed that the motion vector of the current image block is MV1, where the lateral component of MV1 is a, with a ≠ 0, and the longitudinal component is 0.
In example 4, according to the flow shown in fig. 4, assume that the origin of the target coordinate system is at the end point (a, 0) of MV1, and that the candidate MHP position set is represented by motion vectors. Then:
if the number N of candidate MHP positions in the set of candidate MHP positions is 2, the origin, E1 point (k11, 0) in the target coordinate system shown in fig. 5d can be taken as the candidate MHP positions, that is, the set of candidate MHP positions is: { (0,0), (k11, 0) } in the target coordinate system. And the coordinate system (marked as the original coordinate system) where the motion vector applied to the current image block is located, the candidate MHP position set is transformed into: of the original coordinate system { (a, b), (a + k11, b) }.
Still taking the number N of candidate MHP positions in the candidate MHP position set as 2 as an example, the E1 point (k11, 0) and the E2 point (-k12, 0) in the target coordinate system shown in fig. 5d may also be taken as candidate MHP positions, that is, the candidate MHP position set is: { (-k12, 0), (k11, 0) } in the target coordinate system. And the coordinate system (marked as the original coordinate system) where the motion vector applied to the current image block is located, the candidate MHP position set is transformed into: { (a-k12, b), (a + k11, b) } of the original coordinate system.
Taking the number N of candidate MHP positions in the candidate MHP position set as 3 as an example, the point E1 (k11, 0), the origin, and the point E2 (-k12, 0) in the target coordinate system shown in fig. 5d may also be used as candidate MHP positions, that is, the candidate MHP position set is: { (-k12, 0), (0,0), (k11, 0) } in the target coordinate system. And the coordinate system (marked as the original coordinate system) where the motion vector applied to the current image block is located, the candidate MHP position set is transformed into: of the original coordinate system { (a-k12, b), (a, b), (a + k11, b) }
The number N of candidate MHP positions in the set of candidate MHP positions may also be other values, which is similar to the case where N is 2 or 3, and is not described in detail here.
In the above, k11 and k12 are integers, which may be equal or different, and may be chosen according to the principle of being closest to the end point of MV1.
Thus, the description of example 4 is completed.
The above 4 examples illustrate how, per the flow shown in fig. 4, at least two positions along the direction of the motion vector that satisfy a preset condition are taken as candidate MHP positions to form the candidate MHP position set.
It should be noted that the above 4 examples are only examples for easy understanding and are not intended to be limiting.
Referring to fig. 6, fig. 6 is a flowchart of another implementation of step 201 provided in embodiment 1 of the present application. As shown in fig. 6, the process may include the following steps:
The second reference frame index is an index of a reference frame in a reference frame list recorded by the encoding end.
In one example, when the first reference frame index is the same as the second reference frame index, the determination of the candidate MHP position set in step 602 may refer to step 302, and details are not repeated here.
In another example, when the first reference frame index differs from the second reference frame index, step 602 may still refer to step 302, except that the origin of the target coordinate system changes: it becomes the end point of the target motion vector, or another point along the direction of the target motion vector other than its end point. The target motion vector is obtained by multiplying the motion vector of the current image block by L, where L is the ratio of L1 to L2, L1 is the temporal span between the reference frame corresponding to the second reference frame index and the current frame in which the current image block is located, and L2 is the temporal span between the reference frame corresponding to the first reference frame index and the current frame. An example follows:
Assuming that L1 is 3 and L2 is 1, so that L is 3, and that the motion vector of the current image block is MV1 with lateral component a and longitudinal component b, the target motion vector is 3·MV1, whose lateral component is 3a and longitudinal component is 3b.
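A sketch of this temporal scaling (the example above has integer L; a real codec would typically use fixed-point scaling with rounding, which is omitted here as an assumption-laden simplification):

```python
def scale_motion_vector(mv, l1, l2):
    """Scale the current block's motion vector by L = L1 / L2, where L1
    is the temporal span from the current frame to the reference frame
    of the second reference frame index, and L2 the span to the
    reference frame of the first reference frame index. The result is
    the target motion vector whose end point becomes the origin of the
    target coordinate system.
    """
    a, b = mv
    return (a * l1 / l2, b * l1 / l2)
```

With L1 = 3 and L2 = 1 as in the example, MV1 = (a, b) scales to (3a, 3b).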
Continuing the example, assume that the origin of the target coordinate system is at the end point (3a, 3b) of the target motion vector and that the candidate MHP position set is represented by motion vectors. When ab is greater than 0, the lateral coordinate axis, the longitudinal coordinate axis, the first coordinate quadrant, and the third coordinate quadrant of the target coordinate system shown in fig. 7 may be determined as the target area, and at least two positions in the target area taken as candidate MHP positions to form the candidate MHP position set. For example:
if the number N of candidate MHP positions in the set of candidate MHP positions is 2, the origin, F1 point (k13, k14) in the target coordinate system shown in fig. 7 can be taken as the candidate MHP positions, that is, the set of candidate MHP positions is: { (0,0), (k13, k14) } in the target coordinate system. And the coordinate system (marked as the original coordinate system) where the motion vector applied to the current image block is located, the candidate MHP position set is transformed into: { (3a,3b), (3a + k13,3b + k14) } of the original coordinate system. Wherein each candidate MHP position in the set of candidate MHP positions corresponds to a second reference frame index.
The flow shown in fig. 6 is completed.
How to determine the candidate MHP position set when the motion information of the current image block includes the motion vector and the first reference frame index is realized by the flow shown in fig. 6.
In this embodiment 1, after determining the target MHP positions for the current image block, the encoding end may send the bitstream to the decoding end carrying MHP position policy information of the current image block, where the MHP position policy information provides the decoding end with the policy for determining the target MHP positions.
In one example, the MHP position policy information includes first indication information; the first indication information is used to indicate the available candidate MHP position set when multiple candidate MHP position sets exist.
In another example, the MHP position policy information includes second indication information; the second indication information is used to indicate the available target MHP position.
In one example, the MHP position policy information includes third indication information; the third indication information comprises a motion information offset, indicating that the target MHP positions are determined according to the motion information of the current image block and the motion information offset. For example, suppose the motion information of the current image block is represented by its motion vector, say (2, 2); if the motion information offset is 1, then (3, 3) and/or (1, 1), obtained from (2, 2) and the offset 1, are the target MHP positions of the current image block.
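The third-indication derivation can be sketched as follows; applying the offset to both components in both directions reproduces the (3, 3)/(1, 1) example, but that symmetric-application choice is an assumption of this sketch, not something the application fixes:

```python
def targets_from_offset(mv, offset):
    """Derive target MHP positions from the current block's motion
    vector and a signalled motion information offset, by shifting both
    components of the MV by +offset and by -offset (illustrative
    symmetric application).
    """
    a, b = mv
    return [(a + offset, b + offset), (a - offset, b - offset)]
```

With MV (2, 2) and offset 1, this yields the two target MHP positions (3, 3) and (1, 1) from the example.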
In addition, in this embodiment 1, after the encoding end determines the target MHP positions for the current image block, as an embodiment, at least one reference MHP position of the current image block may further be stored to provide an inter-frame prediction reference for image blocks adjacent to the current image block. In particular, in the bidirectional reference frame mode, two reference MHP positions can be stored. It should be noted that a stored reference MHP position may be a target MHP position of the current image block, or a result computed from at least one target MHP position of the current image block, and so on.
The description of embodiment 1 is completed so far.
Example 2:
this embodiment 2 is described as applied to the decoding side.
Referring to fig. 8, fig. 8 is a flowchart of an inter prediction method provided in embodiment 2 of the present application. The method is applied to a decoding end and can comprise the following steps:
The MHP location policy information here is a policy for instructing the decoding end to determine the location of the target MHP. The MHP location policy information will be described in the following by way of example, and will not be described herein again.
This step 802 is similar to the step 102 described above and will not be described again.
This step 803 is similar to the step 103 described above and will not be described again.
The flow shown in fig. 8 is completed.
As can be seen from the process shown in fig. 8, in the present application, the decoding end determines a target MHP position set corresponding to the current image block according to the motion information of the current image block and the MHP position policy information carried in the received coded bitstream from the encoding end, and then derives the target inter-frame prediction pixel of the current image block from the candidate inter-frame prediction pixels corresponding to the target MHP positions in the set. This ensures that the finally obtained target inter-frame prediction pixel of the current image block is the best prediction pixel block. Even when the video image suffers from ghosting and/or edge blurring, the finally obtained target inter-frame prediction pixel remains closer to the original pixel of the current image block, which improves inter-frame prediction accuracy and coding performance.
In this embodiment 2, as an embodiment, the MHP position policy information includes first indication information, where the first indication information is used to indicate the available candidate MHP position sets. Here, the decoding end determines the candidate MHP position sets in a manner similar to the encoding end, and all candidate MHP position sets determined by the decoding end are the same as those determined by the encoding end.
In one example, the first indication information may include: the set of available candidate MHP locations is represented by an indicator value of 1 and the set of unavailable candidate MHP locations is represented by an indicator value of 0. In another example, the first indication information may include: only the set of available candidate MHP locations is indicated, while the set of unavailable candidate MHP locations is not indicated.
Based on the first indication information, the step 801 may include: and determining a target MHP position set according to the available candidate MHP position set indicated by the first indication information. The final set of target MHP locations comprises a set of available candidate MHP locations indicated by the first indication information.
As another embodiment, the MHP position policy information includes second indication information for indicating the available target MHP positions. Here, the decoding end determines the candidate MHP position set in a manner similar to the encoding end.
In one example, the second indication information may include: available target MHP positions in the target MHP position set determined by the encoding end are represented by an indicator value 1, and unavailable target MHP positions are represented by an indicator value 0. In another example, only available target MHP locations are indicated, while unavailable target MHP locations are not indicated.
Based on the second indication information, the step 801 may include: determining a candidate MHP position set according to the motion information of the current image block; and determining target MHP positions from the candidate MHP position set according to the available target MHP positions indicated by the second indication information and forming the target MHP position set. The final set of target MHP locations comprises available target MHP locations indicated by the second indication information.
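The selection in this step can be sketched as follows. This is a hedged illustration only; the function name and the 1/0 flag layout are assumptions based on the indicator-value example given for the second indication information.

```python
def select_targets(candidate_positions, flags):
    """Keep only the candidate MHP positions whose availability
    indicator value (from the second indication information) is 1."""
    return [p for p, f in zip(candidate_positions, flags) if f == 1]

candidates = [(3, 3), (1, 1), (2, 4)]
print(select_targets(candidates, [1, 0, 1]))  # [(3, 3), (2, 4)]
```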
As another embodiment, the MHP location policy information includes third indication information, where the third indication information includes a motion information offset to indicate that the target MHP location is determined according to the motion information of the current image block and the motion information offset.
Based on the third indication information, in this step 801, determining a target MHP position set corresponding to the current image block according to MHP position policy information carried in a received coded bit stream from a coding end includes:
determining the position of a target MHP according to the motion information offset indicated by the third indication information and the motion information of the current image block;
and forming the determined target MHP positions into the target MHP position set.
For example, if the motion information of the current image block processed by the decoding end is represented by its motion vector, e.g. (2, 2), and the motion information offset is 1, then (3, 3) and (1, 1), obtained from (2, 2) and the offset 1, are the target MHP positions of the current image block, and the finally determined target MHP position set of the current image block is {(3, 3), (1, 1)}.
It should be noted that limiting the MHP position policy information to the first, second, and third indication information above is only an example for ease of understanding and does not constitute a limitation.
The description of embodiment 2 is completed so far.
Example 3:
this embodiment 3 is described as applied to the decoding side.
Referring to fig. 9, fig. 9 is a flowchart of an inter prediction method provided in embodiment 3 of the present application. The method is applied to a decoding end and can comprise the following steps:
In this step 901, there are multiple implementation manners for determining the target MHP position set corresponding to the current image block according to the motion information of the current image block. One implementation may adopt the manner described in embodiment 1 of determining the target MHP position set from the motion information of the current image block. Another implementation may, provided that the received coded bitstream from the encoding end carries MHP position policy information, adopt the manner described in embodiment 2 of determining the target MHP position set from the motion information of the current image block and that MHP position policy information.
Step 902 is similar to step 102 described above and will not be described here.
Step 903 is similar to step 103 described above and will not be described here.
The flow shown in fig. 9 is completed.
As can be seen from the process shown in fig. 9, in the present application, the decoding end determines the target MHP position set according to the motion information of the current image block, and then derives the target inter-frame prediction pixel of the current image block from the candidate inter-frame prediction pixels corresponding to the target MHP positions in the set. This ensures that the finally obtained target inter-frame prediction pixel of the current image block is the best prediction pixel block. Even when the video image suffers from ghosting and/or edge blurring, the finally obtained target inter-frame prediction pixel remains closer to the original pixel of the current image block, which improves inter-frame prediction accuracy and coding performance.
The description of embodiment 3 is completed so far. The methods provided herein are described above.
It should be noted that, as an embodiment, in the present application, the size of the current image block may be greater than or equal to a preset size. The preset size may be set according to actual requirements, and as an embodiment, the preset size may be 8 × 8.
It should be further noted that, as an embodiment, in the present application, in the case that the current image block is a bi-directional inter-prediction block (i.e. the current image block itself has two sets of motion information), one implementation is: at the encoding end, the method provided in embodiment 1 of the present application is performed on one set of motion information of the current image block; at the decoding end, the method provided in embodiment 2 or 3 of the present application is performed on one set of motion information of the current image block.
As another embodiment, in the case that the current image block is a bi-directional inter-prediction block (i.e. the current image block itself has two sets of motion information), another implementation is: at the encoding end, the method provided in embodiment 1 of the present application is performed on both sets of motion information of the current image block; at the decoding end, the method provided in embodiment 2 or 3 of the present application is performed on both sets of motion information of the current image block.
When the method is applied to the encoding end and embodiment 1 of the present application is performed on both sets of motion information of the current image block, the encoding end may transmit two sets of coded bitstreams to the decoding end, respectively indicating the available MHP positions corresponding to each set of motion information of the current image block; or the encoding end transmits one set of coded bitstream to the decoding end, indicating the available MHP positions corresponding to each set of motion information of the current image block; or the encoding end transmits three sets of coded bitstreams T1 to T3 to the decoding end, where T1 indicates the available MHP positions common to the two sets of motion information of the current image block, T2 indicates the remaining available MHP positions, other than the common ones, corresponding to one set of motion information of the current image block, and T3 indicates the remaining available MHP positions, other than the common ones, corresponding to the other set of motion information of the current image block. And so on.
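The T1–T3 signaling variant above can be sketched as follows. This is a hypothetical illustration: the function name and the set-union reconstruction are assumptions about how a decoder might recover the full available set for each set of motion information from the common and remainder signals.

```python
def combine_signaling(t1_common, t2_first_rem, t3_second_rem):
    """Recover the full available MHP position set for each of the two
    sets of motion information: common positions (T1) plus the
    per-motion-information remainders (T2 and T3)."""
    first = sorted(set(t1_common) | set(t2_first_rem))
    second = sorted(set(t1_common) | set(t3_second_rem))
    return first, second

print(combine_signaling([(0, 0)], [(1, 1)], [(2, 2)]))
# ([(0, 0), (1, 1)], [(0, 0), (2, 2)])
```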
The following describes the apparatus provided in the present application:
referring to fig. 10, fig. 10 is a structural diagram of an encoding end device provided in the present application. As shown in fig. 10, the apparatus may include:
an MHP location unit 1001, configured to determine a target MHP location set according to the motion information of the current image block;
a determining unit 1002, configured to determine candidate inter-frame prediction pixels corresponding to each target MHP position in the target MHP position set;
the inter prediction unit 1003 is configured to determine a target inter prediction pixel of the current image block according to each candidate inter prediction pixel.
For one embodiment, the MHP location unit 1001 determining the target MHP location set according to the motion information of the current image block may include:
determining a candidate MHP position set according to the motion information of the current image block;
selecting at least two candidate MHP locations from the set of candidate MHP locations;
and forming the selected candidate MHP positions into the target MHP position set.
In one example, the motion information includes: a motion vector; based on this, the MHP location unit 1001 determining the candidate MHP location set according to the motion information of the current image block may include:
and selecting at least two positions of which the distance from the motion vector to the end point of the motion vector meets a preset condition as candidate MHP positions to form the candidate MHP position set according to the direction of the motion vector of the current image block.
In one example, the MHP position unit 1001 selects, according to the direction of the motion vector of the current image block, at least two positions whose distances from the end point of the motion vector satisfy a preset condition as candidate MHP positions to form the candidate MHP position set, including:
determining a target area in a target coordinate system according to the direction of a motion vector of a current image block, wherein the target area is an area matched with the direction of the motion vector in the target coordinate system;
and taking at least two positions in the target area as candidate MHP positions according to the preset condition to form the candidate MHP position set.
As one embodiment, the target coordinate system is a coordinate system with the first specified point as an origin;
the first specified point is the end point of the motion vector or other points in the direction of the motion vector except the end point.
In one example, the motion information further includes: a first reference frame index corresponding to a reference frame adopted by the current image block; each candidate MHP location in the set of candidate MHP locations corresponds to a second reference frame index.
When the first reference frame index is the same as the second reference frame index, the target coordinate system is a coordinate system with a first designated point as an origin, and the first designated point is a termination point of the motion vector or other points except the termination point in the direction of the motion vector.
When the first reference frame index is different from the second reference frame index, the target coordinate system is a coordinate system with a second designated point as an origin, the second designated point being a termination point of a target motion vector or other points in the direction of the motion vector except the termination point of the target motion vector;
the target motion vector is a motion vector obtained by multiplying a motion vector of the current image block by L, wherein L is a ratio of L1 to L2, L1 is a time-domain span between a reference frame corresponding to the second reference frame index and a current frame where the current image block is located, and L2 is a time-domain span between a reference frame corresponding to the first reference frame index and the current frame.
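The scaling just described can be written out as a short sketch. The function name is an assumption; the arithmetic follows the text directly: the target motion vector is the current motion vector multiplied by L = L1 / L2.

```python
def scale_mv(mv, l1, l2):
    """Scale the motion vector by L = L1 / L2, where L1 and L2 are the
    temporal spans from the current frame to the reference frames of the
    second and first reference frame indexes, respectively."""
    l = l1 / l2
    return (mv[0] * l, mv[1] * l)

print(scale_mv((4, -2), 1, 2))  # (2.0, -1.0)
```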
Here, the first reference frame index and the second reference frame index are indexes of reference frames in a reference frame list recorded by the encoding end.
In one example, the MHP position unit 1001 determining the target area in the target coordinate system based on the direction of the motion vector of the current image block includes:
If the product of the transverse component and the longitudinal component in the motion vector of the current image block is greater than 0, determining a first coordinate quadrant area and a third coordinate quadrant area in the target coordinate system as the target area;
and if the product of the transverse component and the longitudinal component in the motion vector of the current image block is less than 0, determining a second coordinate quadrant area and a fourth coordinate quadrant area in the target coordinate system as the target area.
In another example, the MHP position unit 1001 determines the target area in the target coordinate system according to the direction of the motion vector of the current image block includes:
if the transverse component in the motion vector of the current image block is 0, determining a longitudinal coordinate axis in the target coordinate system as the target area;
and if the longitudinal component in the motion vector of the current image block is 0, determining a transverse coordinate axis in the target coordinate system as the target area.
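The two examples above (quadrant selection by sign of the component product, and axis selection for zero components) can be combined into one sketch. The function name and string labels are illustrative assumptions, not part of the patent.

```python
def target_area(mv_x, mv_y):
    """Choose the target area in the target coordinate system from the
    transverse (mv_x) and longitudinal (mv_y) motion-vector components."""
    if mv_x == 0:
        return "longitudinal coordinate axis"
    if mv_y == 0:
        return "transverse coordinate axis"
    if mv_x * mv_y > 0:
        return "first and third quadrants"
    return "second and fourth quadrants"

print(target_area(2, 3))  # first and third quadrants
```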
Here, the inter prediction unit 1003 determining the target inter prediction pixel of the current image block according to each candidate inter prediction pixel includes:
performing weighting operation on each candidate prediction pixel according to the weighting coefficient of each candidate inter prediction pixel;
and determining a weighting operation result as a target inter-frame prediction pixel of the current image block.
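The weighting operation above can be sketched as follows, assuming (as an illustration only) that each candidate inter-frame prediction pixel is a small 2-D block and each block has one scalar weighting coefficient.

```python
def weighted_prediction(blocks, weights):
    """Element-wise weighted sum of candidate inter-prediction pixel
    blocks; the result is the target inter-prediction pixel block."""
    h, w = len(blocks[0]), len(blocks[0][0])
    out = [[0.0] * w for _ in range(h)]
    for block, wt in zip(blocks, weights):
        for i in range(h):
            for j in range(w):
                out[i][j] += wt * block[i][j]
    return out

# e.g. two 2x2 candidate blocks with equal weights 0.5 / 0.5
print(weighted_prediction([[[10, 20], [30, 40]], [[20, 40], [10, 0]]],
                          [0.5, 0.5]))  # [[15.0, 30.0], [20.0, 20.0]]
```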
As an embodiment, the MHP position unit 1001 further stores at least one target MHP position of the current image block to provide an inter-frame prediction reference for other image blocks adjacent to the current image block; or stores a calculation result calculated according to at least one target MHP position of the current image block, so as to provide an inter-frame prediction reference for other image blocks adjacent to the current image block.
In one example, as shown in fig. 10, the apparatus further comprises:
a sending unit 1004, configured to send, to a decoding end, a coded bitstream carrying MHP location policy information, where the MHP location policy information is used to provide, to the decoding end, a policy for determining a target MHP location.
As one embodiment, the MHP location policy information includes first indication information; the first indication information is used to indicate an available candidate MHP location set.
As another embodiment, the MHP location policy information includes second indication information; the second indication information is used for indicating the available target MHP position.
As yet another embodiment, the MHP location policy information includes third indication information; the third indication information comprises a motion information offset to indicate that the position of the target MHP is determined according to the motion information of the current image block and the motion information offset.
Thus, the structure of the encoding-side device shown in fig. 10 is completed.
The present application also provides a hardware structure diagram of the encoding-side device shown in fig. 10. As shown in fig. 11, the hardware structure may include:
a machine-readable storage medium 1101 to store machine-readable instructions;
a processor 1102 configured to read the machine readable instructions and execute the instructions to implement the inter prediction method described in embodiment 1.
Up to this point, the description of the hardware configuration shown in fig. 11 is completed.
Referring to fig. 12, fig. 12 is a block diagram of a decoding-side device provided in the present application. As shown in fig. 12, the decoding-side apparatus may include:
and the MHP location unit 1201 is configured to determine a target MHP location set corresponding to the current image block according to the motion information of the current image block and MHP location policy information carried in the received coded bitstream from the coding end.
A determining unit 1202, configured to determine candidate inter-frame prediction pixels corresponding to each target MHP position in the target MHP position set;
the inter prediction unit 1203 is configured to determine a target inter prediction pixel of the current image block according to each candidate inter prediction pixel.
For one embodiment, the MHP location policy information includes first indication information indicating a set of available candidate MHP locations;
the MHP location unit 1201 determines, according to the motion information of the current image block and the received MHP location policy information carried by the coded bitstream from the coding end, a target MHP location set corresponding to the current image block, including:
determining a candidate MHP position set according to the motion information of the current image block;
and determining a target MHP position set from all determined candidate MHP position sets according to the available candidate MHP position sets indicated by the first indication information.
As another embodiment, the MHP location policy information includes second indication information for indicating an available target MHP location;
the MHP location unit 1201 determines, according to the motion information of the current image block and the received MHP location policy information carried by the coded bitstream from the coding end, a target MHP location set corresponding to the current image block, including:
determining a candidate MHP position set according to the motion information of the current image block;
and selecting target MHP positions from the candidate MHP position set according to the available target MHP positions indicated by the second indication information and forming the target MHP position set.
As another embodiment, the MHP location policy information includes third indication information, where the third indication information includes a motion information offset to indicate that a target MHP location is determined according to the motion information of the current image block and the motion information offset;
the MHP location unit 1201 determines, according to MHP location policy information carried by a received coded bit stream from a coding end, a target MHP location set corresponding to a current image block, including:
determining the position of a target MHP according to the motion information offset indicated by the third indication information and the motion information of the current image block;
and forming the determined target MHP positions into the target MHP position set.
Thus, the decoding-side apparatus configuration diagram shown in fig. 12 is completed.
The present application also provides a hardware structure diagram of the decoding-side device shown in fig. 12. As shown in fig. 13, the hardware structure may include:
a machine-readable storage medium 1301 for storing machine-readable instructions;
a processor 1302, configured to read the machine readable instructions and execute the instructions to implement the inter prediction method described in embodiment 2.
Up to this point, the description of the hardware configuration shown in fig. 13 is completed.
Referring to fig. 14, fig. 14 is another structural diagram of the decoding-side device provided in the present application. As shown in fig. 14, the decoding-side apparatus includes:
an MHP location unit 1401, configured to determine, according to the motion information of the current image block, a target MHP location set corresponding to the current image block.
A determining unit 1402, configured to determine candidate inter-frame prediction pixels corresponding to each target MHP position in the target MHP position set;
the inter prediction unit 1403 is configured to determine a target inter prediction pixel of the current image block according to each candidate inter prediction pixel.
As an embodiment, determining, by MHP location unit 1401, a target MHP location set corresponding to the current image block according to the motion information of the current image block includes:
determining a candidate MHP position set according to the motion information of the current image block;
selecting at least two candidate MHP locations from the set of candidate MHP locations;
and forming the selected candidate MHP positions into the target MHP position set.
In one example, the motion information includes: a motion vector.
The MHP location unit 1401 determining a candidate MHP location set according to the motion information of the current image block includes:
Selecting, according to the direction of the motion vector of the current image block, at least two positions whose distances from the end point of the motion vector satisfy a preset condition as candidate MHP positions, to form the candidate MHP position set.
Here, the selecting, according to the direction of the motion vector of the current image block, at least two positions whose distances from the end point of the motion vector satisfy a preset condition as candidate MHP positions to form the candidate MHP position set includes:
determining a target area in a target coordinate system according to the direction of a motion vector of a current image block, wherein the target area is an area matched with the direction of the motion vector in the target coordinate system;
and taking at least two positions in the target area as candidate MHP positions according to the preset condition to form the candidate MHP position set.
The target coordinate system is a coordinate system with a first designated point as an origin; the first specified point is the end point of the motion vector or other points in the direction of the motion vector except the end point.
In another example, the motion information further includes: a first reference frame index corresponding to a reference frame adopted by the current image block;
each candidate MHP location in the set of candidate MHP locations corresponds to a second reference frame index.
In one example, the first reference frame index is the same as the second reference frame index;
the target coordinate system is a coordinate system with a first designated point as an origin, and the first designated point is a termination point of the motion vector or other points except the termination point in the direction of the motion vector.
In another example, the first reference frame index is different from the second reference frame index, the target coordinate system is a coordinate system with a second designated point as an origin, the second designated point is an end point of a target motion vector or a point other than the end point of the target motion vector in the direction of the motion vector;
the target motion vector is a motion vector obtained by multiplying a motion vector of the current image block by L, wherein L is a ratio of L1 to L2, L1 is a time-domain span between a reference frame corresponding to the second reference frame index and a current frame where the current image block is located, and L2 is a time-domain span between a reference frame corresponding to the first reference frame index and the current frame.
In this embodiment of the present application, the first reference frame index and the second reference frame index are indexes of reference frames in a reference frame list recorded by the decoding end.
In this embodiment of the present application, the determining a target area in a target coordinate system according to the direction of the motion vector of the current image block includes:
if the product of the transverse component and the longitudinal component in the motion vector of the current image block is greater than 0, determining a transverse coordinate axis, a longitudinal coordinate axis, a first coordinate quadrant area and a third coordinate quadrant area in the target coordinate system as the target area;
and if the product of the transverse component and the longitudinal component in the motion vector of the current image block is less than 0, determining a transverse coordinate axis, a longitudinal coordinate axis, a second coordinate quadrant area and a fourth coordinate quadrant area in the target coordinate system as the target area.
In another embodiment of the present application, the determining a target area in a target coordinate system according to the direction of the motion vector of the current image block includes:
if the transverse component in the motion vector of the current image block is 0, determining a longitudinal coordinate axis in the target coordinate system as the target area;
and if the longitudinal component in the motion vector of the current image block is 0, determining a transverse coordinate axis in the target coordinate system as the target area.
As another embodiment, determining, by MHP location unit 1401, a target MHP location set corresponding to the current image block according to the motion information of the current image block includes:
and determining a target MHP position set corresponding to the current image block according to the motion information of the current image block and the received MHP position strategy information carried by the coding bit stream from the coding end.
In one example, the MHP location policy information includes first indication information indicating a set of available candidate MHP locations;
the MHP position unit 1401 determining a target MHP position set corresponding to the current image block according to the motion information of the current image block and the received MHP position policy information carried by the coded bit stream from the coding end includes:
determining all candidate MHP position sets according to the motion information of the current image block;
and determining a target MHP position set from all candidate MHP position sets according to the available candidate MHP position sets indicated by the first indication information.
In one example, the MHP location policy information includes second indication information indicating an available target MHP location;
the MHP position unit 1401 determining a target MHP position set corresponding to the current image block according to the motion information of the current image block and the received MHP position policy information carried by the coded bit stream from the coding end includes:
determining a candidate MHP position set according to the motion information of the current image block;
and selecting target MHP positions from the candidate MHP position set according to the available target MHP positions indicated by the second indication information and forming the target MHP position set.
In one example, the MHP position policy information includes third indication information, where the third indication information includes a motion information offset to indicate that a target MHP position is determined according to the motion information of the current image block and the motion information offset;
the MHP position unit 1401 determining a target MHP position set corresponding to the current image block according to the motion information of the current image block and the received MHP position policy information carried by the coded bit stream from the coding end includes:
determining the position of a target MHP according to the motion information offset indicated by the third indication information and the motion information of the current image block;
and forming the determined target MHP positions into the target MHP position set.
As an embodiment, the determining, by the inter prediction unit 1403, a target inter prediction pixel of the current image block according to each candidate inter prediction pixel includes:
performing weighting operation on each candidate prediction pixel according to the weighting coefficient of each candidate inter prediction pixel;
and determining a weighting operation result as a target inter-frame prediction pixel of the current image block.
As one embodiment, the MHP position unit 1401 further stores at least one target MHP position of the current image block to provide an inter-frame prediction reference for other image blocks adjacent to the current image block; or stores a calculation result calculated according to at least one target MHP position of the current image block, so as to provide an inter-frame prediction reference for other image blocks adjacent to the current image block.
Thus, the decoding-side apparatus configuration diagram shown in fig. 14 is completed.
The present application also provides a hardware structure diagram of the decoding-side device shown in fig. 14. As shown in fig. 15, the hardware structure may include:
a machine-readable storage medium 1501 for storing machine-readable instructions;
a processor 1502 configured to read the machine readable instructions and execute the instructions to implement the inter prediction method described in embodiment 3.
Up to this point, the description of the hardware configuration shown in fig. 15 is completed.
The above description is merely exemplary of the present application and is not intended to limit the present application. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within the scope of protection of the present application.
Claims (19)
1. A method of inter-frame prediction, the method comprising:
determining a target multi-hypothesis prediction MHP position set according to the motion information of the current image block;
determining candidate inter-frame prediction pixels corresponding to each target MHP position in the target MHP position set;
and determining a target inter-frame prediction pixel of the current image block according to each candidate inter-frame prediction pixel.
2. The method of claim 1, wherein determining a target multi-hypothesis prediction MHP location set according to motion information of a current image block comprises:
determining a candidate MHP position set according to the motion information of the current image block;
selecting at least two candidate MHP locations from the set of candidate MHP locations;
and forming the selected candidate MHP positions into the target MHP position set.
3. The method of claim 2, wherein the motion information comprises: a motion vector;
the determining the candidate MHP position set according to the motion information of the current image block includes:
and selecting, according to the direction of the motion vector of the current image block, at least two positions whose distance from the end point of the motion vector satisfies a preset condition as candidate MHP positions, to form the candidate MHP position set.
4. The method of claim 3, wherein the selecting, according to the direction of the motion vector of the current image block, at least two positions whose distance from the end point of the motion vector satisfies a preset condition as candidate MHP positions to form the candidate MHP position set comprises:
determining a target area in a target coordinate system according to the direction of a motion vector of a current image block, wherein the target area is an area matched with the direction of the motion vector in the target coordinate system;
and taking at least two positions in the target area as candidate MHP positions according to the preset condition to form the candidate MHP position set.
5. The method of claim 4, wherein the target coordinate system is a coordinate system with the first specified point as an origin;
the first specified point is the end point of the motion vector or other points in the direction of the motion vector except the end point.
6. The method of claim 5, wherein the motion information further comprises: a first reference frame index corresponding to a reference frame adopted by the current image block;
each candidate MHP location in the set of candidate MHP locations corresponds to a second reference frame index;
the target coordinate system is a coordinate system with the first specified point as an origin when the first reference frame index is the same as the second reference frame index.
7. The method of claim 4, wherein the motion information further comprises: a first reference frame index corresponding to a reference frame adopted by the current image block; each candidate MHP location in the set of candidate MHP locations corresponds to a second reference frame index;
when the first reference frame index is different from the second reference frame index, the target coordinate system is a coordinate system with a second designated point as an origin, the second designated point being a termination point of a target motion vector or other points in the direction of the motion vector except the termination point of the target motion vector;
the target motion vector is a motion vector obtained by multiplying the motion vector of the current image block by L, where L is the ratio of L1 to L2, L1 is the temporal span between the reference frame corresponding to the second reference frame index and the current frame in which the current image block is located, and L2 is the temporal span between the reference frame corresponding to the first reference frame index and the current frame.
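The scaling in claim 7 can be sketched as follows. Rounding the scaled components to the integer MV grid with floor division is a simplifying assumption of this sketch, not something the claim mandates:

```python
def scale_motion_vector(mv, l1, l2):
    """Scale the current block's motion vector by L = L1 / L2, where L1
    and L2 are the temporal spans between the second/first reference
    frames and the current frame, as described in claim 7."""
    mvx, mvy = mv
    return (mvx * l1 // l2, mvy * l1 // l2)
```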
8. The method as claimed in claim 4, wherein the determining the target area in the target coordinate system according to the direction of the motion vector of the current image block comprises:
if the product of the transverse component and the longitudinal component in the motion vector of the current image block is greater than 0, determining a transverse coordinate axis, a longitudinal coordinate axis, a first coordinate quadrant area and a third coordinate quadrant area in the target coordinate system as the target area;
and if the product of the transverse component and the longitudinal component in the motion vector of the current image block is less than 0, determining a transverse coordinate axis, a longitudinal coordinate axis, a second coordinate quadrant area and a fourth coordinate quadrant area in the target coordinate system as the target area.
9. The method as claimed in claim 4, wherein the determining the target area in the target coordinate system according to the motion vector of the current image block comprises:
if the transverse component in the motion vector of the current image block is 0, determining a longitudinal coordinate axis in the target coordinate system as the target area;
and if the longitudinal component in the motion vector of the current image block is 0, determining a transverse coordinate axis in the target coordinate system as the target area.
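The case analysis of claims 8 and 9 maps the motion-vector direction to a target area of the target coordinate system. A sketch of that mapping, with illustrative string labels standing in for the area descriptions (a zero motion vector is not covered by either claim and falls through the first branch here):

```python
def target_area(mvx, mvy):
    """Select the target area from the sign pattern of the motion-vector
    components, following claims 8 and 9."""
    if mvx == 0:
        return "vertical axis"            # claim 9, horizontal component 0
    if mvy == 0:
        return "horizontal axis"          # claim 9, vertical component 0
    if mvx * mvy > 0:
        return "axes and quadrants I and III"   # claim 8, product > 0
    return "axes and quadrants II and IV"       # claim 8, product < 0
```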
10. The method as claimed in claim 1, wherein said determining the target inter prediction pixel of the current image block according to each candidate inter prediction pixel comprises:
performing a weighting operation on each candidate inter prediction pixel according to the weighting coefficient of each candidate inter prediction pixel;
and determining a weighting operation result as a target inter-frame prediction pixel of the current image block.
11. The method of claim 1, further comprising:
storing at least one target MHP position of the current image block to provide an inter-frame prediction reference for other image blocks adjacent to the current image block; or,
and storing a calculation result calculated according to at least one target MHP position of the current image block so as to provide inter-frame prediction reference for other image blocks adjacent to the current image block.
12. The method according to any one of claims 2 to 11, wherein the method is applied to an encoding end; the method further comprises the following steps:
and sending a coded bit stream carrying MHP position policy information to a decoding end, wherein the MHP position policy information is used for providing the decoding end with a policy for determining the target MHP position.
13. The method of claim 12, wherein the MHP position policy information comprises indication information;
the indication information is used for indicating an available candidate MHP position set, or for indicating an available target MHP position, or for indicating that a target MHP position is determined according to the motion information of the current image block and a motion information offset.
14. The method according to any one of claims 2 to 11, wherein the method is applied to a decoding end;
the determining a target multi-hypothesis prediction MHP position set according to the motion information of the current image block comprises:
and determining the target MHP position set according to the motion information of the current image block and MHP position policy information carried by a coded bit stream received from the coding end, wherein the MHP position policy information is used for providing a policy for determining the target MHP position.
15. The method of claim 14,
the MHP position policy information comprises first indication information, wherein the first indication information is used for indicating an available candidate MHP position set; the determining the target MHP position set according to the motion information of the current image block and the MHP position policy information carried by the received coded bit stream from the coding end comprises: determining a candidate MHP position set according to the motion information of the current image block, and determining the target MHP position set from the determined candidate MHP position set according to the available candidate MHP position set indicated by the first indication information; or,
the MHP position policy information comprises second indication information, wherein the second indication information is used for indicating an available target MHP position; the determining the target MHP position set according to the motion information of the current image block and the MHP position policy information carried by the received coded bit stream from the coding end comprises: determining a candidate MHP position set according to the motion information of the current image block, selecting target MHP positions from the candidate MHP position set according to the available target MHP position indicated by the second indication information, and forming the target MHP position set; or,
the MHP position policy information comprises third indication information, wherein the third indication information comprises a motion information offset to indicate that a target MHP position is determined according to the motion information of the current image block and the motion information offset; the determining the target MHP position set according to the motion information of the current image block and the MHP position policy information carried by the received coded bit stream from the coding end comprises: determining a target MHP position according to the motion information offset indicated by the third indication information and the motion information of the current image block; and forming the determined target MHP positions into the target MHP position set.
16. An inter-frame prediction method applied to a decoding end, comprising:
determining a target MHP position set corresponding to the current image block according to the motion information of the current image block and MHP position policy information carried by a coded bit stream received from the coding end;
determining candidate inter-frame prediction pixels corresponding to each target MHP position in the target MHP position set;
and determining a target inter-frame prediction pixel of the current image block according to each candidate inter-frame prediction pixel.
17. The method of claim 16,
the MHP position policy information comprises first indication information, wherein the first indication information is used for indicating an available candidate MHP position set; the determining a target MHP position set corresponding to the current image block according to the motion information of the current image block and the MHP position policy information carried by the received coded bit stream from the coding end comprises: determining a candidate MHP position set according to the motion information of the current image block, and determining the target MHP position set from the determined candidate MHP position set according to the available candidate MHP position set indicated by the first indication information; or,
the MHP position policy information comprises second indication information, wherein the second indication information is used for indicating an available target MHP position; the determining a target MHP position set corresponding to the current image block according to the motion information of the current image block and the MHP position policy information carried by the received coded bit stream from the coding end comprises: determining a candidate MHP position set according to the motion information of the current image block, selecting target MHP positions from the candidate MHP position set according to the available target MHP position indicated by the second indication information, and forming the target MHP position set; or,
the MHP position policy information comprises third indication information, wherein the third indication information comprises a motion information offset to indicate that a target MHP position is determined according to the motion information of the current image block and the motion information offset; the determining a target MHP position set corresponding to the current image block according to the motion information of the current image block and the MHP position policy information carried by the received coded bit stream from the coding end comprises: determining a target MHP position according to the motion information offset indicated by the third indication information and the motion information of the current image block; and forming the determined target MHP positions into the target MHP position set.
18. An encoding side device, comprising: a machine-readable storage medium and a processor;
wherein the machine-readable storage medium is to store machine-readable instructions;
the processor configured to read the machine readable instructions and execute the instructions to implement the inter prediction method of any of claims 1-13.
19. A decoding-side apparatus, comprising: a machine-readable storage medium and a processor;
wherein the machine-readable storage medium is to store machine-readable instructions;
the processor is configured to read the machine-readable instructions and execute the instructions to implement the inter prediction method according to any one of claims 1 to 11, 14 to 15, and 16 to 17.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811102838.6A CN110933423B (en) | 2018-09-20 | 2018-09-20 | Inter-frame prediction method and device |
PCT/CN2019/106484 WO2020057558A1 (en) | 2018-09-20 | 2019-09-18 | Inter-frame prediction method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811102838.6A CN110933423B (en) | 2018-09-20 | 2018-09-20 | Inter-frame prediction method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110933423A true CN110933423A (en) | 2020-03-27 |
CN110933423B CN110933423B (en) | 2022-03-25 |
Family
ID=69855579
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811102838.6A Active CN110933423B (en) | 2018-09-20 | 2018-09-20 | Inter-frame prediction method and device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110933423B (en) |
WO (1) | WO2020057558A1 (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120063514A1 (en) * | 2010-04-14 | 2012-03-15 | Jian-Liang Lin | Method for performing hybrid multihypothesis prediction during video coding of a coding unit, and associated apparatus |
CN102710934A (en) * | 2011-01-22 | 2012-10-03 | 华为技术有限公司 | Motion predicting or compensating method |
CN102934440A (en) * | 2010-05-26 | 2013-02-13 | Lg电子株式会社 | Method and apparatus for processing a video signal |
CN103024390A (en) * | 2012-12-21 | 2013-04-03 | 天津大学 | Self-adapting searching method for motion estimation in video coding |
US20130329800A1 (en) * | 2012-06-07 | 2013-12-12 | Samsung Electronics Co., Ltd. | Method of performing prediction for multiview video processing |
CN104935938A (en) * | 2015-07-15 | 2015-09-23 | 哈尔滨工业大学 | Inter-frame prediction method in hybrid video coding standard |
CN107105288A (en) * | 2010-12-13 | 2017-08-29 | 韩国电子通信研究院 | The method decoded based on inter prediction to vision signal |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012209914A (en) * | 2010-12-08 | 2012-10-25 | Sony Corp | Image processor, image processing method and program |
US10432961B2 (en) * | 2015-03-10 | 2019-10-01 | Apple Inc. | Video encoding optimization of extended spaces including last stage processes |
CN105678808A (en) * | 2016-01-08 | 2016-06-15 | 浙江宇视科技有限公司 | Moving object tracking method and device |
CN108259916B (en) * | 2018-01-22 | 2019-08-16 | 南京邮电大学 | Best match interpolation reconstruction method in frame in a kind of distributed video compressed sensing |
Non-Patent Citations (1)
Title |
---|
Ling Yong: "Research on Inter-frame Reference Relationships in Video Compression Coding", China Masters' Theses Full-text Database (Information Science and Technology) *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023020392A1 (en) * | 2021-08-16 | 2023-02-23 | Mediatek Inc. | Latency reduction for reordering merge candidates |
US11805245B2 (en) | 2021-08-16 | 2023-10-31 | Mediatek Inc. | Latency reduction for reordering prediction candidates |
Also Published As
Publication number | Publication date |
---|---|
WO2020057558A1 (en) | 2020-03-26 |
CN110933423B (en) | 2022-03-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11917140B2 (en) | Selection of an extended intra prediction mode | |
US20200007872A1 (en) | Video decoding method, video decoder, video encoding method and video encoder | |
US9349192B2 (en) | Method and apparatus for processing video signal | |
US9020043B2 (en) | Pathway indexing in flexible partitioning | |
EP2983365A1 (en) | Image prediction coding method and image coder | |
WO2015139187A1 (en) | Low latency encoder decision making for illumination compensation and depth look-up table transmission in video coding | |
US10142610B2 (en) | Method for sub-range based coding a depth lookup table | |
TWI688266B (en) | Method and apparatus for intra prediction fusion in image and video coding | |
KR20150114988A (en) | Method and apparatus of inter-view candidate derivation for three-dimensional video coding | |
US20220191548A1 (en) | Picture prediction method, encoder, decoder and storage medium | |
CN103168470A (en) | Video encoding method, video decoding method, video encoding device, video decoding device, and programs for same | |
US11425416B2 (en) | Video decoding method and device, and video encoding method and device | |
US8804839B2 (en) | Method for image prediction of multi-view video codec and computer-readable recording medium thereof | |
KR20230113247A (en) | Image encoding method and image decoding method and apparatus using adaptive deblocking filtering | |
US20130064299A1 (en) | Moving picture encoding apparatus, moving picture encoding method, and moving picture encoding program, and moving picture decoding apparatus, moving picture decoding method, and moving picture decoding program | |
KR101806949B1 (en) | Method for coding a depth lookup table | |
CN110933423B (en) | Inter-frame prediction method and device | |
CN101895749A (en) | Quick parallax estimation and motion estimation method | |
CN113508599A (en) | Syntax for motion information signaling in video coding | |
CN104380742A (en) | Encoding and decoding by means of selective inheritance | |
JP4759537B2 (en) | Image encoding method, image decoding method, image encoding device, image decoding device, image encoding program, image decoding program, and computer-readable recording medium | |
CN112449181B (en) | Encoding and decoding method, device and equipment | |
CN110971911B (en) | Method and apparatus for intra prediction in video coding and decoding | |
KR20230003054A (en) | Coding and decoding methods, devices and devices thereof | |
WO2012174973A1 (en) | Method and apparatus for line buffers reduction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||