CN117915097A - Intra-frame prediction method, device and equipment

Info

Publication number
CN117915097A
CN117915097A
Authority
CN
China
Prior art keywords
sample point
reference sample
predicted
image block
prediction
Prior art date
Legal status
Pending
Application number
CN202211249310.8A
Other languages
Chinese (zh)
Inventor
吕卓逸
周川
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202211249310.8A
Priority to PCT/CN2023/123318 (WO2024078401A1)
Publication of CN117915097A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: using adaptive coding
    • H04N19/169: characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: the unit being an image region, e.g. an object
    • H04N19/176: the region being a block, e.g. a macroblock
    • H04N19/134: characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157: Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159: Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application discloses an intra-frame prediction method, device and equipment, belonging to the technical field of encoding and decoding. The intra-frame prediction method provided by the embodiment of the application comprises the following steps: acquiring a mode index of an angle prediction mode corresponding to a predicted image block; for each prediction sample point in the predicted image block, under the condition that the included angle between the angle prediction direction represented by the mode index and the horizontal direction is greater than 0 degrees and less than 90 degrees, carrying out intra-frame prediction on the prediction sample point based on the angle prediction mode, and determining a first reference sample point and a second reference sample point; and determining a second prediction value corresponding to the prediction sample point according to a first gradient value between a first reconstruction value and a second reconstruction value, a first prediction value corresponding to the prediction sample point, and a weight coefficient associated with the first gradient value. The first reconstruction value is the reconstruction value of the first reference sample point, the second reconstruction value is the reconstruction value of the second reference sample point, and the first prediction value is determined based on the angle prediction mode corresponding to the predicted image block.

Description

Intra-frame prediction method, device and equipment
Technical Field
The application belongs to the technical field of encoding and decoding, and particularly relates to an intra-frame prediction method, device and equipment.
Background
The Position Dependent Intra Prediction Combination (PDPC) technique is adopted in the Versatile Video Coding (VVC) standard: intra-frame prediction is performed on an image block to obtain the prediction values corresponding to the image block.
However, the PDPC technique is only applicable when the included angle between the angle prediction direction represented by the mode index corresponding to the image block and the horizontal direction is greater than or equal to 90 degrees or less than or equal to 0 degrees, where the mode index is the index of the angle prediction mode corresponding to the image block. When the included angle is greater than 0 degrees and less than 90 degrees, the reference image block is determined only according to the angle prediction mode corresponding to the image block, and the prediction value is then determined based on the reconstruction value of the reference image block. The prediction value obtained in this way is not accurate enough, which reduces the accuracy of intra-frame prediction.
Disclosure of Invention
The embodiment of the application provides an intra-frame prediction method, device and equipment, which can solve the problem in the prior art that, when the included angle between the angle prediction direction represented by the mode index corresponding to the image block and the horizontal direction is greater than 0 degrees and less than 90 degrees, the reference image block is determined only according to the index of the angle prediction mode, which reduces the accuracy of intra-frame prediction.
In a first aspect, there is provided an intra prediction method, including:
acquiring a mode index of an angle prediction mode corresponding to a predicted image block;
For each prediction sample point in the prediction image block, under the condition that an included angle between an angle prediction direction represented by the mode index and a horizontal direction is greater than 0 degrees and less than 90 degrees, carrying out intra-frame prediction on the prediction sample point based on the angle prediction mode, and determining a first reference sample point and a second reference sample point; the first reference sample point and the second reference sample point are sample points within the reconstructed image block;
determining a second predicted value corresponding to the predicted sample point according to a first gradient value between the first reconstructed value and the second reconstructed value, a first predicted value corresponding to the predicted sample point and a weight coefficient associated with the first gradient value; the first reconstruction value is a reconstruction value of the first reference sample point, the second reconstruction value is a reconstruction value of the second reference sample point, and the first prediction value is determined based on an angle prediction mode corresponding to the predicted image block.
In a second aspect, there is provided an intra prediction apparatus comprising:
the acquisition module is used for acquiring a mode index of an angle prediction mode corresponding to the predicted image block;
The first determining module is used for carrying out intra-frame prediction on each predicted sample point in the predicted image block based on the angle prediction mode under the condition that an included angle between the angle prediction direction represented by the mode index and the horizontal direction is greater than 0 degrees and less than 90 degrees, and determining a first reference sample point and a second reference sample point; the first reference sample point and the second reference sample point are sample points within the reconstructed image block;
The second determining module is used for determining a second predicted value corresponding to the predicted sample point according to a first gradient value between the first reconstructed value and the second reconstructed value, a first predicted value corresponding to the predicted sample point and a weight coefficient associated with the first gradient value; the first reconstruction value is a reconstruction value of the first reference sample point, the second reconstruction value is a reconstruction value of the second reference sample point, and the first prediction value is determined based on an angle prediction mode corresponding to the predicted image block.
In a third aspect, there is provided a terminal comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, there is provided a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, a chip is provided, the chip comprising a processor and a communication interface, the communication interface being coupled to the processor, the processor being configured to execute programs or instructions for implementing the method according to the first aspect.
In a sixth aspect, there is provided a computer program/program product stored in a storage medium, the computer program/program product being executed by at least one processor to carry out the steps of the method according to the first aspect.
In the embodiment of the application, for each prediction sample point in a prediction image block, under the condition that the included angle between the angle prediction direction represented by the mode index corresponding to the prediction image block and the horizontal direction is greater than 0 degrees and less than 90 degrees, intra-frame prediction is carried out on the prediction sample point based on the angle prediction mode, a first reference sample point and a second reference sample point are determined, and a second prediction value corresponding to the prediction sample point is determined according to a first prediction value, a first gradient value between a first reconstruction value and a second reconstruction value, and a weight coefficient associated with the first gradient value; the first reconstruction value is the reconstruction value of the first reference sample point, and the second reconstruction value is the reconstruction value of the second reference sample point. By carrying out intra-frame prediction on the prediction sample points based on the angle prediction mode, the application range of the PDPC technique is expanded, and a more accurate second prediction value is obtained by carrying out intra-frame prediction using the PDPC technique, thereby improving the accuracy of intra-frame prediction.
Drawings
FIG. 1 is a diagram showing the relationship between the angle prediction mode and the mode index in the related art;
FIG. 2 is one of the application scenario diagrams of the intra prediction method in the related art;
FIG. 3 is a second application scenario diagram of the intra prediction method in the related art;
FIG. 4 is a flowchart of an intra prediction method according to an embodiment of the present application;
FIG. 5 is one of application scenario diagrams of an intra prediction method according to an embodiment of the present application;
FIG. 6 is a second application scenario diagram of an intra-frame prediction method according to an embodiment of the present application;
FIG. 7 is a third application scenario diagram of an intra prediction method according to an embodiment of the present application;
FIG. 8 is a block diagram of an intra prediction apparatus according to an embodiment of the present application;
FIG. 9 is a block diagram of a communication device provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of a hardware structure of a terminal according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are derived by a person skilled in the art based on the embodiments of the application, fall within the scope of protection of the application.
The terms "first", "second", and the like in the description and in the claims are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein; the objects distinguished by "first" and "second" are generally of one type, and the number of objects is not limited, for example, the first object may be one or more than one. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
In the related art, an image block supports 65 angle prediction modes, each of which corresponds to a mode index. Referring to fig. 1, fig. 1 shows the relationship between the angle prediction modes and the mode indexes, where a mode index may represent an angle prediction direction. For example, as shown in fig. 1, the angle prediction direction represented by the mode index 34 is at 45 degrees to the horizontal direction.
When the mode index is greater than or equal to 50 or less than or equal to 18 (i.e., the included angle between the angle prediction direction represented by the mode index and the horizontal direction is greater than or equal to 90 degrees or less than or equal to 0 degrees), the image block can be subjected to intra-frame prediction by using the PDPC technology, so as to obtain a prediction value corresponding to the image block.
The following description will be given by taking an example in which the mode index corresponding to the image block is greater than 50:
A variable nScale corresponding to the image block is calculated according to formulas (1) and (2):

nScale = Min(2, Log2(nTbH) - Floor(Log2(3 * invAngle - 2)) + 8) (1)

invAngle = Round((512 * 32) / intraPredAngle) (2)

where nTbH denotes the height of the image block, invAngle denotes the inverse angle corresponding to the image block, and intraPredAngle denotes the offset value corresponding to the image block; optionally, the offset value may be obtained by looking up Table 1.

Table 1:
As shown in fig. 2, when nScale >= 0, the prediction value p(x, y) of each sample point in the image block can be calculated by the following formulas (3) and (4), where x ranges from 0 to min(3 << nScale, nTbW), and nTbW denotes the width of the image block.

p(x, y) = ((64 - wL(x)) * p'(x, y) + wL(x) * r(-1, y + d) + 32) >> 6 (3)

r(-1, y + d) = (32 - dFrac) * r(-1, y + dInt) + dFrac * r(-1, y + dInt + 1) (4)

where wL(x) = 32 >> ((x << 1) >> nScale); p'(x, y) is the prediction value determined based on the angle prediction mode corresponding to the image block; r(-1, y + d) is the reconstruction value of the reference sample point determined along the angle prediction direction corresponding to the image block, obtained from formula (4); dInt = intraPredAngle >> 5, dFrac = intraPredAngle & 31, and intraPredAngle may be obtained by looking up Table 1.
As shown in fig. 3, when nScale < 0, the prediction value p(x, y) of each sample point in the image block can be calculated by the following formula (5), where x ranges from 0 to min(3 << nScale, nTbW).

p(x, y) = Clip(((64 - wL(x)) * p'(x, y) + wL(x) * (r(-1, y) - r(-1, -1)) + 32) >> 6) (5)

where wL(x) = 32 >> ((x << 1) >> nScale2), and nScale2 = (log2(nTbH) + log2(nTbW) - 2) >> 2.
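For concreteness, the related-art computation above can be expressed as the following minimal C sketch. It is illustrative rather than the VVC reference implementation: the rounding in invAngle, the >> 5 normalization of the interpolation in formula (4) (whose scaling is left implicit above), the final clip, and the layout of refLeft[] (refLeft[j] corresponds to r(-1, j), with refLeft[-1] the corner sample r(-1, -1)) are all assumptions, as are the function names.

/* Helpers shared by the sketches in this description. */
static int min_i(int a, int b) { return a < b ? a : b; }
static int clip3(int lo, int hi, int v) { return v < lo ? lo : (v > hi ? hi : v); }
static int floor_log2(int v) { int n = -1; while (v > 0) { v >>= 1; n++; } return n; }

/* Formulas (1)-(2): scale factor for the related-art PDPC; intraPredAngle > 0
   is assumed here so the rounded division is well defined. */
static int pdpc_nscale(int nTbH, int intraPredAngle) {
    int invAngle = (512 * 32 + intraPredAngle / 2) / intraPredAngle; /* Round() assumed */
    return min_i(2, floor_log2(nTbH) - floor_log2(3 * invAngle - 2) + 8);
}

/* Formulas (3)-(4), nScale >= 0: blend the angular prediction pAng with an
   interpolated reference sample along the prediction direction. */
static int pdpc_related_art(int x, int y, int pAng, const int *refLeft,
                            int intraPredAngle, int nScale, int maxVal) {
    int dInt  = intraPredAngle >> 5;
    int dFrac = intraPredAngle & 31;
    /* r(-1, y+d), normalized by >> 5 (assumption) so it stays in sample range */
    int r = ((32 - dFrac) * refLeft[y + dInt]
             + dFrac * refLeft[y + dInt + 1] + 16) >> 5;
    int wL = 32 >> ((x << 1) >> nScale);
    return clip3(0, maxVal, ((64 - wL) * pAng + wL * r + 32) >> 6);
}

/* Formula (5), nScale < 0: gradient against the corner sample r(-1, -1). */
static int pdpc_related_art_neg(int x, int y, int pAng, const int *refLeft,
                                int nScale2, int maxVal) {
    int wL = 32 >> ((x << 1) >> nScale2);
    return clip3(0, maxVal,
                 ((64 - wL) * pAng + wL * (refLeft[y] - refLeft[-1]) + 32) >> 6);
}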
However, when the mode index is less than 50 and greater than 18 (i.e., the included angle between the angle prediction direction represented by the mode index and the horizontal direction is less than 90 degrees and greater than 0 degrees), the reference image block is determined only according to the angle prediction mode corresponding to the image block, and the prediction value is then determined based on the reconstruction value of the reference image block; the prediction value obtained in this way is not accurate enough, which reduces the accuracy of intra-frame prediction.
In order to solve the above technical problems, the present application provides an intra prediction method, and in the following, the intra prediction method provided by the embodiment of the present application is described in detail by some embodiments and application scenarios thereof with reference to the accompanying drawings.
Referring to fig. 4, fig. 4 is a flowchart illustrating an intra prediction method according to an embodiment of the application. The intra prediction method provided in this embodiment includes the following steps:
S401, a mode index of an angle prediction mode corresponding to the predicted image block is obtained.
Alternatively, the angle prediction mode corresponding to the predicted image block may be determined by acquiring the identification information in the predicted image block, and then the mode index of the angle prediction mode may be acquired.
S402, for each prediction sample point in the prediction image block, performing intra-frame prediction on the prediction sample point based on the angle prediction mode under the condition that an included angle between the angle prediction direction represented by the mode index and the horizontal direction is larger than 0 degrees and smaller than 90 degrees, and determining a first reference sample point and a second reference sample point.
It should be appreciated that a predicted image block comprises at least two predicted sample points, which may be understood as pixels in the predicted image block. As described above, referring to fig. 1, when the mode index is greater than 18 and less than 50, the included angle between the angle prediction direction represented by the mode index and the horizontal direction is greater than 0 degrees and less than 90 degrees; if the angle prediction mode corresponding to the predicted image block is a wide angle mode, then when the mode index is greater than 34 and less than 98, the included angle between the angle prediction direction represented by the mode index and the horizontal direction is greater than 0 degrees and less than 90 degrees, as the helper sketch below illustrates.
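A hedged C helper capturing these ranges (the function name is illustrative, not from the patent):

/* Angle strictly between 0 and 90 degrees: 18 < modeIdx < 50 for normal
   modes, 34 < modeIdx < 98 for wide-angle modes, per the ranges above. */
static int angle_strictly_between_0_and_90(int modeIdx, int isWideAngle) {
    return isWideAngle ? (modeIdx > 34 && modeIdx < 98)
                       : (modeIdx > 18 && modeIdx < 50);
}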
In this step, for each prediction sample point in the prediction image block, when the angle between the angle prediction direction represented by the mode index and the horizontal direction is greater than 0 degrees and less than 90 degrees, intra-prediction is performed on the prediction sample point based on the angle prediction direction of the angle prediction mode, and a first reference sample point and a second reference sample point are determined. Wherein the first reference sample point and the second reference sample point are both sample points within the reconstructed image block, and the first reference sample point and the second reference sample point are understood as pixel points within the reconstructed image block. For specific embodiments, reference is made to the following examples.
S403, determining a second predicted value corresponding to the predicted sample point according to a first gradient value between the first reconstructed value and the second reconstructed value, the first predicted value corresponding to the predicted sample point and a weight coefficient associated with the first gradient value.
The first predicted value is determined according to an angle prediction mode corresponding to the predicted sample point, and is further determined based on a reconstructed value of the third reference image block. The first reconstruction value is a reconstruction value of the first reference sample point, the second reconstruction value is a reconstruction value of the second reference sample point, and the first gradient value is understood as a difference between the first reconstruction value and the second reconstruction value. The weight coefficient is determined based on the width and height of the predicted image block.
In this step, a second predicted value corresponding to the predicted sample point is determined according to the first predicted value, the first gradient value and the weight coefficient, and optionally, the second predicted value may be calculated by the formula (6).
p(x,y)=Clip(((64–w)*p’(x,y)+w*D1+32)>>6) (6)
where p(x, y) denotes the second prediction value, p'(x, y) denotes the first prediction value, w denotes the weight coefficient, and D1 denotes the first gradient value.
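Formula (6) is the generic blend reused by the embodiments below; a minimal C sketch follows (clip3 as defined in the earlier sketch; the clipping range [0, maxVal] is an assumption):

/* Formula (6): blend the first prediction value p1 with gradient d1 using
   weight w on a 6-bit fixed-point grid. */
static int gradient_blend(int p1, int d1, int w, int maxVal) {
    return clip3(0, maxVal, ((64 - w) * p1 + w * d1 + 32) >> 6);
}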
In this step, after determining the second prediction value corresponding to each prediction sample point, the intra-frame prediction result corresponding to the predicted image block can be obtained.
In the embodiment of the application, for each prediction sample point in a prediction image block, under the condition that the included angle between the angle prediction direction represented by the mode index corresponding to the prediction image block and the horizontal direction is greater than 0 degrees and less than 90 degrees, intra-frame prediction is carried out on the prediction sample point based on the angle prediction mode, a first reference sample point and a second reference sample point are determined, and a second prediction value corresponding to the prediction sample point is determined according to a first prediction value, a first gradient value between a first reconstruction value and a second reconstruction value, and a weight coefficient associated with the first gradient value; the first reconstruction value is the reconstruction value of the first reference sample point, and the second reconstruction value is the reconstruction value of the second reference sample point. By carrying out intra-frame prediction on the prediction sample points based on the angle prediction mode, the application range of the PDPC technique is expanded, and a more accurate second prediction value is obtained by carrying out intra-frame prediction using the PDPC technique, thereby improving the accuracy of intra-frame prediction.
Optionally, the intra-predicting the predicted sample point based on the angular prediction mode, and determining the first reference sample point and the second reference sample point includes:
Determining a first reference sample point from left image blocks adjacent to the predicted image block according to the position information of the predicted sample point under the condition that an included angle between the angle prediction direction represented by the mode index and the horizontal direction is larger than or equal to 45 degrees;
Determining a second reference sample point adjacent to the predicted image block and located above the predicted image block based on the position information of the first reference sample point and the angular prediction mode; or alternatively
Determining a first reference sample point from an upper image block adjacent to the predicted image block according to the position information of the predicted sample point under the condition that an included angle between the angle prediction direction represented by the mode index and the horizontal direction is smaller than 45 degrees;
A second reference sample point adjacent to the predicted image block and located to the left of the predicted image block is determined based on the position information of the first reference sample point and the angular prediction mode.
In this embodiment, when the angle between the angle prediction direction represented by the mode index and the horizontal direction is greater than or equal to 45 degrees (i.e., the mode index is greater than or equal to 34, or the mode index is greater than or equal to 66 in the wide angle mode), a first reference sample point is determined from the left image block adjacent to the predicted image block, and intra-prediction is performed on the first reference sample point based on the angle prediction mode, so as to determine a second reference sample point located above the predicted image block.
In the case that the angle between the angle prediction direction represented by the mode index and the horizontal direction is smaller than 45 degrees (i.e., the mode index is smaller than 34, or the mode index is smaller than 66 in the wide angle mode), a first reference sample point is determined from the upper image block adjacent to the predicted image block, and intra-prediction is performed on the first reference sample point based on the angle prediction mode, so as to determine a second reference sample point located at the left side of the predicted image block.
For ease of understanding, referring to fig. 5, fig. 5 shows an application scenario in which an included angle between an angle prediction direction represented by a mode index and a horizontal direction is greater than or equal to 45 degrees, in the application scenario shown in fig. 5, coordinates of a predicted sample point are (x, y), coordinates of a first reference sample point are (-1, y), and coordinates of a second reference sample point are (-1+d, -1).
The reconstruction values of the decoded reference sample points adjacent to the left of and above the predicted image block are obtained to form an array r; the offset value intraPredAngle of the predicted sample point is obtained by looking up Table 1 according to the angle prediction mode corresponding to the predicted sample point, and the second prediction value corresponding to the predicted sample point is then calculated according to formula (7).

p(x, y) = Clip(((64 - wL(x)) * p1(x, y) + wL(x) * (r(-1, y) - r(-1 + d, -1)) + 32) >> 6) (7)

where x ranges from 0 to min(3 << nScale, nTbW), nTbW is the width of the predicted image block, nScale = (log2(nTbH) + log2(nTbW) - 2) >> 2, and nTbH is the height of the predicted image block;

where p(x, y) denotes the second prediction value, p1(x, y) denotes the first prediction value, r(-1, y) denotes the first reconstruction value, r(-1 + d, -1) denotes the second reconstruction value, r(-1, y) - r(-1 + d, -1) denotes the first gradient value, and wL(x) denotes the weight coefficient. r(-1 + d, -1) = (32 - dFrac) * r(-1 + dInt, -1) + dFrac * r(-1 + dInt + 1, -1), where dInt = intraPredAngle >> 5 and dFrac = intraPredAngle & 31; in wide angle mode, dInt = intraPredAngle >> 6 and dFrac = intraPredAngle & 63. wL(x) = 32 >> ((x << 1) >> nScale).
Optionally, when (x + dInt) is less than 0, the second prediction value corresponding to the predicted sample point is calculated by the above formula (7).
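A hedged C sketch of formula (7) follows; refLeft[j] stands for r(-1, j) and refTop[i] for r(i, -1), with refTop[-1] the corner sample (layout assumed), and the interpolation is normalized by the fractional grid as in the earlier sketch:

/* Formula (7): gradient between the left reference r(-1, y) and the
   interpolated top reference r(-1+d, -1). In wide angle mode the fractional
   grid is 1/64 instead of 1/32 (generalizing (32 - dFrac) accordingly is an
   assumption). */
static int second_value_fig5(int x, int y, int p1, const int *refLeft,
                             const int *refTop, int intraPredAngle,
                             int wide, int nScale, int maxVal) {
    int shift = wide ? 6 : 5;
    int unit  = 1 << shift;              /* 32, or 64 in wide angle mode */
    int dInt  = intraPredAngle >> shift;
    int dFrac = intraPredAngle & (unit - 1);
    int rTop  = ((unit - dFrac) * refTop[-1 + dInt]
                 + dFrac * refTop[-1 + dInt + 1] + (unit >> 1)) >> shift;
    int wL    = 32 >> ((x << 1) >> nScale);
    return gradient_blend(p1, refLeft[y] - rTop, wL, maxVal);
}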
In this embodiment, intra-frame prediction is performed on the prediction sample points based on the angle prediction mode corresponding to the prediction image block, and the first reference sample point and the second reference sample point adjacent to the prediction image block are determined, so that the application range of the PDPC technique is expanded, a more accurate second prediction value is obtained by performing intra-frame prediction using the PDPC technique, and the accuracy of intra-frame prediction is improved.
Optionally, the intra-predicting the predicted sample point based on the angular prediction mode, and determining the first reference sample point and the second reference sample point includes:
Determining a first reference sample point from left image blocks adjacent to the predicted image block according to the position information of the predicted sample point under the condition that an included angle between the angle prediction direction represented by the mode index and the horizontal direction is larger than or equal to 45 degrees;
Determining a second reference sample point positioned to the left of the first reference sample point based on the position information of the first reference sample point and the angle prediction mode; or alternatively
Determining a first reference sample point from an upper image block adjacent to the predicted image block according to the position information of the predicted sample point under the condition that an included angle between the angle prediction direction represented by the mode index and the horizontal direction is smaller than 45 degrees;
a second reference sample point located above the first reference sample point is determined based on the position information of the first reference sample point and the angular prediction mode.
In this embodiment, when the angle between the angle prediction direction represented by the mode index and the horizontal direction is greater than or equal to 45 degrees (i.e., the mode index is greater than or equal to 34, or the mode index is greater than or equal to 66 in the wide angle mode), a first reference sample point is determined from the left image block adjacent to the predicted image block, and intra-prediction is performed on the first reference sample point based on the angle prediction mode, so as to determine a second reference sample point located to the left of the first reference sample point.
In the case that the angle between the angle prediction direction represented by the mode index and the horizontal direction is smaller than 45 degrees (i.e., the mode index is smaller than 34, or the mode index is smaller than 66 in the wide angle mode), a first reference sample point is determined from the upper image block adjacent to the predicted image block, intra-prediction is performed on the first reference sample point based on the angle prediction mode, and a second reference sample point located above the first reference sample point is determined.
For ease of understanding, referring to fig. 6, fig. 6 shows an application scenario in which the included angle between the angle prediction direction represented by the mode index and the horizontal direction is greater than or equal to 45 degrees. In the application scenario shown in fig. 6, the coordinates of the predicted sample point are (x, y), the coordinates of the first reference sample point are (-1, y), and the coordinates of the second reference sample point are (-n, y + d).
The reconstruction values of the column of decoded reference sample points immediately to the left of the predicted image block are obtained to form an array r0, and the reconstruction values of the column of decoded reference sample points n samples to the left of the predicted image block are obtained to form an array rn, where n is a positive integer greater than 0; the offset value intraPredAngle of the predicted sample point is obtained by looking up Table 1 according to the angle prediction mode corresponding to the predicted sample point, and the second prediction value corresponding to the predicted sample point is then calculated according to formula (8).

p(x, y) = Clip(((64 - wL(x)) * p1(x, y) + wL(x) * (r0(-1, y) - rn(-n, y + d)) + 32) >> 6) (8)

where x ranges from 0 to min(3 << nScale, nTbW), nTbW is the width of the predicted image block, nScale = (log2(nTbH) + log2(nTbW) - 2) >> 2, and nTbH is the height of the predicted image block;

where p(x, y) denotes the second prediction value, p1(x, y) denotes the first prediction value, r0(-1, y) denotes the first reconstruction value, rn(-n, y + d) denotes the second reconstruction value, r0(-1, y) - rn(-n, y + d) denotes the first gradient value, and wL(x) denotes the weight coefficient. rn(-n, y + d) = (32 - dFracN) * rn(-n, y + dIntN) + dFracN * rn(-n, y + dIntN + 1), where dIntN = deltaPos >> 5, dFracN = deltaPos & 31, and deltaPos = -(32 - intraPredAngle) * (1 + n); in wide angle mode, dIntN = deltaPos >> 6, dFracN = deltaPos & 63, and deltaPos = -(64 - intraPredAngle) * (1 + n). wL(x) = 32 >> ((x << 1) >> nScale).
Optionally, when (y + dIntN) is greater than or equal to 0, the second prediction value corresponding to the predicted sample point is calculated according to the above formula (8).
Optionally, when (x + dInt) is greater than or equal to 0, the second prediction value corresponding to the predicted sample point is calculated according to the above formula (8).
Optionally, when (x + dInt) is less than 0, the second prediction value corresponding to the predicted sample point is calculated by the above formula (8).
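A hedged C sketch of formula (8); refCol0[j] stands for r0(-1, j) and refColN[j] for rn(-n, j) (layout assumed), and an arithmetic right shift of the negative deltaPos is assumed:

/* Formula (8): gradient between the adjacent left column and an interpolated
   sample in the column n samples further left. */
static int second_value_fig6(int x, int y, int p1, const int *refCol0,
                             const int *refColN, int n, int intraPredAngle,
                             int wide, int nScale, int maxVal) {
    int shift    = wide ? 6 : 5;
    int unit     = 1 << shift;
    int deltaPos = -(unit - intraPredAngle) * (1 + n);
    int dIntN    = deltaPos >> shift;            /* arithmetic shift assumed */
    int dFracN   = deltaPos & (unit - 1);
    int rN = ((unit - dFracN) * refColN[y + dIntN]
              + dFracN * refColN[y + dIntN + 1] + (unit >> 1)) >> shift;
    int wL = 32 >> ((x << 1) >> nScale);
    return gradient_blend(p1, refCol0[y] - rN, wL, maxVal);
}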
In this embodiment, intra-frame prediction is performed on the prediction sample points based on the angle prediction mode corresponding to the prediction image block, and the first reference sample point and the second reference sample point located above the prediction image block are determined, or the first reference sample point and the second reference sample point located to the left of the prediction image block are determined, so that the application range of the PDPC technique is expanded, a more accurate second prediction value is obtained by performing intra-frame prediction using the PDPC technique, and the accuracy of intra-frame prediction is improved.
Optionally, the intra-predicting the predicted sample point based on the angular prediction mode, and determining the first reference sample point and the second reference sample point includes:
determining the first reference sample point from the upper image block adjacent to the predicted image block according to the position information of the predicted sample point under the condition that the included angle between the angle prediction direction represented by the mode index and the horizontal direction is larger than or equal to 45 degrees;
determining a second reference sample point located above the first reference sample point based on the position information of the first reference sample point and the angle prediction mode; or alternatively
Determining a first reference sample point from left image blocks adjacent to the predicted image block according to the position information of the predicted sample point under the condition that an included angle between the angle prediction direction represented by the mode index and the horizontal direction is smaller than 45 degrees;
A second reference sample point to the left of the first reference sample point is determined based on the position information of the first reference sample point and the angular prediction mode.
In this embodiment, when the angle between the angle prediction direction represented by the mode index and the horizontal direction is greater than or equal to 45 degrees (i.e., the mode index is greater than or equal to 34, or, in the wide angle mode, the mode index is greater than or equal to 66), a first reference sample point is determined from the upper image block adjacent to the predicted image block, and intra-prediction is performed on the first reference sample point based on the angle prediction mode, so as to determine a second reference sample point located above the first reference sample point.
In the case that the angle between the angle prediction direction represented by the mode index and the horizontal direction is smaller than 45 degrees (i.e., the mode index is smaller than 34, or the mode index is smaller than 66 in the wide angle mode), a first reference sample point is determined from the left image block adjacent to the predicted image block, intra-prediction is performed on the first reference sample point based on the angle prediction mode, and a second reference sample point located to the left of the first reference sample point is determined.
For ease of understanding, referring to fig. 7, fig. 7 shows an application scenario in which the included angle between the angle prediction direction represented by the mode index and the horizontal direction is greater than or equal to 45 degrees. In the application scenario shown in fig. 7, the coordinates of the prediction sample point are (x, y), the coordinates of the first reference sample point are (x, -1), and the coordinates of the second reference sample point are (x + d, -n).
The reconstruction values of the row of decoded reference sample points immediately above the predicted image block are obtained to form an array r0, and the reconstruction values of the row of decoded reference sample points n samples above the predicted image block are obtained to form an array rn, where n is a positive integer greater than 0; the offset value intraPredAngle of the predicted sample point is obtained by looking up Table 1 according to the angle prediction mode corresponding to the predicted sample point, and the second prediction value corresponding to the predicted sample point is then calculated according to formula (9).

p(x, y) = Clip(((64 - wL(x)) * p1(x, y) + wL(x) * (r0(x, -1) - rn(x + d, -n)) + 32) >> 6) (9)

where x ranges from 0 to min(3 << nScale, nTbW), nTbW is the width of the predicted image block, nScale = (log2(nTbH) + log2(nTbW) - 2) >> 2, and nTbH is the height of the predicted image block;

where p(x, y) denotes the second prediction value, p1(x, y) denotes the first prediction value, r0(x, -1) denotes the first reconstruction value, rn(x + d, -n) denotes the second reconstruction value, r0(x, -1) - rn(x + d, -n) denotes the first gradient value, and wL(x) denotes the weight coefficient. rn(x + d, -n) = (32 - dFracN) * rn(x + dIntN, -n) + dFracN * rn(x + dIntN + 1, -n), where dIntN = deltaPos >> 5, dFracN = deltaPos & 31, and deltaPos = intraPredAngle * (1 + n); in wide angle mode, dIntN = deltaPos >> 6, dFracN = deltaPos & 63, and deltaPos = intraPredAngle * (1 + n). wL(x) = 32 >> ((x << 1) >> nScale).
Optionally, when (x + dInt) is greater than or equal to 0, the second prediction value corresponding to the predicted sample point is calculated according to the above formula (9).
Optionally, when (x + dInt) is less than 0, the second prediction value corresponding to the predicted sample point is calculated by the above formula (9).
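A hedged C sketch of formula (9), the top-row mirror of the previous case; refRow0[i] stands for r0(i, -1) and refRowN[i] for rn(i, -n) (layout assumed):

/* Formula (9): gradient between the adjacent top row and an interpolated
   sample in the row n samples further up. */
static int second_value_fig7(int x, int y, int p1, const int *refRow0,
                             const int *refRowN, int n, int intraPredAngle,
                             int wide, int nScale, int maxVal) {
    int shift    = wide ? 6 : 5;
    int unit     = 1 << shift;
    int deltaPos = intraPredAngle * (1 + n);
    int dIntN    = deltaPos >> shift;
    int dFracN   = deltaPos & (unit - 1);
    int rN = ((unit - dFracN) * refRowN[x + dIntN]
              + dFracN * refRowN[x + dIntN + 1] + (unit >> 1)) >> shift;
    int wL = 32 >> ((x << 1) >> nScale);
    (void)y; /* y enters only through the first prediction value p1 */
    return gradient_blend(p1, refRow0[x] - rN, wL, maxVal);
}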
In this embodiment, intra-frame prediction is performed on the prediction sample points based on the angle prediction mode corresponding to the prediction image block, and the first reference sample point and the second reference sample point located above the prediction image block are determined, or the first reference sample point and the second reference sample point located to the left of the prediction image block are determined, so that the application range of the PDPC technique is expanded, a more accurate second prediction value is obtained by performing intra-frame prediction using the PDPC technique, and the accuracy of intra-frame prediction is improved.
Alternatively, when (x + dInt) is less than 0, the second prediction value corresponding to the predicted image block may be obtained using the intra prediction method involved in the application scenario shown in fig. 5, or using the intra prediction method involved in the application scenario shown in fig. 7. When (x + dInt) is greater than or equal to 0, the second prediction value corresponding to the predicted image block may be obtained using the intra prediction method involved in the application scenario shown in fig. 6.
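This selection rule can be composed from the sketches above; a hedged fragment (the parameter plumbing is illustrative, and the adjacent left column serves as both refLeft and refCol0):

/* Select between the fig. 5 and fig. 6 variants based on whether the
   projected top position x + dInt is available. */
static int second_value_select(int x, int y, int p1,
                               const int *refLeft, const int *refTop,
                               const int *refColN, int n, int intraPredAngle,
                               int wide, int nScale, int maxVal) {
    int dInt = intraPredAngle >> (wide ? 6 : 5);
    if (x + dInt < 0)
        return second_value_fig5(x, y, p1, refLeft, refTop, intraPredAngle,
                                 wide, nScale, maxVal);
    return second_value_fig6(x, y, p1, refLeft, refColN, n, intraPredAngle,
                             wide, nScale, maxVal);
}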
Optionally, after the mode index of the angle prediction mode corresponding to the predicted image block is obtained, the method includes:
For each prediction sample point in the prediction image block, under the condition that an included angle between an angle prediction direction represented by the mode index and a horizontal direction is greater than 0 degrees and less than 90 degrees, carrying out intra-frame prediction on the prediction sample point based on the angle prediction mode, and determining a third reference sample point, a fourth reference sample point, a fifth reference sample point and a sixth reference sample point;
and determining a second predicted value corresponding to the predicted sample point according to a second gradient value between the third reconstruction value and the fourth reconstruction value and a third gradient value between the fifth reconstruction value and the sixth reconstruction value.
In this embodiment, under the condition that an included angle between an angle prediction direction represented by the mode index and a horizontal direction is greater than 0 degrees and less than 90 degrees, intra-prediction is performed on each prediction sample point in the prediction image block, so as to obtain a third reference sample point, a fourth reference sample point, a fifth reference sample point and a sixth reference sample point corresponding to the prediction sample point, where the third reference sample point and the fourth reference sample point are located on the left side of the prediction image block, the fifth reference sample point and the sixth reference sample point are located above the prediction image block, and the third reference sample point, the fourth reference sample point, the fifth reference sample point and the sixth reference sample point are all sample points in the reconstructed image block. The specific embodiments are described in the examples that follow.
A difference between the third reconstruction value and the fourth reconstruction value is determined as the second gradient value, and a difference between the fifth reconstruction value and the sixth reconstruction value is determined as the third gradient value, where the third reconstruction value is the reconstruction value of the third reference sample point, the fourth reconstruction value is the reconstruction value of the fourth reference sample point, the fifth reconstruction value is the reconstruction value of the fifth reference sample point, and the sixth reconstruction value is the reconstruction value of the sixth reference sample point. The second prediction value corresponding to the predicted sample point is then calculated by formula (10).

p(x, y) = (1 - w1(x) - w2(x)) * p1(x, y) + w1(x) * GradientLeft + w2(x) * GradientTop (10)

where p(x, y) denotes the second prediction value, w1(x) and w2(x) denote preset weight values, p1(x, y) denotes the first prediction value, GradientLeft denotes the second gradient value, and GradientTop denotes the third gradient value.

GradientLeft = r0(-1, y) - rn(-n, y + d) and GradientTop = r0(x, -1) - rn(x + d, -n), where r0(-1, y) denotes the third reconstruction value, rn(-n, y + d) denotes the fourth reconstruction value, r0(x, -1) denotes the fifth reconstruction value, and rn(x + d, -n) denotes the sixth reconstruction value.
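A hedged C sketch of formula (10); the gradients are computed exactly as in the fig. 6 and fig. 7 sketches, and treating w1 and w2 as 6-bit fixed-point weights is an assumption (the patent only calls them preset weight values):

/* Formula (10): blend the first prediction value with both directional
   gradients at once; w1 + w2 <= 64 is assumed. */
static int second_value_dual(int p1, int gradientLeft, int gradientTop,
                             int w1, int w2, int maxVal) {
    int p = ((64 - w1 - w2) * p1 + w1 * gradientLeft
             + w2 * gradientTop + 32) >> 6;
    return clip3(0, maxVal, p);
}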
Optionally, the intra-predicting the predicted sample point based on the angular prediction mode, and determining the third, fourth, fifth, and sixth reference sample points includes:
determining the third reference sample point from the left image block adjacent to the predicted image block according to the position information of the predicted sample point;
determining a fourth reference sample point located to the left of the third reference sample point based on the position information of the third reference sample point and the angle prediction mode;
Determining the fifth reference sample point from an upper image block adjacent to the predicted image block according to the position information of the predicted sample point;
A sixth reference sample point located above the fifth reference sample point is determined based on the position information of the fifth reference sample point and the angular prediction mode.
In this embodiment, a third reference sample point is determined from the left image block adjacent to the predicted image block, intra-prediction is performed on the third reference sample point based on the angle prediction mode, and a fourth reference sample point located to the left of the third reference sample point is determined; that is, the third reference sample point and the fourth reference sample point are obtained using the intra-prediction method involved in the application scenario shown in fig. 6. A fifth reference sample point is determined from the upper image block adjacent to the predicted image block, intra-prediction is performed on the fifth reference sample point based on the angle prediction mode, and a sixth reference sample point located above the fifth reference sample point is determined; that is, the fifth reference sample point and the sixth reference sample point are obtained using the intra-prediction method involved in the application scenario shown in fig. 7.
In the intra-frame prediction method provided by the embodiment of the application, the execution subject may be an intra-frame prediction apparatus. In the embodiment of the present application, an intra-frame prediction method performed by an intra-frame prediction apparatus is taken as an example to describe the intra-frame prediction apparatus provided by the embodiment of the present application.
As shown in fig. 8, an embodiment of the present application further provides an intra prediction apparatus 800, including:
An obtaining module 801, configured to obtain a mode index of an angle prediction mode corresponding to a predicted image block;
a first determining module 802, configured to determine, for each predicted sample point in the predicted image block, a first reference sample point and a second reference sample point based on intra-prediction of the predicted sample point by the angular prediction mode when an included angle between an angular prediction direction represented by the mode index and a horizontal direction is greater than 0 degrees and less than 90 degrees;
The second determining module 803 is configured to determine a second predicted value corresponding to the predicted sample point according to a first gradient value between the first reconstructed value and the second reconstructed value, a first predicted value corresponding to the predicted sample point, and a weight coefficient associated with the first gradient value.
Optionally, the first determining module 802 is specifically configured to:
Determining a first reference sample point from left image blocks adjacent to the predicted image block according to the position information of the predicted sample point under the condition that an included angle between the angle prediction direction represented by the mode index and the horizontal direction is larger than or equal to 45 degrees;
Determining a second reference sample point adjacent to the predicted image block and located above the predicted image block based on the position information of the first reference sample point and the angular prediction mode; or alternatively
Determining a first reference sample point from an upper image block adjacent to the predicted image block according to the position information of the predicted sample point under the condition that an included angle between the angle prediction direction represented by the mode index and the horizontal direction is smaller than 45 degrees;
A second reference sample point adjacent to the predicted image block and located to the left of the predicted image block is determined based on the position information of the first reference sample point and the angular prediction mode.
Optionally, the first determining module 802 is further specifically configured to:
Determining a first reference sample point from left image blocks adjacent to the predicted image block according to the position information of the predicted sample point under the condition that an included angle between the angle prediction direction represented by the mode index and the horizontal direction is larger than or equal to 45 degrees;
Determining a second reference sample point positioned to the left of the first reference sample point based on the position information of the first reference sample point and the angle prediction mode; or alternatively
Determining a first reference sample point from an upper image block adjacent to the predicted image block according to the position information of the predicted sample point under the condition that an included angle between the angle prediction direction represented by the mode index and the horizontal direction is smaller than 45 degrees;
a second reference sample point located above the first reference sample point is determined based on the position information of the first reference sample point and the angular prediction mode.
Optionally, the first determining module 802 is further specifically configured to:
determining a first reference sample point from an upper image block adjacent to the predicted image block according to the position information of the predicted sample point under the condition that an included angle between the angle prediction direction represented by the mode index and the horizontal direction is larger than or equal to 45 degrees;
Determining a second reference sample point located above the first reference sample point based on the position information of the first reference sample point and the angular prediction mode; or alternatively
Determining a first reference sample point from left image blocks adjacent to the predicted image block according to the position information of the predicted sample point under the condition that an included angle between the angle prediction direction represented by the mode index and the horizontal direction is smaller than 45 degrees;
A second reference sample point to the left of the first reference sample point is determined based on the position information of the first reference sample point and the angular prediction mode.
Optionally, the intra prediction apparatus 800 further includes:
A third determining module, configured to determine, for each prediction sample point in the prediction image block, a third reference sample point, a fourth reference sample point, a fifth reference sample point, and a sixth reference sample point based on intra-prediction of the prediction sample point by the angular prediction mode when an included angle between an angular prediction direction represented by the mode index and a horizontal direction is greater than 0 degrees and less than 90 degrees;
And the fourth determining module is used for determining a second predicted value corresponding to the predicted sample point according to the second gradient value between the third reconstruction value and the fourth reconstruction value and the third gradient value between the fifth reconstruction value and the sixth reconstruction value.
Optionally, the third determining module is specifically configured to:
determining the third reference sample point from the left image block adjacent to the predicted image block according to the position information of the predicted sample point;
determining a fourth reference sample point located to the left of the third reference sample point based on the position information of the third reference sample point and the angle prediction mode;
Determining the fifth reference sample point from an upper image block adjacent to the predicted image block according to the position information of the predicted sample point;
A sixth reference sample point located above the fifth reference sample point is determined based on the position information of the fifth reference sample point and the angular prediction mode.
In the embodiment of the application, for each prediction sample point in a prediction image block, under the condition that the included angle between the angle prediction direction represented by the mode index corresponding to the prediction image block and the horizontal direction is greater than 0 degrees and less than 90 degrees, intra-frame prediction is carried out on the prediction sample point based on the angle prediction mode, a first reference sample point and a second reference sample point are determined, and a second prediction value corresponding to the prediction sample point is determined according to a first prediction value, a first gradient value between a first reconstruction value and a second reconstruction value, and a weight coefficient associated with the first gradient value; the first reconstruction value is the reconstruction value of the first reference sample point, and the second reconstruction value is the reconstruction value of the second reference sample point. By carrying out intra-frame prediction on the prediction sample points based on the angle prediction mode, the application range of the PDPC technique is expanded, and a more accurate second prediction value is obtained by carrying out intra-frame prediction using the PDPC technique, thereby improving the accuracy of intra-frame prediction.
The embodiment of the apparatus corresponds to the embodiment of the intra-frame prediction method shown in fig. 4, and each implementation process and implementation manner in the embodiment of the method are applicable to the embodiment of the apparatus, and the same technical effects can be achieved.
The intra-frame prediction apparatus in the embodiment of the present application may be an electronic device, for example, an electronic device with an operating system, or may be a component in an electronic device, for example, an integrated circuit or a chip. The electronic device may be a terminal, or may be other devices than a terminal. By way of example, the terminals may include, but are not limited to, the types of terminals listed above, other devices may be servers, network attached storage (Network Attached Storage, NAS), etc., and embodiments of the present application are not limited in detail.
Optionally, as shown in fig. 9, the embodiment of the present application further provides a communication device 900, including a processor 901 and a memory 902, where the memory 902 stores a program or instructions that can be executed on the processor 901, for example, when the communication device 900 is a terminal, the program or instructions implement the steps of the intra-frame prediction method embodiment described above when executed by the processor 901, and achieve the same technical effects.
An embodiment of the present application further provides a terminal, including a processor 901 and a communication interface, where the processor 901 is configured to perform the following operations:
acquiring a mode index of an angular prediction mode corresponding to a predicted image block;
for each prediction sample point in the predicted image block, when the included angle between the angular prediction direction indicated by the mode index and the horizontal direction is greater than 0 degrees and less than 90 degrees (one possible mode-index check is sketched after this list), performing intra-frame prediction on the prediction sample point based on the angular prediction mode, and determining a first reference sample point and a second reference sample point; and
determining a second predicted value corresponding to the prediction sample point according to a first gradient value between the first reconstructed value and the second reconstructed value, a first predicted value corresponding to the prediction sample point, and a weight coefficient associated with the first gradient value.
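The angular-range check in the operations above depends on how mode indices map to directions. The sketch below assumes a VVC-style layout in which mode 18 points horizontally and mode 50 points vertically; the patent does not mandate any particular mapping, so both constants are assumptions to be adapted to the codec at hand.

// Returns true when the angular direction indicated by modeIdx forms an angle
// with the horizontal direction that is greater than 0 and less than 90 degrees.
bool isStrictlyBetweenHorizontalAndVertical(int modeIdx) {
    constexpr int kHorizontalMode = 18;  // assumed horizontal mode index
    constexpr int kVerticalMode   = 50;  // assumed vertical mode index
    return modeIdx > kHorizontalMode && modeIdx < kVerticalMode;
}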
This terminal embodiment corresponds to the terminal-side method embodiment; each implementation process and implementation manner of the method embodiment applies to this terminal embodiment, and the same technical effects can be achieved. Specifically, fig. 10 is a schematic diagram of a hardware structure of a terminal implementing an embodiment of the present application.
The terminal 1000 includes, but is not limited to: radio frequency unit 1001, network module 1002, audio output unit 1003, input unit 1004, sensor 1005, display unit 1006, user input unit 1007, interface unit 1008, memory 1009, and processor 1010.
Those skilled in the art will appreciate that the terminal 1000 may further include a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 1010 through a power management system, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management system. The terminal structure shown in fig. 10 does not constitute a limitation on the terminal; the terminal may include more or fewer components than shown, combine certain components, or arrange the components differently, which will not be described in detail here.
It should be appreciated that in embodiments of the present application, the input unit 1004 may include a graphics processor (Graphics Processing Unit, GPU) 10041 and a microphone 10042, where the graphics processor 10041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 can include two portions, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
In the embodiment of the present application, after receiving downlink data from the network side device, the radio frequency unit 1001 may transmit the downlink data to the processor 1010 for processing; the radio frequency unit 1001 may send uplink data to the network side device. In general, the radio frequency unit 1001 includes, but is not limited to, an antenna, an amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
The memory 1009 may be used to store software programs or instructions and various data. The memory 1009 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, where the first storage area may store an operating system, an application program or instructions required by at least one function (such as a sound playing function and an image playing function), and the like. Further, the memory 1009 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (Programmable ROM, PROM), an erasable PROM (Erasable PROM, EPROM), an electrically erasable PROM (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), a static random access memory (Static RAM, SRAM), a dynamic random access memory (Dynamic RAM, DRAM), a synchronous dynamic random access memory (Synchronous DRAM, SDRAM), a double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), an enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), a synchlink dynamic random access memory (Synchlink DRAM, SLDRAM), or a direct Rambus random access memory (Direct Rambus RAM, DRRAM). The memory 1009 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 1010 may include one or more processing units. Optionally, the processor 1010 integrates an application processor and a modem processor, where the application processor mainly handles operations involving the operating system, the user interface, application programs, and the like, and the modem processor, such as a baseband processor, mainly handles wireless communication signals. It will be appreciated that the modem processor may alternatively not be integrated into the processor 1010.
The processor 1010 is configured to perform the following operations:
acquiring a mode index of an angular prediction mode corresponding to a predicted image block;
for each prediction sample point in the predicted image block, when the included angle between the angular prediction direction indicated by the mode index and the horizontal direction is greater than 0 degrees and less than 90 degrees, performing intra-frame prediction on the prediction sample point based on the angular prediction mode, and determining a first reference sample point and a second reference sample point; and
determining a second predicted value corresponding to the prediction sample point according to a first gradient value between the first reconstructed value and the second reconstructed value, a first predicted value corresponding to the prediction sample point, and a weight coefficient associated with the first gradient value.
An embodiment of the present application further provides a readable storage medium storing a program or instructions. When the program or instructions are executed by a processor, the processes of the above intra-frame prediction method embodiment are implemented and the same technical effects can be achieved; to avoid repetition, details are not described here again.
The processor is the processor in the terminal described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
An embodiment of the present application further provides a chip, including a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement the processes of the above intra-frame prediction method embodiment and achieve the same technical effects; to avoid repetition, details are not described here again.
It should be understood that the chip referred to in the embodiments of the present application may also be called a system-on-chip, a system chip, a chip system, a system-on-a-chip, or the like.
An embodiment of the present application further provides a computer program/program product stored in a storage medium, where the computer program/program product is executed by at least one processor to implement the processes of the above intra-frame prediction method embodiment and achieve the same technical effects; details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element. Furthermore, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; the functions may also be performed in a substantially simultaneous manner or in the reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, and certainly may also be implemented by hardware, but in many cases the former is the preferred implementation. Based on such an understanding, the technical solution of the present application, in essence or the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Inspired by the present application, those of ordinary skill in the art may devise many other forms without departing from the spirit of the present application and the scope of protection of the claims, all of which fall within the protection of the present application.

Claims (14)

1. An intra-frame prediction method, comprising:
acquiring a mode index of an angular prediction mode corresponding to a predicted image block;
for each prediction sample point in the predicted image block, when an included angle between an angular prediction direction indicated by the mode index and a horizontal direction is greater than 0 degrees and less than 90 degrees, performing intra-frame prediction on the prediction sample point based on the angular prediction mode, and determining a first reference sample point and a second reference sample point, wherein the first reference sample point and the second reference sample point are sample points within a reconstructed image block; and
determining a second predicted value corresponding to the prediction sample point according to a first gradient value between a first reconstructed value and a second reconstructed value, a first predicted value corresponding to the prediction sample point, and a weight coefficient associated with the first gradient value, wherein the first reconstructed value is a reconstructed value of the first reference sample point, the second reconstructed value is a reconstructed value of the second reference sample point, and the first predicted value is determined based on the angular prediction mode corresponding to the predicted image block.
2. The method of claim 1, wherein the performing intra-frame prediction on the prediction sample point based on the angular prediction mode, and determining a first reference sample point and a second reference sample point comprises:
when an included angle between the angular prediction direction indicated by the mode index and the horizontal direction is greater than or equal to 45 degrees, determining the first reference sample point from a left image block adjacent to the predicted image block according to position information of the prediction sample point, and determining the second reference sample point, adjacent to the predicted image block and located above the predicted image block, based on position information of the first reference sample point and the angular prediction mode; or
when the included angle between the angular prediction direction indicated by the mode index and the horizontal direction is less than 45 degrees, determining the first reference sample point from an upper image block adjacent to the predicted image block according to the position information of the prediction sample point, and determining the second reference sample point, adjacent to the predicted image block and located to the left of the predicted image block, based on the position information of the first reference sample point and the angular prediction mode.
3. The method of claim 1, wherein the performing intra-frame prediction on the prediction sample point based on the angular prediction mode, and determining a first reference sample point and a second reference sample point comprises:
when an included angle between the angular prediction direction indicated by the mode index and the horizontal direction is greater than or equal to 45 degrees, determining the first reference sample point from a left image block adjacent to the predicted image block according to position information of the prediction sample point, and determining the second reference sample point, located to the left of the first reference sample point, based on position information of the first reference sample point and the angular prediction mode; or
when the included angle between the angular prediction direction indicated by the mode index and the horizontal direction is less than 45 degrees, determining the first reference sample point from an upper image block adjacent to the predicted image block according to the position information of the prediction sample point, and determining the second reference sample point, located above the first reference sample point, based on the position information of the first reference sample point and the angular prediction mode.
4. The method of claim 1, wherein the performing intra-frame prediction on the prediction sample point based on the angular prediction mode, and determining a first reference sample point and a second reference sample point comprises:
when an included angle between the angular prediction direction indicated by the mode index and the horizontal direction is greater than or equal to 45 degrees, determining the first reference sample point from an upper image block adjacent to the predicted image block according to position information of the prediction sample point, and determining the second reference sample point, located above the first reference sample point, based on position information of the first reference sample point and the angular prediction mode; or
when the included angle between the angular prediction direction indicated by the mode index and the horizontal direction is less than 45 degrees, determining the first reference sample point from a left image block adjacent to the predicted image block according to the position information of the prediction sample point, and determining the second reference sample point, located to the left of the first reference sample point, based on the position information of the first reference sample point and the angular prediction mode.
5. The method according to any one of claims 1-4, wherein after the acquiring the mode index of the angular prediction mode corresponding to the predicted image block, the method further comprises:
for each prediction sample point in the predicted image block, when an included angle between the angular prediction direction indicated by the mode index and the horizontal direction is greater than 0 degrees and less than 90 degrees, performing intra-frame prediction on the prediction sample point based on the angular prediction mode, and determining a third reference sample point, a fourth reference sample point, a fifth reference sample point, and a sixth reference sample point, wherein the third reference sample point and the fourth reference sample point are located to the left of the predicted image block, the fifth reference sample point and the sixth reference sample point are located above the predicted image block, and the third, fourth, fifth, and sixth reference sample points are all sample points within a reconstructed image block; and
determining a second predicted value corresponding to the prediction sample point according to a second gradient value between a third reconstructed value and a fourth reconstructed value and a third gradient value between a fifth reconstructed value and a sixth reconstructed value, wherein the third reconstructed value is a reconstructed value of the third reference sample point, the fourth reconstructed value is a reconstructed value of the fourth reference sample point, the fifth reconstructed value is a reconstructed value of the fifth reference sample point, and the sixth reconstructed value is a reconstructed value of the sixth reference sample point.
6. The method of claim 5, wherein the performing intra-frame prediction on the prediction sample point based on the angular prediction mode, and determining a third reference sample point, a fourth reference sample point, a fifth reference sample point, and a sixth reference sample point comprises:
determining the third reference sample point from a left image block adjacent to the predicted image block according to position information of the prediction sample point;
determining the fourth reference sample point, located to the left of the third reference sample point, based on position information of the third reference sample point and the angular prediction mode;
determining the fifth reference sample point from an upper image block adjacent to the predicted image block according to the position information of the prediction sample point; and
determining the sixth reference sample point, located above the fifth reference sample point, based on position information of the fifth reference sample point and the angular prediction mode.
7. An intra-frame prediction apparatus, comprising:
an acquisition module, configured to acquire a mode index of an angular prediction mode corresponding to a predicted image block;
a first determining module, configured to, for each prediction sample point in the predicted image block, when an included angle between an angular prediction direction indicated by the mode index and a horizontal direction is greater than 0 degrees and less than 90 degrees, perform intra-frame prediction on the prediction sample point based on the angular prediction mode, and determine a first reference sample point and a second reference sample point, wherein the first reference sample point and the second reference sample point are sample points within a reconstructed image block; and
a second determining module, configured to determine a second predicted value corresponding to the prediction sample point according to a first gradient value between a first reconstructed value and a second reconstructed value, a first predicted value corresponding to the prediction sample point, and a weight coefficient associated with the first gradient value, wherein the first reconstructed value is a reconstructed value of the first reference sample point, the second reconstructed value is a reconstructed value of the second reference sample point, and the first predicted value is determined based on the angular prediction mode corresponding to the predicted image block.
8. The apparatus of claim 7, wherein the first determining module is specifically configured to:
when an included angle between the angular prediction direction indicated by the mode index and the horizontal direction is greater than or equal to 45 degrees, determine the first reference sample point from a left image block adjacent to the predicted image block according to position information of the prediction sample point, and determine the second reference sample point, adjacent to the predicted image block and located above the predicted image block, based on position information of the first reference sample point and the angular prediction mode; or
when the included angle between the angular prediction direction indicated by the mode index and the horizontal direction is less than 45 degrees, determine the first reference sample point from an upper image block adjacent to the predicted image block according to the position information of the prediction sample point, and determine the second reference sample point, adjacent to the predicted image block and located to the left of the predicted image block, based on the position information of the first reference sample point and the angular prediction mode.
9. The apparatus of claim 7, wherein the first determining module is further specifically configured to:
when an included angle between the angular prediction direction indicated by the mode index and the horizontal direction is greater than or equal to 45 degrees, determine the first reference sample point from a left image block adjacent to the predicted image block according to position information of the prediction sample point, and determine the second reference sample point, located to the left of the first reference sample point, based on position information of the first reference sample point and the angular prediction mode; or
when the included angle between the angular prediction direction indicated by the mode index and the horizontal direction is less than 45 degrees, determine the first reference sample point from an upper image block adjacent to the predicted image block according to the position information of the prediction sample point, and determine the second reference sample point, located above the first reference sample point, based on the position information of the first reference sample point and the angular prediction mode.
10. The apparatus of claim 7, wherein the first determining module is further specifically configured to:
when an included angle between the angular prediction direction indicated by the mode index and the horizontal direction is greater than or equal to 45 degrees, determine the first reference sample point from an upper image block adjacent to the predicted image block according to position information of the prediction sample point, and determine the second reference sample point, located above the first reference sample point, based on position information of the first reference sample point and the angular prediction mode; or
when the included angle between the angular prediction direction indicated by the mode index and the horizontal direction is less than 45 degrees, determine the first reference sample point from a left image block adjacent to the predicted image block according to the position information of the prediction sample point, and determine the second reference sample point, located to the left of the first reference sample point, based on the position information of the first reference sample point and the angular prediction mode.
11. The apparatus according to any one of claims 7-10, wherein the apparatus further comprises:
a third determining module, configured to, for each prediction sample point in the predicted image block, when an included angle between the angular prediction direction indicated by the mode index and the horizontal direction is greater than 0 degrees and less than 90 degrees, perform intra-frame prediction on the prediction sample point based on the angular prediction mode, and determine a third reference sample point, a fourth reference sample point, a fifth reference sample point, and a sixth reference sample point, wherein the third reference sample point and the fourth reference sample point are located to the left of the predicted image block, the fifth reference sample point and the sixth reference sample point are located above the predicted image block, and the third, fourth, fifth, and sixth reference sample points are all sample points within a reconstructed image block; and
a fourth determining module, configured to determine a second predicted value corresponding to the prediction sample point according to a second gradient value between a third reconstructed value and a fourth reconstructed value and a third gradient value between a fifth reconstructed value and a sixth reconstructed value, wherein the third reconstructed value is a reconstructed value of the third reference sample point, the fourth reconstructed value is a reconstructed value of the fourth reference sample point, the fifth reconstructed value is a reconstructed value of the fifth reference sample point, and the sixth reconstructed value is a reconstructed value of the sixth reference sample point.
12. The apparatus according to claim 11, wherein the third determining module is specifically configured to:
determine the third reference sample point from a left image block adjacent to the predicted image block according to position information of the prediction sample point;
determine the fourth reference sample point, located to the left of the third reference sample point, based on position information of the third reference sample point and the angular prediction mode;
determine the fifth reference sample point from an upper image block adjacent to the predicted image block according to the position information of the prediction sample point; and
determine the sixth reference sample point, located above the fifth reference sample point, based on position information of the fifth reference sample point and the angular prediction mode.
13. A terminal, comprising a processor and a memory storing a program or instructions executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the intra-frame prediction method according to any one of claims 1-6.
14. A readable storage medium, wherein the readable storage medium stores a program or instructions which, when executed by a processor, implement the steps of the intra-frame prediction method according to any one of claims 1-6.
CN202211249310.8A 2022-10-12 2022-10-12 Intra-frame prediction method, device and equipment Pending CN117915097A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211249310.8A CN117915097A (en) 2022-10-12 2022-10-12 Intra-frame prediction method, device and equipment
PCT/CN2023/123318 WO2024078401A1 (en) 2022-10-12 2023-10-08 Intra-frame prediction method and apparatus, and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211249310.8A CN117915097A (en) 2022-10-12 2022-10-12 Intra-frame prediction method, device and equipment

Publications (1)

Publication Number Publication Date
CN117915097A true CN117915097A (en) 2024-04-19

Family

ID=90668849

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211249310.8A Pending CN117915097A (en) 2022-10-12 2022-10-12 Intra-frame prediction method, device and equipment

Country Status (2)

Country Link
CN (1) CN117915097A (en)
WO (1) WO2024078401A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018225593A1 (en) * 2017-06-05 2018-12-13 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Coding device, decoding device, coding method and decoding method
EP3759928A1 (en) * 2018-02-26 2021-01-06 InterDigital VC Holdings, Inc. Gradient based boundary filtering in intra prediction
WO2019199093A1 (en) * 2018-04-11 2019-10-17 엘지전자 주식회사 Intra prediction mode-based image processing method and device therefor
KR102616680B1 (en) * 2019-03-08 2023-12-20 후아웨이 테크놀러지 컴퍼니 리미티드 Encoders, decoders and corresponding methods for inter prediction
US11671592B2 (en) * 2019-12-09 2023-06-06 Qualcomm Incorporated Position-dependent intra-prediction combination for angular intra-prediction modes for video coding

Also Published As

Publication number Publication date
WO2024078401A1 (en) 2024-04-18

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination