CN111787334A - Filtering method, filter and device for intra-frame prediction - Google Patents
- Publication number
- CN111787334A (application number CN202010537729.8A)
- Authority
- CN
- China
- Prior art keywords
- filtering
- prediction
- intra
- prediction block
- reference pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
- H04N19/82—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N19/86—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The invention discloses a filtering method for intra-frame prediction. The method includes: obtaining a filtering parameter, where the filtering parameter comprises at least one of a filtering reference pixel, a filtering reference value, a filtering range and a filtering coefficient; and filtering the current intra-frame prediction block using the filtering parameter. In this way, the filtering method can be adapted to the actual characteristics of the intra-frame prediction block, and the most effective filtering mode can be selected for different intra-frame prediction blocks.
Description
Technical Field
The present invention relates to the field of video coding, and in particular to the field of filtering for intra prediction.
Background
The purpose of video coding is to compress video data, thereby reducing the bandwidth or storage space a video occupies. A video coding system mainly comprises three parts: encoding, transmission and decoding. During encoding, redundancy between adjacent pixels within a frame is eliminated by selecting an optimal intra-frame prediction mode, which removes spatial correlation between pixels and thereby compresses the video. During decoding, the optimal prediction mode computed at the encoding end is read, and the prediction value of each prediction block is derived in the same way. In the intra-frame prediction process, a prediction block is obtained after the block to be coded is predicted; then, to enhance the correlation between the prediction block and its reference pixels, the prediction block may be filtered.
The existing filtering process first obtains the reference pixels required for filtering, generally along the horizontal or vertical direction; it then selects a filter coefficient according to the size of the prediction block, where the coefficient is obtained by training a filter built on a generalized Gaussian distribution; next, the current prediction block is filtered; finally, the filtered and unfiltered predictions are compared to decide whether filtering should be applied.
During long-term research and development, the inventors of the present application found that the existing filtering method uses only one filtering direction, horizontal or vertical, at a given angle, so the whole prediction block refers to reference pixels in the same direction when filtered. Moreover, for prediction blocks of different sizes and different prediction modes, a fixed preset value is used as the filtering range.
Disclosure of Invention
The invention mainly addresses the technical problem of low coding efficiency by providing an intra-frame prediction filtering method.
In order to solve the technical problems, the invention adopts a technical scheme that: provided is an intra prediction filtering method including: obtaining a filtering parameter; the filtering parameter comprises at least one of a filtering reference pixel, a filtering reference value, a filtering range and a filtering coefficient; the current intra prediction block is filtered using the filter parameters.
In order to solve the technical problem, the invention adopts another technical scheme that: there is provided an encoding method including: obtaining a plurality of intra-frame predicted values of a current intra-frame predicted block, wherein the intra-frame predicted values at least comprise one intra-frame predicted value obtained after filtering by using the intra-frame prediction filtering method; selecting the intra-frame prediction value with the minimum prediction cost as an intra-frame prediction result of the current intra-frame prediction block; and coding the current intra-frame prediction block based on the intra-frame prediction result of the current intra-frame prediction block to obtain a code stream.
In order to solve the technical problem, the invention adopts another technical scheme that: there is provided a decoding method including: obtaining code stream data produced after intra-frame prediction filtering; parsing the header information of the code stream to obtain the length and coding mode of each layer of the code stream; and decoding the residual images in the code stream in sequence according to the corresponding coding mode to restore the original images. The intra-frame prediction filtering method comprises: obtaining a filtering parameter, the filtering parameter comprising at least one of a filtering reference pixel, a filtering reference value, a filtering range and a filtering coefficient; and filtering the current intra prediction block using the filtering parameter.
In order to solve the technical problem, the invention adopts another technical scheme that: there is provided an encoder comprising a processor for executing instructions to implement the aforementioned encoding method.
In order to solve the technical problem, the invention adopts another technical scheme that: there is provided a decoder comprising a processor for executing instructions to implement the aforementioned decoding method.
In order to solve the technical problem, the invention adopts another technical scheme that: there is provided a device having a storage function, storing program data executable by a processor to implement the aforementioned method.
The invention has the beneficial effects that: in contrast to the prior art, the present application discloses various methods for selecting filtering parameters, and discloses a way of comparing multiple filtering methods during filtering to determine the final effective one. The disclosed method can adjust filtering according to the actual characteristics of the intra-frame prediction block and select the most effective filtering mode for each block, which improves prediction precision after filtering, increases the accuracy of intra-frame prediction, reduces the prediction residual, and improves coding efficiency.
In addition to various methods of filtering pixel luminance values, the present application provides various methods of filtering pixel chrominance values. Filtering the chroma can further reduce the code stream and improve coding quality.
Drawings
FIG. 1 is a schematic diagram of angular prediction directions according to an embodiment of the present application;
FIG. 2 is a flow chart illustrating an intra prediction filtering method according to an embodiment of the present application;
FIG. 3 is a flow chart illustrating a method of determining a filtering method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a prediction block partition according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a filtered reference pixel selection method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a filtered reference pixel selection method according to another embodiment of the present application;
FIG. 7 is a schematic diagram of a filtered reference pixel selection method according to another embodiment of the present application;
FIG. 8 is a schematic diagram of a filtering range selection method according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a filtering range selection method according to another embodiment of the present application;
FIG. 10 is a schematic diagram of a method of determining a filter reference value according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a filter coefficient determination method according to an embodiment of the present application;
FIG. 12 is a schematic diagram of an encoding method according to an embodiment of the present application;
FIG. 13 is a schematic diagram of a decoding method according to an embodiment of the present application;
FIG. 14 is a schematic diagram of a filter structure according to an embodiment of the present application;
FIG. 15 is a schematic block diagram of an encoder according to an embodiment of the present application;
FIG. 16 is a block diagram of a decoder according to an embodiment of the present application;
FIG. 17 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and effects of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments.
The embodiments can be applied to a video encoding and decoding process, and in particular to the intra-frame prediction stage of video encoding. It should be understood that the application scenarios of the system and method of the present application are merely examples; those skilled in the art can also apply the present application to other similar scenarios without inventive effort.
The conventional intra luminance prediction modes include the DC mode, Planar mode, Bilinear mode, angular modes, and the like. The DC mode takes the mean of all available reference pixels as the prediction value of every point in the block to be predicted. The Planar mode predicts each point from the positional relationship between the point to be predicted and the block center, combined with the horizontal and vertical reference-pixel gradients. The Bilinear mode is a bidirectional method: it first predicts the pixels to the right of and below the point to be predicted and takes them as reference pixels, then predicts the current pixel vertically using the upper and lower reference pixels and horizontally using the left and right reference pixels, and finally averages the two directional predictions. In an angular prediction mode, every point in the prediction block is predicted along one angular direction: the reference pixel corresponding to each point to be predicted is determined by the current angular direction and used as the prediction reference pixel for that point. Referring to fig. 1, fig. 1 is a schematic view illustrating the angular prediction directions according to an embodiment of the present application. Here, the number N of prediction mode types is 66, where 0 denotes the DC mode, 1 the Planar mode, and 2 the Bilinear mode. The directions of the angular modes are indicated in fig. 1.
The directions lie in the first quadrant (a lower-left reference pixel predicts an upper-right point to be predicted, i.e., the prediction direction runs from lower left to upper right), the third quadrant (an upper-right reference pixel predicts a lower-left point, i.e., from upper right to lower left), and the fourth quadrant (an upper-left reference pixel predicts a lower-right point, i.e., from upper left to lower right). It should be noted that the intra prediction method in fig. 1 is only illustrative; the prediction methods to which the filtering method of the present application applies are not limited to those described in fig. 1.
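As a rough sketch of the DC mode described above (the array layout and function names are illustrative, not taken from the patent):

```python
import numpy as np

def dc_predict(top_refs, left_refs, block_w, block_h):
    """DC mode sketch: every point in the block to be predicted takes the
    mean of all available reference pixels as its prediction value."""
    refs = np.concatenate([top_refs, left_refs])
    dc = int(round(refs.mean()))
    return np.full((block_h, block_w), dc, dtype=np.int32)

# 4x4 block with one row of top and one column of left reference pixels
pred = dc_predict(np.array([100, 102, 104, 106]),
                  np.array([98, 99, 101, 103]), 4, 4)
```

A real codec also clamps to the bit depth and handles unavailable reference pixels; those details are omitted here.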
The existing intra chrominance prediction modes include the DM mode, DC mode, horizontal mode, vertical mode, BI mode, TSCPM_L mode, TSCPM_T mode, PMC_L mode and PMC_T mode. The DM mode performs chroma prediction according to the luma prediction mode of the prediction block. Chroma prediction in the DC, horizontal, vertical and BI modes is the same as in the corresponding luma modes. The TSCPM mode is an inter-component prediction technique that predicts the chroma components from the reconstructed luma of the prediction block, i.e., predicts the U and V components (chroma) from the Y component (luma), removing inter-component redundancy by exploiting the linear relationship between the different components. The TSCPM_L and TSCPM_T modes are enhanced variants of TSCPM that differ from it in the sample points selected when computing the prediction coefficients. The PMC mode is likewise an inter-component prediction technique; it predicts the V component from the Y and U components.
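The cross-component idea behind TSCPM can be sketched as a linear model between the luma and chroma components; the least-squares fit below is an illustrative stand-in for the actual coefficient derivation, which uses specific selected sample points:

```python
import numpy as np

def fit_cross_component(luma_samples, chroma_samples):
    """Fit chroma ~= alpha * luma + beta from neighbouring reconstructed
    sample pairs (illustrative least-squares fit, not the standard's rule)."""
    alpha, beta = np.polyfit(luma_samples, chroma_samples, 1)
    return alpha, beta

def predict_chroma(luma_block, alpha, beta):
    """Predict the chroma component from the reconstructed luma block."""
    return alpha * luma_block + beta

# toy neighbouring samples with an exact linear relation: chroma = 0.5*luma + 5
alpha, beta = fit_cross_component(np.array([60.0, 80.0, 100.0, 120.0]),
                                  np.array([35.0, 45.0, 55.0, 65.0]))
```

PMC extends the same idea by predicting V from both Y and U, i.e., a two-input linear model instead of one.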
It should be noted that an intra prediction block mentioned in the present application may be a luma prediction block, which is partitioned based on the luma component and then luma-predicted, or a chroma prediction block, which is partitioned based on the chroma component and then chroma-predicted. When the current intra prediction block is a luma prediction block, it may contain only luma data and may be luma-filtered. When it is a chroma prediction block, it may contain only chroma data and may be chroma-filtered. There are also cases where the current intra prediction block is both a luma and a chroma prediction block; it then contains both luma and chroma data and may be both luma- and chroma-filtered. In addition, a luminance value in the present application refers to a predicted value of the luminance component, and a chrominance value refers to a predicted value of the chrominance component.
After intra prediction is completed, a prediction block is obtained, and the prediction block may be filtered to enhance its correlation with the reference pixels. The present application provides an intra prediction filtering method, referring to fig. 2. Fig. 2 is a flowchart illustrating an intra prediction filtering method according to an embodiment of the present application. It should be noted that the flow sequence shown in fig. 2 is not limited in this embodiment, as long as substantially the same result is obtained. As shown in fig. 2, the method includes:
step S210: obtaining the filtering parameters.
On the basis of the existing intra-frame filtering technology, different filtering methods are obtained by changing and combining the filtering parameters. In one embodiment, the filtering parameter includes at least one of a filtering reference pixel, a filtering reference value, a filtering range and a filtering coefficient. In one embodiment, the obtained filtering parameters may be any combination of these four.
In one embodiment, the direction relationship between the filtering reference pixels and the prediction reference pixels is made consistent across the whole prediction block by changing the selection direction of the filtering reference pixels. For example, every point uses a filtering reference pixel in the same direction as its prediction reference pixel, every point uses one in a different direction, or filtering uses reference pixels in both directions. It is worth noting that this method applies to filtering either the luminance or the chrominance of an intra prediction block.
In an embodiment, the filtering range is determined from parameters related to the prediction block. For example, the filtering range is determined from a preset size ratio between the filtering range and the intra prediction block. For another example, the prediction points in the current intra prediction block satisfying a second predetermined condition are determined as points to be filtered, and the points to be filtered constitute the filtering range; the second predetermined condition is that a relationship between the distance from the prediction point to its filtering reference pixel and the distance to its prediction reference pixel is less than a third threshold. This method likewise applies to filtering either the luminance or the chrominance of an intra prediction block.
In one embodiment, when the luminance of the intra prediction block is predicted, a luminance value obtained by optimizing the luminance value of a reference pixel is used as the luminance filtering reference value. For example, the filtering reference pixel is filtered using a luminance reference-pixel filtering mode to obtain the luminance filtering reference value, where the luminance reference-pixel filter is associated with several reference pixels near the filtering reference pixel.
In an embodiment, when the chrominance of the intra prediction block is predicted, a chrominance value obtained by optimizing the chrominance value of a reference pixel is used as the chrominance filtering reference value. For example, the filtering reference pixel is filtered using a chrominance reference-pixel filtering mode to obtain the chrominance filtering reference value, where the chrominance reference-pixel filtering mode is associated with several reference pixels near the filtering reference pixel.
In one embodiment, the filter coefficient is related to the position of the point to be filtered. For example, the filter coefficient is determined from the relationship between a first distance, from the point to be filtered to the filtering reference pixel, and a second distance, from the point to be filtered to the prediction reference pixel. This method likewise applies to filtering either the luminance or the chrominance of an intra prediction block.
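A minimal sketch of such a position-dependent coefficient, under the assumption that it is a simple ratio of the two distances (the text does not fix the exact formula at this point):

```python
def filter_coefficient(d_filter, d_pred):
    """Assumed coefficient: the filtering reference pixel is weighted more
    heavily when the point to be filtered is close to it relative to its
    prediction reference pixel."""
    return d_pred / (d_pred + d_filter)

def apply_filter(pred_value, filter_ref_value, coeff):
    """Blend the predicted value with the filtering reference value."""
    return coeff * filter_ref_value + (1.0 - coeff) * pred_value
```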
It should be noted that the choices described in all the above embodiments (how to select the filtering reference pixel, how to select the filtering range, how to calculate the filtering reference value, how to determine the filtering coefficient, and so on) can be combined arbitrarily as long as they do not conflict.
In step S220, the current intra-prediction block is filtered by using the filtering parameter. In an embodiment, filtering the current intra-prediction block includes filtering a luminance value of the current intra-prediction block and/or filtering a chrominance value of the current intra-prediction block. For convenience of description, the filtering of the luminance value of the intra prediction block is referred to as luminance filtering, and the filtering of the chrominance value of the intra prediction block is referred to as chrominance filtering.
Specifically, only luma filtering, only chroma filtering, or both luma and chroma filtering may be performed on the current intra prediction block. The chroma filtering method may be determined based on the luma filtering method, or may be determined independently from the luma filtering method.
In an embodiment, when chroma filtering needs to be performed on an intra-prediction block, whether to perform filtering on the intra-prediction block may be determined by a defined condition.
In an embodiment, at least one filtering method may be obtained using the determined filtering parameters, and the intra prediction block may be filtered with each obtained method. The above methods may be combined arbitrarily as long as they do not conflict. For example, a filtering mode may be formed by determining the filtering reference pixel and the filtering reference value with the above methods while determining the filtering range and filtering coefficient with conventional techniques. For another example, the filtering reference pixel, filtering range and filtering coefficient may be determined with the above methods while the filtering reference value is determined conventionally. In one embodiment, multiple filtering methods are obtained, some or all of whose parameters are determined by the above methods, and the optimal one is selected as the effective filtering method by comparison. Specifically, after the intra prediction block has been filtered by each of the obtained filtering methods, the method with the minimum loss may be selected by rate-distortion optimization (RDO) and used as the effective filtering method for the block. The RDO loss of the filtered result may also be compared with that of the unfiltered result to decide whether filtering is needed at all.
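The RDO selection step can be sketched as a loop over candidate filtering methods that keeps the lowest-cost result, with the unfiltered block competing as well; `rd_cost` below stands in for a real rate-distortion cost function and the candidate names are illustrative:

```python
def select_filtering(pred_block, candidate_filters, rd_cost):
    """Return (name, block) of the lowest-RDO-cost option; name is None
    when leaving the block unfiltered is cheapest."""
    best_name, best_block = None, pred_block
    best_cost = rd_cost(pred_block)  # cost of skipping filtering
    for name, f in candidate_filters.items():
        filtered = f(pred_block)
        cost = rd_cost(filtered)
        if cost < best_cost:
            best_name, best_block, best_cost = name, filtered, cost
    return best_name, best_block

# toy cost: distortion against a known source block (real RDO also counts bits)
source = [12, 18]
cost = lambda block: sum(abs(a - b) for a, b in zip(block, source))
name, block = select_filtering([10, 20],
                               {"smooth": lambda b: [11, 19],
                                "strong": lambda b: [15, 15]},
                               cost)
```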
In an embodiment, RDO losses may be calculated separately for the intra prediction results after luminance filtering and after chrominance filtering, and the two losses used to determine whether to perform luminance filtering and chrominance filtering; that is, whether luma filtering or chroma filtering is needed can be decided independently.
The present application discloses methods for selecting various filtering parameters and a way of comparing multiple filtering methods during filtering to determine the final effective one. The disclosed method can adjust filtering according to the actual characteristics of the intra prediction block and select the most effective filtering mode for each block, improving prediction precision after filtering, increasing the accuracy of intra prediction, reducing the prediction residual, and improving coding efficiency. Meanwhile, in addition to various methods of filtering pixel luminance values, the present application provides various methods of filtering pixel chrominance values; filtering the chrominance values can further reduce the code stream and improve coding quality.
Referring to fig. 3, fig. 3 is a flowchart illustrating a method for determining a filtering method according to an embodiment of the present application. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 3 is not limited in this embodiment. As shown in fig. 3, the method includes:
In step S310, a filtered reference pixel is obtained.
It is worth noting that the method in this step is also applicable to filtering luminance or chrominance of intra-predicted blocks.
In the prior art, reference pixels from a single direction are selected as the filtering reference pixels for the whole prediction block. When the prediction direction is in the fourth quadrant, however, the prediction reference pixels of the points in the prediction block may be distributed in different directions. Some points in the same prediction block then have their prediction reference pixel and filtering reference pixel in the same direction while others have them in different directions, so the directional relationship between prediction reference pixels and filtering reference pixels cannot be unified across the block.
In an embodiment, for the case that the prediction direction is angular prediction and the prediction direction is from top left to bottom right (i.e. in the fourth quadrant), the filtering reference pixel of each point to be filtered in the current intra-prediction block is in the same direction or in a different direction from the prediction reference pixel of the point to be filtered.
In one embodiment, based on the direction of the prediction reference pixel of the point to be filtered, a reference pixel in the same direction as the prediction reference pixel is selected as the filtering reference pixel. In an embodiment, based on the direction of the prediction reference pixel of the point to be filtered, a reference pixel in a direction different from that of the prediction reference pixel is selected as a filtering reference pixel.
Specifically, the prediction block is partitioned based on the prediction direction so that points whose prediction reference pixels lie in the same direction fall in the same region. Points in one region then select filtering reference pixels in one direction, and points in the other region select filtering reference pixels in the other direction.
Referring to fig. 4 by way of example only, fig. 4 is a schematic diagram of a prediction block partition according to an embodiment of the present application. In the figure, the reference pixel at the upper left corner is used as a reference, and the whole prediction block is divided into two regions according to the direction indicated by the prediction angle direction line 410. The prediction reference pixels of the prediction points above the dividing line 420 are all located in the same direction, e.g., above; the prediction reference pixels of the prediction points below the division line 420 are all located in the other direction, for example, to the left.
By way of example only, referring to fig. 5, fig. 5 is a schematic diagram of a filtered reference pixel selection method according to an embodiment of the present application. In the figure, the reference pixel at the upper left corner is used as the reference, the block is divided along the prediction angle direction, and the whole prediction block is split into two regions. The prediction reference pixels of the prediction points above the dividing line 510 are all located in the same direction, e.g., above; those of the prediction points below the dividing line 510 are all located in the other direction, e.g., to the left. For prediction points above the dividing line 510, the upper reference pixel (e.g., U7) is selected as the filtering reference pixel, and for prediction points below the dividing line 510, the left reference pixel (e.g., L4) is selected, so that the prediction reference pixel and the filtering reference pixel in the prediction block are in the same direction.
By way of illustration only, referring to fig. 6, fig. 6 is a schematic diagram of a filtered reference pixel selection method according to another embodiment of the present application. In the figure, the reference pixel at the upper left corner is used as the reference, the block is divided along the prediction angle direction line, and the whole prediction block is split into two regions. The prediction reference pixels of the prediction points above the dividing line 610 are all located in the same direction, e.g., above; those of the prediction points below the dividing line 610 are all located in the other direction, e.g., to the left. For the prediction points above the dividing line 610, the left reference pixel (e.g., L2) is selected as the filtering reference pixel, and for the prediction points below the dividing line 610, the upper reference pixel (e.g., U5) is selected, so that the prediction reference pixel and the filtering reference pixel in the prediction block are in different directions.
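The region split illustrated in figs. 4 to 6 can be sketched with a cross product against the prediction-angle direction line through the top-left reference pixel; the direction vector, coordinate convention (y grows downward) and region labels are illustrative:

```python
def partition_by_angle(block_w, block_h, dx, dy):
    """Label each prediction point by which side of the angle line through
    the top-left corner it lies on. Points strictly above the line are
    labelled 'upper' (prediction reference pixel above); the rest 'left'
    (prediction reference pixel to the left)."""
    return [["upper" if dx * y - dy * x < 0 else "left"
             for x in range(block_w)]
            for y in range(block_h)]

# 45-degree direction from upper left to lower right (fourth quadrant)
regions = partition_by_angle(4, 4, 1, 1)
```

Each region then takes its filtering reference pixel from the same direction as its prediction reference pixel (fig. 5) or from the opposite direction (fig. 6).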
In the present application, making the prediction reference pixel and the filtering reference pixel share the same direction allows intra prediction to better fuse information from reference pixels on the same side. Making them lie in different directions supplements information from reference pixels on the other side, so that intra prediction refers to more information.
In one embodiment, the prediction mode of the intra prediction block is an angular prediction mode, and obtaining the filtering parameter includes obtaining reference pixels in at least two directions as filtering reference pixels. Specifically, in any angular prediction mode, filtering may use both a reference pixel on the same side as the prediction reference pixel and a reference pixel on the other side; that is, filtering refers to reference pixels in two directions simultaneously. In an embodiment, the filtering reference value may be determined as a weighted average of the two reference pixel values. In another embodiment, a 3-tap filter may be used.
By way of illustration only, referring to fig. 7, fig. 7 is a schematic diagram of a filtered reference pixel selection method according to another embodiment of the present application. The prediction points in fig. 7 may be filtered with both the upper reference pixel (e.g., U7) and the left reference pixel (e.g., L2).
In the present application, selecting reference pixels on both sides for filtering allows the intra prediction process to refer to more information, improving the accuracy of intra prediction.
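A sketch of filtering with reference pixels from both directions at once, as in fig. 7; the equal weights are an assumption standing in for the weighted average or 3-tap filter mentioned above:

```python
def bidirectional_filter(pred, up_ref, left_ref, w_up=0.25, w_left=0.25):
    """Blend a predicted value with an upper and a left reference pixel.
    The remaining weight stays on the original prediction; the weight
    values here are illustrative assumptions."""
    return (1.0 - w_up - w_left) * pred + w_up * up_ref + w_left * left_ref

filtered = bidirectional_filter(100.0, 140.0, 100.0)
```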
It should be noted that, in one filtering method, only one filtering pixel selection method may be selected, for example, any one of the methods shown in fig. 5, 6, and 7.
Step S320, selecting a filtering range.
It is worth noting that the method in this step is also applicable to filtering luminance or chrominance of intra-predicted blocks.
In an embodiment, the filtering range may be determined based on a preset size ratio between the filtering range and the intra prediction block, as follows. First, the preset size ratio a of the filtering range to the intra prediction block is obtained, where a may be any value from 0 to 1. Then the width of the filtering range is determined from the ratio and the width w of the intra prediction block as w_f = floor(a × w), and the height of the filtering range is determined from the ratio and the height h of the intra prediction block as h_f = floor(a × h). Finally, the filtering range is determined from w_f and h_f.
Referring to fig. 8 by way of example only, fig. 8 is a schematic diagram of a filtering range selection method according to an embodiment of the present application. The prediction block size w × h in the figure is 8 × 4, and the size ratio a of the filtering range to the intra prediction block is set to 0.6. By this calculation, the width of the filtering range spans 0 to 4 and its height spans 0 to 2, as shown by the hatched area in the figure.
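The ratio-based range selection can be sketched as follows; fractional sizes are truncated to integers, matching the 8 × 4 example where a = 0.6 gives a 4 × 2 region (truncation is an assumption of this sketch):

```python
def filtering_range(block_w, block_h, ratio):
    """Derive the filtering range size from a preset ratio a in [0, 1]
    of the filtering range to the intra-prediction block."""
    return int(ratio * block_w), int(ratio * block_h)

# fig. 8 setting: 8x4 prediction block, a = 0.6 -> 4x2 filtering range
w_f, h_f = filtering_range(8, 4, 0.6)
```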
In one embodiment, the prediction mode of the current intra prediction block is angular prediction, and obtaining the filter parameter includes: determining the prediction points in the current intra-frame prediction block that satisfy a second predetermined condition as points to be filtered, all of which together form the filtering range. The second predetermined condition is that a distance relationship between the prediction point's distance to the filtering reference pixel and its distance to the prediction reference pixel is less than a third threshold. The third threshold may be preset empirically or determined based on statistics over several frames of images. It should be noted that, when the prediction mode of the current intra prediction block is not angular prediction, such as DC mode, Planar mode, Bilinear mode, etc., the corresponding filtering range may be a preset fixed filtering range, or may be determined by the range determination method described above.
In one embodiment, when the filtering reference pixels comprise reference pixels in two directions, the filtering reference pixel whose direction differs from that of the prediction reference pixel is selected for the distance relationship calculation.
Specifically, the filtering range is determined as follows: for any prediction point (x, y) in a prediction block, first compute the distance d_r(x, y) between the prediction point and the prediction reference pixel (x_r, y_r), as shown in equation 1, then compute the distance d_f(x, y) between the prediction point and the filtering reference pixel (x_f, y_f), as shown in equation 2:

d_r(x, y) = sqrt((x − x_r)² + (y − y_r)²)   (1)

d_f(x, y) = sqrt((x − x_f)² + (y − y_f)²)   (2)
Then a third threshold a is set, and the distance relationship is defined as F(d_f(x, y), d_r(x, y)). When the distance relationship is less than or equal to the third threshold a, the point is set as a point to be filtered, i.e., it is within the filtering range. When the distance relationship is greater than the third threshold a, the point is not a point to be filtered, i.e., it is outside the filtering range. The filtering range is the region formed by the points to be filtered. For convenience of the following description, this method is referred to as the second filtering range selection scheme.
By way of example only, referring to fig. 9, fig. 9 is a schematic diagram of a filtering range selection method according to another embodiment of the present application. The prediction reference pixel in the direction indicated by the prediction direction of the current prediction point is U5, and the filtering reference pixel is L2. A distance relationship F(d_f, d_r) between the current prediction point and the two reference pixels is chosen, the threshold is set to 1, and the current point (x, y) is (3, 2). The computed distance relationship exceeds the threshold, so the current prediction point is not a point to be filtered and is not in the filtering range. A similar determination is made for each prediction point in the prediction block, thereby determining the filtering range. Under the above conditions, the hatched portion in the figure is the filtering range.
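A sketch of the second filtering range selection scheme. The text leaves the form of F(d_f, d_r) unspecified; the ratio d_f / d_r is used here as one illustrative choice, and the reference pixel coordinates are assumptions for the example:

```python
import math

def in_filtering_range(point, pred_ref, filt_ref, threshold):
    """Second filtering-range selection scheme: a prediction point is
    kept when F(d_f, d_r) <= threshold.  F is taken here as the ratio
    d_f / d_r (an illustrative choice, not fixed by the text)."""
    d_r = math.dist(point, pred_ref)  # distance to prediction reference pixel (eq. 1)
    d_f = math.dist(point, filt_ref)  # distance to filtering reference pixel (eq. 2)
    return d_f / d_r <= threshold

# Placing U5 at (5, -1) and L2 at (-1, 2) (illustrative coordinates),
# with threshold 1 the point (3, 2) falls outside the filtering range.
outside = not in_filtering_range((3, 2), (5, -1), (-1, 2), 1)
```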
Compared with a preset fixed filtering range, the filtering range selection methods disclosed in the present application can flexibly set the filtering range based on the size of the prediction block, enhancing intra-frame filtering efficiency.
It should be noted that, in one filtering method, only one filtering range selection method may be selected, for example, any one of the methods shown in fig. 8 and 9.
In step S330, a filtering reference value is determined.
In an embodiment, the luma filter reference value is determined when the luma of the intra prediction block is filtered, and the chroma filter reference value is determined when the chroma of the intra prediction block is filtered.
When the luminance value of the intra prediction block is filtered, the filtering reference pixel may itself be filtered using a luma reference pixel filtering manner to obtain the luma filtering reference value. The luma reference pixel filtering manner is associated with several reference pixels in the vicinity of the filtering reference pixel; any filtering manner related to those nearby pixels may be used. For example, the filtering manner may be positively correlated with the nearby reference pixels, weakening the influence of the filtering reference pixel on the filtering process and referring more to its neighbours. Conversely, the filtering manner may be negatively correlated with the nearby reference pixels, strengthening the influence of the filtering reference pixel on the filtering process. The luminance value obtained by filtering the filtering reference pixel is then used as the luma filtering reference value for luminance filtering of the prediction block.
In one embodiment, when the current intra-prediction block meets a first predetermined condition, filtering the filtered reference pixels by using a luminance reference pixel filtering manner is performed on the current intra-prediction block. The first predetermined condition may include that the luma prediction mode of the current intra-prediction block is non-angle prediction. Alternatively, the first predetermined condition may further include that the area (w × h) of the current intra-prediction block is larger than a first threshold, where the first threshold may be preset empirically or determined based on statistics of several frame images. For example, the first threshold may be 64. Still alternatively, the first predetermined condition may further include that the quantization parameter of the current intra-prediction block is greater than a second threshold, where the second threshold may be preset empirically or determined based on statistics of several frame images. For example, the second threshold may be 40. In an embodiment, the luma filter reference value may be obtained by filtering the filter reference pixels as described above for all intra prediction blocks in the current frame image.
In one embodiment, the filtering the filtered reference pixel by using a luma reference pixel filtering method includes: acquiring a filtering reference pixel and coefficients corresponding to a plurality of reference pixels nearby the filtering reference pixel; based on the coefficient, the luminance value of the filtered reference pixel and the luminance values of several reference pixels in the vicinity thereof are weighted and averaged.
Specifically, N points around a filtering reference pixel x are used and filtered with equation 3 below; the filtered luminance value is then used as the luminance value of the filtering reference pixel when filtering the pixel point to be predicted. With the reference pixel position corresponding to the filtering direction set as x, the a points to the left (or above), the N − a − 1 points to the right (or below), and the point itself are used for filtering:

R'(x) = Σ_{i=1..N} f(i) · R(x − a + i − 1)   (3)

Where R(x) represents the reference pixel luminance value before filtering, R'(x) represents the reference pixel value after filtering, and f(i) represents the filter coefficient corresponding to each reference pixel position, with the coefficients f(i) summing to 1.
In an embodiment, when there are two filtering reference pixels, two filtering reference pixels need to be filtered at the same time, and the prediction block is filtered by using the filtered luminance value as the luminance filtering reference value.
For illustration only, referring to fig. 10, fig. 10 is a schematic diagram of a method for determining a luminance filtering reference value according to an embodiment of the present application. The filtering reference pixel at the prediction point in fig. 10 is L2, that is, R(2) = L(2); in equation 3, N = 3 and a = 1, and the filter coefficients used for the filtering reference pixel are [1, 2, 1], that is, f(1) = 1/4, f(2) = 1/2, f(3) = 1/4. The filtering reference pixel luminance value used in the filtering process is then:

R'(2) = f(1)·L(1) + f(2)·L(2) + f(3)·L(3) = (L(1) + 2·L(2) + L(3)) / 4
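A minimal sketch of this weighted-average reference pixel filtering; the tap layout follows the N = 3, a = 1, [1, 2, 1] example (sample values are illustrative):

```python
def filter_reference_pixel(ref, x, coeffs, a):
    """Equation 3: weighted average over the a neighbours to the left
    (or above), the pixel itself, and N - a - 1 neighbours to the right
    (or below).  coeffs are the normalized taps f(1)..f(N)."""
    return sum(f * ref[x - a + i] for i, f in enumerate(coeffs))

# fig. 10 setting: taps [1, 2, 1]/4 around reference position 2
line = [10, 20, 40, 20]                      # illustrative luminance values
r_prime = filter_reference_pixel(line, 2, [0.25, 0.5, 0.25], 1)
```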
When the chroma value of the intra prediction block is filtered, the filtering reference pixel may be filtered using a chroma reference pixel filtering manner to obtain the chroma filtering reference value. The chroma reference pixel filtering manner is associated with several reference pixels in the vicinity of the filtering reference pixel; any filtering manner related to those nearby pixels may be used. For example, the filtering manner may be positively correlated with the nearby reference pixels, weakening the influence of the filtering reference pixel on the filtering process and referring more to its neighbours. Conversely, the filtering manner may be negatively correlated with the nearby reference pixels, strengthening the influence of the filtering reference pixel on the filtering process. The chroma value obtained by filtering the filtering reference pixel is then used as the chroma filtering reference value for chroma filtering of the prediction block.
In an embodiment, when the current intra-prediction block satisfies a fourth predetermined condition, filtering the filtered reference pixels by using a chroma reference pixel filtering manner is performed on the current intra-prediction block. Wherein the fourth predetermined condition may include that the chroma prediction mode of the current intra prediction block is non-angular prediction. Alternatively, the fourth predetermined condition may further include that the area (w × h) of the current intra-prediction block is larger than a seventh threshold, where the seventh threshold may be preset empirically or determined based on statistics of several frame images. For example, the seventh threshold may be 16. Still alternatively, the fourth predetermined condition may further include that the quantization parameter of the current intra-prediction block is greater than an eighth threshold, where the eighth threshold may be preset empirically or determined based on statistics of several frame images. For example, the eighth threshold may be 40. In an embodiment, the chroma filtering reference value may be obtained by performing the above-described filtering of the filtering reference pixel on all intra prediction blocks in the current frame image.
In one embodiment, the filtering the filtered reference pixel by using a chroma reference pixel filtering method includes: acquiring a filtering reference pixel and coefficients corresponding to a plurality of reference pixels nearby the filtering reference pixel; based on the coefficient, the chrominance values of the filtered reference pixels and the chrominance values of several reference pixels in the vicinity thereof are weighted and averaged. The specific method may refer to the above method for determining the brightness value of the reference pixel in the brightness filtering process.
In an embodiment, when there are two filtering reference pixels, the two filtering reference pixels need to be filtered simultaneously, and the prediction block is filtered by using the filtered chrominance value as the chrominance filtering reference value.
Compared with directly using the luminance value (chroma value) of the filtering reference pixel as the luminance (chroma) filtering reference value, the method for determining the filtering reference value disclosed in the present application avoids relying on the value of a single pixel alone, makes fuller use of the reference pixel information, and improves the prediction effect.
In step S340, a filter coefficient is determined.
It is worth noting that the method in this step is also applicable to filtering luminance or chrominance of intra-predicted blocks.
In one embodiment, the filter coefficient is determined based on a relationship between a first distance and a second distance of the point to be filtered, where the first distance is the distance between the point to be filtered and the filtering reference pixel, and the second distance is the distance between the point to be filtered and the prediction reference pixel. Specifically, for each point to be filtered, both distances are calculated, and the magnitude of the filter coefficient is determined by their relative sizes. For example, reference pixels farther from the point to be filtered use smaller filter coefficients, and closer reference pixels use larger ones.
In an embodiment, the filter coefficient is at least inversely related to the ratio between the first distance and the second distance of the point to be filtered. The specific method for determining the filter coefficients is as follows: the coordinates of the prediction point P are (x, y), and the prediction reference pixel is a point a with value P(a). The filtering reference pixels are i reference pixel points with values f(i), and one or more of them are selected and added into the calculation. For example, the filtering reference pixels are points b, c, d, …, with corresponding reference pixel values f(b), f(c), f(d), …. Each reference pixel corresponds to a filter coefficient determined from the distances defined below; the calculation process of the filtering can refer to equation 4.
Where P(x, y) is the predicted value of the current prediction point before filtering, P'(x, y) is the predicted value of the current prediction point after filtering, d(P, i) represents the distance between prediction point P and the i-th reference pixel point, and d(P, a) represents the distance between point P and point a; the filter coefficients satisfy a normalization condition.
For convenience of description, when there is only one filtering reference pixel, for example at point b, the filter coefficient expression simplifies to a function of the single distance ratio d(P, b)/d(P, a).
Referring to fig. 11 by way of example only, fig. 11 is a schematic diagram of a filter coefficient determination method according to an embodiment of the present application. In the embodiment of fig. 11, there is only one filtering reference pixel, namely reference pixel L5. According to the prediction reference direction, for the prediction point (x, y) the prediction reference pixel is U(5), so d(P, a) represents the distance between the prediction point (x, y) and U(5), and P(x, y) equals U(5). For the prediction point (x, y), the filtering direction points to the reference pixel L(5), so d(P, b) represents the distance of the prediction point (x, y) from L(5). The filtering process is calculated as shown in equation 5.
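Equations 4 and 5 are not reproduced in this text, so the sketch below implements one coefficient choice consistent with the stated property: each reference pixel's weight decreases with its distance from the prediction point, making the filtering reference pixel's coefficient inversely related to the ratio d(P, b)/d(P, a). The 1/d weight form is an assumption of this sketch, not the patent's formula:

```python
def distance_weighted_prediction(p_a, f_b, d_pa, d_pb):
    """Filter the prediction P(x, y) = P(a) with one filtering reference
    pixel b.  Weights 1/d(P, a) and 1/d(P, b) are normalized to sum to 1,
    so distant reference pixels contribute less (assumed coefficient form)."""
    w_a, w_b = 1.0 / d_pa, 1.0 / d_pb
    return (w_a * p_a + w_b * f_b) / (w_a + w_b)

# e.g. P(a) = U(5) = 100, f(b) = L(5) = 40, with b three times farther away
p_filtered = distance_weighted_prediction(100, 40, 1.0, 3.0)
```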
Compared with determining the filter coefficient of each reference pixel by table lookup or similar methods, the filter coefficient determination method disclosed in the present application derives the coefficients from the distances between the prediction point and the reference pixels: reference pixels far from the prediction point receive smaller weights, and nearby reference pixels receive larger weights. In an image, the correlation between two distant points is weaker than that between two nearby points, so this method makes the filtering process more reasonable and closer to reality.
Step S350, a filtering method is obtained.
In an embodiment, at least one of the parameters in the filtering method is determined by the methods above. In an embodiment, several filtering methods can be obtained by freely combining the methods above. For example, a filtering method may determine the filtering reference pixel by any of the methods in step S310 and the filtering reference value by the method in step S330, with the other filtering parameters determined by existing methods. For another example, a filtering method may determine the filtering range by any method in step S320, the filtering reference value by any method in step S330, and the filter coefficients by any method in step S340, combined without conflict. For another example, the filtering reference pixel, filtering range, filtering reference value, and filter coefficients may all be determined by the methods in the above steps, combined without conflict. This determination process applies to both luminance filtering and chrominance filtering.
In one embodiment, the chroma may also be filtered using methods used to filter the luma in the prior art.
In an embodiment, the chroma of each point to be filtered in the current intra-prediction block may also be filtered by using a preset chroma filter. The preset chroma filter is a matrix with fixed size and elements. For example, the chroma filter may be a 3 × 3 matrix, wherein the values of the elements in the matrix may be fixed values preset based on the prediction block. Filtering the chroma of the intra prediction block when the current intra prediction block satisfies a third predetermined condition. In an embodiment, the third predetermined condition may be that the current intra prediction block is obtained based on a synchronous coding unit partition (CU partition) of chroma and luma.
In an embodiment, the third predetermined condition may be that the chroma prediction mode of the current intra-prediction block is a designated prediction mode. Specifically, the chroma of the intra prediction block is filtered only when its chroma prediction mode is the designated prediction mode. For example: chroma prediction modes other than DM, DC, Planar, and BI are not filtered; or chroma prediction modes other than DC, Planar, and BI are not filtered; or filtering is applied when the chroma prediction mode is a diagonal mode, TSCPM_L, TSCPM_T, PMC_L, PMC_T, or a prediction mode other than horizontal and vertical; or TSCPM_L, TSCPM_T, and prediction modes other than horizontal and vertical are not filtered; or, as a further example, filtering is performed for any chroma prediction mode.
In an embodiment, the third predetermined condition is that the size of the current intra-prediction block is within a preset threshold. Specifically, the chroma of the current intra-prediction block is filtered when its length is equal to or greater than a fourth threshold and its width is equal to or greater than a fifth threshold. For example, only intra prediction blocks with width and height each greater than 8 are chroma filtered. Alternatively, chroma filtering is performed for blocks whose width-height product, i.e. prediction block area, is greater than or equal to a sixth threshold (e.g., 64); or, as another example, chroma filtering is performed with no restriction on the current intra prediction block size.
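The alternative size constraints above can be sketched as a simple predicate; the thresholds 8 and 64 are the document's example values:

```python
def chroma_size_condition(w, h, scheme="width_height"):
    """Example size constraints for the third predetermined condition.
    Thresholds follow the examples in the text and are not fixed."""
    if scheme == "width_height":   # width and height each greater than 8
        return w > 8 and h > 8
    if scheme == "area":           # block area at least the sixth threshold
        return w * h >= 64
    return True                    # no restriction on block size
```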
Referring to fig. 12, fig. 12 is a flowchart illustrating an encoding method according to an embodiment of the present application. Note that, if the result is substantially the same, the flow sequence shown in fig. 12 is not limited in this embodiment. As shown in fig. 12, the method includes:
in step S1210, a plurality of intra prediction values of the current intra prediction block are obtained.
In an embodiment, the plurality of intra prediction values includes at least one intra prediction value obtained by filtering according to the intra prediction filtering method. The intra prediction block may have a plurality of intra prediction values, some obtained after filtering and some obtained by direct prediction without filtering. The filtering process may employ any of the filtering methods described previously. Each intra prediction value comprises a luminance value and a chroma value.
Step S1220, the intra prediction value with the minimum prediction cost is selected as the intra prediction result of the current intra prediction block.
In an embodiment, the RDO loss of each intra prediction result may be calculated, and the intra prediction value with the smallest RDO loss selected as the intra prediction result of the intra prediction block. RDO losses may be calculated separately for the results after luminance filtering and after chrominance filtering, so that whether to perform luminance filtering and whether to perform chrominance filtering can each be determined based on its own RDO loss.
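A minimal sketch of this selection step, using the standard rate-distortion cost D + λ·R; the candidate tuple structure and λ value are illustrative, not part of the source:

```python
def select_by_rdo(candidates, lam):
    """Pick the candidate prediction with minimum rate-distortion cost
    D + lambda * R.  Each candidate is (name, distortion, bits)."""
    return min(candidates, key=lambda c: c[1] + lam * c[2])[0]

# Illustrative: the filtered prediction costs fewer distortion units but
# more bits; with lambda = 2 it still wins.
cands = [("unfiltered", 100, 10), ("filtered", 80, 14)]
best = select_by_rdo(cands, 2)
```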
Step 1230, the intra-frame prediction block is coded based on the prediction result of the intra-frame prediction block to obtain a code stream.
The code stream includes an intra prediction filtering syntax element, and the intra prediction filtering syntax element may be used to indicate whether the prediction result is an intra prediction value obtained after filtering. The intra prediction filtering syntax element may also be used to indicate that the filtering method employed is a prior art method, or any of the filtering methods previously described.
In an embodiment, the intra prediction filtering syntax element may be determined based on an original syntax element ipf _ flag, or may be a newly added syntax element. The newly added syntax element may be a syntax element such as ipf _ mode. The intra prediction filtering syntax element may be used for selection of a plurality of filtering methods.
In an embodiment, whether the chroma of the intra prediction block is filtered may be expressed jointly with luma by a single syntax element, for example ipf_flag. In an embodiment, a new syntax element dedicated to chroma filtering may instead be used, with chroma filtering and luma filtering each selected by RDO independently. The following explains the use of syntax elements taking the luminance filtering method as an example. For convenience of description, the parameter selection schemes in the foregoing luminance filtering method are numbered here. It should be noted that these numbers are only for convenience of the following exemplary description and do not mean that the foregoing parameter selection schemes are limited to these.
There are three schemes for luma filtering reference pixel selection: in the first, a reference pixel in the same direction as the prediction reference pixel is selected as the luma filtering reference pixel; in the second, reference pixels in directions different from the prediction reference pixel are selected as luma filtering reference pixels; in the third, when the prediction mode of the intra prediction block is an angular prediction mode, obtaining the luma filtering parameter includes obtaining reference pixels in at least two directions as luma filtering reference pixels, and when the prediction mode is a non-angular prediction mode, the luma filtering reference pixels are obtained by an existing method.
There are two schemes for luma filtering range selection: in the first, the luma filtering range is determined based on the preset size ratio of the filtering range to the intra-prediction block, i.e. the value a; in the second, when the prediction mode of the current intra prediction block is angular prediction, obtaining the luma filtering parameter includes determining the prediction points in the current intra-prediction block that satisfy the second predetermined condition as points to be luma filtered, these points forming the luma filtering range, and when the prediction mode of the current intra-prediction block is non-angular prediction, the luma filtering range may be obtained by the prior art.
There is one scheme for luma filtering reference value selection: when the current prediction block satisfies the first predetermined condition, the luma filtering reference pixel is filtered using the luma reference pixel filtering manner to obtain the luma filtering reference value.
There is one scheme for luma filter coefficient selection: the luma filter coefficient is determined based on the relationship between a first distance, from the point to be filtered to the luma filtering reference pixel, and a second distance, from the point to be filtered to the prediction reference pixel.
For example only, different luma filtering methods may be expressed by adding an ipf_mode syntax element. If 7 luma filtering methods are defined, the value range of ipf_mode is [0, 6]. The original syntax element ipf_flag is unchanged: ipf_flag = 0 indicates no filtering, and ipf_flag = 1 indicates filtering. Table 1 below lists, by way of example, the method combination represented by each value of the syntax element ipf_mode.
Table 1: method combination represented by each value of the syntax element ipf_mode
For illustration only, the filtering method described in the previous example may also be expressed by modifying the ipf_flag syntax element; table 2 below lists, by way of example, the method combination represented by each value of the syntax element.
Table 2: method combination represented by each value of the syntax element
For illustration only, the following embodiment is a filtering scheme for filtering the chroma of an intra prediction block. The chroma filtering method is as follows: on the basis of the conventional method of filtering luminance, the chroma filtering reference value is determined by the method disclosed in the present application. That is, 3 points around the filtering reference pixel (above and below it) are taken, their chroma values are weighted and averaged with the reference coefficients [1, 2, 1], and the filtered chroma value is used as the chroma value of the filtering reference pixel when chroma filtering the pixel point to be predicted.
The syntax elements for whether the intra-predicted block is filtered are expressed as: when the syntax ipf _ flag of the luminance filtering is 1, the luminance and the chrominance are filtered, and when the ipf _ flag is 0, the luminance and the chrominance are not filtered.
Meanwhile, the following constraints are added on whether the intra prediction block is chroma filtered: chroma filtering is performed when the intra prediction block is obtained based on synchronous coding unit partition (CU partition) of chroma and luma; chroma filtering is performed when the chroma prediction mode is an angular mode, TSCPM_L, TSCPM_T, PMC_L, PMC_T, or horizontal/vertical; there is no restriction on block size, i.e. chroma filtering can be performed for blocks of any size.
The prediction block is chroma filtered based on the chroma filtering scheme described above.
For illustration only, the following embodiment is another filtering scheme for filtering chroma of an intra prediction block. The chrominance filtering method comprises the following steps: conventional methods of filtering luminance. The syntax elements for whether the intra-predicted block is filtered are expressed as: when the syntax ipf _ flag of the luminance filtering is 1, the luminance and the chrominance are filtered, and when the ipf _ flag is 0, the luminance and the chrominance are not filtered.
Meanwhile, the following constraints are added on whether the intra prediction block is chroma filtered: chroma filtering is performed when the intra prediction block is obtained based on synchronous coding unit partition (CU partition) of chroma and luma; chroma filtering is performed when the chroma prediction mode is TSCPM_L, TSCPM_T, or horizontal/vertical; there is no restriction on block size, i.e. chroma filtering can be performed for blocks of any size.
For example only, the luminance of the intra prediction block is filtered, wherein the filtering range uses the method disclosed in the present application, and the constraint that only prediction blocks in non-angular prediction modes are luma filtered is added; the chroma filtering adopts one of the schemes in the two embodiments above.
Referring to fig. 13, fig. 13 is a flowchart illustrating a decoding method according to an embodiment of the present application. Note that, if the result is substantially the same, the flow sequence shown in fig. 13 is not limited in this embodiment. As shown in fig. 13, the method includes:
step S1310, code stream data is acquired.
In an embodiment, the code stream data is obtained by encoding according to any one of the aforementioned encoding methods. In an embodiment, the code stream data is obtained after performing intra prediction filtering by any one of the aforementioned filtering methods.
Step S1320, parsing the code stream data, and obtaining a filtering method for the intra prediction value of the current intra prediction block.
In an embodiment, the code stream data may be parsed to obtain the intra prediction filtering syntax element, from which the filtering method of the intra prediction value of the current intra prediction block is obtained. The intra prediction filtering syntax element may be the syntax element corresponding to the filtering method determined by any one of the aforementioned methods.
Step S1330, restoring the intra prediction value of the current intra prediction block before filtering according to the decoding filtering method corresponding to the filtering method of the intra prediction value of the current intra prediction block, and further decoding a residual image to restore the original image.
In an embodiment, the decoding filtering method is changed as the filtering method of the intra prediction value of the current intra prediction block is changed. In one embodiment, a first intra prediction value before filtering of a first intra prediction block is restored at a first time according to a first decoding filtering method corresponding to a first filtering method of an intra prediction value of the first intra prediction block, and a second intra prediction value before filtering of a second intra prediction block is restored at a subsequent second time according to a second decoding filtering method corresponding to a second filtering method of an intra prediction value of the second intra prediction block.
Referring to fig. 14, fig. 14 is a schematic structural diagram of a filter according to an embodiment of the present application. In this embodiment, the filter 1400 includes an obtaining module 1410 and a filtering module 1420, and the filter may be configured to perform the filtering method described in any of the above embodiments.
In an embodiment, the obtaining module 1410 may obtain the filtering parameters. In one embodiment, the filtering parameters include at least one of a filtering reference pixel, a filtering reference value, a filtering range, and a filtering coefficient. In an embodiment, the obtaining module 1410 may change the selection direction of the filtering reference pixels so that the directional relationship between the filtering reference pixels and the prediction reference pixels is consistent across the whole prediction block. In an embodiment, the obtaining module 1410 may determine the filtering range based on relevant parameters of the prediction block. In an embodiment, the obtaining module 1410 may use a value obtained by optimizing the reference pixel value as the filtering reference value. In an embodiment, the obtaining module 1410 may obtain a filtering coefficient, where the filtering coefficient is related to the position of the point to be filtered.
In an embodiment, the filtering module 1420 may construct at least one filtering method using the determined filtering parameters and filter the intra prediction block with each constructed filtering method. In an embodiment, after the filtering module 1420 filters the intra-prediction block using the obtained filtering methods, the filtering method with the smallest rate-distortion optimization (RDO) loss may be selected as the effective filtering method for the intra-prediction block.
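The RDO-based selection can be illustrated with a minimal sketch. The cost model J = D + λR and the candidate tuples below are assumptions chosen for illustration; the application does not fix a particular cost formula here.

```python
def rdo_cost(distortion, rate, lam):
    # Standard rate-distortion cost: J = D + lambda * R.
    return distortion + lam * rate

def select_filtering_method(candidates, lam=0.5):
    """candidates: (name, distortion, rate) per filtering method.
    Returns the name of the method with the smallest RDO loss."""
    return min(candidates, key=lambda c: rdo_cost(c[1], c[2], lam))[0]

candidates = [
    ("no_filter", 120.0, 10),  # lowest rate, highest distortion
    ("filter_a", 90.0, 14),
    ("filter_b", 95.0, 12),
]
best = select_filtering_method(candidates)  # "filter_a" for these numbers
```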
Referring to fig. 15, fig. 15 is a schematic structural diagram of an encoder according to an embodiment of the present application. In this embodiment, the encoder 1500 includes a processor 1510.
The encoder 1500 may further include a memory (not shown) for storing instructions and data required for the operation of the processor 1510.
The processor 1510 is configured to execute instructions to implement the methods provided by any of the embodiments of the intra prediction methods of the present application and any non-conflicting combinations thereof described above.
Referring to fig. 16, fig. 16 is a schematic structural diagram of a decoder according to an embodiment of the present application. In this embodiment, decoder 1600 includes a processor 1610.
The processor 1610 is configured to execute instructions to implement the methods provided by any of the embodiments of the intra prediction method of the present application and any non-conflicting combinations as described above.
Referring to fig. 17, fig. 17 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present application. The computer-readable storage medium 1700 of an embodiment of the present application stores instructions/program data that, when executed, implement the methods provided by any embodiment of the intra prediction methods of the present application, as well as any non-conflicting combinations thereof. The instructions may form a program file stored in the storage medium 1700 in the form of a software product, enabling a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium 1700 includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc, as well as terminal devices such as a computer, a server, a mobile phone, or a tablet.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into units is merely a logical division, and an actual implementation may adopt a different division; a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The above description is only an embodiment of the present application and is not intended to limit its scope; all equivalent structural and process modifications made using the content of this specification and the accompanying drawings, whether applied directly or indirectly in other related technical fields, fall within the scope of the present application.
Claims (30)
1. An intra prediction filtering method, comprising:
obtaining a filtering parameter; wherein the filtering parameter comprises at least one of a filtering reference pixel, a filtering reference value, a filtering range and a filtering coefficient;
and filtering the current intra-frame prediction block by utilizing the filtering parameters.
2. The method of claim 1, wherein the obtaining filter parameters comprises:
and selecting a reference pixel in the same direction as the prediction reference pixel as the filtering reference pixel based on the direction of the prediction reference pixel of the point to be filtered in the current intra-frame prediction block.
3. The method of claim 1, wherein the obtaining filter parameters comprises:
and selecting reference pixels in different directions with the prediction reference pixels as the filtering reference pixels based on the directions of the prediction reference pixels of the points to be filtered in the current intra-frame prediction block.
4. The method of claim 1, wherein the prediction mode of the current intra-prediction block is an angular prediction mode, and wherein the obtaining the filtering parameter comprises:
reference pixels in at least two directions are obtained as the filtering reference pixels.
5. The method of claim 1, wherein the obtaining filter parameters comprises:
and determining the filtering range based on the preset size ratio of the filtering range to the current intra-frame prediction block.
6. The method of claim 5, wherein the determining the filtering range based on a preset size ratio of the filtering range to the current intra-prediction block comprises:
acquiring the preset size ratio of the filtering range to the current intra-frame prediction block;
determining a width of the filtering range based on the ratio and a width of the current intra-prediction block, and determining a height of the filtering range based on the ratio and a height of the current intra-prediction block;
determining the filtering range based on a width of the filtering range and a height of the filtering range.
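The three steps of claim 6 amount to scaling the block dimensions by the preset ratio. A minimal sketch follows; the rounding behavior (floor, with a minimum of one sample) is an assumed convention that the claim does not specify:

```python
def filtering_range(block_width, block_height, ratio):
    """Derive the filtering range from a preset size ratio and the
    current intra-prediction block's width and height. Rounding down
    with a floor of 1 sample is an assumed convention."""
    range_w = max(1, int(block_width * ratio))
    range_h = max(1, int(block_height * ratio))
    return range_w, range_h

# e.g. a 16x8 block with a preset ratio of 1/4 yields a 4x2 filtering range
w, h = filtering_range(16, 8, 0.25)
```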
7. The method of claim 1, wherein the prediction mode of the current intra-prediction block is angular prediction, and wherein the obtaining the filtering parameter comprises:
determining a prediction point in the current intra-frame prediction block meeting a second predetermined condition as a point to be filtered, wherein the point to be filtered forms the filtering range;
wherein the second predetermined condition is that a distance relationship between the distance from the prediction point to the filtering reference pixel and the distance from the prediction point to the prediction reference pixel is less than a third threshold.
8. The method of claim 7, wherein the filtered reference pixels comprise reference pixels in two directions, and wherein determining the prediction point in the current intra prediction block that satisfies a second predetermined condition as the point to be filtered comprises:
and selecting the filtering reference pixels in different directions from the prediction reference pixels to calculate the distance relation.
9. The method of claim 1, wherein the obtaining filter parameters comprises:
the filter coefficient is determined based on a relationship between a first distance and a second distance of the point to be filtered, wherein the first distance is a distance between the point to be filtered and the filtering reference pixel, and the second distance is a distance between the point to be filtered and the prediction reference pixel.
10. The method of claim 9, wherein determining the filter coefficient based on a relationship between a first distance and a second distance of the point to be filtered comprises:
the filter coefficient is inversely related to at least a ratio between the first distance and the second distance of the point to be filtered.
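Claim 10 only constrains the coefficient to be inversely related to the ratio of the two distances; the concrete formula below is an assumption chosen to satisfy that constraint, not the claimed formula itself:

```python
def filter_coefficient(dist_to_filter_ref, dist_to_pred_ref, scale=1.0):
    """Coefficient that decreases as the distance to the filtering
    reference pixel grows relative to the distance to the prediction
    reference pixel (inverse relation to the ratio)."""
    ratio = dist_to_filter_ref / dist_to_pred_ref
    return scale / (1.0 + ratio)

# Points nearer the filtering reference pixel get larger coefficients:
near = filter_coefficient(1.0, 4.0)  # ratio 0.25
far = filter_coefficient(4.0, 4.0)   # ratio 1.0
```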
11. The method of claim 1, wherein the filtering a current intra-prediction block using the filtering parameters comprises:
and filtering the brightness of the current intra-prediction block by using the filtering parameters.
12. The method of claim 11, wherein the obtaining filter parameters comprises:
filtering the filtering reference pixel by using a brightness reference pixel filtering mode to obtain a brightness filtering reference value; wherein the brightness reference pixel filtering mode is related to a plurality of reference pixels in the vicinity of the filtering reference pixel.
13. The method of claim 12, wherein the filtering the filtering reference pixel by using a brightness reference pixel filtering mode comprises:
acquiring coefficients corresponding to the filtering reference pixel and the plurality of nearby reference pixels;
and performing weighted averaging on the brightness value of the filtering reference pixel and the brightness values of the plurality of nearby reference pixels based on the coefficients.
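The weighted averaging of claim 13 can be sketched as follows. The normalization by the coefficient sum is an assumed convention (claims 12 and 13 do not state whether the coefficients are pre-normalized), and the weight ordering is illustrative:

```python
def weighted_luma(ref_luma, nearby_lumas, coeffs):
    """Weighted average of the filtering reference pixel's brightness
    and the brightness of its nearby reference pixels.
    coeffs[0] weights the filtering reference pixel; the remaining
    coefficients weight the nearby pixels in order."""
    values = [ref_luma] + list(nearby_lumas)
    total = sum(coeffs)  # assumed normalization
    return sum(v * c for v, c in zip(values, coeffs)) / total

# e.g. reference pixel 100 with neighbors 80 and 120, weights 2:1:1
filtered = weighted_luma(100, [80, 120], [2, 1, 1])
```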
14. The method of claim 12, wherein the filtering the filtering reference pixel by using a brightness reference pixel filtering mode comprises:
performing the filtering of the filtering reference pixel by using the brightness reference pixel filtering mode when the current intra-frame prediction block satisfies a first predetermined condition;
wherein the first predetermined condition is that the brightness prediction mode of the current intra-frame prediction block is not an angle prediction mode; or
the first predetermined condition is that an area of the current intra-prediction block is greater than a first threshold; or
the first predetermined condition is that a quantization parameter of the current intra-prediction block is greater than a second threshold.
15. The method of claim 1, wherein the filtering a current intra-prediction block using the filtering parameters comprises:
and filtering the chroma value of the current intra-prediction block by utilizing the filtering parameter.
16. The method of claim 15, wherein the filtering chroma values of a current intra-predicted block using the filtering parameters comprises:
filtering a chroma of the intra-prediction block when the current intra-prediction block satisfies a third predetermined condition;
wherein the third predetermined condition is that the current intra-prediction block is obtained based on a synchronous coding unit division of chroma and luma; or
the third predetermined condition is that the chroma prediction mode of the current intra-prediction block is a specified prediction mode; or
the third predetermined condition is that the length of the current intra-prediction block is equal to or greater than a fourth threshold and the width is equal to or greater than a fifth threshold, or that the product of the length and the width is equal to or less than a sixth threshold.
17. The method of claim 15, wherein the obtaining filter parameters comprises:
filtering the filtering reference pixel by using a chrominance reference pixel filtering mode to obtain a chrominance filtering reference value; wherein the chrominance reference pixel filtering mode is related to a plurality of reference pixels in the vicinity of the filtering reference pixel.
18. The method of claim 17, wherein the filtering the filtering reference pixel by using a chrominance reference pixel filtering mode comprises:
acquiring coefficients corresponding to the filtering reference pixel and the plurality of nearby reference pixels;
and performing weighted averaging on the chrominance value of the filtering reference pixel and the chrominance values of the plurality of nearby reference pixels based on the coefficients.
19. The method of claim 17, wherein the filtering the filtering reference pixel by using a chrominance reference pixel filtering mode comprises:
performing the filtering of the filtering reference pixel by using the chrominance reference pixel filtering mode when the current intra-frame prediction block satisfies a fourth predetermined condition;
wherein the fourth predetermined condition is that the chrominance prediction mode of the current intra-frame prediction block is not an angle prediction mode; or
the fourth predetermined condition is that the area of the current intra-prediction block is greater than a seventh threshold; or
the fourth predetermined condition is that a quantization parameter of the current intra-prediction block is greater than an eighth threshold.
20. The method of claim 15, wherein the filtering chroma values of a current intra-predicted block using the filtering parameters comprises:
filtering the chroma of each point to be filtered in the current intra-frame prediction block by using a preset chroma filter;
the preset chrominance filter is a matrix with a fixed size and fixed elements.
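Applying such a fixed-matrix chrominance filter can be sketched as a small convolution. The kernel values, the clamped edge handling, and the integer normalization below are all illustrative assumptions; the claim only requires the matrix's size and elements to be fixed:

```python
def apply_chroma_matrix(chroma_block, kernel):
    """Filter a chroma block with a fixed-size, fixed-element kernel.
    Edges are handled by clamping coordinates (assumed convention)."""
    h, w = len(chroma_block), len(chroma_block[0])
    kh, kw = len(kernel), len(kernel[0])
    norm = sum(sum(row) for row in kernel)
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0
            for dy in range(kh):
                for dx in range(kw):
                    sy = min(max(y + dy - kh // 2, 0), h - 1)
                    sx = min(max(x + dx - kw // 2, 0), w - 1)
                    acc += chroma_block[sy][sx] * kernel[dy][dx]
            out[y][x] = acc // norm  # integer normalization (assumed)
    return out

kernel = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]  # illustrative fixed kernel
flat = [[128] * 4 for _ in range(4)]
result = apply_chroma_matrix(flat, kernel)  # a flat block stays flat
```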
21. The method of claim 1, wherein the filtering a current intra-prediction block using the filtering parameters comprises:
and performing brightness filtering and chroma filtering on the current intra-frame prediction block by using the filtering parameters.
22. A method of encoding, comprising:
obtaining a plurality of intra prediction values of a current intra prediction block, wherein at least one of the intra prediction values comprises an intra prediction value obtained by filtering according to at least one intra prediction filtering method of any one of claims 1 to 21;
selecting the intra-frame prediction value with the minimum prediction cost as an intra-frame prediction result of the current intra-frame prediction block;
and carrying out subsequent coding process on the current intra-frame prediction block based on the intra-frame prediction result of the current intra-frame prediction block to obtain a code stream.
23. The encoding method as claimed in claim 22, wherein the intra prediction value includes a luminance value and a chrominance value, and the code stream corresponding to the current intra prediction block includes a code stream encoded by the luminance value and a code stream encoded by the chrominance value.
24. The encoding method according to claim 22, wherein the code stream includes an intra prediction filtering syntax element, and the intra prediction filtering syntax element is used to indicate that the intra prediction result is an intra prediction value obtained by filtering according to the filtering method of any one of claims 1 to 21.
25. A method of decoding, comprising:
acquiring code stream data, wherein the intra-frame predicted value of the current intra-frame predicted block in the code stream data is an intra-frame predicted value obtained after filtering;
analyzing the code stream data to obtain a filtering method of the intra-frame predicted value of the current intra-frame predicted block;
and restoring the intra-frame predicted value of the current intra-frame predicted block before filtering according to a decoding filtering method corresponding to the filtering method of the intra-frame predicted value of the current intra-frame predicted block, further decoding a residual image, and restoring an original image.
26. The decoding method according to claim 25, wherein said parsing the code stream data comprises:
parsing the code stream data to obtain an intra prediction filtering syntax element, and obtaining therefrom the filtering method of the intra prediction value of the current intra prediction block.
27. The decoding method according to claim 25, wherein said restoring the intra prediction value of the current intra prediction block before filtering according to a decoding filtering method corresponding to a filtering method of the intra prediction value of the current intra prediction block comprises:
and restoring a first intra prediction value before filtering of the first intra prediction block according to a first decoding filtering method corresponding to a first filtering method of the intra prediction value of the first intra prediction block at a first moment, and restoring a second intra prediction value before filtering of the second intra prediction block according to a second decoding filtering method corresponding to a second filtering method of the intra prediction value of the second intra prediction block at a subsequent second moment.
28. An encoder, characterized in that the encoder comprises a processor for executing instructions to implement the encoding method according to any of claims 22-24.
29. A decoder, characterized in that it comprises a processor for executing instructions to implement a decoding method according to any one of claims 25-27.
30. An apparatus having a storage function, wherein the apparatus stores program data executable by a processor to implement the method according to any one of claims 1-27.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010477074 | 2020-05-29 | ||
CN202010477074X | 2020-05-29 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111787334A true CN111787334A (en) | 2020-10-16 |
CN111787334B CN111787334B (en) | 2021-09-14 |
Family
ID=72756304
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010537729.8A Active CN111787334B (en) | 2020-05-29 | 2020-06-12 | Filtering method, filter and device for intra-frame prediction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111787334B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103096057A (en) * | 2011-11-08 | 2013-05-08 | 华为技术有限公司 | Chromaticity intra-frame prediction method and device |
CN103621080A (en) * | 2011-06-28 | 2014-03-05 | 索尼公司 | Image processing device and image processing method |
CN103796029A (en) * | 2011-06-20 | 2014-05-14 | 韩国电子通信研究院 | Video encoding apparatus |
CN103813176A (en) * | 2012-11-14 | 2014-05-21 | 北京三星通信技术研究有限公司 | Deblocking filter method and adaptive loop filter method in video encoding and decoding |
CN107071417A (en) * | 2017-04-10 | 2017-08-18 | 电子科技大学 | A kind of intra-frame prediction method for Video coding |
US20180054618A1 (en) * | 2010-12-08 | 2018-02-22 | Lg Electronics Inc. | Intra prediction in image processing |
US20180063532A1 (en) * | 2011-11-04 | 2018-03-01 | Infobridge Pte. Ltd. | Apparatus of decoding video data |
CN107801024A (en) * | 2017-11-09 | 2018-03-13 | 北京大学深圳研究生院 | A kind of boundary filtering method for infra-frame prediction |
CN108353172A (en) * | 2015-09-30 | 2018-07-31 | 凯迪迪爱通信技术有限公司 | Processing unit, processing method and the computer readable storage medium of cardon |
CN109937571A (en) * | 2016-09-05 | 2019-06-25 | Lg电子株式会社 | Image coding/decoding method and its device |
2020-06-12: CN application CN202010537729.8A granted as patent CN111787334B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN111787334B (en) | 2021-09-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10743033B2 (en) | Method and device for optimizing encoding/decoding of compensation offsets for a set of reconstructed samples of an image | |
US20150043647A1 (en) | Image encoding/decoding apparatus and method to which filter selection by precise units is applied | |
KR20190052097A (en) | Image processing method and apparatus therefor | |
US20190238838A1 (en) | Devices and methods for video coding | |
CN111131837B (en) | Motion compensation correction method, encoding method, encoder, and storage medium | |
CN116980594A (en) | Intra-frame prediction method, encoder, decoder, and storage medium | |
CN109587491A (en) | A kind of intra-frame prediction method, device and storage medium | |
CN110166773B (en) | Intra-frame prediction method, video encoding method, video processing apparatus, and storage medium | |
CN113489974A (en) | Intra-frame prediction method, video/image coding and decoding method and related device | |
CN113489977B (en) | Loop filtering method, video/image coding and decoding method and related device | |
CN110213595B (en) | Intra-frame prediction based encoding method, image processing apparatus, and storage device | |
CN111787334B (en) | Filtering method, filter and device for intra-frame prediction | |
CN110166774B (en) | Intra-frame prediction method, video encoding method, video processing apparatus, and storage medium | |
CN112055224A (en) | Filtering method, encoding and decoding system and storage medium for prediction | |
WO2020000487A1 (en) | Transformation method, inverse transformation method and apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||