WO2022179504A1 - Encoding and decoding method, apparatus and device thereof - Google Patents

Encoding and decoding method, apparatus and device thereof

Info

Publication number: WO2022179504A1
Authority: WO (WIPO PCT)
Prior art keywords: value, pixel, filtering, current, adjustment
Application number: PCT/CN2022/077298
Other languages: English (en), French (fr)
Inventors: 陈方栋, 曹小强, 孙煜程
Original Assignee: 杭州海康威视数字技术股份有限公司
Application filed by 杭州海康威视数字技术股份有限公司
Priority to EP22758868.8A (EP4277267A1)
Priority to JP2023551246A (JP2024506213A)
Priority to AU2022227062A (AU2022227062B2)
Priority to KR1020237027399A (KR20230128555A)
Priority to US18/264,036 (US20240048695A1)
Publication of WO2022179504A1
Priority to ZA2023/07790A (ZA202307790B)

Classifications

    • H ELECTRICITY; H04 ELECTRIC COMMUNICATION TECHNIQUE; H04N PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/117 Filters, e.g. for pre-processing or post-processing
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/14 Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/176 Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/182 Adaptive coding characterised by the coding unit, the unit being a pixel
    • H04N19/186 Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N19/46 Embedding additional information in the video signal during the compression process
    • H04N19/70 Characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N19/82 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation, involving filtering within a prediction loop
    • H04N19/86 Using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness

Definitions

  • the present application relates to the technical field of encoding and decoding, and in particular, to an encoding and decoding method, apparatus, and device thereof.
  • a complete video encoding method may include processes such as prediction, transformation, quantization, entropy encoding, and filtering.
  • the predictive coding may include intra-frame coding and inter-frame coding.
  • inter-frame coding uses the correlation in the video temporal domain to predict the current pixel from pixels of adjacent encoded images, so as to effectively remove video temporal redundancy.
  • Intra-frame coding refers to using the correlation of the video spatial domain to predict the current pixel by using the pixels of the coded block of the current frame image, so as to achieve the purpose of removing the redundancy in the video spatial domain.
  • DBF (DeBlocking Filter): deblocking filtering.
  • SAO (Sample Adaptive Offset): sample adaptive offset compensation.
  • ALF (Adaptive Loop Filter): adaptive loop filtering.
  • the DBF technique is used to remove the block boundary effect caused by block coding.
  • the SAO technology classifies the pixel value based on the sample and the gradient value of the surrounding blocks, and adds different compensation values to the pixel value of each category, so that the reconstructed image is closer to the original image.
  • ALF technology filters the reconstructed image through the Wiener filter, so that the reconstructed image is closer to the original image.
  • However, filtering techniques such as DBF, SAO, and ALF all classify pixels based on the pixel value of the current pixel, or based on the relationship between the pixel value of the current pixel and the pixel values of surrounding pixels, and then perform different filtering operations for the different categories. This may cause a phenomenon in which the filtered pixel value is much larger or much smaller than the pixel value before filtering, and also much larger or much smaller than the original pixel value, resulting in a poor filtering effect and relatively poor coding performance.
  • the present application provides an encoding and decoding method, apparatus and device thereof, which can improve encoding performance.
  • the present application provides a method for encoding and decoding, the method comprising:
  • the gradient value of the current pixel is determined based on the original pixel value of the current pixel and the original pixel values of the surrounding pixels of the current pixel; and the adjusted pixel value of the current pixel is determined based on the gradient value of the current pixel and the original pixel value of the current pixel.
  • the application provides a decoding device, the decoding device includes:
  • a memory configured to store video data
  • a decoder configured to:
  • the gradient value of the current pixel is determined based on the original pixel value of the current pixel and the original pixel values of the surrounding pixels of the current pixel; and the adjusted pixel value of the current pixel is determined based on the gradient value of the current pixel and the original pixel value of the current pixel.
  • the application provides an encoding device, the encoding device includes:
  • a memory configured to store video data
  • an encoder configured to:
  • the gradient value of the current pixel is determined based on the original pixel value of the current pixel and the original pixel values of the surrounding pixels of the current pixel; and the adjusted pixel value of the current pixel is determined based on the gradient value of the current pixel and the original pixel value of the current pixel.
  • the present application provides a decoding end device, including: a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor;
  • the processor is configured to execute machine-executable instructions to implement the following steps:
  • the gradient value of the current pixel is determined based on the original pixel value of the current pixel and the original pixel values of the surrounding pixels of the current pixel; and the adjusted pixel value of the current pixel is determined based on the gradient value of the current pixel and the original pixel value of the current pixel.
  • the present application provides an encoding end device, including: a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor;
  • the processor is configured to execute machine-executable instructions to implement the following steps:
  • the gradient value of the current pixel is determined based on the original pixel value of the current pixel and the original pixel values of the surrounding pixels of the current pixel; and the adjusted pixel value of the current pixel is determined based on the gradient value of the current pixel and the original pixel value of the current pixel.
  • If the current pixel in the current block satisfies the enabling condition of the enhancement adjustment mode, the adjusted pixel value of the current pixel can be determined based on the gradient value of the current pixel and the original pixel value of the current pixel; that is, the original pixel value of the current pixel is adjusted based on the gradient value of the current pixel, so that the adjusted pixel value of the current pixel is closer to the original pixel, thereby improving the encoding performance.
  • FIG. 1 is a schematic diagram of an encoding and decoding framework in an embodiment of the present application.
  • FIGS. 2A and 2B are schematic diagrams of block division in an embodiment of the present application.
  • FIG. 3 is a schematic diagram of deblocking filtering in an embodiment of the present application.
  • FIG. 5A is a hardware structure diagram of a decoding end device in an embodiment of the present application.
  • FIG. 5B is a hardware structure diagram of an encoding end device in an embodiment of the present application.
  • the first information may also be referred to as the second information, and similarly, the second information may also be referred to as the first information.
  • the use of the word "if” can be interpreted as “at the time of,” or “when,” or “in response to determining,” depending on the context.
  • Referring to FIG. 1, a video coding framework can be used to implement the processing flow of the encoding end in this embodiment of the present application. The schematic diagram of the video decoding framework is similar to FIG. 1 and is not repeated here; a video decoding framework can be used to implement the processing flow of the decoding end in this embodiment of the present application.
  • As shown in FIG. 1, the video coding framework may include modules such as prediction (e.g., intra prediction and inter prediction), motion estimation/motion compensation, a reference image buffer, in-loop filtering, reconstruction, transformation, quantization, inverse transformation, inverse quantization, and an entropy encoder.
  • Loop filtering is used to reduce problems such as image blockiness or poor image quality, and can improve image quality. It can include three filters, DBF, SAO, and ALF.
  • DBF is deblocking filtering, which is used to remove the block boundary effect generated by block-based coding.
  • SAO is a sample adaptive compensation filter, which is used for classification based on the pixel value of the sample and the gradient value of the surrounding blocks. Different compensation values are added to the pixel value of each category to make the reconstructed image closer to the original image.
  • ALF is adaptive loop filtering, that is, through the Wiener filter, the reconstructed image is filtered, so that the reconstructed image is closer to the original image.
  • the prediction process may include intra-frame prediction and inter-frame prediction.
  • Intra-frame prediction takes into account the strong spatial correlation between adjacent blocks and uses the surrounding reconstructed pixels as reference pixels to predict the current uncoded block, so that only the residual value, rather than the original value, needs to be subsequently encoded; this effectively removes spatial redundancy and greatly improves compression efficiency.
  • Inter-frame prediction is to use the correlation in the temporal domain of the video to predict the pixels of the current image using the pixels of the adjacent coded image, so as to achieve the purpose of removing the temporal redundancy of the video.
  • transformation refers to converting an image described in the form of pixels in the spatial domain into an image in the transform domain, and expressing it in the form of transform coefficients. Since most images contain many flat areas and slowly changing areas, a proper transformation process can convert the scattered distribution of image energy in the spatial domain into a relatively concentrated distribution in the transform domain, so as to remove the frequency-domain correlation of the signal; combined with the quantization process, this can effectively compress the code stream.
  • entropy coding refers to a method of lossless coding according to the principle of information entropy.
  • a series of element symbols used to represent a video sequence are converted into a binary code stream for transmission or storage. The input symbols may include quantized transform coefficients, motion vector information, prediction mode information, transform and quantization related syntax, etc.
  • the output data of the entropy coding module is the final code stream after compression of the original video. Entropy coding can effectively remove the statistical redundancy of these video element symbols, and is one of the important tools to ensure the compression efficiency of video coding.
  • in-loop filtering is used to reduce problems such as image blockiness or poor image quality, and to improve image quality, and may include, but is not limited to, DBF, SAO, and ALF.
  • DBF filtering: the DBF technology can be used to perform deblocking filtering on block boundaries.
  • Deblocking filtering includes a filtering decision and a filtering operation. In the filtering decision process, the boundary strength (e.g., no filtering, weak filtering, or strong filtering) and the filtering parameters are determined. In the filtering operation, the pixels are modified according to the boundary strength and the filtering parameters; for example, when filtering the boundary, strong filtering or weak filtering may be applied, and taps of different lengths are used for filtering.
  • SAO filtering is used to eliminate the ringing effect. The ringing effect is caused by the quantization distortion of high-frequency AC coefficients, which generates ripples around edges after decoding. The larger the transform block size, the more obvious the ringing effect.
  • the basic principle of SAO is to compensate the peak pixels in the reconstruction curve by adding negative values, and add positive values to the valley pixels to compensate.
  • SAO takes the CTU (Coding Tree Unit) as the basic unit and can include two types of compensation: edge offset (Edge Offset, EO) and band offset (Band Offset, BO). In addition, it also introduces a parameter fusion technique.
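  • As an illustration of the edge-offset idea described above, the following is a minimal sketch (not the standard-defined SAO classification): a sample that is higher than both of its neighbours along a direction is treated as a peak and receives a negative compensation value, and a sample that is lower than both neighbours is treated as a valley and receives a positive one; the offset values and the neighbour direction here are hypothetical.

```c
/* Minimal sketch of the SAO edge-offset idea: peaks are pulled down, valleys
 * are pulled up.  The offsets are hypothetical; in a real codec they are
 * derived by the encoder and signalled in the bitstream. */
static int sao_edge_offset_sketch(int left, int cur, int right)
{
    const int peak_offset   = -2;  /* hypothetical compensation values */
    const int valley_offset =  2;

    if (cur > left && cur > right)   /* local peak   */
        return cur + peak_offset;
    if (cur < left && cur < right)   /* local valley */
        return cur + valley_offset;
    return cur;                      /* neither: left unchanged in this sketch */
}
```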
  • ALF filtering: the optimal filter in the mean-square sense, that is, the Wiener filter, can be calculated according to the original signal and the distorted signal.
  • the filters of ALF may include, but are not limited to: a 7*7 diamond filter or a 5*5 diamond filter, a 7*7 cross plus 3*3 square centrosymmetric filter, or a 7*7 cross plus 5*5 square centrosymmetric filter.
  • Intra-frame prediction: the correlation in the video spatial domain is used, and the coded blocks of the current frame are used to predict the current block, so as to remove redundancy in the video spatial domain.
  • Intra prediction specifies multiple prediction modes, and each prediction mode corresponds to a texture direction (except DC mode). For example, if the image texture is arranged horizontally, the horizontal prediction mode can better predict image information.
  • Inter-frame prediction: based on the correlation in the temporal domain of the video, since the video sequence contains strong temporal correlation, using pixels of adjacent coded images to predict the pixels of the current image can effectively remove the temporal redundancy of the video.
  • Block-based motion compensation is used in the inter-frame prediction part of the video coding standard. The main principle is to find a best matching block in a previously encoded image for each pixel block of the current image; this process is called motion estimation (Motion Estimation, ME).
  • Motion vector: in inter prediction, a motion vector can be used to represent the relative displacement between the current block of the current frame image and the reference block of the reference frame image. Each divided block has a corresponding motion vector that is sent to the decoding end. If the motion vector of each block is independently encoded and transmitted, especially for a large number of small-sized blocks, many bits are consumed. In order to reduce the number of bits used to encode the motion vector, the spatial correlation between adjacent blocks can be used to predict the motion vector of the current block to be encoded from the motion vector of an adjacent encoded block, and then only the prediction difference is encoded, which can effectively reduce the number of bits representing the motion vector. When encoding the motion vector of the current block, the motion vector of an adjacent coded block is first used to predict the motion vector of the current block, and then the difference (MVD, Motion Vector Difference) between the motion vector prediction value (MVP, Motion Vector Prediction) and the true estimate of the motion vector is encoded.
  • Motion information: since the motion vector represents the position offset between the current block and a certain reference block, in order to accurately obtain the block pointed to, in addition to the motion vector, index information of the reference frame image is also required to indicate which reference frame image the current block uses.
  • a reference frame image list can usually be established for the current frame image, and the reference frame image index information indicates which reference frame image in the reference frame image list is used by the current block.
  • an index value can also be used to indicate which reference picture list is used, and this index value can be called a reference direction.
  • motion-related information such as a motion vector, a reference frame index, and a reference direction may be collectively referred to as motion information.
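  • To make the MVP/MVD relationship above concrete, the following is a small sketch under the stated assumption that the encoder transmits the difference between the motion vector and its prediction; the structure and function names are illustrative only.

```c
/* Sketch of motion-vector difference coding: the encoder sends
 * MVD = MV - MVP, the decoder reconstructs MV = MVP + MVD.
 * How the MVP is derived from neighbouring coded blocks is codec-specific
 * and outside this sketch. */
typedef struct { int x, y; } MotionVector;

static MotionVector mvd_from(MotionVector mv, MotionVector mvp)
{
    MotionVector mvd = { mv.x - mvp.x, mv.y - mvp.y };
    return mvd;                      /* transmitted in the bitstream */
}

static MotionVector mv_reconstruct(MotionVector mvp, MotionVector mvd)
{
    MotionVector mv = { mvp.x + mvd.x, mvp.y + mvd.y };
    return mv;                       /* recovered at the decoding end */
}

/* Example: MV = (13, -4), MVP from an adjacent coded block = (12, -4),
 * so only MVD = (1, 0) needs to be entropy coded. */
```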
  • Flag bit coding: in video coding, there are many modes, and for a block, one of these modes may be used. In order to indicate which mode is adopted, each block needs to be marked by encoding the corresponding flag bit. For the encoding end, the value of the flag bit is determined through the decision of the encoding end, and then the value of the flag bit is encoded and transmitted to the decoding end. For the decoding end, whether the corresponding mode is enabled is determined by parsing the value of the flag bit.
  • the coding of the flag bit can be realized through the high-level syntax, and the high-level syntax can be used to indicate whether a certain mode is allowed to be enabled, that is, a certain mode is allowed to be enabled through the high-level syntax, or a certain mode is prohibited to be enabled.
  • the high-level syntax may be a high-level syntax at a sequence parameter set level, or a high-level syntax at a picture parameter set level, or a high-level syntax at a slice header level, or a high-level syntax at the image header level, which is not limited.
  • SPS (sequence parameter set): in the SPS-level high-level syntax, there is a flag that determines whether certain mode (tool/method) switches are allowed in the entire video sequence (i.e., multiple frames of video images). For example, if the flag has value A (such as 1), the video sequence is allowed to enable the mode corresponding to the flag; if the flag has value B (such as 0), the video sequence is not allowed to enable the mode corresponding to the flag.
  • PPS (picture parameter set): in the PPS-level high-level syntax, there is a flag bit that determines whether certain mode (tool/method) switches are allowed in a certain picture (such as a video image). If the flag bit is the value A, the video image is allowed to enable the mode corresponding to the flag bit; if the flag bit is the value B, the video image is not allowed to enable the mode corresponding to the flag bit.
  • Picture header: for the high-level syntax of the picture header, there is a flag bit for whether certain mode (tool/method) switches are allowed for the picture corresponding to the picture header. If the flag is the value A, the picture header allows the mode corresponding to the flag to be enabled; if the flag is the value B, the picture header does not allow the mode corresponding to the flag to be enabled.
  • the image header stores common information only for the current image. For example, when the image includes multiple slices, the multiple slices can share the information in the image header.
  • Slice header: for the high-level syntax of the slice header, there is a flag for whether certain mode (tool/method) switches are allowed in a slice. If the flag is the value A, the slice is allowed to enable the mode corresponding to the flag; if the flag is the value B, the slice is not allowed to enable the mode corresponding to the flag.
  • one frame of image may include one slice or multiple slices, and the high-level syntax for the slice header (Slice header) is the high-level syntax configured for each slice.
  • High-level syntax: it is used to indicate whether certain tools (methods) are allowed to be enabled, that is, certain tools (methods) are allowed or forbidden to be enabled through the high-level syntax.
  • The high-level syntax may be the high-level syntax at the sequence parameter set level, the high-level syntax at the picture parameter set level, the high-level syntax at the slice header level, or the high-level syntax at the picture header level, which is not limited as long as the above functions can be achieved.
  • Rate-Distortion Optimization (RDO): there are two major indicators for evaluating coding efficiency: bit rate and PSNR (Peak Signal to Noise Ratio). The smaller the bit stream, the greater the compression rate; the greater the PSNR, the better the reconstructed image quality. In mode selection, both are evaluated jointly through a cost of the form J(mode) = D + λ * R, where D is the distortion, usually measured as the error between the reconstructed block and the source block, λ is the Lagrange multiplier, and R is the actual number of bits required for coding the image block in this mode, including the sum of the bits required for coding mode information, motion information, and residuals.
  • mode selection if the RDO principle is used to make comparison decisions on encoding modes, the best encoding performance can usually be guaranteed.
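  • A minimal sketch of the rate-distortion decision described above, assuming the distortion D and the bit count R of each candidate mode have already been measured elsewhere; the data structure is illustrative only.

```c
/* Rate-distortion optimised mode selection: the candidate with the smallest
 * cost J(mode) = D + lambda * R is chosen, where D is the distortion, R the
 * number of bits, and lambda the Lagrange multiplier. */
typedef struct {
    int    mode;   /* candidate mode index                              */
    double D;      /* distortion of the reconstruction under this mode  */
    double R;      /* bits for mode information, motion and residuals   */
} RdCandidate;

static int rdo_select(const RdCandidate *cand, int num, double lambda)
{
    int best = 0;
    double best_cost = cand[0].D + lambda * cand[0].R;
    for (int i = 1; i < num; i++) {
        double cost = cand[i].D + lambda * cand[i].R;
        if (cost < best_cost) {
            best_cost = cost;
            best = i;
        }
    }
    return cand[best].mode;   /* mode with the minimum RD cost */
}
```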
  • a coding tree unit (Coding Tree Unit, CTU for short) is recursively divided into CUs (Coding Unit, coding unit) using a quadtree. Whether to use intra-frame coding or inter-frame coding is determined at the leaf node CU level.
  • a CU can be divided into two or four prediction units (Prediction Units, PUs for short), and the same prediction information is used in the same PU. After the residual information is obtained after the prediction is completed, a CU can be divided into four transform units (Transform Units, TU for short). For example, the current image block in this application is a PU.
  • CUs can be either square or rectangular partitions.
  • The CTU is first divided by a quad-tree, and then the leaf nodes of the quad-tree division can be further divided by a binary tree or a ternary tree.
  • As shown in FIG. 2A, there are five division types of CUs, namely quad-tree division, horizontal binary tree division, vertical binary tree division, horizontal ternary tree division, and vertical ternary tree division.
  • CU division within a CTU can be an arbitrary combination of the above five division types. It can be seen that different division methods make the shapes of the PUs different, such as rectangles and squares of different sizes; a sketch of the child block sizes produced by each division type is given below.
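  • The sketch below lists the child block sizes produced by each of the five division types; the 1:2:1 ratio assumed for the ternary tree divisions and the omission of size legality checks are assumptions of this sketch, not taken from the text above.

```c
/* Child block sizes for the five CU division types mentioned above.
 * The ternary splits are assumed to use a 1:2:1 ratio; minimum-size and
 * ratio restrictions of a real codec are omitted. */
typedef enum {
    SPLIT_QT,       /* quad-tree division       */
    SPLIT_BT_HOR,   /* horizontal binary tree   */
    SPLIT_BT_VER,   /* vertical binary tree     */
    SPLIT_TT_HOR,   /* horizontal ternary tree  */
    SPLIT_TT_VER    /* vertical ternary tree    */
} SplitType;

/* Fills out_w/out_h with the child sizes and returns the child count. */
static int child_sizes(SplitType s, int w, int h, int out_w[4], int out_h[4])
{
    switch (s) {
    case SPLIT_QT:
        for (int i = 0; i < 4; i++) { out_w[i] = w / 2; out_h[i] = h / 2; }
        return 4;
    case SPLIT_BT_HOR:
        out_w[0] = out_w[1] = w;     out_h[0] = out_h[1] = h / 2;  return 2;
    case SPLIT_BT_VER:
        out_w[0] = out_w[1] = w / 2; out_h[0] = out_h[1] = h;      return 2;
    case SPLIT_TT_HOR:   /* assumed 1:2:1 ratio */
        out_w[0] = out_w[1] = out_w[2] = w;
        out_h[0] = h / 4;  out_h[1] = h / 2;  out_h[2] = h / 4;    return 3;
    case SPLIT_TT_VER:   /* assumed 1:2:1 ratio */
        out_h[0] = out_h[1] = out_h[2] = h;
        out_w[0] = w / 4;  out_w[1] = w / 2;  out_w[2] = w / 4;    return 3;
    }
    return 0;
}
```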
  • the DBF filtering process includes two processes: filtering decision and filtering operation.
  • the filtering decision includes: 1) obtaining boundary strength (BS value); 2) filtering switching decision; 3) filtering strength selection.
  • For the chroma component, there is only step 1), and the BS values of the luma component are directly reused.
  • The filtering operation is performed only when the BS value is 2 (that is, at least one of the blocks on both sides of the current boundary adopts the intra (intra-frame) mode).
  • the filtering operations include: 1) strong and weak filtering for luma components; 2) filtering for chroma classification.
  • The DBF filtering process generally performs horizontal boundary filtering (also referred to as horizontal DBF filtering) and vertical boundary filtering (also referred to as vertical DBF filtering) in units of 8*8. At most 3 pixels on each side of the boundary are filtered, and at most 4 pixels on each side of the boundary are used for filtering; therefore, the horizontal DBF filtering and vertical DBF filtering of different blocks do not affect each other, that is, horizontal DBF filtering and vertical DBF filtering can be performed in parallel.
  • For the current block, vertical DBF filtering can be performed first on the 3 columns of pixels on the left side of the current block and the 3 columns of pixels on the right side of the left block (i.e., the left-side block of the current block), and then horizontal DBF filtering can be performed on the 3 rows of pixels on the upper side of the current block and the 3 rows of pixels on the lower side of the upper block (i.e., the upper-side block of the current block).
  • vertical DBF filtering is usually performed first, and then horizontal DBF filtering is performed.
  • horizontal DBF filtering may also be performed first, and then vertical DBF filtering may be performed.
  • In the following description, the case where vertical DBF filtering is performed first and then horizontal DBF filtering is performed is taken as an example.
  • the processing flow of DBF filtering may include the following steps:
  • In step S11, the edge condition values in the horizontal direction and the vertical direction are calculated in units of 4*4, respectively.
  • If certain conditions are met, the edge condition value is 2 (used to indicate that both the luminance component and the chrominance component are filtered); if other conditions are met, the edge condition value is 1 (used to indicate that the luminance component is filtered but the chrominance component is not). For cases other than the above two, the edge condition value is 0.
  • In step S12, the vertical filtering of all blocks is completed in units of 4*4 (the filtering processing is in units of 8*8, but information such as edge condition values is stored in units of 4*4).
  • If the edge condition value is not 0, the following filtering process is performed:
  • Luminance component filtering: vertical filtering processes 4 rows at the vertical boundary, and horizontal filtering processes 4 columns at the horizontal boundary. The following filtering applies when the filter type (df_type) of the current frame is not type 1.
  • Regarding the filtering coefficients and the number of filtered pixels: for example, suppose the 4 pixels on the left (or upper) side of the boundary are L0-L3 (as shown in FIG. 3, the left side is taken as an example), and the 4 pixels on the right (or lower) side of the boundary are R0-R3 (as shown in FIG. 3, the right side is taken as an example). Then, for the luminance component:
  • The filter coefficients are [3, 8, 10, 8, 3]/32; that is, to determine the filtered pixel value of pixel L0, the pixel values of pixels L2, L1, L0, R0 and R1 are weighted and summed, and the weighting coefficients (i.e., filter coefficients) are 3/32, 8/32, 10/32, 8/32 and 3/32 in turn.
  • Similarly, to determine the filtered pixel value of pixel R0, the pixel values of pixels R2, R1, R0, L0 and L1 are weighted and summed, where the weighting coefficients (filter coefficients) wj are 3/32, 8/32, 10/32, 8/32 and 3/32 in turn.
  • L0' = clip((L2*3 + L1*8 + L0*10 + R0*8 + R1*3 + 16) >> 5)
  • where L0' is the filtered pixel value of pixel L0, L0-L2 are the pixel values of pixels L0-L2 before filtering, and R0-R1 are the pixel values of pixels R0-R1 before filtering; the same applies below.
  • R0' = clip((R2*3 + R1*8 + R0*10 + L0*8 + L1*3 + 16) >> 5)
  • ">>" is a right shift operation, which is used to replace division; that is, ">>5" is equivalent to dividing by 2^5 (i.e., 32).
  • Multiplication can be replaced by a left shift.
  • For example, a multiplication of a by 4 can be replaced by a left shift by 2 bits, that is, by a << 2; a multiplication of a by 10 can be replaced by (a << 3) + (a << 1).
  • "<<" is a left shift operation, used instead of multiplication; that is, "a << 2" is equivalent to multiplying a by 2^2 (i.e., 4).
  • In the right shift operation, the result is usually rounded down directly, that is, when the result is a non-integer between N and N+1, the result is N. Considering that the result should be rounded up to N+1 when the fractional part is greater than 0.5, half of the divisor can be added to the numerator of the above weighted sum for rounding. Since a right shift of 5 bits is equivalent to dividing by 2^5 (i.e., 32), 16 is added to the numerator of the above weighted sum.
  • clip(x) is a clipping operation: when x exceeds the upper limit of the preset value range, the value of x is set to the upper limit of the preset value range; when x is lower than the lower limit of the preset value range, the value of x is set to the lower limit of the preset value range.
  • the filter coefficient is [1, 4, 6, 4, 1]/16
  • L0' = clip((L2*1 + L1*4 + L0*6 + R0*4 + R1*1 + 8) >> 4)
  • R0' = clip((R2*1 + R1*4 + R0*6 + L0*4 + L1*1 + 8) >> 4)
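  • The weighted-sum-and-shift structure of the above formulas can be sketched as follows; the clip bounds assume 8-bit samples, which is an assumption of this sketch rather than something specified above.

```c
/* Sketch of the 5-tap boundary filtering formulas above.  filter5() computes
 * clip((w0*p0 + ... + w4*p4 + half) >> shift): the weighted sum with the
 * rounding term (half of the divisor) added, the right shift replacing the
 * division, and the result clipped to the valid sample range. */
static int clip_pixel(int x)
{
    if (x < 0)   return 0;
    if (x > 255) return 255;   /* assumes 8-bit samples */
    return x;
}

static int filter5(const int tap[5], const int w[5], int shift)
{
    int sum = 0;
    for (int i = 0; i < 5; i++)
        sum += tap[i] * w[i];
    sum += 1 << (shift - 1);          /* rounding: add half of the divisor  */
    return clip_pixel(sum >> shift);  /* ">>" replaces the division         */
}

/* L0' = clip((L2*3 + L1*8 + L0*10 + R0*8 + R1*3 + 16) >> 5) */
static int luma_L0_filtered(int L2, int L1, int L0, int R0, int R1)
{
    const int w[5]   = { 3, 8, 10, 8, 3 };
    const int tap[5] = { L2, L1, L0, R0, R1 };
    return filter5(tap, w, 5);        /* >>5, i.e. divide by 32 */
}

/* For the [1, 4, 6, 4, 1]/16 coefficients, the same helper is called with
 * w = {1, 4, 6, 4, 1} and shift = 4 (divide by 16). */
```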
  • chroma filtering is performed for the boundary of the 16*16 block, that is, the filtering of the chrominance component is performed for the boundary of the 16*16 block.
  • the filtering process of the chrominance component is as follows:
  • Subtract 1 from the obtained BS value (e.g., 4, 3, 2, 1, 0, etc.) for the chrominance component; that is, the optional BS values become 3, 2, 1, or 0.
  • the filtering process of the chrominance component is performed based on the BS value, and the specific process is as follows:
  • Alpha and Beta in the above process are related to the mean QP of the blocks on both sides of the boundary, i.e., the mean QP of the current block and the left block of the current block (for vertical DBF filtering) or of the current block and the upper block of the current block (for horizontal DBF filtering); the values of Alpha and Beta can be obtained by looking up a table, which is not limited here.
  • In step S13, the horizontal filtering of all blocks is completed in units of 4*4 (the filtering processing is in units of 8*8, and information such as edge condition values is stored in units of 4*4).
  • The implementation is similar to that of step S12 and is not repeated here.
  • However, filtering technologies such as DBF, SAO and ALF all classify pixels based on the pixel value of the current pixel, or based on the relationship between the pixel value of the current pixel and the pixel values of surrounding pixels, and then perform different filtering operations for the different categories. This may cause a phenomenon in which the filtered pixel value is much larger or much smaller than the pre-filtered pixel value, and also much larger or much smaller than the original pixel value, resulting in a poor filtering effect and relatively poor coding performance.
  • In view of this, this embodiment proposes an encoding and decoding method, which can adjust the original pixel value of the current pixel based on the gradient value of the current pixel, so that the adjusted pixel value of the current pixel is closer to the original pixel, thereby improving the encoding performance.
  • For example, in the filtering process, if the current pixel in the current block satisfies the enabling condition of the enhancement adjustment mode, adjusting the original pixel value of the current pixel based on the gradient value of the current pixel can improve the filtering effect and coding performance.
  • Embodiment 1: An encoding and decoding method is proposed in this embodiment of the present application, and the method can be applied to the encoding end or the decoding end.
  • Referring to FIG. 4, which is a schematic flowchart of the encoding and decoding method, the method may include:
  • Step 401 If the current pixel in the current block satisfies the enabling condition of the enhancement adjustment mode, determine the gradient value of the current pixel based on the original pixel value of the current pixel and the original pixel values of the surrounding pixels of the current pixel.
  • For example, the gradient value of the current pixel can be determined based on the difference between the original pixel value of the current pixel and the original pixel values of the surrounding pixels; that is, the gradient value of the current pixel reflects the difference between the two pixel values.
  • the surrounding pixels of the current pixel may be adjacent pixels of the current pixel, or may be non-adjacent pixels of the current pixel.
  • the surrounding pixels of the current pixel may be pixels located in the current block, or may be pixels located in adjacent blocks of the current block.
  • the surrounding pixels of the current pixel can be the pixels to the left of the current pixel, the pixels to the right of the current pixel, the pixels above the current pixel, or the pixels below the current pixel. There is no restriction on the position of the surrounding pixels of the current pixel.
  • For example, the surrounding pixel of the current pixel may be L0 in the adjacent block to the left of the current block; if the current pixel is the pixel in the first row and second column of the current block, the surrounding pixel of the current pixel may be the pixel in the eighth row and second column of the adjacent block above the current block.
  • Step 402 Determine the adjusted pixel value of the current pixel based on the gradient value of the current pixel and the original pixel value of the current pixel.
  • For example, the adjusted pixel value of the current pixel may be determined based on the gradient value of the current pixel, the original pixel value of the current pixel, the first adjustment threshold, the second adjustment threshold, the first adjustment offset value and the second adjustment offset value.
  • For example, if the gradient value of the current pixel is greater than the first adjustment threshold, the adjusted pixel value of the current pixel is determined based on the original pixel value of the current pixel and the first adjustment offset value, for example, based on the sum of the original pixel value of the current pixel and the first adjustment offset value. If the gradient value of the current pixel is less than the second adjustment threshold, the adjusted pixel value of the current pixel is determined based on the original pixel value of the current pixel and the second adjustment offset value, for example, based on the sum of the original pixel value of the current pixel and the second adjustment offset value.
  • the first adjustment threshold and the second adjustment threshold may be opposite numbers to each other. Of course, the first adjustment threshold and the second adjustment threshold may not be opposite numbers to each other, and the first adjustment threshold and the second adjustment threshold may be arbitrarily set.
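  • The following is a minimal sketch of steps 401 and 402, assuming the gradient is computed from one surrounding pixel and that the thresholds and offset values have been obtained elsewhere (for example, parsed from the high-level syntax); the behaviour when neither threshold condition holds is not specified above and is left as "no adjustment" in this sketch.

```c
/* Sketch of the enhancement adjustment mode (steps 401/402): the gradient of
 * the current pixel is the difference between its original value and the
 * original value of a surrounding pixel; the original value is then adjusted
 * with the first or second adjustment offset depending on the thresholds. */
static int enhancement_adjust(int cur_orig,       /* original value of the current pixel   */
                              int surround_orig,  /* original value of a surrounding pixel */
                              int t1, int t2,     /* first / second adjustment threshold   */
                              int off1, int off2) /* first / second adjustment offset      */
{
    int gradient = cur_orig - surround_orig;   /* step 401 */

    if (gradient > t1)                         /* step 402 */
        return cur_orig + off1;
    if (gradient < t2)                         /* e.g. t2 = -t1 when they are opposites */
        return cur_orig + off2;
    return cur_orig;                           /* otherwise unadjusted in this sketch */
}
```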
  • The reference pixel corresponding to the current pixel may also be determined from the adjacent blocks of the current block; the gradient value of the reference pixel is determined based on the original pixel value of the reference pixel and the original pixel values of the surrounding pixels of the reference pixel, and the adjusted pixel value of the reference pixel is determined based on the gradient value of the reference pixel and the original pixel value of the reference pixel.
  • the reference pixel point may be a pixel point in an adjacent block that is adjacent to the current pixel point, or may be a pixel point in the adjacent block that is not adjacent to the current pixel point, which is not limited.
  • For example, the reference pixel may be L0 in the adjacent block to the left of the current block, or L1, L2, etc. in the adjacent block to the left of the current block, which is not limited.
  • Alternatively, the reference pixel may be the pixel in the eighth row and second column of the adjacent block above the current block, or the pixel in the seventh row and second column of that adjacent block, which is not limited.
  • For example, the gradient value of the reference pixel can be determined based on the difference between the original pixel value of the reference pixel and the original pixel values of the surrounding pixels of the reference pixel; that is to say, the gradient value reflects the difference between the two pixel values.
  • the surrounding pixels of the reference pixel may be adjacent pixels of the reference pixel, or may be non-adjacent pixels of the reference pixel.
  • the surrounding pixels of the reference pixel may be pixels located in the block where the reference pixel is located, or may be pixels located in adjacent blocks of the block where the reference pixel is located.
  • the surrounding pixels of the reference pixel can be the pixels on the left side of the reference pixel, the pixels on the right side of the reference pixel, the pixels on the upper side of the reference pixel, or the pixels on the lower side of the reference pixel.
  • the position of the surrounding pixels of this reference pixel is not limited.
  • For example, the surrounding pixel of the reference pixel may be the current pixel in the current block; similarly, the surrounding pixel of the current pixel may be the reference pixel in the adjacent block of the current block.
  • For example, determining the adjusted pixel value of the reference pixel based on the gradient value of the reference pixel and the original pixel value of the reference pixel may include, but is not limited to: determining the adjusted pixel value of the reference pixel based on the gradient value of the reference pixel, the original pixel value of the reference pixel, the third adjustment threshold (which may be the same as or different from the first adjustment threshold), the fourth adjustment threshold (which may be the same as or different from the second adjustment threshold), the third adjustment offset value (which may be the same as or different from the first adjustment offset value) and the fourth adjustment offset value (which may be the same as or different from the third adjustment offset value).
  • For example, if the gradient value of the reference pixel is greater than the third adjustment threshold, the adjusted pixel value of the reference pixel is determined based on the original pixel value of the reference pixel and the third adjustment offset value, for example, based on the sum of the original pixel value of the reference pixel and the third adjustment offset value. If the gradient value of the reference pixel is less than the fourth adjustment threshold, the adjusted pixel value of the reference pixel is determined based on the original pixel value of the reference pixel and the fourth adjustment offset value, for example, based on the sum of the original pixel value of the reference pixel and the fourth adjustment offset value.
  • the third adjustment threshold and the fourth adjustment threshold may be opposite numbers to each other.
  • the third adjustment threshold and the fourth adjustment threshold may not be opposite numbers to each other, and the third adjustment threshold and the fourth adjustment threshold may be arbitrarily set.
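  • Under the same assumptions, the reference pixel in the adjacent block can be adjusted by reusing the sketch above with its own parameters, for example:

```c
/* Usage sketch: adjust the reference pixel with the third/fourth thresholds
 * and offsets, reusing enhancement_adjust() from the sketch above.
 * ref_orig and ref_surround_orig are the original values of the reference
 * pixel and of one of its surrounding pixels. */
int ref_adjusted = enhancement_adjust(ref_orig, ref_surround_orig,
                                      t3, t4,      /* third / fourth adjustment threshold */
                                      off3, off4); /* third / fourth adjustment offset    */
```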
  • For example, the first adjustment threshold, the second adjustment threshold, the first adjustment offset value, the second adjustment offset value, the third adjustment threshold, the fourth adjustment threshold, the third adjustment offset value and the fourth adjustment offset value corresponding to the current block may be obtained.
  • The first adjustment threshold value, the first adjustment offset value, the second adjustment offset value, the third adjustment threshold value, the third adjustment offset value and the fourth adjustment offset value corresponding to the current block can be parsed from the high-level syntax.
  • The second adjustment threshold value, the first adjustment offset value, the second adjustment offset value, the third adjustment threshold value, the third adjustment offset value and the fourth adjustment offset value corresponding to the current block can be parsed from the high-level syntax.
  • The first adjustment threshold value, the first adjustment offset value, the second adjustment offset value, the fourth adjustment threshold value, the third adjustment offset value and the fourth adjustment offset value corresponding to the current block can be parsed from the high-level syntax.
  • The second adjustment threshold value, the first adjustment offset value, the second adjustment offset value, the fourth adjustment threshold value, the third adjustment offset value and the fourth adjustment offset value corresponding to the current block can be parsed from the high-level syntax.
  • For example, if the first adjustment threshold and the second adjustment threshold are opposite numbers to each other, then after parsing the first adjustment threshold from the high-level syntax, the second adjustment threshold can be deduced, and after parsing the second adjustment threshold from the high-level syntax, the first adjustment threshold can be deduced. Similarly, if the third adjustment threshold and the fourth adjustment threshold are opposite numbers to each other, then after parsing the third adjustment threshold from the high-level syntax, the fourth adjustment threshold can be deduced, and after parsing the fourth adjustment threshold from the high-level syntax, the third adjustment threshold can be deduced.
  • In a possible implementation, that the current pixel in the current block satisfies the enabling condition of the enhancement adjustment mode may include, but is not limited to: if the boundary strength of the boundary to be filtered corresponding to the current pixel in the current block satisfies the enabling condition of the enhancement adjustment mode, it is determined that the current pixel satisfies the enabling condition of the enhancement adjustment mode. For example, if the boundary strength of the boundary to be filtered corresponding to the current pixel is a preset first value, it can be determined that the boundary strength of the boundary to be filtered satisfies the enabling condition of the enhancement adjustment mode. Exemplarily, the preset first value may be 0; of course, the preset first value may also be another value.
  • If the feature information corresponding to the current block satisfies the enabling condition of the enhancement adjustment mode, it is determined that the current pixel in the current block satisfies the enabling condition of the enhancement adjustment mode.
  • That the feature information corresponding to the current block satisfies the enabling condition of the enhancement adjustment mode means that, if it is determined based on the feature information corresponding to the current block that a filtering operation (such as a deblocking filtering operation) is not to be started for the current block, then the feature information corresponding to the current block is determined to satisfy the enabling condition of the enhancement adjustment mode.
  • In a possible implementation, the enhancement adjustment mode enable flag bit corresponding to the current block can be obtained first. If the enhancement adjustment mode enable flag bit corresponding to the current block allows the current block to enable the enhancement adjustment mode, it is then determined whether the current pixel in the current block satisfies the enabling condition of the enhancement adjustment mode, that is, whether the current pixel meets or does not meet the enabling condition of the enhancement adjustment mode. If the enhancement adjustment mode enable flag bit corresponding to the current block does not allow the current block to enable the enhancement adjustment mode, it is directly determined that the current pixel in the current block does not meet the enabling condition of the enhancement adjustment mode.
  • the enhanced adjustment mode enable flag corresponding to the current block may be parsed from the high-level syntax, and then whether to allow the current block to enable the enhanced adjustment mode is determined based on the enhanced adjustment mode enable flag.
  • For example, if the enhancement adjustment mode enable flag is a first value (such as 1), the enhancement adjustment mode enable flag allows the current block to enable the enhancement adjustment mode; if the enhancement adjustment mode enable flag is a second value (such as 0), the enhancement adjustment mode enable flag does not allow the current block to enable the enhancement adjustment mode. A sketch of this gating check is given below.
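  • The check described above can be sketched as follows; the preset first value of 0 is the example given in the text, and the flag semantics (1 allows, 0 does not allow) follow the first and second values mentioned above.

```c
/* Sketch of the enabling check for the enhancement adjustment mode: the
 * enable flag parsed from the high-level syntax must allow the mode for the
 * current block, and the boundary strength (BS) of the boundary to be
 * filtered must equal the preset first value (0 in the example above). */
static int enhancement_adjust_enabled(int enable_flag,  /* 1: allowed, 0: not allowed */
                                      int bs)           /* boundary strength          */
{
    const int preset_first_value = 0;   /* exemplary value from the text */

    if (enable_flag != 1)
        return 0;   /* flag does not allow the mode: condition not satisfied */
    return bs == preset_first_value;
}
```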
  • If the encoding and decoding method is applied to the prediction process, the original pixel value of the current pixel may be a prediction value obtained by intra-frame prediction or inter-frame prediction, and the adjusted pixel value of the current pixel is taken as the target pixel value of the current pixel (the final pixel value of the prediction process).
  • If the encoding and decoding method is applied to the filtering process, the original pixel value of the current pixel can be the predicted value before filtering, and the adjusted pixel value of the current pixel is the target pixel value of the current pixel (the final pixel value of the filtering process).
  • If the current pixel in the current block satisfies the enabling condition of the enhancement adjustment mode, the adjusted pixel value of the current pixel can be determined based on the gradient value of the current pixel and the original pixel value of the current pixel; that is, the original pixel value of the current pixel is adjusted based on the gradient value of the current pixel, so that the adjusted pixel value of the current pixel is closer to the original pixel, thereby improving the encoding performance.
  • The original pixel value of the current pixel may also be subjected to deblocking filtering (i.e., DBF filtering) to obtain the filtered pixel value of the current pixel.
  • deblocking filtering is just an example, and other filtering methods can also be used to filter the original pixel value of the current pixel, for example, performing SAO filtering on the original pixel value of the current pixel to obtain the filtered pixel value of the current pixel. Or, perform ALF filtering on the original pixel value of the current pixel to obtain the filtered pixel value of the current pixel.
  • Steps S11 to S13 show the process of performing deblocking filtering on the original pixel value of the current pixel to obtain the filtered pixel value of the current pixel, which is not repeated here.
  • It can be seen from steps S11 to S13 that the filtering process is skipped only if the blocks on both sides of the boundary are non-intra-mode blocks, have no residuals, and have the same motion; otherwise, the filtering process needs to be performed.
  • the value of BS can also be obtained. If BS is equal to 0, no filtering is performed, that is, the pixels on both sides of the boundary are not filtered. If BS is greater than 0, the pixels on both sides of the boundary are filtered.
  • Whether the current pixel in the current block meets the enabling condition of the normal filtering mode can be determined as follows: if the condition that the blocks on both sides of the boundary are non-intra-mode blocks, have no residual, and have consistent motion is not established (that is, the blocks on both sides of the boundary are not both non-intra-mode blocks, or the blocks on both sides of the boundary have residuals, or the motion of the blocks on both sides of the boundary is inconsistent), and BS is equal to 0, then the current pixel in the current block does not meet the enabling condition of the normal filtering mode. If that condition is not established and BS is greater than 0, the current pixel in the current block satisfies the enabling condition of the normal filtering mode.
  • If the current pixel in the current block satisfies the enabling condition of the normal filtering mode, the current pixel in the current block does not satisfy the enabling condition of the enhancement adjustment mode; and if the current pixel in the current block satisfies the enabling condition of the enhancement adjustment mode, the current pixel in the current block does not meet the enabling condition of the normal filtering mode.
  • After the current pixel satisfies the enabling condition of the normal filtering mode and the original pixel value of the current pixel has been subjected to deblocking filtering to obtain the filtered pixel value of the current pixel, it is also necessary to determine whether the current pixel in the current block satisfies the enabling condition of the enhanced filtering mode. If the current pixel in the current block satisfies the enabling condition of the enhanced filtering mode, the adjusted pixel value of the current pixel is determined based on the filtered pixel value of the current pixel and the original pixel value of the current pixel, and the adjusted pixel value of the current pixel is used as the target pixel value of the current pixel (the final pixel value of the deblocking filtering process). If the current pixel in the current block does not meet the enabling condition of the enhanced filtering mode, the filtered pixel value of the current pixel is not adjusted, and the filtered pixel value of the current pixel is used as the target pixel value of the current pixel (the final pixel value of the deblocking filtering process).
  • The adjusted pixel value of the current pixel is determined based on the filtered pixel value of the current pixel and the original pixel value of the current pixel, which may include but is not limited to: determining the adjusted pixel value of the current pixel based on the filtered pixel value of the current pixel, the original pixel value of the current pixel, the first filtering threshold, the second filtering threshold, the first filtering offset value and the second filtering offset value.
  • For example, the first filtering threshold and the second filtering threshold may be opposite numbers to each other. Of course, the first filtering threshold and the second filtering threshold may also not be opposite numbers to each other, and the first filtering threshold and the second filtering threshold may be arbitrarily set.
  • In addition, the reference pixel corresponding to the current pixel can also be determined from the adjacent block of the current block, and the original pixel value of the reference pixel is subjected to deblocking filtering (i.e., DBF filtering) to obtain the filtered pixel value of the reference pixel.
  • The deblocking filtering is just an example, and other filtering methods can also be used to filter the original pixel value of the reference pixel to obtain the filtered pixel value of the reference pixel.
  • Steps S11 to S13 show the process of performing deblocking filtering on the original pixel value of the reference pixel to obtain the filtered pixel value of the reference pixel, which will not be repeated here.
  • the reference pixel point may be a pixel point adjacent to the current pixel point in an adjacent block, and the reference pixel point may also be a pixel point not adjacent to the current pixel point in the adjacent block, which is not limited.
  • After the current pixel satisfies the enabling condition of the normal filtering mode and the original pixel value of the reference pixel has been subjected to deblocking filtering to obtain the filtered pixel value of the reference pixel, it is also necessary to determine whether the current pixel in the current block satisfies the enabling condition of the enhanced filtering mode. If the current pixel in the current block satisfies the enabling condition of the enhanced filtering mode, the adjusted pixel value of the reference pixel is determined based on the filtered pixel value of the reference pixel and the original pixel value of the reference pixel, that is, the adjusted pixel value of the reference pixel is used as the target pixel value of the reference pixel (the final pixel value of the deblocking filtering process). If the current pixel in the current block does not meet the enabling condition of the enhanced filtering mode, the filtered pixel value of the reference pixel is not adjusted, and the filtered pixel value of the reference pixel is used as the target pixel value of the reference pixel (the final pixel value of the deblocking filtering process).
  • The adjusted pixel value of the reference pixel is determined based on the filtered pixel value of the reference pixel and the original pixel value of the reference pixel, which may include but is not limited to: determining the adjusted pixel value of the reference pixel based on the filtered pixel value of the reference pixel, the original pixel value of the reference pixel, the third filtering threshold, the fourth filtering threshold, the third filtering offset value and the fourth filtering offset value. The third filtering threshold and the fourth filtering threshold may be opposite numbers to each other; of course, they may also not be opposite numbers to each other, and the third filtering threshold and the fourth filtering threshold may be arbitrarily set.
  • For example, the first filtering threshold, the first filtering offset value, the second filtering offset value, the third filtering threshold, the third filtering offset value and the fourth filtering offset value corresponding to the current block can be parsed from the high-level syntax.
  • Alternatively, the second filtering threshold, the first filtering offset value, the second filtering offset value, the third filtering threshold, the third filtering offset value and the fourth filtering offset value corresponding to the current block can be parsed from the high-level syntax.
  • Alternatively, the first filtering threshold, the first filtering offset value, the second filtering offset value, the fourth filtering threshold, the third filtering offset value and the fourth filtering offset value corresponding to the current block can be parsed from the high-level syntax.
  • Alternatively, the second filtering threshold, the first filtering offset value, the second filtering offset value, the fourth filtering threshold, the third filtering offset value and the fourth filtering offset value corresponding to the current block can be parsed from the high-level syntax.
  • If the first filtering threshold and the second filtering threshold are opposite numbers to each other, after parsing the first filtering threshold from the high-level syntax, the second filtering threshold can be derived, and after parsing the second filtering threshold from the high-level syntax, the first filtering threshold can be derived. If the third filtering threshold and the fourth filtering threshold are opposite numbers to each other, after parsing the third filtering threshold from the high-level syntax, the fourth filtering threshold can be derived, and after parsing the fourth filtering threshold from the high-level syntax, the third filtering threshold can be derived.
  • Determining that the current pixel in the current block satisfies the enabling condition of the enhanced filtering mode may include, but is not limited to: if the boundary strength of the boundary to be filtered corresponding to the current pixel in the current block satisfies the enabling condition of the enhanced filtering mode, and the absolute value of the difference between the filtered pixel value of the current pixel and the original pixel value of the current pixel is greater than the preset threshold (the preset threshold is a positive value, and this preset threshold is not limited; for example, if the first filtering threshold and the second filtering threshold are opposite numbers to each other, then when the first filtering threshold is positive, the preset threshold is the same as the first filtering threshold, and when the second filtering threshold is positive, the preset threshold is the same as the second filtering threshold; of course, the preset threshold may also be other values), then it is determined that the current pixel satisfies the enabling condition of the enhanced filtering mode.
  • the boundary strength of the boundary to be filtered corresponding to the current pixel in the current block satisfies the enabling condition of the enhanced filtering mode, which may include, but is not limited to: if the boundary strength of the boundary to be filtered corresponding to the current pixel is a preset second value (different from the preset first value, that is, not 0, for example, the preset second value may be greater than 0), it is determined that the boundary strength of the boundary to be filtered satisfies the enabling condition of the enhanced filtering mode.
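  • As a non-normative sketch of the two conditions above (boundary strength equal to the preset second value, and the absolute difference between the filtered value and the original value exceeding the preset threshold), assuming integer pixel values; the parameter names are illustrative:

      /* Enhanced filtering mode enabling check (illustrative only). */
      static int enhanced_filter_enabled(int bs, int preset_second_value,
                                         int filtered_val, int original_val,
                                         int preset_threshold)
      {
          int diff = filtered_val - original_val;
          if (diff < 0)
              diff = -diff;                        /* absolute difference         */
          return (bs == preset_second_value)       /* boundary strength condition */
              && (diff > preset_threshold);        /* magnitude condition         */
      }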
  • For example, the enhanced filtering mode enable flag corresponding to the current block can be obtained first. If the enhanced filtering mode enable flag corresponding to the current block allows the enhanced filtering mode to be enabled for the current block, it is determined whether the current pixel in the current block satisfies the enabling condition of the enhanced filtering mode, that is, it is determined that the current pixel meets the enabling condition of the enhanced filtering mode, or does not meet the enabling condition of the enhanced filtering mode.
  • the enhancement filter mode enable flag corresponding to the current block does not allow the enhancement filter mode to be enabled for the current block, it is directly determined that the current pixel in the current block does not satisfy the enablement condition of the enhancement filter mode.
  • the enhancement filtering mode enable flag corresponding to the current block may be parsed from the high-level syntax, and then whether to allow the current block to enable the enhancement filtering mode is determined based on the enhancement filtering mode enable flag.
  • For example, if the enhanced filtering mode enable flag is the first value (such as 1), it means that the enhanced filtering mode enable flag allows the current block to enable the enhanced filtering mode; if the enhanced filtering mode enable flag is the second value (such as 0), it means that the enhanced filtering mode enable flag does not allow the current block to enable the enhanced filtering mode.
  • The high-level syntax may include, but is not limited to, one of the following syntaxes: sequence parameter set (SPS)-level high-level syntax; picture parameter set (PPS)-level high-level syntax; picture header-level high-level syntax; frame-level high-level syntax; slice header-level high-level syntax; coding tree unit (CTU)-level high-level syntax; coding unit (CU)-level high-level syntax.
  • the pixel value of the current pixel in the current block may be a luminance component or a chrominance component.
  • the adjusted pixel value of the current pixel can be determined based on the filtered pixel value of the current pixel and the original pixel value of the current pixel, That is, the original pixel value of the current pixel is adjusted based on the filtered pixel value of the current pixel, so that the adjusted pixel value of the current pixel is closer to the original pixel, thereby improving the encoding performance.
  • In the filtering process (such as DBF, SAO, ALF, etc.), if the current pixel in the current block satisfies the enabling condition of the enhanced filtering mode, adjusting the original pixel value of the current pixel based on the filtered pixel value of the current pixel can improve the filtering effect and improve the coding performance.
  • Embodiment 2: When filtering is required, it is necessary to first determine whether to skip the filtering process. For example, if the blocks on both sides of the boundary (that is, the current block and the adjacent block of the current block; for a vertical boundary, the adjacent block is the adjacent block on the left of the current block, and for a horizontal boundary, it is the adjacent block on the upper side of the current block) are non-intra-mode blocks (that is, neither the current block nor the adjacent block is an intra block), have no residual (that is, neither the current block nor the adjacent block has a residual), and have consistent motion (that is, the motion of the current block and the adjacent block is consistent), the filtering process is skipped; otherwise, the filtering process is not skipped.
  • The condition of "skipping the filtering process" can be used as the enabling condition of the enhancement adjustment mode, that is, if the filtering process is skipped for the current pixel in the current block, the current pixel in the current block satisfies the enabling condition of the enhancement adjustment mode.
  • the enhancement adjustment mode can be used to adjust the original pixel value of the current pixel, so that the pixel value is closer to the original pixel.
  • the feature information corresponding to the current block satisfies the enabling condition of the enhancement adjustment mode, it is determined that the current pixel in the current block satisfies the enabling condition of the enhancement adjustment mode.
  • the feature information corresponding to the current block is used to indicate whether the blocks on both sides of the boundary are non-intra mode blocks, whether the blocks on both sides of the boundary have no residual, and whether the blocks on both sides of the boundary have the same motion.
  • the feature information corresponding to the current block is used to indicate that the blocks on both sides of the border are non-intra mode blocks, and used to indicate that the blocks on both sides of the border have no residual, and is used to indicate that the blocks on both sides of the border have the same motion, It means that the feature information corresponding to the current block satisfies the enabling condition of the enhanced adjustment mode, and it is determined that the current pixel in the current block satisfies the enabling condition of the enhanced adjustment mode, that is, each pixel in the current block satisfies the enabling condition of the enhanced adjustment mode.
  • If the feature information corresponding to the current block is used to indicate that the blocks on both sides of the boundary are not both non-intra-mode blocks, and/or the feature information corresponding to the current block is used to indicate that the blocks on both sides of the boundary have residuals, and/or the feature information corresponding to the current block is used to indicate that the motion of the blocks on both sides of the boundary is inconsistent, it means that the feature information corresponding to the current block does not meet the enabling condition of the enhancement adjustment mode, and it is determined that the current pixel in the current block does not meet the enabling condition of the enhancement adjustment mode, that is, each pixel in the current block does not meet the enabling condition of the enhancement adjustment mode.
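  • Reusing the BlockInfo structure and skip_condition() helper from the earlier illustrative sketch, the feature-information check above could be expressed as follows (illustrative only):

      /* The enhancement adjustment mode is enabled for the current pixel exactly
       * when the blocks on both sides of the boundary are non-intra blocks, have
       * no residual and have identical motion, i.e. the filtering skip condition. */
      static int enhanced_adjust_enabled(const BlockInfo *cur, const BlockInfo *adj)
      {
          return skip_condition(cur, adj);
      }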
  • the enhanced adjustment mode can be used to adjust the original pixel value of the current pixel.
  • For example, the gradient value of the current pixel can be determined based on the original pixel value of the current pixel, and the adjusted pixel value of the current pixel can be determined based on the gradient value of the current pixel and the original pixel value of the current pixel.
  • For the process of determining the adjusted pixel value, reference may be made to the subsequent embodiments, which will not be repeated here.
  • Embodiment 3: When filtering is required, it is necessary to first determine whether to skip the filtering process. For example, if the blocks on both sides of the boundary are non-intra-mode blocks, have no residual, and have consistent motion, the filtering process is skipped; otherwise, the filtering process is not skipped.
  • the BS value can also be determined. If the BS value is greater than 0 (for example, the BS value is 1, 2, 3, 4, etc.), the pixels on both sides of the boundary can be filtered. If the BS value is 0, no filtering is performed, that is, the pixels on both sides of the boundary are not filtered.
  • the "BS value is 0" can be used as the enabling condition of the enhanced adjustment mode, that is, if the BS value of the current pixel in the current block is 0, the current pixel in the current block satisfies the enabling condition of the enhanced adjustment mode ; If the BS value of the current pixel in the current block is greater than 0, the current pixel in the current block does not meet the enabling condition of the enhancement adjustment mode.
  • If the boundary strength of the boundary to be filtered corresponding to the current pixel in the current block satisfies the enabling condition of the enhancement adjustment mode, it is determined that the current pixel satisfies the enabling condition of the enhancement adjustment mode.
  • the boundary intensity of the boundary to be filtered corresponding to the current pixel point may be determined first, and if the boundary intensity is a preset first value, it is determined that the boundary intensity satisfies the enabling condition of the enhancement adjustment mode.
  • the preset first value can be configured according to experience, for example, the preset first value is 0.
  • If the boundary strength of the boundary to be filtered corresponding to the current pixel is 0, it means that the boundary strength of the boundary to be filtered corresponding to the current pixel satisfies the enabling condition of the enhancement adjustment mode, and it is determined that the current pixel satisfies the enabling condition of the enhancement adjustment mode.
  • If the boundary strength of the boundary to be filtered corresponding to the current pixel in the current block does not satisfy the enabling condition of the enhancement adjustment mode, it is determined that the current pixel does not satisfy the enabling condition of the enhancement adjustment mode. For example, if the boundary strength of the boundary to be filtered corresponding to the current pixel is not the preset first value, it is determined that the boundary strength does not meet the enabling condition of the enhancement adjustment mode, so that it can be determined that the current pixel in the current block does not meet the enabling condition of the enhancement adjustment mode.
  • The enhancement adjustment mode may be used to adjust the original pixel value of the current pixel, so that the pixel value is closer to the original pixel. For example, the gradient value of the current pixel can first be determined based on the original pixel value of the current pixel, and then the adjusted pixel value of the current pixel can be determined based on the gradient value of the current pixel and the original pixel value of the current pixel. For the determination process of the adjusted pixel value, refer to the subsequent embodiments.
  • Embodiment 4: When filtering is required, it is necessary to first determine whether to skip the filtering process. For example, if the blocks on both sides of the boundary are non-intra-mode blocks, have no residual, and have consistent motion, the filtering process is skipped; otherwise, the filtering process is not skipped. When the filtering process is not skipped, the BS value can also be determined. If the BS value is greater than 0 (e.g., the BS value is 1, 2, 3, 4, etc.), the pixels on both sides of the boundary can be filtered. If the BS value is 0, no filtering is performed, that is, the pixels on both sides of the boundary are not filtered.
  • the "BS value is greater than 0" can be used as the enabling condition of the common filtering mode, that is, if the BS value of the current pixel in the current block is greater than 0, the current pixel in the current block satisfies the enabling condition of the common filtering mode ; If the BS value of the current pixel in the current block is equal to 0, then the current pixel in the current block does not meet the enabling condition of the normal filtering mode.
  • the boundary strength of the boundary to be filtered corresponding to the current pixel in the current block satisfies the enabling condition of the common filtering mode, it is determined that the current pixel meets the enabling condition of the common filtering mode.
  • the boundary intensity of the boundary to be filtered corresponding to the current pixel point may be determined first, and if the boundary intensity is a preset second value, it is determined that the boundary intensity satisfies the enabling condition of the normal filtering mode.
  • the preset second value may be configured according to experience, for example, the preset second value may be greater than 0, for example, the preset second value may be 1, 2, 3, 4, and so on.
  • For example, if the boundary strength of the boundary to be filtered corresponding to the current pixel is greater than 0 (that is, the boundary strength is not 0), it means that the boundary strength of the boundary to be filtered corresponding to the current pixel satisfies the enabling condition of the normal filtering mode, and it is determined that the current pixel meets the enabling condition of the normal filtering mode.
  • If the boundary strength of the boundary to be filtered corresponding to the current pixel in the current block does not satisfy the enabling condition of the normal filtering mode, it is determined that the current pixel does not satisfy the enabling condition of the normal filtering mode. For example, if the boundary strength of the boundary to be filtered corresponding to the current pixel (such as 0) is not the preset second value, it is determined that the boundary strength does not meet the enabling condition of the normal filtering mode, so that it can be determined that the current pixel in the current block does not meet the enabling condition of the normal filtering mode.
  • The original pixel value of the current pixel can also be subjected to deblocking filtering (i.e., DBF filtering; deblocking filtering is taken as an example herein) to obtain the filtered pixel value of the current pixel.
  • Embodiment 5: On the basis that the current pixel in the current block satisfies the enabling condition of the normal filtering mode and the original pixel value of the current pixel has been subjected to deblocking filtering to obtain the filtered pixel value of the current pixel, it can also be determined whether the current pixel in the current block satisfies the enabling condition of the enhanced filtering mode.
  • If the boundary strength of the boundary to be filtered corresponding to the current pixel in the current block does not satisfy the enabling condition of the enhanced filtering mode, or the absolute value of the difference between the filtered pixel value of the current pixel and the original pixel value of the current pixel is not greater than the preset threshold, it is determined that the current pixel in the current block does not meet the enabling condition of the enhanced filtering mode.
  • the current pixel in the current block satisfies the enabling condition of the enhancement filtering mode, which may include: if the boundary strength of the boundary to be filtered corresponding to the current pixel in the current block satisfies the enabling condition of the enhancement filtering mode, and the current pixel If the absolute value of the difference between the filtered pixel value of the point and the original pixel value of the current pixel point is greater than the preset threshold, it is determined that the current pixel point satisfies the enabling condition of the enhanced filtering mode.
  • the BS value can also be determined. If the BS value is greater than 0, the pixels on both sides of the boundary can be filtered. Based on this, "BS value greater than 0" can be used as the enabling condition for the enhanced filtering mode, that is, "BS value greater than 0" can be used as the enabling condition for both the normal filtering mode and the enhanced filtering mode. When the BS value is greater than 0, it is necessary to perform deblocking filtering on the original pixel value of the current pixel to obtain the filtered pixel value of the current pixel.
  • the boundary strength of the boundary to be filtered corresponding to the current pixel in the current block satisfies the enabling condition of the enhanced filtering mode, and the difference between the filtered pixel value of the current pixel and the original pixel value of the current pixel is If the absolute value of is greater than the preset threshold, it is determined that the current pixel meets the enabling condition of the enhancement filtering mode. Otherwise, it is determined that the current pixel does not meet the enabling condition of the enhancement filtering mode.
  • the boundary strength of the boundary to be filtered corresponding to the current pixel point is first determined, and if the boundary strength is a preset second value, it is determined that the boundary strength satisfies the enabling condition of the enhanced filtering mode.
  • the preset second value may be configured according to experience, for example, the preset second value may be greater than 0, for example, the preset second value may be 1, 2, 3, 4, and so on.
  • If the current pixel in the current block satisfies the enabling condition of the enhanced filtering mode, the adjusted pixel value of the current pixel can be determined based on the filtered pixel value of the current pixel and the original pixel value of the current pixel, that is, the adjusted pixel value of the current pixel is used as the target pixel value of the current pixel (the final pixel value of the deblocking filtering process). If the current pixel in the current block does not meet the enabling condition of the enhanced filtering mode, the filtered pixel value of the current pixel is not adjusted, and the filtered pixel value of the current pixel is used as the target pixel value of the current pixel (the final pixel value of the deblocking filtering process).
  • It can be seen from Embodiments 1, 2, 3, 4, and 5 that this document involves the enhancement adjustment mode, the normal filtering mode and the enhanced filtering mode.
  • In these modes, the original pixel value of the current pixel is processed to obtain the target pixel value of the current pixel (i.e., the final pixel value). For example, if the current pixel meets the enabling condition of the enhancement adjustment mode, in the enhancement adjustment mode, the original pixel value of the current pixel can be adjusted based on the gradient value of the current pixel to obtain the adjusted pixel value of the current pixel, and the adjusted pixel value is used as the target pixel value.
  • If the current pixel meets the enabling condition of the normal filtering mode, in the normal filtering mode, the original pixel value of the current pixel can be filtered to obtain the filtered pixel value of the current pixel, and the filtered pixel value is used as the target pixel value.
  • If the current pixel meets the enabling condition of the enhanced filtering mode, in the enhanced filtering mode, the original pixel value of the current pixel can be filtered to obtain the filtered pixel value of the current pixel, the original pixel value of the current pixel is adjusted based on the filtered pixel value of the current pixel to obtain the adjusted pixel value of the current pixel, and the adjusted pixel value is used as the target pixel value.
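  • The three cases above can be summarized by the following illustrative dispatch; the candidate values are assumed to have been computed beforehand, and the function merely selects among them (it is only called when one of the three modes applies):

      /* Select the target pixel value according to which sub-mode applies
       * (illustrative only; all inputs are assumed to be precomputed). */
      static int target_pixel_value(int filtered_val,            /* value after filtering               */
                                    int adjusted_from_gradient,  /* value from the gradient-based rule  */
                                    int adjusted_from_filter,    /* refined (adjusted) filtered value   */
                                    int enhanced_adjust_on,      /* enhancement adjustment mode enabled */
                                    int enhanced_filter_on)      /* enhanced filtering mode enabled     */
      {
          if (enhanced_adjust_on)
              return adjusted_from_gradient;   /* enhancement adjustment mode */
          if (enhanced_filter_on)
              return adjusted_from_filter;     /* enhanced filtering mode     */
          return filtered_val;                 /* normal filtering mode       */
      }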
  • the enhancement adjustment mode, the normal filtering mode or the enhancement filtering mode may be used to process the original pixel value of the current pixel, that is, the enhancement adjustment mode , the normal filtering mode and the enhancement filtering mode all belong to the deblocking filtering mode, that is, the enhancement adjustment mode, the normal filtering mode and the enhancement filtering mode may be sub-modes of the deblocking filtering mode.
  • That is, in the deblocking filtering process, the enhancement adjustment mode is used to process the original pixel value of the current pixel, or the normal filtering mode is used to process the original pixel value of the current pixel, or the enhanced filtering mode is used to process the original pixel value of the current pixel.
  • Of course, the enhancement adjustment mode, the normal filtering mode and the enhanced filtering mode may also belong to other types of filtering modes, such as the SAO filtering mode or the ALF filtering mode; that is, the enhancement adjustment mode, the normal filtering mode and the enhanced filtering mode may be sub-modes under the SAO filtering mode, or, alternatively, sub-modes under the ALF filtering mode.
  • In that case, in the SAO filtering process or the ALF filtering process, the enhancement adjustment mode is used to process the original pixel value of the current pixel, or the normal filtering mode is used to process the original pixel value of the current pixel, or the enhanced filtering mode is used to process the original pixel value of the current pixel.
  • The normal filtering mode can be called the normal mode of the deblocking filtering mode, that is, after deblocking filtering is performed on the original pixel value of the current pixel to obtain the filtered pixel value, the filtered pixel value after deblocking filtering is no longer adjusted.
  • The enhanced filtering mode can be called deblocking refinement (abbreviated as DBR), that is, after deblocking filtering is performed on the original pixel value of the current pixel to obtain the filtered pixel value, the filtered pixel value after deblocking filtering also needs to be adjusted.
  • The enhancement adjustment mode can be called an optional deblocking filter adjustment mode (alt deblocking refinement, abbreviated as ADBR), that is, without performing deblocking filtering on the original pixel value of the current pixel, the original pixel value of the current pixel is directly adjusted.
  • Embodiment 6 For Embodiment 1, Embodiment 2 and Embodiment 3, the enhancement adjustment mode can be used to adjust the original pixel value of the current pixel point. When adjusting the original pixel value, the following steps can be used:
  • Step S21 Determine the gradient value of the current pixel based on the original pixel value of the current pixel and the original pixel values of the surrounding pixels of the current pixel.
  • the gradient value of the current pixel point may be determined based on the difference between the original pixel value of the current pixel point and the original pixel value of the surrounding pixel points, and the determination method is not limited.
  • Step S22: From the adjacent block of the current block (for a vertical boundary, the adjacent block is the left adjacent block of the current block, and for a horizontal boundary, the adjacent block is the upper adjacent block of the current block), the reference pixel corresponding to the current pixel is determined, and the gradient value of the reference pixel is determined based on the original pixel value of the reference pixel and the original pixel values of the surrounding pixels of the reference pixel.
  • the gradient value of the reference pixel may be determined based on the difference between the original pixel value of the reference pixel and the original pixel values of the surrounding pixels of the reference pixel, and the determination method is not limited.
  • For example, the gradient value of the current pixel is determined based on the original pixel value of the current pixel and the original pixel value of a surrounding pixel of the current pixel (for example, the surrounding pixel is the reference pixel), and the gradient value of the reference pixel is determined based on the original pixel value of a surrounding pixel of the reference pixel (for example, the surrounding pixel is the current pixel) and the original pixel value of the reference pixel.
  • the above is just an example of determining the gradient value of the current pixel point and the gradient value of the reference pixel point, which is not limited.
  • the gradient value of the current pixel point may be determined based on the difference between the original pixel value of the current pixel point and the original pixel value of the reference pixel point.
  • the gradient value of the reference pixel point may be determined based on the difference between the original pixel value of the reference pixel point and the original pixel value of the current pixel point.
  • Step S23: Determine the adjusted pixel value of the current pixel based on the gradient value of the current pixel and the original pixel value of the current pixel. For example, if the gradient value of the current pixel is greater than the first adjustment threshold, the adjusted pixel value of the current pixel is determined based on the original pixel value of the current pixel and the first adjustment offset value (also referred to as the first adjustment offset). If the gradient value of the current pixel is smaller than the second adjustment threshold, the adjusted pixel value of the current pixel is determined based on the original pixel value of the current pixel and the second adjustment offset value.
  • the first adjustment threshold and the second adjustment threshold may be opposite numbers to each other.
  • Step S24: Determine the adjusted pixel value of the reference pixel based on the gradient value of the reference pixel and the original pixel value of the reference pixel. For example, if the gradient value of the reference pixel is greater than the third adjustment threshold, the adjusted pixel value of the reference pixel is determined based on the original pixel value of the reference pixel and the third adjustment offset value (also referred to as the third adjustment offset). If the gradient value of the reference pixel is smaller than the fourth adjustment threshold, the adjusted pixel value of the reference pixel is determined based on the original pixel value of the reference pixel and the fourth adjustment offset value.
  • the third adjustment threshold and the fourth adjustment threshold may be opposite numbers to each other.
  • In the above example, the third adjustment threshold is the same as the first adjustment threshold (in practical applications, the third adjustment threshold and the first adjustment threshold can also be different), and the fourth adjustment threshold is the same as the second adjustment threshold (in practical applications, the fourth adjustment threshold and the second adjustment threshold can also be different). Similarly, the third adjustment offset value may be the same as the first adjustment offset value, and the fourth adjustment offset value may be the same as the second adjustment offset value; of course, the fourth adjustment offset value and the second adjustment offset value may also be different.
  • In the above process, pi represents the original pixel value of the current pixel, DPi represents the gradient value of the current pixel, Pi represents the adjusted pixel value of the current pixel, qi represents the original pixel value of the reference pixel, DQi represents the gradient value of the reference pixel, and Qi represents the adjusted pixel value of the reference pixel.
  • clip(x) means to limit x to the range [0, 2^(bit_depth)-1] (including 0 and 2^(bit_depth)-1), where bit_depth means the bit depth of the image, generally 8, 10, 12, etc.
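  • A minimal sketch of the above adjustment, assuming the gradient is taken as the difference between the original values of the current pixel and the reference pixel and that the adjusted value is the clipped sum of the original value and the selected offset (the exact formulas of the specification may differ):

      /* Clip x to [0, 2^bit_depth - 1], as described above. */
      static int clip_pixel(int x, int bit_depth)
      {
          int max_val = (1 << bit_depth) - 1;
          if (x < 0) return 0;
          if (x > max_val) return max_val;
          return x;
      }

      /* Illustrative enhancement adjustment of the current pixel (pi) and the
       * reference pixel (qi); t1/t2 and t3/t4 are the adjustment thresholds,
       * off1/off2 and off3/off4 the adjustment offset values. */
      static void enhanced_adjust(int pi, int qi, int bit_depth,
                                  int t1, int t2, int off1, int off2,
                                  int t3, int t4, int off3, int off4,
                                  int *Pi, int *Qi)
      {
          int DPi = pi - qi;   /* gradient value of the current pixel (assumption)   */
          int DQi = qi - pi;   /* gradient value of the reference pixel (assumption) */

          *Pi = pi;            /* default: keep the original value */
          if (DPi > t1)        *Pi = clip_pixel(pi + off1, bit_depth);
          else if (DPi < t2)   *Pi = clip_pixel(pi + off2, bit_depth);

          *Qi = qi;
          if (DQi > t3)        *Qi = clip_pixel(qi + off3, bit_depth);
          else if (DQi < t4)   *Qi = clip_pixel(qi + off4, bit_depth);
      }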
  • In a possible implementation, the first adjustment threshold, the first adjustment offset value, the second adjustment offset value, the third adjustment threshold, the third adjustment offset value and the fourth adjustment offset value corresponding to the current block can be parsed from the high-level syntax. Since the first adjustment threshold and the second adjustment threshold are opposite numbers to each other, and the third adjustment threshold and the fourth adjustment threshold are opposite numbers to each other, the decoding end can determine the second adjustment threshold and the fourth adjustment threshold.
  • Alternatively, the first adjustment threshold, the first adjustment offset value, the second adjustment offset value, the fourth adjustment threshold, the third adjustment offset value and the fourth adjustment offset value corresponding to the current block can be parsed from the high-level syntax. Since the first adjustment threshold and the second adjustment threshold are opposite numbers to each other, and the third adjustment threshold and the fourth adjustment threshold are opposite numbers to each other, the decoding end can determine the second adjustment threshold and the third adjustment threshold.
  • Alternatively, the second adjustment threshold, the first adjustment offset value, the second adjustment offset value, the third adjustment threshold, the third adjustment offset value and the fourth adjustment offset value corresponding to the current block can be parsed from the high-level syntax. Since the first adjustment threshold and the second adjustment threshold are opposite numbers to each other, and the third adjustment threshold and the fourth adjustment threshold are opposite numbers to each other, the decoding end can determine the first adjustment threshold and the fourth adjustment threshold.
  • Alternatively, the second adjustment threshold, the first adjustment offset value, the second adjustment offset value, the fourth adjustment threshold, the third adjustment offset value and the fourth adjustment offset value corresponding to the current block can be parsed from the high-level syntax. Since the first adjustment threshold and the second adjustment threshold are opposite numbers to each other, and the third adjustment threshold and the fourth adjustment threshold are opposite numbers to each other, the decoding end can determine the first adjustment threshold and the third adjustment threshold.
  • In another possible implementation, the first adjustment threshold (or the second adjustment threshold, or the third adjustment threshold, or the fourth adjustment threshold; that is, the other three adjustment thresholds can be derived from one adjustment threshold), the first adjustment offset value (or the third adjustment offset value) and the second adjustment offset value (or the fourth adjustment offset value) corresponding to the current block can be parsed from the high-level syntax.
  • For example, after the first adjustment threshold is parsed, since the first adjustment threshold and the second adjustment threshold are opposite numbers to each other, the second adjustment threshold can be determined. Since the first adjustment threshold is the same as the third adjustment threshold, the third adjustment threshold can be determined. Since the third adjustment offset value is the same as the first adjustment offset value, the third adjustment offset value can be determined. Since the fourth adjustment offset value is the same as the second adjustment offset value, the fourth adjustment offset value can be determined. Since the third adjustment threshold and the fourth adjustment threshold are opposite numbers to each other, the fourth adjustment threshold can be determined.
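  • For instance, the derivation described in this example can be sketched as follows (the parameter and field names are illustrative):

      /* Derive all adjustment parameters when only the first adjustment threshold and
       * the first/second adjustment offset values are parsed from the high-level
       * syntax, under the relationships described in this example. */
      typedef struct {
          int t1, t2, t3, t4;          /* adjustment thresholds    */
          int off1, off2, off3, off4;  /* adjustment offset values */
      } AdjustParams;

      static AdjustParams derive_adjust_params(int parsed_t1, int parsed_off1, int parsed_off2)
      {
          AdjustParams p;
          p.t1   = parsed_t1;
          p.t2   = -parsed_t1;     /* second threshold is the opposite number of the first */
          p.t3   = parsed_t1;      /* third threshold equals the first in this example     */
          p.t4   = -p.t3;          /* fourth threshold is the opposite number of the third */
          p.off1 = parsed_off1;
          p.off2 = parsed_off2;
          p.off3 = parsed_off1;    /* third offset equals the first in this example        */
          p.off4 = parsed_off2;    /* fourth offset equals the second in this example      */
          return p;
      }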
  • The above methods are just a few examples, which are not limited, as long as the decoding end can know the first adjustment threshold, the second adjustment threshold, the third adjustment threshold, the fourth adjustment threshold, the first adjustment offset value, the second adjustment offset value, the third adjustment offset value and the fourth adjustment offset value, that is, as long as the above values can be obtained by parsing or derivation.
  • the high-level syntax may include, but is not limited to, one of the following syntaxes: SPS-level high-level syntax; PPS-level high-level syntax; picture header-level high-level syntax; frame-level high-level syntax; slice header-level high-level syntax; CTU-level high-level syntax ; CU-level high-level syntax.
  • The type of the high-level syntax is not limited, as long as the high-level syntax can carry the adjustment thresholds and adjustment offset values corresponding to the current block.
  • the pixel value of the current pixel in the current block may be a luminance component or a chrominance component.
  • Whether the enhancement adjustment mode is allowed to be enabled may be indicated by the enhancement adjustment mode enable flag. If the enhancement adjustment mode enable flag allows the current block to enable the enhancement adjustment mode, it is necessary to determine whether the current pixel in the current block meets the enabling condition of the enhancement adjustment mode; if the current pixel meets the enabling condition of the enhancement adjustment mode, the enhancement adjustment mode is used to adjust the original pixel value of the current pixel. If the enhancement adjustment mode enable flag does not allow the current block to enable the enhancement adjustment mode, it is directly determined that each pixel in the current block does not meet the enabling condition of the enhancement adjustment mode, and the enhancement adjustment mode will not be used to adjust the original pixel value of the current pixel.
  • the enhanced adjustment mode enable flag corresponding to the current block allows the current block to enable the enhanced adjustment mode, it is determined whether the current pixel in the current block satisfies the enabling condition of the enhanced adjustment mode. If the enhancement adjustment mode enable flag corresponding to the current block does not allow the enhancement adjustment mode to be enabled for the current block, it is determined that each pixel in the current block does not satisfy the enabling condition of the enhancement adjustment mode.
  • The enhancement adjustment mode enable flag corresponding to the current block may be parsed from the high-level syntax. For example, if the enhancement adjustment mode enable flag is the first value (such as 1), it means that the enhancement adjustment mode enable flag allows the current block to enable the enhancement adjustment mode; if the enhancement adjustment mode enable flag is the second value (such as 0), it means that the enhancement adjustment mode enable flag does not allow the current block to enable the enhancement adjustment mode.
  • the high-level syntax may include, but is not limited to, one of the following syntaxes: SPS-level high-level syntax; PPS-level high-level syntax; picture header-level high-level syntax; frame-level high-level syntax; slice header-level high-level syntax; CTU-level high-level syntax ; CU-level high-level syntax.
  • The type of the high-level syntax is not limited, as long as the enhancement adjustment mode enable flag corresponding to the current block can be carried through the high-level syntax.
  • Embodiment 7 For Embodiment 1 and Embodiment 5, the enhancement filtering mode can be used to adjust the original pixel value of the current pixel point, and the following steps can be used when adjusting the original pixel value of the current pixel point:
  • Step S31 Perform deblocking filtering on the original pixel value of the current pixel to obtain the filtered pixel value of the current pixel.
  • Step S32: From the adjacent block of the current block (for a vertical boundary, the adjacent block is the left adjacent block of the current block, and for a horizontal boundary, the adjacent block is the upper adjacent block of the current block), determine the reference pixel corresponding to the current pixel, and perform deblocking filtering on the original pixel value of the reference pixel to obtain the filtered pixel value of the reference pixel.
  • For example, the DBF filtering method (i.e., deblocking filtering) can be used to perform deblocking filtering on the original pixel value of the current pixel to obtain the filtered pixel value of the current pixel, and the DBF filtering method can be used to perform deblocking filtering on the original pixel value of the reference pixel to obtain the filtered pixel value of the reference pixel.
  • Of course, SAO filtering can also be used to filter the original pixel value of the current pixel to obtain the filtered pixel value of the current pixel, and SAO filtering can be used to filter the original pixel value of the reference pixel to obtain the filtered pixel value of the reference pixel.
  • Alternatively, ALF filtering can be used to filter the original pixel value of the current pixel to obtain the filtered pixel value of the current pixel, and ALF filtering can be used to filter the original pixel value of the reference pixel to obtain the filtered pixel value of the reference pixel.
  • the DBF filtering method is used to perform deblocking filtering on the original pixel values of the current pixel point and the reference pixel point as an example.
  • For example, only horizontal DBF filtering can be performed on the current pixel and the reference pixel, or only vertical DBF filtering can be performed on the current pixel and the reference pixel; alternatively, vertical DBF filtering can be performed on the current pixel and the reference pixel first, and then horizontal DBF filtering can be performed on the current pixel and the reference pixel.
  • Step S33 Determine the adjusted pixel value of the current pixel based on the filtered pixel value of the current pixel and the original pixel value of the current pixel. For example, based on the filtered pixel value of the current pixel, the original pixel value of the current pixel, the first filtering threshold, the second filtering threshold, the first filtering offset and the second filtering offset, determine the adjustment of the current pixel pixel value; wherein, the first filtering threshold and the second filtering threshold may be opposite numbers to each other.
  • Step S34 Determine the adjusted pixel value of the reference pixel based on the filtered pixel value of the reference pixel and the original pixel value of the reference pixel. For example, based on the filtered pixel value of the reference pixel, the original pixel value of the reference pixel, the third filtering threshold, the fourth filtering threshold, the third filtering offset and the fourth filtering offset, the adjustment of the reference pixel is determined. pixel value; wherein, the third filtering threshold and the fourth filtering threshold may be opposite numbers to each other.
  • If the current pixel satisfies the enabling condition of the normal filtering mode but not the enabling condition of the enhanced filtering mode, step S31 and step S32 are performed, and the filtered pixel value is used as the target pixel value (the final pixel value of the deblocking filtering process). If the current pixel satisfies the enabling condition of the normal filtering mode and the enabling condition of the enhanced filtering mode, steps S31 to S34 are performed, and the adjusted pixel value is used as the target pixel value (the final pixel value of the deblocking filtering process).
  • In step S33 and step S34, based on the filtered pixel value and the original pixel value that has not undergone filtering processing, the original pixel value of the pixel can be subjected to enhancement filtering processing to obtain the adjusted pixel value after the enhancement processing, so that the adjusted pixel value after the enhancement processing is closer to the real pixel than the filtered pixel value. This avoids the case where the filtered pixel value obtained by filtering is much larger or much smaller than the real pixel value, thereby improving image quality.
  • In step S33, if the difference between the filtered pixel value of the current pixel and the original pixel value of the current pixel is greater than the first filtering threshold, the adjusted pixel value of the current pixel can be determined based on the filtered pixel value of the current pixel, the original pixel value of the current pixel and the first filtering offset value. If the difference between the filtered pixel value of the current pixel and the original pixel value of the current pixel is less than the second filtering threshold, the adjusted pixel value of the current pixel is determined based on the filtered pixel value of the current pixel, the original pixel value of the current pixel and the second filtering offset value.
  • For example, let Y1(i) represent the original pixel value of the current pixel, Y2(i) represent the filtered pixel value of the current pixel, Y3(i) represent the adjusted pixel value of the current pixel, and Yv(i) = (Y1(i)+Y2(i)+1)>>1. Tv can represent the first filtering threshold, f0v can represent the first filtering offset value, NTv can represent the second filtering threshold, and f1v can represent the second filtering offset value, where NTv is generally set as -Tv.
  • clip(x) means to limit x to a preset value range, which is generally [0, 2^D - 1], where D is the image bit depth; for an 8-bit image the range is [0, 255], and for a 10-bit image the range is [0, 1023].
  • the adjusted pixel value can be clipped to the preset value range through the Clip (trim) operation.
  • When the adjusted pixel value is greater than the upper limit of the preset value range, the adjusted pixel value is set to the upper limit of the preset value range; when the adjusted pixel value is smaller than the lower limit of the preset value range, the adjusted pixel value is set to the lower limit of the preset value range. For example, taking an 8-bit image as an example, when the adjusted pixel value is less than 0, the adjusted pixel value is set to 0; when the adjusted pixel value is greater than 255, the adjusted pixel value is set to 255.
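  • A minimal sketch of this adjustment for the current pixel, using the notation above and the clip_pixel() helper from the earlier sketch; the way the offset is combined with Yv(i) is an assumption, since only the definitions of Yv(i), the thresholds and the offsets are given here, and the exact formula in the specification may differ:

      /* Illustrative DBR-style refinement of the filtered value of the current pixel.
       * Y1: original value, Y2: value after deblocking filtering,
       * Tv/NTv: first/second filtering thresholds (NTv typically -Tv),
       * f0v/f1v: first/second filtering offset values. */
      static int dbr_adjust(int Y1, int Y2, int Tv, int NTv, int f0v, int f1v, int bit_depth)
      {
          int Yv = (Y1 + Y2 + 1) >> 1;      /* rounded average of original and filtered */
          int diff = Y2 - Y1;

          if (diff > Tv)
              return clip_pixel(Yv + f0v, bit_depth);   /* filtered value much larger  */
          if (diff < NTv)
              return clip_pixel(Yv + f1v, bit_depth);   /* filtered value much smaller */
          return Y2;                                    /* keep the filtered value     */
      }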
  • In step S34, if the difference between the filtered pixel value of the reference pixel and the original pixel value of the reference pixel is greater than the third filtering threshold, the adjusted pixel value of the reference pixel can be determined based on the filtered pixel value of the reference pixel, the original pixel value of the reference pixel and the third filtering offset value. If the difference between the filtered pixel value of the reference pixel and the original pixel value of the reference pixel is less than the fourth filtering threshold, the adjusted pixel value of the reference pixel is determined based on the filtered pixel value of the reference pixel, the original pixel value of the reference pixel and the fourth filtering offset value.
  • the manner of determining the adjusted pixel value of the reference pixel point is similar to the manner of determining the adjusted pixel value of the current pixel point, and details are not described herein again.
  • the third filtering threshold and the first filtering threshold may be the same or different, the third filtering offset value and the first filtering offset value may be the same or different, and the fourth filtering threshold and the second filtering threshold may be The same or different, the fourth filter offset value and the second filter offset value may be the same or different.
  • In a possible implementation, the first filtering threshold, the first filtering offset value, the second filtering offset value, the third filtering threshold, the third filtering offset value and the fourth filtering offset value corresponding to the current block can be parsed from the high-level syntax. Since the first filtering threshold and the second filtering threshold are opposite numbers to each other, and the third filtering threshold and the fourth filtering threshold are opposite numbers to each other, the decoding end can determine the second filtering threshold and the fourth filtering threshold.
  • Alternatively, the first filtering threshold, the first filtering offset value, the second filtering offset value, the fourth filtering threshold, the third filtering offset value and the fourth filtering offset value corresponding to the current block can be parsed from the high-level syntax. Since the first filtering threshold and the second filtering threshold are opposite numbers to each other, and the third filtering threshold and the fourth filtering threshold are opposite numbers to each other, the decoding end can determine the second filtering threshold and the third filtering threshold.
  • Alternatively, the second filtering threshold, the first filtering offset value, the second filtering offset value, the third filtering threshold, the third filtering offset value and the fourth filtering offset value corresponding to the current block can be parsed from the high-level syntax. Since the first filtering threshold and the second filtering threshold are opposite numbers to each other, and the third filtering threshold and the fourth filtering threshold are opposite numbers to each other, the decoding end can determine the first filtering threshold and the fourth filtering threshold.
  • Alternatively, the second filtering threshold, the first filtering offset value, the second filtering offset value, the fourth filtering threshold, the third filtering offset value and the fourth filtering offset value corresponding to the current block can be parsed from the high-level syntax. Since the first filtering threshold and the second filtering threshold are opposite numbers to each other, and the third filtering threshold and the fourth filtering threshold are opposite numbers to each other, the decoding end can determine the first filtering threshold and the third filtering threshold.
  • In another possible implementation, the first filtering threshold (or the second filtering threshold, or the third filtering threshold, or the fourth filtering threshold; that is, the other three filtering thresholds can be derived from one filtering threshold), the first filtering offset value (or the third filtering offset value) and the second filtering offset value (or the fourth filtering offset value) corresponding to the current block may be parsed from the high-level syntax.
  • For example, after the first filtering threshold is parsed, since the first filtering threshold and the second filtering threshold are opposite numbers to each other, the second filtering threshold can be determined. Since the first filtering threshold is the same as the third filtering threshold, the third filtering threshold can be determined. Since the third filtering offset value is the same as the first filtering offset value, the third filtering offset value can be determined. Since the fourth filtering offset value is the same as the second filtering offset value, the fourth filtering offset value can be determined. Since the third filtering threshold and the fourth filtering threshold are opposite numbers to each other, the fourth filtering threshold can be determined.
  • The above methods are just a few examples, which are not limited, as long as the decoding end can know the first filtering threshold, the second filtering threshold, the third filtering threshold, the fourth filtering threshold, the first filtering offset value, the second filtering offset value, the third filtering offset value and the fourth filtering offset value, that is, as long as the above values can be obtained by parsing or derivation.
  • the high-level syntax may include, but is not limited to, one of the following syntaxes: SPS-level high-level syntax; PPS-level high-level syntax; picture header-level high-level syntax; frame-level high-level syntax; slice header-level high-level syntax; CTU-level high-level syntax ; CU-level high-level syntax.
  • The type of the high-level syntax is not limited, as long as the high-level syntax can carry the filtering thresholds and filtering offset values corresponding to the current block.
  • the pixel value of the current pixel in the current block may be a luminance component or a chrominance component.
  • Whether the enhanced filtering mode is allowed to be enabled may be indicated by the enhanced filtering mode enable flag. If the enhanced filtering mode enable flag allows the current block to enable the enhanced filtering mode, it is necessary to determine whether the current pixel in the current block satisfies the enabling condition of the enhanced filtering mode; if the current pixel meets the enabling condition of the enhanced filtering mode, the enhanced filtering mode is used to adjust the original pixel value of the current pixel. If the enhanced filtering mode enable flag does not allow the current block to enable the enhanced filtering mode, it is directly determined that each pixel in the current block does not meet the enabling condition of the enhanced filtering mode, and the enhanced filtering mode will not be used to adjust the original pixel value of the current pixel.
  • the enhancement filter mode enable flag corresponding to the current block allows the enhancement filter mode to be enabled for the current block, it is determined whether the current pixel in the current block satisfies the enablement condition of the enhancement filter mode. If the enhancement filter mode enable flag corresponding to the current block does not allow the enhancement filter mode to be enabled for the current block, it is determined that each pixel in the current block does not satisfy the enablement condition of the enhancement filter mode.
  • The enhanced filtering mode enable flag corresponding to the current block may be parsed from the high-level syntax. For example, if the enhanced filtering mode enable flag is the first value (such as 1), it means that the enhanced filtering mode enable flag allows the current block to enable the enhanced filtering mode; if the enhanced filtering mode enable flag is the second value (such as 0), it means that the enhanced filtering mode enable flag does not allow the current block to enable the enhanced filtering mode.
  • the high-level syntax may include, but is not limited to, one of the following syntaxes: SPS-level high-level syntax; PPS-level high-level syntax; picture header-level high-level syntax; frame-level high-level syntax; slice header-level high-level syntax; CTU-level high-level syntax ; CU-level high-level syntax.
  • the type of the high-level syntax is not limited, as long as the enhanced filtering mode enable flag corresponding to the current block can be carried through the high-level syntax.
  • Embodiment 8: When the enabling condition of the common filtering mode is met, the DBF filtering method (i.e., the deblocking filtering method) can be used to perform deblocking filtering on the original pixel value of the pixel. Since DBF filtering is divided into vertical DBF filtering and horizontal DBF filtering, the following steps can be used to perform deblocking filtering on the original pixel value of the pixel:
  • the first step: the original pixel value Y1(i) is filtered by the vertical DBF to obtain the filtered pixel value Y2(i);
  • the second step: the pixel value Y2(i) is filtered by the horizontal DBF to obtain the filtered pixel value Y3(i).
  • if only vertical DBF filtering needs to be performed on the pixel, only the first step is performed to obtain the filtered pixel value of the pixel.
  • if only horizontal DBF filtering needs to be performed on the pixel, only the second step is performed to obtain the filtered pixel value of the pixel, with the pixel value Y2(i) in the second step replaced by the original pixel value of the pixel. If vertical DBF filtering is performed on the pixel first and then horizontal DBF filtering, the first step and the second step are performed in sequence.
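  • As a minimal illustration of the two-step flow above (and not the normative filter itself), the following sketch assumes hypothetical helpers vertical_dbf() and horizontal_dbf() standing in for the vertical and horizontal DBF filters; for brevity they are shown operating on a single pixel value, whereas a real DBF filter also uses neighboring pixels.

```python
# Sketch of the two-step deblocking flow described above.
# vertical_dbf / horizontal_dbf are hypothetical helper functions.
def deblock_pixel(y1, do_vertical, do_horizontal, vertical_dbf, horizontal_dbf):
    if do_vertical and do_horizontal:
        y2 = vertical_dbf(y1)        # step 1: Y1(i) -> Y2(i)
        return horizontal_dbf(y2)    # step 2: Y2(i) -> Y3(i)
    if do_vertical:
        return vertical_dbf(y1)      # only step 1
    if do_horizontal:
        # only step 2, with Y2(i) replaced by the original pixel value
        return horizontal_dbf(y1)
    return y1
```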
  • the DBF filtering method (i.e., the deblocking filtering method) can be used to perform deblocking filtering on the original pixel values of the pixels, and the filtered pixel values obtained after deblocking filtering can then be adjusted. Since DBF filtering is divided into vertical DBF filtering and horizontal DBF filtering, the following steps can be used to perform deblocking filtering on the original pixel value of the pixel and adjust the filtered pixel value after deblocking filtering:
  • the first step: the original pixel value Y1(i) is filtered by the vertical DBF to obtain the filtered pixel value Y2(i);
  • the second step: based on Y2(i)-Y1(i), the adjusted pixel value Y3(i) is obtained;
  • the third step: the pixel value Y3(i) is filtered by the horizontal DBF to obtain the filtered pixel value Y4(i);
  • the fourth step: based on Y4(i)-Y3(i), the adjusted pixel value Y5(i) is obtained.
  • if only vertical DBF filtering needs to be performed on the pixel, the first and second steps are performed to obtain the adjusted pixel value of the pixel.
  • if only horizontal DBF filtering needs to be performed on the pixel, the third and fourth steps are performed to obtain the adjusted pixel value of the pixel, with the pixel value Y3(i) in the third step replaced by the original pixel value of the pixel.
  • if vertical DBF filtering is performed on the pixel first and then horizontal DBF filtering, the first step, the second step, the third step and the fourth step are performed in sequence. If horizontal DBF filtering is performed on the pixel first and then vertical DBF filtering, the execution steps are similar and are not repeated here.
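  • As a rough sketch of the filter-then-adjust sequence just described (a simplification, not the normative process), the following assumes hypothetical helpers for the two DBF filters and for the enhanced-filtering adjustment that is detailed below:

```python
# Four-step flow of Embodiment 8: filter, adjust, filter, adjust.
# vertical_dbf / horizontal_dbf / enhanced_adjust are hypothetical stand-ins.
def filter_and_adjust(y1, vertical_dbf, horizontal_dbf, enhanced_adjust):
    y2 = vertical_dbf(y1)            # step 1: Y1(i) -> Y2(i)
    y3 = enhanced_adjust(y2, y1)     # step 2: adjust based on Y2(i)-Y1(i)
    y4 = horizontal_dbf(y3)          # step 3: Y3(i) -> Y4(i)
    y5 = enhanced_adjust(y4, y3)     # step 4: adjust based on Y4(i)-Y3(i)
    return y5
```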
  • the processing process of the enhanced filtering mode is adopted, that is, the process of adjusting the filtered pixel value to obtain the adjusted pixel value.
  • clip(x) indicates that x is limited to a preset image value range, and the image value range can generally be [0, 2^D - 1], where D is the image bit depth. Therefore, for an 8-bit image, the image value range can be [0, 255], and for a 10-bit image, the image value range is [0, 1023].
  • the threshold value NTv is generally set to -Tv, and may also be other values.
  • Tv and NTv are filtering thresholds
  • f0v, f1v and f2v are filtering offset values
  • clip(x) indicates that x is limited to a preset value range.
  • Tv is the first filtering threshold and the third filtering threshold above (taking the first filtering threshold and the third filtering threshold as the same as an example);
  • NTv is the second filtering threshold and the fourth filtering threshold above (taking the second filtering threshold and the fourth filtering threshold as the same as an example);
  • f0v is the first filtering offset value and the third filtering offset value above (taking the first filtering offset value and the third filtering offset value as the same as an example);
  • f1v is the second filtering offset value and the fourth filtering offset value above (taking the second filtering offset value and the fourth filtering offset value as the same as an example).
  • NTv = -Tv, that is, Tv and NTv are opposite numbers to each other.
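  • The following is a minimal sketch of the vertical enhanced-filtering adjustment under the assumptions that the residual Y2(i)-Y1(i) is compared against Tv and NTv (= -Tv) and that f2v is applied when the residual lies between the two thresholds; the handling of that middle branch is an assumption made only for illustration.

```python
def clip(x, bit_depth=8):
    # limit x to the image value range [0, 2^bit_depth - 1]
    return max(0, min(x, (1 << bit_depth) - 1))

def enhanced_adjust_vertical(y2, y1, tv, f0v, f1v, f2v, bit_depth=8):
    ntv = -tv                           # NTv and Tv are opposite numbers
    residual = y2 - y1
    if residual > tv:
        return clip(y2 + f0v, bit_depth)
    if residual < ntv:
        return clip(y2 + f1v, bit_depth)
    return clip(y2 + f2v, bit_depth)    # assumed middle branch
```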
  • Th and NTh are filtering thresholds
  • f0h, f1h and f2h are filtering offset values
  • clip(x) indicates that x is limited to a preset value range.
  • Th is the first filtering threshold and the third filtering threshold above (taking the first filtering threshold and the third filtering threshold as the same as an example);
  • NTh is the second filtering threshold and the fourth filtering threshold above (taking the second filtering threshold and the fourth filtering threshold as the same as an example);
  • f0h is the first filtering offset value and the third filtering offset value above (taking the first filtering offset value and the third filtering offset value as the same as an example);
  • f1h is the second filtering offset value and the fourth filtering offset value above (taking the second filtering offset value and the fourth filtering offset value as the same as an example).
  • NTh = -Th, that is, Th and NTh are opposite numbers to each other.
  • Embodiment 9: In DBF, filtering is performed only according to a predetermined criterion, and there may be over-filtering or under-filtering. For example, if the reconstructed value before DBF is Y1, and the pixel value after DBF filtering is Y2, classification can be performed based on Y2-Y1.
  • the main benefit of classification based on the filtered residual is that some over-filtered or under-filtered pixel values can be specially compensated, so that these classes of pixels are closer to the original values.
  • the so-called over-filtering means that Y2 is much larger (or much smaller) than Y1, so that Y2 is much larger (or much smaller) than the original pixel value.
  • the enhanced adjustment mode can be used to adjust the pixel value of the pixel; that is, if the enhanced adjustment mode is enabled for the current pixel in the current block, the enhanced adjustment mode is used to adjust the original pixel value of the pixel, instead of using the normal filtering mode or the enhanced filtering mode to adjust the original pixel value of the pixel.
  • the adjustment process of the original pixel value may include the following steps:
  • the first step: the original pixel value Y1(i) is filtered by the vertical DBF to obtain the filtered pixel value Y2(i);
  • the second step: based on Y2(i)-Y1(i), the adjusted pixel value Y3(i) is obtained;
  • the third step: the pixel value Y3(i) is filtered by the horizontal DBF to obtain the filtered pixel value Y4(i);
  • the fourth step: based on Y4(i)-Y3(i), the adjusted pixel value Y5(i) is obtained.
  • the threshold may be the first filtering threshold or the second filtering threshold in the foregoing embodiment.
  • the first filtering threshold and the second filtering threshold are opposite numbers to each other. If the first filtering threshold is a positive value, the threshold may be the first filtering threshold; if the second filtering threshold is a positive value, the threshold may be the second filtering threshold.
  • the adjustment process of the original pixel value can be divided into the following three cases:
  • if BS is greater than 0, but abs(Y2(i)-Y1(i)) ≥ threshold, filtering can be performed at this time (that is, vertical DBF filtering is performed on the original pixel value Y1(i), that is, the first step is performed); on the basis of performing the first step, the filtered pixel value Y2(i) can also be adjusted by using the enhanced filtering mode to obtain the adjusted pixel value Y3(i) of the pixel.
  • the adjustment process of the original pixel value can be divided into the following three cases:
  • if BS is greater than 0, but abs(Y4(i)-Y3(i)) ≥ threshold, filtering can be performed at this time (that is, horizontal DBF filtering is performed on the pixel value Y3(i), that is, the third step is performed); on the basis of performing the third step, the enhanced filtering mode can also be used to adjust the filtered pixel value Y4(i) of the pixel to obtain the adjusted pixel value Y5(i) of the pixel.
  • although the first step is described as: the original pixel value Y1(i) is filtered by the vertical DBF to obtain the filtered pixel value Y2(i), in this case the first step is not actually performed, that is, it is not necessary to obtain the filtered pixel value Y2(i).
  • although the third step is described as: the pixel value Y3(i) is filtered by the horizontal DBF to obtain the filtered pixel value Y4(i), in this case the third step is not actually performed, that is, it is not necessary to obtain the filtered pixel value Y4(i).
  • the first step: the original pixel value Y1(i) is filtered by the vertical DBF to obtain the filtered pixel value Y2(i).
  • the second step: if BS is greater than 0, but after vertical DBF filtering is performed on Y1(i) it still satisfies abs(Y2(i)-Y1(i)) ≥ threshold, then the adjusted pixel value Y3(i) is obtained through the enhanced filtering mode.
  • the third step: the pixel value Y3(i) is filtered by the horizontal DBF to obtain the filtered pixel value Y4(i).
  • the fourth step: if BS is greater than 0, but after horizontal DBF filtering is performed on Y3(i) it still satisfies abs(Y4(i)-Y3(i)) ≥ threshold, then the adjusted pixel value Y5(i) is obtained through the enhanced filtering mode.
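  • The decision flow of the four steps above can be sketched as follows, assuming the comparison direction reconstructed above (the adjustment applies when the absolute filtered residual is at least the threshold) and, purely for brevity, a single boundary strength value for both directions:

```python
# Sketch of Embodiment 9's flow; enhanced_filter_adjust and
# enhanced_adjust_mode are hypothetical helpers for the two modes.
def embodiment9_flow(y1, bs, threshold, vertical_dbf, horizontal_dbf,
                     enhanced_filter_adjust, enhanced_adjust_mode):
    if bs > 0:
        y2 = vertical_dbf(y1)                     # step 1
        y3 = enhanced_filter_adjust(y2, y1) if abs(y2 - y1) >= threshold else y2
    else:
        y3 = enhanced_adjust_mode(y1)             # BS == 0, see Embodiment 10

    if bs > 0:
        y4 = horizontal_dbf(y3)                   # step 3
        y5 = enhanced_filter_adjust(y4, y3) if abs(y4 - y3) >= threshold else y4
    else:
        y5 = enhanced_adjust_mode(y3)
    return y5
```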
  • Embodiment 10: For Embodiment 9, if BS is 0, Y1(i) is adjusted through the enhanced adjustment mode to obtain the adjusted pixel value Y3(i); for the specific adjustment process, refer to the following steps. Likewise, if BS is 0, Y3(i) is adjusted through the enhanced adjustment mode to obtain the adjusted pixel value Y5(i); this process is similar to the process of obtaining the adjusted pixel value Y3(i) and is not repeated here.
  • the gradient value of Y1(i) is determined, where Y1(i) may be the original pixel value of the current pixel or the original pixel value of the reference pixel.
  • for vertical boundaries, the horizontal gradient value DY1(i) of Y1(i) can be calculated; for horizontal boundaries, the vertical gradient value DY1(i) of Y1(i) can be calculated.
  • alt_dbr_th represents the first adjustment threshold and the third adjustment threshold (taking the third adjustment threshold and the first adjustment threshold as the same as an example), alt_dbr_offset0 represents the first adjustment offset value and the third adjustment offset value (taking the third adjustment offset value and the first adjustment offset value as the same as an example), alt_dbr_offset1 represents the second adjustment offset value and the fourth adjustment offset value (taking the fourth adjustment offset value and the second adjustment offset value as the same as an example), and -alt_dbr_th represents the second adjustment threshold and the fourth adjustment threshold (taking the fourth adjustment threshold and the second adjustment threshold as the same as an example); -alt_dbr_th and alt_dbr_th are opposite numbers to each other.
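  • A minimal sketch of the enhanced adjustment mode for one pixel on a vertical boundary is given below. The gradient DY1(i) is represented by a hypothetical difference of the two horizontal neighbours, and leaving the pixel unchanged when the gradient lies between the two thresholds is likewise an assumption made for illustration.

```python
def enhanced_adjust_mode(y1, left_neighbor, right_neighbor,
                         alt_dbr_th, alt_dbr_offset0, alt_dbr_offset1,
                         bit_depth=8):
    def clip(x):
        return max(0, min(x, (1 << bit_depth) - 1))

    gradient = right_neighbor - left_neighbor    # hypothetical DY1(i)
    if gradient > alt_dbr_th:                    # first adjustment threshold
        return clip(y1 + alt_dbr_offset0)        # first adjustment offset
    if gradient < -alt_dbr_th:                   # second adjustment threshold
        return clip(y1 + alt_dbr_offset1)        # second adjustment offset
    return y1                                    # assumed: no adjustment
```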
  • Embodiment 11: the enabling of the enhanced adjustment mode is controlled through high-level syntax (e.g., SPS-level high-level syntax). For example, the flag bit adbr_enable_flag is encoded/decoded in the sequence header, that is, the encoding end encodes the flag bit adbr_enable_flag in the sequence header, and the decoding end decodes the flag bit adbr_enable_flag from the sequence header.
  • adbr_enable_flag is a binary variable, with a value of '1' indicating that the enhanced adjustment mode can be used, and a value of '0' indicating that the enhanced adjustment mode should not be used.
  • the value of AdbrEnableFlag is equal to adbr_enable_flag. If adbr_enable_flag does not exist in the bitstream, the value of AdbrEnableFlag is 0.
  • the enhanced adjustment mode enable flag (i.e., AdbrEnableFlag) corresponding to the current block can be parsed from the high-level syntax. If the enhanced adjustment mode enable flag is 1, it means that the enhanced adjustment mode enable flag allows the current block to enable the enhanced adjustment mode; if the enhanced adjustment mode enable flag is 0, it means that the enhanced adjustment mode enable flag does not allow the current block to enable the enhanced adjustment mode.
  • Embodiment 12 The enabling of the enhanced filtering mode and the enabling of the enhanced adjustment mode are simultaneously controlled through high-level syntax (eg, SPS-level high-level syntax). For example, encoding/decoding the flag bit dbr_enable_flag in the sequence header, that is, the encoding end encodes the flag bit dbr_enable_flag in the sequence header, and the decoding end decodes the flag bit dbr_enable_flag from the sequence header.
  • dbr_enable_flag is a binary variable, with a value of '1' indicating that the enhancement filter mode and enhancement adjustment mode are allowed, and a value of '0' indicating that the enhancement filter mode and enhancement adjustment mode are not allowed.
  • the value of DbrEnableFlag is equal to dbr_enable_flag. If dbr_enable_flag does not exist in the bitstream, the value of DbrEnableFlag is 0.
  • the enhancement filtering mode enable flag and the enhancement adjustment mode enable flag corresponding to the current block can be parsed from the high-level syntax (i.e., DbrEnableFlag; that is, DbrEnableFlag is used as both the enhancement filtering mode enable flag and the enhancement adjustment mode enable flag). If DbrEnableFlag is 1, it means that the current block is allowed to enable the enhancement filtering mode and the enhancement adjustment mode; if DbrEnableFlag is 0, it means that the current block is not allowed to enable the enhancement filtering mode and the enhancement adjustment mode.
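  • The default-to-zero derivation of these sequence-header flags can be sketched as follows; parse_flag() is a hypothetical bitstream-reading helper that returns None when the flag is absent.

```python
def derive_enable_flags(sequence_header, parse_flag):
    # Embodiment 11: adbr_enable_flag controls the enhanced adjustment mode.
    adbr = parse_flag(sequence_header, 'adbr_enable_flag')
    AdbrEnableFlag = adbr if adbr is not None else 0

    # Embodiment 12: dbr_enable_flag jointly controls the enhanced filtering
    # mode and the enhanced adjustment mode.
    dbr = parse_flag(sequence_header, 'dbr_enable_flag')
    DbrEnableFlag = dbr if dbr is not None else 0
    return AdbrEnableFlag, DbrEnableFlag
```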
  • Embodiment 13 For an expression of the high-level syntax (such as the high-level syntax of the picture header), reference may be made to Table 1, for example, the syntax shown in Table 1 is encoded/decoded in the picture header. That is, the encoding side encodes the syntax shown in Table 1 in the image header, and the decoding side decodes the syntax shown in Table 1 from the image header.
  • Table 1 (picture-header high-level syntax; the syntax elements it contains are described below).
  • Picture-level deblocking filtering vertical adjustment enable flag picture_dbr_v_enable_flag, picture_dbr_v_enable_flag is a binary variable, a value of '1' indicates that the current image allows the use of deblocking filtering vertical adjustment, and a value of '0' indicates that the current image does not allow the use of deblocking filtering vertical adjustment.
  • the value of PictureDbrVEnableFlag is equal to the value of picture_dbr_v_enable_flag. If picture_dbr_v_enable_flag does not exist in the bitstream, the value of PictureDbrVEnableFlag is 0.
  • PictureDbrVEnableFlag corresponds to the enhancement adjustment mode enable flag, which is the enhancement adjustment mode enable flag for vertical DBF filtering. That is, when vertical DBF filtering is required, PictureDbrVEnableFlag indicates that the enhanced adjustment mode is allowed to be enabled, or the enhanced adjustment mode is not allowed to be enabled.
  • PictureDbrVEnableFlag corresponds to the enhancement filtering mode enable flag bit, and is the enhancement filtering mode enable flag bit for vertical DBF filtering. That is to say, when vertical DBF filtering is required, PictureDbrVEnableFlag indicates that the enhancement filtering mode is allowed to be enabled, or the enhancement filtering mode is not allowed to be enabled.
  • PictureDbrVEnableFlag can represent both the enhancement adjustment mode enable flag for vertical DBF filtering and the enhancement filtering mode enable flag for vertical DBF filtering, that is, the enhancement adjustment mode enable flag and the enhancement filtering mode enable flag share the same flag bit; in other words, the current image either allows the enhancement adjustment mode and the enhancement filtering mode to be enabled at the same time, or does not allow the enhancement adjustment mode and the enhancement filtering mode to be enabled at the same time.
  • the deblocking filtering vertical adjustment threshold dbr_v_threshold_minus1 is used to determine the vertical adjustment threshold of the current image deblocking filtering, and its value range is 0-1.
  • the value of DbrVThreshold is equal to the value of dbr_v_threshold_minus1 plus 1. If dbr_v_threshold_minus1 does not exist in the bitstream, the value of DbrVThreshold is 0.
  • DbrVThreshold corresponds to the first adjustment threshold (taking the third adjustment threshold and the first adjustment threshold as the same as an example), and is the first adjustment threshold for vertical DBF filtering. That is to say, when vertical DBF filtering is required, DbrVThreshold represents the first adjustment threshold in the above embodiment.
  • the second adjustment threshold in the above embodiment (taking the fourth adjustment threshold and the second adjustment threshold as the same as an example) and the first adjustment threshold are opposite numbers to each other; therefore, the second adjustment threshold can also be determined based on DbrVThreshold.
  • DbrVThreshold corresponds to the first filtering threshold (taking the third filtering threshold and the first filtering threshold as the same as an example), and is the first filtering threshold for vertical DBF filtering. That is to say, when vertical DBF filtering is required, DbrVThreshold represents the first filtering threshold in the above embodiment.
  • the second filtering threshold in the above embodiment (taking the fourth filtering threshold and the second filtering threshold as the same as an example) and the first filtering threshold are opposite numbers to each other; therefore, the second filtering threshold can also be determined based on DbrVThreshold.
  • DbrVThreshold may represent the first adjustment threshold and the first filtering threshold for vertical DBF filtering, that is, the first adjustment threshold and the first filtering threshold are the same, and both take the same value.
  • the deblocking filter vertical adjustment offset value 0 (dbr_v_offset0_minus1) is used to determine the offset value 0 of the deblocking filter vertical adjustment of the current image, and the value range is 0-3.
  • the value of DbrVOffset0 is equal to the negative of the value of dbr_v_offset0_minus1 plus 1, that is, DbrVOffset0 = -(dbr_v_offset0_minus1 + 1). If dbr_v_offset0_minus1 does not exist in the bit stream, the value of DbrVOffset0 is 0.
  • DbrVOffset0 corresponds to the first filtering offset value (taking the third filtering offset value and the first filtering offset value as the same as an example), and is the first filtering offset value for vertical DBF filtering; that is, when vertical DBF filtering is required, DbrVOffset0 represents the first filtering offset value in the above embodiment.
  • the deblocking filtering vertical adjustment offset value 1 (dbr_v_offset1_minus1) is used to determine the offset value 1 of the deblocking filtering vertical adjustment of the current image, and the value range may be 0-3.
  • the value of DbrVOffset1 is equal to the value of dbr_v_offset1_minus1 plus 1. If dbr_v_offset1_minus1 does not exist in the bitstream, the value of DbrVOffset1 is 0.
  • DbrVOffset1 corresponds to the second filtering offset value (taking the fourth filtering offset value and the second filtering offset value as the same as an example), and is the second filtering offset value for vertical DBF filtering; that is, when vertical DBF filtering is required, DbrVOffset1 represents the second filtering offset value in the above embodiment.
  • the enhanced deblocking filter vertical adjustment offset value 0 (dbr_v_alt_offset0_minus1), dbr_v_alt_offset0_minus1 is used to determine the vertical adjustment offset value 0 when the current image deblocking filter BS is 0, and the value range of dbr_v_alt_offset0_minus1 may be 0-3.
  • the value of DbrVAltOffset0 can be equal to the negative of the value of dbr_v_alt_offset0_minus1 plus 1, that is, DbrVAltOffset0 = -(dbr_v_alt_offset0_minus1 + 1). If dbr_v_alt_offset0_minus1 does not exist in the bit stream, the value of DbrVAltOffset0 is 0.
  • DbrVAltOffset0 corresponds to the first adjustment offset value (taking the third adjustment offset value and the first adjustment offset value as the same as an example), and is the first adjustment offset value for vertical DBF filtering; that is, when vertical DBF filtering is performed, DbrVAltOffset0 represents the first adjustment offset value in the above embodiment.
  • the enhanced deblocking filter vertical adjustment offset value 1 (dbr_v_alt_offset1_minus1), dbr_v_alt_offset1_minus1 is used to determine the vertical adjustment offset value 1 when the current image deblocking filter BS is 0, and the value range of dbr_v_alt_offset1_minus1 may be 0-3.
  • the value of DbrVAltOffset1 is equal to the value of dbr_v_alt_offset1_minus1 plus 1. If dbr_v_alt_offset1_minus1 does not exist in the bit stream, the value of DbrVAltOffset1 is 0.
  • DbrVAltOffset1 corresponds to the second adjustment offset value (taking the fourth adjustment offset value and the second adjustment offset value as the same as an example), and is the second adjustment offset value for vertical DBF filtering; that is, when vertical DBF filtering is performed, DbrVAltOffset1 represents the second adjustment offset value in the above embodiment.
  • Picture-level deblocking filtering horizontal adjustment allow flag picture_dbr_h_enable_flag, picture_dbr_h_enable_flag is a binary variable, the value of '1' indicates that the current image allows the use of deblocking filtering horizontal adjustment, the value of '0' indicates that the current image does not allow the use of deblocking filtering horizontal adjustment.
  • the value of PhDbrHEnableFlag is equal to the value of picture_dbr_h_enable_flag. If picture_dbr_h_enable_flag does not exist in the bitstream, the value of PhDbrHEnableFlag is 0.
  • PhDbrHEnableFlag corresponds to the enhancement adjustment mode enable flag, which is the enhancement adjustment mode enable flag for horizontal DBF filtering. That is to say, when horizontal DBF filtering is required, PhDbrHEnableFlag indicates that the enhancement adjustment mode is allowed to be enabled, or the enhancement adjustment mode is not allowed to be enabled.
  • PhDbrHEnableFlag corresponds to the enhancement filter mode enable flag bit, and is the enhancement filter mode enable flag bit for horizontal DBF filtering. That is to say, when horizontal DBF filtering is required, PhDbrHEnableFlag indicates that the enhancement filtering mode is allowed to be enabled, or the enhancement filtering mode is not allowed to be enabled.
  • PhDbrHEnableFlag can represent both the enhancement adjustment mode enable flag for horizontal DBF filtering and the enhancement filtering mode enable flag for horizontal DBF filtering, that is, the enhancement adjustment mode enable flag and the enhancement filtering mode enable flag share the same flag bit; in other words, the current image either allows the enhancement adjustment mode and the enhancement filtering mode to be enabled at the same time, or does not allow the enhancement adjustment mode and the enhancement filtering mode to be enabled at the same time.
  • the deblocking filtering horizontal adjustment threshold dbr_h_threshold_minus1 is used to determine the horizontal adjustment threshold of the current image deblocking filtering, and its value range is 0-1.
  • the value of DbrHThreshold is equal to the value of dbr_h_threshold_minus1 plus 1. If dbr_h_threshold_minus1 does not exist in the bitstream, the value of DbrHThreshold is 0.
  • DbrHThreshold corresponds to the first adjustment threshold (taking the third adjustment threshold and the first adjustment threshold as the same as an example), and is the first adjustment threshold for horizontal DBF filtering. That is to say, when horizontal DBF filtering is required, DbrHThreshold represents the first adjustment threshold in the above embodiment.
  • the second adjustment threshold in the above embodiment (taking the fourth adjustment threshold and the second adjustment threshold as the same as an example) and the first adjustment threshold are opposite numbers to each other; therefore, the second adjustment threshold can also be determined based on DbrHThreshold.
  • DbrHThreshold corresponds to the first filtering threshold (taking the third filtering threshold being the same as the first filtering threshold as an example), and is the first filtering threshold for horizontal DBF filtering. That is to say, when horizontal DBF filtering is required, DbrHThreshold represents the first filtering threshold in the above embodiment.
  • the second filtering threshold in the above embodiment (taking the fourth filtering threshold and the second filtering threshold as the same as an example) and the first filtering threshold are opposite numbers to each other; therefore, the second filtering threshold can also be determined based on DbrHThreshold.
  • DbrHThreshold may represent the first adjustment threshold and the first filtering threshold for horizontal DBF filtering, that is, the first adjustment threshold and the first filtering threshold are the same, and both take the same value.
  • the deblocking filter horizontal adjustment offset value 0 (dbr_h_offset0_minus1) is used to determine the offset value 0 of the deblocking filter horizontal adjustment of the current image, and the value range is 0-3.
  • the value of DbrHOffset0 is equal to the negative of the value of dbr_h_offset0_minus1 plus 1, that is, DbrHOffset0 = -(dbr_h_offset0_minus1 + 1). If dbr_h_offset0_minus1 does not exist in the bit stream, the value of DbrHOffset0 is 0.
  • DbrHOffset0 corresponds to the first filtering offset value (taking the third filtering offset value and the first filtering offset value as the same as an example), and is the first filtering offset value for horizontal DBF filtering; that is, when horizontal DBF filtering is required, DbrHOffset0 represents the first filtering offset value in the above embodiment.
  • the deblocking filter horizontal adjustment offset value 1 (dbr_h_offset1_minus1) is used to determine the offset value 1 of the deblocking filter horizontal adjustment of the current image, and the value range may be 0-3.
  • the value of DbrHOffset1 is equal to the value of dbr_h_offset1_minus1 plus 1. If dbr_h_offset1_minus1 does not exist in the bitstream, the value of DbrHOffset1 is 0.
  • DbrHOffset1 corresponds to the second filtering offset value (taking the fourth filtering offset value and the second filtering offset value as the same as an example), and is the second filtering offset value for horizontal DBF filtering; that is, when horizontal DBF filtering is required, DbrHOffset1 represents the second filtering offset value in the above embodiment.
  • the enhanced deblocking filter horizontal adjustment offset value 0 (dbr_h_alt_offset0_minus1), dbr_h_alt_offset0_minus1 is used to determine the horizontal adjustment offset value 0 when the current image deblocking filter BS is 0, and the value range of dbr_h_alt_offset0_minus1 may be 0-3.
  • the value of DbrHAltOffset0 can be equal to the negative of the value of dbr_h_alt_offset0_minus1 plus 1, that is, DbrHAltOffset0 = -(dbr_h_alt_offset0_minus1 + 1). If dbr_h_alt_offset0_minus1 does not exist in the bit stream, the value of DbrHAltOffset0 is 0.
  • DbrHAltOffset0 corresponds to the first adjustment offset value (taking the third adjustment offset value and the first adjustment offset value as the same as an example), and is the first adjustment offset value for horizontal DBF filtering; that is, when horizontal DBF filtering is performed, DbrHAltOffset0 represents the first adjustment offset value in the above embodiment.
  • the enhanced deblocking filter horizontal adjustment offset value 1 (dbr_h_alt_offset1_minus1), dbr_h_alt_offset1_minus1 is used to determine the horizontal adjustment offset value 1 when the current image deblocking filter BS is 0, and the value range of dbr_h_alt_offset1_minus1 may be 0-3.
  • the value of DbrHAltOffset1 is equal to the value of dbr_h_alt_offset1_minus1 plus 1. If dbr_h_alt_offset1_minus1 does not exist in the bit stream, the value of DbrHAltOffset1 is 0.
  • DbrHAltOffset1 corresponds to the second adjustment offset value (taking the fourth adjustment offset value and the second adjustment offset value as the same as an example), and is the second adjustment offset value for horizontal DBF filtering; that is, when horizontal DBF filtering is performed, DbrHAltOffset1 represents the second adjustment offset value in the above embodiment.
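  • The derivations listed above for the vertical-direction variables of Table 1 can be summarized in the following sketch, where ph is assumed to be a dict-like container of the parsed picture-header syntax elements; the horizontal-direction variables are derived in the same way from the corresponding *_h_* elements.

```python
def derive_table1_vertical(ph):
    def val(name):
        return ph.get(name)          # None when the element is absent

    PictureDbrVEnableFlag = val('picture_dbr_v_enable_flag') or 0
    DbrVThreshold  = val('dbr_v_threshold_minus1') + 1 if val('dbr_v_threshold_minus1') is not None else 0
    DbrVOffset0    = -(val('dbr_v_offset0_minus1') + 1) if val('dbr_v_offset0_minus1') is not None else 0
    DbrVOffset1    = val('dbr_v_offset1_minus1') + 1 if val('dbr_v_offset1_minus1') is not None else 0
    DbrVAltOffset0 = -(val('dbr_v_alt_offset0_minus1') + 1) if val('dbr_v_alt_offset0_minus1') is not None else 0
    DbrVAltOffset1 = val('dbr_v_alt_offset1_minus1') + 1 if val('dbr_v_alt_offset1_minus1') is not None else 0
    return (PictureDbrVEnableFlag, DbrVThreshold, DbrVOffset0,
            DbrVOffset1, DbrVAltOffset0, DbrVAltOffset1)
```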
  • Embodiment 14 For an expression of the high-level syntax (such as the high-level syntax of the picture header), reference may be made to Table 2, for example, the syntax shown in Table 2 for encoding/decoding of the picture header. That is, the encoding side encodes the syntax shown in Table 2 in the image header, and the decoding side decodes the syntax shown in Table 2 from the image header.
  • Table 2 (picture-header high-level syntax; the syntax elements it contains are described below).
  • the picture-level enhanced vertical adjustment enable flag picture_alt_dbr_v_enable_flag is a binary variable.
  • the value of '1' means that the current image allows the use of enhanced vertical adjustment, and the value of '0' means that the current image does not allow the use of enhanced vertical adjustment.
  • the value of PictureAltDbrVEnableFlag may be equal to the value of picture_alt_dbr_v_enable_flag; if picture_alt_dbr_v_enable_flag does not exist in the bitstream, the value of PictureAltDbrVEnableFlag is 0.
  • PictureAltDbrVEnableFlag corresponds to the enhancement adjustment mode enable flag, and is the enhancement adjustment mode enable flag for vertical DBF filtering; that is, when vertical DBF filtering is required, PictureAltDbrVEnableFlag indicates that the enhanced adjustment mode is allowed to be enabled, or that the enhanced adjustment mode is not allowed to be enabled.
  • PictureAltDbrVEnableFlag is only an enhancement adjustment mode enable flag for vertical DBF filtering, not an enhancement filter mode enable flag for vertical DBF filtering.
  • the picture-level enhanced horizontal adjustment enable flag picture_alt_dbr_h_enable_flag is a binary variable.
  • a value of '1' indicates that the current image allows the use of the enhanced horizontal adjustment, and a value of '0' indicates that the current image does not allow the use of the enhanced horizontal adjustment.
  • the value of PictureAltDbrHEnableFlag may be equal to the value of picture_alt_dbr_h_enable_flag; if picture_alt_dbr_h_enable_flag does not exist in the bitstream, the value of PictureAltDbrHEnableFlag is 0.
  • PhAltDbrHEnableFlag corresponds to the enhancement adjustment mode enable flag, and is the enhancement adjustment mode enable flag for horizontal DBF filtering; that is, when horizontal DBF filtering is required, PhAltDbrHEnableFlag indicates that the enhanced adjustment mode is allowed to be enabled, or that the enhanced adjustment mode is not allowed to be enabled.
  • PhAltDbrHEnableFlag is only an enhancement adjustment mode enable flag for horizontal DBF filtering, not an enhancement filter mode enable flag for horizontal DBF filtering.
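  • The Table 2 enable flags follow the same default-to-zero rule, as sketched below (ph is again assumed to be a dict-like container of parsed picture-header syntax elements):

```python
def derive_table2(ph):
    v = ph.get('picture_alt_dbr_v_enable_flag')
    h = ph.get('picture_alt_dbr_h_enable_flag')
    PictureAltDbrVEnableFlag = v if v is not None else 0
    PictureAltDbrHEnableFlag = h if h is not None else 0
    return PictureAltDbrVEnableFlag, PictureAltDbrHEnableFlag
```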
  • Embodiment 15: For Embodiment 11, the encoding and decoding of adbr_enable_flag can be performed only when the deblocking filtering mode is enabled; that is, it can first be determined whether the deblocking filtering mode is enabled, and if so, the flag bit adbr_enable_flag is encoded/decoded in the sequence header; if not, the flag bit adbr_enable_flag is not encoded/decoded in the sequence header.
  • the enhancement adjustment mode (adbr_enable_flag is used to control the enabling of the enhancement adjustment mode) is a sub-mode of the deblocking filter mode, and the enhancement adjustment mode is allowed to be enabled only when the deblocking filter mode is enabled.
  • the encoding and decoding of dbr_enable_flag can be performed only when the deblocking filtering mode is enabled; that is, it can first be determined whether the deblocking filtering mode is enabled, and if so, the flag bit dbr_enable_flag is encoded/decoded in the sequence header; if not, the flag bit dbr_enable_flag is not encoded/decoded in the sequence header.
  • the enhancement filter mode (dbr_enable_flag is used to control the enablement of the enhancement filter mode) is a sub-mode of the deblock filter mode, and the enhancement filter mode is allowed to be enabled only when the deblock filter mode is enabled.
  • the encoding and decoding of the high-level syntax shown in Table 1 (for controlling the enabling of the enhancement filtering mode and the enabling of the enhancement adjustment mode) can be performed only when the deblocking filtering mode is enabled; that is, it can first be determined whether the deblocking filtering mode is enabled, and if so, the high-level syntax shown in Table 1 is encoded/decoded in the picture header; if not, the high-level syntax shown in Table 1 is not encoded/decoded in the picture header.
  • similarly, the encoding and decoding of the high-level syntax shown in Table 2 can be performed only when the deblocking filtering mode is enabled; that is, it can first be determined whether the deblocking filtering mode is enabled, and if so, the high-level syntax shown in Table 2 is encoded/decoded in the picture header; if not, the high-level syntax shown in Table 2 is not encoded/decoded in the picture header.
  • Embodiment 16 For the deblocking filtering process of the luminance component (that is, the current block is the luminance component), for example, the luminance component is adjusted by the enhancement adjustment mode, or the luminance component is adjusted by the enhancement filtering mode.
  • the current boundary to be filtered is a vertical boundary and the value of PictureDbrVEnableFlag is 1, or if the current boundary to be filtered is a horizontal boundary and the value of PictureDbrHEnableFlag is 1, the value of PictureDbrEnableFlag is 1; otherwise, PictureDbrEnableFlag is 0.
  • the current boundary to be filtered is a vertical boundary and the value of PictureAltDbrVEnableFlag is 1, or, if the current boundary to be filtered is a horizontal boundary and the value of PictureAltDbrHEnableFlag is 1, then the value of PictureAltDbrEnableFlag is 1; otherwise, PictureAltDbrEnableFlag is 0.
  • if the current boundary to be filtered is a vertical boundary: dbr_th = DbrVThreshold, dbr_offset0 = DbrVOffset0, dbr_offset1 = DbrVOffset1, alt_dbr_offset0 = DbrVAltOffset0, alt_dbr_offset1 = DbrVAltOffset1.
  • if the current boundary to be filtered is a horizontal boundary: dbr_th = DbrHThreshold, dbr_offset0 = DbrHOffset0, dbr_offset1 = DbrHOffset1, alt_dbr_offset0 = DbrHAltOffset0, alt_dbr_offset1 = DbrHAltOffset1.
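  • The per-boundary parameter selection above can be sketched as follows; bundling the derived picture-header variables into two dictionaries is only an illustrative convention, not part of the syntax.

```python
def select_boundary_params(is_vertical_boundary, v_params, h_params):
    # v_params / h_params hold the derived variables for the vertical and
    # horizontal directions, e.g. {'threshold': DbrVThreshold, ...}.
    p = v_params if is_vertical_boundary else h_params
    dbr_th          = p['threshold']     # DbrVThreshold   / DbrHThreshold
    dbr_offset0     = p['offset0']       # DbrVOffset0     / DbrHOffset0
    dbr_offset1     = p['offset1']       # DbrVOffset1     / DbrHOffset1
    alt_dbr_offset0 = p['alt_offset0']   # DbrVAltOffset0  / DbrHAltOffset0
    alt_dbr_offset1 = p['alt_offset1']   # DbrVAltOffset1  / DbrHAltOffset1
    return dbr_th, dbr_offset0, dbr_offset1, alt_dbr_offset0, alt_dbr_offset1
```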
  • P1 = (p2*4 + p1*5 + p0*4 + q0*3 + 8) >> 4;
  • P2 = (p3*2 + p2*2 + p1*2 + p0*1 + q0*1 + 4) >> 3;
  • P0, P1, P2 and Q0, Q1, Q2 are all filtered values (ie, filtered pixel values).
  • pi can represent the original pixel value
  • qi can represent the original pixel value
  • Pi can represent the filtered pixel value
  • Qi can represent the filtered pixel value
  • Pi' can represent the adjusted pixel value
  • Qi' can represent the adjusted pixel value
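  • The two filter expressions given above can be written directly as integer arithmetic; the symmetric Q-side expressions shown here are an assumption (obtained by swapping the p and q samples) and are included only for illustration.

```python
def luma_filter_p_side(p3, p2, p1, p0, q0):
    P1 = (p2 * 4 + p1 * 5 + p0 * 4 + q0 * 3 + 8) >> 4
    P2 = (p3 * 2 + p2 * 2 + p1 * 2 + p0 * 1 + q0 * 1 + 4) >> 3
    return P1, P2

def luma_filter_q_side(q3, q2, q1, q0, p0):
    # assumed mirror of the P-side formulas
    Q1 = (q2 * 4 + q1 * 5 + q0 * 4 + p0 * 3 + 8) >> 4
    Q2 = (q3 * 2 + q2 * 2 + q1 * 2 + q0 * 1 + p0 * 1 + 4) >> 3
    return Q1, Q2
```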
  • P1 = ((p2 << 1) + p2 + (p1 << 3) + (p0 << 2) + q0 + 8) >> 4;
  • P0, P1 and Q0, Q1 are all filtered values (ie, filtered pixel values).
  • pi can represent the original pixel value
  • qi can represent the original pixel value
  • Pi can represent the filtered pixel value
  • Qi can represent the filtered pixel value
  • Pi' can represent the adjusted pixel value
  • Qi' can represent the adjusted pixel value
  • P0 = ((p1 << 1) + p1 + (p0 << 3) + (p0 << 1) + (q0 << 1) + q0 + 8) >> 4;
  • Both P0 and Q0 are filtered values (ie, filtered pixel values).
  • pi can represent the original pixel value
  • qi can represent the original pixel value
  • Pi can represent the filtered pixel value
  • Qi can represent the filtered pixel value
  • Pi' can represent the adjusted pixel value
  • Qi' can represent the adjusted pixel value
  • Both P0 and Q0 are filtered values (ie, filtered pixel values).
  • pi can represent the original pixel value
  • qi can represent the original pixel value
  • Pi can represent the filtered pixel value
  • Qi can represent the filtered pixel value
  • Pi' can represent the adjusted pixel value
  • Qi' can represent the adjusted pixel value
  • Mode 1 of the boundary filtering process when the BS of the luminance component is equal to 0 (using the enhancement adjustment mode for processing):
  • PhAltDbrEnableFlag 1
  • the above i may be 0, or may be 0, 1, 2, etc., which is not limited.
  • pi can represent the original pixel value
  • qi can represent the original pixel value
  • DPi can represent the gradient value
  • DQi can represent the gradient value
  • Pi can represent the adjusted pixel value
  • Qi can represent the adjusted pixel value
  • PhAltDbrEnableFlag 1
  • the above-mentioned 2*dbr_th and -2*dbr_th may be the adjustment thresholds in the above-mentioned embodiment.
  • the above i may be 0, or may be 0, 1, 2, etc., which is not limited.
  • pi can represent the original pixel value
  • qi can represent the original pixel value
  • DPi can represent the gradient value
  • DQi can represent the gradient value
  • Pi can represent the adjusted pixel value
  • Qi can represent the adjusted pixel value
  • Mode 3 of the boundary filtering process when the BS of the luminance component is equal to 0 (the enhancement adjustment mode is used for processing):
  • PhAltDbrEnableFlag 1
  • PhAltDbrEnableFlag 1
  • the above i may be 0, or may be 0, 1, 2, etc., which is not limited.
  • pi can represent the original pixel value
  • qi can represent the original pixel value
  • DPi can represent the gradient value
  • DQi can represent the gradient value
  • Pi can represent the adjusted pixel value
  • Qi can represent the adjusted pixel value
  • clip(x) means to limit x to be within [0, 2^(bit_depth) - 1] (the interval includes 0 and 2^(bit_depth) - 1).
  • bit_depth represents the bit depth of the image, generally 8, 10, 12, etc.
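  • The clip operation just described amounts to the following:

```python
def clip(x, bit_depth):
    # limit x to [0, 2^bit_depth - 1], inclusive
    return max(0, min(x, (1 << bit_depth) - 1))

# e.g. clip(300, 8) == 255 and clip(-4, 10) == 0
```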
  • Mode 4 of the boundary filtering process when the BS of the luminance component is equal to 0 (the enhancement adjustment mode is used for processing):
  • PhAltDbrEnableFlag 1
  • the above i may be 0, or may be 0, 1, 2, etc., which is not limited.
  • pi can represent the original pixel value
  • qi can represent the original pixel value
  • DPi can represent the gradient value
  • DQi can represent the gradient value
  • Pi can represent the adjusted pixel value
  • Qi can represent the adjusted pixel value
  • PhAltDbrEnableFlag 1
  • the above i may be 0, or may be 0, 1, 2, etc., which is not limited.
  • pi can represent the original pixel value
  • qi can represent the original pixel value
  • DPi can represent the gradient value
  • DQi can represent the gradient value
  • Pi can represent the adjusted pixel value
  • Qi can represent the adjusted pixel value
  • Embodiment 17: For Embodiment 11 and Embodiment 12, the SPS-level high-level syntax can be replaced with PPS-level high-level syntax, or picture-header-level high-level syntax, or frame-level high-level syntax, or slice-header-level high-level syntax, or CTU-level high-level syntax, or CU-level high-level syntax. The type of the high-level syntax is not limited; that is, dbr_enable_flag or adbr_enable_flag can be transmitted through various types of high-level syntax.
  • for Embodiment 13 and Embodiment 14, the picture-header-level high-level syntax can be replaced with SPS-level high-level syntax, or PPS-level high-level syntax, or frame-level high-level syntax, or slice-header-level high-level syntax, or CTU-level high-level syntax, or CU-level high-level syntax. The type of this high-level syntax is not limited; that is, the contents of Table 1 or Table 2 can be transmitted through various types of high-level syntax, that is, parameters such as the enhanced adjustment mode enable flag, the enhanced filtering mode enable flag, the first adjustment threshold, the first filtering threshold, the first filtering offset value, the second filtering offset value, the first adjustment offset value and the second adjustment offset value. The specific implementation manner is similar to that of Embodiment 13 and Embodiment 14 and is not repeated here.
  • the high-level syntax of the picture header can be replaced by the CTU-level high-level syntax, and the relevant parameters of the DBR are transmitted through the CTU-level high-level syntax; the relevant parameters of the DBR can include parameters such as the first adjustment threshold, the first filtering threshold, the first filtering offset value, the second filtering offset value, the first adjustment offset value and the second adjustment offset value, see Embodiment 13 and Embodiment 14.
  • the high-level syntax of the picture header can also be replaced by the CU-level high-level syntax, and the relevant parameters of the DBR are transmitted through the CU-level high-level syntax; the relevant parameters of the DBR can include parameters such as the first adjustment threshold, the first filtering threshold, the first filtering offset value, the second filtering offset value, the first adjustment offset value and the second adjustment offset value, see Embodiment 13 and Embodiment 14.
  • Embodiment 18 For Embodiment 16, it is a deblocking filtering process for the luminance component, and the luminance component can also be replaced with a chrominance component, that is, deblocking filtering is performed for the chrominance component (that is, the current block is a chrominance component).
  • the deblocking filtering process of the chrominance component is similar to the deblocking filtering process of the luminance component, see Embodiment 16, and details are not repeated here.
  • Embodiment 1 to Embodiment 18 may be implemented independently, or may be combined arbitrarily.
  • Embodiment 1 and Embodiment 2 may be combined
  • Embodiment 1 and Embodiment 3 may be combined
  • Embodiment 1 and Embodiment 4 may be combined.
  • Embodiment 1 and Embodiment 5 may be combined; Embodiment 1 may be combined with at least one of Embodiment 8 to Embodiment 18; at least two of Embodiment 8 to Embodiment 18 may be combined arbitrarily; Embodiment 2 may be combined with at least one of Embodiment 8 to Embodiment 18; Embodiment 3 may be combined with at least one of Embodiment 8 to Embodiment 18; Embodiment 4 may be combined with at least one of Embodiment 8 to Embodiment 18; Embodiment 5 may be combined with at least one of Embodiment 8 to Embodiment 18; Embodiment 6 may be combined with at least one of Embodiment 8 to Embodiment 18; Embodiment 7 may be combined with at least one of Embodiment 8 to Embodiment 18.
  • the above are only examples of several combinations, and any at least two embodiments between Embodiment 1 to Embodiment 18 can be combined to implement related processes.
  • the content of the encoding end can also be applied to the decoding end, that is, the decoding end can process in the same way, and the content of the decoding end can also be applied to the encoding end, that is, the encoding end can process in the same way.
  • Embodiment 19: Based on the same application concept as the above method, an embodiment of the present application also proposes a decoding device. The decoding device is applied to a decoding end and includes: a memory configured to store video data; and a decoder configured to implement the encoding and decoding methods in the above-mentioned Embodiment 1 to Embodiment 18, that is, the processing flow of the decoding end.
  • a decoder configured to:
  • the gradient value of the current pixel is determined based on the original pixel value of the current pixel and the original pixel values of the surrounding pixels of the current pixel; the adjusted pixel value of the current pixel is determined based on the gradient value of the current pixel and the original pixel value of the current pixel.
  • an encoding device is also proposed in the embodiments of the present application.
  • the encoding device is applied to an encoding end, and the encoding device includes: a memory configured to store video data; and an encoder configured to implement the encoding and decoding methods in the above-mentioned Embodiment 1 to Embodiment 18, that is, the processing flow of the encoding end.
  • an encoder configured to:
  • the gradient value of the current pixel is determined based on the original pixel value of the current pixel and the original pixel values of the surrounding pixels of the current pixel; the adjusted pixel value of the current pixel is determined based on the gradient value of the current pixel and the original pixel value of the current pixel.
  • for the decoding end device (which may also be referred to as a video decoder) provided by the embodiment of the present application, a schematic diagram of its hardware architecture can be seen in FIG. 5A. It includes a processor 511 and a machine-readable storage medium 512, wherein the machine-readable storage medium 512 stores machine-executable instructions that can be executed by the processor 511, and the processor 511 is configured to execute the machine-executable instructions to implement the methods disclosed in the foregoing Embodiments 1-18 of the present application. For example, the processor 511 is configured to execute machine-executable instructions to implement the following steps:
  • the gradient value of the current pixel is determined based on the original pixel value of the current pixel and the original pixel values of the surrounding pixels of the current pixel; the adjusted pixel value of the current pixel is determined based on the gradient value of the current pixel and the original pixel value of the current pixel.
  • for the encoding end device (which may also be referred to as a video encoder) provided by the embodiment of the present application, a schematic diagram of its hardware architecture can be seen in FIG. 5B. It includes a processor 521 and a machine-readable storage medium 522, wherein the machine-readable storage medium 522 stores machine-executable instructions that can be executed by the processor 521, and the processor 521 is configured to execute the machine-executable instructions to implement the methods disclosed in the foregoing Embodiments 1-18 of the present application. For example, the processor 521 is configured to execute machine-executable instructions to implement the following steps:
  • the gradient value of the current pixel is determined based on the original pixel value of the current pixel and the original pixel values of the surrounding pixels of the current pixel; the adjusted pixel value of the current pixel is determined based on the gradient value of the current pixel and the original pixel value of the current pixel.
  • an embodiment of the present application further provides a machine-readable storage medium, where several computer instructions are stored on the machine-readable storage medium, and when the computer instructions are executed by a processor, the methods disclosed in the above examples of the present application, for example, the encoding and decoding methods in the above-mentioned embodiments, can be implemented.
  • the above-mentioned machine-readable storage medium may be any electronic, magnetic, optical or other physical storage device, which may contain or store information, such as executable instructions, data, and the like.
  • the machine-readable storage medium can be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (such as a hard disk drive), a solid state drive, any type of storage disk (such as a compact disc or DVD), or a similar storage medium, or a combination thereof.
  • an embodiment of the present application further provides a computer application program, which, when executed by a processor, can implement the encoding and decoding methods disclosed in the above examples of the present application.
  • the embodiment of the present application also provides an encoding and decoding device, which can be applied to the encoding end or the decoding end.
  • the encoding and decoding device may include:
  • a determination module configured to determine, if the current pixel in the current block satisfies the enabling condition of the enhanced adjustment mode, the gradient value of the current pixel based on the original pixel value of the current pixel and the original pixel values of the surrounding pixels of the current pixel;
  • a processing module configured to determine the adjusted pixel value of the current pixel based on the gradient value of the current pixel and the original pixel value of the current pixel.
  • when the processing module determines the adjusted pixel value of the current pixel based on the gradient value of the current pixel and the original pixel value of the current pixel, it is specifically configured to: determine the adjusted pixel value of the current pixel based on the gradient value of the current pixel, the original pixel value of the current pixel, the first adjustment threshold, the second adjustment threshold, the first adjustment offset value and the second adjustment offset value.
  • when the processing module determines the adjusted pixel value of the current pixel based on the gradient value of the current pixel, the original pixel value of the current pixel, the first adjustment threshold, the second adjustment threshold, the first adjustment offset value and the second adjustment offset value, it is specifically configured to: if the gradient value of the current pixel is greater than the first adjustment threshold, determine the adjusted pixel value of the current pixel based on the original pixel value of the current pixel and the first adjustment offset value;
  • if the gradient value of the current pixel is smaller than the second adjustment threshold, determine the adjusted pixel value of the current pixel based on the original pixel value of the current pixel and the second adjustment offset value.
  • the determination module is further configured to determine, from the adjacent blocks of the current block, the reference pixel corresponding to the current pixel, and to determine the gradient value of the reference pixel based on the original pixel value of the reference pixel and the original pixel values of the surrounding pixels of the reference pixel; the gradient value of the reference pixel and the original pixel value of the reference pixel are used to determine the adjusted pixel value of the reference pixel.
  • when the processing module determines the adjusted pixel value of the reference pixel based on the gradient value of the reference pixel and the original pixel value of the reference pixel, it is specifically configured to: determine the adjusted pixel value of the reference pixel based on the gradient value of the reference pixel, the original pixel value of the reference pixel, the third adjustment threshold, the fourth adjustment threshold, the third adjustment offset value and the fourth adjustment offset value.
  • when the processing module determines the adjusted pixel value of the reference pixel based on the gradient value of the reference pixel, the original pixel value of the reference pixel, the third adjustment threshold, the fourth adjustment threshold, the third adjustment offset value and the fourth adjustment offset value, it is specifically configured to: if the gradient value of the reference pixel is greater than the third adjustment threshold, determine the adjusted pixel value of the reference pixel based on the original pixel value of the reference pixel and the third adjustment offset value;
  • if the gradient value of the reference pixel is smaller than the fourth adjustment threshold, determine the adjusted pixel value of the reference pixel based on the original pixel value of the reference pixel and the fourth adjustment offset value.
  • when the determination module determines that the current pixel in the current block satisfies the enabling condition of the enhanced adjustment mode, it is specifically configured to: if the boundary strength of the boundary to be filtered corresponding to the current pixel in the current block satisfies the enabling condition of the enhanced adjustment mode, determine that the current pixel satisfies the enabling condition of the enhanced adjustment mode; or, if the feature information corresponding to the current block satisfies the enabling condition of the enhanced adjustment mode, determine that the current pixel in the current block satisfies the enabling condition of the enhanced adjustment mode.
  • the processing module is further configured to: if the current pixel in the current block satisfies the enabling condition of the common filtering mode, perform deblocking filtering on the original pixel value of the current pixel to obtain the filtered pixel value of the current pixel; if the current pixel in the current block satisfies the enabling condition of the enhanced filtering mode, determine the adjusted pixel value of the current pixel based on the filtered pixel value of the current pixel and the original pixel value of the current pixel.
  • when the processing module determines the adjusted pixel value of the current pixel based on the filtered pixel value of the current pixel and the original pixel value of the current pixel, it is specifically configured to: determine the adjusted pixel value of the current pixel based on the filtered pixel value of the current pixel, the original pixel value of the current pixel, the first filtering threshold, the second filtering threshold, the first filtering offset value and the second filtering offset value;
  • the first filtering threshold and the second filtering threshold are opposite numbers to each other.
  • the processing module is further configured to: determine a reference pixel corresponding to the current pixel from the adjacent blocks of the current block; perform deblocking filtering on the original pixel value of the reference pixel to obtain the filtered pixel value of the reference pixel; and determine the adjusted pixel value of the reference pixel based on the filtered pixel value of the reference pixel and the original pixel value of the reference pixel.
  • when the processing module determines the adjusted pixel value of the reference pixel based on the filtered pixel value of the reference pixel and the original pixel value of the reference pixel, it is specifically configured to: determine the adjusted pixel value of the reference pixel based on the filtered pixel value of the reference pixel, the original pixel value of the reference pixel, a third filtering threshold, a fourth filtering threshold, a third filtering offset value and a fourth filtering offset value;
  • the third filtering threshold and the fourth filtering threshold are opposite numbers to each other.
  • a typical implementing device is a computer, which may be in the form of a personal computer, laptop computer, cellular phone, camera phone, smart phone, personal digital assistant, media player, navigation device, email sending and receiving device, game console, tablet, wearable device, or a combination of any of these devices.
  • for convenience of description, the functions are divided into various units and described separately; the functions of each unit may be implemented in one or more pieces of software and/or hardware.
  • Embodiments of the present application may be provided as a method, a system, or a computer program product.
  • the application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
  • Embodiments of the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus, so that a series of operational steps are performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Abstract

本申请提供一种编解码方法、装置及其设备,该方法可以包括:若当前块中的当前像素点满足增强调整模式的启用条件,则基于所述当前像素点的原始像素值和所述当前像素点的周围像素点的原始像素值确定所述当前像素点的梯度值;基于所述当前像素点的梯度值和所述当前像素点的原始像素值,确定所述当前像素点的调整像素值。通过本申请的技术方案,能够提高编码性能。

Description

编解码方法、装置及其设备 技术领域
本申请涉及编解码技术领域,尤其是涉及一种编解码方法、装置及其设备。
背景技术
为了达到节约空间的目的,视频图像都是经过编码后才传输的,完整的视频编码方法可以包括预测、变换、量化、熵编码、滤波等过程。其中,预测编码可以包括帧内编码和帧间编码。进一步的,帧间编码是利用视频时间域的相关性,使用邻近已编码图像的像素预测当前像素,以达到有效去除视频时域冗余的目的。帧内编码是指利用视频空间域的相关性,使用当前帧图像的已经编码块的像素预测当前像素,以达到去除视频空间域冗余的目的。
常用的滤波技术包括DBF(DeBlocking Filter,去块滤波)技术、SAO(Sample Adaptive Offset,样本自适应补偿)技术和ALF(Adaptive Loop Filter,自适应环路滤波)技术。DBF技术用于去除分块编码产生的块边界效应。SAO技术通过基于样本的像素值和周围块的梯度值进行分类,对于每个类别的像素值加上不同的补偿值,使得重建图像更接近于原始图像。ALF技术通过维纳滤波器,对重建图像进行滤波,使得重建图像更接近于原始图像。
但是,DBF、SAO和ALF等滤波技术,均是基于当前像素点的像素值进行分类,或者,基于当前像素点的像素值和周围像素点的像素值的关系进行分类,然后,再基于不同类别进行不同滤波操作,其可能会出现过滤波现象,即,滤波后的像素值远大于或远小于滤波前的像素值,也远大于或远小于原始像素值,存在滤波效果不佳,编码性能比较差等问题。
发明内容
本申请提供一种编解码方法、装置及其设备,能够提高编码性能。
本申请提供一种编解码方法,所述方法包括:
若当前块中的当前像素点满足增强调整模式的启用条件,则基于所述当前像素点的原始像素值和所述当前像素点的周围像素点的原始像素值确定所述当前像素点的梯度值;基于所述当前像素点的梯度值和所述当前像素点的原始像素值,确定所述当前像素点的调整像素值。
本申请提供一种解码装置,所述解码装置包括:
存储器,其经配置以存储视频数据;
解码器,其经配置以实现:
若当前块中的当前像素点满足增强调整模式的启用条件,则基于所述当前像素点的原始像素值和所述当前像素点的周围像素点的原始像素值确定所述当前像素点的梯度值;基于所述当前像素点的梯度值和所述当前像素点的原始像素值,确定所述当前像素点的调整像素值。
本申请提供一种编码装置,所述编码装置包括:
存储器,其经配置以存储视频数据;
编码器,其经配置以实现:
若当前块中的当前像素点满足增强调整模式的启用条件,则基于所述当前像素点的原始像素值和所述当前像素点的周围像素点的原始像素值确定所述当前像素点的梯度值;基于所述当前像素点的梯度值和所述当前像素点的原始像素值,确定所述当前像素点的调整像素值。
本申请提供一种解码端设备,包括:处理器和机器可读存储介质,所述机器可读存储介质存储有能够被所述处理器执行的机器可执行指令;
所述处理器用于执行机器可执行指令,以实现如下步骤:
若当前块中的当前像素点满足增强调整模式的启用条件,则基于所述当前像素点的原始像素值和所述当前像素点的周围像素点的原始像素值确定所述当前像素点的梯度值;基于所述当前像素点的梯度值和所述当前像素点的原始像素值,确定所述当前像素点的调整像素值。
本申请提供一种编码端设备,包括:处理器和机器可读存储介质,所述机器可读存储介质存储有能够被所述处理器执行的机器可执行指令;
所述处理器用于执行机器可执行指令,以实现如下步骤:
若当前块中的当前像素点满足增强调整模式的启用条件,则基于所述当前像素点的原始像素值和所述当前像素点的周围像素点的原始像素值确定所述当前像素点的梯度值;基于所述当前像素点的梯度值和所述当前像素点的原始像素值,确定所述当前像素点的调整像素值。
由以上技术方案可见,本申请实施例中,若当前块中的当前像素点满足增强调整模式的启用条件,则可以基于当前像素点的梯度值和当前像素点的原始像素值,确定当前像素点的调整像素值,也就是说,基于当前像素点的梯度值对当前像素点的原始像素值进行调整,使当前像素点的调整像素值更接近原始像素,从而提高编码性能。在滤波过程中,如DBF、SAO和ALF等,若当前块中的当前像素点满足增强调整模式的启用条件,在基于当前像素点的梯度值对当前像素点的原始像素 值进行调整后,可以提高滤波效果,提高编码性能。
附图说明
图1是本申请一种实施方式中的编解码框架的示意图;
图2A和图2B本申请一种实施方式中的块划分的示意图;
图3是本申请一种实施方式中的去块滤波的示意图;
图4是本申请一种实施方式中的编解码方法的流程图;
图5A是本申请一种实施方式中的解码端设备的硬件结构图;
图5B是本申请一种实施方式中的编码端设备的硬件结构图。
具体实施方式
在本申请实施例中使用的术语仅仅是出于描述特定实施例的目的,而非限制本申请。本申请实施例和权利要求书中所使用的单数形式的“一种”、“所述”和“该”也旨在包括多数形式,除非上下文清楚地表示其它含义。还应当理解,本文中使用的术语“和/或”是指包含一个或多个相关联的列出项目的任何或所有可能组合。还应当理解,尽管在本申请实施例可能采用术语第一、第二、第三等来描述各种信息,但是,这些信息不应限于这些术语。这些术语仅用来将同一类型的信息彼此区分开。例如,在不脱离本申请实施例范围的情况下,第一信息也可以被称为第二信息,类似地,第二信息也可以被称为第一信息。取决于语境,此外,所使用的词语“如果”可以被解释成为“在……时”,或“当……时”,或“响应于确定”。
为了使本领域技术人员更好地理解本申请的技术方案,下面对技术术语进行简单说明。
视频编码框架:参见图1所示,可以使用视频编码框架实现本申请实施例的编码端处理流程,视频解码框架的示意图可以与图1类似,在此不再重复赘述,可以使用视频解码框架实现本申请实施例的解码端处理流程。在视频编码框架和视频解码框架中,可以包括但不限于:预测(如帧内预测和帧间预测等)、运动估计/运动补偿、参考图像缓冲器、环内滤波、重建、变换、量化、反变换、反量化、熵编码器等模块。在编码端,通过这些模块之间的配合,实现编码端的处理流程,在解码端,通过这些模块之间的配合,实现解码端的处理流程。
环路滤波用于减少图像块效应或图像效果不佳等问题,能够改善图像质量,可以包括DBF、SAO和ALF等三种滤波器,DBF为去块滤波,用于去除分块编码产生的块边界效应。SAO为样本自适应补偿滤波,用于通过基于样本的像素值和周围块的梯度值进行分类,对于每个类别的像素值加上不同的补偿值,使得重建图像更接近于原始图像。ALF为自适应环路滤波,即通过维纳滤波器,对重建图像进行滤波,使得重建图像更接近于原始图像。
示例性的,在视频编解码过程中,预测过程可以包括帧内预测和帧间预测。帧内预测是考虑到相邻块之间存在很强的空间域相关性,利用周围已经重建的像素作为参考像素,对当前未编码块进行预测,只需要对残差值进行后续编码处理,而不是对原始值进行编码,从而有效去除空间域上的冗余,大大提高压缩效率。帧间预测是利用视频时间域的相关性,使用邻近已编码图像的像素预测当前图像的像素,达到去除视频时域冗余的目的。
示例性的,在视频编解码过程中,变换是指将以空间域中像素形式描述的图像转换至变换域的图像,并以变换系数的形式来表示。由于绝大多数图像都含有较多平坦区域和缓慢变化的区域,因此,适当的变换过程,可以使图像能量在空间域的分散分布,转换为在变换域的相对集中分布,从而能够去除信号之间的频域相关性,配合量化过程,可以有效压缩码流。
示例性的,熵编码是指按照信息熵的原理进行无损编码的方式,处于视频压缩的最后一个处理模块,将一系列用来表示视频序列的元素符号,转变为一个用来传输或存储的二进制码流,输入的符号可能包括量化后的变换系数,运动矢量信息,预测模式信息,变换量化相关语法等,熵编码模块的输出数据即原始视频压缩后的最终码流。熵编码可以有效地去除这些视频元素符号的统计冗余,是保证视频编码压缩效率的重要工具之一。
示例性的,环路滤波用于减少图像的块效应或者图像效果不佳等问题,用来改善图像质量,可以包括但不限于DBF、SAO和ALF等。例如,在视频图像中,图像块的边界并不连续,压缩重建图像具有明显的块效应,严重影响图像质量,可以采用DBF技术对边界进行去块滤波。针对所有预测单元(Prediction Unit,PU)和变换单元(Transform Unit,TU)的边界进行去块滤波,去块滤波包括滤波决策和滤波操作,在滤波决策过程中,获取边界强度(如不滤波、弱滤波或强滤波)和滤波参数。在滤波操作过程中,根据边界强度和滤波参数对像素进行修正,如对边界进行滤波时,可以是强滤波或弱滤波,采用不同长度的抽头进行滤波。
SAO滤波:用于消除振铃效应。振铃效应是由于高频交流系数的量化失真,解码后会在边缘周围产生波纹的现象,变换块尺寸越大振铃效应越明显。SAO的基本原理是对重构曲线中的波峰像素加上负值进行补偿,波谷像素加上正值进行补偿。SAO是以CTU(Coding Tree Unit,编码树单元) 为基本单位,可以包括两大类补偿形式:边界补偿(Edge Offset,简称EO)和边带补偿(Band Offset,简称BO),此外还引入了参数融合技术。
ALF滤波:可以根据原始信号和失真信号计算得到均方意义下的最优滤波器,即维纳滤波器。ALF的滤波器可以包括但不限于:7*7的菱形滤波器或5*5的菱形滤波器、7*7十字形加3*3方形的中心对称滤波器,或7*7十字形加5*5方形的中心对称滤波器。
帧内预测:利用视频空间域的相关性,使用当前块的已编码块进行预测,以达到去除视频空间域冗余的目的。帧内预测规定了多种预测模式,每种预测模式对应一种纹理方向(DC模式除外),例如,若图像纹理呈现水平状排布,则水平预测模式可以更好的预测图像信息。
帧间预测:基于视频时域的相关性,由于视频序列包含有较强的时域相关性,使用邻近已编码图像像素预测当前图像的像素,可以达到有效去除视频时域冗余的目的。视频编码标准帧间预测部分都采用了基于块的运动补偿技术,主要原理是为当前图像的每一个像素块在之前已编码图像中寻找一个最佳匹配块,该过程称为运动估计(Motion Estimation,ME)。
运动矢量(Motion Vector,MV):在帧间预测中,可以使用运动矢量表示当前帧图像的当前块与参考帧图像的参考块之间的相对位移。每个划分的块都有相应的运动矢量传送到解码端,如果对每个块的运动矢量进行独立编码和传输,特别是小尺寸的大量块,则消耗很多比特。为降低用于编码运动矢量的比特数,可以利用相邻块之间的空间相关性,根据相邻已编码块的运动矢量对当前待编码块的运动矢量进行预测,然后对预测差进行编码,这样可以有效降低表示运动矢量的比特数。在对当前块的运动矢量进行编码时,可以先使用相邻已编码块的运动矢量预测当前块的运动矢量,然后对该运动矢量的预测值(MVP,Motion Vector Prediction)与运动矢量的真正估值之间的差值(MVD,Motion Vector Difference)进行编码。
运动信息(Motion Information):由于运动矢量表示当前块与某个参考块之间的位置偏移,为了准确的获取指向块的信息,除了运动矢量,还需要参考帧图像的索引信息来表示当前块使用哪个参考帧图像。在视频编码技术中,对于当前帧图像,通常可以建立一个参考帧图像列表,参考帧图像索引信息则表示当前块采用了参考帧图像列表中的第几个参考帧图像。此外,很多编码技术还支持多个参考图像列表,因此,还可以使用一个索引值来表示使用了哪一个参考图像列表,这个索引值可以称为参考方向。综上所述,在视频编码技术中,可以将运动矢量、参考帧索引、参考方向等与运动相关的信息统称为运动信息。
标志位编码(flag coding):在视频编码中,存在很多模式。对于某个块来说,可能采用其中一种模式。为了表示采用何种模式,每个块需要通过编码对应的标志位来进行标记。比如说,针对编码端来说,通过编码端决策,确定该标志位的值,然后对标志位的值进行编码传递到解码端。针对解码端来说,通过解析标志位的值,确定对应模式是否启用。
在标志位的编码过程中,可以通过高层语法实现标志位的编码,高层语法可以用于表示是否允许启用某种模式,即通过高层语法允许启用某种模式,或者禁止启用某种模式。
示例性的,高层语法可以是序列参数集级的高层语法,或者图像参数集级的高层语法,或者片头级的高层语法,或者图像头级的高层语法,对此高层语法不做限制。
针对序列参数集(SPS,sequence parameter set)的高层语法,存在确定整个视频序列(即多帧视频图像)中是否允许某些模式(工具/方法)开关的标志位。例如,若标志位为取值A(如数值1等),则视频序列可以允许启用该标志位对应的模式;或者,若标志位为取值B(如数值0等),则视频序列可以不允许启用该标志位对应的模式。
针对图像参数集(PPS,picture parameter set)的高层语法,存在确定某图片(如视频图像)中是否允许某些模式(工具/方法)开关的标志位。若标志位为取值A,则视频图像允许启用该标志位对应的模式;若标志位为取值B,则视频图像不允许启用该标志位对应的模式。
针对图像头(picture header)的高层语法,存在某图像头中是否允许某些模式(工具/方法)开关的标志位。若标志位为取值A,则图像头允许启用该标志位对应的模式;若标志位为取值B,则图像头不允许启用该标志位对应的模式。示例性的,图像头保存的是仅针对当前图像的共同信息,例如,在图像包含多个slice时,多个slice可以通用图像头中的信息。
针对片头(Slice header)的高层语法,存在某个slice中是否允许某些模式(工具/方法)开关的标志位。若标志位为取值A,则slice允许启用该标志位对应的模式;若标志位为取值B,则slice不允许启用该标志位对应的模式。示例性的,一帧图像可以包含1个slice或多个slice,针对片头(Slice header)的高层语法,是针对每个slice配置的高层语法。
高层语法:用于表示是否允许启用某些工具(方法),即通过高层语法允许启用某些工具(方法),或者禁止启用某些工具(方法)。示例性的,参见上述介绍,高层语法可以是序列参数集级的高层语法,或者图像参数集级的高层语法,或者片头级的高层语法,或者图像头级的高层语法, 对此高层语法不做限制,只要能够实现上述功能即可。
率失真原则(Rate-Distortion Optimized):评价编码效率的有两大指标:码率和PSNR(Peak Signal to Noise Ratio,峰值信噪比),比特流越小,则压缩率越大,PSNR越大,则重建图像质量越好,在模式选择时,判别公式实质上也就是对二者的综合评价。例如,模式对应的代价:J(mode)=D+λ*R,其中,D表示Distortion(失真),通常可以使用SSE指标来进行衡量,SSE是指重建图像块与源图像的差值的均方和;λ是拉格朗日乘子,R就是该模式下图像块编码所需的实际比特数,包括编码模式信息、运动信息、残差等所需的比特总和。在模式选择时,若使用RDO原则去对编码模式做比较决策,通常可以保证编码性能最佳。
块划分技术:一个编码树单元(Coding Tree Unit,简称CTU)使用四叉树递归划分成CU(Coding Unit,编码单元)。在叶子节点CU级确定是否使用帧内编码或者帧间编码。CU可以划分成两个或四个预测单元(Prediction Unit,简称PU),同一个PU内使用相同的预测信息。在预测完成后得到残差信息后,一个CU可四叉划分成多个变换单元(Transform Units,简称TU)。例如,本申请中的当前图像块即为一个PU。还可以对块划分技术进行变化,比如说,使用一种混合了二叉树/三叉树/四叉树的划分结构取代原先划分模式,取消CU,PU,TU的概念的区分,支持CU的更灵活的划分方式。CU可以是正方形也可以是矩形划分。CTU首先进行四叉树的划分,然后四叉树划分的叶子节点可以进行二叉树和三叉树的划分。参见图2A所示,CU共有五种划分类型,分别为四叉树划分,水平二叉树划分,垂直二叉树划分,水平三叉树划分和垂直三叉树划分,参见图2B所示,CTU内的CU划分可以是上述五种划分类型的任意组合由上可知不同的划分方式,使得各个PU的形状有所不同,如不同尺寸的矩形,正方形。
DBF滤波(即去块滤波)方法:DBF滤波处理包括两个过程:滤波决策和滤波操作。
滤波决策包括:1)获取边界强度(BS值);2)滤波开关决策;3)滤波强弱选择。对于色度分量,仅存在步骤1),且直接复用亮度分量的BS值。对于色度分量,只有BS值为2时(即当前块两侧的块至少有一个采用intra(帧内)模式),才进行滤波操作。
滤波操作包括:1)对于亮度分量的强滤波和弱滤波;2)对于色度分类的滤波。
示例性的,DBF滤波处理一般以8*8为单位进行水平边界滤波(也可以称为水平DBF滤波)和垂直边界滤波(也可以称为垂直DBF滤波),且最多对边界两侧的3个像素点进行滤波,且最多利用到边界两侧的4个像素点进行滤波,因此,不同块的水平DBF滤波和垂直DBF滤波互不影响,也就是说,水平DBF滤波和垂直DBF滤波可以并行进行。
如图3所示,对于当前块(以8*8为例)来说,可以先进行当前块左侧3列像素点以及左边块(即当前块的左边块)右侧3列像素点的垂直DBF滤波,再进行当前块上侧3行像素点以及上边块(即当前块的上边块)下侧3行像素点的水平DBF滤波。
示例性的,对于需要分别进行垂直DBF滤波和水平DBF滤波的像素点来说,通常先进行垂直DBF滤波,后进行水平DBF滤波。当然,也可以先进行水平DBF滤波,后进行垂直DBF滤波。在后续实施例中,以先进行垂直DBF滤波,后进行水平DBF滤波为例。
在一种可能的实施方式中,关于DBF滤波的处理流程,可以包括以下步骤:
步骤S11、以4*4为单位分别计算水平方向和垂直方向的edge condition(边缘条件)值。
对于CU边界,且为8*8边界,则edge condition值为2(用于表示对亮度分量和色度分量均进行滤波处理)。对于PU(Prediction Unit,预测单元)边界(如2N*hN的内部1/4、1/2、3/4水平线),且为8*8边界,则edge condition值为1(用于表示对亮度分量进行滤波处理,但是不对色度分量进行滤波处理)。对于上述两种情况之外的其它情况,则edge condition值为0。
步骤S12、以4*4为单位(滤波处理以8*8单位,只是以4*4为单位存储edge condition值等信息),完成所有块的垂直滤波。当edge condition值不为0时,进行如下滤波处理过程:
1、亮度分量滤波(垂直滤波则处理垂直边界的4行,水平滤波则处理水平边界的4列):
1.1、先判断是否跳过滤波过程。示例性的,若边界两侧的块为非帧内模式块、无残差、且运动一致时,才会跳过滤波过程,否则,均需要进行滤波过程。
1.2、若不跳过滤波过程,则进行如下处理:
1.2.1、若当前帧的滤波类型(df_type)为类型1,且ABS(R0-L0)>=4*Alpha,则FS为0;否则进行步骤1.2.2,确定FS。Alpha为预设数值,ABS()为取绝对值运算。
1.2.2、计算FL(Flatness Left,左侧平滑度,可选值为0、2、3)和FR(Flatness Right,右侧平滑度,可选值为0、2、3),FL和FR用于判断两侧内部的平滑程度。然后,基于FL和FR确定FS。比如说,可以采用如下公式确定FS:FS=FL+FR。
1.2.2.1、若ABS(L1-L0)<Beta,且ABS(L2-L0)<Beta,则FL为3;若ABS(L1-L0)<Beta,且ABS(L2-L0)>=Beta,则FL为2;若ABS(L1-L0)>=Beta,且ABS(L2-L0)<Beta, 则FL为1;否则FL为0。Beta为预设数值,ABS()为取绝对值运算。
1.2.2.2、FR的计算方式与FL的计算方式类似,在此不再重复赘述。
1.2.3、基于FS确定BS值(FS的可选值为0、2、3、4、5、6,BS的可选值为0、1、2、3、4)。比如说,在得到FS之后,可以基于FS的取值确定BS值。
1.2.3.1、若FS小于等于2(最多有一边中等平滑),则BS=0。
1.2.3.2、若FS为3(有且只有一边高度平滑),则BS=(ABS(L1-R1)<Beta)?1:0,也就是说,若ABS(L1-R1)<Beta成立,则BS=1;否则,BS=0。
1.2.3.3、若FS为4(即两边均中等平滑),则BS=(FL==2)?2:1,也就是说,若FL=2,则BS=2;否则,即FL不等于2,则BS=1。
1.2.3.4、若FS为5(即一边中等平滑,另一边高度平滑),则:
若当前帧的滤波类型(df_type)为类型1,则BS=(R1==R0&&L0==L1&&ABS(R2–L2)<Alpha)?3:2;也就是说,若像素点R1的像素值等于像素点R0的像素值,且像素点L0的像素值等于像素点L1的像素值,且像素点R2的像素值与像素点L2的像素值之间的差值的绝对值小于Alpha(即预先配置的数值),则BS=3;否则,BS=2。
否则(即当前帧的滤波类型(df_type)不为类型1),若当前帧的滤波类型为类型0,则BS=(R1==R0&&L0==L1)?3:2;也就是说,若像素点R1的像素值等于像素点R0的像素值,且像素点L0的像素值等于像素点L1的像素值,则BS=3;否则,BS=2。
1.2.3.5、若FS为6(即两边均高度平滑),则:
若当前帧的滤波类型(df_type)为类型1,则BS=(ABS(R0-R1)<=Beta/4&&ABS(L0-L1)<=Beta/4&&ABS(R0-L0)<Alpha)&&ABS(R0-R3)<=Beta/2&&ABS(L0-L3)<=Beta/2?4:3;也就是说,若ABS(R0-R1)<=Beta/4、ABS(L0-L1)<=Beta/4、ABS(R0-L0)<Alpha、ABS(R0-R3)<=Beta/2、ABS(L0-L3)<=Beta/2均成立,则BS=4;否则,BS=3。
否则(即当前帧的滤波类型(df_type)不为类型1),若当前帧的滤波类型为类型0,则BS=(ABS(R0-R1)<=Beta/4&&ABS(L0-L1)<=Beta/4&&ABS(R0-L0)<Alpha)?4:3;也就是说,若ABS(R0-R1)<=Beta/4、ABS(L0-L1)<=Beta/4、ABS(R0-L0)<Alpha均成立,则BS=4,否则,BS=3。
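示例性的,为便于理解上述1.2.2~1.2.3中FL、FR、FS与BS值的推导过程,下面给出一段示意性的C代码草稿(以当前帧的滤波类型为类型0为例)。该代码仅用于帮助理解本文描述的判断逻辑,并非标准文本或实际实现,其中的函数名与参数形式均为本示例的假设。

```c
#include <stdlib.h>  /* abs() */

/* 对应1.2.2.1/1.2.2.2:由两个绝对差值与Beta的关系得到一侧的平滑度(0、1、2或3) */
static int flatness(int a1, int a0, int a2, int beta)
{
    int c1 = abs(a1 - a0) < beta;
    int c2 = abs(a2 - a0) < beta;
    if (c1 && c2)  return 3;
    if (c1)        return 2;   /* |a1-a0|<Beta 且 |a2-a0|>=Beta */
    if (c2)        return 1;   /* |a1-a0|>=Beta 且 |a2-a0|<Beta */
    return 0;
}

/* 对应1.2.2~1.2.3(类型0):L[0..2]、R[0..2]为边界左/右(或上/下)的像素值 */
static int derive_bs_type0(const int L[3], const int R[3], int alpha, int beta)
{
    int FL = flatness(L[1], L[0], L[2], beta);
    int FR = flatness(R[1], R[0], R[2], beta);
    int FS = FL + FR;                                             /* FS = FL + FR */

    if (FS <= 2) return 0;                                        /* 1.2.3.1 */
    if (FS == 3) return abs(L[1] - R[1]) < beta ? 1 : 0;          /* 1.2.3.2 */
    if (FS == 4) return (FL == 2) ? 2 : 1;                        /* 1.2.3.3 */
    if (FS == 5) return (R[1] == R[0] && L[0] == L[1]) ? 3 : 2;   /* 1.2.3.4(类型0) */
    return (abs(R[0] - R[1]) <= beta / 4 &&                       /* 1.2.3.5(类型0) */
            abs(L[0] - L[1]) <= beta / 4 &&
            abs(R[0] - L[0]) <  alpha) ? 4 : 3;
}
```

类型1的情况只需按1.2.3.4和1.2.3.5中的描述补充相应分支即可,此处不再展开。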
1.2.4、基于BS值,确定滤波系数,以及滤波像素个数。比如说,假设边界左侧或上侧的4个像素点分别为L0-L3(如图3所示,图3中以左侧为例);边界右侧或下侧的4个像素点为R0-R3(如图3所示,图3中以右侧为例)。则对于亮度分量:
1.2.4.1、若BS=4,则对边界两侧的各3个像素进行滤波:
针对L0和R0来说,滤波系数为[3,8,10,8,3]/32,即:为了确定像素点L0滤波后的像素值,分别使用像素点L2、L1、L0、R0和R1的像素值进行加权求和,加权系数(即滤波系数)依次为3/32、8/32、10/32、8/32以及3/32。若wj为滤波系数,则j=-2(当前像素点左侧的第2个像素点,即L2)时,wj=3/32;j=-1(当前像素点左侧的第1个像素点,即L1)时,wj=8/32;j=0(当前像素点,即L0)时,wj=10/32;j=1(当前像素点右侧的第1个像素点,即R0)时,wj=8/32;j=2(当前像素点右侧的第2个像素点,R1)时,wj=8/32。为了确定像素点R0滤波后的像素值,分别使用像素点R2、R1、R0、L0和L1的像素值进行加权求和,加权系数依次为3/32、8/32、10/32、8/32以及3/32。若wj为滤波系数,则j=-2(当前像素点右侧的第2个像素点,即R2)时,wj=3/32;j=-1(当前像素点右侧的第1个像素点,即R1)时,wj=8/32;j=0(当前像素点,即R0)时,wj=10/32;j=1(当前像素点左侧的第1个像素点,即L0)时,wj=8/32;j=2(当前像素点左侧的第2个像素点,L1)时,wj=8/32。
综上所述,L0'=clip((L2*3+L1*8+L0*10+R0*8+R1*3+16)>>5),L0'为像素点L0滤波后的像素值,L0~L2为像素点L0~L2滤波前的像素值,R0~R1为像素点R0~R1的像素值,下同。R0'=clip((R2*3+R1*8+R0*10+L0*8+L1*3+16)>>5)。
在上述公式中,“>>”为右移位运算,用于替代除法,即“>>5”相当于除以2^5(即32)。乘法(“*”)可以通过左移位的方式来替代,如a乘以4可以通过左移2位替代,即通过a<<2替代;a乘以10,可以通过(a<<3)+(a<<1)替代。“<<”为左移位运算,用于替代乘法,即“a<<2”相当于乘以2^2(即4)。考虑到通过移位的方式实现除法运算时,对于运算结果通常直接取整,即当运算结果为N~N+1之间的非整数时,取结果为N,而考虑到当小数部分大于0.5时,取结果为N+1的准确性会更高,因此,为了提高所确定的像素值的准确性,进行计算时,可以为上述加权和的分子加上分母(即除数)的1/2,以达到四舍五入的效果。以上述L0'的计算为例,右移5位相当于除以2^5(即32),因此,可以为上述加权和的分子加上16。clip(x)为修剪操作,当x超出预设数值范围的上限时,将x的值设置为该预设数值范围的上限;当x低于预设数值范围的下限时,将x的值设置为该预设数值范围的下限。
针对L1和R1来说,滤波系数为[4,5,4,3]/16,在此基础上,L1'=clip((L2*4+L1*5+L0*4+R0*3+8)>>4),R1'=clip((R2*4+R1*5+R0*4+L0*3+8)>>4)。
针对L2和R2来说,滤波系数为[2,2,2,1,1]/8,则L2'=clip((L3*2+L2*2+L1*2+L0*1+R0*1+4)>>3),R2'=clip((R3*2+R2*2+R1*2+R0*1+L0*1+4)>>3)。
1.2.4.2、若BS=3,则对边界两侧各2个像素进行滤波:
针对L0和R0来说,滤波系数为[1,4,6,4,1]/16,L0'=clip(L2*1+L1*4+L0*6+R0*4+R1*1+8)>>4,R0'=clip(R2*1+R1*4+R0*6+L0*4+L1*1+8)>>4。
针对L1和R1来说,滤波系数为[3,8,4,1]/16,则L1'=clip((L2*3+L1*8+L0*4+R0*1+8)>>4),R1'=clip((R2*3+R1*8+R0*4+L0*1+8)>>4)。
1.2.4.3、若BS=2,则对边界两侧各1个像素进行滤波:
针对L0和R0来说,滤波系数为[3,10,3]/16,在此基础上,L0'=clip(L1*3+L0*10+R0*3+8)>>4,R0'=clip(R1*3+R0*10+L0*3+8)>>4。
1.2.4.4、若BS=1,则对边界两侧各1个像素进行滤波:针对L0和R0来说,滤波系数为[3,1]/4,L0'=clip(L0*3+R0*1+2)>>2,R0'=clip(R0*3+L0*1+2)>>2。
1.2.4.5、若BS=0,则不滤波,即不对边界两侧的像素进行滤波。
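示例性的,以1.2.4.1中BS=4的情况为例,下面给出一段示意性的C代码草稿,展示按上述滤波系数对边界两侧各3个像素进行加权求和、加上分母的1/2实现四舍五入并做clip的过程。该代码仅为帮助理解的示例草稿,并非实际实现,其中的函数名与参数均为本示例假设。

```c
/* 将x限制在[0, (1<<bit_depth)-1]内 */
static int clip_pix(int x, int bit_depth)
{
    int maxv = (1 << bit_depth) - 1;
    return x < 0 ? 0 : (x > maxv ? maxv : x);
}

/* BS==4时亮度分量的边界滤波:L[0..3]、R[0..3]为边界两侧滤波前的像素,
   滤波后的L0'~L2'、R0'~R2'写入Lf[0..2]、Rf[0..2] */
static void dbf_luma_bs4(const int L[4], const int R[4],
                         int Lf[3], int Rf[3], int bit_depth)
{
    Lf[0] = clip_pix((L[2]*3 + L[1]*8 + L[0]*10 + R[0]*8 + R[1]*3 + 16) >> 5, bit_depth);
    Rf[0] = clip_pix((R[2]*3 + R[1]*8 + R[0]*10 + L[0]*8 + L[1]*3 + 16) >> 5, bit_depth);
    Lf[1] = clip_pix((L[2]*4 + L[1]*5 + L[0]*4 + R[0]*3 + 8) >> 4, bit_depth);
    Rf[1] = clip_pix((R[2]*4 + R[1]*5 + R[0]*4 + L[0]*3 + 8) >> 4, bit_depth);
    Lf[2] = clip_pix((L[3]*2 + L[2]*2 + L[1]*2 + L[0] + R[0] + 4) >> 3, bit_depth);
    Rf[2] = clip_pix((R[3]*2 + R[2]*2 + R[1]*2 + R[0] + L[0] + 4) >> 3, bit_depth);
}
```

BS为3、2、1时的滤波只是滤波系数与滤波像素个数不同,实现方式与此类似。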
2、edge condition值为2时,为16*16块的边界进行色度滤波,也就是说,为16*16块的边界进行色度分量的滤波处理,该色度分量的滤波处理过程如下:
2.1、先判断是否需要进行滤波处理,过程与亮度分量类似,在此不再赘述。
2.2、若需要进行滤波处理(即不跳过滤波过程),则分别计算FL和FR,再基于FL和FR得到FS,基于FS获得BS值,该过程也和亮度分量类似,在此不再赘述。
2.3、获得的色度分量的BS值(如4、3、2、1、0等)减1,也就是说,BS的可选值可以为3、2、1、0。基于BS值进行色度分量的滤波处理,具体过程如下:
若BS=3,则对边界两侧的各2个像素进行滤波:针对L0和R0来说,滤波系数为[3,10,3]/16,L0'=clip(L1*3+L0*10+R0*3+8)>>4,R0'=clip(R1*3+R0*10+L0*3+8)>>4。针对L1和R1来说,滤波系数为[3,8,3,2]/16,L1'=clip((L2*3+L1*8+L0*3+R0*2+8)>>4),R1'=clip((R2*3+R1*8+R0*3+L0*2+8)>>4)。
若BS=2,或BS=1,则对边界两侧的各1个像素进行滤波:针对L0和R0来说,滤波系数为[3,10,3]/16,L0'=clip(L1*3+L0*10+R0*3+8)>>4,R0'=clip(R1*3+R0*10+L0*3+8)>>4。若BS=0,则不滤波,即不对边界两侧的像素进行滤波。
示例性的,上述过程的Alpha和Beta与边界两侧的块的QP均值相关,即当前块与当前块的左侧块(对于垂直DBF滤波)或当前块与当前块的上方块(对于水平DBF滤波)的QP均值相关,可通过查表获得Alpha和Beta的取值,对此不做限制。
步骤S13、以4*4为单位(滤波处理以8*8单位,只是以4*4为单位存储edge condition值等信息),完成所有块的水平滤波,实现方式与步骤S12类似,在此不再重复赘述。
在相关技术中,DBF、SAO和ALF等滤波技术,均是基于当前像素点的像素值进行分类,或者,基于当前像素点的像素值和周围像素点的像素值的关系进行分类,然后,再基于不同类别进行不同滤波操作,其可能会出现过滤波现象,即,滤波后的像素值远大于或远小于滤波前的像素值,也远大于或远小于原始像素值,存在滤波效果不佳,编码性能比较差等问题。
针对上述发现,本实施例提出一种编解码方法,可以基于当前像素点的梯度值对当前像素点的原始像素值进行调整,使当前像素点的调整像素值更接近原始像素,从而提高编码性能。在滤波过程中,若当前块中的当前像素点满足增强调整模式的启用条件,在基于当前像素点的梯度值对当前像素点的原始像素值进行调整后,可以提高滤波效果,提高编码性能。
以下结合具体实施例,对本申请实施例中的编解码方法进行详细说明。
实施例1:本申请实施例中提出一种编解码方法,该方法可以应用于编码端或者解码端,参见图4所示,为该编解码方法的流程示意图,该方法可以包括:
步骤401,若当前块中的当前像素点满足增强调整模式的启用条件,则基于当前像素点的原始像素值和当前像素点的周围像素点的原始像素值确定当前像素点的梯度值。
示例性的,当前像素点的梯度值,可以是基于当前像素点的原始像素值和周围像素点的原始像素值之间的差值确定,也就是说,当前像素点的梯度值反映的是两个像素值的差值。
示例性的,当前像素点的周围像素点,可以是当前像素点的相邻像素点,也可以是当前像素点的非相邻像素点。当前像素点的周围像素点,可以是位于当前块中的像素点,也可以是位于当前块的相邻块中的像素点。比如说,当前像素点的周围像素点,可以是当前像素点左侧的像素点,可以是当前像素点右侧的像素点,可以是当前像素点上侧的像素点,可以是当前像素点下侧的像素点, 对此当前像素点的周围像素点的位置不做限制。
比如说,参见图3所示,若当前像素点是当前块中的R0,则当前像素点的周围像素点可以是当前块左侧相邻块中的L0。若当前像素点是当前块中第一行第二列的像素点,则当前像素点的周围像素点可以是当前块上侧相邻块中第八行第二列的像素点。
步骤402,基于当前像素点的梯度值和当前像素点的原始像素值,确定当前像素点的调整像素值。例如,可以基于当前像素点的梯度值、当前像素点的原始像素值、第一调整阈值、第二调整阈值、第一调整偏移值和第二调整偏移值,确定当前像素点的调整像素值。
在一种可能的实施方式中,若当前像素点的梯度值大于第一调整阈值,则基于当前像素点的原始像素值和第一调整偏移值确定当前像素点的调整像素值,比如说,基于当前像素点的原始像素值和第一调整偏移值之和确定当前像素点的调整像素值。若当前像素点的梯度值小于第二调整阈值,则基于当前像素点的原始像素值和第二调整偏移值确定当前像素点的调整像素值,比如说,基于当前像素点的原始像素值和第二调整偏移值之和确定当前像素点的调整像素值。示例性的,第一调整阈值和第二调整阈值可以互为相反数。当然,第一调整阈值和第二调整阈值也可以不互为相反数,可以任意设置第一调整阈值和第二调整阈值。
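示例性的,下面给出步骤402中基于梯度值、调整阈值与调整偏移值确定当前像素点调整像素值的一段示意性C代码草稿(以第一调整阈值与第二调整阈值互为相反数为例)。其中的函数名与变量名均为本示例的假设,并非实际实现。

```c
/* grad为当前像素点的梯度值,orig为原始像素值,adj_th为第一调整阈值(第二调整阈值取-adj_th),
   offset0/offset1为第一/第二调整偏移值,maxv为像素最大值(如(1<<bit_depth)-1) */
static int adjust_by_gradient(int grad, int orig, int adj_th,
                              int offset0, int offset1, int maxv)
{
    int out = orig;                 /* 梯度落在[-adj_th, adj_th]内时不做调整 */
    if (grad > adj_th)              /* 梯度值大于第一调整阈值 */
        out = orig + offset0;
    else if (grad < -adj_th)        /* 梯度值小于第二调整阈值 */
        out = orig + offset1;
    return out < 0 ? 0 : (out > maxv ? maxv : out);
}
```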
在一种可能的实施方式中,若当前块中的当前像素点满足增强调整模式的启用条件,还可以从当前块的相邻块中确定与当前像素点对应的参考像素点,并基于参考像素点的原始像素值和参考像素点的周围像素点的原始像素值确定参考像素点的梯度值;基于参考像素点的梯度值和参考像素点的原始像素值,确定参考像素点的调整像素值。
示例性的,参考像素点可以是相邻块中与当前像素点相邻的像素点,也可以是相邻块中与当前像素点非相邻的像素点,对此不做限制。比如说,参见图3所示,若当前像素点是当前块中的R0,则参考像素点可以是当前块左侧相邻块中的L0,也可以是当前块左侧相邻块中的L1、L2等,对此不做限制。若当前像素点是当前块中的R1,则参考像素点可以是当前块左侧相邻块中的L0,也可以是当前块左侧相邻块中的L1、L2等,对此不做限制。若当前像素点是当前块中第一行第二列的像素点,则参考像素点可以是当前块上侧相邻块中第八行第二列的像素点,也可以是当前块上侧相邻块中第七行第二列的像素点,对此不做限制。
示例性的,参考像素点的梯度值,可以是基于参考像素点的原始像素值和参考像素点的周围像素点的原始像素值之间的差值确定,也就是说,梯度值反映的是两个像素值的差值。
示例性的,参考像素点的周围像素点,可以是参考像素点的相邻像素点,也可以是参考像素点的非相邻像素点。参考像素点的周围像素点,可以是位于参考像素点所在块中的像素点,也可以是位于参考像素点所在块的相邻块中的像素点。参考像素点的周围像素点,可以是参考像素点左侧的像素点,可以是参考像素点右侧的像素点,可以是参考像素点上侧的像素点,可以是参考像素点下侧的像素点,对此参考像素点的周围像素点的位置不做限制。
在一种可能的实施方式中,参考像素点的周围像素点,可以是当前块中的当前像素点,与此类似的,当前像素点的周围像素点,可以是当前块的相邻块中的参考像素点。
示例性的,基于参考像素点的梯度值和参考像素点的原始像素值,确定参考像素点的调整像素值,可以包括但不限于:基于参考像素点的梯度值、参考像素点的原始像素值、第三调整阈值(与第一调整阈值可以相同,也可以不同)、第四调整阈值(与第二调整阈值可以相同,也可以不同)、第三调整偏移值(与第一调整偏移值可以相同,也可以不同)和第四调整偏移值(与第三调整偏移值可以相同,也可以不同),确定参考像素点的调整像素值。
比如说,若参考像素点的梯度值大于第三调整阈值,则基于参考像素点的原始像素值和第三调整偏移值确定参考像素点的调整像素值,比如说,基于参考像素点的原始像素值和第三调整偏移值之和确定参考像素点的调整像素值。若参考像素点的梯度值小于第四调整阈值,则基于参考像素点的原始像素值和第四调整偏移值确定参考像素点的调整像素值,比如说,基于参考像素点的原始像素值和第四调整偏移值之和确定参考像素点的调整像素值。
示例性的,第三调整阈值和第四调整阈值可以互为相反数。当然,第三调整阈值和第四调整阈值也可以不互为相反数,可以任意设置第三调整阈值和第四调整阈值。
在一种可能的实施方式中,可以从高层语法中解析出当前块对应的第一调整阈值,第二调整阈值,第一调整偏移值,第二调整偏移值,第三调整阈值,第四调整阈值,第三调整偏移值和第四调整偏移值。或者,可以从高层语法中解析出当前块对应的第一调整阈值,第一调整偏移值,第二调整偏移值,第三调整阈值,第三调整偏移值和第四调整偏移值。或者,可以从高层语法中解析出当前块对应的第二调整阈值,第一调整偏移值,第二调整偏移值,第三调整阈值,第三调整偏移值和第四调整偏移值。或者,可以从高层语法中解析出当前块对应的第一调整阈值,第一调整偏移值,第二调整偏移值,第四调整阈值,第三调整偏移值和第四调整偏移值。或者,可以从高层语法中解 析出当前块对应的第二调整阈值,第一调整偏移值,第二调整偏移值,第四调整阈值,第三调整偏移值和第四调整偏移值。
示例性的,若第一调整阈值和第二调整阈值互为相反数,从高层语法中解析出第一调整阈值后,可推导出第二调整阈值,从高层语法中解析出第二调整阈值后,可推导出第一调整阈值。若第三调整阈值和第四调整阈值互为相反数,从高层语法中解析出第三调整阈值后,可推导出第四调整阈值,从高层语法中解析出第四调整阈值后,可推导出第三调整阈值。
在一种可能的实施方式中,当前块中的当前像素点满足增强调整模式的启用条件,可以包括但不限于:若当前块中的当前像素点对应的待滤波边界的边界强度满足增强调整模式的启用条件,则确定当前像素点满足增强调整模式的启用条件。比如说,若当前像素点对应的待滤波边界的边界强度为预设第一数值,则可以确定待滤波边界的边界强度满足增强调整模式的启用条件。示例性的,预设第一数值可以为0。当然,预设第一数值还可以为其它数值。
或者,若当前块对应的特征信息满足增强调整模式的启用条件,则确定当前块中的当前像素点满足增强调整模式的启用条件。示例性的,当前块对应的特征信息满足增强调整模式的启用条件是指,若基于当前块对应的特征信息,确定不对当前块启动滤波操作(如去块滤波操作等),则确定当前块对应的特征信息满足增强调整模式的启用条件。
示例性的,在确定当前块中的当前像素点满足增强调整模式的启用条件之前,可以先获取当前块对应的增强调整模式使能标志位,若当前块对应的增强调整模式使能标志位允许当前块启用增强调整模式,则确定当前块中的当前像素点是否满足增强调整模式的启用条件,即确定出当前像素点满足增强调整模式的启用条件,或者,不满足增强调整模式的启用条件。
或者,若当前块对应的增强调整模式使能标志位不允许当前块启用增强调整模式,则直接确定当前块中的当前像素点不满足增强调整模式的启用条件。
示例性的,针对解码端来说,可以从高层语法中解析出当前块对应的增强调整模式使能标志位,继而基于增强调整模式使能标志位确定是否允许当前块启用增强调整模式。
比如说,若增强调整模式使能标志位为第一取值(如1),则增强调整模式使能标志位允许当前块启用增强调整模式,若增强调整模式使能标志位为第二取值(如0),则增强调整模式使能标志位不允许当前块启用增强调整模式。
在一种可能的实施方式中,若该编解码方法应用于预测过程(如帧内预测或者帧间预测),则当前像素点的原始像素值可以是帧内预测或者帧间预测得到的预测值,而当前像素点的调整像素值作为当前像素点的目标像素值(预测过程的最终像素值)。若该编解码方法应用于滤波过程,则当前像素点的原始像素值可以是滤波前的预测值,而当前像素点的调整像素值作为当前像素点的目标像素值(滤波过程的最终像素值)。
由以上技术方案可见,本申请实施例中,若当前块中的当前像素点满足增强调整模式的启用条件,则可以基于当前像素点的梯度值和当前像素点的原始像素值,确定当前像素点的调整像素值,也就是说,基于当前像素点的梯度值对当前像素点的原始像素值进行调整,使当前像素点的调整像素值更接近原始像素,从而提高编码性能。在滤波过程中,如DBF、SAO和ALF等,若当前块中的当前像素点满足增强调整模式的启用条件,在基于当前像素点的梯度值对当前像素点的原始像素值进行调整后,可以提高滤波效果,提高编码性能。
在一种可能的实施方式中,若当前块中的当前像素点满足普通滤波模式的启用条件,则还可以对当前像素点的原始像素值进行去块滤波(即DBF滤波),得到当前像素点的滤波像素值。当然,去块滤波只是一个示例,还可以采用其它滤波方式对当前像素点的原始像素值进行滤波,比如说,对当前像素点的原始像素值进行SAO滤波,得到当前像素点的滤波像素值。或者,对当前像素点的原始像素值进行ALF滤波,得到当前像素点的滤波像素值。
示例性的,以去块滤波为例,参见步骤S11-步骤S13,示出了对当前像素点的原始像素值进行去块滤波,得到当前像素点的滤波像素值的过程,在此不再重复赘述。
继续参见步骤S11-步骤S13,从这些步骤可以看出,若边界两侧的块为非帧内模式块、无残差、且运动一致时,才会跳过滤波过程,否则,需要进行滤波过程。在需要进行滤波过程时,还可以获取BS的取值,若BS等于0,则不滤波,即不对边界两侧的像素进行滤波,若BS大于0,则对边界两侧的像素进行滤波。综上所述,若边界两侧的块为非帧内模式块、无残差、且运动一致时,则当前块中的当前像素点不满足普通滤波模式的启用条件。若边界两侧的块为非帧内模式块、无残差、且运动一致不成立(即边界两侧的块不为非帧内模式块,或边界两侧的块有残差,或边界两侧的块运动不一致),且BS等于0,则当前块中的当前像素点不满足普通滤波模式的启用条件。若边界两侧的块为非帧内模式块、无残差、且运动一致不成立,且BS大于0,则当前块中的当前像素点满足普通滤波模式的启用条件。
需要注意的是,若当前块中的当前像素点满足普通滤波模式的启用条件,则当前块中的当前像素点不满足增强调整模式的启用条件,若当前块中的当前像素点满足增强调整模式的启用条件,则当前块中的当前像素点不满足普通滤波模式的启用条件。
在当前像素点满足普通滤波模式的启用条件,且对当前像素点的原始像素值进行去块滤波,得到当前像素点的滤波像素值的基础上,还需要确定当前块中的当前像素点是否满足增强滤波模式的启用条件。若当前块中的当前像素点满足增强滤波模式的启用条件,则基于当前像素点的滤波像素值和当前像素点的原始像素值,确定当前像素点的调整像素值,即当前像素点的调整像素值作为当前像素点的目标像素值(去块滤波过程的最终像素值)。若当前块中的当前像素点不满足增强滤波模式的启用条件,则不对当前像素点的滤波像素值进行调整,当前像素点的滤波像素值作为当前像素点的目标像素值(去块滤波过程的最终像素值)。
示例性的,若当前块中的当前像素点满足增强滤波模式的启用条件,基于当前像素点的滤波像素值和当前像素点的原始像素值,确定当前像素点的调整像素值,可以包括但不限于:基于当前像素点的滤波像素值,当前像素点的原始像素值,第一滤波阈值,第二滤波阈值,第一滤波偏移值和第二滤波偏移值,确定当前像素点的调整像素值。
示例性的,第一滤波阈值和第二滤波阈值可以互为相反数,当然,第一滤波阈值和第二滤波阈值也可以不互为相反数,可以任意设置第一滤波阈值和第二滤波阈值。
在一种可能的实施方式中,若当前块中的当前像素点满足普通滤波模式的启用条件,还可以从当前块的相邻块中确定与当前像素点对应的参考像素点,并对参考像素点的原始像素值进行去块滤波(即DBF滤波),得到参考像素点的滤波像素值。当然,去块滤波只是一个示例,还可以采用其它滤波方式对参考像素点的原始像素值进行滤波,比如说,对参考像素点的原始像素值进行SAO滤波或者ALF滤波,得到参考像素点的滤波像素值。
示例性的,以去块滤波为例,参见步骤S11-步骤S13,示出了对参考像素点的原始像素值进行去块滤波,得到参考像素点的滤波像素值的过程,在此不再重复赘述。
示例性的,参考像素点可以是相邻块中与当前像素点相邻的像素点,参考像素点也可以是相邻块中与当前像素点非相邻的像素点,对此不做限制。
在当前像素点满足普通滤波模式的启用条件,且对参考像素点的原始像素值进行去块滤波,得到参考像素点的滤波像素值的基础上,还需要确定当前块中的当前像素点是否满足增强滤波模式的启用条件。若当前块中的当前像素点满足增强滤波模式的启用条件,则基于参考像素点的滤波像素值和参考像素点的原始像素值,确定参考像素点的调整像素值,即参考像素点的调整像素值作为参考像素点的目标像素值(去块滤波过程的最终像素值)。若当前块中的当前像素点不满足增强滤波模式的启用条件,则不对参考像素点的滤波像素值进行调整,参考像素点的滤波像素值作为参考像素点的目标像素值(去块滤波过程的最终像素值)。
示例性的,基于参考像素点的滤波像素值和参考像素点的原始像素值,确定参考像素点的调整像素值,可以包括但不限于:基于参考像素点的滤波像素值,参考像素点的原始像素值,第三滤波阈值,第四滤波阈值,第三滤波偏移值和第四滤波偏移值,确定参考像素点的调整像素值;其中,第三滤波阈值和第四滤波阈值可以互为相反数,当然,第三滤波阈值和第四滤波阈值也可以不互为相反数,可以任意设置第三滤波阈值和第四滤波阈值。
在一种可能的实施方式中,可以从高层语法中解析出当前块对应的第一滤波阈值,第二滤波阈值,第一滤波偏移值,第二滤波偏移值,第三滤波阈值,第四滤波阈值,第三滤波偏移值和第四滤波偏移值。或者,可以从高层语法中解析出当前块对应的第一滤波阈值,第一滤波偏移值,第二滤波偏移值,第三滤波阈值,第三滤波偏移值和第四滤波偏移值。或者,可以从高层语法中解析出当前块对应的第二滤波阈值,第一滤波偏移值,第二滤波偏移值,第三滤波阈值,第三滤波偏移值和第四滤波偏移值。或者,可以从高层语法中解析出当前块对应的第一滤波阈值,第一滤波偏移值,第二滤波偏移值,第四滤波阈值,第三滤波偏移值和第四滤波偏移值。或者,可以从高层语法中解析出当前块对应的第二滤波阈值,第一滤波偏移值,第二滤波偏移值,第四滤波阈值,第三滤波偏移值和第四滤波偏移值。
示例性的,若第一滤波阈值和第二滤波阈值互为相反数,从高层语法中解析出第一滤波阈值后,可推导出第二滤波阈值,从高层语法中解析出第二滤波阈值后,可推导出第一滤波阈值。若第三滤波阈值和第四滤波阈值互为相反数,从高层语法中解析出第三滤波阈值后,可推导出第四滤波阈值,从高层语法中解析出第四滤波阈值后,可推导出第三滤波阈值。
在一种可能的实施方式中,当前块中的当前像素点满足增强滤波模式的启用条件,可以包括但不限于:若当前块中的当前像素点对应的待滤波边界的边界强度满足增强滤波模式的启用条件,且当前像素点的滤波像素值与当前像素点的原始像素值之间的差值的绝对值大于预设阈值(该预设 阈值为正值,对此预设阈值不做限制,比如说,若第一滤波阈值和第二滤波阈值互为相反数,当第一滤波阈值为正值时,该预设阈值与第一滤波阈值相同,当第二滤波阈值为正值时,该预设阈值与第二滤波阈值相同,当然,预设阈值也可以为其它取值),则确定当前像素点满足增强滤波模式的启用条件。示例性的,当前块中的当前像素点对应的待滤波边界的边界强度满足增强滤波模式的启用条件,可以包括但不限于:若当前像素点对应的待滤波边界的边界强度为预设第二数值(与预设第一数值不同,即不为0,如预设第二数值可以大于0),则确定待滤波边界的边界强度满足增强滤波模式的启用条件。
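示例性的,下面用一段示意性的C代码草稿概括上述增强滤波模式启用条件的判断,即边界强度满足条件(此处以边界强度大于0作为预设第二数值的示例)且滤波前后差值的绝对值大于预设阈值。函数名与参数均为本示例假设。

```c
/* bs为待滤波边界的边界强度,filt/orig为当前像素点滤波后/滤波前的像素值,th为预设阈值(正值) */
static int enable_enhanced_filter_mode(int bs, int filt, int orig, int th)
{
    int diff = filt - orig;
    if (diff < 0) diff = -diff;       /* 差值的绝对值 */
    return (bs > 0) && (diff > th);
}
```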
示例性的,在确定当前块中的当前像素点满足增强滤波模式的启用条件之前,可以先获取当前块对应的增强滤波模式使能标志位,若当前块对应的增强滤波模式使能标志位允许当前块启用增强滤波模式,则确定当前块中的当前像素点是否满足增强滤波模式的启用条件,即确定出当前像素点满足增强滤波模式的启用条件,或者,不满足增强滤波模式的启用条件。
或者,若当前块对应的增强滤波模式使能标志位不允许当前块启用增强滤波模式,则直接确定当前块中的当前像素点不满足增强滤波模式的启用条件。
示例性的,针对解码端来说,可以从高层语法中解析出当前块对应的增强滤波模式使能标志位,继而基于增强滤波模式使能标志位确定是否允许当前块启用增强滤波模式。
比如说,若增强滤波模式使能标志位为第一取值(如1),则增强滤波模式使能标志位允许当前块启用增强滤波模式,若增强滤波模式使能标志位为第二取值(如0),则增强滤波模式使能标志位不允许当前块启用增强滤波模式。
示例性的,在上述实施例中,高层语法可以包括但不限于如下语法中的一种:序列级参数集SPS级高层语法;图像参数集PPS级高层语法;图像头级高层语法;帧级高层语法;片头级高层语法;编码树单元CTU级高层语法;编码单元CU级高层语法。
示例性的,在上述实施例中,当前块中的当前像素点的像素值可以为亮度分量或者色度分量。
由以上技术方案可见,若当前块中的当前像素点满足增强滤波模式的启用条件,则可以基于当前像素点的滤波像素值和当前像素点的原始像素值,确定当前像素点的调整像素值,也就是说,基于当前像素点的滤波像素值对当前像素点的原始像素值进行调整,使当前像素点的调整像素值更接近原始像素,从而提高编码性能。在滤波过程中,如DBF、SAO和ALF等,若当前块中的当前像素点满足增强滤波模式的启用条件,在基于当前像素点的滤波像素值对当前像素点的原始像素值进行调整后,可以提高滤波效果,提高编码性能。
实施例2:在需要进行滤波处理时,需要先判断是否跳过滤波过程,比如说,若边界两侧的块(即当前块和当前块的相邻块,对于垂直边界来说,是当前块左侧的相邻块,对于水平边界来说,是当前块上侧的相邻块)为非帧内模式块(即当前块和相邻块都不是帧内块)、无残差(即当前块和相邻块之间没有残差)、且运动一致(即当前块和相邻块的运动一致)时,会跳过滤波过程,否则,不会跳过滤波过程。基于此,可以将“跳过滤波过程”作为增强调整模式的启用条件,即,若针对当前块中的当前像素点跳过滤波过程,则当前块中的当前像素点满足增强调整模式的启用条件。在当前像素点满足增强调整模式的启用条件时,可以采用增强调整模式对当前像素点的原始像素值进行调整,从而使得像素值更接近原始像素。
示例性的,若当前块对应的特征信息满足增强调整模式的启用条件,则确定当前块中的当前像素点满足增强调整模式的启用条件。当前块对应的特征信息用于表示边界两侧的块是否为非帧内模式块,用于表示边界两侧的块是否无残差,以及,用于表示边界两侧的块是否运动一致。基于此,若当前块对应的特征信息用于表示边界两侧的块为非帧内模式块,且用于表示边界两侧的块无残差,且用于表示边界两侧的块运动一致,则说明当前块对应的特征信息满足增强调整模式的启用条件,并确定当前块中的当前像素点满足增强调整模式的启用条件,即当前块中的每个像素点满足增强调整模式的启用条件。或者,若当前块对应的特征信息用于表示边界两侧的块不均为非帧内模式块,和/或,当前块对应的特征信息用于表示边界两侧的块有残差,和/或,当前块对应的特征信息用于表示边界两侧的块运动不一致,则说明当前块对应的特征信息不满足增强调整模式的启用条件,并确定当前块中的当前像素点不满足增强调整模式的启用条件,即当前块中的每个像素点均不满足增强调整模式的启用条件。
示例性的,在当前像素点满足增强调整模式的启用条件时,可以采用增强调整模式对当前像素点的原始像素值进行调整,比如说,可以先基于当前像素点的原始像素值确定当前像素点的梯度值,并基于当前像素点的梯度值和当前像素点的原始像素值,确定当前像素点的调整像素值,关于调整像素值的确定过程,可以参见后续实施例,在此不再赘述。
实施例3:在需要进行滤波处理时,需要先判断是否跳过滤波过程,比如说,若边界两侧的块为非帧内模式块、无残差、且运动一致时,会跳过滤波过程,否则,不会跳过滤波过程。在不会跳 过滤波过程时,还可以确定BS值,若BS值大于0(如BS值为1、2、3、4等),则可以对边界两侧的像素进行滤波。若BS值为0,则不滤波,即不对边界两侧的像素进行滤波。基于此,可以将“BS值为0”作为增强调整模式的启用条件,即,若当前块中的当前像素点的BS值为0,则当前块中的当前像素点满足增强调整模式的启用条件;若当前块中的当前像素点的BS值大于0,则当前块中的当前像素点不满足增强调整模式的启用条件。
示例性的,若当前块中的当前像素点对应的待滤波边界的边界强度满足增强调整模式的启用条件,则确定当前像素点满足增强调整模式的启用条件。比如说,可以先确定当前像素点对应的待滤波边界的边界强度,若该边界强度为预设第一数值,则确定该边界强度满足增强调整模式的启用条件。该预设第一数值可以根据经验配置,如预设第一数值为0。综上所述,若当前像素点对应的待滤波边界的边界强度为0,则说明当前像素点对应的待滤波边界的边界强度满足增强调整模式的启用条件,并确定当前像素点满足增强调整模式的启用条件。
或者,若当前块中的当前像素点对应的待滤波边界的边界强度不满足增强调整模式的启用条件,则确定当前像素点不满足增强调整模式的启用条件。比如说,若当前像素点对应的待滤波边界的边界强度不为预设第一数值,则确定该边界强度不满足增强调整模式的启用条件,从而能够确定当前块中的当前像素点不满足增强调整模式的启用条件。
示例性的,在当前像素点满足增强调整模式的启用条件时,可以采用增强调整模式对当前像素点的原始像素值进行调整,从而使得像素值更接近原始像素。比如说,可以先基于当前像素点的原始像素值确定当前像素点的梯度值,并基于当前像素点的梯度值和当前像素点的原始像素值,确定当前像素点的调整像素值,调整像素值的确定过程参见后续实施例。
实施例4:在需要进行滤波处理时,需要先判断是否跳过滤波过程,比如说,若边界两侧的块为非帧内模式块、无残差、且运动一致时,会跳过滤波过程,否则,不会跳过滤波过程。在不会跳过滤波过程时,还可以确定BS值,若BS值大于0(如BS值为1、2、3、4等),则可以对边界两侧的像素进行滤波。若BS值为0,则不滤波,即不对边界两侧的像素进行滤波。基于此,可以将“BS值大于0”作为普通滤波模式的启用条件,即,若当前块中的当前像素点的BS值大于0,则当前块中的当前像素点满足普通滤波模式的启用条件;若当前块中的当前像素点的BS值等于0,则当前块中的当前像素点不满足普通滤波模式的启用条件。
示例性的,若当前块中的当前像素点对应的待滤波边界的边界强度满足普通滤波模式的启用条件,则确定当前像素点满足普通滤波模式的启用条件。比如说,可以先确定当前像素点对应的待滤波边界的边界强度,若该边界强度为预设第二数值,则确定该边界强度满足普通滤波模式的启用条件。该预设第二数值可以根据经验进行配置,如预设第二数值可以大于0,如预设第二数值可以为1、2、3、4等。综上所述,若当前像素点对应的待滤波边界的边界强度大于0(即边界强度不为0),则说明当前像素点对应的待滤波边界的边界强度满足普通滤波模式的启用条件,并确定当前像素点满足普通滤波模式的启用条件。
或者,若当前块中的当前像素点对应的待滤波边界的边界强度不满足普通滤波模式的启用条件,则确定当前像素点不满足普通滤波模式的启用条件。比如说,若当前像素点对应的待滤波边界的边界强度(如0)不为预设第二数值,则确定该边界强度不满足普通滤波模式的启用条件,从而能够确定当前块中的当前像素点不满足普通滤波模式的启用条件。
示例性的,在当前像素点满足普通滤波模式的启用条件时,还可以对当前像素点的原始像素值进行去块滤波(即DBF滤波,本文以去块滤波为例),得到当前像素点的滤波像素值。
实施例5:在当前块中的当前像素点满足普通滤波模式的启用条件,且对当前像素点的原始像素值进行去块滤波,得到当前像素点的滤波像素值的基础上,还可以确定当前块中的当前像素点是否满足增强滤波模式的启用条件。比如说,确定当前像素点的滤波像素值与当前像素点的原始像素值之间的差值的绝对值是否大于预设阈值,若大于预设阈值,则确定当前块中的当前像素点满足增强滤波模式的启用条件,若不大于预设阈值,则确定当前块中的当前像素点不满足增强滤波模式的启用条件。综上所述,当前块中的当前像素点满足增强滤波模式的启用条件,可以包括:若当前块中的当前像素点对应的待滤波边界的边界强度满足增强滤波模式的启用条件,且当前像素点的滤波像素值与当前像素点的原始像素值之间的差值的绝对值大于预设阈值,则确定当前像素点满足增强滤波模式的启用条件。
以下结合具体实施例,对增强滤波模式的启用条件进行说明。
在需要进行滤波处理时,需要先判断是否跳过滤波过程,在不会跳过滤波过程时,还可以确定BS值,若BS值大于0,则可以对边界两侧的像素进行滤波。基于此,可以将“BS值大于0”作为增强滤波模式的启用条件,也就是说,“BS值大于0”同时作为普通滤波模式和增强滤波模式的启用条件。在BS值大于0时,需要对当前像素点的原始像素值进行去块滤波,得到当前像素点的滤波 像素值。在得到当前像素点的滤波像素值之后,还可以确定当前像素点的滤波像素值与当前像素点的原始像素值之间的差值的绝对值是否大于预设阈值,并将“差值的绝对值大于预设阈值”作为增强滤波模式的启用条件。
综上所述,若当前块中的当前像素点对应的待滤波边界的边界强度满足增强滤波模式的启用条件,且当前像素点的滤波像素值与当前像素点的原始像素值之间的差值的绝对值大于预设阈值,则确定当前像素点满足增强滤波模式的启用条件。否则,确定当前像素点不满足增强滤波模式的启用条件。比如说,先确定当前像素点对应的待滤波边界的边界强度,若该边界强度为预设第二数值,则确定该边界强度满足增强滤波模式的启用条件。预设第二数值可以根据经验进行配置,如预设第二数值可以大于0,如预设第二数值可以为1、2、3、4等。
示例性的,在当前像素点满足增强滤波模式的启用条件时,则可以基于当前像素点的滤波像素值和当前像素点的原始像素值,确定当前像素点的调整像素值,即当前像素点的调整像素值作为当前像素点的目标像素值(去块滤波过程的最终像素值)。若当前块中的当前像素点不满足增强滤波模式的启用条件,则不对当前像素点的滤波像素值进行调整,当前像素点的滤波像素值作为当前像素点的目标像素值(去块滤波过程的最终像素值)。
从上述实施例1、2、3、4、5可以看出,本文涉及增强调整模式、普通滤波模式和增强滤波模式,可以基于增强调整模式、普通滤波模式或增强滤波模式对当前像素点的原始像素值进行处理,得到当前像素点的目标像素值(即最终像素值)。比如说,若当前像素点满足增强调整模式的启用条件,则在增强调整模式下,可以基于当前像素点的梯度值对当前像素点的原始像素值进行调整,得到当前像素点的调整像素值,将该调整像素值作为目标像素值。又例如,若当前像素点满足普通滤波模式的启用条件,但是不满足增强滤波模式的启用条件,则在普通滤波模式下,可以对当前像素点的原始像素值进行滤波,得到当前像素点的滤波像素值,将该滤波像素值作为目标像素值。又例如,若当前像素点满足普通滤波模式的启用条件,且满足增强滤波模式的启用条件,则在增强滤波模式下,可以对当前像素点的原始像素值进行滤波,得到当前像素点的滤波像素值,并基于当前像素点的滤波像素值对当前像素点的原始像素值进行调整,得到当前像素点的调整像素值,将该调整像素值作为目标像素值。
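示例性的,下面给出一段示意性的C代码草稿,概括上述三种模式(增强调整模式、普通滤波模式、增强滤波模式)的选择逻辑,其中将实施例2、实施例3中的启用条件合并表示;函数名、变量名均为本示例假设,具体条件以各实施例的描述为准。

```c
typedef enum {
    MODE_ENHANCED_ADJUST,   /* 增强调整模式:基于梯度值直接调整原始像素值 */
    MODE_NORMAL_FILTER,     /* 普通滤波模式:仅做去块滤波 */
    MODE_ENHANCED_FILTER    /* 增强滤波模式:去块滤波后再基于滤波前后差值调整 */
} deblock_mode_t;

/* skip_filter:边界两侧的块均为非帧内模式块、无残差且运动一致时为1;
   bs为边界强度;filt/orig为滤波后/滤波前像素值(仅bs>0时有意义);th为预设阈值 */
static deblock_mode_t select_deblock_mode(int skip_filter, int bs,
                                          int filt, int orig, int th)
{
    if (skip_filter || bs == 0)
        return MODE_ENHANCED_ADJUST;
    {
        int diff = filt > orig ? filt - orig : orig - filt;
        return diff > th ? MODE_ENHANCED_FILTER : MODE_NORMAL_FILTER;
    }
}
```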
在一种可能的实施方式中,在对当前块进行去块滤波时,可以采用增强调整模式、普通滤波模式或增强滤波模式对当前像素点的原始像素值进行处理,也就是说,增强调整模式、普通滤波模式和增强滤波模式均归属于去块滤波模式,即,增强调整模式、普通滤波模式和增强滤波模式可以是去块滤波模式下的子模式。基于此,在去块滤波模式下,可以确定采用增强调整模式对当前像素点的原始像素值进行处理,或者,采用普通滤波模式对当前像素点的原始像素值进行处理,或者,采用增强滤波模式对当前像素点的原始像素值进行处理。
当然,增强调整模式、普通滤波模式和增强滤波模式也可以归属于其它类型的滤波模式,如SAO滤波模式或者ALF滤波模式等,即,增强调整模式、普通滤波模式和增强滤波模式可以是SAO滤波模式下的子模式,或者,增强调整模式、普通滤波模式和增强滤波模式可以是ALF滤波模式下的子模式。基于此,在SAO滤波模式或者ALF滤波模式下,可以确定采用增强调整模式对当前像素点的原始像素值进行处理,或者,采用普通滤波模式对当前像素点的原始像素值进行处理,或者,采用增强滤波模式对当前像素点的原始像素值进行处理。
示例性的,以增强调整模式、普通滤波模式和增强滤波模式均归属于去块滤波模式为例,则普通滤波模式可以称为去块滤波模式的普通模式,即对当前像素点的原始像素值进行去块滤波得到滤波像素值后,不再对去块滤波后的滤波像素值进行调整。增强滤波模式可以称为去块滤波调整模式(deblocking refinement,缩写为DBR),即对当前像素点的原始像素值进行去块滤波得到滤波像素值后,还需要对去块滤波后的滤波像素值进行调整。增强调整模式可以称为可选去块滤波调整模式(alt deblocking refinement,缩写为ADBR),即在不对当前像素点的原始像素值进行去块滤波的基础上,直接对当前像素点的原始像素值进行调整。
实施例6:针对实施例1、实施例2和实施例3,可以采用增强调整模式对当前像素点的原始像素值进行调整,在对原始像素值进行调整时,可以采用如下步骤:
步骤S21、基于当前像素点的原始像素值和当前像素点的周围像素点的原始像素值确定当前像素点的梯度值。比如说,当前像素点的梯度值,可以是基于当前像素点的原始像素值和周围像素点的原始像素值之间的差值确定,对此确定方式不做限制。
步骤S22、从当前块的相邻块(针对垂直边界来说,该相邻块是当前块的左侧相邻块,对于水平边界来说,该相邻块是当前块的上侧相邻块)中确定与当前像素点对应的参考像素点,并基于参考像素点的原始像素值和参考像素点的周围像素点的原始像素值确定参考像素点的梯度值。比如说,参考像素点的梯度值,可以是基于参考像素点的原始像素值和参考像素点的周围像素点的原始像素 值之间的差值确定,对此确定方式不做限制。
示例性的,基于当前像素点的原始像素值和当前像素点的周围像素点(如周围像素点是参考像素点)的原始像素值确定当前像素点的梯度值,基于参考像素点的周围像素点(如周围像素点是当前像素点)的原始像素值和参考像素点的原始像素值确定参考像素点的梯度值。
比如说,假设pi为当前块中的当前像素点的原始像素值,也即参考像素点的周围像素点的原始像素值,qi为相邻块中的参考像素点的原始像素值,也即当前像素点的周围像素点的原始像素值,也就是说,pi和qi分别为边界两侧的原始像素值,则当前像素点pi的梯度值DPi可以采用如下方式确定:DPi=(pi-qi+2)>>2,参考像素点qi的梯度值DQi可以采用如下方式确定:DQi=(qi-pi+2)>>2。当然,上述只是确定当前像素点的梯度值和参考像素点的梯度值的示例,对此不做限制。显然,当前像素点的梯度值,可以是基于当前像素点的原始像素值和参考像素点的原始像素值之间的差值确定。参考像素点的梯度值,可以是基于参考像素点的原始像素值和当前像素点的原始像素值之间的差值确定。
以当前像素点pi是p0(对应图3的R0),参考像素点qi是q0(对应图3的L0)为例,当前像素点p0的梯度值DP0采用如下方式确定:DP0=(p0-q0+2)>>2,参考像素点q0的梯度值DQ0采用如下方式确定:DQ0=(q0-p0+2)>>2。DP0=(p0-q0+2)>>2的另一种表述为DP0=(p0-q0+1)>>1,DQ0=(q0-p0+2)>>2的另一种表述为DQ0=(q0-p0+1)>>1。
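示例性的,下面给出步骤S21、S22中梯度值计算的一段示意性C代码草稿。需要说明的是,C语言中负数右移的结果由实现定义,这里假设编译器采用算术右移(常见平台均如此);函数名为本示例假设。

```c
/* p0、q0为边界两侧的原始像素值,计算结果写入*dp0、*dq0 */
static void gradient_p0_q0(int p0, int q0, int *dp0, int *dq0)
{
    *dp0 = (p0 - q0 + 2) >> 2;   /* 当前像素点p0的梯度值DP0 */
    *dq0 = (q0 - p0 + 2) >> 2;   /* 参考像素点q0的梯度值DQ0 */
}
```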
步骤S23、基于当前像素点的梯度值和当前像素点的原始像素值,确定当前像素点的调整像素值。比如说,若当前像素点的梯度值大于第一调整阈值,则基于当前像素点的原始像素值和第一调整偏移值(也可以称为第一调整偏移量)确定当前像素点的调整像素值。若当前像素点的梯度值小于第二调整阈值,则基于当前像素点的原始像素值和第二调整偏移值确定当前像素点的调整像素值。示例性的,第一调整阈值和第二调整阈值可以互为相反数。
步骤S24、基于参考像素点的梯度值和参考像素点的原始像素值,确定参考像素点的调整像素值。比如说,若参考像素点的梯度值大于第三调整阈值,则基于参考像素点的原始像素值和第三调整偏移值(也可以称为第三调整偏移量)确定参考像素点的调整像素值。若参考像素点的梯度值小于第四调整阈值,则基于参考像素点的原始像素值和第四调整偏移值确定参考像素点的调整像素值。示例性的,第三调整阈值和第四调整阈值可以互为相反数。
比如说,若当前像素点pi的梯度值DPi大于alt_dbr_th(alt_dbr_th表示第一调整阈值),则当前像素点pi的调整像素值Pi可以采用如下方式确定:Pi=clip(pi+alt_dbr_offset0),alt_dbr_offset0可以表示第一调整偏移值。或者,若当前像素点pi的梯度值DPi小于–alt_dbr_th(–alt_dbr_th表示第二调整阈值),则当前像素点pi的调整像素值Pi可以采用如下方式确定:Pi=clip(pi+alt_dbr_offset1),alt_dbr_offset1可以表示第二调整偏移值。
在上述实施例中,i可以为0、1、2,以i为0为例进行说明,则:若DP0>dbr_th,则P0=clip(p0+alt_dbr_offset0);若DP0<–dbr_th,则P0=clip(p0+alt_dbr_offset1)。
比如说,若参考像素点qi的梯度值DQi大于alt_dbr_th(alt_dbr_th表示第三调整阈值,此处以第三调整阈值与第一调整阈值相同为例,在实际应用中,第三调整阈值与第一调整阈值也可以不同),则参考像素点qi的调整像素值Qi可以采用如下方式确定:Qi=clip(qi+alt_dbr_offset0),alt_dbr_offset0可以表示第三调整偏移值,此次以第三调整偏移值与第一调整偏移值相同为例,在实际应用中,第三调整偏移值与第一调整偏移值也可以不同。
或者,若参考像素点qi的梯度值DQi小于–alt_dbr_th(–alt_dbr_th表示第四调整阈值,此处以第四调整阈值与第二调整阈值相同为例,在实际应用中,第四调整阈值与第二调整阈值也可以不同),则参考像素点qi的调整像素值Qi可以采用如下方式确定:Qi=clip(qi+alt_dbr_offset1),alt_dbr_offset1可以表示第四调整偏移值,此次以第四调整偏移值与第二调整偏移值相同为例,在实际应用中,第四调整偏移值与第二调整偏移值也可以不同。
在上述实施例中,i可以为0、1、2,以i为0为例进行说明,则:若DQ0>dbr_th,则Q0=clip(q0+alt_dbr_offset0);若DQ0<–dbr_th,则Q0=clip(q0+alt_dbr_offset1)。
在上述实施例中,pi表示当前像素点的原始像素值,DPi表示当前像素点的梯度值,Pi表示当前像素点的调整像素值,qi表示参考像素点的原始像素值,DQi表示参考像素点的梯度值,Qi表示参考像素点的调整像素值。clip(x)表示将x限制在[0,2^(bit_depth)-1]之间(包括0和2^(bit_depth)-1),bit_depth表示图像的比特深度,一般为8、10、12等。
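示例性的,结合步骤S23、S24,下面给出一段示意性的C代码草稿,展示基于梯度值与alt_dbr_th、alt_dbr_offset0、alt_dbr_offset1对p0、q0进行调整并clip的过程(以第三调整阈值、第三调整偏移值分别与第一调整阈值、第一调整偏移值相同为例)。函数名均为本示例假设,并非实际实现。

```c
static int clip_bd(int x, int bit_depth)
{
    int maxv = (1 << bit_depth) - 1;
    return x < 0 ? 0 : (x > maxv ? maxv : x);
}

/* 增强调整模式下对p0/q0的调整:调整结果写入*P0、*Q0 */
static void adbr_adjust_p0_q0(int p0, int q0, int alt_dbr_th,
                              int alt_dbr_offset0, int alt_dbr_offset1,
                              int bit_depth, int *P0, int *Q0)
{
    int dp0 = (p0 - q0 + 2) >> 2;    /* 梯度值DP0 */
    int dq0 = (q0 - p0 + 2) >> 2;    /* 梯度值DQ0 */

    *P0 = p0;
    if (dp0 > alt_dbr_th)            *P0 = clip_bd(p0 + alt_dbr_offset0, bit_depth);
    else if (dp0 < -alt_dbr_th)      *P0 = clip_bd(p0 + alt_dbr_offset1, bit_depth);

    *Q0 = q0;
    if (dq0 > alt_dbr_th)            *Q0 = clip_bd(q0 + alt_dbr_offset0, bit_depth);
    else if (dq0 < -alt_dbr_th)      *Q0 = clip_bd(q0 + alt_dbr_offset1, bit_depth);
}
```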
在一种可能的实施方式中,针对解码端来说,可以从高层语法中解析出当前块对应的第一调整阈值,第一调整偏移值,第二调整偏移值,第三调整阈值,第三调整偏移值和第四调整偏移值。由于第一调整阈值与第二调整阈值互为相反数,第三调整阈值与第四调整阈值互为相反数,因此,解码端可以确定出第二调整阈值和第四调整阈值。
在另一种可能的实施方式中,针对解码端来说,可以从高层语法中解析出当前块对应的第一调整阈值,第一调整偏移值,第二调整偏移值,第四调整阈值,第三调整偏移值和第四调整偏移值。由于第一调整阈值与第二调整阈值互为相反数,第三调整阈值与第四调整阈值互为相反数,因此,解码端可以确定出第二调整阈值和第三调整阈值。
在另一种可能的实施方式中,针对解码端来说,可以从高层语法中解析出当前块对应的第二调整阈值,第一调整偏移值,第二调整偏移值,第三调整阈值,第三调整偏移值和第四调整偏移值。由于第一调整阈值与第二调整阈值互为相反数,第三调整阈值与第四调整阈值互为相反数,因此,解码端可以确定出第一调整阈值和第四调整阈值。
在另一种可能的实施方式中,针对解码端来说,可以从高层语法中解析出当前块对应的第二调整阈值,第一调整偏移值,第二调整偏移值,第四调整阈值,第三调整偏移值和第四调整偏移值。由于第一调整阈值与第二调整阈值互为相反数,第三调整阈值与第四调整阈值互为相反数,因此,解码端可以确定出第一调整阈值和第三调整阈值。
在另一种可能的实施方式中,针对解码端来说,可以从高层语法中解析出当前块对应的第一调整阈值(或第二调整阈值,或第三调整阈值,或第四调整阈值,即能够通过一个调整阈值推导出其它三个调整阈值),第一调整偏移值(或第三调整偏移值)和第二调整偏移值(或第四调整偏移值)。在此基础上,由于第一调整阈值与第二调整阈值互为相反数,因此,可以确定出第二调整阈值。由于第一调整阈值与第三调整阈值相同,因此,可以确定出第三调整阈值。由于第三调整偏移值与第一调整偏移值相同,因此,可以确定出第三调整偏移值。由于第四调整偏移值与第二调整偏移值相同,因此,可以确定出第四调整偏移值。由于第三调整阈值与第四调整阈值互为相反数,因此,可以确定出第四调整阈值。
当然,上述方式只是几个示例,对此不做限制,只要解码端能够获知第一调整阈值、第二调整阈值、第三调整阈值、第四调整阈值、第一调整偏移值、第二调整偏移值、第三调整偏移值和第四调整偏移值即可,即上述各数值可以通过解析得到或者推导得到。
在上述实施例中,高层语法可以包括但不限于如下语法中的一种:SPS级高层语法;PPS级高层语法;图像头级高层语法;帧级高层语法;片头级高层语法;CTU级高层语法;CU级高层语法。当然,上述只是高层语法的几个示例,对此高层语法的类型不做限制,只要能够通过高层语法携带当前块对应的调整阈值和调整偏移值即可。
在上述实施例中,当前块中的当前像素点的像素值可以为亮度分量或者色度分量。
在一种可能的实施方式中,可以通过增强调整模式使能标志位表示是否允许启用增强调整模式,若增强调整模式使能标志位允许当前块启用增强调整模式,则需要确定当前块中的当前像素点是否满足增强调整模式的启用条件,若当前像素点满足增强调整模式的启用条件,则采用增强调整模式对当前像素点的原始像素值进行调整。若增强调整模式使能标志位不允许当前块启用增强调整模式,则直接确定当前块中的每个像素点不满足增强调整模式的启用条件,不会采用增强调整模式对当前像素点的原始像素值进行调整。在此基础上,若当前块对应的增强调整模式使能标志位允许当前块启用增强调整模式,则确定当前块中的当前像素点是否满足增强调整模式的启用条件。若当前块对应的增强调整模式使能标志位不允许当前块启用增强调整模式,则确定当前块中的每个像素点不满足增强调整模式的启用条件。
示例性的,针对解码端来说,可以从高层语法中解析出当前块对应的增强调整模式使能标志位。比如说,若该增强调整模式使能标志位为第一取值(如1),则说明增强调整模式使能标志位允许当前块启用增强调整模式,若该增强调整模式使能标志位为第二取值(如0),则说明增强调整模式使能标志位不允许当前块启用增强调整模式。
在上述实施例中,高层语法可以包括但不限于如下语法中的一种:SPS级高层语法;PPS级高层语法;图像头级高层语法;帧级高层语法;片头级高层语法;CTU级高层语法;CU级高层语法。当然,上述只是高层语法的几个示例,对此高层语法的类型不做限制,只要能够通过高层语法携带当前块对应的增强调整模式使能标志位即可。
实施例7:针对实施例1和实施例5,可以采用增强滤波模式对当前像素点的原始像素值进行调整,在对当前像素点的原始像素值进行调整时,可以采用如下步骤:
步骤S31、对当前像素点的原始像素值进行去块滤波,得到当前像素点的滤波像素值。
步骤S32、从当前块的相邻块(针对垂直边界来说,该相邻块是当前块的左侧相邻块,对于水平边界来说,该相邻块是当前块的上侧相邻块)中确定与当前像素点对应的参考像素点,并对参考像素点的原始像素值进行去块滤波,得到参考像素点的滤波像素值。
示例性的,可以采用DBF滤波(即去块滤波)方式对当前像素点的原始像素值进行去块滤波,得到当前像素点的滤波像素值,并采用DBF滤波方式对参考像素点的原始像素值进行去块滤波,得 到参考像素点的滤波像素值。当然,也可以采用SAO滤波方式对当前像素点的原始像素值进行滤波,得到当前像素点的滤波像素值,并采用SAO滤波方式对参考像素点的原始像素值进行滤波,得到参考像素点的滤波像素值。或者,可以采用ALF滤波方式对当前像素点的原始像素值进行滤波,得到当前像素点的滤波像素值,并采用ALF滤波方式对参考像素点的原始像素值进行滤波,得到参考像素点的滤波像素值。为了方便描述,在后续实施例中,以采用DBF滤波方式对当前像素点和参考像素点的原始像素值进行去块滤波为例。
参见图3所示,基于当前像素点的位置,可以只对当前像素点和参考像素点进行水平DBF滤波,也可以只对当前像素点和参考像素点进行垂直DBF滤波,还可以先对当前像素点和参考像素点进行垂直DBF滤波,后对当前像素点和参考像素点进行水平DBF滤波。
步骤S33、基于当前像素点的滤波像素值和当前像素点的原始像素值,确定当前像素点的调整像素值。比如说,基于当前像素点的滤波像素值,当前像素点的原始像素值,第一滤波阈值,第二滤波阈值,第一滤波偏移值和第二滤波偏移值,确定当前像素点的调整像素值;其中,该第一滤波阈值和该第二滤波阈值可以互为相反数。
步骤S34、基于参考像素点的滤波像素值和参考像素点的原始像素值,确定参考像素点的调整像素值。比如说,基于参考像素点的滤波像素值,参考像素点的原始像素值,第三滤波阈值,第四滤波阈值,第三滤波偏移值和第四滤波偏移值,确定参考像素点的调整像素值;其中,该第三滤波阈值和该第四滤波阈值可以互为相反数。
示例性的,若当前像素点只满足普通滤波模式的启用条件,不满足增强滤波模式的启用条件,则执行步骤S31和步骤S32,将滤波像素值作为目标像素值(去块滤波过程的最终像素值)。若当前像素点满足普通滤波模式的启用条件和增强滤波模式的启用条件,则执行步骤S31-步骤S34,将调整像素值作为目标像素值(去块滤波过程的最终像素值)。
在步骤S33和步骤S34中,可以基于滤波像素值与未经过滤波处理的原始像素值,对像素点的原始像素值进行增强滤波处理,即对像素点的原始像素值进行增强滤波处理,得到增强处理后的调整像素值,以使增强处理后的调整像素值相比滤波像素值来说,更接近真实像素,避免由于过滤波导致的滤波像素值远大于或远小于像素点的真实像素,提升图像质量。
示例性的,针对步骤S33来说,若当前像素点的滤波像素值与当前像素点的原始像素值之间的差值大于第一滤波阈值,则可以基于当前像素点的滤波像素值,当前像素点的原始像素值和第一滤波偏移值,确定当前像素点的调整像素值。若当前像素点的滤波像素值与当前像素点的原始像素值之间的差值小于第二滤波阈值,则可以基于当前像素点的滤波像素值,当前像素点的原始像素值和第二滤波偏移值,确定当前像素点的调整像素值。
比如说,设Y1(i)表示当前像素点的原始像素值,Y2(i)表示当前像素点的滤波像素值,Y3(i)表示当前像素点的调整像素值,并设Yv(i)=(Y1(i)+Y2(i)+1)>>1。
基于此,若Y1(i)-Y2(i)>Tv,则Y3(i)=Clip(Yv(i)+f0v);若Y1(i)-Y2(i)<NTv,则Y3(i)=Clip(Yv(i)+f1v)。在上述公式中,Tv可以表示第一滤波阈值,f0v可以表示第一滤波偏移值,NTv可以表示第二滤波阈值,f1v可以表示第二滤波偏移值,NTv一般设为-Tv,也可以为其它值,clip(x)表示将x限制在预设取值范围内,该范围一般为[0,2^D-1],D为图像比特深度,对于8比特图像,范围为[0,255],对于10比特图像,范围为[0,1023]。
为了避免增强处理后的调整像素值超出像素值取值范围,在得到调整像素值时,可以通过Clip(修剪)操作,将调整像素值Clip到预设取值范围。当调整像素值大于预设取值范围的上限时,将调整像素值设置为预设取值范围的上限;当调整像素值小于预设取值范围的下限时,将调整像素值设置为预设取值范围的下限。举例来说,以8比特图像为例,当调整像素值小于0时,将调整像素值设置为0;当调整像素值大于255时,将调整像素值设置为255。
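示例性的,下面给出步骤S33中增强滤波处理的一段示意性C代码草稿(以第二滤波阈值为第一滤波阈值的相反数为例),步骤S34中参考像素点的处理方式与之完全相同。函数名均为本示例假设。

```c
static int clip_bd(int x, int bit_depth)
{
    int maxv = (1 << bit_depth) - 1;
    return x < 0 ? 0 : (x > maxv ? maxv : x);
}

/* y1为滤波前的原始像素值Y1(i),y2为去块滤波后的滤波像素值Y2(i),
   tv为第一滤波阈值Tv,f0v/f1v为第一/第二滤波偏移值,返回调整像素值Y3(i) */
static int dbr_refine(int y1, int y2, int tv, int f0v, int f1v, int bit_depth)
{
    int yv = (y1 + y2 + 1) >> 1;          /* Yv(i)=(Y1(i)+Y2(i)+1)>>1 */
    if (y1 - y2 > tv)  return clip_bd(yv + f0v, bit_depth);
    if (y1 - y2 < -tv) return clip_bd(yv + f1v, bit_depth);
    return y2;                            /* 其余情况保持滤波像素值不变 */
}
```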
示例性的,针对步骤S34来说,若参考像素点的滤波像素值与参考像素点的原始像素值之间的差值大于第三滤波阈值,则可以基于参考像素点的滤波像素值,参考像素点的原始像素值和第三滤波偏移值,确定参考像素点的调整像素值。若参考像素点的滤波像素值与参考像素点的原始像素值之间的差值小于第四滤波阈值,则可以基于参考像素点的滤波像素值,参考像素点的原始像素值和第四滤波偏移值,确定参考像素点的调整像素值。参考像素点的调整像素值的确定方式与当前像素点的调整像素值的确定方式类似,在此不再赘述。
示例性的,第三滤波阈值与第一滤波阈值可以相同,也可以不同,第三滤波偏移值与第一滤波偏移值可以相同,也可以不同,第四滤波阈值与第二滤波阈值可以相同,也可以不同,第四滤波偏移值与第二滤波偏移值可以相同,也可以不同。
在一种可能的实施方式中,针对解码端来说,可以从高层语法中解析出当前块对应的第一滤波阈值,第一滤波偏移值,第二滤波偏移值,第三滤波阈值,第三滤波偏移值和第四滤波偏移值。 由于第一滤波阈值与第二滤波阈值互为相反数,第三滤波阈值与第四滤波阈值互为相反数,因此,解码端可以确定出第二滤波阈值和第四滤波阈值。
在另一种可能的实施方式中,针对解码端来说,可以从高层语法中解析出当前块对应的第一滤波阈值,第一滤波偏移值,第二滤波偏移值,第四滤波阈值,第三滤波偏移值和第四滤波偏移值。由于第一滤波阈值与第二滤波阈值互为相反数,第三滤波阈值与第四滤波阈值互为相反数,因此,解码端可以确定出第二滤波阈值和第三滤波阈值。
在另一种可能的实施方式中,针对解码端来说,可以从高层语法中解析出当前块对应的第二滤波阈值,第一滤波偏移值,第二滤波偏移值,第三滤波阈值,第三滤波偏移值和第四滤波偏移值。由于第一滤波阈值与第二滤波阈值互为相反数,第三滤波阈值与第四滤波阈值互为相反数,因此,解码端可以确定出第一滤波阈值和第四滤波阈值。
在另一种可能的实施方式中,针对解码端来说,可以从高层语法中解析出当前块对应的第二滤波阈值,第一滤波偏移值,第二滤波偏移值,第四滤波阈值,第三滤波偏移值和第四滤波偏移值。由于第一滤波阈值与第二滤波阈值互为相反数,第三滤波阈值与第四滤波阈值互为相反数,因此,解码端可以确定出第一滤波阈值和第三滤波阈值。
在另一种可能的实施方式中,针对解码端来说,可以从高层语法中解析出当前块对应的第一滤波阈值(或第二滤波阈值,或第三滤波阈值,或第四滤波阈值,即能够通过一个滤波阈值推导出其它三个滤波阈值),第一滤波偏移值(或第三滤波偏移值)和第二滤波偏移值(或第四滤波偏移值)。在此基础上,由于第一滤波阈值与第二滤波阈值互为相反数,因此,可以确定出第二滤波阈值。由于第一滤波阈值与第三滤波阈值相同,因此,可以确定出第三滤波阈值。由于第三滤波偏移值与第一滤波偏移值相同,因此,可以确定出第三滤波偏移值。由于第四滤波偏移值与第二滤波偏移值相同,因此,可以确定出第四滤波偏移值。由于第三滤波阈值与第四滤波阈值互为相反数,因此,可以确定出第四滤波阈值。
当然,上述方式只是几个示例,对此不做限制,只要解码端能够获知第一滤波阈值、第二滤波阈值、第三滤波阈值、第四滤波阈值、第一滤波偏移值、第二滤波偏移值、第三滤波偏移值和第四滤波偏移值即可,即上述各数值可以通过解析得到或者推导得到。
在上述实施例中,高层语法可以包括但不限于如下语法中的一种:SPS级高层语法;PPS级高层语法;图像头级高层语法;帧级高层语法;片头级高层语法;CTU级高层语法;CU级高层语法。当然,上述只是高层语法的几个示例,对此高层语法的类型不做限制,只要能够通过高层语法携带当前块对应的滤波阈值和滤波偏移值即可。
在上述实施例中,当前块中的当前像素点的像素值可以为亮度分量或者色度分量。
在一种可能的实施方式中,可以通过增强滤波模式使能标志位表示是否允许启用增强滤波模式,若增强滤波模式使能标志位允许当前块启用增强滤波模式,则需要确定当前块中的当前像素点是否满足增强滤波模式的启用条件,若当前像素点满足增强滤波模式的启用条件,则采用增强滤波模式对当前像素点的原始像素值进行调整。若增强滤波模式使能标志位不允许当前块启用增强滤波模式,则直接确定当前块中的每个像素点不满足增强滤波模式的启用条件,不会采用增强滤波模式对当前像素点的原始像素值进行调整。在此基础上,若当前块对应的增强滤波模式使能标志位允许当前块启用增强滤波模式,则确定当前块中的当前像素点是否满足增强滤波模式的启用条件。若当前块对应的增强滤波模式使能标志位不允许当前块启用增强滤波模式,则确定当前块中的每个像素点不满足增强滤波模式的启用条件。
示例性的,针对解码端来说,可以从高层语法中解析出当前块对应的增强滤波模式使能标志位。比如说,若该增强滤波模式使能标志位为第一取值(如1),则说明增强滤波模式使能标志位允许当前块启用增强滤波模式,若该增强滤波模式使能标志位为第二取值(如0),则说明增强滤波模式使能标志位不允许当前块启用增强滤波模式。
在上述实施例中,高层语法可以包括但不限于如下语法中的一种:SPS级高层语法;PPS级高层语法;图像头级高层语法;帧级高层语法;片头级高层语法;CTU级高层语法;CU级高层语法。当然,上述只是高层语法的几个示例,对此高层语法的类型不做限制,只要能够通过高层语法携带当前块对应的增强滤波模式使能标志位即可。
实施例8:在满足普通滤波模式的启用条件时,可以采用DBF滤波方式(即去块滤波方式)对像素点的原始像素值进行去块滤波,由于DBF滤波分为垂直DBF滤波和水平DBF滤波,因此,可以采用如下步骤对像素点的原始像素值进行去块滤波处理:
第一步:原始像素值Y 1(i)通过垂直DBF滤波后得到滤波像素值Y 2(i);
第二步:像素值Y 2(i)通过水平DBF滤波后得到滤波像素值Y 3(i)。
示例性的,若只对像素点进行垂直DBF滤波,则只执行第一步,得到像素点的滤波像素值。 若只对像素点进行水平DBF滤波,则只执行第二步,得到像素点的滤波像素值,将第二步的像素值Y 2(i)替换为像素点的原始像素值即可。若对像素点先进行垂直DBF滤波,后进行水平DBF滤波,则依次执行第一步和第二步。
在满足普通滤波模式的启用条件和增强滤波模式的启用条件时,可以采用DBF滤波方式(即去块滤波方式)对像素点的原始像素值进行去块滤波,并对去块滤波后的滤波像素值进行调整,由于DBF滤波分为垂直DBF滤波和水平DBF滤波,因此,可以采用如下步骤对像素点的原始像素值进行去块滤波处理,并对去块滤波后的滤波像素值进行调整:
第一步:原始像素值Y 1(i)通过垂直DBF滤波后得到滤波像素值Y 2(i);
第二步:基于Y 2(i)-Y 1(i),获得调整像素值Y 3(i);
第三步:像素值Y 3(i)通过水平DBF滤波后得到滤波像素值Y 4(i);
第四步:基于Y 4(i)-Y 3(i),获得调整像素值Y 5(i)。
示例性的,若只对像素点进行垂直DBF滤波,则只执行第一步和第二步,得到像素点的调整像素值。若只对像素点进行水平DBF滤波,则只执行第三步和第四步,得到像素点的调整像素值,将第三步的像素值Y 3(i)替换为像素点的原始像素值即可。若对像素点先进行垂直DBF滤波,后进行水平DBF滤波,则依次执行第一步、第二步、第三步和第四步。若对像素点先进行水平DBF滤波,后进行垂直DBF滤波,则执行步骤类似,在此不再赘述。
示例性的,针对第二步和第四步,就是满足增强滤波模式的启用条件时,采用增强滤波模式的处理过程,即,对滤波像素值进行调整,得到调整像素值的过程。
在第二步中,假设Yv(i)=(Y1(i)+Y2(i)+1)>>1,若Y1(i)-Y2(i)>Tv,则Y3(i)=Clip(Yv(i)+f0v),若Y1(i)-Y2(i)<NTv,则Y3(i)=Clip(Yv(i)+f1v),否则,Y3(i)=Y2(i)(在一种实施例中,该种情况也可以对Y2(i)进行滤波获得Y3(i),如Y3(i)=Y2(i)+f2v)。
示例性的,clip(x)表示将x限制在预设的图像取值范围内,该图像取值范围一般可以为[0,2^D-1],D为图像比特深度,因此,对于8比特图像,该图像取值范围可以为[0,255],对于10比特图像,该图像取值范围为[0,1023]。阈值NTv一般设为-Tv,也可以为其它值。
与第二步的处理过程类似,在第四步中,假设Yh(i)=(Y3(i)+Y4(i)+1)>>1,若Y4(i)-Y3(i)>Th,则Y5(i)=Clip(Yh(i)+f0h),若Y4(i)-Y3(i)<NTh,则Y5(i)=Clip(Yh(i)+f1h),否则,Y5(i)=Y4(i)(在一种实施例中,该种情况也可以对Y4(i)进行滤波获得Y5(i),如Y5(i)=Y4(i)+f2h),NTh一般设为-Th,也可以为其它值。
在上述实施例中,Tv和NTv为滤波阈值,f0v、f1v和f2v为滤波偏移值,clip(x)表示将x限制在预设的取值范围内。比如说,Tv为上文中的第一滤波阈值和第三滤波阈值(以第一滤波阈值和第三滤波阈值相同为例),NTv为上文中的第二滤波阈值和第四滤波阈值(以第二滤波阈值和第四滤波阈值相同为例),f0v为上文中的第一滤波偏移值和第三滤波偏移值(以第一滤波偏移值和第三滤波偏移值相同为例),f1v为上文中的第二滤波偏移值和第四滤波偏移值(以第二滤波偏移值和第四滤波偏移值相同为例)。NTv=-Tv,即Tv和NTv互为相反数。
在上述实施例中,Th和NTh为滤波阈值,f0h、f1h和f2h为滤波偏移值,clip(x)表示将x限制在预设的取值范围内。比如说,Th为上文中的第一滤波阈值和第三滤波阈值(以第一滤波阈值和第三滤波阈值相同为例),NTh为上文中的第二滤波阈值和第四滤波阈值(以第二滤波阈值和第四滤波阈值相同为例),f0h为上文中的第一滤波偏移值和第三滤波偏移值(以第一滤波偏移值和第三滤波偏移值相同为例),f1h为上文中的第二滤波偏移值和第四滤波偏移值(以第二滤波偏移值和第四滤波偏移值相同为例)。NTh=-Th,即Th和NTh互为相反数。
实施例9:在DBF中,仅根据一个既定准则进行滤波,会存在过滤波或欠滤波的情况。例如,若进行DBF前的重建值为Y1,经过DBF滤波后的像素值为Y2,则可基于Y2-Y1进行分类。基于滤波残差分类的主要好处是,可以对于一些过滤波或伪滤波的像素值进行特殊增强,以实现这些类别的像素更接近原始值的效果。所谓过滤波,是指Y2远大于(或远小于)Y1,以至于Y2远大于(或远小于)原始像素值。所谓伪滤波,是指Y2-Y1为0,或接近于0,即这些像素值经过滤波后仍保持不变,未达到滤波效果。针对上述发现,本实施例中,可以采用增强调整模式对像素点的像素值进行调整,也就是说,若对当前块中的当前像素点启用增强调整模式,则可以采用增强调整模式对像素点的原始像素值进行调整,而不再采用普通滤波模式或者增强滤波模式对像素点的原始像素值进行调整。
在一种可能的实施方式中,原始像素值的调整过程,可以包括以下步骤:
第一步:原始像素值Y 1(i)通过垂直DBF滤波后得到滤波像素值Y 2(i);
第二步:基于Y 2(i)-Y 1(i),获得调整像素值Y 3(i);
第三步:像素值Y 3(i)通过水平DBF滤波后得到滤波像素值Y 4(i);
第四步:基于Y 4(i)-Y 3(i),获得调整像素值Y 5(i)。
在第二步中,存在如下两种情况,abs(Y 2(i)-Y 1(i))<阈值和abs(Y 2(i)-Y 1(i))不小于阈值,若abs(Y 2(i)-Y 1(i))<阈值,还可以分为以下两种情况,即BS为0和BS大于0。示例性的,该阈值可以为上述实施例的第一滤波阈值或者第二滤波阈值,比如说,第一滤波阈值和第二滤波阈值互为相反数,若第一滤波阈值为正值,则该阈值可以为第一滤波阈值,若第二滤波阈值为正值,则该阈值可以为第二滤波阈值。
综上所述,可以将原始像素值的调整过程划分为如下三种情况:
情况1、BS为0,此时不进行滤波(即Y 2(i)等于Y 1(i),相当于不对原始像素值Y 1(i)进行垂直DBF滤波,即不会执行第一步),但是,可以采用增强调整模式对原始像素值Y 1(i)进行调整,得到调整像素值。
情况2、BS大于0,但是,abs(Y 2(i)-Y 1(i))<阈值,此时可以进行滤波(即对原始像素值Y 1(i)进行垂直DBF滤波,即执行第一步)。在执行第一步的基础上,还可以采用增强滤波模式对滤波像素值Y 2(i)进行调整,得到像素点的调整像素值Y 3(i)。
情况3、BS大于0,但是,abs(Y 2(i)-Y 1(i))不小于阈值,此时可以进行滤波(即对原始像素值Y 1(i)进行垂直DBF滤波,即执行第一步)。在执行第一步的基础上,不再采用增强滤波模式对滤波像素值Y 2(i)进行调整,即不再执行第二步,也就是,Y 3(i)=Y` 2(i)。
在第四步中,存在如下两种情况,abs(Y4(i)-Y3(i))<阈值和abs(Y4(i)-Y3(i))不小于阈值,若abs(Y4(i)-Y3(i))<阈值,则还可以分为以下两种情况,即BS为0和BS大于0。
综上所述,可以将原始像素值的调整过程划分为如下三种情况:
情况1、BS为0,此时不进行滤波(即Y 4(i)等于Y 3(i),相当于不对原始像素值Y 3(i)进行水平DBF滤波,不会执行第三步),但是,可以采用增强调整模式对原始像素值Y 3(i)进行调整,得到调整像素值。
情况2、BS大于0,但是,abs(Y4(i)-Y3(i))<阈值,此时可以进行滤波(即对原始像素值Y 3(i)进行水平DBF滤波,即执行第三步),在执行第三步的基础上,还可以采用增强滤波模式对像素点的滤波像素值Y 4(i)进行调整,得到像素点的调整像素值Y 5(i)。
情况3、BS大于0,但是,abs(Y4(i)-Y3(i))不小于阈值,此时可以进行滤波(即对原始像素值Y 3(i)进行水平DBF滤波,即执行第三步)。在执行第三步的基础上,不再采用增强滤波模式对滤波像素值Y 4(i)进行调整,即不再执行第四步,也就是,Y 5(i)=Y 4(i)。
综上所述,若BS为0,则采用增强调整模式进行处理,即可以不进行滤波处理,也就是说,Y 2(i)=Y 1(i),且Y 4(i)=Y 3(i),在此基础上,可以采用增强调整模式对Y 1(i)进行调整,并可以采用增强调整模式对Y 3(i)进行调整,因此,可以采用如下步骤进行DBF滤波处理:
第一步:原始像素值Y 1(i)通过垂直DBF滤波后得到滤波像素值Y 2(i)。
第二步:若BS为0,则通过增强调整模式对Y 1(i)进行调整,得到调整像素值Y 3(i)。若BS大于0,且abs(Y 2(i)-Y 1(i))<阈值,则启用增强滤波模式,基于Y 2(i)-Y 1(i),获得调整像素值Y 3(i),参见实施例8的第二步。若BS大于0,且abs(Y 2(i)-Y 1(i))不小于阈值,则启用普通滤波模式,不再对滤波像素值Y 2(i)进行调整,即,Y 3(i)=Y` 2(i)。
示例性的,若BS为0,实际上并不执行第一步,即并不需要得到滤波像素值Y 2(i)。
第三步:像素值Y 3(i)通过水平DBF滤波后得到滤波像素值Y 4(i)。
第四步:若BS为0,则通过增强调整模式对Y 3(i)进行调整,得到调整像素值y 5(i)。若BS大于0,且abs(Y4(i)-Y3(i))<阈值,则启用增强滤波模式,基于Y 4(i)-Y 3(i),获得调整像素值Y 5(i),参见实施例8的第四步。若BS大于0,且abs(Y4(i)-Y3(i))不小于阈值,则启用普通滤波模式,不再对滤波像素值Y 4(i)进行调整,即,Y 5(i)=Y 4(i)。
示例性的,若BS为0,实际上并不执行第三步,即并不需要得到滤波像素值Y 4(i)。
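示例性的,下面用一段示意性的C代码草稿概括上述单个方向(垂直或水平)上三种情况的选择:入参中的滤波像素值、增强调整结果与增强滤波结果假设已分别按普通滤波模式、增强调整模式和增强滤波模式计算得到,函数名与参数均为本示例假设。

```c
#include <stdlib.h>  /* abs() */

/* orig:本方向滤波前的像素值(垂直方向为Y1(i),水平方向为Y3(i));
   filtered:普通DBF滤波后的像素值(仅BS大于0时有意义);
   adjusted:按增强调整模式直接对orig调整得到的像素值;
   refined:按增强滤波模式基于filtered与orig调整得到的像素值;
   bs为边界强度,th为判断是否启用增强滤波模式的预设阈值 */
static int deblock_one_direction(int orig, int filtered, int adjusted,
                                 int refined, int bs, int th)
{
    if (bs == 0)
        return adjusted;                   /* 情况1:BS为0,增强调整模式 */
    if (abs(filtered - orig) < th)
        return refined;                    /* 情况2:BS大于0且差值小于阈值,增强滤波模式 */
    return filtered;                       /* 情况3:BS大于0且差值不小于阈值,普通滤波模式 */
}
```

垂直方向处理得到Y3(i)后,再以Y3(i)作为输入按相同方式完成水平方向的处理,即可得到最终的Y5(i)。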
在另一可能的实施方式中,若BS大于0,则进行滤波处理,但滤波处理后abs(Y 2(i)-Y 1(i))<阈值,则采用增强滤波模式进行处理,即可以采用如下步骤进行DBF滤波处理:
第一步:原始像素值Y 1(i)通过垂直DBF滤波后得到滤波像素值Y 2(i)。
第二步:若BS大于0,但是,对Y 1(i)进行垂直DBF滤波后,仍然满足abs(Y 2(i)-Y 1(i))<阈值,则通过增强滤波模式获得调整像素值Y 3(i),比如说,Y 3(i)通过Y 1(i)加补偿值获得。否则,若abs(Y 2(i)-Y 1(i))不小于阈值,则Y 3(i)=Y` 2(i)。
第三步:像素值Y 3(i)通过水平DBF滤波后得到滤波像素值Y 4(i)。
第四步:若BS大于0,但是,对Y 3(i)进行水平DBF滤波后,仍然满足abs(Y 4(i)-Y 3(i))<阈值,则通过增强滤波模式获得调整像素值Y 5(i),比如说,Y 5(i)通过Y 3(i)加补偿值获得。否则,若 abs(Y4(i)-Y3(i))不小于阈值,则Y 5(i)=Y 4(i)。
实施例10:针对实施例9,若BS为0,则通过增强调整模式对Y 1(i)进行调整,得到调整像素值Y 3(i),具体调整过程参见如下步骤。若BS为0,则通过增强调整模式对Y 3(i)进行调整,得到调整像素值Y 5(i),该过程与得到调整像素值Y 3(i)的过程类似,在此不再赘述。
首先,确定Y 1(i)的梯度值,Y 1(i)可以是当前像素点的原始像素值,也可以是参考像素点的原始像素值。针对垂直边界来说,可以计算Y 1(i)的水平梯度值DY 1(i);针对水平边界来说,可以计算Y 1(i)的垂直梯度值DY 1(i)。比如说,假设pi和qi分别为当前像素点的原始像素值和参考像素点的原始像素值(对应Y 1(i)),则计算当前像素点的原始像素值pi的梯度值DP0:DP0=(pi-qi+2)>>2,计算参考像素点的原始像素值qi的梯度值DQ0:DQ0=(qi-pi+2)>>2。
然后,基于DY 1(i)的大小,进行补偿调整得到Y 3(i)。比如说,采用如下方式确定出当前像素点的原始像素值pi对应的调整像素值Pi:若DPi>alt_dbr_th,则Pi=clip(pi+alt_dbr_offset0);若DPi<alt_dbr_th,则Pi=clip(pi+alt_dbr_offset1),i为0、1、2等。采用如下方式确定出参考像素点的原始像素值qi对应的调整像素值Qi:若DQi>alt_dbr_th,则Qi=clip(qi+alt_dbr_offset0);若DQi<-alt_dbr_th,则Qi=clip(qi+alt_dbr_offset1),i为0、1、2。
在上述公式中,alt_dbr_th表示第一调整阈值和第三调整阈值(以第三调整阈值与第一调整阈值相同为例),alt_dbr_offset0表示第一调整偏移值和第三调整偏移值(以第三调整偏移值与第一调整偏移值相同为例),alt_dbr_offset1表示第二调整偏移值和第四调整偏移值(以第四调整偏移值与第二调整偏移值相同为例),-alt_dbr_th表示第二调整阈值和第四调整阈值(以第四调整阈值与第二调整阈值相同为例),且-alt_dbr_th与alt_dbr_th互为相反数。
实施例11:通过高层语法(如SPS级高层语法)控制增强调整模式的启用。比如说,在序列头中编码/解码标志位adbr_enable_flag,即编码端在序列头中编码标志位adbr_enable_flag,解码端从序列头中解码标志位adbr_enable_flag。adbr_enable_flag为二值变量,值为‘1’表示可使用增强调整模式,值为‘0’表示不应使用增强调整模式。AdbrEnableFlag的值等于adbr_enable_flag,如果位流中不存在adbr_enable_flag,AdbrEnableFlag的值为0。
综上所述,针对解码端来说,可以从高层语法中解析出当前块对应的增强调整模式使能标志位(即AdbrEnableFlag),若该增强调整模式使能标志位为1,则说明增强调整模式使能标志位允许当前块启用增强调整模式,若该增强调整模式使能标志位为0,则说明增强调整模式使能标志位不允许当前块启用增强调整模式。
实施例12:通过高层语法(如SPS级高层语法)同时控制增强滤波模式的启用和增强调整模式的启用。比如说,在序列头中编码/解码标志位dbr_enable_flag,即编码端在序列头中编码标志位dbr_enable_flag,解码端从序列头中解码标志位dbr_enable_flag。
dbr_enable_flag为二值变量,值为‘1’表示可允许使用增强滤波模式和增强调整模式,值为‘0’表示不允许使用增强滤波模式和增强调整模式。DbrEnableFlag的值等于dbr_enable_flag,如果位流中不存在dbr_enable_flag,DbrEnableFlag的值为0。
综上所述,针对解码端来说,可以从高层语法中解析出当前块对应的增强滤波模式使能标志位和增强调整模式使能标志位(即DbrEnableFlag,也就是DbrEnableFlag同时作为增强滤波模式使能标志位和增强调整模式使能标志位),若DbrEnableFlag为1,则说明允许当前块启用增强滤波模式和增强调整模式,若DbrEnableFlag为0,则说明不允许当前块启用增强滤波模式和增强调整模式。
实施例13:高层语法(如图像头高层语法)的一种表述可以参见表1所示,比如说,在图像头编码/解码表1所示的语法。即,编码端在图像头编码表1所示的语法,解码端从图像头中解码表1所示的语法。
表1
(表1的语法表在原文中以图片形式给出,即Figure PCTCN2022077298-appb-000001与Figure PCTCN2022077298-appb-000002,其中包含下文所述各语法元素。)
在表1中,相关语法的含义如下所示:
图像级去块滤波垂直调整允许标志picture_dbr_v_enable_flag,picture_dbr_v_enable_flag是二值变量,值为‘1’表示当前图像允许使用去块滤波垂直调整,值为‘0’表示当前图像不允许使用去块滤波垂直调整。PictureDbrVEnableFlag的值等于picture_dbr_v_enable_flag的值,如果位流中不存在picture_dbr_v_enable_flag,则PhDbrVEnableFlag的值为0。
示例性的,针对增强调整模式来说,PictureDbrVEnableFlag与增强调整模式使能标志位对应,是针对垂直DBF滤波的增强调整模式使能标志位。也就是说,在需要进行垂直DBF滤波时,PictureDbrVEnableFlag表示允许启用增强调整模式,或不允许启用增强调整模式。
示例性的,针对增强滤波模式来说,PictureDbrVEnableFlag与增强滤波模式使能标志位对应,是针对垂直DBF滤波的增强滤波模式使能标志位。也就是说,在需要进行垂直DBF滤波时,PictureDbrVEnableFlag表示允许启用增强滤波模式,或不允许启用增强滤波模式。
综上所述,PictureDbrVEnableFlag可以表示针对垂直DBF滤波的增强调整模式使能标志位和针对垂直DBF滤波的增强滤波模式使能标志位,也就是说,增强调整模式使能标志位和增强滤波模式使能标志位共用同一个标志位,即,当前图像同时允许启用增强调整模式和增强滤波模式,或者,当前图像同时不允许启用增强调整模式和增强滤波模式。
去块滤波垂直调整阈值dbr_v_threshold_minus1,dbr_v_threshold_minus1用于确定当前图像去块滤波垂直调整的阈值,取值范围是0-1。DbrVThreshold的值等于dbr_v_threshold_minus1的值加1,如果位流中不存在dbr_v_threshold_minus1,则DbrVThreshold的值为0。
示例性的,针对增强调整模式来说,DbrVThreshold与第一调整阈值(以第三调整阈值与第一调整阈值相同为例)对应,是针对垂直DBF滤波的第一调整阈值。也就是说,在需要进行垂直DBF滤波时,DbrVThreshold表示上述实施例的第一调整阈值。而且,上述实施例的第二调整阈值(以第四调整阈值与第二调整阈值相同为例)与第一调整阈值互为相反数,因此,也可以基于DbrVThreshold确定出第二调整阈值。
示例性的,针对增强滤波模式来说,DbrVThreshold与第一滤波阈值(以第三滤波阈值与第一滤波阈值相同为例)对应,是针对垂直DBF滤波的第一滤波阈值。也就是说,在需要进行垂直DBF滤波时,DbrVThreshold表示上述实施例的第一滤波阈值。而且,上述实施例的第二滤波阈值(以第四滤波阈值与第二滤波阈值相同为例)与第一滤波阈值互为相反数,因此,也可以基于DbrVThreshold确定出第二滤波阈值。
综上所述,DbrVThreshold可以表示针对垂直DBF滤波的第一调整阈值和第一滤波阈值,也就是说,第一调整阈值和第一滤波阈值相同,二者为同一个取值。
去块滤波垂直调整偏移值0(dbr_v_offset0_minus1),用于确定当前图像去块滤波垂直调整的偏移值0,取值范围是0-3。DbrVOffset0的值等于dbr_v_offset0_minus1的值加1后再取相反数得到的负值,如果位流中不存在dbr_v_offset0_minus1,则DbrVOffset0的值为0。
示例性的,针对增强滤波模式来说,DbrVOffset0与第一滤波偏移值(以第三滤波偏移值与第一滤波偏移值相同为例)对应,是针对垂直DBF滤波的第一滤波偏移值,即,在需要进行垂直DBF滤波时,DbrVOffset0表示上述实施例的第一滤波偏移值。
去块滤波垂直调整偏移值1(dbr_v_offset1_minus1),用于确定当前图像去块滤波垂直调整的偏移值1,取值范围可以是0-3。DbrVOffset1的值等于dbr_v_offset1_minus1的值加1。如果位流中不存在dbr_v_offset1_minus1,则DbrVOffset1的值为0。
示例性的,针对增强滤波模式来说,DbrVOffset1与第二滤波偏移值(以第四滤波偏移值与第二滤波偏移值相同为例)对应,是针对垂直DBF滤波的第二滤波偏移值,即,在需要进行垂直DBF滤波时,DbrVOffset1表示上述实施例的第二滤波偏移值。
增强去块滤波垂直调整偏移值0(dbr_v_alt_offset0_minus1),dbr_v_alt_offset0_minus1用于确定当前图像去块滤波BS为0时的垂直调整的偏移值0,dbr_v_alt_offset0_minus1的取值范围可以是0-3。DbrVAltOffset0的值可以等于dbr_v_alt_offset0_minus1的值加1后再取相反数得到的负值,如果位流中不存在dbr_v_alt_offset0_minus1,则DbrVAltOffset0的值为0。
示例性的,针对增强调整模式来说,DbrVAltOffset0与第一调整偏移值(以第三调整偏移值与第一调整偏移值相同为例)对应,是针对垂直DBF滤波的第一调整偏移值,即,在进行垂直DBF滤波时,DbrVAltOffset0表示上述实施例的第一调整偏移值。
增强去块滤波垂直调整偏移值1(dbr_v_alt_offset1_minus1),dbr_v_alt_offset1_minus1用于确定当前图像去块滤波BS为0时的垂直调整的偏移值1,dbr_v_alt_offset1_minus1的取值范围可以是0-3。其中,DbrVAltOffset1的值等于dbr_v_alt_offset1_minus1的值加1,如果位流中不存在dbr_v_alt_offset1_minus1,则DbrVAltOffset1的值为0。
示例性的,针对增强调整模式来说,DbrVAltOffset1与第二调整偏移值(以第四调整偏移值与第二调整偏移值相同为例)对应,是针对垂直DBF滤波的第二调整偏移值,即,在进行垂直DBF滤波时,DbrVAltOffset1表示上述实施例的第二调整偏移值。
图像级去块滤波水平调整允许标志picture_dbr_h_enable_flag,picture_dbr_h_enable_flag是二值变量,值为‘1’表示当前图像允许使用去块滤波水平调整,值为‘0’表示当前图像不允许使用去块滤波水平调整。PhDbrHEnableFlag的值等于picture_dbr_h_enable_flag的值,如果位流中不存在picture_dbr_h_enable_flag,则PhDbrHEnableFlag的值为0。
示例性的,针对增强调整模式来说,PhDbrHEnableFlag与增强调整模式使能标志位对应,是针对水平DBF滤波的增强调整模式使能标志位。也就是说,在需要进行水平DBF滤波时,PhDbrHEnableFlag表示允许启用增强调整模式,或不允许启用增强调整模式。
示例性的,针对增强滤波模式来说,PhDbrHEnableFlag与增强滤波模式使能标志位对应,是针对水平DBF滤波的增强滤波模式使能标志位。也就是说,在需要进行水平DBF滤波时,PhDbrHEnableFlag表示允许启用增强滤波模式,或不允许启用增强滤波模式。
综上所述,PhDbrHEnableFlag可以表示针对水平DBF滤波的增强调整模式使能标志位和针对水平DBF滤波的增强滤波模式使能标志位,也就是说,增强调整模式使能标志位和增强滤波模式使能标志位共用同一个标志位,即,当前图像同时允许启用增强调整模式和增强滤波模式,或者,当前图像同时不允许启用增强调整模式和增强滤波模式。
去块滤波水平调整阈值dbr_h_threshold_minus1,dbr_h_threshold_minus1用于确定当前图像去块滤波水平调整的阈值,取值范围是0-1。DbrHThreshold的值等于dbr_h_threshold_minus1的值加1,如果位流中不存在dbr_h_threshold_minus1,则DbrHThreshold的值为0。
示例性的,针对增强调整模式来说,DbrHThreshold与第一调整阈值(以第三调整阈值与第一调整阈值相同为例)对应,是针对水平DBF滤波的第一调整阈值。也就是说,在需要进行水平DBF滤波时,DbrHThreshold表示上述实施例的第一调整阈值。而且,上述实施例的第二调整阈值(以第四调整阈值与第二调整阈值相同为例)与第一调整阈值互为相反数,因此,也可以基于DbrHThreshold确定出第二调整阈值。
示例性的,针对增强滤波模式来说,DbrHThreshold与第一滤波阈值(以第三滤波阈值与第一滤波阈值相同为例)对应,是针对水平DBF滤波的第一滤波阈值。也就是说,在需要进行水平DBF滤波时,DbrHThreshold表示上述实施例的第一滤波阈值。而且,上述实施例的第二滤波阈值(以第四滤波阈值与第二滤波阈值相同为例)与第一滤波阈值互为相反数,因此,也可以基于DbrHThreshold确定出第二滤波阈值。
综上所述,DbrHThreshold可以表示针对水平DBF滤波的第一调整阈值和第一滤波阈值,也就是说,第一调整阈值和第一滤波阈值相同,二者为同一个取值。
去块滤波水平调整偏移值0(dbr_h_offset0_minus1),用于确定当前图像去块滤波水平调整的偏移值0,取值范围是0-3。DbrHOffset0的值等于dbr_h_offset0_minus1的值加1后再取相反数得到的负值,如果位流中不存在dbr_h_offset0_minus1,则DbrHOffset0的值为0。
示例性的,针对增强滤波模式来说,DbrHOffset0与第一滤波偏移值(以第三滤波偏移值与第一滤波偏移值相同为例)对应,是针对水平DBF滤波的第一滤波偏移值,即,在需要进行水平DBF滤波时,DbrHOffset0表示上述实施例的第一滤波偏移值。
去块滤波水平调整偏移值1(dbr_h_offset1_minus1),用于确定当前图像去块滤波水平调整的偏移值1,取值范围可以是0-3。DbrHOffset1的值等于dbr_h_offset1_minus1的值加1。如果位流中不存在dbr_h_offset1_minus1,则DbrHOffset1的值为0。
示例性的,针对增强滤波模式来说,DbrHOffset1与第二滤波偏移值(以第四滤波偏移值与第二滤波偏移值相同为例)对应,是针对水平DBF滤波的第二滤波偏移值,即,在需要进行水平DBF滤波时,DbrHOffset1表示上述实施例的第二滤波偏移值。
增强去块滤波水平调整偏移值0(dbr_h_alt_offset0_minus1),dbr_h_alt_offset0_minus1用于确定当前图像去块滤波BS为0时的水平调整的偏移值0,dbr_h_alt_offset0_minus1的取值范围可以是0-3。DbrHAltOffset0的值可以等于dbr_h_alt_offset0_minus1的值加1后再取相反数得到的负值,如果位流中不存在dbr_h_alt_offset0_minus1,DbrHAltOffset0的值为0。
示例性的,针对增强调整模式来说,DbrHAltOffset0与第一调整偏移值(以第三调整偏移值与 第一调整偏移值相同为例)对应,是针对水平DBF滤波的第一调整偏移值,即,在进行水平DBF滤波时,DbrHAltOffset0表示上述实施例的第一调整偏移值。
增强去块滤波水平调整偏移值1(dbr_h_alt_offset1_minus1),dbr_h_alt_offset1_minus1用于确定当前图像去块滤波BS为0时的水平调整的偏移值1,dbr_h_alt_offset1_minus1的取值范围可以是0-3。其中,DbrHAltOffset1的值等于dbr_h_alt_offset1_minus1的值加1,如果位流中不存在dbr_h_alt_offset1_minus1,则DbrHAltOffset1的值为0。
示例性的,针对增强调整模式来说,DbrHAltOffset1与第二调整偏移值(以第四调整偏移值与第二调整偏移值相同为例)对应,是针对水平DBF滤波的第二调整偏移值,即,在进行水平DBF滤波时,DbrHAltOffset1表示上述实施例的第二调整偏移值。
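示例性的,下面给出一段示意性的C代码草稿,展示由表1中垂直方向相关语法元素推导PictureDbrVEnableFlag、DbrVThreshold、DbrVOffset0等变量的过程(水平方向同理)。其中的结构体与字段名均为本示例假设,语法元素是否出现在位流中的条件以标准文本为准。

```c
/* 图像头中与垂直DBF相关的语法元素(解析自位流,字段名为本示例假设) */
typedef struct {
    int picture_dbr_v_enable_flag;
    int dbr_v_threshold_minus1;
    int dbr_v_offset0_minus1;
    int dbr_v_offset1_minus1;
    int dbr_v_alt_offset0_minus1;
    int dbr_v_alt_offset1_minus1;
} dbr_v_syntax_t;

typedef struct {
    int enable;        /* PictureDbrVEnableFlag */
    int threshold;     /* DbrVThreshold */
    int offset0;       /* DbrVOffset0(负值) */
    int offset1;       /* DbrVOffset1 */
    int alt_offset0;   /* DbrVAltOffset0(负值) */
    int alt_offset1;   /* DbrVAltOffset1 */
} dbr_v_params_t;

/* flag_present:位流中是否存在picture_dbr_v_enable_flag;不存在或标志为0时各变量取0 */
static dbr_v_params_t derive_dbr_v_params(const dbr_v_syntax_t *s, int flag_present)
{
    dbr_v_params_t p = {0, 0, 0, 0, 0, 0};
    if (!flag_present || !s->picture_dbr_v_enable_flag)
        return p;
    p.enable      = 1;
    p.threshold   = s->dbr_v_threshold_minus1 + 1;        /* 加1 */
    p.offset0     = -(s->dbr_v_offset0_minus1 + 1);       /* 加1后取相反数 */
    p.offset1     = s->dbr_v_offset1_minus1 + 1;
    p.alt_offset0 = -(s->dbr_v_alt_offset0_minus1 + 1);
    p.alt_offset1 = s->dbr_v_alt_offset1_minus1 + 1;
    return p;
}
```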
实施例14:高层语法(如图像头高层语法)的一种表述可以参见表2所示,比如说,在图像头编码/解码表2所示的语法。即,编码端在图像头编码表2所示的语法,解码端从图像头中解码表2所示的语法。
表2
(表2的语法表在原文中以图片形式给出,即Figure PCTCN2022077298-appb-000003,其中包含下文所述各语法元素。)
在表2中,相关语法的含义如下所示:
图像级增强垂直调整允许标志picture_alt_dbr_v_enable_flag,是一个二值变量,值为‘1’表示当前图像允许使用增强垂直调整,值为‘0’表示当前图像不允许使用增强垂直调整。PictureAltDbrVEnableFlag的值可以等于picture_alt_dbr_v_enable_flag的值,如果位流中不存在picture_alt_dbr_v_enable_flag,则PhAltDbrVEnableFlag的值为0。
示例性的,针对增强调整模式来说,PictureAltDbrVEnableFlag与增强调整模式使能标志位对应,是针对垂直DBF滤波的增强调整模式使能标志位,也就是说,在需要进行垂直DBF滤波时,PictureAltDbrVEnableFlag表示允许启用增强调整模式,或不允许启用增强调整模式。
与实施例13中的PictureDbrVEnableFlag不同的是,PictureAltDbrVEnableFlag只是针对垂直DBF滤波的增强调整模式使能标志位,而不是针对垂直DBF滤波的增强滤波模式使能标志位。
图像级增强水平调整允许标志picture_alt_dbr_h_enable_flag,是一个二值变量,值为‘1’表示当前图像允许使用增强水平调整,值为‘0’表示当前图像不允许使用增强水平调整。PictureAltDbrHEnableFlag的值可以等于picture_alt_dbr_h_enable_flag的值,如果位流中不存在picture_alt_dbr_h_enable_flag,PhAltDbrHEnableFlag的为0。
示例性的,针对增强调整模式来说,PhAltDbrHEnableFlag与增强调整模式使能标志位对应, 是针对水平DBF滤波的增强调整模式使能标志位,也就是说,在需要进行水平DBF滤波时,PhAltDbrHEnableFlag表示允许启用增强调整模式,或不允许启用增强调整模式。
与实施例13中的PictureDbrHEnableFlag不同的是,PhAltDbrHEnableFlag只是水平DBF滤波的增强调整模式使能标志位,而不是针对水平DBF滤波的增强滤波模式使能标志位。
关于表2中其它语法的含义,与表1中相关语法的含义相同,在此不再重复赘述。
实施例15:针对实施例11来说,adbr_enable_flag的编码和解码,可以在去块滤波模式启用时才进行adbr_enable_flag的编码和解码,也就是说,可以先确定是否启用去块滤波模式,如果是,才会在序列头中编码/解码标志位adbr_enable_flag,如果否,则不在序列头中编码/解码标志位adbr_enable_flag。综上所述,增强调整模式(adbr_enable_flag用于控制增强调整模式的启用)是去块滤波模式的子模式,在去块滤波模式启用时才允许启用增强调整模式。
针对实施例12来说,dbr_enable_flag的编码和解码,可以在去块滤波模式启用时才进行dbr_enable_flag的编码和解码,也就是说,可以先确定是否启用去块滤波模式,如果是,才会在序列头中编码/解码标志位dbr_enable_flag,如果否,则不在序列头中编码/解码标志位dbr_enable_flag。综上所述,增强滤波模式(dbr_enable_flag用于控制增强滤波模式的启用)是去块滤波模式的子模式,在去块滤波模式启用时才允许启用增强滤波模式。
针对实施例13来说,表1所示的高层语法(用于控制增强滤波模式的启用和增强调整模式的启用)的编码和解码,可以在去块滤波模式启用时才进行该高层语法的编码和解码,也就是说,可以先确定是否启用去块滤波模式,如果是,才会在图像头中编码/解码表1所示的高层语法,如果否,则不在图像头中编码/解码表1所示的高层语法。
针对实施例14来说,表2所示的高层语法(用于控制增强滤波模式的启用和增强调整模式的启用)的编码和解码,可以在去块滤波模式启用时才进行该高层语法的编码和解码,也就是说,可以先确定是否启用去块滤波模式,如果是,才会在图像头中编码/解码表2所示的高层语法,如果否,则不在图像头中编码/解码表2所示的高层语法。
实施例16:针对亮度分量(即当前块为亮度分量)的去块滤波过程,比如说,采用增强调整模式对亮度分量进行调整,或者,采用增强滤波模式对亮度分量进行调整。
关于亮度分量的DBR参数的推导过程:
如果当前待滤波边界为垂直边界且PictureDbrVEnableFlag的值为1,或者,当前待滤波边界为水平边界且PictureDbrHEnableFlag的值为1,则PictureDbrEnableFlag的值为1;否则,PictureDbrEnableFlag为0。以及,如果当前待滤波边界为垂直边界且PictureAltDbrVEnableFlag的值为1,或者,当前待滤波边界为水平边界且PictureAltDbrHEnableFlag的值为1,则PictureAltDbrEnableFlag的值为1;否则,PictureAltDbrEnableFlag为0。
按照如下方法导出dbr_th、dbr_offset0、dbr_offset1、alt_dbr_offset0、alt_dbr_offset1:
对于垂直边界,dbr_th=DbrVThreshold,dbr_offset0=DbrVOffset0,dbr_offset1=DbrVOffset1,alt_dbr_offset0=DbrVAltOffset0,alt_dbr_offset1=DbrVAltOffset1。
对于水平边界,dbr_th=DbrHThreshold,dbr_offset0=DbrHOffset0,dbr_offset1=DbrHOffset1,alt_dbr_offset0=DbrHAltOffset0,alt_dbr_offset1=DbrHAltOffset1。
(1)亮度分量的BS等于4时的边界滤波过程(采用增强滤波模式进行处理):
在BS的值为4时,对p0、p1、p2和q0、q1、q2的滤波计算过程如下:
P0=(p2*3+p1*8+p0*10+q0*8+q1*3+16)>>5;
P1=(p2*4+p1*5+p0*4+q0*3+8)>>4;
P2=(p3*2+p2*2+p1*2+p0*1+q0*1+4)>>3;
Q0=(p1*3+p0*8+q0*10+q1*8+q2*3+16)>>5;
Q1=(p0*3+q0*4+q1*5+q2*4+8)>>4;
Q2=(p0*1+q0*1+q1*2+q2*2+q3*2+4)>>3。
P0、P1、P2和Q0、Q1、Q2均是滤波后的值(即滤波像素值)。
在得到P0、P1、P2和Q0、Q1、Q2后,若PhDbrEnableFlag为1时,则:
若pi>Pi+dbr_th,则Pi’=clip((Pi+pi+1)>>1+dbr_offset0);否则,若pi<Pi–dbr_th,则Pi’=clip((Pi+pi+1)>>1+dbr_offset1),i=0,1,2。
若qi>Qi+dbr_th,则Qi’=clip((Qi+qi+1)>>1+dbr_offset0);否则,若qi<Qi–dbr_th,则Qi’=clip((Qi+qi+1)>>1+dbr_offset1),i=0,1,2。
在上述公式中,pi可以表示原始像素值,qi可以表示原始像素值,Pi可以表示滤波像素值,Qi可以表示滤波像素值,Pi’可以表示调整像素值,Qi’可以表示调整像素值。
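示例性的,下面给出上述DBR调整部分的一段示意性C代码草稿:pi为滤波前的像素值,Pi为去块滤波后的像素值,返回调整后的Pi';qi/Qi的处理与之完全相同,BS为3、2、1时的调整方式也一致。函数名均为本示例假设。

```c
static int clip_bd(int x, int bit_depth)
{
    int maxv = (1 << bit_depth) - 1;
    return x < 0 ? 0 : (x > maxv ? maxv : x);
}

/* dbr_th、dbr_offset0、dbr_offset1按前文"亮度分量的DBR参数的推导过程"得到 */
static int dbr_refine_filtered(int pi, int Pi, int dbr_th,
                               int dbr_offset0, int dbr_offset1, int bit_depth)
{
    if (pi > Pi + dbr_th)
        return clip_bd(((Pi + pi + 1) >> 1) + dbr_offset0, bit_depth);
    if (pi < Pi - dbr_th)
        return clip_bd(((Pi + pi + 1) >> 1) + dbr_offset1, bit_depth);
    return Pi;   /* 不满足调整条件时保持滤波像素值 */
}
```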
(2)亮度分量的BS等于3时的边界滤波过程(采用增强滤波模式进行处理):
在BS的值为3时,对p0、p1和q0、q1的滤波计算过程如下:
P0=(p2+(p1<<2)+(p0<<2)+(p0<<1)+(q0<<2)+q1+8)>>4;
P1=((p2<<1)+p2+(p1<<3)+(p0<<2)+q0+8)>>4;
Q0=(p1+(p0<<2)+(q0<<2)+(q0<<1)+(q1<<2)+q2+8)>>4;
Q1=((q2<<1)+q2+(q1<<3)+(q0<<2)+p0+8)>>4。
P0、P1和Q0、Q1均是滤波后的值(即滤波像素值)。
在得到P0、P1和Q0、Q1后,若PhDbrEnableFlag为1时,则:
若pi>Pi+dbr_th,则Pi’=clip((Pi+pi+1)>>1+dbr_offset0);否则,若pi<Pi–dbr_th,则Pi’=clip((Pi+pi+1)>>1+dbr_offset1),i=0,1。
若qi>Qi+dbr_th,则Qi’=clip((Qi+qi+1)>>1+dbr_offset0);否则,若qi<Qi–dbr_th,则Qi’=clip((Qi+qi+1)>>1+dbr_offset1),i=0,1。
在上述公式中,pi可以表示原始像素值,qi可以表示原始像素值,Pi可以表示滤波像素值,Qi可以表示滤波像素值,Pi’可以表示调整像素值,Qi’可以表示调整像素值。
(3)亮度分量的BS等于2时的边界滤波过程(采用增强滤波模式进行处理):
在BS的值为2时,对p0和q0的滤波计算过程如下:
P0=((p1<<1)+p1+(p0<<3)+(p0<<1)+(q0<<1)+q0+8)>>4;
Q0=((p0<<1)+p0+(q0<<3)+(q0<<1)+(q1<<1)+q1+8)>>4。
P0和Q0均是滤波后的值(即滤波像素值)。
在得到P0和Q0后,若PhDbrEnableFlag为1时,则:
若pi>Pi+dbr_th,则Pi’=clip((Pi+pi+1)>>1+dbr_offset0);否则,若pi<Pi–dbr_th,则Pi’=clip((Pi+pi+1)>>1+dbr_offset1),i=0。
若qi>Qi+dbr_th,则Qi’=clip((Qi+qi+1)>>1+dbr_offset0);否则,若qi<Qi–dbr_th,则Qi’=clip((Qi+qi+1)>>1+dbr_offset1),i=0。
在上述公式中,pi可以表示原始像素值,qi可以表示原始像素值,Pi可以表示滤波像素值,Qi可以表示滤波像素值,Pi’可以表示调整像素值,Qi’可以表示调整像素值。
(4)亮度分量的BS等于1时的边界滤波过程(采用增强滤波模式进行处理):
在BS的值为1时,对p0和q0的滤波计算过程如下:
P0=((p0<<1)+p0+q0+2)>>2;
Q0=((q0<<1)+q0+p0+2)>>2。
P0和Q0均是滤波后的值(即滤波像素值)。
在得到P0和Q0后,若PhDbrEnableFlag为1时,则:
若pi>Pi+dbr_th,则Pi’=clip((Pi+pi+1)>>1+dbr_offset0);否则,若pi<Pi–dbr_th,则Pi’=clip((Pi+pi+1)>>1+dbr_offset1),i=0。
若qi>Qi+dbr_th,则Qi’=clip((Qi+qi+1)>>1+dbr_offset0);否则,若qi<Qi–dbr_th,则Qi’=clip((Qi+qi+1)>>1+dbr_offset1),i=0。
在上述公式中,pi可以表示原始像素值,qi可以表示原始像素值,Pi可以表示滤波像素值,Qi可以表示滤波像素值,Pi’可以表示调整像素值,Qi’可以表示调整像素值。
(5)亮度分量的BS等于0时的边界滤波过程的方式一(采用增强调整模式进行处理):
在BS的值为0时,对pi和qi的滤波计算过程如下:
确定pi的梯度值DPi和qi的梯度值DQi。比如说,DPi=(pi-qi+2)>>2,DQi=(qi-pi+2)>>2。或者,DPi=(pi-qi+1)>>1,DQi=(qi-pi+1)>>1。
在得到DPi和DQi后,若PhAltDbrEnableFlag为1时,则:
若DPi>dbr_th,则Pi=clip(pi+alt_dbr_offset0);否则,若DPi<–dbr_th,则Pi=clip(pi+alt_dbr_offset1);
若DQi>dbr_th,则Qi=clip(qi+alt_dbr_offset0);否则,若DQi<–dbr_th,则Qi=clip(qi+alt_dbr_offset1)。
上述i可以为0,也可以为0,1,2等,对此不做限制。
在上述公式中,pi可以表示原始像素值,qi可以表示原始像素值,DPi可以表示梯度值DQi可以表示梯度值,Pi可以表示调整像素值,Qi可以表示调整像素值。
(6)亮度分量的BS等于0时的边界滤波过程的方式二(采用增强调整模式进行处理):
在BS的值为0时,对pi和qi的滤波计算过程如下:
确定pi的梯度值DPi和qi的梯度值DQi。如DPi=(pi-qi+1)>>1,DQi=(qi-pi+1)>>1。
在得到DPi和DQi后,若PhAltDbrEnableFlag为1时,则:
若DPi>2*dbr_th,则Pi=clip(pi+alt_dbr_offset0);否则,若DPi<–2*dbr_th,则Pi=clip(pi+alt_dbr_offset1);
若DQi>2*dbr_th,则Qi=clip(qi+alt_dbr_offset0);否则,若DQi<–2*dbr_th,则Qi=clip(qi+alt_dbr_offset1)。
上述2*dbr_th和–2*dbr_th,可以为上述实施例中的调整阈值。
上述i可以为0,也可以为0,1,2等,对此不做限制。
在上述公式中,pi可以表示原始像素值,qi可以表示原始像素值,DPi可以表示梯度值DQi可以表示梯度值,Pi可以表示调整像素值,Qi可以表示调整像素值。
(7)亮度分量的BS等于0时的边界滤波过程的方式三(采用增强调整模式进行处理):
在BS的值为0时,对pi和qi的滤波计算过程如下:
确定pi的梯度值DPi和qi的梯度值DQi。比如说,可以采用如下方式确定梯度值DPi和梯度值DQi:DPi=((pi<<1)+pi+qi+2)>>2,DQi=((qi<<1)+qi+pi+2)>>2。
在得到DPi和DQi后,若PhAltDbrEnableFlag为1时,则:
若pi>DPi+dbr_th,则Pi=clip(pi+alt_dbr_offset0);否则,若pi<DPi–dbr_th,则Pi=clip(pi+alt_dbr_offset1);
若qi>DQi+dbr_th,则Qi=clip(qi+alt_dbr_offset0);否则,若qi<DQi–dbr_th,则Qi=clip(qi+alt_dbr_offset1)。
在一种可能的实施方式中,上述表述可以等价为如下表达形式:
确定pi的梯度值DPi和qi的梯度值DQi。比如说,可以采用如下方式确定梯度值DPi和梯度值DQi:DPi=pi–(((pi<<1)+pi+qi+2)>>2),DQi=qi-(((qi<<1)+qi+pi+2)>>2)。
在得到DPi和DQi后,若PhAltDbrEnableFlag为1时,则:
若DPi>dbr_th,则Pi=clip(pi+alt_dbr_offset0);否则,若DPi<–dbr_th,则Pi=clip(pi+alt_dbr_offset1);
若DQi>dbr_th,则Qi=clip(qi+alt_dbr_offset0);否则,若DQi<–dbr_th,则Qi=clip(qi+alt_dbr_offset1)。
上述i可以为0,也可以为0,1,2等,对此不做限制。
在上述公式中,pi可以表示原始像素值,qi可以表示原始像素值,DPi可以表示梯度值DQi可以表示梯度值,Pi可以表示调整像素值,Qi可以表示调整像素值。
在上述实施例中,clip(x)表示将x限制在[0,2^(bit_depth)-1]之间(该区间可以包括0和2^(bit_depth)-1)。bit_depth表示图像的比特深度,一般为8、10、12等。
(8)亮度分量的BS等于0时的边界滤波过程的方式四(采用增强调整模式进行处理):
在BS的值为0时,对pi和qi的滤波计算过程如下:
确定pi的梯度值DPi和qi的梯度值DQi。比如说,可以采用如下方式确定梯度值DPi和梯度值DQi:DPi=(qi-pi+2)>>2,DQi=(pi-qi+2)>>2。
在得到DPi和DQi后,若PhAltDbrEnableFlag为1时,则:
若DPi<dbr_th,则Pi=clip(pi+dbr_alt_offset0);否则,若DPi>–dbr_th,则Pi=clip(pi+dbr_alt_offset1)。
若DQi<dbr_th,则Qi=clip(qi+alt_dbr_offset0);否则,若DQi>–dbr_th,则Qi=clip(qi+dbr_alt_offset1)。
上述i可以为0,也可以为0,1,2等,对此不做限制。
在上述公式中,pi可以表示原始像素值,qi可以表示原始像素值,DPi可以表示梯度值DQi可以表示梯度值,Pi可以表示调整像素值,Qi可以表示调整像素值。
(9) Boundary filtering process, method five, when the BS of the luma component is equal to 0 (processed in the enhanced adjustment mode):
When the value of BS is 0, the filtering calculation for pi and qi is as follows:
Determine the gradient value DPi of pi and the gradient value DQi of qi. For example, the gradient values may be determined as follows: DPi=pi-(((pi<<1)+pi+qi+2)>>2), i.e., DPi=(pi-qi-2)>>2; DQi=qi-(((qi<<1)+qi+pi+2)>>2), i.e., DQi=(qi-pi-2)>>2.
After DPi and DQi are obtained, if PhAltDbrEnableFlag is 1, then:
If DPi>dbr_th, then Pi=clip(pi+alt_dbr_offset0); otherwise, if DPi<-dbr_th, then Pi=clip(pi+alt_dbr_offset1).
If DQi>dbr_th, then Qi=clip(qi+alt_dbr_offset0); otherwise, if DQi<-dbr_th, then Qi=clip(qi+alt_dbr_offset1).
The index i above may be 0, or may be 0, 1, 2, etc.; this is not limited here.
In the above formulas, pi and qi denote original pixel values, DPi and DQi denote gradient values, and Pi and Qi denote adjusted pixel values.
Embodiment 17: For Embodiment 11 and Embodiment 12, the SPS-level high-level syntax may be replaced with PPS-level high-level syntax, picture-header-level high-level syntax, frame-level high-level syntax, slice-header-level high-level syntax, CTU-level high-level syntax, or CU-level high-level syntax; the type of high-level syntax is not limited here, i.e., dbr_enable_flag or adbr_enable_flag may be transmitted through any of these types of high-level syntax. For Embodiment 13 and Embodiment 14, the picture-header high-level syntax may be replaced with SPS-level high-level syntax, PPS-level high-level syntax, frame-level high-level syntax, slice-header-level high-level syntax, CTU-level high-level syntax, or CU-level high-level syntax; the type of high-level syntax is not limited here, i.e., the contents of Table 1 or Table 2 may be transmitted through any of these types of high-level syntax. That is, parameters such as the enhanced adjustment mode enable flag, the enhanced filtering mode enable flag, the first adjustment threshold, the first filtering threshold, the first filtering offset, the second filtering offset, the first adjustment offset, and the second adjustment offset may be transmitted through various types of high-level syntax. The specific implementation is similar to that of Embodiment 13 and Embodiment 14 and is not repeated here.
For Embodiment 13 and Embodiment 14, the picture-header high-level syntax may be replaced with CTU-level high-level syntax, and the DBR-related parameters may be transmitted through the CTU-level high-level syntax; the DBR-related parameters may include the first adjustment threshold, the first filtering threshold, the first filtering offset, the second filtering offset, the first adjustment offset, the second adjustment offset, and so on (see Embodiment 13 and Embodiment 14). Alternatively, the picture-header high-level syntax may be replaced with CU-level high-level syntax, and the DBR-related parameters may be transmitted through the CU-level high-level syntax; the DBR-related parameters may likewise include the first adjustment threshold, the first filtering threshold, the first filtering offset, the second filtering offset, the first adjustment offset, the second adjustment offset, and so on (see Embodiment 13 and Embodiment 14).
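To make the signalled parameter set concrete, the sketch below shows one way the DBR-related parameters could be collected when parsed from SPS/PPS/picture-header/slice-header/CTU/CU-level high-level syntax. The struct and its field names are illustrative assumptions, not syntax element names defined by the present application.

/* Illustrative container (not normative) for the DBR-related parameters that
 * high-level syntax may carry, per Embodiments 13, 14 and 17. */
typedef struct {
    int adbr_enable_flag;   /* enhanced adjustment mode enable flag */
    int dbr_enable_flag;    /* enhanced filtering mode enable flag  */
    int adjust_threshold;   /* first adjustment threshold (the second may be its opposite) */
    int filter_threshold;   /* first filtering threshold (the second is its opposite)      */
    int filter_offset0;     /* first filtering offset   */
    int filter_offset1;     /* second filtering offset  */
    int adjust_offset0;     /* first adjustment offset  */
    int adjust_offset1;     /* second adjustment offset */
} DbrSyntaxParams;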
Embodiment 18: Embodiment 16 describes the deblocking filtering process for the luma component. The luma component may also be replaced with a chroma component, i.e., the deblocking filtering process may be performed for a chroma component (the current block being a chroma component). The deblocking filtering process for the chroma component is similar to that for the luma component; see Embodiment 16, and the details are not repeated here.
Exemplarily, Embodiment 1 to Embodiment 18 above may be implemented individually or in any combination. For example, Embodiment 1 may be combined with Embodiment 2, with Embodiment 3, with Embodiment 4, with Embodiment 5, or with at least one of Embodiments 8 to 18; at least two of Embodiments 8 to 18 may be combined arbitrarily; Embodiment 2 may be combined with at least one of Embodiments 8 to 18; Embodiment 3 may be combined with at least one of Embodiments 8 to 18; Embodiment 4 may be combined with at least one of Embodiments 8 to 18; Embodiment 5 may be combined with at least one of Embodiments 8 to 18; Embodiment 6 may be combined with at least one of Embodiments 8 to 18; and Embodiment 7 may be combined with at least one of Embodiments 8 to 18. Of course, these are only a few examples of combinations; any at least two of Embodiments 1 to 18 may be combined to implement the relevant processes.
Exemplarily, in each of the above embodiments, the content described for the encoding side may also be applied to the decoding side, i.e., the decoding side may perform processing in the same manner; and the content described for the decoding side may also be applied to the encoding side, i.e., the encoding side may perform processing in the same manner.
Embodiment 19: Based on the same application concept as the above methods, an embodiment of the present application further provides a decoding apparatus applied to a decoding side. The decoding apparatus includes: a memory configured to store video data; and a decoder configured to implement the coding and decoding methods of Embodiments 1 to 18 above, i.e., the processing flow on the decoding side.
For example, in a possible implementation, the decoder is configured to implement:
if a current pixel in a current block satisfies an enabling condition of an enhanced adjustment mode, determining a gradient value of the current pixel based on an original pixel value of the current pixel and original pixel values of surrounding pixels of the current pixel; and determining an adjusted pixel value of the current pixel based on the gradient value of the current pixel and the original pixel value of the current pixel.
Based on the same application concept as the above methods, an embodiment of the present application further provides an encoding apparatus applied to an encoding side. The encoding apparatus includes: a memory configured to store video data; and an encoder configured to implement the coding and decoding methods of Embodiments 1 to 18 above, i.e., the processing flow on the encoding side.
For example, in a possible implementation, the encoder is configured to implement:
if a current pixel in a current block satisfies an enabling condition of an enhanced adjustment mode, determining a gradient value of the current pixel based on an original pixel value of the current pixel and original pixel values of surrounding pixels of the current pixel; and determining an adjusted pixel value of the current pixel based on the gradient value of the current pixel and the original pixel value of the current pixel.
Based on the same application concept as the above methods, an embodiment of the present application provides a decoding-side device (which may also be referred to as a video decoder). In terms of hardware, a schematic diagram of its hardware architecture may be seen in FIG. 5A. The device includes a processor 511 and a machine-readable storage medium 512, where the machine-readable storage medium 512 stores machine-executable instructions executable by the processor 511, and the processor 511 is configured to execute the machine-executable instructions to implement the methods disclosed in Embodiments 1 to 18 of the present application. For example, the processor 511 is configured to execute the machine-executable instructions to implement the following steps:
if a current pixel in a current block satisfies an enabling condition of an enhanced adjustment mode, determining a gradient value of the current pixel based on an original pixel value of the current pixel and original pixel values of surrounding pixels of the current pixel; and determining an adjusted pixel value of the current pixel based on the gradient value of the current pixel and the original pixel value of the current pixel.
Based on the same application concept as the above methods, an embodiment of the present application provides an encoding-side device (which may also be referred to as a video encoder). In terms of hardware, a schematic diagram of its hardware architecture may be seen in FIG. 5B. The device includes a processor 521 and a machine-readable storage medium 522, where the machine-readable storage medium 522 stores machine-executable instructions executable by the processor 521, and the processor 521 is configured to execute the machine-executable instructions to implement the methods disclosed in Embodiments 1 to 18 of the present application. For example, the processor 521 is configured to execute the machine-executable instructions to implement the following steps:
if a current pixel in a current block satisfies an enabling condition of an enhanced adjustment mode, determining a gradient value of the current pixel based on an original pixel value of the current pixel and original pixel values of surrounding pixels of the current pixel; and determining an adjusted pixel value of the current pixel based on the gradient value of the current pixel and the original pixel value of the current pixel.
Based on the same application concept as the above methods, an embodiment of the present application further provides a machine-readable storage medium storing a number of computer instructions. When the computer instructions are executed by a processor, the methods disclosed in the above examples of the present application, such as the coding and decoding methods in the above embodiments, can be implemented. The machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device capable of containing or storing information such as executable instructions and data. For example, the machine-readable storage medium may be a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (such as a hard disk drive), a solid-state drive, any type of storage disc (such as an optical disc or a DVD), a similar storage medium, or a combination thereof.
Based on the same application concept as the above methods, an embodiment of the present application further provides a computer application program. When the computer application program is executed by a processor, the coding and decoding methods disclosed in the above examples of the present application can be implemented.
Based on the same application concept as the above methods, an embodiment of the present application further provides a coding and decoding apparatus, which may be applied to an encoding side or a decoding side. The coding and decoding apparatus may include:
a determination module, configured to, if a current pixel in a current block satisfies an enabling condition of an enhanced adjustment mode, determine a gradient value of the current pixel based on an original pixel value of the current pixel and original pixel values of surrounding pixels of the current pixel; and a processing module, configured to determine an adjusted pixel value of the current pixel based on the gradient value of the current pixel and the original pixel value of the current pixel.
Exemplarily, when determining the adjusted pixel value of the current pixel based on the gradient value of the current pixel and the original pixel value of the current pixel, the processing module is specifically configured to:
determine the adjusted pixel value of the current pixel based on the gradient value of the current pixel, the original pixel value of the current pixel, a first adjustment threshold, a second adjustment threshold, a first adjustment offset, and a second adjustment offset.
Exemplarily, when determining the adjusted pixel value of the current pixel based on the gradient value of the current pixel, the original pixel value of the current pixel, the first adjustment threshold, the second adjustment threshold, the first adjustment offset, and the second adjustment offset, the processing module is specifically configured to: if the gradient value of the current pixel is greater than the first adjustment threshold, determine the adjusted pixel value of the current pixel based on the original pixel value of the current pixel and the first adjustment offset;
if the gradient value of the current pixel is less than the second adjustment threshold, determine the adjusted pixel value of the current pixel based on the original pixel value of the current pixel and the second adjustment offset.
In a possible implementation, if the current pixel in the current block satisfies the enabling condition of the enhanced adjustment mode, the determination module is further configured to determine, from an adjacent block of the current block, a reference pixel corresponding to the current pixel, and determine a gradient value of the reference pixel based on an original pixel value of the reference pixel and original pixel values of surrounding pixels of the reference pixel; and the processing module is further configured to determine an adjusted pixel value of the reference pixel based on the gradient value of the reference pixel and the original pixel value of the reference pixel.
Exemplarily, when determining the adjusted pixel value of the reference pixel based on the gradient value of the reference pixel and the original pixel value of the reference pixel, the processing module is specifically configured to:
determine the adjusted pixel value of the reference pixel based on the gradient value of the reference pixel, the original pixel value of the reference pixel, a third adjustment threshold, a fourth adjustment threshold, a third adjustment offset, and a fourth adjustment offset.
Exemplarily, when determining the adjusted pixel value of the reference pixel based on the gradient value of the reference pixel, the original pixel value of the reference pixel, the third adjustment threshold, the fourth adjustment threshold, the third adjustment offset, and the fourth adjustment offset, the processing module is specifically configured to: if the gradient value of the reference pixel is greater than the third adjustment threshold, determine the adjusted pixel value of the reference pixel based on the original pixel value of the reference pixel and the third adjustment offset;
if the gradient value of the reference pixel is less than the fourth adjustment threshold, determine the adjusted pixel value of the reference pixel based on the original pixel value of the reference pixel and the fourth adjustment offset.
In a possible implementation, when determining that the current pixel in the current block satisfies the enabling condition of the enhanced adjustment mode, the determination module is specifically configured to: if the boundary strength of the boundary to be filtered corresponding to the current pixel in the current block satisfies the enabling condition of the enhanced adjustment mode, determine that the current pixel satisfies the enabling condition of the enhanced adjustment mode; or, if feature information corresponding to the current block satisfies the enabling condition of the enhanced adjustment mode, determine that the current pixel in the current block satisfies the enabling condition of the enhanced adjustment mode.
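As a small illustration of this decision flow, the sketch below gates the gradient-based adjustment on the boundary strength. Treating a boundary strength of 0 as the enabling condition follows the embodiments above, while the function names are assumptions of this sketch.

/* Illustrative gating of the enhanced adjustment mode (not normative).
 * Per the embodiments above, BS == 0 is used here as the enabling condition;
 * a check on feature information of the current block could be used instead. */
static int enhanced_adjust_enabled(int bs, int ph_alt_dbr_enable_flag)
{
    return ph_alt_dbr_enable_flag && (bs == 0);
}

When this condition holds, a routine such as adjust_bs0_method1() from the earlier sketch would be applied to the current pixel and to the reference pixel determined from the adjacent block.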
Exemplarily, the processing module is further configured to: if the current pixel in the current block satisfies an enabling condition of an ordinary filtering mode, perform deblocking filtering on the original pixel value of the current pixel to obtain a filtered pixel value of the current pixel; and if the current pixel in the current block satisfies an enabling condition of an enhanced filtering mode, determine an adjusted pixel value of the current pixel based on the filtered pixel value of the current pixel and the original pixel value of the current pixel.
Exemplarily, when determining the adjusted pixel value of the current pixel based on the filtered pixel value of the current pixel and the original pixel value of the current pixel, the processing module is specifically configured to: determine the adjusted pixel value of the current pixel based on the filtered pixel value of the current pixel, the original pixel value of the current pixel, a first filtering threshold, a second filtering threshold, a first filtering offset, and a second filtering offset, where the first filtering threshold and the second filtering threshold are opposite numbers of each other.
Exemplarily, if the current pixel in the current block satisfies the enabling condition of the ordinary filtering mode, the processing module is further configured to determine, from an adjacent block of the current block, a reference pixel corresponding to the current pixel, and perform deblocking filtering on the original pixel value of the reference pixel to obtain a filtered pixel value of the reference pixel;
and if the current pixel in the current block satisfies the enabling condition of the enhanced filtering mode, determine an adjusted pixel value of the reference pixel based on the filtered pixel value of the reference pixel and the original pixel value of the reference pixel.
Exemplarily, when determining the adjusted pixel value of the reference pixel based on the filtered pixel value of the reference pixel and the original pixel value of the reference pixel, the processing module is specifically configured to: determine the adjusted pixel value of the reference pixel based on the filtered pixel value of the reference pixel, the original pixel value of the reference pixel, a third filtering threshold, a fourth filtering threshold, a third filtering offset, and a fourth filtering offset, where the third filtering threshold and the fourth filtering threshold are opposite numbers of each other.
The systems, apparatuses, modules, or units described in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function. A typical implementation device is a computer, which may take the form of a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an e-mail transceiver, a game console, a tablet computer, a wearable device, or a combination of any of these devices. For convenience of description, the above apparatuses are described by dividing their functions into various units. Of course, when implementing the present application, the functions of the units may be implemented in one or more pieces of software and/or hardware.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. The present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. The embodiments of the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the methods, devices (systems), and computer program products according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, may be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram. These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
The above are merely embodiments of the present application and are not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included within the scope of the claims of the present application.

Claims (13)

  1. A decoding method, characterized in that the method comprises:
    if a current pixel in a current block satisfies an enabling condition of an enhanced adjustment mode, determining a gradient value of the current pixel based on an original pixel value of the current pixel and original pixel values of surrounding pixels of the current pixel;
    determining an adjusted pixel value of the current pixel based on the gradient value of the current pixel and the original pixel value of the current pixel;
    wherein the current pixel in the current block satisfying the enabling condition of the enhanced adjustment mode comprises: if a boundary strength of a boundary to be filtered corresponding to the current pixel in the current block satisfies the enabling condition of the enhanced adjustment mode, determining that the current pixel satisfies the enabling condition of the enhanced adjustment mode.
  2. The method according to claim 1, characterized in that the boundary strength of the boundary to be filtered corresponding to the current pixel in the current block satisfying the enabling condition of the enhanced adjustment mode comprises:
    if the boundary strength of the boundary to be filtered corresponding to the current pixel is a preset first value, determining that the boundary strength of the boundary to be filtered satisfies the enabling condition of the enhanced adjustment mode.
  3. The method according to claim 2, characterized in that the preset first value is 0.
  4. The method according to claim 1, characterized in that determining the adjusted pixel value of the current pixel based on the gradient value of the current pixel and the original pixel value of the current pixel comprises:
    determining the adjusted pixel value of the current pixel based on the gradient value of the current pixel, the original pixel value of the current pixel, a first adjustment threshold, a second adjustment threshold, a first adjustment offset, and a second adjustment offset;
    wherein, if the gradient value of the current pixel is greater than the first adjustment threshold, the adjusted pixel value of the current pixel is determined based on the original pixel value of the current pixel and the first adjustment offset;
    if the gradient value of the current pixel is less than the second adjustment threshold, the adjusted pixel value of the current pixel is determined based on the original pixel value of the current pixel and the second adjustment offset.
  5. The method according to claim 1, characterized in that,
    if the current pixel in the current block satisfies the enabling condition of the enhanced adjustment mode, the method further comprises:
    determining, from an adjacent block of the current block, a reference pixel corresponding to the current pixel, and determining a gradient value of the reference pixel based on an original pixel value of the reference pixel and original pixel values of surrounding pixels of the reference pixel;
    determining an adjusted pixel value of the reference pixel based on the gradient value of the reference pixel, the original pixel value of the reference pixel, a third adjustment threshold, a fourth adjustment threshold, a third adjustment offset, and a fourth adjustment offset; wherein, if the gradient value of the reference pixel is greater than the third adjustment threshold, the adjusted pixel value of the reference pixel is determined based on the original pixel value of the reference pixel and the third adjustment offset; if the gradient value of the reference pixel is less than the fourth adjustment threshold, the adjusted pixel value of the reference pixel is determined based on the original pixel value of the reference pixel and the fourth adjustment offset.
  6. The method according to claim 4 or 5, characterized in that the method further comprises:
    parsing, from high-level syntax, the first adjustment threshold, the first adjustment offset, the second adjustment offset, the third adjustment threshold, the third adjustment offset, and the fourth adjustment offset corresponding to the current block.
  7. The method according to any one of claims 1-5, characterized in that, before determining that the current pixel in the current block satisfies the enabling condition of the enhanced adjustment mode, the method further comprises:
    if an enhanced adjustment mode enable flag corresponding to the current block allows the current block to enable the enhanced adjustment mode, determining whether the current pixel in the current block satisfies the enabling condition of the enhanced adjustment mode;
    wherein the method further comprises: parsing, from high-level syntax, the enhanced adjustment mode enable flag corresponding to the current block.
  8. An encoding method, characterized in that the method comprises:
    if a current pixel in a current block satisfies an enabling condition of an enhanced adjustment mode, determining a gradient value of the current pixel based on an original pixel value of the current pixel and original pixel values of surrounding pixels of the current pixel;
    determining an adjusted pixel value of the current pixel based on the gradient value of the current pixel and the original pixel value of the current pixel;
    wherein the current pixel in the current block satisfying the enabling condition of the enhanced adjustment mode comprises: if a boundary strength of a boundary to be filtered corresponding to the current pixel in the current block satisfies the enabling condition of the enhanced adjustment mode, determining that the current pixel satisfies the enabling condition of the enhanced adjustment mode.
  9. A decoding apparatus, characterized in that the decoding apparatus comprises:
    a memory configured to store video data;
    a decoder configured to implement the method according to any one of claims 1-7.
  10. An encoding apparatus, characterized in that the encoding apparatus comprises:
    a memory configured to store video data;
    an encoder configured to implement the method according to claim 8.
  11. A decoding-side device, characterized by comprising: a processor and a machine-readable storage medium, wherein the machine-readable storage medium stores machine-executable instructions executable by the processor;
    the processor is configured to execute the machine-executable instructions to implement the method according to any one of claims 1-7.
  12. An encoding-side device, characterized by comprising: a processor and a machine-readable storage medium, wherein the machine-readable storage medium stores machine-executable instructions executable by the processor;
    the processor is configured to execute the machine-executable instructions to implement the method according to claim 8.
  13. A machine-readable storage medium, characterized in that
    the machine-readable storage medium stores machine-executable instructions executable by a processor, wherein the processor is configured to execute the machine-executable instructions to implement the method according to any one of claims 1-8.
PCT/CN2022/077298 2021-02-23 2022-02-22 Coding and decoding method and apparatus, and devices therefor WO2022179504A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
EP22758868.8A EP4277267A1 (en) 2021-02-23 2022-02-22 Coding and decoding method and apparatus, and devices therefor
JP2023551246A JP2024506213A (ja) 2021-02-23 2022-02-22 符号化復号方法、装置及びそのデバイス
AU2022227062A AU2022227062B2 (en) 2021-02-23 2022-02-22 Coding and decoding method and apparatus, and devices therefor
KR1020237027399A KR20230128555A (ko) 2021-02-23 2022-02-22 인코딩 및 디코딩 방법, 장치 및 이의 기기
US18/264,036 US20240048695A1 (en) 2021-02-23 2022-02-22 Coding and decoding method and apparatus and devices therefor
ZA2023/07790A ZA202307790B (en) 2021-02-23 2023-08-08 Coding and decoding method and apparatus, and devices therefor

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110204154.2 2021-02-23
CN202110204154.2A CN114640845B (zh) 2021-02-23 2021-02-23 Coding and decoding method and apparatus, and devices therefor

Publications (1)

Publication Number Publication Date
WO2022179504A1 true WO2022179504A1 (zh) 2022-09-01

Family

ID=81073883

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/077298 WO2022179504A1 (zh) 2021-02-23 2022-02-22 编解码方法、装置及其设备

Country Status (9)

Country Link
US (1) US20240048695A1 (zh)
EP (1) EP4277267A1 (zh)
JP (1) JP2024506213A (zh)
KR (1) KR20230128555A (zh)
CN (2) CN114339223B (zh)
AU (1) AU2022227062B2 (zh)
TW (1) TWI806447B (zh)
WO (1) WO2022179504A1 (zh)
ZA (1) ZA202307790B (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113411584A (zh) * 2020-03-17 2021-09-17 Beijing Samsung Telecom R&D Center Method and apparatus for video encoding and decoding
CN114339223B (zh) 2021-02-23 2023-03-31 Hangzhou Hikvision Digital Technology Co., Ltd. Decoding method, apparatus, device, and machine-readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060110062A1 (en) * 2004-11-23 2006-05-25 Stmicroelectronics Asia Pacific Pte. Ltd. Edge adaptive filtering system for reducing artifacts and method
US20120328029A1 (en) * 2011-06-22 2012-12-27 Texas Instruments Incorporated Systems and methods for reducing blocking artifacts
JP2015104061A (ja) * 2013-11-27 2015-06-04 Mitsubishi Electric Corp. Moving picture encoding device and moving picture decoding device
CN105453565A (zh) * 2013-06-07 2016-03-30 KDDI Corp. Video encoding device, video decoding device, video system, video encoding method, video decoding method, and program
CN106105201A (zh) * 2014-03-14 2016-11-09 Qualcomm Inc. Deblock filtering using pixel distance
CN108293117A (zh) * 2015-11-24 2018-07-17 Samsung Electronics Co., Ltd. Method and apparatus for post-processing an intra- or inter-predicted block based on pixel gradients
CN114125445A (zh) * 2021-06-30 2022-03-01 Hangzhou Hikvision Digital Technology Co., Ltd. Decoding method, apparatus, device, and machine-readable storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9681132B2 (en) * 2010-11-24 2017-06-13 Thomson Licensing Dtv Methods and apparatus for adaptive loop filtering in video encoders and decoders
WO2019072582A1 (en) * 2017-10-09 2019-04-18 Canon Kabushiki Kaisha METHOD AND APPARATUS FOR FILTERING WITH DEPRESSION OF A BLOCK OF PIXELS
CN109889853A (zh) * 2019-02-26 2019-06-14 Peking University Shenzhen Graduate School Deblocking filtering method, system, device, and computer-readable medium
US11272203B2 (en) * 2019-07-23 2022-03-08 Tencent America LLC Method and apparatus for video coding
US11310519B2 (en) * 2019-09-18 2022-04-19 Qualcomm Incorporated Deblocking of subblock boundaries for affine motion compensated coding
CN113596457A (zh) * 2019-09-23 2021-11-02 Hangzhou Hikvision Digital Technology Co., Ltd. Encoding and decoding method, apparatus, and device
CN112154666A (zh) * 2019-09-24 2020-12-29 SZ DJI Technology Co., Ltd. Video encoding and decoding method and apparatus
CN111669584B (zh) * 2020-06-11 2022-10-28 Zhejiang Dahua Technology Co., Ltd. Inter-prediction filtering method, apparatus, and computer-readable storage medium
CN114339223B (zh) 2021-02-23 2023-03-31 Hangzhou Hikvision Digital Technology Co., Ltd. Decoding method, apparatus, device, and machine-readable storage medium


Also Published As

Publication number Publication date
EP4277267A1 (en) 2023-11-15
CN114640845A (zh) 2022-06-17
TWI806447B (zh) 2023-06-21
TW202245474A (zh) 2022-11-16
CN114339223B (zh) 2023-03-31
KR20230128555A (ko) 2023-09-05
CN114339223A (zh) 2022-04-12
JP2024506213A (ja) 2024-02-09
CN114640845B (zh) 2023-02-28
AU2022227062A1 (en) 2023-09-07
ZA202307790B (en) 2024-04-24
AU2022227062B2 (en) 2023-12-14
US20240048695A1 (en) 2024-02-08


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22758868; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 18264036; Country of ref document: US)
ENP Entry into the national phase (Ref document number: 20237027399; Country of ref document: KR; Kind code of ref document: A)
WWE Wipo information: entry into national phase (Ref document number: 1020237027399; Country of ref document: KR)
ENP Entry into the national phase (Ref document number: 2022758868; Country of ref document: EP; Effective date: 20230810)
WWE Wipo information: entry into national phase (Ref document number: 2022227062; Country of ref document: AU)
WWE Wipo information: entry into national phase (Ref document number: 2023551246; Country of ref document: JP)
ENP Entry into the national phase (Ref document number: 2022227062; Country of ref document: AU; Date of ref document: 20220222; Kind code of ref document: A)
WWE Wipo information: entry into national phase (Ref document number: 2023122954; Country of ref document: RU)
NENP Non-entry into the national phase (Ref country code: DE)