CN116980627A - Video filtering method and device for decoding, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116980627A
CN116980627A (application CN202310845545.1A)
Authority
CN
China
Prior art keywords
filtering
boundary
boundaries
video
blocks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310845545.1A
Other languages
Chinese (zh)
Inventor
朱泽智
冯伟伦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202310845545.1A
Publication of CN116980627A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80: Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/10: Using adaptive coding
    • H04N19/102: Adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117: Filters, e.g. for pre-processing or post-processing
    • H04N19/124: Quantisation
    • H04N19/169: Adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: The coding unit being an image region, e.g. an object
    • H04N19/176: The region being a block, e.g. a macroblock
    • H04N19/90: Using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/96: Tree coding, e.g. quad-tree coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present disclosure provides a video filtering method and apparatus for decoding, an electronic device, and a storage medium, and belongs to the field of multimedia technology. The method includes: when decoding any video block within any video frame of a video, dividing the video block into a plurality of video sub-blocks; for any boundary in any video sub-block, determining a filtering parameter of the boundary based on the coding information of the video block; and filtering the pixels on both sides of the boundaries of the video sub-blocks based on the filtering parameters of those boundaries, to obtain the filtered video block. Because the computation of a block's filtering parameters and its filtering are performed together, the current video block can be filtered as soon as its filtering parameters are obtained, without looking up the corresponding parameters in a memory that stores the filtering parameters of many video blocks. This reduces the time spent reading filtering parameters and improves the filtering efficiency of the video.

Description

Video filtering method and device for decoding, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of multimedia technology, and in particular to a video filtering method and apparatus for decoding, an electronic device, and a storage medium.
Background
With the development of multimedia technology, video can be transmitted over a network. Since what is transmitted is generally encoded video, the receiving end usually needs to decode the video before playing it. Video coding typically includes processes such as prediction, transformation, quantization, reconstruction, and In-Loop Filtering. In-loop filtering is an effective tool for improving video quality; it mainly includes luma mapping with chroma scaling, deblocking filtering (DBF), sample adaptive offset, and adaptive loop filtering. How to implement video filtering efficiently is an important research topic in the art.
In the related art, when deblocking filtering is enabled, the receiving end must, for each video frame, first compute and store the filtering parameters of the plurality of Coding Tree Units (CTUs) in the current frame; it then performs the filtering operation on each coding tree unit based on that unit's filtering parameters, so that subsequent decoding operations can continue on the filtered coding tree units to obtain the decoded video frame.
However, in the above technical solution, the filtering parameters of each video frame must be stored in frame-level units, and when each coding tree unit is later processed, the parameters corresponding to that unit must be read back from the frame-level memory, which makes both the storage and the lookup of filtering parameters time-consuming.
Disclosure of Invention
The present disclosure provides a video filtering method and apparatus for decoding, an electronic device, and a storage medium. By packaging the computation of a video block's filtering parameters together with its filtering process, the current video block can be filtered as soon as its filtering parameters are obtained, without searching for the corresponding parameters in a memory that stores the filtering parameters of many video blocks. This reduces the time spent storing and reading filtering parameters, improves subsequent access efficiency, and improves the filtering efficiency of the video. The technical solution of the present disclosure is as follows:
according to an aspect of the disclosed embodiments, there is provided a video filtering method for decoding, the method comprising:
In decoding any video block within any video frame of a video, dividing the video block into a plurality of video sub-blocks;
for any boundary in any video sub-block, determining a filtering parameter of the boundary based on the coding information of the video block;
and filtering pixels at two sides of the boundaries of the video sub-blocks based on the filtering parameters of the boundaries of the video sub-blocks in the video block to obtain the filtered video block.
According to another aspect of the disclosed embodiments, there is provided a video filtering apparatus for decoding, the apparatus comprising:
a dividing unit configured to, in decoding any video block within any video frame of a video, divide the video block into a plurality of video sub-blocks;
a first determining unit configured to determine, for any boundary in any one of the video sub-blocks, a filtering parameter of the boundary based on the coding information of the video block;
and a filtering unit configured to filter the pixels on both sides of the boundaries of the plurality of video sub-blocks in the video block based on the filtering parameters of those boundaries, to obtain the filtered video block.
In some embodiments, the filtering unit includes:
a processing subunit configured to filter out a plurality of first boundaries from the boundaries of the plurality of video sub-blocks based on the boundary strengths of those boundaries, to obtain a plurality of second boundaries, where a first boundary is a boundary whose two sides do not require pixel filtering;
a determining subunit configured to determine the filtering manners of the plurality of second boundaries based on the boundary strengths and quantization parameters in the filtering parameters of the plurality of second boundaries;
and a filtering subunit configured to filter the pixels on both sides of the plurality of second boundaries based on the filtering manners of the plurality of second boundaries, to obtain the filtered video block.
In some embodiments, the boundary strength of a boundary takes one of three values, 0, 1, and 2, where 0 indicates that the corresponding boundary does not need filtering;
the apparatus further comprises:
a storage unit configured to store the boundary strength of each boundary in 2-bit form, where 00 represents a boundary strength equal to 0, 01 represents a boundary strength equal to 1, and 11 represents a boundary strength equal to 2.
In some embodiments, the processing subunit is configured to detect the boundary strengths in the filtering parameters of the boundaries of the plurality of video sub-blocks with a count-trailing-zeros instruction, filtering out a plurality of first boundaries from those boundaries to obtain a plurality of third boundaries; and to further screen the plurality of third boundaries to obtain the plurality of second boundaries.
In some embodiments, the processing subunit is configured to traverse the third boundaries a target number at a time, the target number representing the maximum number of boundaries that can be processed in parallel; for any traversal, to detect whether any of the target number of third boundaries in the current traversal has a boundary strength of zero; and, when such a boundary exists, to filter out the boundaries with zero boundary strength from the target number of third boundaries by successively narrowing the detection range, to obtain the plurality of second boundaries.
In some embodiments, the processing subunit is configured to, when a boundary with zero boundary strength exists among the target number of third boundaries, narrow the detection range to half of the target number and detect again; and, once the boundaries with zero boundary strength are located, to filter them out to obtain the plurality of second boundaries.
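As an illustration of the batched screening described above, the sketch below visits boundary strengths a target number at a time and keeps only nonzero-strength boundaries. The function name, the batch width, and the collapsing of the narrowing re-check into a direct per-element pass are simplifying assumptions for brevity, not the patented implementation.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Returns indices of boundaries with nonzero boundary strength (BS),
// scanning in batches of `target` (standing in for the parallel width).
std::vector<int> nonzero_bs(const std::vector<int>& bs, int target = 8) {
    std::vector<int> keep;
    for (std::size_t base = 0; base < bs.size(); base += target) {
        std::size_t end = std::min(bs.size(), base + target);
        bool any_zero = false;                  // one cheap check per batch
        for (std::size_t i = base; i < end; ++i) any_zero |= (bs[i] == 0);
        if (!any_zero) {                        // fast path: keep whole batch
            for (std::size_t i = base; i < end; ++i) keep.push_back((int)i);
        } else {                                // slow path: isolate the zeros
            for (std::size_t i = base; i < end; ++i)
                if (bs[i] != 0) keep.push_back((int)i);
        }
    }
    return keep;
}
```

The point of the batch-level check is that most batches contain no zero-strength boundary, so the per-element pass (which the patent refines further by halving the detection range) is rarely taken.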
In some embodiments, the determining subunit comprises:
a first determining subunit configured to set the number of boundaries for which a filtering decision is made at a time to a target number based on Single Instruction, Multiple Data (SIMD) processing, where a filtering decision is the process of determining the filtering manner of a boundary, and the target number represents the maximum number of boundaries that can be processed in parallel;
and a second determining subunit configured to determine, in one filtering decision process, for any one of the target number of second boundaries, the filtering manner of the second boundary based on the boundary strength and the quantization parameter in the filtering parameters of that second boundary.
In some embodiments, the second determining subunit is configured to, for any one of the target number of second boundaries: determine a filtering threshold based on the quantization parameter of the second boundary; determine that the second boundary needs filtering when the texture degree of the second boundary is smaller than the filtering threshold, where the texture degree represents the rate of pixel change on both sides of the boundary; determine a filtering strength threshold based on the boundary strength in the filtering parameters of the second boundary; and determine the filtering manner of the second boundary based on the filtering strength threshold.
In some embodiments, the second determining subunit is configured to determine the filtering range of the second boundary based on the maximum filtering length of the second boundary, where the filtering range indicates how many pixels are used when filtering the pixels on both sides of the second boundary; and, when the filtering range indicates that short-tap filtering is required for the pixels on both sides of the second boundary, to determine the filtering strength of the second boundary based on the filtering strength threshold, where the filtering strength represents how far the filtered pixels may be corrected relative to the pixels before filtering.
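The threshold-versus-texture decision described above can be sketched as follows, loosely in the style of HEVC/VVC deblocking: a QP-derived threshold is compared against a texture measure built from second differences across the boundary. All names are illustrative, and `beta_from_qp` is a simplified placeholder, not the normative lookup table.

```cpp
#include <cstdlib>

// Placeholder for the QP-to-threshold mapping ("filtering threshold" above):
// higher QP means coarser quantization, so a larger threshold and more
// willingness to filter. Not the standardized table.
int beta_from_qp(int qp) { return qp < 16 ? 0 : (qp - 10); }

// p2..p0 / q0..q2: three pixels on each side of a boundary. The texture
// degree is the sum of second differences; small values mean the area is
// smooth, so a visible step at the boundary is likely a blocking artifact.
bool needs_filtering(int p2, int p1, int p0, int q0, int q1, int q2, int qp) {
    int texture = std::abs(p2 - 2 * p1 + p0) + std::abs(q2 - 2 * q1 + q0);
    return texture < beta_from_qp(qp);   // smooth area: filter the boundary
}
```

The design choice mirrors the paragraph above: textured regions hide blocking artifacts, so filtering there would only blur detail, while smooth regions expose them.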
In some embodiments, the apparatus further comprises:
and a second determining unit configured to determine, based on the filtering parameters of the boundary, the filtering parameters of other boundaries located on a common side of transform units in the video block, the transform units being determined based on the coding information, where a common side is a side shared by two transform units.
In some embodiments, the filtering unit is configured to process the filtering decisions of a plurality of boundaries located on a common side of a transform unit in parallel based on Single Instruction, Multiple Data (SIMD) processing to obtain the filtering manners of the plurality of boundaries, where a filtering decision is the process of determining the filtering manner of a boundary; and to filter the plurality of boundaries in parallel based on SIMD and the filtering manners of the plurality of boundaries.
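A minimal, portable sketch of the SIMD idea above: the per-boundary decision is data-parallel, so it can be evaluated in fixed-width batches, one lane per boundary. `LANES = 4` stands in for the actual SIMD width, and a real implementation would use vector intrinsics rather than a scalar loop; the names are illustrative.

```cpp
#include <array>

constexpr int LANES = 4;   // stand-in for the hardware SIMD width

// Evaluate the "texture < threshold" decision for LANES boundaries at once.
// The loop body is branch-free per lane, which is what lets a compiler or
// an intrinsics version map it onto one SIMD compare instruction.
std::array<bool, LANES> decide_batch(const std::array<int, LANES>& texture,
                                     const std::array<int, LANES>& beta) {
    std::array<bool, LANES> filter{};
    for (int i = 0; i < LANES; ++i)    // one SIMD lane per boundary
        filter[i] = texture[i] < beta[i];
    return filter;
}
```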
According to another aspect of the embodiments of the present disclosure, there is provided an electronic device including:
one or more processors;
a memory for storing the processor-executable program code;
wherein the processor is configured to execute the program code to implement the video filtering method for decoding described above.
According to another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing instructions that, when executed by a processor of an electronic device, cause the electronic device to perform the video filtering method for decoding described above.
According to another aspect of the disclosed embodiments, there is provided a computer program product comprising a computer program/instruction which, when executed by a processor, implements the video filtering method for decoding described above.
The embodiments of the present disclosure provide a video filtering method for decoding. In the process of decoding any video frame of a video, the frame can be processed in units of video blocks. Specifically, when any video block in the video frame is decoded, the video block is divided into a plurality of video sub-blocks; the filtering parameters of the boundaries of the video sub-blocks are then computed from the coding information of the video block; and the pixels on both sides of those boundaries are then filtered according to the filtering parameters to obtain the filtered video block. This packages the computation of a video block's filtering parameters together with its filtering process, so the current video block can be filtered as soon as its filtering parameters are obtained, without searching for the corresponding parameters in a memory that stores the filtering parameters of many video blocks. The time spent storing and reading filtering parameters is thereby reduced, subsequent access efficiency is improved, and the filtering efficiency of the video is improved. In addition, because a video block's filtering parameters are no longer needed once that block has been filtered, the filtering parameters of the previous video block can simply be overwritten when the next video block is filtered; the filtering parameters of all the video blocks of a whole frame need not be kept. Storage thus changes from frame-level units to video-block units, which reduces the memory occupied by filtering parameters and saves cost.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
Fig. 1 is a schematic diagram illustrating an implementation environment of a video filtering method for decoding according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating a video filtering method for decoding according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating another video filtering method for decoding according to an exemplary embodiment.
Fig. 4 is a schematic diagram illustrating a chromaticity component storage manner according to an exemplary embodiment.
Fig. 5 is a schematic diagram illustrating filtering for a luminance component according to an exemplary embodiment.
Fig. 6 is a schematic diagram illustrating filtering for chrominance components according to an exemplary embodiment.
Fig. 7 is a block diagram illustrating a video filtering apparatus for decoding according to an exemplary embodiment.
Fig. 8 is a block diagram illustrating another video filtering apparatus for decoding according to an exemplary embodiment.
Fig. 9 is a block diagram of a terminal according to an exemplary embodiment.
Fig. 10 is a block diagram of a server, according to an example embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above figures are used to distinguish similar objects and are not necessarily used to describe a particular sequence or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the disclosure described herein can be practiced in sequences other than those illustrated or described herein. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
It should be noted that the information (including but not limited to user device information and user personal information), data (including but not limited to data for analysis, stored data, and displayed data), and signals involved in the present disclosure are all authorized by the user or fully authorized by all parties, and the collection, use, and processing of the relevant data comply with the relevant laws, regulations, and standards of the relevant countries and regions. For example, the videos referred to in the present disclosure are all acquired with sufficient authorization.
Fig. 1 is a schematic diagram illustrating an implementation environment of a video filtering method for decoding according to an exemplary embodiment. Taking an example in which the electronic device is provided as a server, referring to fig. 1, the implementation environment specifically includes: a terminal 101 and a server 102.
The terminal 101 is at least one of a smartphone, a smartwatch, a desktop computer, a laptop computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, and an MP4 (Moving Picture Experts Group Audio Layer IV) player. An application that supports video decoding runs on the terminal 101. The application may be a multimedia application, a social application, a conference application, or the like, which is not limited by the embodiments of the present disclosure. A user can log in to the application through the terminal 101 to obtain the services the application provides. The terminal 101 can connect to the server 102 through a wireless or wired network and thereby obtain video from the server 102; the terminal 101 can then decode the video and play the decoded video.
The terminal 101 broadly refers to one of a plurality of terminals; this embodiment is illustrated with the terminal 101 only. Those skilled in the art will recognize that the number of terminals may be greater or smaller: there may be only a few terminals, or tens, hundreds, or more. The embodiments of the present disclosure do not limit the number or device types of the terminals.
The server 102 is at least one of a single server, a cluster of servers, a cloud computing platform, and a virtualization center. The server 102 can connect to the terminal 101 and other terminals through a wireless or wired network, and can send video to, or receive video from, the terminal 101. In some embodiments, the number of servers may be greater or smaller, which is not limited by the embodiments of the present disclosure. Of course, the server 102 may also include other functional servers in order to provide more comprehensive and diverse services.
Fig. 2 is a flowchart illustrating a video filtering method for decoding according to an exemplary embodiment. The method is applied to a terminal. Referring to fig. 2, the method includes the following steps:
in step 201, in decoding any video block within any video frame of a video, a terminal divides the video block into a plurality of video sub-blocks.
In the embodiments of the present disclosure, a video block is a Coding Tree Unit (CTU) of the video. Since each video frame of the video is encoded as a plurality of video blocks, the terminal decodes the plurality of video blocks of a frame when decoding that frame. When decoding any video block, the terminal can divide the video block into a plurality of video sub-blocks and then filter the video frame on the basis of those sub-blocks. The size of the video sub-blocks reflects the granularity at which the video block is filtered; the embodiments of the present disclosure do not limit this size.
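A hypothetical sketch of this division step: enumerating the internal vertical boundaries of a block on a fixed grid. The 4-sample grid is an assumption (the embodiment deliberately leaves the sub-block size open), and the function name is illustrative; horizontal boundaries would be enumerated symmetrically.

```cpp
#include <utility>
#include <vector>

// Enumerate the internal vertical sub-block boundaries of a ctu_w x ctu_h
// block on a fixed grid. The block's own left edge (x = 0) is excluded:
// only boundaries between sub-blocks inside the block are listed.
std::vector<std::pair<int, int>> vertical_boundaries(int ctu_w, int ctu_h,
                                                     int grid = 4) {
    std::vector<std::pair<int, int>> edges;  // (x, y) of each boundary segment
    for (int y = 0; y < ctu_h; y += grid)
        for (int x = grid; x < ctu_w; x += grid)
            edges.emplace_back(x, y);
    return edges;
}
```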
In step 202, for any boundary in any video sub-block, the terminal determines filtering parameters for the boundary based on the coding information of the video block.
In the embodiments of the present disclosure, the coding information of a video block includes coding parameters and video content. The coding parameters include the coding partition mode and the Quantization Parameter (QP) of the video block. The coding partition mode covers how the Coding Units (CUs) and Transform Units (TUs) in the video block were partitioned during encoding. The terminal can determine the filtering parameters of a plurality of boundaries in the video block from the coding information of the video block. For any boundary in the video block, the filtering parameters of that boundary indicate the filtering manner in which the pixels on both sides of the boundary are filtered.
In step 203, the terminal filters pixels at both sides of the boundaries of the multiple video sub-blocks based on the filtering parameters of the boundaries of the multiple video sub-blocks in the video block, to obtain a filtered video block.
In the embodiment of the disclosure, for any boundary in any video sub-block in a video frame, the terminal can determine the filtering mode of the boundary based on the filtering parameters of the boundary. Then, the terminal filters pixels on both sides of the boundary based on the filtering method of the boundary. After filtering the pixels on both sides of the boundaries, the terminal obtains a filtered video block. The terminal then obtains a filtered video frame based on the plurality of filtered video blocks.
According to the solution provided by the embodiments of the present disclosure, in the process of decoding any video frame of a video, the frame can be processed in units of video blocks. Specifically, when any video block in the video frame is decoded, the video block is divided into a plurality of video sub-blocks; the filtering parameters of the boundaries of the video sub-blocks are then computed from the coding information of the video block; and the pixels on both sides of those boundaries are then filtered according to the filtering parameters. This packages the computation of a video block's filtering parameters together with its filtering process, so the current video block can be filtered as soon as its filtering parameters are obtained, without searching for the corresponding parameters in a memory that stores the filtering parameters of many video blocks. The time spent storing and reading filtering parameters is thereby reduced, subsequent access efficiency is improved, and the filtering efficiency of the video is improved. In addition, because a video block's filtering parameters are no longer needed once that block has been filtered, the filtering parameters of the previous video block can simply be overwritten when the next video block is filtered; the filtering parameters of all the video blocks of a whole frame need not be kept. Storage thus changes from frame-level units to video-block units, which reduces the memory occupied by filtering parameters and saves cost.
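To make the memory claim above concrete, here is a rough, illustrative footprint comparison for boundary-strength storage, assuming 2 bits per boundary segment on a 4-sample boundary grid (both assumptions drawn from the embodiments below, not fixed by the claims): a single 128x128 CTU-sized buffer that is overwritten per block needs far less memory than frame-level storage for a 1920x1088 frame.

```cpp
#include <cstddef>

// Bytes needed to store 2-bit boundary strengths for a w x h pixel area,
// counting one vertical and one horizontal boundary segment per grid cell.
// The grid size and 2-bit packing are illustrative assumptions.
std::size_t bs_bytes(int w, int h, int grid = 4) {
    std::size_t segs = 2u * (w / grid) * (h / grid); // vertical + horizontal
    return segs * 2 / 8;                             // 2 bits per segment
}
```

Under these assumptions a block-level buffer holds about 512 bytes versus tens of kilobytes per frame, which is the saving the paragraph above describes.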
In some embodiments, filtering boundaries of a plurality of video sub-blocks in a video block based on filtering parameters of the boundaries of the plurality of video sub-blocks to obtain a filtered video block comprises:
filtering out a plurality of first boundaries from the boundaries of the plurality of video sub-blocks based on the boundary strengths of the boundaries of the plurality of video sub-blocks in the video block, to obtain a plurality of second boundaries, where a first boundary is a boundary whose two sides do not require pixel filtering;
determining a filtering mode of the plurality of second boundaries based on boundary strength and quantization parameters in the filtering parameters of the plurality of second boundaries;
and respectively filtering pixels at two sides of the plurality of second boundaries based on the filtering modes of the plurality of second boundaries to obtain filtered video blocks.
According to the solution provided by the embodiments of the present disclosure, when any video block is filtered, the boundaries that do not need filtering are first removed from the boundaries of the plurality of video sub-blocks according to their boundary strengths. This avoids useless filtering of the pixels on both sides of some boundaries, reduces filtering operations while preserving video quality, and improves filtering efficiency. Moreover, each selected second boundary can be further analyzed according to the boundary strength and quantization parameter in its filtering parameters to determine its filtering manner, so that different boundaries are filtered according to their own boundary strength and quantization parameter, which improves filtering quality.
In some embodiments, the boundary strength of a boundary takes one of three values, 0, 1, and 2, where 0 indicates that the corresponding boundary does not need filtering;
the method further comprises the steps of:
the boundary strength of the boundary is stored in the form of 2 bits, wherein 00 is used to represent the boundary strength of the boundary equal to 0, 01 is used to represent the boundary strength of the boundary equal to 1, and 11 is used to represent the boundary strength of the boundary equal to 2.
According to the scheme provided by the embodiments of the present disclosure, the boundary strength of the boundary is stored in 2-bit form; compared with the prior-art scheme of storing the boundary strength in byte form, the data is more compact, and the storage space for the boundary strength can be saved. In addition, the subsequent filtering process based on the boundary strength operates on this more compact data, which can improve filtering efficiency.
In some embodiments, filtering out the plurality of first boundaries from the boundaries of the plurality of video sub-blocks based on the boundary strengths of the boundaries of the plurality of video sub-blocks in the video block to obtain the plurality of second boundaries comprises:
detecting the boundary strength in the filtering parameters of the boundaries of the plurality of video sub-blocks by using a count trailing zeros instruction, and filtering out a plurality of first boundaries from the boundaries of the plurality of video sub-blocks to obtain a plurality of third boundaries;
and further screening the plurality of third boundaries to obtain a plurality of second boundaries.
According to the scheme provided by the embodiments of the present disclosure, since the count trailing zeros instruction computes the number of binary trailing zeros, boundaries with zero boundary strength can be detected quickly. Detecting the boundary strength in the filtering parameters of the boundaries of the plurality of video sub-blocks through the count trailing zeros instruction therefore quickly filters out the boundaries with zero boundary strength, i.e. the boundaries that need no filtering, so that the subsequent filtering process can be entered quickly and overall filtering efficiency is improved. Moreover, because the count trailing zeros instruction cannot necessarily filter out every boundary with zero boundary strength, the plurality of third boundaries are further screened to obtain the plurality of second boundaries, so that all boundaries that need no filtering can be filtered out more completely and accurately, which helps reduce subsequent filtering operations while ensuring video quality and improves filtering efficiency.
In some embodiments, further filtering the plurality of third boundaries to obtain a plurality of second boundaries includes:
traversing a target number of third boundaries each time, wherein the target number represents the maximum number of boundaries that can be processed in parallel;
for any traversal, detecting whether a boundary with zero boundary strength exists among the target number of third boundaries in the current traversal;
and when a boundary with zero boundary strength exists among the target number of third boundaries, filtering out the boundaries with zero boundary strength from the target number of third boundaries by using a gradient descent method to obtain a plurality of second boundaries.
According to the scheme provided by the embodiments of the present disclosure, in the process of further screening the boundaries of the video sub-blocks, each detection can check in parallel whether a boundary with zero boundary strength exists among at most the target number of third boundaries, which improves the detection efficiency of the boundaries. When a boundary with zero boundary strength exists among the target number of third boundaries, the position of that boundary is located step by step using the gradient descent method, so that all boundaries that need no filtering can be filtered out more accurately, subsequent filtering operations are reduced while video quality is ensured, and filtering efficiency is improved.
In some embodiments, when a boundary with zero boundary strength exists among the target number of third boundaries, filtering out the boundaries with zero boundary strength from the target number of third boundaries by using the gradient descent method to obtain the plurality of second boundaries comprises:
when a boundary with zero boundary strength exists among the target number of third boundaries, reducing the detection range to half of the target number and detecting again;
and when a boundary with zero boundary strength is found, filtering out that boundary to obtain the plurality of second boundaries.
According to the scheme provided by the embodiment of the disclosure, under the condition that the boundary with zero boundary strength exists in the third boundary of the target number, the detection range is gradually reduced to detect again according to the gradient descent method so as to determine the position of the boundary with zero boundary strength, so that all boundaries without filtering can be filtered more accurately, and the subsequent filtering operation is reduced under the condition that the video quality is ensured, and the filtering efficiency is improved.
In some embodiments, determining the filtering manner of the plurality of second boundaries based on the boundary strength and the quantization parameter among the filtering parameters of the plurality of second boundaries comprises:
determining the number of boundaries subjected to a single filtering decision as the target number based on single instruction multiple data (SIMD), wherein the filtering decision represents the process of determining the filtering mode of a boundary, and the target number represents the maximum number of boundaries that can be processed in parallel;
in the filtering decision process, for any one of the target number of second boundaries, determining the filtering mode of the second boundary based on the boundary strength and the quantization parameter in the filtering parameters of the second boundary.
According to the scheme provided by the embodiments of the present disclosure, with single instruction multiple data, the target number of boundaries can be processed in parallel, i.e. the filtering decision is made for the target number of boundaries simultaneously, so that the filtering modes of the target number of boundaries are obtained at the same time, which improves the efficiency of the filtering decision and thus the filtering efficiency of the video.
In some embodiments, for any one of the second boundaries of the target number, determining a filtering manner of the second boundary based on a boundary strength and a quantization parameter of filtering parameters of the second boundary includes:
for any one of the second boundaries of the target number, determining a filtering threshold based on quantization parameters of the second boundary;
under the condition that the texture degree of the second boundary is smaller than a filtering threshold value, determining that the second boundary needs to be filtered, wherein the texture degree is used for representing the pixel change rate of two sides of the corresponding boundary;
determining a filtering intensity threshold based on the boundary intensity in the filtering parameters of the second boundary;
And determining a filtering mode of the second boundary based on the filtering strength threshold.
According to the scheme provided by the embodiment of the disclosure, in the process of deciding the filtering mode of any boundary, the filtering threshold of the boundary is determined according to the quantization parameter of the boundary, and the pixel change rate at two sides of the boundary is compared with the filtering threshold of the boundary, so that whether the boundary needs filtering or not can be more accurately analyzed, the subsequent filtering operation is reduced under the condition of ensuring the video quality, and the filtering efficiency is improved; and under the condition that the pixel change rate at two sides of the boundary is smaller than the filtering threshold value of the boundary, determining the filtering intensity threshold value of the boundary so as to determine the filtering mode of the boundary according to the filtering intensity threshold value later, so that the filtering mode is more accurate, and the filtering quality can be improved.
In some embodiments, determining a filtering manner of the second boundary based on the filtering strength threshold comprises:
determining a filtering range of the second boundary based on the maximum filtering length of the second boundary, wherein the filtering range is used for representing the number of pixels adopted when pixels on two sides of the second boundary are filtered;
in the case where the filtering range of the second boundary indicates that short tap filtering is required for pixels on both sides of the second boundary, a filtering strength of the second boundary is determined based on a filtering strength threshold, where the filtering strength is used to represent a correction range of the filtered pixels compared to the pixels before filtering.
According to the scheme provided by the embodiment of the disclosure, according to the maximum filtering length of the boundary, the range of the pixels adopted when the pixels at two sides of the boundary are filtered is determined, and under the condition that the pixels at two sides of the second boundary are required to be filtered with short taps, namely when the range of the pixels adopted when the pixels at two sides of the boundary are required to be filtered is smaller, the filtering strength of the boundary can be determined more finely according to the filtering strength threshold, so that the pixels at two sides of the boundary can be filtered according to the filtering strength of the boundary in the following process, and the filtering quality can be improved.
In some embodiments, for any boundary in any video sub-block, after determining the filtering parameters for the boundary based on the encoding information of the video block, the method further comprises:
in the case where the boundary is located on a common side of transform units in the video block, the filter parameters of other boundaries located at the common side, which refers to a side common to both transform units, are determined based on the filter parameters of the boundary, and the transform units are determined based on the encoded information.
According to the scheme provided by the embodiment of the disclosure, as the filtering parameters of the boundaries of the common sides of the same transformation unit are the same, after the filtering parameters of any boundary of the common sides are determined, the filtering parameters of other boundaries of the common sides can be directly determined, calculation is not needed, the operation is simple, and the video filtering efficiency can be improved.
In some embodiments, the method further comprises:
processing, based on single instruction multiple data, the filtering decisions of a plurality of boundaries located on a common side of a transform unit in parallel to obtain the filtering modes of the plurality of boundaries, wherein the filtering decision represents the process of determining the filtering mode of a boundary;
and filtering the plurality of boundaries in parallel based on single instruction multiple data and the filtering modes of the plurality of boundaries.
According to the scheme provided by the embodiments of the present disclosure, with single instruction multiple data, the target number of boundaries located on the same common side are processed in parallel, i.e. the filtering decision is made for the target number of boundaries simultaneously, so that their filtering modes are obtained at the same time, which improves the efficiency of the filtering decision and thus the filtering efficiency of the video. Moreover, once the filtering parameters of the boundaries located on the same common side are determined, those boundaries can be processed in parallel directly, without storing and then reading back their filtering parameters; this avoids the storage and retrieval of the filtering parameters, saves memory overhead and parameter access time, and improves video filtering efficiency.
The foregoing fig. 2 shows merely the basic flow of the present disclosure; the scheme provided by the present disclosure is further described below based on a specific implementation. Fig. 3 is a flowchart illustrating another video filtering method for decoding according to an exemplary embodiment. Taking as an example that the electronic device is provided as a terminal, referring to fig. 3, the method comprises:
In step 301, in decoding any video block within any video frame of a video, a terminal divides the video block into a plurality of video sub-blocks.
In the embodiments of the present disclosure, in the process of decoding any video block, the terminal may divide the video block into a plurality of video sub-blocks. The terminal can then filter the video frame based on the plurality of video sub-blocks to reduce or eliminate blocking artifacts in the video frame. That is, the filtering method provided by the embodiments of the present disclosure may be regarded as a novel method of deblocking filtering (DBF). The blocking effect refers to the visual appearance of block-shaped artifacts in the video frame caused by discontinuities at coding boundaries introduced by distortion. Since existing coding techniques are all block-based, the prediction, transform, quantization, and other processes of different blocks are independent of one another, so the magnitude and distribution of the quantization errors they introduce are also independent, which produces the blocking effect. The size of the video sub-blocks reflects the granularity at which the video block is filtered; the embodiments of the present disclosure do not limit this size. For example, a video sub-block has a size of 4×4. The terminal then filters pixels on both sides of the boundaries of the plurality of video sub-blocks to filter the video frame. The boundaries of the video sub-blocks may be referred to as filter boundaries.
In step 302, for any boundary in any video sub-block, the terminal determines filtering parameters for the boundary based on the coding information of the video block.
In the embodiment of the disclosure, in the process of filtering a video block, a terminal can respectively filter a luminance component and a chrominance component of the video block. That is, the terminal may determine the luminance filtering parameters of the boundary based on the encoding information about the luminance components so as to subsequently filter the luminance components in the pixels on both sides of the boundary based on the luminance filtering parameters. The terminal may determine the chroma filtering parameters of the boundary based on the coding information about the chroma components so as to subsequently filter the chroma components in the pixels on both sides of the boundary based on the chroma filtering parameters. In filtering any one component, for any boundary in any video sub-block, the filtering parameters of the boundary include boundary strength (Boundary Strength, BS) and quantization parameters. The terminal is capable of determining filter parameters for boundaries of a plurality of video sub-blocks based on coding parameters of the video blocks and the video content. For any boundary, the terminal can determine filtering parameters such as boundary strength and quantization parameters of the boundary based on coding parameters of video sub-blocks at two sides of the boundary and coding information such as video content.
For any boundary, the terminal may determine the boundary strength of the boundary according to the coding mode, transform coefficients, motion vectors, reference frames, and the like of the video sub-blocks on two sides of the boundary. The boundary strength indicates whether the corresponding boundary requires filtering, and takes one of three values: 0, 1, and 2. A value of 0 indicates that the corresponding boundary does not need to be filtered. For a boundary with boundary strength 1 or 2, further analysis is required to determine whether filtering is needed and, if so, the required filtering mode. The embodiments of the present disclosure do not limit the storage manner of the boundary strength.
In some embodiments, the terminal stores the boundary strength of the boundary in 2-bit form, where the boundary refers to the boundary of a video sub-block. That is, the boundary strength is stored in 2-bit units with the boundary of a video sub-block as the basic unit, where 00 represents a boundary strength equal to 0, 01 represents a boundary strength equal to 1, and 11 represents a boundary strength equal to 2. Accordingly, for a video block of size 128×128, the boundary strengths along each row of horizontal or vertical boundaries may be stored in one 64-bit binary value. According to the scheme provided by the embodiments of the present disclosure, storing the boundary strength in 2-bit form makes the data more compact than the prior-art scheme of storing it in multi-byte form, and saves boundary-strength storage space; in addition, the subsequent filtering process based on the boundary strength operates on this more compact data, which can improve filtering efficiency.
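The 2-bit storage scheme described above can be sketched as follows. This is a minimal illustration, not the patent's actual implementation; the function names and row layout are assumptions.

```python
# Illustrative sketch: pack the boundary strength (BS) of each sub-block
# boundary into 2-bit fields of one integer, using 0 -> 0b00, 1 -> 0b01,
# 2 -> 0b11 as described in the text.
BS_CODES = {0: 0b00, 1: 0b01, 2: 0b11}

def pack_boundary_strengths(strengths):
    """Pack a row of BS values (each 0, 1, or 2) into one integer, 2 bits each."""
    packed = 0
    for i, bs in enumerate(strengths):
        packed |= BS_CODES[bs] << (2 * i)
    return packed

def unpack_boundary_strength(packed, i):
    """Read back the BS value stored at boundary position i."""
    code = (packed >> (2 * i)) & 0b11
    return {0b00: 0, 0b01: 1, 0b11: 2}[code]

# A 128x128 block with 4x4 sub-blocks has 32 boundaries per row,
# so one row of boundary strengths fits in 32 * 2 = 64 bits.
row = [0, 2, 1, 0] * 8                    # 32 boundary strengths
packed = pack_boundary_strengths(row)
assert packed.bit_length() <= 64
assert all(unpack_boundary_strength(packed, i) == bs for i, bs in enumerate(row))
```

Compared with one byte per boundary, this layout also lets later stages test a whole run of boundaries with one integer operation.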
For any boundary, the terminal may determine the quantization parameter of the boundary according to the quantization parameters of the video sub-blocks on both sides of the boundary. The embodiment of the present disclosure does not limit the determination manner of the quantization parameter of the boundary. Alternatively, the terminal may perform weighted summation of quantization parameters of video sub-blocks on both sides of the boundary to determine the quantization parameters of the boundary. The disclosed embodiments do not limit the weights of video sub-blocks.
In some embodiments, the filtering parameters of the boundaries at the same common edge are the same in calculating the boundaries of the multiple video sub-blocks. The common edge may be a common edge of the transform unit or a common edge of the encoding unit, which is not limited by the embodiments of the present disclosure. Taking the example that the common side is the common side of the transform units, in the case that the boundary is located at the common side of the transform units in the video block, the filter parameters of other boundaries located at the common side are determined based on the filter parameters of the boundary, and the transform units are determined based on the encoding information. The common side refers to a side common to both transform units. For example, for a chrominance component, the terminal may traverse the common edge of each transform unit in the video block. The terminal then calculates the filter parameters of the common edge of each transform unit, resulting in filter parameters of a plurality of boundaries located at the common edge of the transform unit. The boundaries in the "plurality of boundaries" refer to boundaries of video sub-blocks. According to the scheme provided by the embodiment of the disclosure, as the filtering parameters of the boundaries of the common sides of the same transformation unit are the same, after the filtering parameters of any boundary of the common sides are determined, the filtering parameters of other boundaries of the common sides can be directly determined, calculation is not needed, the operation is simple, and the video filtering efficiency can be improved.
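The common-edge parameter reuse described above can be sketched as a simple cache keyed by common-edge identity. This is an illustrative pattern only; `compute_params` and the edge identifiers are hypothetical placeholders.

```python
# Illustrative sketch: boundaries lying on the same common edge of a transform
# unit share filtering parameters, so compute the parameters once per common
# edge and reuse them for every sub-block boundary on that edge.
def params_for_boundaries(edge_of_boundary, compute_params):
    """Return filtering parameters for each boundary, computing once per edge."""
    cache = {}
    out = []
    for edge in edge_of_boundary:         # common-edge id for each boundary
        if edge not in cache:
            cache[edge] = compute_params(edge)   # computed only on first sight
        out.append(cache[edge])
    return out

calls = []
def fake_compute(edge):
    calls.append(edge)
    return ("params", edge)

result = params_for_boundaries(["e0", "e0", "e1", "e0"], fake_compute)
assert calls == ["e0", "e1"]              # each common edge computed once
assert result[0] == result[1] == result[3]
```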
In step 303, the terminal filters out a plurality of first boundaries from the boundaries of the plurality of video sub-blocks based on the boundary strength of the boundaries of the plurality of video sub-blocks in the video block, to obtain a plurality of second boundaries, where the first boundaries are used to represent boundaries where filtering of pixels on both sides of the boundaries is not required.
In the embodiment of the disclosure, the terminal filters out boundaries with zero boundary strength from the boundaries of the plurality of video sub-blocks based on the boundary strength of the boundaries of the plurality of video sub-blocks in the video block, and obtains a plurality of second boundaries. The filtering manner of the boundary is not limited in the embodiments of the present disclosure.
In some embodiments, the terminal may use a count trailing zeros (CTZ) instruction to filter out boundaries with zero boundary strength. Correspondingly, the terminal detects the boundary strength in the filtering parameters of the boundaries of the plurality of video sub-blocks by using the count trailing zeros instruction, and filters out a plurality of first boundaries from the boundaries of the plurality of video sub-blocks to obtain a plurality of third boundaries. The terminal then further screens the plurality of third boundaries to obtain a plurality of second boundaries. According to the scheme provided by the embodiments of the present disclosure, since the count trailing zeros instruction computes the number of binary trailing zeros, boundaries with zero boundary strength can be detected quickly; detecting the boundary strength through this instruction therefore quickly filters out the boundaries with zero boundary strength, i.e. the boundaries that need no filtering, so that the subsequent filtering process can be entered quickly and overall filtering efficiency is improved. Moreover, because the count trailing zeros instruction cannot necessarily filter out every boundary with zero boundary strength, the plurality of third boundaries are further screened to obtain the plurality of second boundaries, so that all boundaries that need no filtering can be filtered out more completely and accurately, which helps reduce subsequent filtering operations while ensuring video quality and improves filtering efficiency.
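The CTZ-based screening can be illustrated as follows. This sketch assumes the boundary strengths have first been reduced to a bitmask of non-zero entries; since Python has no CTZ instruction, it is emulated with the standard `x & -x` bit trick.

```python
# Illustrative sketch: use a count-trailing-zeros operation on a bitmask of
# non-zero boundary strengths to jump directly to the next boundary that may
# need filtering, skipping any run of zero-strength boundaries in one step.
def ctz(x):
    """Count trailing zeros of a positive integer (emulates the CTZ instruction)."""
    return (x & -x).bit_length() - 1

def boundaries_to_consider(strengths):
    """Return indices of boundaries whose BS is non-zero, found via CTZ jumps."""
    mask = 0
    for i, bs in enumerate(strengths):
        if bs != 0:
            mask |= 1 << i
    out = []
    while mask:
        i = ctz(mask)          # index of the next boundary with non-zero BS
        out.append(i)
        mask &= mask - 1       # clear that bit and continue
    return out

assert boundaries_to_consider([0, 0, 2, 0, 1, 0, 0, 2]) == [2, 4, 7]
```

On real hardware the same effect comes from an intrinsic such as a compiler CTZ builtin operating on the packed boundary-strength word.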
In some embodiments, to ensure that all boundaries with zero boundary strength are filtered out, the terminal may further screen the plurality of third boundaries obtained by the preceding filtering. Correspondingly, the process of the terminal further screening the plurality of third boundaries to obtain a plurality of second boundaries includes: the terminal traverses a target number of third boundaries each time. Then, for any traversal, the terminal detects whether a boundary with zero boundary strength exists among the target number of third boundaries in the current traversal. When such a boundary exists, the terminal uses a gradient descent method to filter out the boundaries with zero boundary strength from the target number of third boundaries, obtaining a plurality of second boundaries. According to the scheme provided by the embodiments of the present disclosure, in the process of further screening the boundaries of the video sub-blocks, the gradient descent method allows each detection to check in parallel whether a boundary with zero boundary strength exists among at most the target number of third boundaries, improving the detection efficiency of the boundaries; and when such a boundary exists, its position is located step by step, so that all boundaries that need no filtering can be filtered out more accurately, subsequent filtering operations are reduced while video quality is ensured, and filtering efficiency is improved.
The target number represents the maximum number of boundaries that can be processed in parallel. For the luminance component, the target number is equal to 4; that is, when filtering the luminance component, the terminal can check at most 4 boundaries at a time for a boundary with zero boundary strength. For the chrominance components, since chrominance comprises two components, U and V, the target number is equal to 2; that is, when filtering the chrominance components, the terminal can check at most 2 boundaries at a time for a boundary with zero boundary strength.
Because the U component and the V component are stored alternately in the video frame store, and the filtering parameters of the U component and the V component at the same position are the same, the scheme provided by the embodiments of the present disclosure can also process the U component and the V component of a chroma sample at the same position in parallel, to improve parallel processing capability. For example, fig. 4 is a schematic diagram illustrating one manner of chroma component storage according to an exemplary embodiment. Referring to fig. 4, the U and V components are stored in an alternating fashion in the video frame store. The terminal uses single instruction multiple data to process at most 2 boundaries at a time.
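Because the U and V samples are interleaved and share filtering parameters at the same position, one pass can update both components of a position together; a minimal sketch of that idea follows (the correction value and sample layout are illustrative, not the actual filter arithmetic).

```python
# Illustrative sketch: chroma samples interleaved as U0 V0 U1 V1 ...; since
# the filtering parameters at the same position are identical for U and V,
# a single update touches both components of that position (2-wide lanes).
def offset_uv_pair(interleaved, pos, delta):
    """Apply the same correction to the U and V samples at position pos."""
    interleaved[2 * pos] += delta        # U component
    interleaved[2 * pos + 1] += delta    # V component
    return interleaved

samples = [100, 50, 102, 48, 99, 51]     # U0 V0 U1 V1 U2 V2
assert offset_uv_pair(samples, 1, 4) == [100, 50, 106, 52, 99, 51]
```

In a SIMD implementation the interleaved layout means both components load into adjacent lanes of one vector register, so no de-interleaving is needed.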
In some embodiments, the gradient descent method refers to progressively decreasing the number of boundaries to be processed. Correspondingly, in the case that the boundary with zero boundary strength exists in the third boundary of the target number, the terminal adopts a gradient descent method, filters the boundary with zero boundary strength from the third boundary of the target number, and the process of obtaining a plurality of second boundaries comprises the following steps: when a boundary with zero boundary strength exists in the third boundary of the target number, the terminal reduces the detection range to half of the target number, and the terminal detects again. Then, in the case of finding a boundary with zero boundary strength, the terminal filters out the boundary with zero boundary strength to obtain a plurality of second boundaries. The second boundary refers to the boundary of a video sub-block with a boundary strength that is not zero. According to the scheme provided by the embodiment of the disclosure, under the condition that the boundary with zero boundary strength exists in the third boundary of the target number, the detection range is gradually reduced to detect again according to the gradient descent method so as to determine the position of the boundary with zero boundary strength, so that all boundaries without filtering can be filtered more accurately, and the subsequent filtering operation is reduced under the condition that the video quality is ensured, and the filtering efficiency is improved.
For example, in the filtering process for the luminance component, the terminal can detect whether there is a boundary with zero boundary strength among 4 boundaries. If there is, the terminal narrows the detection range; that is, it detects whether there is a boundary with zero boundary strength among 2 boundaries at a time. If there still is, the terminal continues to narrow the detection range; that is, it detects whether the boundary strength of 1 boundary at a time is zero. If none of the 4 boundaries has zero boundary strength, the terminal may proceed with filtering decisions and filtering operations for these 4 boundaries; that is, the terminal performs steps 304 and 305 for them. Meanwhile, the terminal can continue to detect whether a boundary with zero boundary strength exists among the next 4 boundaries. It follows that, during filtering of the luminance component, the terminal traverses the boundaries with non-zero boundary strength with the maximum degree of parallelism according to the gradient descent sequence 4, 2, 1; during filtering of the chrominance components, it does so according to the sequence 2, 1.
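The 4 → 2 → 1 narrowing described above can be sketched as follows. The recursive form is illustrative; a real implementation would test a whole group with one parallel comparison rather than Python's `in`.

```python
# Illustrative sketch of the "gradient descent" narrowing: test a group of
# (at most) 4 luma boundaries at once; if any has zero boundary strength,
# halve the detection range (4 -> 2 -> 1) to locate it, so groups that are
# entirely non-zero fall through to the filtering decision after one test.
def locate_zero_bs(strengths, start, width):
    """Return indices of zero-BS boundaries in [start, start+width) by halving."""
    group = strengths[start:start + width]
    if 0 not in group:                    # one group-wide check
        return []
    if width == 1:
        return [start]
    half = width // 2
    return (locate_zero_bs(strengths, start, half) +
            locate_zero_bs(strengths, start + half, half))

assert locate_zero_bs([1, 0, 2, 2], 0, 4) == [1]   # zero-BS boundary located
assert locate_zero_bs([1, 1, 2, 2], 0, 4) == []    # whole group passes at once
```

For chrominance the same routine would simply be called with an initial width of 2 instead of 4.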
In step 304, the terminal determines a filtering mode of the plurality of second boundaries based on the pixel values at both sides of the plurality of second boundaries and the boundary strength and quantization parameter among the filtering parameters.
In the embodiments of the present disclosure, the filtering mode of a boundary may be long tap filtering, short tap strong filtering, or short tap weak filtering, which is not limited by the embodiments of the present disclosure. Here, "tap" refers to the pixels on both sides of the boundary used in filtering; a "long tap" filter uses more pixels on both sides of the boundary than a "short tap" filter. The strength of the filtering refers to the correction magnitude of the filtered pixel relative to the pixel before filtering; the pixel correction range of a "strong" filter is larger than that of a "weak" filter. For any second boundary, the terminal determines the filtering mode of the second boundary based on the pixel values on two sides of the second boundary and the boundary strength and quantization parameter in the filtering parameters. When filtering the luminance component, the terminal determines the luminance filtering mode of the second boundary based on the pixel values on two sides of the second boundary and the boundary strength and quantization parameter in the luminance filtering parameters; when filtering the chrominance components, the terminal likewise determines the chrominance filtering mode from the chrominance filtering parameters. The filtering modes of different boundaries may be the same or different, which is not limited by the embodiments of the present disclosure.
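The notion of filtering strength as a bounded correction can be sketched as a clamp. The exact filter formulas are not given in this section; the sketch only shows the clamping idea, a common deblocking pattern, with illustrative values.

```python
# Illustrative sketch: the filtering strength bounds the correction applied
# to a pixel -- a "strong" filter permits a larger correction range than a
# "weak" one. Here the raw correction is clipped to [-t_c, t_c] before use.
def apply_correction(pixel, raw_delta, t_c):
    """Apply a correction to a pixel, limited to the allowed range [-t_c, t_c]."""
    delta = max(-t_c, min(t_c, raw_delta))
    return pixel + delta

assert apply_correction(100, 9, t_c=4) == 104    # clipped to +4
assert apply_correction(100, -2, t_c=4) == 98    # within range, unchanged
```

A strong filter would use a larger `t_c`-style bound (or none), while a weak filter keeps corrections small so that genuine detail is preserved.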
In some embodiments, the terminal may make the filtering decision using single instruction multiple data (SIMD). Correspondingly, the process of the terminal determining the filtering modes of the plurality of second boundaries based on the boundary strength and the quantization parameter in the filtering parameters of the plurality of second boundaries includes: the terminal determines, based on SIMD, the number of boundaries subjected to a single filtering decision as the target number. Then, in the filtering decision process, for any one of the target number of second boundaries, the terminal determines the filtering mode of the second boundary based on the boundary strength and the quantization parameter in the filtering parameters of the second boundary. The filtering decision represents the process of determining the filtering mode of a boundary, and the target number represents the maximum number of boundaries that can be processed in parallel. According to the scheme provided by the embodiments of the present disclosure, with SIMD, the target number of boundaries can be processed in parallel, i.e. the filtering decision is made for the target number of boundaries simultaneously, so that their filtering modes are obtained at the same time, which improves the efficiency of the filtering decision and thus the filtering efficiency of the video.
In some embodiments, the process in which the terminal determines the filtering manner of a second boundary based on the boundary strength and quantization parameter in the filtering parameters of the second boundary includes: for any one of the target number of second boundaries, the terminal determines a filtering threshold based on the quantization parameter of the second boundary. When the texture degree of the second boundary is smaller than the filtering threshold, the terminal determines that the second boundary needs to be filtered. The terminal then determines a filtering strength threshold based on the boundary strength in the filtering parameters of the second boundary, and determines the filtering manner of the second boundary based on the filtering strength threshold. Here, the texture degree represents the rate of change of the pixels on both sides of the corresponding boundary. According to the scheme provided by the embodiments of the present disclosure, in deciding the filtering manner of any boundary, the filtering threshold of the boundary is determined from the quantization parameter of the boundary, and the pixel change rate on both sides of the boundary is compared with that filtering threshold. This makes it possible to analyze more accurately whether the boundary needs filtering, reducing subsequent filtering operations while ensuring video quality and thus improving filtering efficiency. It also avoids erroneous filtering: some boundaries are real boundaries of objects in the image, and filtering a real object boundary would blur or distort the object.
The filtering threshold may be denoted by β and the filtering strength threshold by t_c, which is not limited by the embodiments of the present disclosure. The filtering threshold and the filtering strength threshold may be determined by table lookup: a relation table stores the correspondence between the quantization parameter of a boundary and the filtering threshold and filtering strength threshold. Specifically, the terminal may determine a first quantization parameter of the boundary based on the quantization parameters of the video sub-blocks on both sides of the boundary, and then look up the filtering threshold corresponding to the first quantization parameter in the relation table as the filtering threshold of the boundary. The terminal may determine a second quantization parameter of the boundary based on the boundary strength of the boundary, and then look up the filtering strength threshold corresponding to the second quantization parameter in the relation table as the filtering strength threshold of the boundary.
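The two table lookups described above can be sketched as follows. The table values and index derivations below are illustrative assumptions (loosely modeled on VVC-style deblocking), not the actual relation table of this disclosure.

```python
# Illustrative tables: the values and index formulas are assumptions,
# not the relation table of this disclosure.
BETA_TABLE = [i * 2 for i in range(64)]           # beta indexed by QP
TC_TABLE = [max(0, i - 16) for i in range(66)]    # t_c indexed by QP + offset

def boundary_thresholds(qp_p, qp_q, boundary_strength):
    # First quantization parameter: derived here from the QPs of the video
    # sub-blocks on both sides of the boundary (their rounded average).
    qp_avg = (qp_p + qp_q + 1) >> 1
    beta = BETA_TABLE[qp_avg]
    # Second quantization parameter: derived here from the boundary strength
    # (a BS-dependent offset added to the average QP, clamped to the table).
    tc = TC_TABLE[min(qp_avg + 2 * (boundary_strength - 1), 65)]
    return beta, tc

print(boundary_thresholds(30, 32, 2))  # (beta, t_c) for one boundary
```

The essential point is that both thresholds come from precomputed tables, so the per-boundary cost is two array indexings rather than any arithmetic on pixel data.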
In some embodiments, the process in which the terminal determines the filtering manner of the second boundary based on the filtering strength threshold includes: the terminal determines a filtering range of the second boundary based on the maximum filter length of the second boundary, where the filtering range indicates how many pixels are used when filtering the pixels on both sides of the second boundary. Then, when the filtering range of the second boundary indicates that short tap filtering is required for the pixels on both sides of the second boundary, the terminal determines the filtering strength of the second boundary based on the filtering strength threshold, where the filtering strength represents the correction range of the filtered pixels compared with the pixels before filtering. According to the scheme provided by the embodiments of the present disclosure, the range of pixels used when filtering the pixels on both sides of a boundary is determined from the maximum filter length of the boundary. When short tap filtering is required, that is, when the range of pixels used for filtering is small, the filtering strength of the boundary can be determined more finely from the filtering strength threshold, so that the pixels on both sides of the boundary can subsequently be filtered according to that strength, improving the filtering quality.
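The decision chain above can be sketched as follows. The length cutoff for long tap filtering and the strong/weak strength test are assumptions for illustration only, not values taken from this disclosure.

```python
# Illustrative decision chain: threshold check, then filter range from the
# maximum filter length, then strong vs. weak selection for short taps.
# The cutoff (7) and the strength test (texture < 4 * t_c) are assumptions.
def choose_filter(max_filter_length, texture, beta, tc):
    if texture >= beta:
        return "no-filter"            # boundary does not need filtering
    if max_filter_length >= 7:        # assumed cutoff for long tap filtering
        return "long-tap"
    # Short tap filtering: pick strong vs. weak from the strength threshold.
    return "short-tap-strong" if texture < (tc * 4) else "short-tap-weak"

print(choose_filter(7, 10, 60, 5))   # enough length and smooth -> long tap
print(choose_filter(3, 30, 60, 5))   # short range, high texture -> weak
```

Note that the strength threshold t_c only comes into play once the range check has forced a short tap, matching the order of the decisions in the text.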
Fig. 5 is a schematic diagram illustrating filtering for the luminance component according to an exemplary embodiment. Referring to fig. 5, when decoding any video block within any video frame of a video, the terminal divides the video block into a plurality of video sub-blocks. Then, for any boundary in any video sub-block, the terminal determines the filtering parameters of the boundary based on the encoding information of the video block. The terminal then uses a count trailing zeros instruction to filter out the boundaries whose boundary strength is zero. Next, for the boundaries with non-zero boundary strength, the terminal determines the maximum number of boundaries that can be processed in parallel based on a gradient descent method. Then, for any boundary with non-zero boundary strength, the terminal determines the filtering threshold β based on the quantization parameter of the boundary, and determines the filtering strength threshold t_c based on the boundary strength and quantization parameter of the boundary. When the texture degree of the boundary is smaller than the filtering threshold β, the terminal determines that the boundary needs to be filtered; otherwise, the boundary does not need to be filtered. When the boundary needs to be filtered, the terminal makes a long filtering decision based on the maximum filter length of the boundary to determine whether to perform long filtering (long tap filtering) or short filtering (short tap filtering) on the boundary. Specifically, when the conditions on the filtering threshold β and the filtering strength threshold t_c are both satisfied, the terminal performs long filtering on the boundary; otherwise, the terminal performs short filtering on the boundary. When short filtering is selected, the terminal determines the filtering strength of the boundary based on the filtering strength threshold, to decide whether to apply strong filtering or weak filtering to the boundary.
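The count-trailing-zeros step above can be sketched as follows, assuming boundary strengths are packed 2 bits per boundary (01 for strength 1, 11 for strength 2, matching the 2-bit storage used in this disclosure). This is a scalar Python stand-in for a hardware CTZ instruction; the names are illustrative.

```python
# Illustrative CTZ-based scan: boundary strengths are packed 2 bits each
# into one word, and a count-trailing-zeros step jumps directly to the next
# boundary with non-zero strength, skipping zero-strength boundaries.
def nonzero_boundaries(packed_bs, num_boundaries):
    indices = []
    mask = packed_bs
    while mask:
        tz = (mask & -mask).bit_length() - 1   # count trailing zeros
        idx = tz // 2                          # 2 bits per boundary
        if idx >= num_boundaries:
            break
        indices.append(idx)
        mask &= ~(0b11 << (idx * 2))           # clear this boundary's field
    return indices

# Boundaries with strengths [1, 0, 2, 0], packed 2 bits each
# (01 -> strength 1, 11 -> strength 2).
packed = 0b01 | (0b11 << 4)
print(nonzero_boundaries(packed, 4))  # indices of boundaries to filter
```

A hardware implementation would use the CPU's CTZ instruction (e.g. `tzcnt`) on the packed word instead of the `bit_length` idiom shown here.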
Fig. 6 is a schematic diagram illustrating filtering for the chrominance components according to an exemplary embodiment. Referring to fig. 6, the terminal may traverse the common edges of the transform units in the video block. The terminal then calculates the filtering parameters of the common edge of each transform unit, obtaining the filtering parameters of a plurality of boundaries located on the common edges of the transform units. Next, for the boundaries with non-zero boundary strength, the terminal determines the maximum number of boundaries that can be processed in parallel based on a gradient descent method. Then, for any boundary with non-zero boundary strength, the terminal determines the filtering threshold β based on the quantization parameter of the boundary, and determines the filtering strength threshold t_c based on the boundary strength of the boundary. When the texture degree of the boundary is smaller than the filtering threshold β, the terminal determines that the boundary needs to be filtered; otherwise, the boundary does not need to be filtered. When the boundary needs to be filtered, the terminal makes a long filtering decision based on the maximum filter length of the boundary to determine whether to perform long filtering (long tap filtering) or short filtering (short tap filtering) on the boundary. When short filtering is selected, the terminal determines the filtering strength of the boundary based on the filtering strength threshold, to decide whether to apply strong filtering or weak filtering to the boundary.
In step 305, the terminal filters the pixels on both sides of the plurality of second boundaries based on the filtering manners of the plurality of second boundaries, to obtain the filtered video block.
In the embodiments of the present disclosure, for any second boundary, the terminal filters the luminance components of the pixels on both sides of the second boundary based on the filtering manner of the boundary for the luminance component, and filters the chrominance components of the pixels on both sides of the second boundary based on the filtering manner of the boundary for the chrominance components. The terminal may process the filtering for the luminance component and the filtering for the chrominance components in parallel. The filtering processes for the luminance and chrominance components are described below.
In some embodiments, for the luminance component, the terminal processes the filtering decisions of a plurality of boundaries with non-zero boundary strength in parallel based on SIMD, obtaining the filtering manners of the plurality of boundaries. Then, the terminal filters the plurality of boundaries in parallel based on SIMD and the filtering manners of the plurality of boundaries. According to the scheme provided by the embodiments of the present disclosure, by employing SIMD and directly processing the plurality of boundaries with non-zero boundary strength in parallel, the efficiency of the filtering decision is improved; moreover, there is no need to look up the corresponding filtering parameters in a memory storing the filtering parameters of a plurality of video blocks in order to filter the current video block, which reduces the time consumed in reading filtering parameters and improves the filtering efficiency of the video.
In some embodiments, for the chrominance components, the terminal processes the filtering decisions of a plurality of boundaries located on a common edge of the transform units in parallel based on SIMD, obtaining the filtering manners of the plurality of boundaries; the filtering decision refers to the process of determining the filtering manner of a boundary. Then, the terminal filters the plurality of boundaries in parallel based on SIMD and the filtering manners of the plurality of boundaries. According to the scheme provided by the embodiments of the present disclosure, by employing SIMD, the target number of boundaries located on the same common edge are processed in parallel, that is, the filtering decision is made for the target number of boundaries at the same time, so that their filtering manners are obtained at the same time, improving the efficiency of the filtering decision and thus the filtering efficiency of the video. Moreover, once the filtering parameters of the boundaries located on the same common edge are determined, those boundaries can be processed in parallel directly, without storing their filtering parameters; avoiding the storage and reading of filtering parameters saves memory overhead and parameter access time, and improves the video filtering efficiency.
According to the scheme provided by the embodiments of the present disclosure, in decoding any video frame of a video, the frame can be processed in units of video blocks. Specifically, when decoding any video block in the video frame, the video block is divided into a plurality of video sub-blocks; the filtering parameters of the boundaries of the video sub-blocks are then calculated from the encoding information of the video block; and the pixels on both sides of the boundaries of the video sub-blocks are filtered according to those filtering parameters. The calculation of the filtering parameters and the filtering process are thus packaged together within the video block: the current video block can be filtered as soon as its filtering parameters are obtained, without looking up the corresponding filtering parameters in a memory storing the filtering parameters of many video blocks. This reduces the time consumed in storing and reading filtering parameters and improves subsequent access efficiency, thereby improving the filtering efficiency of the video. In addition, since the filtering parameters of a video block are no longer needed once that block has been filtered, the filtering parameters of the previous video block can be directly overwritten when filtering the next video block, and there is no need to store the filtering parameters of all the video blocks in an entire video frame one after another. Storage of filtering parameters in units of video frames is thus replaced by storage in units of video blocks, which reduces the memory space occupied by the filtering parameters and saves cost.
Any combination of the above-mentioned optional solutions may be adopted to form an optional embodiment of the present disclosure, which is not described herein in detail.
Fig. 7 is a block diagram illustrating a video filtering apparatus for decoding according to an exemplary embodiment. Referring to fig. 7, the video filtering apparatus for decoding includes: a dividing unit 701, a first determining unit 702, and a filtering unit 703.
A dividing unit 701 configured to perform division of a video block into a plurality of video sub-blocks in decoding any video block within any video frame of a video;
a first determining unit 702 configured to perform determining, for any boundary in any video sub-block, filtering parameters of the boundary based on encoding information of the video block;
the filtering unit 703 is configured to perform filtering on pixels at both sides of the boundaries of the plurality of video sub-blocks based on the filtering parameters of the boundaries of the plurality of video sub-blocks in the video block, resulting in a filtered video block.
In some embodiments, fig. 8 is a block diagram illustrating another video filtering device for decoding according to an example embodiment. Referring to fig. 8, the filtering unit 703 includes:
a processing subunit 7031 configured to filter out a plurality of first boundaries from the boundaries of the plurality of video sub-blocks based on the boundary strengths of the boundaries of the plurality of video sub-blocks in the video block, to obtain a plurality of second boundaries, where a first boundary is a boundary whose pixels on both sides do not need to be filtered;
A determining subunit 7032 configured to perform determining a filtering manner of the plurality of second boundaries based on the boundary strength and the quantization parameter among the filtering parameters of the plurality of second boundaries;
the filtering subunit 7033 is configured to perform a filtering manner based on the plurality of second boundaries, and filter pixels on two sides of the plurality of second boundaries respectively, so as to obtain a filtered video block.
In some embodiments, the boundary strength of a boundary takes one of three values: 0, 1, or 2, where 0 indicates that the corresponding boundary does not need to be filtered;
with continued reference to fig. 8, the apparatus further includes:
the storage unit 704 is configured to perform storing the boundary strength of the boundary in the form of 2 bits, wherein 00 is used to represent the boundary strength of the boundary being equal to 0, 01 is used to represent the boundary strength of the boundary being equal to 1, and 11 is used to represent the boundary strength of the boundary being equal to 2.
In some embodiments, with continued reference to fig. 8, the processing subunit 7031 is configured to detect the boundary strengths in the filtering parameters of the boundaries of the plurality of video sub-blocks using a count trailing zeros instruction, filtering out the plurality of first boundaries from the boundaries of the plurality of video sub-blocks to obtain a plurality of third boundaries; and to further screen the plurality of third boundaries to obtain the plurality of second boundaries.
In some embodiments, with continued reference to fig. 8, the processing subunit 7031 is configured to traverse a target number of third boundaries per traversal, where the target number represents the maximum number of boundaries that can be processed in parallel; for any traversal, to detect whether a boundary with zero boundary strength exists among the target number of third boundaries in the current traversal; and, when a boundary with zero boundary strength exists among the target number of third boundaries, to filter out the boundaries with zero boundary strength from the target number of third boundaries using a gradient descent method, to obtain the plurality of second boundaries.
In some embodiments, with continued reference to fig. 8, the processing subunit 7031 is configured to, when a boundary with zero boundary strength exists among the target number of third boundaries, narrow the detection range to half of the target number and detect again; and, when the boundaries with zero boundary strength are located, to filter them out to obtain the plurality of second boundaries.
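The halving detection just described can be sketched as follows. This is one illustrative reading of the range-halving screening (which the disclosure calls a gradient descent method), not its exact algorithm; the names are assumptions.

```python
# Illustrative range-halving screen: within a group of target-number
# boundaries, if any zero-strength boundary is present, the detection range
# is repeatedly narrowed to half until the zero-strength boundaries are
# located; those are filtered out and the rest are kept for filtering.
def screen_group(bs_group):
    def find_zeros(lo, hi):
        if all(bs_group[i] != 0 for i in range(lo, hi)):
            return []                  # no zero-strength boundary in range
        if hi - lo == 1:
            return [lo]                # located a zero-strength boundary
        mid = (lo + hi) // 2           # narrow the detection range to half
        return find_zeros(lo, mid) + find_zeros(mid, hi)

    zeros = set(find_zeros(0, len(bs_group)))
    return [i for i in range(len(bs_group)) if i not in zeros]

print(screen_group([1, 0, 2, 2, 0, 1, 1, 1]))  # indices kept for filtering
```

The all-non-zero check per half is what a vectorized implementation would do with one SIMD compare-and-test, so whole halves with no zero-strength boundary are skipped in a single step.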
In some embodiments, with continued reference to fig. 8, determining subunit 7032 includes:
a first determining subunit 70321 configured to determine, based on single instruction, multiple data (SIMD), the number of boundaries for which a filtering decision is made in a single pass as the target number, where the filtering decision refers to the process of determining the filtering manner of a boundary and the target number represents the maximum number of boundaries that can be processed in parallel;
The second determining subunit 70322 is configured to determine, in the filtering decision process, for any one of the second boundaries of the target number, a filtering manner of the second boundary based on the boundary strength and the quantization parameter among the filtering parameters of the second boundary.
In some embodiments, with continued reference to fig. 8, a second determination subunit 70322 configured to perform determining a filtering threshold for any of the second boundaries for the target number based on quantization parameters of the second boundaries; under the condition that the texture degree of the second boundary is smaller than a filtering threshold value, determining that the second boundary needs to be filtered, wherein the texture degree is used for representing the pixel change rate of two sides of the corresponding boundary; determining a filtering intensity threshold based on the boundary intensity in the filtering parameters of the second boundary; and determining a filtering mode of the second boundary based on the filtering strength threshold.
In some embodiments, with continued reference to fig. 8, a second determination subunit 70322 configured to perform determining a filter range for the second boundary based on a maximum filter length for the second boundary, the filter range being indicative of how many pixels are employed in filtering pixels on both sides of the second boundary; in the case where the filtering range of the second boundary indicates that short tap filtering is required for pixels on both sides of the second boundary, a filtering strength of the second boundary is determined based on a filtering strength threshold, where the filtering strength is used to represent a correction range of the filtered pixels compared to the pixels before filtering.
In some embodiments, with continued reference to fig. 8, the apparatus further comprises:
the second determining unit 705 is configured to, when a boundary is located on a common edge of transform units in the video block, determine the filtering parameters of the other boundaries located on that common edge based on the filtering parameters of the boundary, where the transform units are determined based on the encoding information and a common edge refers to an edge shared by two transform units.
In some embodiments, the filtering unit 703 is configured to process the filtering decisions of a plurality of boundaries located on a common edge of the transform units in parallel based on SIMD, obtaining the filtering manners of the plurality of boundaries, where the filtering decision refers to the process of determining the filtering manner of a boundary; and to filter the plurality of boundaries in parallel based on SIMD and the filtering manners of the plurality of boundaries.
The embodiments of the present disclosure provide a video filtering apparatus for decoding. In decoding any video frame of a video, the frame can be processed in units of video blocks. Specifically, when decoding any video block in the video frame, the video block is divided into a plurality of video sub-blocks; the filtering parameters of the boundaries of the plurality of video sub-blocks are then calculated from the encoding information of the video block; and the pixels on both sides of those boundaries are filtered according to the filtering parameters, to obtain the filtered video block. The calculation of the filtering parameters and the filtering process are thus packaged together within the video block: the current video block can be filtered as soon as its filtering parameters are obtained, without looking up the corresponding filtering parameters in a memory storing the filtering parameters of many video blocks. This reduces the time consumed in storing and reading filtering parameters and improves subsequent access efficiency, thereby improving the filtering efficiency of the video. In addition, since the filtering parameters of a video block are no longer needed once that block has been filtered, the filtering parameters of the previous video block can be directly overwritten when filtering the next video block, and there is no need to store the filtering parameters of all the video blocks in an entire video frame one after another. Storage of filtering parameters in units of video frames is thus replaced by storage in units of video blocks, which reduces the memory space occupied by the filtering parameters and saves cost.
It should be noted that, when the video filtering apparatus for decoding provided in the foregoing embodiments filters a video, the division into the above functional units is only used as an example. In practical applications, the above functions may be allocated to different functional units as needed; that is, the internal structure of the electronic device may be divided into different functional units to complete all or part of the functions described above. In addition, the video filtering apparatus for decoding provided in the above embodiments and the embodiments of the video filtering method for decoding belong to the same concept; for its detailed implementation, refer to the method embodiments, which is not repeated herein.
The specific manner in which the various modules perform operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method, and will not be described in detail herein.
The electronic device may be provided as a terminal or a server, and when the electronic device is provided as a terminal, operations performed by the video filtering method for decoding may be implemented by the terminal; when provided as a server, operations performed by the video filtering method for decoding may be implemented by the server; operations performed by the video filtering method for decoding may also be implemented by the server and terminal interaction, which is not limited by the embodiments of the present disclosure.
When the electronic device is provided as a terminal, fig. 9 is a block diagram of a terminal 900 according to an exemplary embodiment of the present disclosure. The terminal 900 may be a smart phone, a tablet computer, an MP3 player, an MP4 player, a notebook computer, or a desktop computer. The terminal 900 may also be referred to by other names such as user device, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 900 includes: a processor 901 and a memory 902.
Processor 901 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 901 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). The processor 901 may also include a main processor and a coprocessor: the main processor, also referred to as a CPU (Central Processing Unit), is a processor for processing data in an awake state; the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 901 may integrate a GPU (Graphics Processing Unit) for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 901 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 902 may include one or more computer-readable storage media, which may be non-transitory. The memory 902 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 902 is used to store at least one program code for execution by processor 901 to implement the video filtering method for decoding provided by the method embodiments in the present disclosure.
In some embodiments, the terminal 900 may further optionally include: a peripheral interface 903, and at least one peripheral. The processor 901, memory 902, and peripheral interface 903 may be connected by a bus or signal line. The individual peripheral devices may be connected to the peripheral device interface 903 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 904, a display 905, a camera assembly 906, audio circuitry 907, and a power source 908.
The peripheral interface 903 may be used to connect at least one I/O (Input/Output)-related peripheral to the processor 901 and the memory 902. In some embodiments, the processor 901, the memory 902, and the peripheral interface 903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 901, the memory 902, and the peripheral interface 903 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 904 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 904 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 904 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 904 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 904 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 904 may also include NFC (Near Field Communication) related circuitry, which is not limited by the present disclosure.
The display 905 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 905 is a touch display, the display 905 also has the ability to capture touch signals on or above the surface of the display 905. The touch signal may be input to the processor 901 as a control signal for processing. In this case, the display 905 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 905, provided on the front panel of the terminal 900; in other embodiments, there may be at least two displays 905, respectively disposed on different surfaces of the terminal 900 or in a folded design; in still other embodiments, the display 905 may be a flexible display disposed on a curved or folded surface of the terminal 900. The display 905 may even be arranged in an irregular, non-rectangular pattern, i.e., an irregularly shaped screen. The display 905 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 906 is used to capture images or video. Optionally, the camera assembly 906 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth camera can be fused to realize a background blurring function, or the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions. In some embodiments, the camera assembly 906 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, and may be used for light compensation at different color temperatures.
The audio circuit 907 may include a microphone and a speaker. The microphone is used to collect sound waves of the user and the environment, convert the sound waves into electrical signals, and input them to the processor 901 for processing or to the radio frequency circuit 904 for voice communication. For stereo acquisition or noise reduction purposes, there may be a plurality of microphones disposed at different portions of the terminal 900. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 901 or the radio frequency circuit 904 into sound waves. The speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans, but also into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 907 may also include a headphone jack.
A power supply 908 is used to power the various components in the terminal 900. The power supply 908 may be an alternating-current supply, a direct-current supply, a disposable battery, or a rechargeable battery. When the power supply 908 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging, and may also support fast-charge technology.
In some embodiments, terminal 900 can further include one or more sensors 909. The one or more sensors 909 include, but are not limited to: acceleration sensor 910, gyroscope sensor 911, pressure sensor 912, optical sensor 913, and proximity sensor 914.
The acceleration sensor 910 may detect the magnitudes of accelerations on the three coordinate axes of a coordinate system established with the terminal 900. For example, the acceleration sensor 910 may be used to detect the components of gravitational acceleration along the three coordinate axes. The processor 901 may control the display 905 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 910. The acceleration sensor 910 may also be used to collect motion data for games or of the user.
The gyro sensor 911 may detect the body direction and rotation angle of the terminal 900, and may cooperate with the acceleration sensor 910 to capture the user's 3D actions on the terminal 900. Based on the data collected by the gyro sensor 911, the processor 901 may implement the following functions: motion sensing (e.g., changing the UI according to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 912 may be disposed on a side frame of the terminal 900 and/or on an underlying layer of the display 905. When the pressure sensor 912 is disposed on a side frame of the terminal 900, it may detect the user's grip signal on the terminal 900, and the processor 901 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 912. When the pressure sensor 912 is disposed on the underlying layer of the display 905, the processor 901 controls operability controls on the UI according to the user's pressure operation on the display 905. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 913 is used to collect the ambient light intensity. In one embodiment, the processor 901 may control the display brightness of the display 905 based on the ambient light intensity collected by the optical sensor 913. Specifically, when the ambient light intensity is high, the display brightness of the display 905 is increased; when the ambient light intensity is low, the display brightness of the display 905 is decreased. In another embodiment, the processor 901 may also dynamically adjust the shooting parameters of the camera assembly 906 according to the ambient light intensity collected by the optical sensor 913.
A proximity sensor 914, also referred to as a distance sensor, is typically provided on the front panel of the terminal 900. The proximity sensor 914 is used to collect the distance between the user and the front of the terminal 900. In one embodiment, when the proximity sensor 914 detects that the distance between the user and the front face of the terminal 900 is gradually decreasing, the processor 901 controls the display 905 to switch from the bright screen state to the off screen state; when the proximity sensor 914 detects that the distance between the user and the front surface of the terminal 900 gradually increases, the processor 901 controls the display 905 to switch from the off-screen state to the on-screen state.
Those skilled in the art will appreciate that the structure shown in fig. 9 is not limiting and that more or fewer components than shown may be included or certain components may be combined or a different arrangement of components may be employed.
When the electronic device is provided as a server, fig. 10 is a block diagram illustrating a server 1000 according to an exemplary embodiment. The server 1000 may vary greatly in configuration or performance, and may include one or more processors (Central Processing Units, CPU) 1001 and one or more memories 1002, where the memory 1002 stores at least one program code that is loaded and executed by the processor 1001 to implement the video filtering method for decoding provided in the above-described method embodiments. Of course, the server may also have a wired or wireless network interface, a keyboard, an input/output interface, and the like for input/output, and the server 1000 may also include other components for implementing device functions, which are not described herein.
In an exemplary embodiment, a computer-readable storage medium is also provided, such as the memory 902 or 1002, comprising instructions executable by the processor 901 of the terminal 900 or the processor 1001 of the server 1000 to perform the video filtering method for decoding described above. Alternatively, the computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A computer program product is also provided, comprising computer programs/instructions which, when executed by a processor, implement the video filtering method for decoding described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (15)

1. A video filtering method for decoding, the method comprising:
in decoding any video block within any video frame of a video, dividing the video block into a plurality of video sub-blocks;
for any boundary in any video sub-block, determining a filtering parameter of the boundary based on the coding information of the video block;
and filtering pixels on both sides of the boundaries of the plurality of video sub-blocks based on the filtering parameters of the boundaries of the plurality of video sub-blocks in the video block, to obtain the filtered video block.
2. The method according to claim 1, wherein filtering pixels on both sides of the boundaries of the plurality of video sub-blocks in the video block based on the filtering parameters of the boundaries of the plurality of video sub-blocks to obtain the filtered video block comprises:
filtering a plurality of first boundaries from the boundaries of the plurality of video sub-blocks based on the boundary strength of the boundaries of the plurality of video sub-blocks in the video block to obtain a plurality of second boundaries, wherein the first boundaries are used for representing boundaries without filtering pixels at two sides of the boundaries;
determining a filtering mode of the plurality of second boundaries based on boundary strength and quantization parameters in the filtering parameters of the plurality of second boundaries;
and respectively filtering pixels at two sides of the plurality of second boundaries based on the filtering modes of the plurality of second boundaries to obtain the filtered video block.
3. The video filtering method for decoding according to claim 2, wherein the boundary strength of the boundary includes three cases: 0, 1, and 2, where 0 is used to indicate that the corresponding boundary does not need to be subjected to filtering processing;
the method further comprises the steps of:
the boundary strength of the boundary is stored in the form of 2 bits, wherein 00 is used to represent the boundary strength of the boundary equal to 0, 01 is used to represent the boundary strength of the boundary equal to 1, and 11 is used to represent the boundary strength of the boundary equal to 2.
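As an illustrative sketch only (not from the patent text), the 2-bit storage scheme described in claim 3 can be realized by packing boundary strengths into a bit field; the function names and word layout here are assumptions:

```python
def pack_bs(bs_values):
    """Pack boundary-strength values (0, 1, or 2) into 2 bits each.

    Uses the claim's encoding: 0 -> 0b00, 1 -> 0b01, 2 -> 0b11.
    """
    code = {0: 0b00, 1: 0b01, 2: 0b11}
    word = 0
    for i, bs in enumerate(bs_values):
        word |= code[bs] << (2 * i)
    return word

def unpack_bs(word, count):
    """Recover boundary strengths from the packed word."""
    decode = {0b00: 0, 0b01: 1, 0b11: 2}
    return [decode[(word >> (2 * i)) & 0b11] for i in range(count)]
```

One apparent benefit of mapping strength 2 to 0b11 rather than 0b10 is that every nonzero strength has its low bit set, which simplifies bitwise screening of boundaries that need filtering.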
4. The method of video filtering for decoding according to claim 2, wherein filtering the plurality of first boundaries from the boundaries of the plurality of video sub-blocks based on boundary strengths of the boundaries of the plurality of video sub-blocks in the video block to obtain the plurality of second boundaries comprises:
detecting boundary strength in filtering parameters of boundaries of the video sub-blocks by adopting a counting trailing zero instruction, and filtering out a plurality of first boundaries from the boundaries of the video sub-blocks to obtain a plurality of third boundaries;
and further screening the plurality of third boundaries to obtain the plurality of second boundaries.
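Claim 4 relies on a count-trailing-zeros instruction (e.g., TZCNT on x86) to locate nonzero boundary strengths quickly. The following sketch emulates that instruction in Python and applies it to a word packed at 2 bits per boundary; the helper names are illustrative, not from the patent:

```python
def ctz(x):
    """Count trailing zeros, mimicking a hardware CTZ/TZCNT instruction."""
    if x == 0:
        raise ValueError("ctz undefined for 0")
    return (x & -x).bit_length() - 1

def first_filterable_boundary(packed_bs):
    """Index of the first boundary whose 2-bit strength field is nonzero.

    Returns None when every boundary has strength 0 (nothing to filter),
    so whole runs of zero-strength boundaries are skipped in one check.
    """
    if packed_bs == 0:
        return None
    return ctz(packed_bs) // 2  # two bits per boundary
```

On real hardware the same effect is achieved with a single instruction per word, which is why the claim singles it out for screening out the first boundaries.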
5. The method of video filtering for decoding as recited in claim 4, wherein said further filtering said plurality of third boundaries to obtain said plurality of second boundaries comprises:
traversing a target number of the third boundaries in each pass, wherein the target number is used for representing the maximum number of boundaries that can be processed in parallel;
for any traversal, detecting whether a boundary with zero boundary strength exists in the third boundaries of the target number in the current traversal;
and in the case that a boundary with zero boundary strength exists among the third boundaries of the target number, filtering out the boundaries with zero boundary strength from the third boundaries of the target number by adopting a gradient descent method to obtain the plurality of second boundaries.
6. The method according to claim 5, wherein filtering out boundaries with zero boundary strength from the third boundaries of the target number by using a gradient descent method in the case where boundaries with zero boundary strength exist in the third boundaries of the target number, to obtain the plurality of second boundaries, comprises:
if a boundary with zero boundary strength exists in the third boundaries of the target number, reducing the detection range to half of the target number and detecting again;
and filtering out the boundary with zero boundary strength to obtain the plurality of second boundaries in the case that the boundary with zero boundary strength is found.
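Although claim 6 names a "gradient descent method", what it describes is a range-halving (dichotomy-style) search: check a whole batch at once, and only when a zero-strength boundary is present halve the detection range and re-detect. A minimal sketch under that reading, with illustrative function names:

```python
def find_zero_strength(bs, lo, hi):
    """Locate zero-strength boundaries in bs[lo:hi] by repeatedly halving
    the detection range, per the procedure described in claim 6."""
    if hi - lo <= 0:
        return []
    if all(v != 0 for v in bs[lo:hi]):
        return []          # whole range passes in a single check
    if hi - lo == 1:
        return [lo]        # narrowed down to one zero-strength boundary
    mid = (lo + hi) // 2   # halve the detection range and detect again
    return find_zero_strength(bs, lo, mid) + find_zero_strength(bs, mid, hi)

def screen_boundaries(bs):
    """Return indices of boundaries kept for filtering (strength > 0)."""
    zeros = set(find_zero_strength(bs, 0, len(bs)))
    return [i for i in range(len(bs)) if i not in zeros]
```

When most batches contain no zero-strength boundary, each batch is cleared with one check, so the recursion is only paid for the rare batches that actually contain boundaries to filter out.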
7. The method according to claim 2, wherein determining the filtering mode of the plurality of second boundaries based on the boundary strength and quantization parameter among the filtering parameters of the plurality of second boundaries comprises:
determining the number of boundaries for performing filtering decision once as a target number based on a single instruction stream and multiple data streams, wherein the filtering decision is used for representing a process of determining a filtering mode of the boundary, and the target number is used for representing the maximum number of the boundaries which can be processed in parallel;
and in the filtering decision process, for any one of the second boundaries of the target number, determining a filtering mode of the second boundary based on boundary strength and quantization parameters in filtering parameters of the second boundary.
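Claim 7's single-instruction-multiple-data (SIMD) filtering decision can be pictured as deciding a lane-group of boundaries per instruction. The sketch below uses NumPy vectorization as a stand-in for SIMD intrinsics; the lane width, mode encoding (0 = skip, 1 = weak, 2 = strong), and QP cutoff are all illustrative assumptions, not values from the patent:

```python
import numpy as np

LANES = 8  # hypothetical SIMD width: boundaries decided per "instruction"

def filter_decisions(strengths, qps):
    """Decide a filter mode for every boundary, LANES at a time."""
    strengths = np.asarray(strengths)
    qps = np.asarray(qps)
    modes = np.zeros(len(strengths), dtype=np.int32)
    for start in range(0, len(strengths), LANES):
        s = strengths[start:start + LANES]
        q = qps[start:start + LANES]
        # one vector operation decides a whole lane-group of boundaries
        weak = (s == 1) & (q >= 20)
        strong = (s == 2) & (q >= 20)
        modes[start:start + LANES] = np.where(strong, 2, np.where(weak, 1, 0))
    return modes
```

The target number in the claim corresponds to `LANES` here: it is fixed by how many boundaries the hardware can evaluate in parallel, and the decoder batches its filtering decisions accordingly.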
8. The method according to claim 7, wherein for any one of the second boundaries of the target number, determining a filtering manner of the second boundary based on a boundary strength and a quantization parameter among filtering parameters of the second boundary comprises:
for any one of the second boundaries of the target number, determining a filtering threshold based on quantization parameters of the second boundary;
determining that the second boundary needs to be filtered under the condition that the texture degree of the second boundary is smaller than the filtering threshold value, wherein the texture degree is used for representing the pixel change rate of two sides of the corresponding boundary;
determining a filtering intensity threshold based on the boundary intensity in the filtering parameters of the second boundary;
and determining a filtering mode of the second boundary based on the filtering intensity threshold.
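The QP-to-threshold mapping and texture check of claim 8 can be sketched as follows. The linear threshold function is a placeholder (real codecs derive such thresholds from standardized lookup tables, e.g., HEVC's beta/tC tables), and the second-difference texture measure is one plausible reading of the claim's "pixel change rate on both sides of the boundary":

```python
def filtering_threshold(qp):
    """Map a quantization parameter to a filtering threshold.

    Placeholder mapping: higher QP means coarser quantization, hence a
    larger threshold and more aggressive filtering.
    """
    return max(0, 2 * qp - 20)

def texture_degree(p, q):
    """Texture degree as second differences of three pixels on each side
    (p and q) of the boundary: small values indicate smooth content."""
    return abs(p[0] - 2 * p[1] + p[2]) + abs(q[0] - 2 * q[1] + q[2])

def should_filter(p, q, qp):
    """Filter only when the boundary area is smooth relative to the
    QP-derived threshold, so genuine texture is not blurred away."""
    return texture_degree(p, q) < filtering_threshold(qp)
```

This captures the claim's decision rule: a strongly textured boundary fails the comparison and is left unfiltered, since its variation likely reflects image content rather than a blocking artifact.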
9. The method of video filtering for decoding as recited in claim 8, wherein said determining a filtering manner of the second boundary based on the filtering strength threshold comprises:
determining a filtering range of the second boundary based on the maximum filtering length of the second boundary, wherein the filtering range is used for representing the quantity of pixels adopted when pixels on two sides of the second boundary are filtered;
in the case that the filtering range of the second boundary indicates that short tap filtering is required for pixels on two sides of the second boundary, determining the filtering strength of the second boundary based on the filtering strength threshold, wherein the filtering strength is used for representing the correction range of the filtered pixels compared with the pixels before filtering.
10. The video filtering method for decoding according to claim 1, wherein, for any boundary in any one of the video sub-blocks, after determining filtering parameters of the boundary based on coding information of the video block, the method further comprises:
in the case that the boundary is located on a common side of transform units in the video block, determining filtering parameters of other boundaries located on the common side based on the filtering parameters of the boundary, wherein the transform units are determined based on the coding information, and the common side refers to a side shared by two transform units.
11. The method of video filtering for decoding as recited in claim 10, further comprising:
based on single instruction flow and multiple data flows, filtering decisions of a plurality of boundaries positioned on a common side of the transformation unit are processed in parallel to obtain filtering modes of the plurality of boundaries, and the filtering decisions are used for representing a process of determining the filtering modes of the boundaries;
and filtering the plurality of boundaries in parallel based on the single instruction stream multiple data streams and the filtering mode of the plurality of boundaries.
12. A video filtering apparatus for decoding, the apparatus comprising:
a dividing unit configured to perform division of a video block into a plurality of video sub-blocks in decoding any video block within any video frame of a video;
a first determining unit configured to perform determination of a filtering parameter of any boundary in any one of the video sub-blocks based on encoding information of the video block;
and a filtering unit configured to perform filtering of pixels on both sides of the boundaries of a plurality of video sub-blocks in the video block based on the filtering parameters of the boundaries of the plurality of video sub-blocks, to obtain the filtered video block.
13. An electronic device, the electronic device comprising:
one or more processors;
a memory for storing the processor-executable program code;
wherein the processor is configured to execute the program code to implement the video filtering method for decoding according to any of claims 1 to 11.
14. A computer readable storage medium, characterized in that instructions in the computer readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the video filtering method for decoding according to any one of claims 1 to 11.
15. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the video filtering method for decoding according to any of claims 1 to 11.
CN202310845545.1A 2023-07-11 2023-07-11 Video filtering method and device for decoding, electronic equipment and storage medium Pending CN116980627A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310845545.1A CN116980627A (en) 2023-07-11 2023-07-11 Video filtering method and device for decoding, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310845545.1A CN116980627A (en) 2023-07-11 2023-07-11 Video filtering method and device for decoding, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116980627A true CN116980627A (en) 2023-10-31

Family

ID=88474205

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310845545.1A Pending CN116980627A (en) 2023-07-11 2023-07-11 Video filtering method and device for decoding, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116980627A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination