CN111279706B - Loop filtering method, device, computer system and mobile equipment - Google Patents

Loop filtering method, device, computer system and mobile equipment

Info

Publication number
CN111279706B
Authority
CN
China
Prior art keywords
pixel
types
filter
code stream
filters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201980005266.6A
Other languages
Chinese (zh)
Other versions
CN111279706A (en)
Inventor
孟学苇
郑萧桢
贾川民
王苫社
马思伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
SZ DJI Technology Co Ltd
Original Assignee
Peking University
SZ DJI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University and SZ DJI Technology Co Ltd
Publication of CN111279706A
Application granted
Publication of CN111279706B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/44: Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N19/80: Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82: Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A loop filtering method, apparatus, computer system and mobile device are disclosed. The method includes: determining the number of filter types adopted for loop filtering; and determining, according to the number of filter types, whether to write the filter labels corresponding to the pixel categories into the code stream. This technical solution can improve compression efficiency.

Description

Loop filtering method, device, computer system and mobile equipment
Copyright declaration
The disclosure of this patent document contains material which is subject to copyright protection. The copyright belongs to the copyright owner. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the official records of the patent and trademark office.
Technical Field
The present application relates to the field of video coding and decoding, and more particularly, to a loop filtering method, apparatus, computer system, and mobile device.
Background
Loop filtering is a key part of the video codec framework. It is mainly used to reduce compression distortions, such as blocking and ringing artifacts, generated during encoding. Currently there are three main loop filtering techniques: deblocking filtering, sample adaptive offset (SAO) compensation filtering, and Adaptive Loop Filtering (ALF).
Deblocking filtering and sample adaptive offset compensation filtering follow the methods in High Efficiency Video Coding (HEVC). Deblocking filtering is applied to the boundaries of prediction units and transform units, nonlinearly weighting the boundary pixels with a trained low-pass filter and thereby reducing blocking artifacts. Sample adaptive offset compensation filtering classifies the pixels in an image block and then adds the same offset value to each class of pixels, so that the reconstructed image is closer to the original image and the ringing artifact is suppressed.
Adaptive loop filtering uses a Wiener filter and is mainly intended to minimize the mean square error between the original image and the reconstructed image. However, in the prior art, redundancy exists when the ALF coefficients and parameters are written into the code stream, which affects compression efficiency.
Disclosure of Invention
The embodiment of the application provides a loop filtering method, a loop filtering device, a computer system and a mobile device, which can improve compression efficiency.
In a first aspect, a method for loop filtering is provided, including: determining the number of types of filters adopted by loop filtering; and determining whether to write the filter labels corresponding to the pixel categories into the code stream according to the number of the types of the filters.
In a second aspect, a method of loop filtering is provided, including: determining the number of types of filters adopted by loop filtering; and determining whether to analyze the filter labels corresponding to the pixel categories in the code stream according to the number of the types of the filters.
In a third aspect, a method for loop filtering is provided, including: performing loop filtering on the pixels of the pixel types by adopting filters corresponding to the pixel types; and generating a coded code stream, wherein if the number of the types of the filters adopted by loop filtering is equal to the number of the pixel types, the code stream does not include the filter labels corresponding to the pixel types.
In a fourth aspect, a method of loop filtering is provided, including: acquiring a coded code stream; analyzing the code stream to obtain the number of the types of filters adopted by loop filtering; and if the number of the types of the filters is equal to the number of the pixel types, performing loop filtering on the pixels of the pixel types by adopting the filters with the labels which are the same as or have the default corresponding relation with the labels of the pixel types.
In a fifth aspect, an apparatus for loop filtering is provided, including: the determining module is used for determining the number of the types of the filters adopted by the loop filtering; and the processing module is used for determining whether to write the filter labels corresponding to the pixel categories into the code stream according to the number of the types of the filters.
In a sixth aspect, an apparatus for loop filtering is provided, which includes: a determining module for determining the number of the types of the filters adopted by the loop filtering; and the processing module is used for determining whether to analyze the filter labels corresponding to the pixel categories in the code stream according to the number of the types of the filters.
In a seventh aspect, an apparatus for loop filtering is provided, including: the filtering module is used for performing loop filtering on the pixels of the pixel types by adopting the filters corresponding to the pixel types; and the processing module is used for generating a coded code stream, wherein if the number of the types of the filters adopted by the loop filtering is equal to the number of the pixel types, the code stream does not include the filter labels corresponding to the pixel types.
In an eighth aspect, an apparatus for loop filtering is provided, which includes: the acquisition module is used for acquiring the coded code stream; the processing module is used for analyzing the code stream to obtain the number of the types of the filters adopted by the loop filtering; and if the number of the types of the filters is equal to the number of the pixel types, performing loop filtering on the pixels of the pixel types by adopting the filters with the labels which are the same as or have the default corresponding relation with the labels of the pixel types.
In a ninth aspect, there is provided a computer system comprising: a memory for storing computer executable instructions; a processor for accessing the memory and executing the computer-executable instructions to perform the operations in the methods of the aspects described above.
In a tenth aspect, there is provided a mobile device comprising: means for loop filtering in accordance with the above aspects; alternatively, the computer system of the ninth aspect.
In an eleventh aspect, a computer storage medium is provided, in which program code is stored, the program code being operable to instruct execution of the methods of the above aspects.
According to the technical scheme of the embodiment of the application, whether the filter labels corresponding to the pixel categories are written into the code stream or not is determined according to the number of the types of the filters, so that redundancy caused by uniformly writing the filter labels corresponding to the pixel categories into the code stream can be avoided, coding bits are saved, and compression efficiency can be improved.
Drawings
Fig. 1 is an architecture diagram of a solution to which an embodiment of the present application is applied.
Fig. 2 is a schematic diagram of data to be encoded according to an embodiment of the present application.
Fig. 3 is a schematic diagram of an encoding framework according to an embodiment of the present application.
Fig. 4 is a schematic diagram of adaptive loop filtering according to an embodiment of the present application.
Fig. 5 is a schematic diagram of filter coefficients according to an embodiment of the present application.
FIG. 6 is a schematic flow chart diagram of a method of loop filtering according to one embodiment of the present application.
Fig. 7 is a schematic flow chart diagram of a method of loop filtering according to another embodiment of the present application.
Fig. 8 is a schematic flow chart diagram of a method of loop filtering of yet another embodiment of the present application.
Fig. 9 is a schematic flow chart diagram of a method of loop filtering of yet another embodiment of the present application.
FIG. 10 is a schematic block diagram of an apparatus for loop filtering according to an embodiment of the present application.
Fig. 11 is a schematic block diagram of an apparatus for loop filtering according to another embodiment of the present application.
Fig. 12 is a schematic block diagram of an apparatus for loop filtering according to another embodiment of the present application.
Fig. 13 is a schematic block diagram of an apparatus for loop filtering according to another embodiment of the present application.
FIG. 14 is a schematic block diagram of a computer system of an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
It should be understood that the specific examples are provided herein only to assist those skilled in the art in better understanding the embodiments of the present application and are not intended to limit the scope of the embodiments of the present application.
It should also be understood that the formula in the embodiment of the present application is only an example, and is not intended to limit the scope of the embodiment of the present application, and the formula may be modified, and the modifications should also fall within the scope of the protection of the present application.
It should also be understood that, in the various embodiments of the present application, the sequence numbers of the processes do not mean the execution sequence, and the execution sequence of the processes should be determined by the functions and the inherent logic of the processes, and should not constitute any limitation to the implementation process of the embodiments of the present application.
It should also be understood that the various embodiments described in this specification can be implemented individually or in combination, and the examples in this application are not limited thereto.
Unless otherwise defined, all technical and scientific terms used in the examples of this application have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Fig. 1 is an architecture diagram of a solution to which an embodiment of the present application is applied.
As shown in FIG. 1, the system 100 can receive the data 102 to be processed, process the data 102 to be processed, and generate processed data 108. For example, the system 100 may receive data to be encoded and encode it to produce encoded data, or the system 100 may receive data to be decoded and decode it to produce decoded data. In some embodiments, the components in system 100 may be implemented by one or more processors, which may be processors in a computing device or in a mobile device (e.g., a drone). The processor may be any kind of processor, which is not limited in this application. In some possible designs, the processor may include an encoder, a decoder, a codec, or the like. One or more memories may also be included in the system 100. The memory may be used to store instructions and data, such as computer-executable instructions for implementing the technical solutions of the embodiments of the present application, the data 102 to be processed, the processed data 108, and so on. The memory may be any kind of memory, which is not limited in the embodiments of the present application.
The data to be encoded may include text, images, graphical objects, animation sequences, audio, video, or any other data that needs to be encoded. In some cases, the data to be encoded may include sensory data from sensors, which may be visual sensors (e.g., cameras, infrared sensors), microphones, near-field sensors (e.g., ultrasonic sensors, radar), position sensors, temperature sensors, touch sensors, and the like. In some cases, the data to be encoded may include information from the user, e.g., biometric information, which may include facial features, fingerprint scans, retinal scans, voice recordings, DNA samples, and the like.
Fig. 2 shows a schematic diagram of data to be encoded according to an embodiment of the present application.
As shown in fig. 2, the data to be encoded 202 may include a plurality of frames 204. For example, the plurality of frames 204 may represent successive image frames in a video stream. Each frame 204 may include one or more slices or tiles 206. Each slice or tile 206 may comprise one or more image blocks 208. For example, an image block 208 may be a coding unit 208. Each image block 208 may include one or more pixels 212. Each pixel 212 may include one or more data sets corresponding to one or more data portions, e.g., a luminance data portion and a chrominance data portion. The data unit on which the image processing operates may be a frame, slice, tile, coding unit, macroblock, block, pixel, or a group of any of the above. The size of the data units may vary in different embodiments.
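This hierarchy can be illustrated, purely as an example, with a few container types; the struct and field names below are hypothetical and do not come from any codec specification or from the embodiments themselves.

    #include <cstdint>
    #include <vector>

    // Hypothetical container types mirroring the frame / slice / block / pixel
    // hierarchy described above (not taken from any standard or codec API).
    struct Pixel {
        uint8_t luma;        // Y (luminance) data portion
        uint8_t chroma_cb;   // chrominance data portions
        uint8_t chroma_cr;
    };

    struct ImageBlock {                  // e.g. a coding unit
        std::vector<Pixel> pixels;
    };

    struct SliceOrTile {
        std::vector<ImageBlock> blocks;
    };

    struct Frame {
        std::vector<SliceOrTile> slices;
    };

    struct DataToEncode {                // e.g. a video sequence
        std::vector<Frame> frames;
    };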
Encoding is necessary for efficient and/or secure transmission or storage of data. Encoding of data to be encoded may include data compression, encryption, error correction coding, format conversion, and the like. For example, compression of multimedia data (e.g., video or audio) may reduce the number of bits transmitted in a network. Sensitive data, such as financial information and personal identification information, may be encrypted prior to transmission and storage to protect confidentiality and/or privacy. In order to reduce the bandwidth occupied by video storage and transmission, video data needs to be subjected to encoding compression processing.
Any suitable encoding technique may be used to encode the data to be encoded. The type of encoding depends on the data being encoded and the specific encoding requirements.
In some embodiments, the encoder may implement one or more different codecs. Each codec may include code, instructions or computer programs implementing a different coding algorithm. An appropriate encoding algorithm may be selected to encode a given piece of data to be encoded based on a variety of factors, including the type and/or source of the data to be encoded, the receiving entity of the encoded data, available computing resources, network environment, business environment, rules and standards, and the like.
For example, the encoder may be configured to encode a series of video frames. A series of steps may be taken to encode the data in each frame. In some embodiments, the encoding step may include prediction, transform, quantization, entropy encoding, and like processing steps.
Prediction includes two types, intra prediction and inter prediction, and aims to remove the redundant information of the current image block to be coded by using prediction block information. Intra prediction obtains the prediction block data using information of the current frame image. Inter prediction obtains the prediction block data using information of a reference frame; the process comprises dividing the image to be coded into several image blocks, then, for each image block, searching the reference image for the image block that best matches the current image block as the prediction block, and then subtracting the corresponding pixel values of the image block and the prediction block to obtain the residual.
A transform matrix is used to transform the residual block of the image so as to remove the correlation within the residual, i.e., to remove redundant information of the image block and improve coding efficiency. The data block is usually transformed with a two-dimensional transform: at the encoding end, the residual information of the data block is multiplied by an NxM transform matrix and by its transpose to obtain the transform coefficients. The transform coefficients are quantized to obtain quantized coefficients, and entropy coding is then performed on the quantized coefficients; finally, the bit stream obtained by entropy coding, together with the coding mode information (such as the intra prediction mode and motion vector information), is stored or sent to the decoding end. At the decoding end of the image, the entropy-coded bit stream is first entropy decoded to obtain the corresponding residual; the prediction image block corresponding to the image block is obtained according to the decoded motion vector, intra prediction and other information; and the value of each pixel in the current sub-image block is obtained from the prediction image block and the residual of the image block.
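As a rough, self-contained illustration of the transform and quantization steps just described, the sketch below applies a toy 2x2 orthonormal transform to a made-up residual block and quantizes the result; the transform matrix and quantization step are chosen for the example only and are not those of any particular standard.

    #include <array>
    #include <cmath>
    #include <cstdio>

    using Mat2 = std::array<std::array<double, 2>, 2>;

    // C = A * X * A^T : forward 2D transform of a residual block X
    static Mat2 transform2d(const Mat2& A, const Mat2& X) {
        Mat2 tmp{}, C{};
        for (int i = 0; i < 2; ++i)
            for (int j = 0; j < 2; ++j)
                for (int k = 0; k < 2; ++k)
                    tmp[i][j] += A[i][k] * X[k][j];
        for (int i = 0; i < 2; ++i)
            for (int j = 0; j < 2; ++j)
                for (int k = 0; k < 2; ++k)
                    C[i][j] += tmp[i][k] * A[j][k];   // multiply by A^T
        return C;
    }

    int main() {
        // Toy 2-point orthonormal (Haar-like) transform matrix, illustration only.
        const double s = 1.0 / std::sqrt(2.0);
        Mat2 A = {{{s, s}, {s, -s}}};
        Mat2 residual = {{{4, 2}, {1, -1}}};   // made-up residual block
        Mat2 coeff = transform2d(A, residual);

        const double qstep = 2.0;              // made-up quantization step
        for (auto& row : coeff)
            for (auto& c : row)
                c = std::round(c / qstep);     // quantized coefficients go to entropy coding
        std::printf("quantized coefficients: %g %g %g %g\n",
                    coeff[0][0], coeff[0][1], coeff[1][0], coeff[1][1]);
        return 0;
    }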
Fig. 3 shows a schematic diagram of an encoding framework of an embodiment of the present application.
As shown in fig. 3, when inter prediction is used, the encoding process may be as follows:
in 301, a current frame image is acquired. In 302, a reference frame image is acquired. In 303a, Motion estimation is performed using the reference frame image to obtain Motion Vectors (MVs) of the image blocks of the current frame image. In 304a, motion compensation is performed using the motion vector obtained by motion estimation to obtain an estimated/predicted value of the current image block. At 305, the estimated/predicted value of the current image block is subtracted from the current image block to obtain a residual. In 306, the residual is transformed to obtain transform coefficients. In 307, the transform coefficients are quantized to obtain quantized coefficients. In 308, entropy coding is performed on the quantized coefficients, and finally, the bit stream obtained by entropy coding and the coded coding mode information are stored or transmitted to a decoding end. In 309, the result of the quantization is inverse quantized. At 310, the inverse quantization result is inverse transformed. At 311, a reconstructed pixel is obtained using the inverse transform result and the motion compensation result. At 312, the reconstructed pixels are filtered (i.e., loop filtered). In 313, the filtered reconstructed pixels are output. Subsequently, the reconstructed image can be used as a reference frame image of other frame images for inter-frame prediction.
When intra prediction is used, the flow of encoding can be as follows:
In 301, a current frame image is acquired. In 303b, intra prediction selection is performed on the current frame image. In 304b, the current image block in the current frame is intra predicted. At 305, the estimated value of the current image block is subtracted from the current image block to obtain a residual. In 306, the residuals of the image blocks are transformed to obtain transform coefficients. In 307, the transform coefficients are quantized to obtain quantized coefficients. At 308, entropy coding is performed on the quantized coefficients, and finally, the bit stream obtained by entropy coding and the coded coding mode information are stored or transmitted to a decoding end. In 309, the quantization result is inverse quantized. At 310, the inverse quantization result is inverse transformed, and at 311, a reconstructed pixel is obtained using the inverse transformation result and the intra prediction result. The reconstructed image block may be used for intra prediction of the next image block.
The decoding end performs the operations corresponding to the encoding end. First, residual information is obtained by entropy decoding, inverse quantization and inverse transform, and whether the current image block uses intra prediction or inter prediction is determined from the decoded code stream. If intra prediction is used, prediction information is constructed from the reconstructed image blocks in the current frame according to the intra prediction method. If inter prediction is used, the motion information is parsed, and a reference block is determined in the reconstructed image using the parsed motion information to obtain the prediction information. The prediction information is then superimposed on the residual information, and the reconstruction information is obtained through the filtering operation.
The technical scheme of the embodiment of the application can be applied to the filtering process of encoding or decoding. The technical scheme of the embodiment of the application mainly relates to a filtering step, namely, the compression efficiency is improved through improvement in the filtering step, and other steps can refer to related steps in the encoding process.
The technical scheme of the embodiment of the application can be applied to adaptive loop filtering in encoding or decoding.
The adaptive loop filter is the optimal filter in the mean-square sense computed from the original signal and the coded, distorted signal, and is essentially a Wiener filter.
As shown in fig. 4, X is the original signal, e is the noise or distortion, Y is the distorted signal, and X' is the filtered signal. The filter satisfies

    min E[ (X - X')^2 ],

that is, it minimizes the mean square error between the original signal X and the filtered signal X'.
in the adaptive loop filtering, for each pixel point, the result after the current point filtering is obtained by using the weighted average of the surrounding pixel points. The positions of the used adjacent pixel points are shown in fig. 5, wherein the point corresponding to C12 is the current point to be filtered, the filtering process is obtained by using the weighted average of all the position points in fig. 5, the filtering coefficient is the weight of each point, and there are 13 filtering coefficients C0-C13 in total. The final filtering process is the summation of the products of the points shown in fig. 5 and their corresponding filter coefficients. The points used in this process are all reconstructed points obtained before the adaptive loop filtering.
Considering complexity, it is impossible to use a separate set of filter coefficients for every pixel, so the pixels need to be classified. Pixels of the same class use the same set of filter coefficients (the same filter). Pixels can be classified in many ways. For example, only the Y component of a pixel may be classified, while the UV components are not; the Y component may be divided into 25 classes, while the UV components each have only one class. This means that for one frame of image there can be up to 25 sets of filters for the Y component and only one set for the UV components.
It should be understood that in the embodiment of the present application, the pixel category may be a category corresponding to the Y component, but the embodiment of the present application is not limited thereto, and the pixel category may also be a category corresponding to other components or all components.
For example, each 4x4 block may be classified according to its Laplacian direction and activity:

    C = 5D + Â,

where C is the class to which the pixel block belongs, D is the direction (Direction) classification, and Â is a fine classification result obtained after the direction classification. There are many ways to obtain Â; here it only represents the fine classification.
The direction D is calculated as follows. For each pixel in a window covering the current 4x4 block, the Laplacian gradients in the vertical, horizontal, 135-degree and 45-degree directions are

    V(k,l) = |2R(k,l) - R(k,l-1) - R(k,l+1)|,
    H(k,l) = |2R(k,l) - R(k-1,l) - R(k+1,l)|,
    D1(k,l) = |2R(k,l) - R(k-1,l-1) - R(k+1,l+1)|,
    D2(k,l) = |2R(k,l) - R(k-1,l+1) - R(k+1,l-1)|,

and the gradients of the whole 4x4 block are obtained by summing them over the window:

    g_v = Σ(k=i-2..i+3) Σ(l=j-2..j+3) V(k,l),
    g_h = Σ(k=i-2..i+3) Σ(l=j-2..j+3) H(k,l),
    g_d1 = Σ(k=i-2..i+3) Σ(l=j-2..j+3) D1(k,l),
    g_d2 = Σ(k=i-2..i+3) Σ(l=j-2..j+3) D2(k,l).

Here (i, j) is the coordinate of the top-left pixel of the current 4x4 block in the whole image, and R(k, l) is the reconstructed pixel value at coordinate (k, l). V(k,l), H(k,l), D1(k,l) and D2(k,l) are the Laplacian gradients of the pixel at (k, l) in the vertical, horizontal, 135-degree and 45-degree directions, respectively; g_v, g_h, g_d1 and g_d2 are the Laplacian gradients of the 4x4 block in the vertical, horizontal, 135-degree and 45-degree directions.
The maximum and minimum of the horizontal and vertical gradients, and of the two diagonal gradients, are then set as

    g_hv_max = max(g_h, g_v),    g_hv_min = min(g_h, g_v),
    g_d_max = max(g_d1, g_d2),   g_d_min = min(g_d1, g_d2),

where g_hv_max and g_hv_min are the maximum and minimum of the Laplacian gradient values in the horizontal and vertical directions, and g_d_max and g_d_min are the maximum and minimum of the Laplacian gradient values in the 45-degree and 135-degree directions. R_h,v = g_hv_max / g_hv_min is the ratio of the Laplacian gradients in the horizontal and vertical directions, and R_d0,d1 = g_d_max / g_d_min is the ratio of the Laplacian gradients in the 45-degree and 135-degree directions.
The direction D is then derived as follows, where t1 and t2 are preset thresholds:

    If g_hv_max <= t1 * g_hv_min and g_d_max <= t1 * g_d_min, D is set to 0. Otherwise:
    If R_h,v > R_d0,d1 and g_hv_max <= t2 * g_hv_min, D is set to 1.
    If R_h,v > R_d0,d1 and g_hv_max > t2 * g_hv_min, D is set to 2.
    If R_h,v <= R_d0,d1 and g_d_max <= t2 * g_d_min, D is set to 3.
    If R_h,v <= R_d0,d1 and g_d_max > t2 * g_d_min, D is set to 4.
Â is obtained from the activity A, which is calculated as

    A = Σ(k=i-2..i+3) Σ(l=j-2..j+3) ( V(k,l) + H(k,l) ),

and A is then quantized to an integer between 0 and 4 to obtain Â.
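Putting the classification steps above together, one possible sketch of classifying a single 4x4 block is shown below. The thresholds t1 and t2 and the quantization of the activity A to Â are placeholders chosen for the example, and boundary handling at the edges of the picture is omitted.

    #include <algorithm>
    #include <cstdlib>
    #include <vector>

    // Classify the 4x4 block whose top-left pixel is (i, j) in the reconstructed
    // luma plane rec; t1 and t2 are preset thresholds. Returns C = 5*D + Ahat.
    // Picture-boundary handling is omitted in this sketch.
    int classifyBlock(const std::vector<std::vector<int>>& rec, int i, int j,
                      double t1, double t2)
    {
        long long gv = 0, gh = 0, gd1 = 0, gd2 = 0, act = 0;
        for (int k = i - 2; k <= i + 3; ++k) {
            for (int l = j - 2; l <= j + 3; ++l) {
                int v  = std::abs(2 * rec[k][l] - rec[k][l - 1] - rec[k][l + 1]);
                int h  = std::abs(2 * rec[k][l] - rec[k - 1][l] - rec[k + 1][l]);
                int d1 = std::abs(2 * rec[k][l] - rec[k - 1][l - 1] - rec[k + 1][l + 1]);
                int d2 = std::abs(2 * rec[k][l] - rec[k - 1][l + 1] - rec[k + 1][l - 1]);
                gv += v; gh += h; gd1 += d1; gd2 += d2;
                act += v + h;                          // activity A
            }
        }
        long long hvMax = std::max(gh, gv),  hvMin = std::min(gh, gv);
        long long dMax  = std::max(gd1, gd2), dMin = std::min(gd1, gd2);

        int D;
        if (hvMax <= t1 * hvMin && dMax <= t1 * dMin)
            D = 0;
        else if (hvMax * dMin > dMax * hvMin)          // R_h,v > R_d0,d1 (cross-multiplied)
            D = (hvMax > t2 * hvMin) ? 2 : 1;
        else
            D = (dMax > t2 * dMin) ? 4 : 3;

        // Quantize the activity A to an integer Ahat in [0, 4]; this linear mapping
        // with a made-up scale is only a placeholder.
        int ahat = static_cast<int>(std::min<long long>(4, act / 512));
        return 5 * D + ahat;                           // class index C
    }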
In order to further improve compression performance and reduce the number of bits required for encoding the coefficients, the ALF may employ coefficient merging techniques, for example merging the coefficients of different pixel classes, setting filter coefficients to zero, and coefficient differential encoding.
Merging the coefficients of different pixel classes means that different classes use the same filter. If several classes share the same filter, fewer filter coefficients need to be transmitted in the code stream.
Setting filter coefficients to zero: some classes of pixels may be better off without ALF filtering at all, i.e., the bits saved by not filtering outweigh the quality gain brought by the ALF filter. In this case the filter coefficients of those classes can be set directly to zero.
Coefficient differential encoding: the filter coefficients can be coded in one of two modes. The first mode writes the filter coefficients directly into the code stream. The second mode writes the first set of filter coefficients into the code stream, then writes the difference between the second set and the first set, then the difference between the third set and the second set, and so on; this is the coefficient differential encoding mode. The encoding end can select which mode to use.
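A minimal sketch of the differential mode is shown below: the first set of coefficients is kept as-is and each following set is represented by its difference from the previous set. The BitstreamSketch type and its writeValue call are invented for the example and only stand in for real entropy coding.

    #include <vector>

    // Hypothetical bitstream writer: collects signed values that would be
    // entropy-coded in a real encoder.
    struct BitstreamSketch {
        std::vector<int> symbols;
        void writeValue(int v) { symbols.push_back(v); }
    };

    // Differential coding of several sets of filter coefficients:
    // set 0 is written directly, set f (f > 0) is written as (set f - set f-1).
    void writeCoeffsDifferentially(const std::vector<std::vector<int>>& filters,
                                   BitstreamSketch& bs)
    {
        for (size_t f = 0; f < filters.size(); ++f) {
            for (size_t c = 0; c < filters[f].size(); ++c) {
                int value = filters[f][c];
                if (f > 0) value -= filters[f - 1][c];   // difference from previous set
                bs.writeValue(value);
            }
        }
    }

In practice the encoding end would compare the cost of this mode with the cost of writing the coefficients directly, and signal the selected mode in the code stream.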
All of the above coefficient merging modes require decisions at the encoding end, for example deciding which classes' coefficients are merged, which of the filters take part in coefficient merging, whether coefficients are set to zero, and whether the coefficients are coded differentially or directly, according to which choice performs better.
In case of using filter coefficient combining, the number of types of filters used for loop filtering may or may not be equal to the number of pixel classes, i.e. multiple pixel classes may use the same filter. Thus, which filter to use per pixel class needs to be identified in the code stream. However, the current identification method has redundancy, and the compression efficiency is influenced. In view of this, the embodiments of the present application provide an improved technical solution.
The technical scheme of the embodiment of the application can be applied to an encoding end and a decoding end. The technical solutions of the embodiments of the present application are described below from the encoding side and the decoding side, respectively.
Fig. 6 shows a schematic flow diagram of a method 600 of loop filtering of one embodiment of the present application. The method 600 may be performed by an encoding side. For example, it may be performed by the system 100 shown in FIG. 1 when performing encoding operations.
The number of types of filters employed for loop filtering is determined 610.
As previously mentioned, in adaptive loop filtering a variety of filters may be employed for each frame of image, each slice or each tile. In the case of filter coefficient merging, the number of filter types used for loop filtering may or may not be equal to the number of pixel classes. The encoding end writes the number of filter types actually adopted into the code stream. Specifically, the code stream includes a syntax element for the number of filter types, and after determining the number of filter types the encoding end writes it into the code stream.
And 620, determining whether to write the filter labels corresponding to the pixel categories into the code stream according to the number of the types of the filters.
Since the number of filter types used for loop filtering may or may not be equal to the number of pixel types, whether to write the filter labels corresponding to the pixel types into the code stream can be determined according to the number of filter types. For example, when the decoding end is able to determine the filter label corresponding to a pixel type by itself, that filter label need not be written into the code stream. In this way, the redundancy caused by always writing the filter labels corresponding to the pixel types into the code stream is avoided, coding bits are saved, and compression efficiency is improved.
Specifically, after determining the initial filter corresponding to each pixel type, the encoding end may merge the initial filters corresponding to different pixel types to obtain the filter corresponding to each pixel type. Then the pixels of each pixel type are loop filtered with the corresponding filter, and the number of filter types adopted is written into the code stream, so that the decoding end knows how many filter types the encoding end has adopted.
Optionally, the encoding end may subtract one from the number of filter types before writing it into the code stream. Because the number of filter types is always greater than or equal to 1, the encoding end uniformly subtracts one when writing the code stream, and the decoding end uniformly adds one after parsing the code stream. This saves coding bits: for example, if the number of filter types can be any value from 1 to 25, subtracting one maps it to the range 0 to 24, and coding a value in the range 0 to 24 generally requires fewer bits than coding a value in the range 1 to 25.
For example, taking a frame of image as an example, the sequence and steps when the encoding parameters of the ALF are written into the code stream may be as follows:
and writing the frame level switch identification of the Y component into the code stream to represent whether the current frame uses ALF or not. If the flag is 0, it represents that the current frame does not use ALF. If 1, the ALF is used for the current frame, and the following steps are continued.
And writing the switch of the UV component into the code stream.
And writing the filter number-1 of the Y component of the current frame into a code stream.
Then, whether the filter label corresponding to the pixel category is written into the code stream is determined.
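The writing order described above can be summarized by the following encoder-side sketch. The Writer type, its writeFlag/writeUInt primitives and the parameter names are invented for illustration and are not the syntax of any particular standard.

    #include <vector>

    // Hypothetical bitstream writer; writeFlag/writeUInt stand in for entropy coding.
    struct Writer {
        std::vector<unsigned> syms;
        void writeFlag(bool b)     { syms.push_back(b ? 1u : 0u); }
        void writeUInt(unsigned v) { syms.push_back(v); }
    };

    // Writing order for the frame-level ALF parameters described above.
    void writeAlfFrameHeader(Writer& w, bool alfOnY, bool alfOnUV, unsigned numFilters)
    {
        w.writeFlag(alfOnY);            // frame-level switch for the Y component
        if (!alfOnY) return;            // 0: current frame does not use ALF
        w.writeFlag(alfOnUV);           // switch for the UV components
        w.writeUInt(numFilters - 1);    // number of filter types, minus one
        // whether per-class filter labels follow is decided next
        // (see the encoder-side decision sketch further below)
    }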
Specifically, in addition to the number of types of filters, the decoding end needs to know the filter corresponding to each pixel class, so as to perform loop filtering on the pixels of each pixel class. Accordingly, the encoding end may need to write the filter label corresponding to the pixel category into the code stream. In the embodiment of the application, the encoding end can determine whether to write the filter label corresponding to the pixel type into the code stream according to the number of the types of the filters.
Optionally, in an embodiment of the present application, if the number of the types of the filters is equal to the number of the pixel types, the filter labels corresponding to the pixel types are not written into the code stream.
In case the number of kinds of the filter is equal to the number of pixel classes, the decoding end may be able to determine the filter label corresponding to the pixel class. For example, if the filter label corresponding to the pixel type is the same as or in a default correspondence with the label of the pixel type, the decoding end can determine the filter label corresponding to the pixel type without analyzing the filter label corresponding to the pixel type in the code stream. In this case, the encoding end does not write the filter label corresponding to the pixel type into the code stream.
For example, taking the number of pixel types as 25 as an example, if the number of filter types is also 25, that is, if there is one filter for each pixel type, the decoding end can directly identify the filter number corresponding to the pixel type when the two numbers are the same or have a default correspondence relationship. Therefore, the encoding end does not need to write the filter label corresponding to the pixel type into the code stream.
Optionally, in the above situation, the encoding end may further write first indication information into the code stream, where the first indication information is used to indicate that the code stream does not include the filter label corresponding to the pixel type. Correspondingly, when the decoding end analyzes the first indication information, the filter label corresponding to the pixel category in the code stream is not analyzed.
Optionally, in an embodiment of the present application, if the number of the types of the filters is equal to the number of the pixel types, and the filter labels corresponding to the pixel types are different from the labels of the pixel types or are not in a default correspondence, the filter labels corresponding to the pixel types are written into the code stream.
When the number of the filter types is equal to the number of the pixel categories, but the filter labels corresponding to the pixel categories are different from the labels of the pixel categories or do not have a default correspondence, the decoding end cannot directly determine the filter labels corresponding to the pixel categories. In this case, the encoding end writes the filter label corresponding to the pixel type into the code stream.
For example, when encoding filter coefficients, such as when using a coefficient difference encoding scheme, the order of the filters may be adjusted to reduce the number of encoded bits, and thus the labels (correspondence order) of the filters may not match the pixel type or may not be in a default correspondence relationship. The decoding end cannot directly determine the filter label corresponding to the pixel class. Therefore, the encoding end needs to write the filter label corresponding to the pixel type into the code stream.
Optionally, in the above situation, the encoding end may further write second indication information into the code stream, where the second indication information is used to indicate that the code stream includes the filter label corresponding to the pixel type. Correspondingly, when the decoding end analyzes the second indication information, the filter label corresponding to the pixel type in the code stream is analyzed.
The first indication information and the second indication information may be used as filter identifier indication information in the code stream, that is, in different cases, the filter identifier indication information in the code stream may be the first indication information or the second indication information.
Optionally, in an embodiment of the present application, if the number of the types of the filters is greater than 1 and less than the number of the pixel types, the filter labels corresponding to the pixel types are written into the code stream.
When the number of the types of the filters is greater than 1 and less than the number of the pixel types, the correspondence between the pixel types and the types of the filters is uncertain, and therefore, the encoding end needs to write the filter labels corresponding to the pixel types into the code stream.
Optionally, in an embodiment of the present application, if the number of the types of the filters is 1, the filter label corresponding to the pixel type is not written into the code stream.
If the number of filter types is 1, only one filter is used. In this case the encoding end likewise does not need to write the filter labels corresponding to the pixel types into the code stream, and the decoding end simply uses the only filter in the code stream to loop filter the pixels of every pixel category.
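The encoder-side cases discussed above can be collected into a single decision. The sketch below assumes that, when every pixel category has its own filter, a label can be omitted exactly when it equals the category label (or follows the default correspondence); the function and variable names are hypothetical.

    #include <vector>

    // Returns true if the per-class filter labels need to be written into the
    // code stream, following the cases discussed above.
    bool filterLabelsMustBeWritten(unsigned numFilters, unsigned numClasses,
                                   const std::vector<unsigned>& filterIdxPerClass)
    {
        if (numFilters == 1) return false;              // only one filter: nothing to signal
        if (numFilters == numClasses) {
            // One filter per class: labels can be omitted if they match the class
            // labels (the default correspondence); otherwise they must be written.
            for (unsigned c = 0; c < numClasses; ++c)
                if (filterIdxPerClass[c] != c) return true;
            return false;
        }
        return true;                                    // 1 < numFilters < numClasses
    }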
According to the technical scheme of the embodiment of the application, whether the filter labels corresponding to the pixel categories are written into the code stream or not is determined according to the number of the types of the filters, so that redundancy caused by uniformly writing the filter labels corresponding to the pixel categories into the code stream can be avoided, coding bits are saved, and compression efficiency can be improved.
The technical solution of the embodiment of the present application is described above from the perspective of the encoding end, and the technical solution of the embodiment of the present application is described below from the perspective of the decoding end. It should be understood that, except for the following description, the general description of the encoding side and the decoding side may refer to the foregoing description and will not be repeated for brevity.
Fig. 7 shows a schematic flow chart of a method 700 of loop filtering of another embodiment of the present application. The method 700 may be performed by a decoding end. For example, it may be performed by the system 100 shown in fig. 1 when performing a decoding operation.
The number of types of filters employed for loop filtering is determined 710.
The encoding end writes the number of the types of the filters adopted by the loop filtering into the code stream, and correspondingly, the decoding end can obtain the number of the types of the filters by analyzing the code stream.
Optionally, in the case that the encoding end subtracts one from the number of filter types before writing it into the code stream, the decoding end adds one to the number of filter types after parsing.
And 720, determining whether to analyze the filter labels corresponding to the pixel categories in the code stream according to the number of the types of the filters.
After obtaining the number of the types of the filters, the decoding end performs different processing for different situations.
Optionally, in an embodiment of the present application, if the number of the types of the filters is equal to the number of the pixel types, the filter labels corresponding to the pixel types in the code stream are not decoded.
In this case, the decoding end performs loop filtering on the pixels of the pixel class by using a filter having a label that is the same as or has a default correspondence with the label of the pixel class.
Specifically, when the number of the types of the filters is equal to the number of the pixel types, if the filter label corresponding to the pixel type is the same as or in a default correspondence with the label of the pixel type, the decoding end can determine the filter label corresponding to the pixel type without analyzing the filter label corresponding to the pixel type in the code stream, and thus the loop filtering can be performed by directly using the filter having the label that is the same as or has the default correspondence with the label of the pixel type.
Optionally, in an embodiment of the present application, if the number of the types of the filters is equal to the number of the pixel types, analyzing filter label indication information in the code stream first, and if the filter label indication information is first indication information, where the first indication information is used to indicate that the code stream does not include a filter label corresponding to the pixel type, not analyzing the filter label corresponding to the pixel type in the code stream, and performing loop filtering on the pixels of the pixel type by using a filter having a label that is the same as or has a default correspondence with the label of the pixel type.
Optionally, in an embodiment of the present application, if the number of the types of the filters is equal to the number of the pixel types, analyzing filter label indication information in the code stream first, and if the filter label indication information is second indication information, where the second indication information is used to indicate that the code stream includes a filter label corresponding to the pixel type, analyzing the filter label corresponding to the pixel type in the code stream. And then, for the pixels of the pixel type, performing loop filtering by adopting the analyzed filter with the filter label corresponding to the pixel type.
Optionally, in an embodiment of the present application, if the number of the types of the filters is greater than 1 and less than the number of the pixel types, the filter labels corresponding to the pixel types in the code stream are analyzed. And then, for the pixels of the pixel type, performing loop filtering by adopting the analyzed filter with the filter label corresponding to the pixel type.
Optionally, in an embodiment of the present application, if the number of the types of the filters is 1, the filter labels corresponding to the pixel types in the code stream are not decoded. And for the pixels of the pixel category, performing loop filtering by adopting a unique filter in the code stream.
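On the decoding side, the cases above correspond to the following parsing sketch; the variant that uses explicit filter label indication information is omitted for brevity, and the Reader type with its readUInt primitive is invented for the example.

    #include <numeric>
    #include <vector>

    // Hypothetical parsing primitive reading from an already entropy-decoded stream.
    struct Reader {
        std::vector<unsigned> syms;
        size_t pos = 0;
        unsigned readUInt() { return syms[pos++]; }
    };

    // Derive the filter label used by each pixel class, following the cases above.
    std::vector<unsigned> parseFilterLabels(Reader& r, unsigned numClasses)
    {
        unsigned numFilters = r.readUInt() + 1;          // parsed value plus one
        std::vector<unsigned> filterIdxPerClass(numClasses, 0);

        if (numFilters == 1) {
            // single filter: every class uses the only filter in the code stream
        } else if (numFilters == numClasses) {
            // one filter per class: labels are not in the code stream, so use the
            // class label itself (the default correspondence)
            std::iota(filterIdxPerClass.begin(), filterIdxPerClass.end(), 0u);
        } else {
            // 1 < numFilters < numClasses: labels are parsed from the code stream
            for (unsigned c = 0; c < numClasses; ++c)
                filterIdxPerClass[c] = r.readUInt();
        }
        return filterIdxPerClass;
    }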
Fig. 8 shows a schematic flow chart of a method 800 of loop filtering of yet another embodiment of the present application. The method 800 may be performed by an encoding side. For example, it may be performed by the system 100 shown in FIG. 1 when performing encoding operations. Except for the following description, the embodiment may refer to the related description of the embodiment described in the foregoing encoding end, and for brevity, will not be described again.
And 810, performing loop filtering on the pixels of the pixel category by adopting a filter corresponding to the pixel category.
Optionally, the encoding end may first determine an initial filter corresponding to each pixel type; and then combining the initial filters corresponding to different pixel categories to obtain the filter corresponding to each pixel category. Then, the pixels of each pixel type are loop filtered using a filter corresponding to each pixel type.
820, generating a coded code stream, wherein if the number of the types of the filters adopted by the loop filtering is equal to the number of the pixel types, the code stream does not include the filter labels corresponding to the pixel types.
In this embodiment of the present application, when the number of the types of the filters used for loop filtering is equal to the number of the pixel types, the code stream generated by the encoding end does not include the filter labels corresponding to the pixel types. That is, at the encoding end, if the number of the filter types is equal to the number of the pixel types, the filter labels corresponding to the pixel types are not written into the code stream.
In the above case, the filter label corresponding to the pixel class is the same as the label of the pixel class or in a default correspondence.
Optionally, the encoding end may further write first indication information into the code stream, where the first indication information is used to indicate that the code stream does not include a filter label corresponding to the pixel type.
Optionally, if the number of the types of the filters used for loop filtering is equal to the number of the pixel types, and the filter labels corresponding to the pixel types are different from the labels of the pixel types or are not in a default correspondence relationship, the code stream generated by the encoding end includes the filter labels corresponding to the pixel types.
Optionally, if the number of the types of the filters is greater than 1 and less than the number of the pixel types, the code stream generated by the encoding end includes the filter labels corresponding to the pixel types. That is, at the encoding end, if the number of the filter types is greater than 1 and less than the number of the pixel types, the filter labels corresponding to the pixel types are written into the code stream.
Optionally, if the number of the types of the filters is 1, the filter labels corresponding to the pixel types are not written into the code stream.
Optionally, the encoding end may further reduce the number of the filter types by one and write the filter types into the code stream.
In the technical scheme of the embodiment of the application, when the number of the types of the filters adopted by the loop filtering is equal to the number of the pixel types, the code stream does not include the filter labels corresponding to the pixel types, so that redundancy caused by uniformly writing the filter labels corresponding to the pixel types into the code stream can be avoided, coding bits are saved, and the compression efficiency can be improved.
Fig. 9 shows a schematic flow chart of a method 900 of loop filtering of yet another embodiment of the present application. The method 900 may be performed by a decoding end. For example, it may be performed by the system 100 shown in fig. 1 when performing a decoding operation. Except for the following description, the embodiment may refer to the related description of the embodiment described at the decoding end, and is not repeated for brevity.
And 910, acquiring the coded code stream.
And 920, analyzing the code stream to obtain the number of the types of the filters adopted by the loop filtering.
The encoding end writes the number of the types of the filters adopted by the loop filtering into the code stream, and correspondingly, the decoding end can obtain the number of the types of the filters by analyzing the code stream.
930, if the number of the filter types is equal to the number of the pixel types, performing loop filtering on the pixels of the pixel types by using a filter with a label which is the same as or has a default corresponding relation with the label of the pixel type.
And under the condition that the number of the types of the filters is equal to the number of the pixel categories, the code stream generated by the encoding end does not include the filter labels corresponding to the pixel categories. The decoding end does not need to analyze the label of the filter corresponding to the pixel type in the code stream, and can directly adopt the filter with the label which is the same as the label of the pixel type or has a default corresponding relation to carry out loop filtering.
Optionally, if the number of the types of the filters is equal to the number of the pixel types, analyzing filter label indication information in the code stream, and if the filter label indication information is first indication information, performing loop filtering by using a filter having a label that is the same as or has a default correspondence with a label of the pixel type, where the first indication information is used to indicate that the code stream does not include a filter label corresponding to the pixel type.
Optionally, if the number of the types of the filters is equal to the number of the pixel types, analyzing filter label indication information in the code stream, and if the filter label indication information is second indication information, analyzing a filter label corresponding to the pixel type in the code stream, where the second indication information is used to indicate that the code stream includes a filter label corresponding to the pixel type. And then, for the pixels of the pixel type, performing loop filtering by adopting the analyzed filter with the filter label corresponding to the pixel type.
Optionally, if the number of the types of the filters is greater than 1 and less than the number of the pixel types, analyzing the filter labels corresponding to the pixel types in the code stream. And then, for the pixels of the pixel type, performing loop filtering by adopting the analyzed filter with the filter label corresponding to the pixel type.
Optionally, if the number of the types of the filters is 1, for the pixels of the pixel type, performing loop filtering by using only one filter in the code stream.
The loop filtering methods of the embodiments of the present application are described above in detail; the loop filtering apparatus, computer system and mobile device of the embodiments of the present application are described below.
Fig. 10 shows a schematic block diagram of an apparatus 1000 for loop filtering according to an embodiment of the present application. The apparatus 1000 may perform the loop filtering method 600 of the embodiment of the present application. For example, the apparatus 1000 may be an encoder.
As shown in fig. 10, the apparatus 1000 may include:
a determining module 1010 for determining the number of types of filters used for loop filtering;
the processing module 1020 determines whether to write the filter label corresponding to the pixel type into the code stream according to the number of the types of the filters.
Optionally, the processing module 1020 is configured to:
and if the number of the types of the filters is equal to the number of the pixel types, not writing the filter labels corresponding to the pixel types into the code stream.
Optionally, the label of the filter corresponding to the pixel category is the same as the label of the pixel category or is a default correspondence.
Optionally, the processing module 1020 is further configured to:
and writing first indication information into the code stream, wherein the first indication information is used for indicating that the code stream does not include the filter label corresponding to the pixel type.
Optionally, the processing module 1020 is configured to:
and if the number of the types of the filters is equal to the number of the pixel types, and the filter labels corresponding to the pixel types are different from the labels of the pixel types or are not in a default corresponding relationship, writing the filter labels corresponding to the pixel types into the code stream.
Optionally, the processing module 1020 is further configured to:
and writing second indication information into the code stream, wherein the second indication information is used for indicating that the code stream comprises the filter labels corresponding to the pixel categories.
Optionally, the processing module 1020 is configured to:
and if the number of the types of the filters is greater than 1 and less than the number of the pixel types, writing the filter labels corresponding to the pixel types into the code stream.
Optionally, the processing module 1020 is configured to:
and if the number of the types of the filters is 1, not writing the filter labels corresponding to the pixel types into the code stream.
Optionally, the processing module 1020 is further configured to:
determining an initial filter corresponding to each pixel category;
and combining the initial filters corresponding to different pixel categories to obtain a filter corresponding to each pixel category.
Optionally, the processing module 1020 is further configured to:
and writing the number of the types of the filters, reduced by one, into the code stream.
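As a companion to the optional behaviors of the processing module 1020 listed above, the following is a minimal encoder-side sketch. The BitstreamWriter type, its write methods, and the zero-based label convention are assumptions made for illustration only, not part of this application.

```cpp
#include <vector>

// Hypothetical bitstream writer; the real entropy coding is codec-specific.
struct BitstreamWriter {
    void writeUInt(int) {}   // placeholder
    void writeFlag(bool) {}  // placeholder
};

void writeFilterSignalling(BitstreamWriter& bs,
                           const std::vector<int>& classToFilter,  // filter label per pixel category
                           int numFilterTypes) {
    const int numPixelClasses = static_cast<int>(classToFilter.size());

    // The number of filter types, reduced by one, is written into the code stream.
    bs.writeUInt(numFilterTypes - 1);

    if (numFilterTypes == 1) return;                 // a single filter: no labels are written
    if (numFilterTypes < numPixelClasses) {          // merged filters: one label per category
        for (int label : classToFilter) bs.writeUInt(label);
        return;
    }
    // Counts are equal: check whether the mapping is the default (label == category index).
    bool isDefault = true;
    for (int c = 0; c < numPixelClasses; ++c) {
        if (classToFilter[c] != c) { isDefault = false; break; }
    }
    bs.writeFlag(!isDefault);                        // first or second indication information
    if (!isDefault) {
        for (int label : classToFilter) bs.writeUInt(label);
    }
}
```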
Fig. 11 shows a schematic block diagram of an apparatus 1100 for loop filtering according to another embodiment of the present application. The apparatus 1100 may perform the loop filtering method 700 of the embodiment of the present application. The apparatus 1100 may be, for example, a decoder.
As shown in fig. 11, the apparatus 1100 may include:
a determining module 1110, configured to determine the number of types of filters used for loop filtering;
the processing module 1120 is configured to determine whether to analyze a filter label corresponding to a pixel category in the code stream according to the number of the types of the filters.
Optionally, the processing module 1120 is configured to:
and if the number of the types of the filters is equal to the number of the pixel types, not analyzing the filter labels corresponding to the pixel types in the code stream.
Optionally, the processing module 1120 is configured to:
if the number of the types of the filters is equal to the number of the pixel types, analyzing filter label indication information in the code stream, and if the filter label indication information is first indication information, not analyzing filter labels corresponding to the pixel types in the code stream, wherein the first indication information is used for indicating that the filter labels corresponding to the pixel types are not included in the code stream.
Optionally, the processing module 1120 is further configured to:
and for the pixels of the pixel category, performing loop filtering by adopting a filter with the same label as the label of the pixel category or a label with default corresponding relation.
Optionally, the processing module 1120 is configured to:
and analyzing the filter label indication information in the code stream if the number of the types of the filters is equal to the number of the pixel types, and analyzing the filter label corresponding to the pixel type in the code stream if the filter label indication information is second indication information, wherein the second indication information is used for indicating that the code stream comprises the filter label corresponding to the pixel type.
Optionally, the processing module 1120 is configured to:
and if the number of the types of the filters is greater than 1 and less than the number of the pixel types, analyzing the filter labels corresponding to the pixel types in the code stream.
Optionally, the processing module 1120 is further configured to:
and performing loop filtering on the pixels of the pixel type by adopting the analyzed filter with the filter label corresponding to the pixel type.
Optionally, the processing module 1120 is configured to:
and if the number of the types of the filters is 1, not analyzing the filter labels corresponding to the pixel types in the code stream.
Optionally, the processing module 1120 is further configured to:
and for the pixels of the pixel category, performing loop filtering by adopting a unique filter in the code stream.
Optionally, the determining module 1110 is configured to:
and analyzing the code stream to obtain the number of the types of the filters.
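If, as described for the encoding side, the number of the types of the filters is written into the code stream after being reduced by one, the determining module 1110 would recover it by adding one back, as in the small sketch below; codedValue is a hypothetical name for the parsed syntax element.

```cpp
// Inverse of "reduce the number of filter types by one before writing it".
int recoverNumFilterTypes(int codedValue) {
    return codedValue + 1;
}
```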
Fig. 12 shows a schematic block diagram of an apparatus 1200 for loop filtering according to yet another embodiment of the present application. The apparatus 1200 may perform the loop filtering method 800 of the embodiment of the present application. The apparatus 1200 may be, for example, an encoder.
As shown in fig. 12, the apparatus 1200 may include:
a filtering module 1210, configured to perform loop filtering on a pixel of a pixel category by using a filter corresponding to the pixel category;
the processing module 1220 is configured to generate a coded code stream, where if the number of the types of the filters used for loop filtering is equal to the number of the pixel types, the code stream does not include the filter labels corresponding to the pixel types.
Optionally, the processing module 1220 is configured to:
and if the number of the types of the filters is equal to the number of the pixel types, not writing the filter labels corresponding to the pixel types into the code stream.
Optionally, the label of the filter corresponding to the pixel category is the same as the label of the pixel category, or has a default correspondence with it.
Optionally, the processing module 1220 is further configured to:
and writing first indication information into the code stream, wherein the first indication information is used for indicating that the code stream does not include the filter label corresponding to the pixel type.
Optionally, the processing module 1220 is configured to:
and if the number of the types of the filters is greater than 1 and less than the number of the pixel types, writing the filter labels corresponding to the pixel types into the code stream.
Optionally, the processing module 1220 is configured to:
and if the number of the types of the filters is 1, not writing the filter labels corresponding to the pixel types into the code stream.
Optionally, the processing module 1220 is further configured to:
determining an initial filter corresponding to each pixel category;
and combining the initial filters corresponding to different pixel categories to obtain a filter corresponding to each pixel category.
Optionally, the processing module 1220 is further configured to:
and writing the number of the types of the filters, reduced by one, into the code stream.
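The merging of initial filters performed by the processing module 1220 could look like the sketch below. This application does not fix a merging criterion here, so the coefficient-equality test used in this example is purely illustrative, as are the Filter type and the function names.

```cpp
#include <map>
#include <vector>

using Filter = std::vector<double>;    // hypothetical: a filter as a list of coefficients

struct MergedFilters {
    std::vector<Filter> filters;       // one entry per filter type after merging
    std::vector<int>    classToFilter; // filter label assigned to each pixel category
};

MergedFilters mergeInitialFilters(const std::vector<Filter>& initialFilters) {
    MergedFilters out;
    std::map<Filter, int> seen;        // coefficients -> label of the merged filter
    for (const Filter& f : initialFilters) {
        auto it = seen.find(f);
        if (it == seen.end()) {
            it = seen.emplace(f, static_cast<int>(out.filters.size())).first;
            out.filters.push_back(f);
        }
        out.classToFilter.push_back(it->second);
    }
    // out.filters.size() gives the number of filter types that is compared with the
    // number of pixel categories when deciding whether to write filter labels.
    return out;
}
```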
Fig. 13 shows a schematic block diagram of an apparatus 1300 for loop filtering according to another embodiment of the present application. The apparatus 1300 may perform the loop filtering method 900 of the embodiment of the present application. For example, the apparatus 1300 may be a decoder.
As shown in fig. 13, the apparatus 1300 may include:
an obtaining module 1310 configured to obtain a coded code stream;
a processing module 1320, configured to analyze the code stream to obtain the number of types of filters used for loop filtering; and if the number of the types of the filters is equal to the number of the pixel types, performing loop filtering on the pixels of the pixel types by adopting the filters with the labels which are the same as or have the default corresponding relation with the labels of the pixel types.
Optionally, the processing module 1320 is configured to:
and if the number of the types of the filters is equal to the number of the pixel types, analyzing the indication information of the filter labels in the code stream, and if the indication information of the filter labels is first indication information, performing loop filtering by using the filter with the label which is the same as the label of the pixel type or has a default corresponding relation, wherein the first indication information is used for indicating that the filter label corresponding to the pixel type is not included in the code stream.
Optionally, the processing module 1320 is configured to:
and analyzing the filter label indication information in the code stream if the number of the types of the filters is equal to the number of the pixel types, and analyzing the filter label corresponding to the pixel type in the code stream if the filter label indication information is second indication information, wherein the second indication information is used for indicating that the code stream comprises the filter label corresponding to the pixel type.
Optionally, the processing module 1320 is configured to:
and if the number of the types of the filters is greater than 1 and less than the number of the pixel types, analyzing the filter labels corresponding to the pixel types in the code stream.
Optionally, the processing module 1320 is configured to:
and performing loop filtering on the pixels of the pixel type by adopting the analyzed filter with the filter label corresponding to the pixel type.
Optionally, the processing module 1320 is configured to:
and if the number of the types of the filters is 1, performing loop filtering on the pixels of the pixel types by adopting the only one filter in the code stream.
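Once a filter has been resolved for a pixel category, the filtering step itself reduces to applying that filter to the pixel's neighborhood. The sketch below assumes a simple weighted-sum filter over a flattened neighborhood; the actual filter shape and support are not constrained by this application to this form.

```cpp
#include <cstddef>
#include <vector>

// Weighted sum of a pixel's neighborhood with the coefficients of the chosen filter.
// Both containers are flattened over the filter support; this layout is an assumption.
double applyLoopFilter(const std::vector<double>& coefficients,
                       const std::vector<double>& neighborhood) {
    double filtered = 0.0;
    const std::size_t n = coefficients.size() < neighborhood.size()
                              ? coefficients.size() : neighborhood.size();
    for (std::size_t i = 0; i < n; ++i) {
        filtered += coefficients[i] * neighborhood[i];
    }
    return filtered;
}
```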
It should be understood that the loop filtering apparatus according to the embodiments of the present application may be a chip, which may specifically be implemented by a circuit; the embodiments of the present application do not limit the specific implementation form.
The embodiment of the present application further provides an encoder, where the encoder is configured to implement the function of the encoding end in the embodiment of the present application, and the encoder may include the apparatus for loop filtering at the encoding end in the embodiment of the present application.
The embodiment of the present application further provides a decoder, where the decoder is configured to implement the function of a decoding end in the embodiment of the present application, and the decoder may include the apparatus for loop filtering at the decoding end in the embodiment of the present application.
The embodiment of the present application further provides a codec, where the codec includes the apparatus for loop filtering at the encoding end and the apparatus for loop filtering at the decoding end in the embodiment of the present application.
FIG. 14 shows a schematic block diagram of a computer system 1400 of an embodiment of the present application.
As shown in fig. 14, the computer system 1400 may include a processor 1410 and a memory 1420.
It should be understood that the computer system 1400 may also include other components typically included in computer systems, such as input/output devices, communication interfaces, etc., which are not limited by the embodiments of the present application.
The memory 1420 is used to store computer-executable instructions.
The Memory 1420 may be various types of memories, and may include a high-speed Random Access Memory (RAM), and may further include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory, which is not limited in the embodiments of the present application.
The processor 1410 is configured to access the memory 1420 and execute the computer-executable instructions to perform the operations in the loop filtering method of the embodiments of the present application described above.
The processor 1410 may include a microprocessor, a Field-Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), and the like, which are not limited in the embodiments of the present application.
The embodiment of the present application further provides a mobile device, which may include the loop filtering apparatus or the computer system according to the various embodiments of the present application.
The loop filtering apparatus, the computer system, and the mobile device in the embodiments of the present application may correspond to the entity that performs the loop filtering method in the embodiments of the present application, and the above and other operations and/or functions of the modules in the loop filtering apparatus, the computer system, and the mobile device are respectively intended to implement the corresponding processes of the foregoing methods; for brevity, details are not described herein again.
The embodiment of the present application further provides a computer storage medium, in which a program code is stored, where the program code may be used to instruct to perform the loop filtering method according to the embodiment of the present application.
It should be understood that, in the embodiments of the present application, the term "and/or" merely describes an association relation between associated objects and indicates that three relations may exist. For example, A and/or B may represent the following three cases: only A exists, both A and B exist, and only B exists. In addition, the character "/" herein generally indicates an "or" relation between the associated objects.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both. To clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present application.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part of it that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
While the invention has been described with reference to specific embodiments, the scope of the invention is not limited thereto, and those skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (62)

1. A method of loop filtering, comprising:
determining an initial filter corresponding to each pixel category;
combining the initial filters corresponding to different pixel categories to obtain a filter corresponding to each pixel category;
determining the number of types of filters adopted by loop filtering;
determining whether to write the filter labels corresponding to the pixel categories into a code stream according to the number of the types of the filters;
wherein, the determining whether to write the filter label corresponding to the pixel type into the code stream according to the number of the filter types comprises:
and if the number of the types of the filters is equal to the number of the pixel types, not writing the filter labels corresponding to the pixel types into the code stream.
2. The method of claim 1, wherein the filter label corresponding to the pixel class is the same as, or has a default correspondence with, the label of the pixel class.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
and writing first indication information into the code stream, wherein the first indication information is used for indicating that the code stream does not include the filter label corresponding to the pixel type.
4. The method according to claim 1, wherein said determining whether to write the filter label corresponding to the pixel class into the code stream according to the number of the filter types comprises:
and if the number of the types of the filters is equal to the number of the pixel types, and the filter labels corresponding to the pixel types are different from the labels of the pixel types or are not in a default corresponding relationship, writing the filter labels corresponding to the pixel types into the code stream.
5. The method of claim 4, further comprising:
and writing second indication information into the code stream, wherein the second indication information is used for indicating that the code stream comprises the filter labels corresponding to the pixel categories.
6. The method according to claim 1, wherein said determining whether to write the filter label corresponding to the pixel class into the code stream according to the number of the filter types comprises:
and if the number of the types of the filters is greater than 1 and less than the number of the pixel types, writing the filter labels corresponding to the pixel types into the code stream.
7. The method according to claim 1, wherein said determining whether to write the filter label corresponding to the pixel class into the code stream according to the number of the filter types comprises:
and if the number of the types of the filters is 1, not writing the filter labels corresponding to the pixel types into the code stream.
8. The method according to claim 1 or 2, characterized in that the method further comprises:
and writing the number of the types of the filters, reduced by one, into the code stream.
9. A method of loop filtering, comprising:
determining an initial filter corresponding to each pixel category;
combining the initial filters corresponding to different pixel categories to obtain a filter corresponding to each pixel category;
determining the number of types of filters adopted by loop filtering;
determining whether to analyze the filter labels corresponding to the pixel categories in the code stream according to the number of the types of the filters;
wherein, the determining whether to analyze the filter label corresponding to the pixel category in the code stream according to the number of the filter categories includes:
and if the number of the types of the filters is equal to the number of the pixel types, not analyzing the filter labels corresponding to the pixel types in the code stream.
10. The method according to claim 9, wherein determining whether to parse the filter label corresponding to the pixel class in the code stream according to the number of the filter classes comprises:
if the number of the types of the filters is equal to the number of the pixel types, analyzing filter label indication information in the code stream, and if the filter label indication information is first indication information, not analyzing filter labels corresponding to the pixel types in the code stream, wherein the first indication information is used for indicating that the filter labels corresponding to the pixel types are not included in the code stream.
11. The method according to claim 9 or 10, characterized in that the method further comprises:
and for the pixels of the pixel category, performing loop filtering by adopting a filter with the same label as the label of the pixel category or a label with default corresponding relation.
12. The method according to claim 9, wherein determining whether to parse the filter label corresponding to the pixel class in the code stream according to the number of the filter classes comprises:
and analyzing the filter label indication information in the code stream if the number of the types of the filters is equal to the number of the pixel types, and analyzing the filter label corresponding to the pixel type in the code stream if the filter label indication information is second indication information, wherein the second indication information is used for indicating that the code stream comprises the filter label corresponding to the pixel type.
13. The method according to claim 9, wherein determining whether to parse the filter label corresponding to the pixel class in the code stream according to the number of the filter classes comprises:
and if the number of the types of the filters is greater than 1 and less than the number of the pixel types, analyzing the filter labels corresponding to the pixel types in the code stream.
14. The method according to claim 12 or 13, characterized in that the method further comprises:
and performing loop filtering on the pixels of the pixel type by adopting the analyzed filter with the filter label corresponding to the pixel type.
15. The method according to claim 9, wherein determining whether to parse the filter label corresponding to the pixel class in the code stream according to the number of the filter classes comprises:
and if the number of the types of the filters is 1, not analyzing the filter labels corresponding to the pixel types in the code stream.
16. The method of claim 15, further comprising:
and for the pixels of the pixel category, performing loop filtering by adopting a unique filter in the code stream.
17. The method of claim 9 or 10, wherein the determining the number of classes of filters employed for loop filtering comprises:
and analyzing the code stream to obtain the number of the types of the filters.
18. A method of loop filtering, comprising:
determining an initial filter corresponding to each pixel category;
combining the initial filters corresponding to different pixel categories to obtain a filter corresponding to each pixel category;
performing loop filtering on the pixels of the pixel types by adopting filters corresponding to the pixel types;
and generating a coded code stream, wherein if the number of the types of the filters adopted by loop filtering is equal to the number of the pixel types, the code stream does not include the filter labels corresponding to the pixel types.
19. The method of claim 18, wherein generating the encoded codestream comprises:
and if the number of the types of the filters is equal to the number of the pixel types, not writing the filter labels corresponding to the pixel types into the code stream.
20. The method of claim 19, wherein the filter label corresponding to the pixel class is the same as, or has a default correspondence with, the label of the pixel class.
21. The method according to claim 19 or 20, wherein the generating an encoded codestream further comprises:
and writing first indication information into the code stream, wherein the first indication information is used for indicating that the code stream does not include the filter label corresponding to the pixel type.
22. The method of claim 18, wherein generating the encoded codestream comprises:
and if the number of the types of the filters is greater than 1 and less than the number of the pixel types, writing the filter labels corresponding to the pixel types into the code stream.
23. The method of claim 18, wherein generating the encoded codestream comprises:
and if the number of the types of the filters is 1, not writing the filter labels corresponding to the pixel types into the code stream.
24. The method of any of claims 18 to 20, wherein the generating an encoded codestream further comprises:
and writing the number of the types of the filters, reduced by one, into the code stream.
25. A method of loop filtering, comprising:
acquiring a coded code stream;
analyzing the code stream to obtain the number of the types of filters adopted by loop filtering;
and if the number of the types of the filters is equal to the number of the pixel types, performing loop filtering on the pixels of the pixel types by adopting the filters with the labels which are the same as or have the default corresponding relation with the labels of the pixel types.
26. The method according to claim 25, wherein if the number of the filter types is equal to the number of the pixel types, analyzing filter label indication information in the code stream, and if the filter label indication information is first indication information, performing loop filtering by using a filter having a label that is the same as or has a default correspondence with a label of the pixel type, wherein the first indication information is used to indicate that the filter label corresponding to the pixel type is not included in the code stream.
27. The method according to claim 25, wherein if the number of the filter types is equal to the number of the pixel types, analyzing filter label indication information in the code stream, and if the filter label indication information is second indication information, analyzing a filter label corresponding to the pixel type in the code stream, wherein the second indication information is used to indicate that the filter label corresponding to the pixel type is included in the code stream.
28. The method of claim 25, wherein if the number of the filter types is greater than 1 and less than the number of the pixel types, the filter label corresponding to the pixel type in the code stream is analyzed.
29. The method of claim 27 or 28, further comprising:
and performing loop filtering on the pixels of the pixel type by adopting the analyzed filter with the filter label corresponding to the pixel type.
30. The method of claim 25, wherein if the number of the filter types is 1, performing loop filtering on the pixels of the pixel type by using only one filter in the code stream.
31. An apparatus for loop filtering, comprising:
the determining module is used for determining the number of the types of the filters adopted by the loop filtering;
the processing module is used for determining whether to write the filter labels corresponding to the pixel categories into the code stream according to the number of the types of the filters;
wherein the processing module is configured to:
if the number of the types of the filters is equal to the number of the pixel types, not writing the filter labels corresponding to the pixel types into the code stream;
the processing module is further configured to:
determining an initial filter corresponding to each pixel category;
and combining the initial filters corresponding to different pixel categories to obtain a filter corresponding to each pixel category.
32. The apparatus of claim 31, wherein the filter label corresponding to the pixel class is the same as, or has a default correspondence with, the label of the pixel class.
33. The apparatus of claim 31 or 32, wherein the processing module is further configured to:
and writing first indication information into the code stream, wherein the first indication information is used for indicating that the code stream does not include the filter label corresponding to the pixel type.
34. The apparatus of claim 31, wherein the processing module is configured to:
and if the number of the types of the filters is equal to the number of the pixel types, and the filter labels corresponding to the pixel types are different from the labels of the pixel types or are not in a default corresponding relationship, writing the filter labels corresponding to the pixel types into the code stream.
35. The apparatus of claim 34, wherein the processing module is further configured to:
and writing second indication information into the code stream, wherein the second indication information is used for indicating that the code stream comprises the filter labels corresponding to the pixel categories.
36. The apparatus of claim 31, wherein the processing module is configured to:
and if the number of the types of the filters is greater than 1 and less than the number of the pixel types, writing the filter labels corresponding to the pixel types into the code stream.
37. The apparatus of claim 31, wherein the processing module is configured to:
and if the number of the types of the filters is 1, not writing the filter labels corresponding to the pixel types into the code stream.
38. The apparatus of claim 31 or 32, wherein the processing module is further configured to:
and writing the number of the types of the filters, reduced by one, into the code stream.
39. An apparatus for loop filtering, comprising:
a determining module for determining the number of the types of the filters adopted by the loop filtering;
the processing module is used for determining whether to analyze the filter labels corresponding to the pixel categories in the code stream according to the number of the types of the filters;
wherein the processing module is configured to:
if the number of the types of the filters is equal to the number of the pixel types, not analyzing the filter labels corresponding to the pixel types in the code stream;
the processing module is further configured to:
determining an initial filter corresponding to each pixel category;
and combining the initial filters corresponding to different pixel categories to obtain a filter corresponding to each pixel category.
40. The apparatus of claim 39, wherein the processing module is configured to:
if the number of the types of the filters is equal to the number of the pixel types, analyzing filter label indication information in the code stream, and if the filter label indication information is first indication information, not analyzing filter labels corresponding to the pixel types in the code stream, wherein the first indication information is used for indicating that the filter labels corresponding to the pixel types are not included in the code stream.
41. The apparatus of claim 39 or 40, wherein the processing module is further configured to:
and for the pixels of the pixel category, performing loop filtering by adopting a filter with the same label as the label of the pixel category or a label with default corresponding relation.
42. The apparatus of claim 39, wherein the processing module is configured to:
and analyzing the filter label indication information in the code stream if the number of the types of the filters is equal to the number of the pixel types, and analyzing the filter label corresponding to the pixel type in the code stream if the filter label indication information is second indication information, wherein the second indication information is used for indicating that the code stream comprises the filter label corresponding to the pixel type.
43. The apparatus of claim 39, wherein the processing module is configured to:
and if the number of the types of the filters is greater than 1 and less than the number of the pixel types, analyzing the filter labels corresponding to the pixel types in the code stream.
44. The apparatus of claim 42 or 43, wherein the processing module is further configured to:
and performing loop filtering on the pixels of the pixel type by adopting the analyzed filter with the filter label corresponding to the pixel type.
45. The apparatus of claim 39, wherein the processing module is configured to:
and if the number of the types of the filters is 1, not analyzing the filter labels corresponding to the pixel types in the code stream.
46. The apparatus of claim 45, wherein the processing module is further configured to:
and for the pixels of the pixel category, performing loop filtering by adopting a unique filter in the code stream.
47. The apparatus of claim 39 or 40, wherein the determining module is configured to:
and analyzing the code stream to obtain the number of the types of the filters.
48. An apparatus for loop filtering, comprising:
the filtering module is used for performing loop filtering on the pixels of the pixel types by adopting the filters corresponding to the pixel types;
the processing module is used for generating a coded code stream, wherein if the number of the types of the filters adopted by loop filtering is equal to the number of the pixel types, the code stream does not include the filter labels corresponding to the pixel types;
the processing module is further configured to:
determining an initial filter corresponding to each pixel category;
and combining the initial filters corresponding to different pixel categories to obtain a filter corresponding to each pixel category.
49. The apparatus of claim 48, wherein the processing module is configured to:
and if the number of the types of the filters is equal to the number of the pixel types, not writing the filter labels corresponding to the pixel types into the code stream.
50. The apparatus of claim 49, wherein the filter label corresponding to the pixel class is the same as, or has a default correspondence with, the label of the pixel class.
51. The apparatus of claim 49 or 50, wherein the processing module is further configured to:
and writing first indication information into the code stream, wherein the first indication information is used for indicating that the code stream does not include the filter label corresponding to the pixel type.
52. The apparatus of claim 48, wherein the processing module is configured to:
and if the number of the types of the filters is greater than 1 and less than the number of the pixel types, writing the filter labels corresponding to the pixel types into the code stream.
53. The apparatus of claim 48, wherein the processing module is configured to:
and if the number of the types of the filters is 1, not writing the filter labels corresponding to the pixel types into the code stream.
54. The apparatus of any one of claims 48 to 50, wherein the processing module is further configured to:
and writing the number of the types of the filters, reduced by one, into the code stream.
55. An apparatus for loop filtering, comprising:
the acquisition module is used for acquiring the coded code stream;
the processing module is used for analyzing the code stream to obtain the number of the types of the filters adopted by the loop filtering; and if the number of the types of the filters is equal to the number of the pixel types, performing loop filtering on the pixels of the pixel types by adopting the filters with the labels which are the same as or have the default corresponding relation with the labels of the pixel types.
56. The apparatus of claim 55, wherein the processing module is configured to:
and if the number of the types of the filters is equal to the number of the pixel types, analyzing the indication information of the filter labels in the code stream, and if the indication information of the filter labels is first indication information, performing loop filtering by using the filter with the label which is the same as the label of the pixel type or has a default corresponding relation, wherein the first indication information is used for indicating that the filter label corresponding to the pixel type is not included in the code stream.
57. The apparatus of claim 55, wherein the processing module is configured to:
and analyzing the filter label indication information in the code stream if the number of the types of the filters is equal to the number of the pixel types, and analyzing the filter label corresponding to the pixel type in the code stream if the filter label indication information is second indication information, wherein the second indication information is used for indicating that the code stream comprises the filter label corresponding to the pixel type.
58. The apparatus of claim 55, wherein the processing module is configured to:
and if the number of the types of the filters is greater than 1 and less than the number of the pixel types, analyzing the filter labels corresponding to the pixel types in the code stream.
59. The apparatus of claim 57 or 58, wherein the processing module is configured to:
and performing loop filtering on the pixels of the pixel type by adopting the analyzed filter with the filter label corresponding to the pixel type.
60. The apparatus of claim 55, wherein the processing module is configured to:
and if the number of the types of the filters is 1, performing loop filtering on the pixels of the pixel types by adopting the only one filter in the code stream.
61. A computer system, comprising:
a memory for storing computer executable instructions;
a processor for accessing the memory and executing the computer-executable instructions to perform operations in the method of any one of claims 1 to 30.
62. A mobile device, comprising:
the apparatus of any one of claims 31 to 60; or,
the computer system of claim 61.
CN201980005266.6A 2019-03-13 2019-03-13 Loop filtering method, device, computer system and mobile equipment Active CN111279706B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/078053 WO2020181541A1 (en) 2019-03-13 2019-03-13 In-loop filtering method and apparatus, computer system, and mobile device

Publications (2)

Publication Number Publication Date
CN111279706A CN111279706A (en) 2020-06-12
CN111279706B true CN111279706B (en) 2022-03-22

Family

ID=70999808

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980005266.6A Active CN111279706B (en) 2019-03-13 2019-03-13 Loop filtering method, device, computer system and mobile equipment

Country Status (2)

Country Link
CN (1) CN111279706B (en)
WO (1) WO2020181541A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103096054A (en) * 2011-11-04 2013-05-08 华为技术有限公司 Video image filtering processing method and device thereof
WO2016204374A1 (en) * 2015-06-18 2016-12-22 엘지전자 주식회사 Image filtering method and device in image coding system
CN108293111A (en) * 2015-10-16 2018-07-17 Lg电子株式会社 For improving the filtering method and device predicted in image encoding system
CN109076228A (en) * 2016-05-09 2018-12-21 高通股份有限公司 The signalling of filtering information

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012092841A1 (en) * 2011-01-03 2012-07-12 Mediatek Inc. Method of filter-unit based in-loop filtering

Also Published As

Publication number Publication date
WO2020181541A1 (en) 2020-09-17
CN111279706A (en) 2020-06-12

Similar Documents

Publication Publication Date Title
JP6595711B2 (en) Method and apparatus for transform coding with block-level transform selection and implicit signaling within a hierarchical partition
WO2021030019A1 (en) Block partitioning methods for video coding
EP3350992B1 (en) Methods and apparatuses for encoding and decoding digital images or video streams
CN110337811A (en) The method, apparatus and computer system of motion compensation
CN112544081B (en) Loop filtering method and device
CN111742552B (en) Method and device for loop filtering
CN112514401A (en) Method and device for loop filtering
US8249372B2 (en) Methods and devices for coding and decoding multidimensional digital signals
CN110383837B (en) Method and apparatus for video processing
US20210021821A1 (en) Video encoding and decoding method and apparatus
Nguyen et al. A novel steganography scheme for video H. 264/AVC without distortion drift
US9712828B2 (en) Foreground motion detection in compressed video data
US20220239901A1 (en) Video picture component prediction method and apparatus, and computer storage medium
Tsaig et al. Variable projection for near-optimal filtering in low bit-rate block coders
Vivek et al. Video steganography using chaos encryption algorithm with high efficiency video coding for data hiding
CN111279706B (en) Loop filtering method, device, computer system and mobile equipment
CN116982262A (en) State transition for dependent quantization in video coding
Liu et al. Context-adaptive inverse quantization for inter-frame coding
WO2019191888A1 (en) Loop filtering method and apparatus, and computer system
CN110710209A (en) Method, device and computer system for motion compensation
CN110720221A (en) Method, device and computer system for motion compensation
WO2024081872A1 (en) Method, apparatus, and medium for video processing
JP6485045B2 (en) Index operation apparatus, program and method
JP2023162141A (en) Point cloud encoding device, point cloud decoding device, point cloud encoding method, point cloud decoding method, and program
CN110677681A (en) Video coding and decoding method and device and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant