WO2023130899A1 - Loop filtering method, video encoding/decoding method, apparatus, medium and electronic device - Google Patents
Loop filtering method, video encoding/decoding method, apparatus, medium and electronic device
- Publication number
- WO2023130899A1 (PCT/CN2022/137908)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- component
- loop filtering
- adaptive loop
- alf
- chrominance
- Prior art date
Links
- 238000001914 filtration Methods 0.000 title claims abstract description 236
- 238000000034 method Methods 0.000 title claims abstract description 208
- 230000003044 adaptive effect Effects 0.000 claims abstract description 183
- 238000012545 processing Methods 0.000 claims abstract description 133
- 230000008569 process Effects 0.000 claims description 47
- 230000009466 transformation Effects 0.000 claims description 22
- 230000006978 adaptation Effects 0.000 claims description 7
- 238000004590 computer program Methods 0.000 claims description 6
- 238000003672 processing method Methods 0.000 description 41
- 238000005516 engineering process Methods 0.000 description 19
- 238000010586 diagram Methods 0.000 description 16
- 238000013139 quantization Methods 0.000 description 10
- 230000005540 biological transmission Effects 0.000 description 8
- 238000004891 communication Methods 0.000 description 8
- 238000012937 correction Methods 0.000 description 7
- 238000013461 design Methods 0.000 description 7
- 230000006870 function Effects 0.000 description 4
- 238000005457 optimization Methods 0.000 description 4
- 229910003460 diamond Inorganic materials 0.000 description 3
- 239000010432 diamond Substances 0.000 description 3
- 230000000694 effects Effects 0.000 description 3
- 238000012546 transfer Methods 0.000 description 3
- 230000009471 action Effects 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 2
- 230000006835 compression Effects 0.000 description 2
- 238000007906 compression Methods 0.000 description 2
- 239000004973 liquid crystal related substance Substances 0.000 description 2
- 238000011426 transformation method Methods 0.000 description 2
- 238000013473 artificial intelligence Methods 0.000 description 1
- 230000002457 bidirectional effect Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 239000004065 semiconductor Substances 0.000 description 1
- 230000001131 transforming effect Effects 0.000 description 1
Images
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/117—Filters, e.g. for pre-processing or post-processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/146—Data rate or code amount at the encoder output
- H04N19/147—Data rate or code amount at the encoder output according to rate distortion criteria
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
- H04N19/82—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/172—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
Definitions
- the present application relates to the field of computer and communication technologies, and in particular, to a loop filtering method, a video encoding and decoding method, device, medium and electronic equipment.
- Cross-Component Adaptive Loop Filtering (CC-ALF) is a Wiener filter whose filter coefficients can be adaptively generated and used according to the characteristics of the video content (such as game videos, online conference videos, etc.).
- the filter coefficients need to be adaptively selected through classification, but classification accuracy is often low, resulting in poor cross-component adaptive loop filtering performance.
- a loop filtering method, video encoding and decoding method, device, medium and electronic equipment are provided.
- a loop filtering method, which is executed by a video encoding device or a video decoding device, the method comprising: acquiring block classification information of a luminance component in a video image frame when adaptive loop filtering is performed; determining, according to the block classification information of the luminance component when adaptive loop filtering is performed, block classification information of a chrominance component in the video image frame when cross-component adaptive loop filtering is performed; and selecting, according to the block classification information of the chrominance component when cross-component adaptive loop filtering is performed, corresponding filter coefficients to perform cross-component adaptive loop filtering processing on the chrominance component.
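- The three steps recited above (acquire the luma block classification, derive the chroma classification from it, then select coefficients and filter) can be sketched as follows. This is an illustrative sketch only: the class-derivation rule, the coefficient table and the 3-tap filter are hypothetical placeholders, not the exact mapping claimed in this application.

```python
# Illustrative sketch of the claimed loop filtering steps; data structures
# are hypothetical, the real derivation rule and coefficients are
# standard/encoder specific.

def derive_chroma_block_class(luma_block_class):
    # Assumption: the chroma block class for CC-ALF is taken directly
    # from the co-located luma ALF block class.
    return luma_block_class

def select_filter_coefficients(block_class, coefficient_sets):
    # Pick the coefficient set associated with this class.
    return coefficient_sets[block_class]

def cc_alf_filter(luma_samples, coefficients):
    # Linear filtering of co-located luma samples yields a chroma correction.
    return sum(c * s for c, s in zip(coefficients, luma_samples))

# Hypothetical example: 2 classes, 3-tap filters.
coefficient_sets = {0: [0.25, 0.5, 0.25], 1: [-0.1, 1.2, -0.1]}
luma_class = 1                                   # from ALF block classification
chroma_class = derive_chroma_block_class(luma_class)
coeffs = select_filter_coefficients(chroma_class, coefficient_sets)
correction = cc_alf_filter([100, 102, 98], coeffs)
```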
- a video decoding method, which is executed by a video decoding device, the method comprising: acquiring block classification information of a luminance component in a video image frame when adaptive loop filtering is performed; determining, according to the block classification information of the luminance component when adaptive loop filtering is performed, block classification information of a chrominance component in the video image frame when cross-component adaptive loop filtering is performed; selecting, according to the block classification information of the chrominance component when cross-component adaptive loop filtering is performed, corresponding filter coefficients to perform cross-component adaptive loop filtering processing on the chrominance component; and decoding a video code stream according to the adaptive loop filtering processing result of the luminance component and the cross-component adaptive loop filtering processing result of the chrominance component.
- a video coding method, which is executed by a video coding device, the method comprising: acquiring block classification information of a luminance component in a video image frame when adaptive loop filtering is performed; determining, according to the block classification information of the luminance component when adaptive loop filtering is performed, block classification information of a chrominance component in the video image frame when cross-component adaptive loop filtering is performed; selecting, according to the block classification information of the chrominance component when cross-component adaptive loop filtering is performed, corresponding filter coefficients to perform cross-component adaptive loop filtering processing on the chrominance component; and encoding the video image frame according to the adaptive loop filtering processing result of the luminance component and the cross-component adaptive loop filtering processing result of the chrominance component, to obtain a video code stream.
- a loop filtering device including: an acquisition unit configured to acquire block classification information of a luminance component in a video image frame when adaptive loop filtering is performed
- the determination unit is configured to determine, according to the block classification information of the luminance component when adaptive loop filtering is performed, block classification information of the chrominance component in the video image frame when cross-component adaptive loop filtering is performed;
- the filtering unit is configured to select, according to the block classification information of the chrominance component when cross-component adaptive loop filtering is performed, corresponding filter coefficients to perform cross-component adaptive loop filtering processing on the chrominance component.
- a video decoding device including: an acquisition unit configured to acquire block classification information of a luminance component in a video image frame when adaptive loop filtering is performed; a determination unit configured to determine, according to the block classification information of the luminance component when adaptive loop filtering is performed, block classification information of the chrominance component in the video image frame when cross-component adaptive loop filtering is performed; a filtering unit configured to select, according to the block classification information of the chrominance component when cross-component adaptive loop filtering is performed, corresponding filter coefficients to perform cross-component adaptive loop filtering processing on the chrominance component; and a first processing unit configured to decode a video code stream according to the adaptive loop filtering processing result of the luminance component and the cross-component adaptive loop filtering processing result of the chrominance component.
- a video encoding device including: an acquisition unit configured to acquire block classification information of a luminance component in a video image frame when adaptive loop filtering is performed; a determination unit configured to determine, according to the block classification information of the luminance component when adaptive loop filtering is performed, block classification information of the chrominance component in the video image frame when cross-component adaptive loop filtering is performed; a filtering unit configured to select, according to the block classification information of the chrominance component when cross-component adaptive loop filtering is performed, corresponding filter coefficients to perform cross-component adaptive loop filtering processing on the chrominance component; and a second processing unit configured to encode the video image frame according to the adaptive loop filtering processing result of the luminance component and the cross-component adaptive loop filtering processing result of the chrominance component, to obtain a video code stream.
- an electronic device includes a memory and a processor, where the memory stores computer-readable instructions which, when executed by the processor, implement the methods provided in the various optional embodiments above.
- a computer storage medium has computer-readable instructions stored thereon.
- when the computer-readable instructions are executed by a processor, the methods provided in the various optional embodiments above are implemented.
- a computer program product includes computer-readable instructions which, when executed by a processor, implement the methods provided in the various optional embodiments above.
- FIG. 1 shows a schematic diagram of an exemplary system architecture to which the technical solutions of the embodiments of the present application can be applied;
- FIG. 2 shows a schematic diagram of a placement manner of a video encoding device and a video decoding device in a streaming system
- Figure 3 shows a basic flow diagram of a video encoder
- Fig. 4 shows a schematic diagram of the overall structure of VVC and its loop filtering process
- Figure 5 shows a schematic diagram of the flow of CC-ALF and the relationship with ALF
- Fig. 6 shows a schematic diagram of a diamond filter
- FIG. 7 shows a flow chart of a loop filtering method according to an embodiment of the present application.
- FIG. 8 shows a flowchart of a video encoding method according to an embodiment of the present application
- FIG. 9 shows a flowchart of a video decoding method according to an embodiment of the present application.
- FIG. 10 shows a block diagram of a loop filtering device according to an embodiment of the present application.
- Fig. 11 shows a block diagram of a video decoding device according to an embodiment of the present application.
- FIG. 12 shows a block diagram of a video encoding device according to an embodiment of the present application.
- Fig. 13 shows a schematic structural diagram of a computer system suitable for implementing the electronic device of the embodiment of the present application.
- the "plurality" mentioned herein refers to two or more.
- "And/or" describes an association relationship between associated objects, indicating that three kinds of relationships may exist. For example, A and/or B may indicate: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the associated objects are in an "or" relationship.
- Fig. 1 shows a schematic diagram of an exemplary system architecture to which the technical solutions of the embodiments of the present application can be applied.
- the system architecture 100 includes a plurality of end devices that can communicate with each other through, for example, a network 150 .
- the system architecture 100 may include a first terminal device 110 and a second terminal device 120 interconnected by a network 150 .
- the first terminal device 110 and the second terminal device 120 perform unidirectional data transmission.
- the first terminal device 110 can encode video data (such as a video picture stream collected by the terminal device 110) for transmission to the second terminal device 120 through the network 150; the encoded video data is transmitted in the form of one or more coded video streams; the second terminal device 120 can receive the coded video data from the network 150, decode it to recover the video data, and display video pictures according to the recovered video data.
- the system architecture 100 may include a third terminal device 130 and a fourth terminal device 140 performing bidirectional transmission of encoded video data, such as may occur during a video conference.
- each of the third terminal device 130 and the fourth terminal device 140 can encode video data (such as a video picture stream captured by that terminal device) for transmission through the network 150 to the other of the third terminal device 130 and the fourth terminal device 140.
- each of the third terminal device 130 and the fourth terminal device 140 can also receive the encoded video data transmitted by the other of the third terminal device 130 and the fourth terminal device 140, can decode the encoded video data to recover the video data, and can display video pictures on an accessible display device according to the recovered video data.
- the first terminal device 110, the second terminal device 120, the third terminal device 130 and the fourth terminal device 140 can be servers or terminals. A server can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, CDN (Content Delivery Network), big data and artificial intelligence platforms.
- the terminal may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, a smart voice interaction device, a smart home appliance, a vehicle terminal, an aircraft, etc., but is not limited thereto.
- Network 150 represents any number of networks, including for example wired and/or wireless communication networks, that communicate encoded video data between first terminal device 110, second terminal device 120, third terminal device 130, and fourth terminal device 140.
- Communication network 150 may exchange data in circuit-switched and/or packet-switched channels.
- the network may include a telecommunications network, a local area network, a wide area network and/or the Internet.
- the architecture and topology of network 150 may be immaterial to the operation of the present disclosure.
- Fig. 2 shows how a video encoding device and a video decoding device are placed in a streaming environment.
- the subject matter disclosed herein is equally applicable to other video-enabled applications including, for example, videoconferencing, digital TV (television), storing compressed video on digital media including CDs, DVDs, memory sticks, and the like.
- the streaming transmission system may include an acquisition subsystem 213 , the acquisition subsystem 213 may include a video source 201 such as a digital camera, and the video source creates an uncompressed video picture stream 202 .
- the video picture stream 202 includes samples captured by the digital camera. The video picture stream 202 is depicted as a thick line to emphasize its high data volume compared to the encoded video data 204 (or encoded video code stream 204).
- the video picture stream 202 can be processed by the electronic device 220, which comprises a video encoding device 203 coupled to the video source 201.
- Video encoding device 203 may include hardware, software, or a combination thereof to implement aspects of the disclosed subject matter as described in more detail below.
- the encoded video data 204 (or encoded video code stream 204) is depicted as a thin line to emphasize its lower data volume compared to the video picture stream 202, and may be stored on the streaming server 205 for future use.
- One or more streaming client subsystems such as client subsystem 206 and client subsystem 208 in FIG. 2 , may access streaming server 205 to retrieve copies 207 and 209 of encoded video data 204 .
- Client subsystem 206 may include, for example, video decoding device 210 in electronic device 230 .
- a video decoding device 210 decodes an incoming copy 207 of encoded video data and produces an output video picture stream 211 that may be presented on a display 212, such as a display screen, or another presentation device.
- encoded video data 204, video data 207, and video data 209 may be encoded according to certain video encoding/compression standards.
- the electronic device 220 and the electronic device 230 may include other components not shown in the figure.
- the electronic device 220 may include a video decoding device
- the electronic device 230 may also include a video encoding device.
- after a video frame image is input, it is divided into several non-overlapping processing units according to a block size, and a similar compression operation is performed on each processing unit.
- This processing unit is called a CTU (Coding Tree Unit), or an LCU (Largest Coding Unit).
- the CTU can be further divided into finer divisions to obtain one or more basic coding units CU (Coding Unit, coding unit).
- CU is the most basic element in a coding process.
- Predictive Coding includes intra-frame prediction and inter-frame prediction. After the original video signal is predicted from a selected reconstructed video signal, a residual video signal is obtained. The encoder needs to decide which predictive coding mode to choose for the current CU and inform the decoder. Intra-frame prediction means that the predicted signal comes from an area that has already been coded and reconstructed within the same image; inter-frame prediction means that the predicted signal comes from another image (called a reference image) that has already been coded and is different from the current image.
- Transform & Quantization: after the residual video signal undergoes a transform operation such as the DFT (Discrete Fourier Transform) or DCT (Discrete Cosine Transform), the signal is converted into the transform domain, where it is represented as transform coefficients.
- the transform coefficients then undergo a lossy quantization operation that discards certain information, so that the quantized signal is more amenable to compressed representation.
- since there may be more than one transform method to choose from, the encoder also needs to select one of them for the current CU and inform the decoder.
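- As a concrete illustration of the transform step, the following is a plain 1-D DCT-II. Practical codecs use integer approximations of such transforms (and select among several per CU), but the energy-compaction behaviour illustrated here is the same.

```python
import math

def dct2(block):
    # 1-D DCT-II of a residual row; real codecs use integer approximations
    # of this transform rather than floating point.
    N = len(block)
    out = []
    for k in range(N):
        s = sum(block[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
        scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(scale * s)
    return out

residual = [10, 10, 10, 10]      # a flat residual row
coeffs = dct2(residual)          # energy compacts into the DC coefficient
```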
- the fineness of quantization is usually determined by the quantization parameter (Quantization Parameter, QP for short).
- a larger QP value means that coefficients with a larger range of values will be quantized to the same output, which usually brings greater distortion and a lower code rate; conversely, a smaller QP value means that coefficients with a smaller range of values will be quantized to the same output, which usually brings smaller distortion and corresponds to a higher code rate.
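- The relationship between QP and distortion described above can be illustrated with a toy uniform scalar quantizer. The 2**((QP - 4) / 6) step-size relation used here is the approximate HEVC/VVC convention, included only for illustration.

```python
def quantize(coeff, qp):
    # Uniform scalar quantization; the step size roughly doubles every
    # 6 QP (the 2**((qp - 4) / 6) relation is the HEVC/VVC approximation).
    step = 2 ** ((qp - 4) / 6)
    return round(coeff / step)

def dequantize(level, qp):
    step = 2 ** ((qp - 4) / 6)
    return level * step

# A larger QP maps a wider range of coefficient values to the same level.
low_qp = [quantize(c, 10) for c in (6, 8, 10)]   # step = 2: still distinct
high_qp = [quantize(c, 40) for c in (6, 8, 10)]  # step = 64: all collapse to 0
```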
- Entropy Coding or Statistical Coding: the quantized transform-domain signal is statistically compressed and coded according to the frequency of occurrence of each value, and finally a binary (0 or 1) compressed code stream is output. At the same time, other information generated by encoding, such as the selected encoding mode and motion vector data, also needs to be entropy coded to reduce the bit rate.
- Statistical coding is a lossless coding method that can effectively reduce the bit rate required to express the same signal. Common statistical coding methods include Variable Length Coding (VLC for short) and Context-based Adaptive Binary Arithmetic Coding (CABAC for short).
- the context-based binary arithmetic coding (CABAC) process mainly includes three steps: binarization, context modeling and binary arithmetic coding.
- the binary data can be encoded in the regular coding mode or the bypass coding mode (Bypass Coding Mode).
- the bypass coding mode does not need to assign a specific probability model to each binary bit; the input bin values are directly encoded with a simple bypass coder to speed up the entire encoding and decoding process.
- different syntax elements are not completely independent, and the same syntax element itself has a certain memory.
- therefore, using previously coded symbols as conditions for coding can further improve coding performance compared with independent coding or memoryless coding.
- the coded symbol information used as a condition is called the context.
- the binary bits of a syntax element enter the context modeler sequentially, and the encoder assigns an appropriate probability model to each input bit according to the values of previously encoded syntax elements or bits.
- this process models the context. The context model corresponding to a syntax element can be located through ctxIdxInc (context index increment) and ctxIdxStart (context start index). After the bin value and its assigned probability model are sent to the binary arithmetic coder for encoding, the context model is updated according to the bin value; this is the adaptive process in encoding.
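- A toy model of the context-adaptation idea described above (not the actual CABAC probability-state machine, which uses finite probability states and a renormalizing arithmetic coder) might look like this; the indices and update rate are illustrative assumptions.

```python
class ContextModel:
    # Toy adaptive probability model; real CABAC uses table-driven finite
    # probability states, not this floating-point update.
    def __init__(self, p_one=0.5):
        self.p_one = p_one   # estimated probability of a bin value of 1

    def update(self, bin_val, rate=0.05):
        # Move the probability toward the observed bin value (adaptation).
        target = 1.0 if bin_val else 0.0
        self.p_one += rate * (target - self.p_one)

# A context model is located by a start index plus an increment.
ctx_idx_start, ctx_idx_inc = 10, 2
models = {i: ContextModel() for i in range(10, 16)}
ctx = models[ctx_idx_start + ctx_idx_inc]

for b in [1, 1, 1, 1]:       # a run of 1s raises the model's p_one
    ctx.update(b)
```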
- Loop Filtering: the transformed and quantized signal is reconstructed through inverse quantization, inverse transformation and prediction compensation operations. Due to the influence of quantization, some information of the reconstructed image differs from the original image, that is, the reconstructed image exhibits distortion (Distortion). Therefore, a filtering operation can be performed on the reconstructed image to effectively reduce the degree of distortion introduced by quantization. Since these filtered reconstructed images are used as references for subsequently encoded images to predict future image signals, the above filtering operation is also called in-loop filtering, that is, a filtering operation inside the encoding loop.
- FIG. 3 shows a basic flow chart of a video encoder, in which intra prediction is taken as an example for illustration.
- the original image signal s k [x,y] and the predicted image signal Do the difference operation to get the residual signal u k [x, y], and the residual signal u k [x, y] is transformed and quantized to obtain the quantized coefficient.
- the quantized coefficient is obtained by entropy coding to obtain the coded bit
- the reconstructed residual signal u' k [x,y] is obtained through inverse quantization and inverse transformation processing, and the predicted image signal Superimposed with the reconstructed residual signal u' k [x,y] to generate a reconstructed image signal reconstruct image signal
- On one hand, the reconstructed image signal is input to the intra-frame mode decision module and the intra-frame prediction module for intra prediction processing; on the other hand, it is filtered by loop filtering, and the filtered image signal s'_k[x,y] is output.
- The filtered image signal s'_k[x,y] can be used as a reference image of the next frame for motion estimation and motion-compensated prediction. Then, based on the motion-compensated prediction result s'_r[x+m_x, y+m_y] and the intra prediction result, the predicted image signal of the next frame is obtained, and the above process is repeated until encoding is complete.
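The residual, transform/quantization, and reconstruction steps above can be sketched in miniature as follows; the identity "transform" and the uniform quantization step `q_step` are illustrative assumptions, not a real codec transform:

```python
# Minimal sketch of the encode-side reconstruction loop for one row of pixels.
# The transform is taken as identity and quantization as uniform scalar
# quantization; both are assumptions for illustration only.

def encode_reconstruct(orig, pred, q_step=8):
    residual = [o - p for o, p in zip(orig, pred)]          # u_k[x,y]
    levels = [round(r / q_step) for r in residual]          # quantized coeffs
    recon_res = [lv * q_step for lv in levels]              # u'_k[x,y]
    recon = [p + rr for p, rr in zip(pred, recon_res)]      # reconstructed signal
    distortion = sum((o - r) ** 2 for o, r in zip(orig, recon))
    return recon, distortion

orig = [100, 103, 110, 95]
pred = [98, 104, 100, 96]
recon, dist = encode_reconstruct(orig, pred)
```

The nonzero `dist` shows why the reconstructed image differs from the original, which is exactly the distortion that loop filtering then tries to reduce.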
- Loop filtering is one of the core modules of video encoding, and can effectively remove various encoding distortions. In VVC, loop filtering consists of:
- DF (Deblocking Filter, deblocking filter)
- SAO (Sample Adaptive Offset, sample adaptive offset)
- ALF (Adaptive Loop Filter, adaptive loop filter)
- CC-ALF (Cross-Component Adaptive Loop Filter, cross-component adaptive loop filtering)
- the overall structure of VVC and the loop filtering process are shown in Figure 4, and its overall flow is similar to the encoder flow shown in Figure 3.
- ALF and CC-ALF are both Wiener filters: the filter can adaptively determine its coefficients according to the content of different video components, thereby reducing the mean square error (Mean Square Error, MSE) between the reconstructed component and the original component.
- The input of ALF is the reconstructed pixel values filtered by DF and SAO, and its output is the enhanced reconstructed luminance image and reconstructed chrominance image; the input of CC-ALF is the luminance component filtered by DF and SAO but not by ALF.
- The output of CC-ALF is the correction value of the corresponding chrominance component; that is, CC-ALF only acts on the chrominance components.
- CC-ALF exploits the correlation between the luminance component and the chrominance components to obtain the correction value of a chrominance component through linear filtering of the luminance component.
- The correction value and the ALF filtering result of the chrominance component are added together as the final reconstructed chrominance component.
- the Wiener filter can generate different filter coefficients for video content with different characteristics. Therefore, ALF and CC-ALF need to classify video content and use corresponding filters for each category of video content.
- the ALF of the luma component supports 25 different types of filters
- the ALF of each chroma component supports up to 8 different types of filters
- the CC-ALF of each chroma component supports up to 4 different types of filters.
- ALF adaptively uses different filters at the sub-block level (4×4 luminance blocks in VVC); that is, each 4×4 luminance pixel block needs to be classified into one of the 25 categories.
- The classification index C of a luminance pixel block is jointly obtained from its directionality feature (Directionality) D and its quantized activity feature (Activity) Â, specifically C = 5D + Â, as shown in the following formula (1):
- The vertical, horizontal, diagonal and anti-diagonal gradients of each pixel are calculated with a one-dimensional Laplacian, as in formulas (2) to (5):
- V_k,l = |2R(k,l) - R(k,l-1) - R(k,l+1)|
- H_k,l = |2R(k,l) - R(k-1,l) - R(k+1,l)|
- D0_k,l = |2R(k,l) - R(k-1,l-1) - R(k+1,l+1)|
- D1_k,l = |2R(k,l) - R(k-1,l+1) - R(k+1,l-1)|
- where R(k,l) represents the reconstructed pixel value at position (k,l) before ALF filtering, and R(k-1,l), R(k+1,l), R(k,l-1), R(k,l+1), R(k-1,l-1), R(k+1,l+1), R(k-1,l+1) and R(k+1,l-1) represent the reconstructed pixel values at the corresponding neighboring positions before ALF filtering.
- Based on these per-pixel values, the overall horizontal gradient g_h, vertical gradient g_v, diagonal gradient g_d0 and anti-diagonal gradient g_d1 of each 4×4 pixel block are calculated by summing the per-pixel gradients over a window surrounding the block, as shown in formula (6) and formula (7), where i and j represent the pixel coordinates of the upper-left corner of the 4×4 pixel block.
- The maximum and minimum of the horizontal and vertical gradients are: g_h,v^max = max(g_h, g_v) and g_h,v^min = min(g_h, g_v), as in formula (8).
- The maximum and minimum of the diagonal and anti-diagonal gradients are: g_d0,d1^max = max(g_d0, g_d1) and g_d0,d1^min = min(g_d0, g_d1), as in formula (9).
- The directionality feature D is derived by comparing the maximum and minimum gradient values in the four directions obtained by formulas (8) and (9); the specific process is given in formula (10), in which t_1 and t_2 are preset threshold constants.
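A simplified sketch of the classification described above (per-pixel gradients, directionality D, activity Â, and C = 5D + Â). The window (the bare 4×4 block, without the surrounding ring and 2:1 subsampling used in VVC), the threshold values t1 and t2, and the activity quantizer are illustrative assumptions:

```python
# Sketch of ALF-style 4x4 block classification. Window, thresholds, and the
# activity quantizer are simplified assumptions, not the exact VVC derivation.

def classify_4x4(R, i, j, t1=2, t2=4.5):
    """Class index C = 5*D + A_hat for the 4x4 block with top-left (i, j)."""
    gh = gv = gd0 = gd1 = 0
    for k in range(i, i + 4):
        for l in range(j, j + 4):
            c = 2 * R[k][l]
            gv  += abs(c - R[k - 1][l] - R[k + 1][l])          # vertical Laplacian
            gh  += abs(c - R[k][l - 1] - R[k][l + 1])          # horizontal
            gd0 += abs(c - R[k - 1][l - 1] - R[k + 1][l + 1])  # diagonal
            gd1 += abs(c - R[k - 1][l + 1] - R[k + 1][l - 1])  # anti-diagonal
    hv_max, hv_min = max(gh, gv), min(gh, gv)
    d_max, d_min = max(gd0, gd1), min(gd0, gd1)
    if hv_max <= t1 * hv_min and d_max <= t1 * d_min:
        D = 0                                  # no dominant direction
    elif hv_max * d_min > d_max * hv_min:      # cross-multiply to avoid division
        D = 2 if hv_max > t2 * hv_min else 1   # strong/weak horizontal-vertical
    else:
        D = 4 if d_max > t2 * d_min else 3     # strong/weak diagonal
    A_hat = min(4, (gh + gv) // 64)            # illustrative activity quantizer
    return 5 * D + A_hat

flat = [[128] * 8 for _ in range(8)]                                   # class 0
stripe = [[0 if l < 4 else 200 for l in range(10)] for _ in range(10)] # vertical edge
```

A flat block has zero gradients in all directions (D = 0, Â = 0), while the striped block has a strongly dominant horizontal gradient and high activity, landing in a different class.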
- Before filtering, the filter coefficients and the corresponding limit values are geometrically transformed according to the gradient values of the current block, following the rules shown in Table 1 below, which include no transformation (No transformation), diagonal transformation (Diagonal), vertical flip (Vertical flip) and rotation (Rotation).
- Applying a geometric transformation to the filter coefficients is equivalent to applying the geometric transformation to the pixel values in the filter support and then filtering with the untransformed coefficients.
- The purpose of the geometric transformation is to align the directionality of different block contents as much as possible, thereby reducing the number of classes ALF requires so that different pixels can share the same filter coefficients. Using geometric transformation effectively expands the classification from 25 to 100 categories without increasing the number of ALF filters, which improves adaptability.
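The four transformations can be sketched as index remappings of a coefficient grid; a plain 3×3 square grid stands in here for the real diamond-shaped ALF layout, purely for illustration:

```python
# Sketch of the four ALF geometric transformations on a square coefficient
# grid. Real ALF applies these to a diamond-shaped layout; a 3x3 square grid
# is an illustrative stand-in.

def no_transform(f):
    return [row[:] for row in f]

def diagonal(f):
    """Transpose: f_D(k, l) = f(l, k)."""
    n = len(f)
    return [[f[l][k] for l in range(n)] for k in range(n)]

def vertical_flip(f):
    """Reverse the row order of the grid."""
    return [row[:] for row in reversed(f)]

def rotation(f):
    """Rotate the grid: f_R(k, l) = f(K - 1 - l, k)."""
    n = len(f)
    return [[f[n - 1 - l][k] for l in range(n)] for k in range(n)]

coeffs = [[1, 2, 3],
          [4, 5, 6],
          [7, 8, 9]]
```

Because each transformation is a pure permutation of positions, one stored filter can serve four differently oriented block contents.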
- CC-ALF generates a corresponding correction value for each chrominance component through linear filtering of the luminance component.
- the process and the relationship with ALF are shown in Figure 5.
- The luminance component R_Y after SAO filtering is input to the ALF filter for luma filtering, and the filtered luminance value Y is output; at the same time, the SAO-filtered luminance value R_Y is input to the CC-ALF filters of the two chroma components Cb and Cr respectively, obtaining the correction values ΔR_Cb and ΔR_Cr of the two chrominance components.
- The values of the two chroma components after SAO filtering are input to the ALF filter for chroma filtering; the ALF filtering results of the two chroma components are then superimposed with the correction values ΔR_Cb and ΔR_Cr respectively, and the final values of the chrominance components Cb and Cr are output.
- ΔR_i(x,y) = Σ_{(x_0,y_0)∈S_i} R_Y(x_C+x_0, y_C+y_0) · c_i(x_0,y_0), where:
- ΔR_i(x,y) represents the correction value (that is, the offset value) of chroma component i at sample position (x,y);
- S_i represents the filtering area supported by the CC-ALF filter on the luminance component;
- c_i(x_0,y_0) represents the filter coefficient corresponding to chroma component i;
- R_Y represents the luminance component;
- (x_C, y_C) represents the luma position derived from the chroma position (x,y);
- (x_0, y_0) represents the offset position relative to the luminance component, obtained by transforming the chroma coordinates according to the luma-chroma scaling relationship of the video sequence.
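A sketch of the correction-value computation defined above for one chroma sample; the diamond support offsets, the coefficient values, and the 4:2:0 scaling (x_C = 2x, y_C = 2y) are illustrative assumptions:

```python
# Sketch of a CC-ALF correction value for one chroma sample:
#   dR_i(x, y) = sum over (x0, y0) in S_i of c_i(x0, y0) * R_Y(xC+x0, yC+y0)
# The support offsets and 4:2:0 scaling below are illustrative assumptions.

SUPPORT = [(0, -1),
           (-1, 0), (0, 0), (1, 0),
           (-1, 1), (0, 1), (1, 1),
           (0, 2)]   # (x0, y0) offsets of an assumed diamond-shaped support

def cc_alf_correction(luma, x, y, coeffs, sx=2, sy=2):
    """(xC, yC) = (sx * x, sy * y) assumes 4:2:0 luma/chroma scaling."""
    xc, yc = sx * x, sy * y
    return sum(c * luma[yc + y0][xc + x0]
               for (x0, y0), c in zip(SUPPORT, coeffs))

# Coefficients that sum to zero, so a flat luma region yields no correction.
coeffs = [2, -4, 8, -4, 2, -8, 2, 2]
flat_luma = [[10] * 8 for _ in range(8)]
ramp_luma = [[r] * 8 for r in range(8)]   # luma value = row index
```

On a flat luma region the zero-sum taps cancel and the correction is zero; on a vertical ramp the taps pick up the local luma gradient, which is exactly the cross-component information CC-ALF transfers to chroma.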
- CC-ALF uses a 3×4 diamond filter, as shown in Figure 6.
- The filter coefficients of CC-ALF are not constrained to be symmetric, so the filter can flexibly adapt to various relative relationships between the luma component and the chrominance components.
- CC-ALF imposes the following two restrictions on its filter coefficients: 1. The sum of all CC-ALF coefficients is constrained to 0; for the 3×4 diamond filter, therefore, only 7 filter coefficients need to be calculated and transmitted, and the filter coefficient at the center position can be inferred automatically at the decoding end from this condition. 2. Each transmitted filter coefficient must be zero or a power of 2, representable with at most 6 bits, so the absolute value of a CC-ALF filter coefficient belongs to {0, 2, 4, 8, 16, 32, 64}.
- This design allows shift operations to replace multiplication operations, reducing the number of multiplications.
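Both constraints can be sketched directly: the center tap is inferred from the zero-sum condition, and each power-of-two multiplication becomes a shift (the coefficient values below are illustrative):

```python
# Sketch: power-of-two CC-ALF coefficients let shifts replace multiplies,
# and the untransmitted center coefficient makes all 8 taps sum to zero.
# Coefficient values are illustrative.

ALLOWED_ABS = {0, 2, 4, 8, 16, 32, 64}   # at most 6 bits per coefficient

def mul_by_coeff(sample, coeff):
    """Multiply by a power-of-two coefficient using a shift."""
    if coeff == 0:
        return 0
    shift = abs(coeff).bit_length() - 1   # 2 -> 1, 8 -> 3, 64 -> 6
    prod = sample << shift
    return prod if coeff > 0 else -prod

def infer_center(transmitted):
    """The center coefficient is chosen so the full set sums to zero."""
    assert all(abs(c) in ALLOWED_ABS for c in transmitted)
    return -sum(transmitted)

transmitted = [2, -4, 8, -4, 2, -8, 2]   # the 7 coded taps (illustrative)
center = infer_center(transmitted)
```

The decoder can reproduce `center` without it ever being transmitted, and every tap multiplication in the filtering loop reduces to a shift and a sign.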
- Unlike the ALF of the luminance component, which supports sub-block-level classification and adaptive selection, CC-ALF supports only CTU-level classification and adaptive selection: for each chroma component, all chroma pixels in a CTU belong to the same category and use the same filter.
- ALF filter coefficients are carried in an APS (Adaptation Parameter Set, adaptive parameter set). An APS can contain up to 25 sets of luma filter coefficients with their corresponding limit value indexes, up to 8 sets of chroma filter coefficients with corresponding limit value indexes shared by the two chroma components, and up to 4 sets of CC-ALF filter coefficients per chroma component.
- To reduce bit overhead, the filter coefficients of different categories can be merged (Merge), so that multiple categories share one set of filter coefficients; the encoding end determines which categories' coefficients can be merged through rate-distortion optimization (Rate-Distortion Optimization, RDO), and the APS index used by the current slice is signaled in the slice header (Slice Header).
- CC-ALF supports CTU-level adaptation. For the case of multiple filters, at the CTU level, it will adaptively select whether to use CC-ALF and the index of the filter used for each chrominance component.
- The technical solution of the embodiments of the present application can increase the number of block classification categories of the chrominance component when performing CC-ALF, so as to improve the accuracy of content classification in CC-ALF, further improve the adaptive capability and filtering performance of CC-ALF, and thereby help improve coding and decoding efficiency.
- Fig. 7 shows a flowchart of a loop filtering method according to an embodiment of the present application, and the loop filtering method may be executed by a video encoding device or a video decoding device.
- the loop filtering method includes at least step S710 to step S730, which are described in detail as follows:
- In step S710, the block classification information of the luminance component in the video image frame when adaptive loop filtering is performed is acquired.
- The block classification information refers to information used to indicate sub-block-level classification results; it may be identification information corresponding to a classification category, specifically a classification index.
- The block classification process of the luminance component under ALF can compute the specific classification indexes with reference to the foregoing formulas (1) to (10), after which the block classification information of the luminance component under ALF is determined.
- In step S720, according to the block classification information of the luminance component under adaptive loop filtering, the block classification information of the chrominance component in the video image frame under cross-component adaptive loop filtering is determined.
- The sub-block classification result of the luma component under ALF may be used as the classification result of the same-size block of the chrominance component under CC-ALF.
- For example, if the classification result of the luminance component of a sub-block indicates that the sub-block belongs to the third category when ALF is performed, then the chrominance component of that sub-block also belongs to the third category when CC-ALF is performed; that is, the sub-block's category under ALF for the luma component is shared as its category under CC-ALF.
- In this way, using the sub-block classification result of the luma component under ALF as the classification result of the same-size block of the chrominance component under CC-ALF increases the number of block classification categories of the chrominance component when performing CC-ALF, improving the accuracy of content classification in CC-ALF, which in turn improves the adaptive ability and filtering performance of CC-ALF and is conducive to improving encoding and decoding efficiency.
- Alternatively, the sub-block classification result and the corresponding geometric transformation type of the luma component under ALF can be used as the classification result and geometric transformation type of the same-size block of the chroma component under CC-ALF.
- The technical solution of this embodiment likewise increases the number of block classification categories of the chrominance component under CC-ALF and improves the accuracy of content classification in CC-ALF, which in turn improves the adaptive ability and filtering performance of CC-ALF and is beneficial to encoding and decoding efficiency.
- In step S730, according to the block classification information of the chrominance component under cross-component adaptive loop filtering, the corresponding filter coefficients are selected to perform cross-component adaptive loop filtering on the chrominance component.
- During this process, the corresponding filter coefficients can be determined for each block classification category of the chroma component under CC-ALF; then, according to the block classification information of the chroma component under CC-ALF, the corresponding filter coefficients are selected to perform CC-ALF processing on the chrominance components.
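The core of steps S710 to S730, reusing the luma ALF sub-block class as the chroma CC-ALF class and selecting coefficients by that class, can be sketched as follows; the class map and filter table are illustrative placeholders:

```python
# Sketch of the embodiment's core idea: the sub-block class computed for the
# luma component under ALF is reused as the CC-ALF class of the co-located
# chroma block and selects that class's filter. All data here is placeholder.

def ccalf_filter_for_block(luma_classes, ccalf_filters, blk_row, blk_col):
    # S710: block classification info of the luma component under ALF.
    c = luma_classes[blk_row][blk_col]
    # S720: the co-located chroma block shares the same class index.
    chroma_class = c
    # S730: select the CC-ALF filter coefficients for that class.
    return chroma_class, ccalf_filters[chroma_class]

# 25 luma ALF classes -> 25 CC-ALF filters (placeholder coefficient sets).
ccalf_filters = {cls: [cls] * 8 for cls in range(25)}
luma_classes = [[0, 3],
                [17, 24]]   # per-4x4-block ALF classes (illustrative)

chroma_class, coeffs = ccalf_filter_for_block(luma_classes, ccalf_filters, 1, 0)
```

Because the class index is derived from luma data both the encoder and the decoder already have, no extra classification signaling is needed for the chroma blocks.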
- The merge results of the various types of filters for the chrominance component under CC-ALF may also be determined.
- For example, the merge result of the various types of filters for the luma component under ALF can be used as the merge result of the various types of filters for the chroma component under CC-ALF.
- When merging the various types of filters, at least two ALF filters can be merged by traversing every possible combination and calculating the corresponding rate-distortion cost, and the combination with the least rate-distortion cost is taken as the merge result of the various filters under ALF.
- Alternatively, based on the rate-distortion cost of filter merging in the ALF process of the luma component and the rate-distortion cost of filter merging in the CC-ALF process of the chroma component, the merge results of the various filters for the luma component under ALF and for the chrominance component under CC-ALF are determined jointly.
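The merge-by-RDO search described above can be sketched as a greedy loop that repeatedly merges the pair of filter classes whose merge increases the rate-distortion cost least; the cost model (squared coefficient mismatch plus a per-filter rate term) is a toy stand-in for real RD estimation:

```python
# Sketch of filter merging by rate-distortion optimization: similar classes
# are merged to share one filter. The cost model below is a toy stand-in.
import itertools

def merge_cost(groups, coeffs, lam=10.0):
    dist = 0.0
    for g in groups:
        n = len(coeffs[g[0]])
        # Shared filter of a merged group: mean of its members' coefficients.
        mean = [sum(coeffs[c][i] for c in g) / len(g) for i in range(n)]
        for c in g:
            dist += sum((coeffs[c][i] - mean[i]) ** 2 for i in range(n))
    return dist + lam * len(groups)   # rate: one coded filter per group

def greedy_merge(coeffs, lam=10.0):
    groups = [(c,) for c in sorted(coeffs)]
    cost = merge_cost(groups, coeffs, lam)
    while len(groups) > 1:
        best = None
        for a, b in itertools.combinations(range(len(groups)), 2):
            trial = [g for i, g in enumerate(groups) if i not in (a, b)]
            trial.append(groups[a] + groups[b])
            c = merge_cost(trial, coeffs, lam)
            if best is None or c < best[0]:
                best = (c, trial)
        if best[0] >= cost:           # no merge reduces the RD cost: stop
            break
        cost, groups = best
    return groups, cost

# Classes 0 and 1 have identical filters; class 2 is very different.
coeffs = {0: [4, 4], 1: [4, 4], 2: [40, -40]}
groups, cost = greedy_merge(coeffs)
```

With these placeholder coefficients the search merges classes 0 and 1 (their shared filter costs nothing in distortion and saves one filter's rate) and keeps class 2 separate.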
- the technical solution of this embodiment can jointly optimize the ALF of the luminance component and the CC-ALF of the chrominance component, so as to simultaneously determine the combination results of the ALF of the luminance component and the CC-ALF of the chrominance component for various types of filters.
- The number of filters available to the chrominance component under CC-ALF may be determined according to the number of filters determined for the luma component under ALF; for example, the number of filters determined for the luma component under ALF may be used directly as the number of filters available to the chrominance component under CC-ALF.
- loop filtering method in the embodiment shown in FIG. 7 can be applied in the encoding process of the video encoding end, and can also be applied in the decoding process of the video decoding end.
- Fig. 8 shows a flow chart of a video encoding method according to an embodiment of the present application, and the video encoding method may be executed by a video encoding device.
- the video encoding method includes at least steps S810 to S840, which are described in detail as follows:
- In step S810, the block classification information of the luminance component in the video image frame when ALF is performed is obtained.
- In step S820, according to the block classification information of the luma component under ALF, the block classification information of the chrominance component in the video image frame under CC-ALF is determined.
- In step S830, according to the block classification information of the chroma component under CC-ALF, the corresponding filter coefficients are selected to perform CC-ALF processing on the chroma component.
- In step S840, the video image frame is encoded according to the ALF processing result of the luminance component and the CC-ALF processing result of the chrominance component to obtain a video code stream.
- In the encoding process, the corresponding filter coefficients can be determined according to the block classification information of the luminance component under ALF, and ALF processing is performed on the luminance component with those coefficients to obtain the ALF processing result, so that the video image frame can be encoded according to the ALF processing result and the CC-ALF processing result to obtain the video code stream.
- In this way, the number of block classification categories of the chrominance component under CC-ALF can be increased to improve the accuracy of content classification in CC-ALF, thereby improving its adaptive ability and filtering performance, which is beneficial to coding efficiency.
- the block classification strategy in the video coding method shown in FIG. 8 (that is, according to the block classification information of the luminance component when performing ALF, determine the block classification information of the chrominance component when performing CC-ALF) It can be used alone, and can also be used together with other classification strategies (such as classification strategies in the related art). The two situations are described below:
- In one embodiment of the present application, the encoder can encode, in the video code stream, the first flag bit corresponding to the current slice of the video image frame.
- The value of the first flag bit is used to indicate whether the chrominance component of the target block in the current slice adopts the CC-ALF processing method proposed in the embodiment of the present application (that is, the CC-ALF processing method using the block classification strategy in FIG. 8).
- In other words, the first flag bit corresponding to the current slice can directly indicate whether the chrominance component of the target block in the current slice adopts the CC-ALF processing method proposed in the embodiment of the present application.
- If the value of the first flag bit is the first value (for example, 1), it indicates that the chrominance components of some target blocks in the current slice adopt the CC-ALF processing method proposed in the embodiment of the present application, or that the chrominance components of all target blocks in the current slice adopt it.
- If the value of the first flag bit is the second value (for example, 0), it indicates that the chrominance components of all target blocks in the current slice do not use the CC-ALF processing method proposed in the embodiment of the present application.
- Since the first flag bit is a slice-level flag bit, if its value indicates that the chrominance components of all target blocks in the current slice adopt the CC-ALF processing method proposed in the embodiment of the present application, or indicates that the chrominance components of all target blocks in the current slice do not adopt it, then no block-level flag bit needs to be encoded.
- Otherwise, the encoder can encode, in the video code stream, the second flag bit corresponding to each target block contained in the current slice; the value of the second flag bit is used to indicate whether the chrominance component of the corresponding target block adopts the CC-ALF processing method proposed in the embodiment of the present application. That is to say, in this embodiment, on the basis of the slice-level flag bit, block-level flag bits (that is, second flag bits) can further indicate which target blocks' chrominance components need to adopt the CC-ALF processing method proposed in the embodiment of the present application.
- Optionally, a second flag bit can be set for each of the two chroma components of the target block, and the value of each second flag bit is used to indicate whether the corresponding chrominance component in the target block adopts the CC-ALF processing method proposed in the embodiment of the present application.
- Certainly, the two chrominance components (Cr and Cb) of the target block may also correspond to the same second flag bit, and the value of that flag bit is used to indicate whether the two chrominance components adopt the CC-ALF processing method proposed in the embodiment of this application.
- Similarly, a first flag bit can be set for each of the two chroma components of the current slice, and the value of each first flag bit is used to indicate whether the corresponding chrominance component in the current slice adopts the CC-ALF processing method proposed in the embodiment of the present application.
- Certainly, the two chroma components of the current slice may also correspond to the same first flag bit, and the value of that flag bit is used to indicate whether the two chroma components in the current slice adopt the CC-ALF processing method proposed in the embodiment of this application.
- Likewise, the block-level flag bits can set a second flag bit for each of the two chroma components, or one shared second flag bit. If the block-level flag bit is one shared second flag bit for the two chroma components, then the slice-level flag bit only needs to be one shared first flag bit for the two chroma components.
- the encoding end may determine whether the chrominance components of each target block adopt the CC-ALF processing method proposed in the embodiment of the present application through rate-distortion optimization.
- Specifically, the encoder can calculate the first rate-distortion cost of the chrominance component of each target block when the CC-ALF processing proposed in the embodiment of the present application is used, and the second rate-distortion cost when CC-ALF processing is not performed; then, according to the first rate-distortion cost and the second rate-distortion cost, it determines whether the chroma component of each target block adopts the CC-ALF processing method proposed in the embodiment of the present application.
- If the first rate-distortion cost corresponding to the chrominance component of a certain target block is smaller than the second rate-distortion cost, the chrominance component of that target block adopts the CC-ALF processing method proposed in the embodiment of the present application; if the first rate-distortion cost is greater than the second rate-distortion cost, the chrominance component of that target block does not adopt it.
- In this way, the rate-distortion cost can be reduced as much as possible while coding efficiency is ensured.
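The flag decision reduces to a per-block cost comparison; the numeric costs below are illustrative placeholders for the first and second rate-distortion costs:

```python
# Sketch of the per-block decision: compare the RD cost with the proposed
# CC-ALF processing (first cost) against the cost without it (second cost),
# and derive the block-level and slice-level flags. Costs are placeholders.

def decide_flags(first_costs, second_costs):
    """Per-block flag: 1 where the proposed CC-ALF is the cheaper choice."""
    return [1 if c1 < c2 else 0 for c1, c2 in zip(first_costs, second_costs)]

def slice_flag(block_flags):
    """Slice-level flag: 1 if any target block uses the proposed processing."""
    return 1 if any(block_flags) else 0

first_costs  = [12.0, 30.5, 7.2]   # RD cost per block with the proposed CC-ALF
second_costs = [15.0, 22.0, 9.9]   # RD cost per block without CC-ALF
flags = decide_flags(first_costs, second_costs)
```

When every block decides the same way, the slice-level flag alone suffices and the per-block flags need not be coded, which matches the signaling shortcut described above.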
- In another embodiment of the present application, the classification strategy may include both the block classification strategy shown in FIG. 8 (that is, determining the block classification information of the chrominance component under CC-ALF according to the block classification information of the luma component under ALF) and other classification strategies.
- When the block classification strategy shown in FIG. 8 and other classification strategies are used at the same time, a slice-level flag bit (that is, the third flag bit) needs to indicate whether the chrominance component of the target block in the current slice undergoes CC-ALF processing; if CC-ALF processing is performed, a flag bit of the classification strategy (that is, the fourth flag bit) needs to be encoded in the corresponding adaptive parameter set (referenced by the index of the adaptive parameter set), so as to clearly indicate whether the block classification strategy shown in FIG. 8 or another classification strategy is adopted.
- For example, if the value of the fourth flag bit is 1, the block classification strategy shown in FIG. 8 is adopted; if the value of the fourth flag bit is 0, another classification strategy is adopted.
- In addition, the fifth flag bit corresponding to each target block contained in the current slice can be encoded in the video code stream, and the value of the fifth flag bit is used to indicate whether the chrominance component of the corresponding target block is subjected to CC-ALF processing.
- the value of the fifth flag bit corresponding to a certain target block is 1, it means that the chrominance component of the target block needs to be processed by CC-ALF; if the value of the fifth flag bit corresponding to a certain target block is 0, it means that the chrominance component of the target block does not need to be processed by CC-ALF.
- If the slice-level flag bit (that is, the third flag bit) indicates that the chrominance components of all target blocks in the current slice do not need CC-ALF processing, or indicates that the chrominance components of all target blocks require CC-ALF processing, then there is no need to introduce the block-level flag bit (that is, the fifth flag bit).
- If the value of the slice-level flag bit indicates that some target blocks in the slice need to be processed by CC-ALF, and the flag bit of the classification strategy (that is, the fourth flag bit) indicates the block classification strategy shown in FIG. 8, then for each target block whose block-level flag bit (that is, the fifth flag bit) indicates that CC-ALF processing is required, CC-ALF processing is performed with the block classification strategy shown in FIG. 8.
- A technical solution similar to that of the foregoing embodiment may be adopted; that is, a third flag bit is set for each of the two chroma components of the current slice, or the same third flag bit is set for the two chroma components of the current slice.
- a fifth flag bit may be set for each of the two chrominance components of the target block, or the same fifth flag bit may be set for the two chrominance components of the target block.
- a fourth flag bit may be set in the APS for the two chroma components, or the same fourth flag bit may be set in the APS for the two chroma components.
- In one embodiment of the present application, the encoding end can determine, through rate-distortion optimization, whether the chrominance component of the current slice uses the block classification strategy shown in FIG. 8 or another classification strategy when performing CC-ALF processing.
- Specifically, the encoder can calculate the third rate-distortion cost of the chrominance components of all target blocks in the current slice when CC-ALF processing uses the block classification strategy shown in FIG. 8, and the fourth rate-distortion cost when other classification strategies are used; then, according to the third rate-distortion cost and the fourth rate-distortion cost, the classification strategy used by the chrominance component of the current slice during CC-ALF processing is determined.
- If the third rate-distortion cost corresponding to a certain slice is less than the fourth rate-distortion cost, the chrominance component of that slice adopts the block classification strategy shown in FIG. 8 when performing CC-ALF processing; if the third rate-distortion cost is greater than the fourth rate-distortion cost, another classification strategy is adopted for the CC-ALF processing of the chrominance components of that slice.
- The size information of the target block in the foregoing embodiments may be preset by the encoding end and the decoding end, or may be determined by the encoding end and, after the target block size is determined, encoded in the sequence parameter set, picture parameter set, picture header or slice header of the video code stream. The target block may be a CTU, or a block smaller than a CTU.
- Fig. 9 shows a flowchart of a video decoding method according to an embodiment of the present application, and the video decoding method can be executed by a video decoding device.
- the video decoding method includes at least step S910 to step S940, described in detail as follows:
- In step S910, the block classification information of the luminance component in the video image frame when ALF is performed is obtained.
- In step S920, according to the block classification information of the luma component under ALF, the block classification information of the chrominance component in the video image frame under CC-ALF is determined.
- In step S930, according to the block classification information of the chroma component under CC-ALF, the corresponding filter coefficients are selected to perform CC-ALF processing on the chroma component.
- In step S940, the video code stream is decoded according to the ALF processing result of the luma component and the CC-ALF processing result of the chrominance component.
- In the decoding process, the corresponding filter coefficients can be determined according to the block classification information of the luminance component under ALF, and ALF processing is performed on the luminance component with those coefficients to obtain the ALF processing result, so that the video code stream can be decoded according to the ALF processing result and the CC-ALF processing result.
- In this way, the number of block classification categories of the chrominance component under CC-ALF can be increased to improve the accuracy of content classification in CC-ALF, thereby improving its adaptive ability and filtering performance, which is conducive to improving encoding and decoding efficiency.
- the block classification strategy in the video decoding method shown in FIG. 9 (that is, according to the block classification information of the luminance component when performing ALF, determine the block classification information of the chrominance component when performing CC-ALF) It can be used alone, and can also be used together with other classification strategies (such as classification strategies in the related art). The two situations are described below:
- In one embodiment of the present application, the decoding end can obtain, by decoding the video code stream, the first flag bit corresponding to the current slice; the value of the first flag bit is used to indicate whether the chrominance component of the target block in the current slice adopts the CC-ALF processing method proposed in the embodiment of the present application (that is, the CC-ALF processing method using the block classification strategy in FIG. 9).
- In other words, the first flag bit corresponding to the current slice can directly indicate whether the chrominance component of the target block in the current slice adopts the CC-ALF processing method proposed in the embodiment of the present application.
- If the value of the first flag bit is the first value (for example, 1), it indicates that the chrominance components of some target blocks in the current slice adopt the CC-ALF processing method proposed in the embodiment of the present application, or that the chrominance components of all target blocks in the current slice adopt it.
- If the value of the first flag bit is the second value (for example, 0), it indicates that the chrominance components of all target blocks in the current slice do not use the CC-ALF processing method proposed in the embodiment of the present application.
- Since the first flag bit is a slice-level flag bit, if its value indicates that the chrominance components of all target blocks in the current slice adopt the CC-ALF processing method proposed in the embodiment of the present application, or indicates that the chrominance components of all target blocks in the current slice do not adopt it, then no block-level flag bit needs to be decoded (and the encoding end does not need to encode one).
- If the value of the first flag bit indicates that the chrominance components of some target blocks in the current slice adopt the proposed processing method, the decoder needs to decode the video code stream to obtain the second flag bit corresponding to each target block contained in the current slice; the value of the second flag bit is used to indicate whether the chrominance component of the corresponding target block adopts the CC-ALF processing method proposed in the embodiment of the present application. That is, in this embodiment, on the basis of the slice-level flag bit, block-level flag bits (that is, the second flag bits) can be further decoded to indicate which target blocks' chrominance components need to use the proposed CC-ALF processing.
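The two-level signalling above (a slice-level first flag that either settles the whole slice or defers to per-block second flags) can be sketched as follows. This is a minimal illustrative sketch, not actual codec syntax; the function name and flag representation are assumptions.

```python
# Hypothetical sketch of the slice-level / block-level flag logic.
def blocks_using_proposed_cc_alf(slice_flag, block_flags):
    """Return per-block decisions for the proposed CC-ALF mode.

    slice_flag: 1 -> some (or all) target blocks use the proposed mode,
                so block-level second flags are decoded and consulted;
                0 -> no target block in the slice uses it, and no
                block-level flags need to be decoded at all.
    block_flags: list of per-block second flags (one per target block).
    """
    if slice_flag == 0:
        # Second value: the slice-level flag alone decides for every block.
        return [0] * len(block_flags)
    # First value: the block-level flags refine the decision per block.
    return list(block_flags)

# Slice flag 0 makes block-level flags irrelevant:
assert blocks_using_proposed_cc_alf(0, [1, 1, 0]) == [0, 0, 0]
# Slice flag 1 defers to the decoded block-level flags:
assert blocks_using_proposed_cc_alf(1, [1, 0, 1]) == [1, 0, 1]
```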
- A second flag bit can be set for each of the two chrominance components of the target block, and the value of each second flag bit is used to indicate whether the corresponding chrominance component of the target block adopts the CC-ALF processing method proposed in the embodiment of the present application.
- Alternatively, the two chrominance components (Cr and Cb) of the target block may correspond to the same second flag bit, whose value is used to indicate whether both chrominance components adopt the CC-ALF processing method proposed in the embodiment of this application.
- Similarly, a first flag bit can be set for each of the two chrominance components of the current slice, and the value of each first flag bit is used to indicate whether the corresponding chrominance component in the current slice adopts the CC-ALF processing method proposed in the embodiment of this application.
- Alternatively, the two chrominance components of the current slice may correspond to the same first flag bit, whose value is used to indicate whether both chrominance components in the current slice adopt the CC-ALF processing method proposed in the embodiment of this application.
- Correspondingly, the block level can also set a second flag bit for each of the two chrominance components. If the block level sets a single second flag bit shared by the two chrominance components, then the slice level likewise only needs to set a single first flag bit shared by the two chrominance components.
- The decoding end also needs to decode the adaptive parameter set from the video code stream, which contains the filter coefficients of CC-ALF. If the value of the first flag bit indicates that the chrominance component of at least one target block in the current slice adopts the CC-ALF processing proposed in the embodiment of this application, the index of the adaptive parameter set corresponding to the current slice can be decoded from the video code stream, and the corresponding filter coefficients are then selected from the adaptive parameter set referenced by that index to filter the chrominance component of the corresponding target block.
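The APS lookup described above can be sketched as below. This is an illustrative sketch only; the data layout (a list of dicts standing in for decoded adaptation parameter sets) and names are assumptions, not actual bitstream syntax.

```python
# Illustrative sketch: selecting CC-ALF filter coefficients through the
# adaptation parameter set (APS) index signalled for the slice.
def select_cc_alf_coeffs(aps_list, aps_index, class_index):
    """Pick the filter for a chroma block from the APS the slice references."""
    aps = aps_list[aps_index]                  # APS referenced by the slice
    return aps["cc_alf_filters"][class_index]  # filter chosen by block class

aps_list = [
    {"cc_alf_filters": [[1, -2, 1], [0, 4, 0]]},  # hypothetical APS 0
    {"cc_alf_filters": [[2, -4, 2]]},             # hypothetical APS 1
]
assert select_cc_alf_coeffs(aps_list, 0, 1) == [0, 4, 0]
assert select_cc_alf_coeffs(aps_list, 1, 0) == [2, -4, 2]
```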
- If the block classification strategy shown in FIG. 9 and other classification strategies are used at the same time, then when decoding, the adaptive parameter set and the third flag bit corresponding to the current slice need to be decoded from the video code stream; the value of the third flag bit is used to indicate whether the chrominance component of the target block in the current slice undergoes CC-ALF processing. If the chrominance component of the target block in the current slice needs CC-ALF processing, the index of the adaptive parameter set corresponding to the current slice is decoded from the video code stream, and the fourth flag bit corresponding to the chrominance component of the current slice is obtained from the adaptive parameter set referenced by that index; the value of the fourth flag bit is used to indicate the classification strategy adopted by the chrominance component of the current slice when performing CC-ALF processing, where the classification strategy includes the block classification strategy shown in FIG. 9 (that is, determining the block classification information of the chrominance component when performing CC-ALF according to the block classification information of the luminance component when performing ALF) and other classification strategies.
- That is, if the block classification strategy shown in FIG. 9 and other classification strategies are used at the same time, the slice-level flag bit (i.e., the third flag bit) indicates whether the chrominance component of the target block in the current slice undergoes CC-ALF processing, which may be the CC-ALF processing method proposed in the embodiment of this application or another CC-ALF processing method. If CC-ALF processing is performed, the flag bit of the classification strategy (that is, the fourth flag bit) is decoded from the corresponding adaptive parameter set (referenced by the index of the adaptive parameter set) to clearly indicate whether to adopt the block classification strategy shown in FIG. 9 or another classification strategy.
- If the value of the fourth flag bit corresponding to the chrominance component of the current slice is the first value (such as 1), it indicates that the classification strategy adopted by the chrominance component of the current slice when performing CC-ALF processing is the classification strategy shown in FIG. 9; if the value is the second value (such as 0), it indicates that another classification strategy is adopted.
- The fifth flag bit corresponding to each target block contained in the current slice can also be decoded from the video code stream, and the value of the fifth flag bit is used to indicate whether the chrominance component of the corresponding target block undergoes CC-ALF processing.
- If the value of the fifth flag bit corresponding to a certain target block is 1, the chrominance component of that target block needs CC-ALF processing; if the value is 0, the chrominance component of that target block does not need CC-ALF processing.
- If the slice-level flag bit (that is, the third flag bit) indicates that the chrominance components of all target blocks in the current slice do not need CC-ALF processing, or that the chrominance components of all target blocks do need it, then there is no need to introduce the block-level flag bit (that is, the fifth flag bit).
- If the value of the slice-level flag bit indicates that some target blocks in the slice need CC-ALF processing, the flag bit of the classification strategy (that is, the fourth flag bit) indicates the block classification strategy shown in FIG. 9, and the block-level flag bit indicates that CC-ALF processing is required, then the target block is processed by CC-ALF with the block classification strategy shown in FIG. 9.
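The combined decision chain for the coexisting-strategies case, with the slice-level third flag, the APS-level fourth flag, and the block-level fifth flag, can be sketched as below. All names are illustrative assumptions; the source text only fixes the meaning of the three flag values.

```python
# Hedged sketch of the third/fourth/fifth flag decision for one target block.
FIG9_STRATEGY = "luma-derived"   # block classification reused from luma ALF
OTHER_STRATEGY = "legacy"        # classification method of the related art

def strategy_for_block(third_flag, fourth_flag, fifth_flag):
    """Return the CC-ALF strategy for a block, or None if not filtered."""
    if third_flag == 0 or fifth_flag == 0:
        return None  # slice-level or block-level flag disables CC-ALF here
    # The APS-level fourth flag selects between the two classification
    # strategies for all CC-ALF-filtered blocks of this chroma component.
    return FIG9_STRATEGY if fourth_flag == 1 else OTHER_STRATEGY

assert strategy_for_block(0, 1, 1) is None            # slice disables CC-ALF
assert strategy_for_block(1, 1, 0) is None            # this block opts out
assert strategy_for_block(1, 1, 1) == "luma-derived"  # FIG. 9 strategy
assert strategy_for_block(1, 0, 1) == "legacy"        # other strategy
```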
- In addition, a technical solution similar to that of the foregoing embodiment may be adopted, that is, a third flag bit is set for each of the two chrominance components of the current slice, or the same third flag bit is set for both chrominance components of the current slice.
- Similarly, a fifth flag bit can be set for each of the two chrominance components of the target block, or the same fifth flag bit can be set for both chrominance components of the target block.
- The size information of the target block in the foregoing embodiments may be preset by the encoding end and the decoding end; or it may be determined by the encoding end, which, after determining the size information of the target block, encodes it in the sequence parameter set, picture parameter set, picture header, or slice header of the video code stream, so that the decoding end can decode the corresponding size information from the code stream.
- the target block may be a CTU, or a block smaller than the CTU.
- In summary, the embodiment of the present application proposes determining the sub-block-level classification result of the CC-ALF of the chrominance component according to the sub-block-level classification of the ALF of the luma component, and proposes a method of adaptively selecting filters at different levels according to the classification result, which is described in detail below:
- Specifically, the embodiment of the present application proposes setting the number of CC-ALF filters supported by each chrominance component to the number of ALF filters supported by the luma component, and determining the CC-ALF classification of the chrominance component according to the ALF classification process of the luma component; these techniques can be used alone or in combination.
- the classification result of the CC-ALF of the chrominance component may be determined according to the classification of the ALF of the luma component at the subblock level.
- Specifically, the classification result of the ALF of the luminance component at the sub-block level can be used as the classification result of the CC-ALF of the chrominance component on blocks of the same size.
- the classification process of the ALF of the luminance component at the subblock level is as described in the foregoing formula (1) to formula (10).
- the classification result and the corresponding geometric transformation type of the ALF of the luminance component at the subblock level may also be used as the classification result and geometric transformation type of the CC-ALF of the chrominance component on blocks of the same size.
- the classification process of the ALF of the luminance component at the subblock level is as described in the foregoing formula (1) to formula (10).
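The inheritance of the luma sub-block classification (and, in the second variant, the geometric transformation type) by the co-located chroma blocks can be sketched as below. This is a minimal sketch assuming 4:2:0 chroma subsampling and a luma class map indexed per 4x4 luma sub-block; the maps and names are illustrative, not from the source.

```python
# Minimal sketch: a chroma block inherits the ALF class (and geometric
# transform type) of the co-located luma 4x4 sub-block.
def chroma_cc_alf_class(luma_class_map, luma_transpose_map, x_c, y_c):
    """Inherit class and transform for the chroma sample at (x_c, y_c).

    With 4:2:0 subsampling, chroma sample (x_c, y_c) co-locates with luma
    sample (2*x_c, 2*y_c); integer division by 4 locates the 4x4 sub-block.
    """
    bx, by = (2 * x_c) // 4, (2 * y_c) // 4
    return luma_class_map[by][bx], luma_transpose_map[by][bx]

luma_classes = [[3, 7], [12, 0]]  # classes of four 4x4 luma sub-blocks
luma_transp = [[0, 2], [1, 0]]    # geometric transform type per sub-block
# Chroma sample (2, 2) maps to luma (4, 4) -> sub-block (1, 1):
assert chroma_cc_alf_class(luma_classes, luma_transp, 2, 2) == (0, 0)
# Chroma sample (0, 3) maps to luma (0, 6) -> sub-block (0, 1):
assert chroma_cc_alf_class(luma_classes, luma_transp, 0, 3) == (12, 1)
```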
- the merging process of the CC-ALF of the chrominance component for various types of filters may be determined according to the merging process (Merge) of the ALF of the luminance component for various types of filters.
- Specifically, the merging result of the ALF of the luma component for various filters may be used as the merging result of the CC-ALF of the chrominance component for various filters.
- the ALF of the luma component and the CC-ALF of the two chrominance components may be jointly optimized to simultaneously determine the combined results of the ALF of the luma component and the CC-ALF of the chrominance component for various types of filters.
- the number of filters available for the CC-ALF of each chrominance component may be determined according to the final number of filters of the ALF of the luma component.
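The inheritance of the luma filter merge result and filter count described in the bullets above can be sketched as below. The class-to-filter mapping is a stand-in for the merge table; names are illustrative assumptions.

```python
# Sketch: chroma CC-ALF reuses the luma ALF merge result, so neither the
# class-to-filter mapping nor the filter count needs separate transmission.
def inherit_cc_alf_filters(luma_class_to_filter):
    """Reuse the luma ALF merge table for each chroma component's CC-ALF."""
    num_filters = len(set(luma_class_to_filter.values()))
    # Each chroma component supports the same number of filters as luma ALF
    # and reuses the identical class-to-filter merge mapping.
    return {"cb": dict(luma_class_to_filter),
            "cr": dict(luma_class_to_filter)}, num_filters

luma_merge = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2}  # 5 classes merged to 3 filters
chroma_maps, n = inherit_cc_alf_filters(luma_merge)
assert n == 3
assert chroma_maps["cb"][3] == 1 and chroma_maps["cr"][4] == 2
```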
- In addition, the embodiment of the present application proposes a technical solution for adaptively selecting filter types and transmitting filter-related parameters at different levels based on the aforementioned CC-ALF classification method for chrominance components. Specifically, there are embodiments that use the classification method proposed in the embodiment of the present application alone, and embodiments that use it together with the existing classification method in the related CC-ALF technology.
- In an embodiment where the classification method proposed in the embodiment of the present application is used alone and the CC-ALF selection decision is made separately for the two chrominance components: the CC-ALF selection decision and parameter transmission can be performed separately for the two chrominance components Cb and Cr.
- the CC-ALF related parameters that need to be transmitted by the encoding end are as follows:
- CC-ALF on/off flag at the CTU level. For example, if the flag bit is 1, the corresponding chrominance component samples in the current CTU are filtered using CC-ALF (since the classification method proposed in the embodiment of this application is used alone, whenever CC-ALF filtering is indicated, the proposed classification method is adopted); if the flag bit is 0, the corresponding chrominance component in the current CTU is not filtered with CC-ALF.
- CC-ALF flag at the slice level. For example, if the flag bit is 1, the chrominance component corresponding to at least one CTU in the current slice uses CC-ALF (again, since the proposed classification method is used alone, whenever CC-ALF filtering is indicated, the proposed classification method is adopted); if the flag bit is 0, the chrominance components corresponding to all CTUs in the current slice do not use CC-ALF.
- If the slice-level flag bit of whether to use CC-ALF is 1, it may also indicate that the chrominance components corresponding to all CTUs in the current slice use CC-ALF.
- If the slice-level CC-ALF flag indicates that the chrominance components corresponding to all CTUs in the current slice do not use CC-ALF, or that they all use CC-ALF, then the encoding end does not need to encode the CTU-level CC-ALF flag, and the decoding end does not need to decode it.
- If the slice-level CC-ALF flag bit is 1, the coefficients of each filter in the CC-ALF filter bank corresponding to the relevant chrominance component need to be transmitted.
- Other CC-ALF related control parameters, such as the number of filters in the filter bank and the filter merge index, do not require additional transmission and can be inferred from the parameters corresponding to the ALF of the luminance component.
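The encoder-side parameter set for this per-component case (slice flag, CTU flags, filter coefficients transmitted; filter count and merge indices inferred from luma ALF) can be sketched as below. The bitstream representation here is a plain dict and the names are hypothetical.

```python
# Illustrative sketch of which CC-ALF parameters are signalled for one
# chroma component when the proposed classification method is used alone.
def cc_alf_params_to_signal(ctu_on_flags, filter_coeffs):
    """Collect the parameters the encoder would transmit for one component."""
    slice_flag = 1 if any(ctu_on_flags) else 0
    params = {"slice_cc_alf_flag": slice_flag}
    if slice_flag:
        # CTU-level on/off flags and the filter coefficients are sent;
        # filter count and merge indices are inferred from luma ALF instead.
        params["ctu_cc_alf_flags"] = list(ctu_on_flags)
        params["cc_alf_coeffs"] = filter_coeffs
    return params

p = cc_alf_params_to_signal([0, 1, 0], [[1, -2, 1]])
assert p["slice_cc_alf_flag"] == 1 and p["ctu_cc_alf_flags"] == [0, 1, 0]
# No CTU uses CC-ALF: only the slice-level flag (0) is signalled.
assert cc_alf_params_to_signal([0, 0], [[1]]) == {"slice_cc_alf_flag": 0}
```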
- In an embodiment where the classification method proposed in the embodiment of the present application is used alone and the two chrominance components make the CC-ALF selection decision jointly: the CC-ALF selection decision and parameter transmission can be performed jointly for the two chrominance components Cb and Cr.
- the CC-ALF related parameters that need to be transmitted by the encoding end are as follows:
- CC-ALF on/off flag at the CTU level. For example, if the flag bit is 1, the two chrominance component samples in the current CTU are filtered using CC-ALF (since the proposed classification method is used alone, whenever CC-ALF filtering is indicated, the proposed classification method is adopted); if the flag bit is 0, the two chrominance components in the current CTU are not filtered with CC-ALF.
- CC-ALF flag at the slice level. For example, if the flag bit is 1, the two chrominance components of at least one CTU in the current slice use CC-ALF (again, since the proposed classification method is used alone, whenever CC-ALF filtering is indicated, the proposed classification method is adopted); if the flag bit is 0, the two chrominance components of all CTUs in the current slice do not use CC-ALF.
- If the slice-level flag bit of whether to use CC-ALF is 1, it may also indicate that both chrominance components of all CTUs in the current slice use CC-ALF.
- If the slice-level CC-ALF flag bit indicates that the two chrominance components of all CTUs in the current slice do not use CC-ALF, or that they all use CC-ALF, then the encoding end does not need to encode the CTU-level CC-ALF flag, and the decoding end does not need to decode it.
- If the slice-level CC-ALF flag bit is 1, the coefficients of each filter in the CC-ALF filter banks corresponding to the two chrominance components need to be transmitted.
- Other CC-ALF related control parameters, such as the number of filters in the filter bank and the filter merge index, do not require additional transmission and can be inferred from the parameters corresponding to the ALF of the luminance component.
- In an embodiment where the classification method proposed in the embodiment of the present application and the existing classification method in the related CC-ALF technology are both used in the CC-ALF of the chrominance components: a classification method is specified for each of the chrominance components Cb and Cr, so that all samples of the corresponding chrominance component in the current frame use the specified classification method, and the CC-ALF selection decision and related parameter transmission are performed separately for each component.
- the CC-ALF related parameters that need to be transmitted by the encoding end are as follows:
- CC-ALF on/off flag at the CTU level. For example, if the flag bit is 1, the corresponding chrominance component samples in the current CTU are filtered using CC-ALF (the classification method used in the filtering can be the one proposed in the embodiment of this application or the existing one in the related CC-ALF technology; which one is used is further indicated by the flag bit in the APS); if the flag bit is 0, the corresponding chrominance component in the current CTU is not filtered with CC-ALF.
- CC-ALF flag at the slice level. If the flag bit is 1, the chrominance component corresponding to at least one CTU in the current slice uses CC-ALF (similarly, the classification method used in the filtering can be the one proposed in the embodiment of this application or the existing one in the related CC-ALF technology, as further indicated by the flag bit in the APS); if the flag bit is 0, the chrominance components corresponding to all CTUs in the current slice do not use CC-ALF.
- If the slice-level flag bit of whether to use CC-ALF is 1, it may also indicate that the chrominance components corresponding to all CTUs in the current slice use CC-ALF.
- If the slice-level CC-ALF flag indicates that the chrominance components corresponding to all CTUs in the current slice do not use CC-ALF, or that they all use CC-ALF, then the encoding end does not need to encode the CTU-level CC-ALF flag, and the decoding end does not need to decode it.
- When CC-ALF needs to be used, an APS index must be indicated, and a classification method flag bit in the APS corresponding to that index indicates the classification method used by the corresponding chrominance component. For example, if the classification method flag is 0, the existing classification method in the related CC-ALF technology is used; if the classification method flag is 1, the classification method proposed in the embodiment of the present application is used.
- If the classification method flag in the APS is 0, the existing classification method in the related CC-ALF technology is used, and the relevant parameters at all levels can be transmitted according to the existing design in the related CC-ALF technology.
- If the classification method flag bit in the APS is 1, the classification method proposed in the embodiment of the present application is adopted, and the relevant parameters at all levels can be transmitted in the manner of the foregoing embodiments.
- the decision-making process for the selection of the CC-ALF classification method at the encoding end is as follows:
- A) Perform CC-ALF decisions on all CTUs in the current slice according to the existing classification method in the related CC-ALF technology, and obtain the optimal rate-distortion cost of the CC-ALF of the current slice.
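The encoder's selection between the two classification methods by rate-distortion (RD) cost, with step A evaluating the related-art method and a symmetric step evaluating the proposed method, can be sketched as below. The costs are stand-in numbers; a real encoder would derive each as D + lambda * R, and the comparison logic here is an assumption consistent with the described decision process.

```python
# Hedged sketch: pick the classification method with the lower RD cost and
# record the corresponding APS classification-method flag to signal.
def choose_classification(rd_cost_legacy, rd_cost_proposed):
    """Return the APS flag value and the winning rate-distortion cost."""
    if rd_cost_proposed < rd_cost_legacy:
        return {"aps_classification_flag": 1, "rd_cost": rd_cost_proposed}
    return {"aps_classification_flag": 0, "rd_cost": rd_cost_legacy}

# Proposed (luma-derived) classification wins when it is cheaper:
assert choose_classification(105.0, 98.5)["aps_classification_flag"] == 1
# Related-art classification wins otherwise:
assert choose_classification(90.0, 98.5)["aps_classification_flag"] == 0
```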
- Alternatively, the two chrominance components Cb and Cr can share one classification method flag in the APS, so that all samples of the two chrominance components in the current frame use the same classification method, with the CC-ALF selection decision and transmission of the relevant parameters made jointly for Cb and Cr.
- the CC-ALF related parameters that need to be transmitted by the encoding end are as follows:
- CC-ALF On/Off flag at the CTU level.
- If the flag bit is 1, the two chrominance components in the current CTU are filtered using CC-ALF (the classification method used in the filtering can be the one proposed in the embodiment of the present application or the existing one in the related CC-ALF technology, as further indicated by the flag bit in the APS); if the flag bit is 0, the two chrominance components in the current CTU are not filtered with CC-ALF.
- If the flag bit is 1, the two chrominance components of at least one CTU in the current slice use CC-ALF (similarly, the classification method used in the filtering can be the one proposed in the embodiment of this application or the existing one in the related CC-ALF technology, as further indicated by the flag bit in the APS); if the flag bit is 0, the two chrominance components of all CTUs in the current slice do not use CC-ALF.
- If the slice-level flag bit of whether to use CC-ALF is 1, it may also indicate that both chrominance components of all CTUs in the current slice use CC-ALF.
- If the slice-level CC-ALF flag bit indicates that the two chrominance components of all CTUs in the current slice do not use CC-ALF, or that they all use CC-ALF, then the encoding end does not need to encode the CTU-level CC-ALF flag, and the decoding end does not need to decode it.
- A classification method flag bit in the APS corresponding to the index indicates the classification method of the two chrominance components. For example, if the classification method flag is 1, all samples of the two chrominance components use the classification method proposed in the embodiment of the application; if the classification method flag is 0, all samples of the two chrominance components use the existing classification method in the related CC-ALF technology.
- If the classification method flag in the APS is 0, all samples of the two chrominance components use the existing classification method in the related CC-ALF technology, and the relevant parameters at all levels can then be transmitted according to the existing design in the related CC-ALF technology.
- If the classification method flag in the APS is 1, all samples of the two chrominance components use the classification method proposed in the embodiment of this application, and the relevant parameters at all levels can then be transmitted in the manner of the foregoing embodiments.
- the decision-making process for the selection of the CC-ALF classification method at the encoding end is as follows:
- A) Perform CC-ALF decisions for the two chrominance components Cb and Cr on all CTUs in the current slice according to the existing classification method in the related CC-ALF technology, and obtain the total optimal rate-distortion cost of the CC-ALF of the two chrominance components in the current slice.
- It should be noted that the CTU-level CC-ALF flag, the slice-level CC-ALF flag, and the classification method flag in the APS can each be set per chrominance component (that is, two CTU-level flag bits, two slice-level flag bits, and two APS-level flag bits, corresponding to the two chrominance components respectively), or only one flag bit can be set for the two chrominance components together (that is, one CTU-level flag bit, one slice-level flag bit, and one APS-level flag bit, shared by the two chrominance components).
- The foregoing embodiments take the CTU-level block as an example for illustration. In other embodiments of the present application, blocks of other sizes can also be processed; for example, in addition to the 128×128 CTU-level block, a 64×64 block or a block of another smaller size may also be used.
- the size of the block-level unit for block-level CC-ALF filter selection may be specified at both the encoding end and the decoding end, so that the size information of the block-level unit does not need to be transmitted.
- a flag bit needs to be transmitted for each block-level unit to indicate whether the corresponding block unit uses CC-ALF.
- Alternatively, the encoding end may determine the size of the block-level unit used for CC-ALF filtering and write it into the code stream, and the decoding end parses the code stream to obtain the size information of the corresponding block-level unit.
- Specifically, the size information can be written in the SPS (Sequence Parameter Set), PPS (Picture Parameter Set), picture header (Picture Header), or slice header (Slice Header).
- a flag bit needs to be transmitted for each block-level unit to indicate whether the corresponding block unit uses CC-ALF.
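The two alternatives for obtaining the block-level unit size (a preset size agreed by both ends, or a size written into a high-level syntax structure and parsed by the decoder) can be sketched as below. The dict stands in for decoded SPS/PPS/picture-header/slice-header structures; the field names are hypothetical.

```python
# Sketch: resolve the CC-ALF block-level unit size on the decoding end.
PRESET_BLOCK_SIZE = 128  # e.g. a CTU size preset at both encoder and decoder

def block_unit_size(high_level_syntax):
    """Use a transmitted size if present, else fall back to the preset."""
    for level in ("sps", "pps", "picture_header", "slice_header"):
        size = high_level_syntax.get(level, {}).get("cc_alf_block_size")
        if size is not None:
            return size  # size was written into the code stream and parsed
    return PRESET_BLOCK_SIZE  # nothing transmitted: both ends use the preset

assert block_unit_size({}) == 128                                # preset
assert block_unit_size({"pps": {"cc_alf_block_size": 64}}) == 64  # parsed
```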
- It should be understood that although the steps in the flow charts involved in the above embodiments are shown sequentially according to the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless otherwise specified herein, there is no strict order restriction on the execution of these steps, and they can be executed in other orders. Moreover, at least some of the steps in the flow charts may include multiple sub-steps or stages, which are not necessarily executed at the same time but may be executed at different times; their execution order is not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
- Fig. 10 shows a block diagram of a loop filtering device according to an embodiment of the present application, and the loop filtering device may be set in a video encoding device or a video decoding device.
- a loop filtering device 1000 includes: an acquiring unit 1002 , a determining unit 1004 and a filtering unit 1006 .
- The obtaining unit 1002 is configured to obtain the block classification information of the luminance component in the video image frame when performing adaptive loop filtering (ALF);
- The determining unit 1004 is configured to determine, according to the block classification information of the luminance component when performing ALF, the block classification information of the chrominance component in the video image frame when performing cross-component adaptive loop filtering (CC-ALF);
- The filtering unit 1006 is configured to select corresponding filter coefficients according to the block classification information of the chrominance component when performing CC-ALF, to perform CC-ALF processing on the chrominance component.
- In some embodiments, the determining unit 1004 is configured to: use the classification result of the luma component for sub-blocks when performing ALF as the classification result of the chrominance component for blocks of the same size when performing CC-ALF; or use the classification result and the corresponding geometric transformation type of the luma component for sub-blocks when performing ALF as the classification result and geometric transformation type of the chrominance component for blocks of the same size when performing CC-ALF.
- In some embodiments, the determining unit 1004 is further configured to: determine the merging result of the chrominance component for various filters when performing CC-ALF according to the merging result of the luminance component for various filters when performing ALF.
- In some embodiments, the determining unit 1004 is further configured to: determine the merging results of various filters when the luminance component performs ALF and the chrominance component performs CC-ALF, according to the rate-distortion cost of filter merging in the ALF process of the luma component and the rate-distortion cost of filter merging in the CC-ALF process of the chrominance component.
- In some embodiments, the determining unit 1004 is further configured to: determine the number of filters available to the chrominance component when performing CC-ALF according to the number of filters determined when the luminance component performs ALF.
- Fig. 11 shows a block diagram of a video decoding device according to an embodiment of the present application, and the video decoding device may be set in a video decoding device.
- a video decoding device 1100 includes: an acquiring unit 1102 , a determining unit 1104 , a filtering unit 1106 and a first processing unit 1108 .
- the obtaining unit 1102 is configured to obtain the block classification information of the luminance component in the video image frame when performing adaptive loop filtering (ALF);
- The determining unit 1104 is configured to determine, according to the block classification information of the luminance component when performing ALF, the block classification information of the chrominance component in the video image frame when performing cross-component adaptive loop filtering (CC-ALF);
- The filtering unit 1106 is configured to select corresponding filter coefficients according to the block classification information of the chrominance component when performing CC-ALF, to perform CC-ALF processing on the chrominance component;
- The first processing unit 1108 is configured to decode the video code stream according to the ALF processing result of the luminance component and the CC-ALF processing result of the chrominance component.
- In some embodiments, the video decoding device 1100 further includes a first decoding unit, configured to, before the block classification information of the chrominance component in the video image frame when performing CC-ALF is determined according to the block classification information of the luminance component when performing ALF, decode the video code stream to obtain the first flag bit corresponding to the current slice, where the value of the first flag bit is used to indicate whether the chrominance component of the target block in the current slice is processed by the CC-ALF.
- In some embodiments, if the value of the first flag bit is the first value, it indicates that the chrominance components of some target blocks in the current slice are processed by the CC-ALF, or that the chrominance components of all target blocks in the current slice are processed by the CC-ALF; if the value of the first flag bit is the second value, it indicates that the chrominance components of all target blocks in the current slice are not processed by the CC-ALF.
- In some embodiments, the first decoding unit is further configured to: if the value of the first flag bit indicates that the chrominance components of some target blocks in the current slice are processed by the CC-ALF, decode the video code stream to obtain the second flag bit corresponding to each target block contained in the current slice, where the value of the second flag bit is used to indicate whether the chrominance component of the corresponding target block is processed by the CC-ALF.
- the two chrominance components of the target block each correspond to a separate second flag bit, and the value of each second flag bit is used to indicate whether the corresponding chrominance component of the target block is processed by the CC-ALF; or
- the two chrominance components of the target block correspond to the same second flag bit, and the value of that second flag bit is used to indicate whether the two chrominance components of the target block are processed by the CC-ALF.
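The two-level signalling described above (a slice-level first flag gating per-block second flags) can be sketched as follows. This is an illustrative sketch, not actual codec syntax: the `read_bit` callable, the flag layout, and the extra all-blocks bit are assumptions for the example.

```python
def parse_ccalf_flags(read_bit, num_target_blocks):
    """Parse the slice-level first flag and, when it signals that only
    some target blocks use CC-ALF, one second flag per target block.
    `read_bit` is a hypothetical callable returning the next bit (0/1)."""
    first_flag = read_bit()           # 0: no target block in the slice uses CC-ALF
    if first_flag == 0:
        return [False] * num_target_blocks
    all_blocks_flag = read_bit()      # hypothetical: 1 means every target block uses CC-ALF
    if all_blocks_flag == 1:
        return [True] * num_target_blocks
    # only some blocks use CC-ALF: read one second flag per target block
    return [bool(read_bit()) for _ in range(num_target_blocks)]
```

With this layout, a slice whose first flag is 0 costs a single bit, which is why the per-slice flag is signalled before any per-block flags.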
- the first decoding unit is further configured to: decode the video code stream to obtain an adaptive parameter set; if the value of the first flag bit indicates that the chrominance component of at least one target block in the current slice is processed by the CC-ALF, decode from the video code stream the index of the adaptive parameter set corresponding to the current slice; and select, from the adaptive parameter set corresponding to that index, the corresponding filter coefficients to filter the chrominance components of the target blocks.
- the current slice has a separate first flag bit for each of the two chrominance components, and the value of each first flag bit is used to indicate whether the corresponding chrominance component is processed by the CC-ALF; or
- the current slice has a single first flag bit for the two chrominance components, and the value of that first flag bit is used to indicate whether the current slice applies the CC-ALF processing to the two chrominance components.
- the video decoding device 1100 further includes: a second decoding unit configured to: before determining, according to the block classification information of the luminance component when performing ALF, the block classification information of the chrominance component in the video image frame when performing CC-ALF, decode from the video code stream the adaptive parameter set and the third flag bit corresponding to the current slice, where the value of the third flag bit is used to indicate whether the chrominance components of the target blocks in the current slice are subjected to CC-ALF processing; if the value of the third flag bit indicates that the chrominance components of the target blocks in the current slice need CC-ALF processing, decode from the video code stream the index of the adaptive parameter set corresponding to the current slice; and obtain, from the adaptive parameter set corresponding to that index, the fourth flag bit corresponding to the chrominance component of the current slice, where the value of the fourth flag bit is used to indicate the classification strategy adopted by the chrominance component of the current slice when performing CC-ALF processing.
- if the value of the fourth flag bit corresponding to the chrominance component of the current slice is the first value, it indicates that the classification strategy adopted by the chrominance component of the current slice when performing CC-ALF processing is to determine the block classification information of the chrominance component when performing CC-ALF according to the block classification information of the luminance component when performing ALF; if the value of the fourth flag bit corresponding to the chrominance component of the current slice is the second value, it indicates that the classification strategy adopted by the chrominance component of the current slice when performing CC-ALF processing is the other classification strategy.
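The first classification strategy above, reusing the luma ALF classification for chroma, can be sketched as follows. The per-block class grids and the 2×2 co-location rule (as for 4:2:0 subsampling) are illustrative assumptions, not the patent's normative mapping:

```python
def derive_chroma_classification(luma_classes, chroma_blocks_w, chroma_blocks_h):
    """luma_classes: 2-D list of class indices for luma blocks.
    Each chroma block reuses the class of the co-located luma block
    (here the top-left block of the corresponding 2x2 luma-block region,
    an illustrative co-location rule for 4:2:0-style subsampling)."""
    return [
        [luma_classes[2 * y][2 * x] for x in range(chroma_blocks_w)]
        for y in range(chroma_blocks_h)
    ]
```

Because the luma classification is already computed for ALF, this strategy adds no extra gradient analysis for the chroma components.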
- the second decoding unit is further configured to: if the value of the third flag bit indicates that the chrominance components of some target blocks in the current slice need CC-ALF processing, decode from the video code stream the fifth flag bit corresponding to each target block included in the current slice, where the value of the fifth flag bit is used to indicate whether the chrominance component of the corresponding target block is subjected to CC-ALF processing.
- each of the two chrominance components of the current slice corresponds to a separate fourth flag bit, and the value of each fourth flag bit is used to indicate the classification strategy adopted by the corresponding chrominance component when performing CC-ALF processing; or
- the two chrominance components of the current slice correspond to the same fourth flag bit, and the value of that fourth flag bit is used to indicate the classification strategy adopted by the two chrominance components of the current slice when performing CC-ALF processing.
- the first processing unit 1108 is further configured to: determine the size information of the target block according to a preset size; or
- the size information of the target block is obtained by decoding from the sequence parameter set, picture parameter set, picture header or slice header of the video code stream.
- the target block includes: a coding tree unit or a block whose size is smaller than the coding tree unit.
- Fig. 12 shows a block diagram of a video encoding apparatus according to an embodiment of the present application; the video encoding apparatus may be provided in a video encoding device.
- the video encoding apparatus 1200 includes: an acquiring unit 1202, a determining unit 1204, a filtering unit 1206 and a second processing unit 1208.
- the obtaining unit 1202 is configured to obtain the block classification information of the luminance component in the video image frame when performing adaptive loop filtering (ALF);
- the determining unit 1204 is configured to determine, according to the block classification information of the luminance component when performing ALF, the block classification information of the chrominance component in the video image frame when performing cross-component adaptive loop filtering (CC-ALF);
- the filtering unit 1206 is configured to select, according to the block classification information of the chrominance component when performing CC-ALF, the corresponding filter coefficients to perform CC-ALF processing on the chrominance component;
- the second processing unit 1208 is configured to encode the video image frame according to the ALF processing result of the luminance component and the CC-ALF processing result of the chrominance component, to obtain the video code stream.
- the video encoding device 1200 further includes: a first encoding unit configured to encode, in the video code stream, the first flag bit corresponding to the current slice of the video image frame, where the value of the first flag bit is used to indicate whether the chrominance components of the target blocks in the current slice are processed by the CC-ALF.
- the first encoding unit is further configured to: if the value of the first flag bit indicates that the chrominance components of some target blocks in the current slice are processed by the CC-ALF, encode in the video code stream the second flag bit corresponding to each target block included in the current slice, where the value of the second flag bit is used to indicate whether the chrominance component of the corresponding target block is processed by the CC-ALF.
- the first encoding unit is further configured to: before encoding, in the video code stream, the second flag bits corresponding to the target blocks included in the current slice, calculate the first rate-distortion cost of the chrominance component of each target block when the CC-ALF processing is applied, where the CC-ALF filter for each target block is selected based on the block classification information of the luminance component when performing ALF; calculate the second rate-distortion cost of the chrominance component of each target block when CC-ALF processing is not applied; and determine, according to the first rate-distortion cost and the second rate-distortion cost, whether the chrominance component of each target block is processed by the CC-ALF.
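The encoder-side decision above compares, per target block, the Lagrangian rate-distortion cost D + λ·R with and without CC-ALF. A minimal sketch with placeholder distortion and rate values (a real encoder would measure these during trial filtering):

```python
def decide_ccalf_per_block(blocks, lam):
    """blocks: list of dicts holding distortion/rate measured with CC-ALF
    applied ('dist_on'/'rate_on') and skipped ('dist_off'/'rate_off') for
    the chroma component of each target block. Returns the per-block
    on/off decisions, i.e. the values of the second flag bits."""
    decisions = []
    for b in blocks:
        cost_on = b["dist_on"] + lam * b["rate_on"]     # first rate-distortion cost
        cost_off = b["dist_off"] + lam * b["rate_off"]  # second rate-distortion cost
        decisions.append(cost_on < cost_off)            # use CC-ALF only if cheaper
    return decisions
```

The Lagrange multiplier `lam` trades distortion against bit rate; larger values penalise the extra bits spent on CC-ALF filtering more heavily.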
- the video encoding device 1200 further includes: a second encoding unit configured to encode, in the video code stream, the third flag bit corresponding to the current slice, where the value of the third flag bit is used to indicate whether the chrominance components of the target blocks in the current slice are subjected to CC-ALF processing; if the chrominance components of the target blocks in the current slice need CC-ALF processing, encode the index of the corresponding adaptive parameter set in the video code stream; and encode, in the adaptive parameter set corresponding to that index, the fourth flag bit corresponding to the chrominance component of the current slice, where the value of the fourth flag bit is used to indicate the classification strategy adopted by the chrominance component of the current slice when performing CC-ALF processing, the classification strategy including: determining the block classification information of the chrominance component when performing CC-ALF according to the block classification information of the luminance component when performing ALF; or other classification strategies.
- the second encoding unit is further configured to: if the value of the third flag bit indicates that the chrominance components of some target blocks in the current slice need CC-ALF processing, encode in the video code stream the fifth flag bit corresponding to each target block included in the current slice, where the value of the fifth flag bit is used to indicate whether the chrominance component of the corresponding target block is subjected to CC-ALF processing.
- the second encoding unit is further configured to: before encoding, in the adaptation parameter set corresponding to the index, the fourth flag bit corresponding to the chrominance component of the current slice, calculate the third rate-distortion cost of the chrominance components of all target blocks in the current slice when CC-ALF processing is performed with the target classification strategy, where the target classification strategy determines the block classification information of the chrominance component when performing CC-ALF according to the block classification information of the luminance component when performing ALF; calculate the fourth rate-distortion cost of the chrominance components of all target blocks in the current slice when CC-ALF processing is performed with the other classification strategy; and determine, according to the third rate-distortion cost and the fourth rate-distortion cost, the classification strategy adopted by the chrominance components of the current slice when performing CC-ALF processing.
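The slice-level choice between the luma-derived classification strategy and the other classification strategy follows the same pattern, summing the rate-distortion cost over all target blocks in the slice. A sketch under the same placeholder-cost assumption:

```python
def choose_classification_strategy(block_costs, lam):
    """block_costs: list of (dist_a, rate_a, dist_b, rate_b) tuples per
    target block, where 'a' is the luma-derived strategy and 'b' is the
    other strategy. Returns the strategy to signal via the fourth flag bit."""
    cost_a = sum(d + lam * r for d, r, _, _ in block_costs)  # third rate-distortion cost
    cost_b = sum(d + lam * r for _, _, d, r in block_costs)  # fourth rate-distortion cost
    return "luma_derived" if cost_a <= cost_b else "other"
```

Because the decision is made once per slice, only a single fourth flag bit (per component, or shared) needs to be carried in the adaptive parameter set.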
- the second processing unit 1208 is further configured to: determine the size information of the target block according to a preset size; or
- the size information of the target block is encoded in the sequence parameter set, picture parameter set, picture header or slice header of the video code stream.
- the present application also provides an electronic device; the electronic device includes a memory and a processor, the memory stores computer-readable instructions, and when the processor executes the computer-readable instructions, the method described in any of the above embodiments is implemented.
- Fig. 13 shows a schematic structural diagram of a computer system suitable for implementing the electronic device of the embodiment of the present application.
- the computer system 1300 includes a central processing unit (CPU) 1301, which can perform various appropriate actions and processes, such as the methods described in the above embodiments, according to a program stored in a read-only memory (ROM) 1302 or a program loaded from a storage part 1308 into a random access memory (RAM) 1303.
- the RAM 1303 also stores various programs and data necessary for system operation.
- the CPU 1301, ROM 1302, and RAM 1303 are connected to each other through a bus 1304.
- An input/output (Input/Output, I/O) interface 1305 is also connected to the bus 1304 .
- the following components are connected to the I/O interface 1305: an input part 1306 including a keyboard, a mouse, etc.; an output part 1307 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, etc.; a storage part 1308 including a hard disk, etc.; and a communication part 1309 including a network interface card such as a LAN (Local Area Network) card or a modem. The communication part 1309 performs communication processing via a network such as the Internet.
- a drive 1310 is also connected to the I/O interface 1305 as needed.
- a removable medium 1311, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1310 as needed, so that a computer program read from it can be installed into the storage part 1308 as necessary.
- the processes described above with reference to the flowcharts can be implemented as computer software programs.
- the present application also provides a computer program product, where the computer program product includes computer-readable instructions, and when the computer-readable instructions are executed by a processor, the method described in any of the foregoing embodiments is implemented.
- the computer-readable instructions may be downloaded and installed from a network via the communication part 1309 and/or installed from the removable medium 1311.
- the units described in the embodiments of the present application may be implemented by software or by hardware, and the described units may also be provided in a processor; in certain circumstances, the names of these units do not constitute a limitation on the units themselves.
- the present application also provides a computer-readable medium.
- the computer-readable medium may be included in the electronic device described in the above embodiments, or it may exist independently without being assembled into the electronic device.
- the above-mentioned computer-readable medium carries one or more computer-readable instructions, and when the above-mentioned one or more computer-readable instructions are executed by an electronic device, the electronic device is made to implement the method described in any of the above-mentioned embodiments.
- this division is not mandatory.
- the features and functions of two or more modules or units described above may be embodied in one module or unit.
- the features and functions of one module or unit described above can be further divided to be embodied by a plurality of modules or units.
- the technical solutions according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) or on a network, and includes several instructions to cause a computing device (such as a personal computer, a server, a touch terminal, or a network device) to execute the method according to the embodiments of the present application.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
Gradient values | Geometric transformation |
---|---|
g_d1 < g_d0 and g_h < g_v | No transformation |
g_d1 < g_d0 and g_v ≤ g_h | Diagonal transformation |
g_d0 ≤ g_d1 and g_h < g_v | Vertical flip |
g_d0 ≤ g_d1 and g_v ≤ g_h | Rotation |
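The table above maps a block's directional gradients to the geometric transformation applied to its filter, in the style of VVC ALF block classification. A sketch of that mapping, assuming the gradient sums g_h, g_v, g_d0, g_d1 have already been computed from the block's horizontal, vertical, and diagonal Laplacian activity:

```python
def select_geometric_transform(g_h, g_v, g_d0, g_d1):
    """Map the four directional gradient sums to one of the four
    geometric transformations listed in the table above."""
    if g_d1 < g_d0 and g_h < g_v:
        return "none"           # no transformation
    if g_d1 < g_d0 and g_v <= g_h:
        return "diagonal"       # diagonal transformation
    if g_d0 <= g_d1 and g_h < g_v:
        return "vertical_flip"  # vertical flip
    return "rotation"           # g_d0 <= g_d1 and g_v <= g_h
```

Applying the transformation to the filter support instead of storing a separate filter per orientation keeps the number of signalled filter coefficient sets small.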
Claims (26)
- A loop filtering method, performed by a video encoding device or a video decoding device, comprising: obtaining block classification information of a luminance component in a video image frame when performing adaptive loop filtering; determining, according to the block classification information of the luminance component when performing adaptive loop filtering, block classification information of a chrominance component in the video image frame when performing cross-component adaptive loop filtering; and selecting, according to the block classification information of the chrominance component when performing cross-component adaptive loop filtering, corresponding filter coefficients to perform cross-component adaptive loop filtering on the chrominance component.
- The method according to claim 1, wherein determining, according to the block classification information of the luminance component when performing adaptive loop filtering, the block classification information of the chrominance component in the video image frame when performing cross-component adaptive loop filtering comprises: using the classification result of the luminance component for sub-blocks when performing adaptive loop filtering as the classification result of the chrominance component for blocks of the same size when performing cross-component adaptive loop filtering.
- The method according to claim 1, wherein determining, according to the block classification information of the luminance component when performing adaptive loop filtering, the block classification information of the chrominance component in the video image frame when performing cross-component adaptive loop filtering comprises: using the classification result of the luminance component for sub-blocks and the corresponding geometric transformation type when performing adaptive loop filtering as the classification result and geometric transformation type of the chrominance component for blocks of the same size when performing cross-component adaptive loop filtering.
- The method according to claim 1 or 3, further comprising: determining, according to the merging result of the luminance component for the various types of adaptive loop filters when performing adaptive loop filtering, the merging result of the chrominance component for the various types of cross-component adaptive loop filters when performing cross-component adaptive loop filtering.
- The method according to claim 1 or 3, further comprising: determining, according to the rate-distortion cost of filter merging for the luminance component during adaptive loop filtering and the rate-distortion cost of filter merging for the chrominance component during cross-component adaptive loop filtering, the merging results of the various types of filters for the luminance component when performing adaptive loop filtering and for the chrominance component when performing cross-component adaptive loop filtering.
- The method according to claim 1 or 3, further comprising: determining, according to the number of filters determined for the luminance component when performing adaptive loop filtering, the number of available filters for the chrominance component when performing cross-component adaptive loop filtering.
- A video decoding method, performed by a video decoding device, comprising: obtaining block classification information of a luminance component in a video image frame when performing adaptive loop filtering; determining, according to the block classification information of the luminance component when performing adaptive loop filtering, block classification information of a chrominance component in the video image frame when performing cross-component adaptive loop filtering; selecting, according to the block classification information of the chrominance component when performing cross-component adaptive loop filtering, corresponding filter coefficients to perform cross-component adaptive loop filtering on the chrominance component; and decoding a video code stream according to the adaptive loop filtering result of the luminance component and the cross-component adaptive loop filtering result of the chrominance component.
- The method according to claim 7, wherein before determining, according to the block classification information of the luminance component when performing adaptive loop filtering, the block classification information of the chrominance component in the video image frame when performing cross-component adaptive loop filtering, the method further comprises: decoding from the video code stream a first flag bit corresponding to a current slice, the value of the first flag bit being used to indicate whether chrominance components of target blocks in the current slice are processed by the cross-component adaptive loop filtering.
- The method according to claim 8, wherein if the value of the first flag bit is a first value, it indicates that the chrominance components of some target blocks in the current slice are processed by the cross-component adaptive loop filtering, or that the chrominance components of all target blocks in the current slice are processed by the cross-component adaptive loop filtering; and if the value of the first flag bit is a second value, it indicates that none of the chrominance components of the target blocks in the current slice are processed by the cross-component adaptive loop filtering.
- The method according to claim 8, further comprising: if the value of the first flag bit indicates that the chrominance components of some target blocks in the current slice are processed by the cross-component adaptive loop filtering, decoding from the video code stream a second flag bit corresponding to each target block included in the current slice, the value of the second flag bit being used to indicate whether the chrominance component of the corresponding target block is processed by the cross-component adaptive loop filtering.
- The method according to claim 10, wherein the two chrominance components of the target block each correspond to a separate second flag bit, and the value of each second flag bit is used to indicate whether the corresponding chrominance component of the target block is processed by the cross-component adaptive loop filtering.
- The method according to claim 10, wherein the two chrominance components of the target block correspond to the same second flag bit, and the value of that second flag bit is used to indicate whether the two chrominance components of the target block are processed by the cross-component adaptive loop filtering.
- The method according to claim 8, further comprising: decoding an adaptive parameter set from the video code stream; if the value of the first flag bit indicates that the chrominance component of at least one target block in the current slice is processed by the cross-component adaptive loop filtering, decoding from the video code stream an index of the adaptive parameter set corresponding to the current slice; and selecting, from the adaptive parameter set corresponding to the index, corresponding filter coefficients to filter the chrominance components of the target blocks.
- The method according to any one of claims 8 to 13, wherein the current slice has a separate first flag bit for each of the two chrominance components, and the value of each first flag bit is used to indicate whether the corresponding chrominance component in the current slice is processed by the cross-component adaptive loop filtering.
- The method according to any one of claims 8 to 13, wherein the current slice has a single first flag bit for the two chrominance components, and the value of that first flag bit is used to indicate whether the current slice applies the cross-component adaptive loop filtering to the two chrominance components.
- The method according to claim 7, wherein before determining, according to the block classification information of the luminance component when performing adaptive loop filtering, the block classification information of the chrominance component in the video image frame when performing cross-component adaptive loop filtering, the method further comprises: decoding from the video code stream an adaptive parameter set and a third flag bit corresponding to the current slice, the value of the third flag bit being used to indicate whether the chrominance components of the target blocks in the current slice are subjected to cross-component adaptive loop filtering; if the value of the third flag bit indicates that the chrominance components of the target blocks in the current slice need cross-component adaptive loop filtering, decoding from the video code stream an index of the adaptive parameter set corresponding to the current slice; and obtaining, from the adaptive parameter set corresponding to the index, a fourth flag bit corresponding to the chrominance component of the current slice, the value of the fourth flag bit being used to indicate the classification strategy adopted by the chrominance component of the current slice when performing cross-component adaptive loop filtering, the classification strategy comprising: determining the block classification information of the chrominance component when performing cross-component adaptive loop filtering according to the block classification information of the luminance component when performing adaptive loop filtering; or another classification strategy.
- The method according to claim 16, wherein if the value of the fourth flag bit corresponding to the chrominance component of the current slice is a first value, it indicates that the classification strategy adopted by the chrominance component of the current slice when performing cross-component adaptive loop filtering is to determine the block classification information of the chrominance component when performing cross-component adaptive loop filtering according to the block classification information of the luminance component when performing adaptive loop filtering; and if the value of the fourth flag bit corresponding to the chrominance component of the current slice is a second value, it indicates that the classification strategy adopted by the chrominance component of the current slice when performing cross-component adaptive loop filtering is the other classification strategy.
- The method according to claim 16, further comprising: if the value of the third flag bit indicates that the chrominance components of some target blocks in the current slice need cross-component adaptive loop filtering, decoding from the video code stream a fifth flag bit corresponding to each target block included in the current slice, the value of the fifth flag bit being used to indicate whether the chrominance component of the corresponding target block is subjected to cross-component adaptive loop filtering.
- The method according to any one of claims 16 to 18, wherein the two chrominance components of the current slice each correspond to a separate fourth flag bit, and the value of each fourth flag bit is used to indicate the classification strategy adopted by the corresponding chrominance component of the current slice when performing cross-component adaptive loop filtering.
- The method according to any one of claims 16 to 18, wherein the two chrominance components of the current slice correspond to the same fourth flag bit, and the value of that fourth flag bit is used to indicate the classification strategy adopted by the two chrominance components of the current slice when performing cross-component adaptive loop filtering.
- A video encoding method, performed by a video encoding device, comprising: obtaining block classification information of a luminance component in a video image frame when performing adaptive loop filtering; determining, according to the block classification information of the luminance component when performing adaptive loop filtering, block classification information of a chrominance component in the video image frame when performing cross-component adaptive loop filtering; selecting, according to the block classification information of the chrominance component when performing cross-component adaptive loop filtering, corresponding filter coefficients to perform cross-component adaptive loop filtering on the chrominance component; and encoding the video image frame according to the adaptive loop filtering result of the luminance component and the cross-component adaptive loop filtering of the chrominance component, to obtain a video code stream.
- A loop filtering apparatus, comprising: an obtaining unit configured to obtain block classification information of a luminance component in a video image frame when performing adaptive loop filtering; a determining unit configured to determine, according to the block classification information of the luminance component when performing adaptive loop filtering, block classification information of a chrominance component in the video image frame when performing cross-component adaptive loop filtering; and a filtering unit configured to select, according to the block classification information of the chrominance component when performing cross-component adaptive loop filtering, corresponding filter coefficients to perform cross-component adaptive loop filtering on the chrominance component.
- A video decoding apparatus, comprising: an obtaining unit configured to obtain block classification information of a luminance component in a video image frame when performing adaptive loop filtering; a determining unit configured to determine, according to the block classification information of the luminance component when performing adaptive loop filtering, block classification information of a chrominance component in the video image frame when performing cross-component adaptive loop filtering; a filtering unit configured to select, according to the block classification information of the chrominance component when performing cross-component adaptive loop filtering, corresponding filter coefficients to perform cross-component adaptive loop filtering on the chrominance component; and a first processing unit configured to decode a video code stream according to the adaptive loop filtering result of the luminance component and the cross-component adaptive loop filtering result of the chrominance component.
- A video encoding apparatus, comprising: an obtaining unit configured to obtain block classification information of a luminance component in a video image frame when performing adaptive loop filtering; a determining unit configured to determine, according to the block classification information of the luminance component when performing adaptive loop filtering, block classification information of a chrominance component in the video image frame when performing cross-component adaptive loop filtering; a filtering unit configured to select, according to the block classification information of the chrominance component when performing cross-component adaptive loop filtering, corresponding filter coefficients to perform cross-component adaptive loop filtering on the chrominance component; and a second processing unit configured to encode the video image frame according to the adaptive loop filtering result of the luminance component and the cross-component adaptive loop filtering of the chrominance component, to obtain a video code stream.
- An electronic device, comprising a memory and a processor, wherein the memory stores computer-readable instructions, and the processor, when executing the computer-readable instructions, implements the steps of the method according to any one of claims 1 to 21.
- A computer program product, comprising computer-readable instructions that, when executed by a processor, implement the steps of the method according to any one of claims 1 to 21.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020247012255A KR20240074789A (ko) | 2022-01-07 | 2022-12-09 | 루프 필터링 방법, 비디오 인코딩/디코딩 방법 및 장치, 매체, 및 전자 디바이스 |
JP2024516715A JP2024535840A (ja) | 2022-01-07 | 2022-12-09 | ループフィルタリング方法、ビデオ復号方法、ビデオ符号化方法、ループフィルタリング装置、ビデオ復号装置、ビデオ符号化装置、電子機器及びコンピュータプログラム |
US18/498,611 US20240064298A1 (en) | 2022-01-07 | 2023-10-31 | Loop filtering, video encoding, and video decoding methods and apparatus, storage medium, and electronic device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210017814.0 | 2022-01-07 | ||
CN202210017814.0A CN116456086A (zh) | 2022-01-07 | 2022-01-07 | 环路滤波方法、视频编解码方法、装置、介质及电子设备 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/498,611 Continuation US20240064298A1 (en) | 2022-01-07 | 2023-10-31 | Loop filtering, video encoding, and video decoding methods and apparatus, storage medium, and electronic device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023130899A1 true WO2023130899A1 (zh) | 2023-07-13 |
Family
ID=87073083
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/137908 WO2023130899A1 (zh) | 2022-01-07 | 2022-12-09 | 环路滤波方法、视频编解码方法、装置、介质及电子设备 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20240064298A1 (zh) |
JP (1) | JP2024535840A (zh) |
KR (1) | KR20240074789A (zh) |
CN (1) | CN116456086A (zh) |
WO (1) | WO2023130899A1 (zh) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021021590A1 (en) * | 2019-07-26 | 2021-02-04 | Mediatek Inc. | Method and apparatus of cross-component adaptive loop filtering for video coding |
CN112543335A (zh) * | 2019-09-23 | 2021-03-23 | 腾讯美国有限责任公司 | 对视频数据进行编解码的方法、装置、计算机设备和存储介质 |
WO2021083258A1 (en) * | 2019-10-29 | 2021-05-06 | Beijing Bytedance Network Technology Co., Ltd. | Cross-component adaptive loop filter using luma differences |
WO2021101345A1 (ko) * | 2019-11-22 | 2021-05-27 | 한국전자통신연구원 | 적응적 루프내 필터링 방법 및 장치 |
CN113615189A (zh) * | 2019-03-26 | 2021-11-05 | 高通股份有限公司 | 在视频译码中具有自适应参数集(aps)的基于块的自适应环路滤波器(alf) |
-
2022
- 2022-01-07 CN CN202210017814.0A patent/CN116456086A/zh active Pending
- 2022-12-09 KR KR1020247012255A patent/KR20240074789A/ko active Search and Examination
- 2022-12-09 JP JP2024516715A patent/JP2024535840A/ja active Pending
- 2022-12-09 WO PCT/CN2022/137908 patent/WO2023130899A1/zh active Application Filing
-
2023
- 2023-10-31 US US18/498,611 patent/US20240064298A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
US20240064298A1 (en) | 2024-02-22 |
CN116456086A (zh) | 2023-07-18 |
JP2024535840A (ja) | 2024-10-02 |
KR20240074789A (ko) | 2024-05-28 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22918349; Country of ref document: EP; Kind code of ref document: A1 |
 | ENP | Entry into the national phase | Ref document number: 2024516715; Country of ref document: JP; Kind code of ref document: A |
 | ENP | Entry into the national phase | Ref document number: 20247012255; Country of ref document: KR; Kind code of ref document: A |
 | WWE | Wipo information: entry into national phase | Ref document number: 2022918349; Country of ref document: EP |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | ENP | Entry into the national phase | Ref document number: 2022918349; Country of ref document: EP; Effective date: 20240807 |
Ref document number: 2022918349 Country of ref document: EP Effective date: 20240807 |