CN114979628A - Image block prediction sample determining method and coding and decoding equipment

Info

Publication number
CN114979628A
Authority
CN
China
Prior art keywords
pixel
pixel point
prediction sample
unmatched
value
Prior art date
Legal status
Pending
Application number
CN202110209565.0A
Other languages
Chinese (zh)
Inventor
Wang Yingbin (王英彬)
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202110209565.0A
Publication of CN114979628A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application provides a method for determining prediction samples of an image block, together with encoding and decoding devices. The method includes: obtaining the index of the reference pixel corresponding to each of at least one pixel point in the image block; determining at least one of a reference pixel index matrix, an unmatched pixel index matrix, and an unmatched pixel list according to the index of the reference pixel corresponding to each of the at least one pixel point; determining the prediction sample of each pixel point according to at least one of the reference pixel index matrix, the unmatched pixel index matrix, and the unmatched pixel list; and determining the prediction sample of the image block from the prediction samples of the pixel points. Since no YUV4:4:4 prediction sample needs to be derived, the memory required for deriving prediction samples is reduced, which facilitates hardware implementation.

Description

Method for determining image block prediction sample and coding and decoding equipment
Technical Field
Embodiments of the present application relate to the technical field of image processing, and in particular to a method for determining image block prediction samples and to encoding and decoding devices.
Background
Digital video technology may be incorporated into a variety of video devices, such as digital televisions, smart phones, computers, e-readers, and video players. As video technology develops, video involves large amounts of data; to facilitate its transmission, video devices apply video compression techniques so that video data can be transmitted or stored more efficiently.
Compression of video data is achieved by reducing or eliminating redundant information in the video data through spatial or temporal prediction. Motion compensation is a prediction method commonly used in video coding: based on the redundancy of video content in the temporal or spatial domain, it derives the prediction value of the current coding block from an already-coded area. Prediction methods based on motion compensation include inter prediction, intra block copy, intra string copy, and the like. The intra string copy prediction method divides a coding block into a series of pixel strings in a certain scanning order. The encoding end encodes the type, length, and prediction value information of each string of the current coding block into the code stream. Correspondingly, the decoding end derives the prediction sample of the current image block according to the type, length, and prediction value information of each string carried in the code stream, and determines the reconstruction value of the current image block according to its prediction sample.
However, the decoding end currently occupies a large amount of memory when deriving prediction samples.
Disclosure of Invention
The application provides a method for determining image block prediction samples, together with encoding and decoding devices, which reduce the memory required for deriving prediction samples and facilitate hardware implementation.
In a first aspect, a method for determining prediction samples for an image block is provided, including:
obtaining an index of a reference pixel corresponding to each pixel point in at least one pixel point in the image block;
determining at least one of a reference pixel index matrix, an unmatched pixel index matrix and an unmatched pixel list according to the index of the reference pixel corresponding to each pixel point in the at least one pixel point;
for each pixel point in the at least one pixel point, determining a prediction sample of the pixel point according to at least one of the reference pixel index matrix, the unmatched pixel index matrix and the unmatched pixel list;
determining a prediction sample of the image block according to the prediction sample of the pixel point;
wherein the reference pixel index matrix includes the index of the reference pixel corresponding to each of the at least one pixel point, the unmatched pixel list includes the correspondence between the values of unmatched pixels among the at least one pixel point and the indices of those unmatched pixels, and the unmatched pixel index matrix includes, for the unmatched pixels among the at least one pixel point, their indices in the unmatched pixel list.
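For concreteness, the three structures named above can be pictured as follows; this is a minimal illustrative C++ sketch, with all type and field names (RefPixel, PredictionContext, and so on) chosen here for readability rather than taken from the claims or any standard.

```cpp
#include <cstdint>
#include <vector>

// Illustrative sketch of the three structures named in the first aspect.
// All type and field names are hypothetical, chosen only for readability.
struct RefPixel {
    uint8_t y, cb, cr;  // value of an unmatched pixel (luma plus two chroma)
};

struct PredictionContext {
    int width = 0, height = 0;  // extent of the "at least one pixel point"
    // ref_index: for each pixel point, the index of its reference pixel.
    std::vector<int> ref_index;              // width * height entries
    // unmatched_pixel_index: 0 if the pixel point is not an unmatched
    // pixel, otherwise its (non-zero) index into unmatched_pixel_list.
    std::vector<int> unmatched_pixel_index;  // width * height entries
    // unmatched_pixel_list: values of the unmatched pixels, addressed by
    // the indices stored in unmatched_pixel_index.
    std::vector<RefPixel> unmatched_pixel_list;
};
```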
In a second aspect, an apparatus for determining prediction samples for an image block is provided, including:
an acquisition unit, configured to obtain the index of the reference pixel corresponding to each of at least one pixel point in the image block;
a first determining unit, configured to determine at least one of a reference pixel index matrix, an unmatched pixel index matrix, and an unmatched pixel list according to the index of the reference pixel corresponding to each of the at least one pixel point;
a second determining unit, configured to determine, for each of the at least one pixel point, the prediction sample of the pixel point according to at least one of the reference pixel index matrix, the unmatched pixel index matrix, and the unmatched pixel list;
a third determining unit, configured to determine the prediction sample of the image block according to the prediction samples of the pixel points;
wherein the reference pixel index matrix includes the index of the reference pixel corresponding to each of the at least one pixel point, the unmatched pixel list includes the correspondence between the values of unmatched pixels among the at least one pixel point and the indices of those unmatched pixels, and the unmatched pixel index matrix includes, for the unmatched pixels among the at least one pixel point, their indices in the unmatched pixel list.
In a third aspect, an encoding device is provided that includes a processor and a memory. The memory is used for storing a computer program, and the processor is used for calling and running the computer program stored in the memory to execute the method of the first aspect or each implementation manner thereof.
In a fourth aspect, a decoding device is provided that includes a processor and a memory. The memory is used for storing a computer program, and the processor is used for calling and running the computer program stored in the memory to execute the method in the first aspect or the implementation manners thereof.
In a fifth aspect, a chip is provided for implementing the method of the first aspect or its implementations. Specifically, the chip includes a processor configured to call and run a computer program from a memory, so that a device on which the chip is installed performs the method of the first aspect or its implementations.
In a sixth aspect, a computer-readable storage medium is provided for storing a computer program that causes a computer to perform the method of the first aspect or its implementations.
In a seventh aspect, a computer program product is provided, including computer program instructions that cause a computer to perform the method of the first aspect or its implementations.
In an eighth aspect, a computer program is provided which, when run on a computer, causes the computer to perform the method of the first aspect or its implementations.
According to the technical solutions provided by this application, the index of the reference pixel corresponding to each of at least one pixel point in the image block is obtained; at least one of a reference pixel index matrix, an unmatched pixel index matrix, and an unmatched pixel list is determined according to those indices; the prediction sample of each pixel point is determined according to at least one of the reference pixel index matrix, the unmatched pixel index matrix, and the unmatched pixel list; and the prediction sample of the image block is then determined from the prediction samples of the pixel points. Since no YUV4:4:4 prediction sample needs to be derived, the memory required for deriving prediction samples is reduced, which facilitates hardware implementation.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a schematic block diagram of a video encoding and decoding system according to an embodiment of the present application;
fig. 2 is a schematic block diagram of a video encoder provided by an embodiment of the present application;
FIG. 3 is a schematic block diagram of a decoding framework provided by embodiments of the present application;
FIG. 4 is a diagram illustrating intra block copy according to an embodiment of the present application;
FIG. 5 is a diagram illustrating intra-frame string replication according to an embodiment of the present application;
fig. 6 is a flowchart of a method for determining image block prediction samples according to an embodiment of the present disclosure;
FIG. 7A is a diagram illustrating an image block according to an example of the present application;
FIG. 7B is a schematic diagram of at least one pixel in an example of the present application;
FIG. 7C is a diagram of a reference pixel index matrix according to an example of the present application;
FIG. 7D is a diagram illustrating an unmatched pixel index matrix according to an example of the present application;
fig. 8 is a flowchart of another method for determining image block prediction samples according to an embodiment of the present application;
fig. 9 is a flowchart of another method for determining image block prediction samples according to an embodiment of the present application;
FIG. 10A is a schematic diagram of a color sampling pattern according to an embodiment of the present application;
FIG. 10B is a schematic diagram of another color sampling pattern according to an embodiment of the present application;
FIG. 10C is a schematic diagram of another color sampling mode according to an embodiment of the present application;
FIG. 11A is a diagram of chroma prediction samples in an example of the present application;
FIG. 11B is a diagram of chroma prediction samples in another example of the present application;
fig. 12 is a schematic block diagram of an apparatus for determining image block prediction samples provided by an embodiment of the present application;
fig. 13 is a schematic block diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. The described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second", and the like in the description, claims, and drawings of the present invention are used to distinguish similar elements and not necessarily to describe a particular sequence or chronological order. It is to be understood that data so used are interchangeable under appropriate circumstances, so that the embodiments of the invention described herein can also be practiced in sequences other than those illustrated or described. Moreover, the terms "comprises", "comprising", and any variations thereof are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or device that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such a process, method, article, or device.
The method and device of the present application can be applied to the fields of image coding and decoding, video coding and decoding, hardware video coding and decoding, special-purpose circuit video coding and decoding, real-time video coding and decoding, and the like. For example, the solutions of the present application may be incorporated into audio video coding standards (AVS), such as the H.264/Advanced Video Coding (AVC) standard, the H.265/High Efficiency Video Coding (HEVC) standard, and the H.266/Versatile Video Coding (VVC) standard. Alternatively, the solutions of the present application may operate in conjunction with other proprietary or industry standards, including ITU-T H.261, ISO/IEC MPEG-1 Visual, ITU-T H.262 or ISO/IEC MPEG-2 Visual, ITU-T H.263, ISO/IEC MPEG-4 Visual, and ITU-T H.264 (also known as ISO/IEC MPEG-4 AVC), including its Scalable Video Coding (SVC) and Multiview Video Coding (MVC) extensions. It should be understood that the techniques of this application are not limited to any particular codec standard or technique.
For ease of understanding, a video codec system according to an embodiment of the present application will be described first with reference to fig. 1.
Fig. 1 is a schematic block diagram of a video coding and decoding system 100 according to an embodiment of the present application. It should be noted that fig. 1 is only an example; the video coding and decoding system of the embodiments of the present application includes, but is not limited to, what is shown in fig. 1. As shown in fig. 1, the video codec system 100 includes an encoding device 110 and a decoding device 120. The encoding device encodes video data (which may be understood as compression) to generate a code stream and transmits the code stream to the decoding device. The decoding device decodes the code stream generated by the encoding device to obtain decoded video data.
The encoding apparatus 110 of this embodiment may be understood as a device with a video encoding function, and the decoding apparatus 120 as a device with a video decoding function; that is, the encoding apparatus 110 and the decoding apparatus 120 cover a wide range of devices, including smart phones, desktop computers, mobile computing devices, notebook (e.g., laptop) computers, tablet computers, set-top boxes, televisions, cameras, display devices, digital media players, video game consoles, vehicle-mounted computers, and the like.
In some embodiments, the encoding device 110 may transmit encoded video data (e.g., a codestream) to the decoding device 120 via the channel 130. Channel 130 may include one or more media and/or devices capable of transmitting encoded video data from encoding device 110 to decoding device 120.
In one example, channel 130 includes one or more communication media that enable encoding device 110 to transmit encoded video data directly to decoding device 120 in real time. In this example, encoding device 110 may modulate the encoded video data according to a communication standard and transmit the modulated video data to decoding device 120. The communication media include wireless communication media, such as the radio frequency spectrum, and optionally wired communication media, such as one or more physical transmission lines.
In another example, channel 130 includes a storage medium that can store video data encoded by encoding device 110. Storage media includes a variety of locally-accessed data storage media such as compact disks, DVDs, flash memory, and the like. In this example, decoding device 120 may retrieve encoded video data from the storage medium.
In another example, channel 130 may comprise a storage server that stores video data encoded by encoding device 110. In this example, decoding device 120 may download the stored encoded video data from the storage server. The storage server may both store the encoded video data and transmit it to the decoding device 120; it may be, for example, a web server (e.g., for a website) or a File Transfer Protocol (FTP) server.
In some embodiments, the encoding apparatus 110 includes a video encoder 112 and an output interface 113. The output interface 113 may comprise, among other things, a modulator/demodulator (modem) and/or a transmitter.
In some embodiments, the encoding device 110 may include a video source 111 in addition to the video encoder 112 and the output interface 113.
Video source 111 may include at least one of a video capture device (e.g., a video camera), a video archive, a video input interface for receiving video data from a video content provider, and a computer graphics system for generating video data.
The video encoder 112 encodes video data from the video source 111 to generate a code stream. The video data may comprise one or more images (pictures) or sequences of images. The code stream contains the encoding information of an image or sequence of images in the form of a bit stream. The encoding information may include encoded image data and associated data. The associated data may include sequence parameter sets (SPS), picture parameter sets (PPS), and other syntax structures. An SPS may contain parameters that apply to one or more sequences. A PPS may contain parameters that apply to one or more images. A syntax structure is a set of zero or more syntax elements arranged in a specified order in the code stream.
The video encoder 112 transmits the encoded video data directly to the decoding apparatus 120 via the output interface 113. The encoded video data may also be stored on a storage medium or storage server for subsequent reading by decoding device 120.
In some embodiments, decoding apparatus 120 includes an input interface 121 and a video decoder 122.
In some embodiments, the decoding apparatus 120 may further include a display device 123 in addition to the input interface 121 and the video decoder 122.
The input interface 121 includes a receiver and/or a modem. The input interface 121 may receive encoded video data through the channel 130.
The video decoder 122 is configured to decode the encoded video data to obtain decoded video data, and transmit the decoded video data to the display device 123.
The display device 123 displays the decoded video data. The display device 123 may be integrated with the decoding apparatus 120 or external to the decoding apparatus 120. The display device 123 may include a variety of display devices such as a Liquid Crystal Display (LCD), a plasma display, an Organic Light Emitting Diode (OLED) display, or other types of display devices.
In addition, fig. 1 is only an example, and the technical solution of the embodiment of the present application is not limited to fig. 1, for example, the technology of the present application may also be applied to single-side video encoding or single-side video decoding.
The following describes a video coding framework related to embodiments of the present application.
Fig. 2 is a schematic block diagram of a video encoder 200 provided by an embodiment of the present application. It should be understood that the video encoder 200 may be used for lossy compression (lossy compression) as well as lossless compression (lossless compression) of images. The lossless compression may be visual lossless compression (visual lossless compression) or mathematical lossless compression (mathematical lossless compression).
The video encoder 200 may be applied to image data in a luminance chrominance (YCbCr, YUV) format.
For example, the video encoder 200 reads video data and, for each frame of image in the video data, divides the frame into a number of coding tree units (CTUs). In some examples, a CTU may be referred to as a "tree block", "largest coding unit" (LCU), or "coding tree block" (CTB). Each CTU may be associated with a block of pixels of equal size within the image. Each pixel may correspond to one luminance (luma) sample and two chrominance (chroma) samples, so each CTU may be associated with one block of luma samples and two blocks of chroma samples. A CTU is, for example, 128×128, 64×64, or 32×32 in size. A CTU may be further divided into coding units (CUs) for coding; a CU may be a rectangular or square block. A CU may be further divided into prediction units (PUs) and transform units (TUs), so that coding, prediction, and transform are decoupled and processing is more flexible. In one example, CTUs are partitioned into CUs in a quadtree manner, and CUs are partitioned into TUs and PUs in a quadtree manner.
Video encoders and video decoders may support various PU sizes. Assuming the size of a particular CU is 2N×2N, video encoders and video decoders may support PU sizes of 2N×2N or N×N for intra prediction, and symmetric PUs of 2N×2N, 2N×N, N×2N, N×N, or similar sizes for inter prediction. Video encoders and video decoders may also support asymmetric PUs of 2N×nU, 2N×nD, nL×2N, and nR×2N for inter prediction.
In some embodiments, as shown in fig. 2, the video encoder 200 may include: a prediction unit 210, a residual unit 220, a transform/quantization unit 230, an inverse transform/quantization unit 240, a reconstruction unit 250, a loop filtering unit 260, a decoded picture buffer 270, and an entropy coding unit 280. It should be noted that the video encoder 200 may include more, fewer, or different functional components.
Optionally, in this application, the current block may be referred to as the current coding unit (CU) or the current prediction unit (PU), among other names. The prediction block may also be referred to as a predicted image block or an image prediction block, and the reconstructed image block may also be referred to as a reconstructed block or an image reconstruction block.
In some embodiments, prediction unit 210 includes inter prediction unit 211 and intra prediction unit 212. Because adjacent pixels within a frame of video are strongly correlated, intra prediction is used in video coding and decoding technology to eliminate spatial redundancy between adjacent pixels. Because adjacent frames of a video are strongly similar, inter prediction is used to eliminate temporal redundancy between adjacent frames, thereby improving coding efficiency.
The inter prediction unit 211 may be used for inter prediction. Inter prediction may refer to image information of different frames: it uses motion information to find a reference block in a reference frame and generates a prediction block from the reference block, removing temporal redundancy. Frames used for inter prediction may be P frames (forward-predicted frames) and/or B frames (bi-directionally predicted frames). The motion information includes the reference frame list in which the reference frame is located, a reference frame index, and a motion vector. A motion vector may have integer-pixel or sub-pixel precision; if it is sub-pixel, the required sub-pixel block must be produced by interpolation filtering in the reference frame. The integer-pixel or sub-pixel block found in the reference frame according to the motion vector is called the reference block. Some techniques use the reference block directly as the prediction block, while others further process the reference block to generate the prediction block; the latter may also be understood as taking the reference block as the prediction block and then generating a new prediction block from it.
The most commonly used inter prediction methods at present include the geometric partitioning mode (GPM) in the VVC video codec standard and angular weighted prediction (AWP) in the AVS3 video codec standard. These two inter prediction modes share a common principle.
The intra prediction unit 212 refers only to information of the same frame image and predicts pixel information within the current coded image block in order to remove spatial redundancy. A frame used for intra prediction may be an I frame.
The intra prediction modes used in HEVC include Planar, DC, and 33 angular modes, for a total of 35 prediction modes. The intra modes used by VVC include Planar, DC, and 65 angular modes, for a total of 67 prediction modes. The intra modes used by AVS3 include DC, Plane, Bilinear, and 63 angular modes, for a total of 66 prediction modes.
In some embodiments, the intra prediction unit 212 may be implemented using intra block copy techniques and intra string copy techniques.
Residual unit 220 may generate a residual block of a CU based on the pixel block of the CU and the prediction blocks of the CU's PUs. For example, residual unit 220 may generate the residual block of the CU such that each sample in the residual block has a value equal to the difference between a sample in the CU's pixel block and the corresponding sample in a prediction block of a PU of the CU.
The transform/quantization unit 230 may quantize the transform coefficients. Transform/quantization unit 230 may quantize transform coefficients associated with TUs of a CU based on Quantization Parameter (QP) values associated with the CU. The video encoder 200 may adjust the degree of quantization applied to the transform coefficients associated with the CU by adjusting the QP value associated with the CU.
The inverse transform/quantization unit 240 may apply inverse quantization and inverse transform to the quantized transform coefficients, respectively, to reconstruct a residual block from the quantized transform coefficients.
Reconstruction unit 250 may add samples of the reconstructed residual block to corresponding samples of one or more prediction blocks generated by prediction unit 210 to generate a reconstructed image block associated with the TU. In this manner, the video encoder 200 may reconstruct blocks of pixels of the CU by reconstructing blocks of samples for each TU of the CU.
Loop filtering unit 260 may perform a deblocking filtering operation to reduce blocking artifacts for blocks of pixels associated with the CU.
In some embodiments, loop filtering unit 260 includes a deblocking filtering unit for removing blocking effects and a sample adaptive offset/adaptive loop filtering (SAO/ALF) unit for removing ringing effects.
Decoded picture buffer 270 may store reconstructed pixel blocks. Inter prediction unit 211 may perform inter prediction on PUs of other pictures using a reference picture containing reconstructed pixel blocks. In addition, intra prediction unit 212 may use the reconstructed pixel blocks in decoded picture buffer 270 to perform intra prediction on other PUs in the same picture as the CU.
Entropy encoding unit 280 may receive the quantized transform coefficients from transform/quantization unit 230. Entropy encoding unit 280 may perform one or more entropy encoding operations on the quantized transform coefficients to produce entropy encoded data.
Fig. 3 is a schematic block diagram of a decoding framework 300 provided by an embodiment of the present application.
As shown in fig. 3, the video decoder 300 includes: entropy decoding unit 310, prediction unit 320, inverse quantization/transform unit 330, reconstruction unit 340, loop filtering unit 350, and decoded picture buffer 360. It should be noted that the video decoder 300 may include more, fewer, or different functional components.
The video decoder 300 may receive a codestream. The entropy decoding unit 310 may parse the codestream to extract syntax elements from the codestream. As part of parsing the code stream, the entropy decoding unit 310 may parse the entropy-encoded syntax elements in the code stream. The prediction unit 320, the inverse quantization/transformation unit 330, the reconstruction unit 340, and the loop filtering unit 350 may decode the video data according to syntax elements extracted from the code stream, i.e., generate decoded video data.
In some embodiments, the prediction unit 320 includes an intra prediction unit 321 and an inter prediction unit 322.
Intra prediction unit 321 may perform intra prediction to generate a prediction block for the PU. Intra-prediction unit 321 may use an intra-prediction mode to generate a prediction block for a PU based on blocks of pixels of spatially neighboring PUs. The intra-prediction unit 321 may also determine an intra-prediction mode of the PU from one or more syntax elements parsed from the codestream.
The inter prediction unit 322 may construct a first reference picture list (list 0) and a second reference picture list (list 1) according to syntax elements parsed from the bitstream. Furthermore, if the PU is encoded using inter prediction, entropy decoding unit 310 may parse motion information of the PU. Inter prediction unit 322 may determine one or more reference blocks for the PU from the motion information of the PU. Inter prediction unit 322 may generate a prediction block for the PU from one or more reference blocks of the PU.
Inverse quantization/transform unit 330 may inverse quantize (i.e., dequantize) transform coefficients associated with the TU. Inverse quantization/transform unit 330 may use a QP value associated with a CU of the TU to determine a quantization level.
After inverse quantizing the transform coefficients, inverse quantization/transform unit 330 may apply one or more inverse transforms to the inverse quantized transform coefficients in order to generate a residual block associated with the TU.
Reconstruction unit 340 uses the residual blocks associated with the TUs of the CU and the prediction blocks of the PUs of the CU to reconstruct the pixel blocks of the CU. For example, the reconstruction unit 340 may add samples of the residual block to corresponding samples of the prediction block to reconstruct a pixel block of the CU, resulting in a reconstructed image block.
Loop filtering unit 350 may perform a deblocking filtering operation to reduce blocking artifacts for blocks of pixels associated with the CU.
Video decoder 300 may store the reconstructed image of the CU in decoded picture buffer 360. The video decoder 300 may use the reconstructed image in the decoded picture buffer 360 as a reference image for subsequent prediction, or transmit the reconstructed image to a display device for presentation.
The basic flow of video encoding and decoding is as follows. On the encoding side, a frame of image is divided into blocks, and for the current block the prediction unit 210 generates a prediction block using intra prediction or inter prediction. The residual unit 220 may calculate a residual block based on the prediction block and the original block of the current block, i.e., the difference between the prediction block and the original block, which may also be referred to as residual information. Through transform and quantization by the transform/quantization unit 230, information insensitive to human eyes may be removed from the residual block, eliminating visual redundancy. The residual block before transform and quantization by the transform/quantization unit 230 may be referred to as a time-domain residual block, and after transform and quantization as a frequency-domain residual block. The entropy coding unit 280 receives the quantized transform coefficients output by the transform/quantization unit 230 and may entropy-code them to output a code stream; for example, the entropy coding unit 280 may remove character redundancy according to the target context model and probability information of the binary code stream.
On the decoding side, the entropy decoding unit 310 may parse the code stream to obtain prediction information, a quantization coefficient matrix, and the like of the current block, and the prediction unit 320 generates a prediction block of the current block using intra prediction or inter prediction based on the prediction information. The inverse quantization/transform unit 330 performs inverse quantization and inverse transform on the quantization coefficient matrix obtained from the code stream to obtain a residual block. The reconstruction unit 340 adds the prediction block and the residual block to obtain a reconstructed block. The reconstructed blocks constitute a reconstructed image, and the loop filtering unit 350 performs loop filtering on the reconstructed image, on an image or block basis, to obtain a decoded image. The encoding side needs operations similar to those of the decoding side to obtain the decoded image. The decoded image may also be referred to as a reconstructed image, and the reconstructed image may serve as a reference frame for inter prediction of subsequent frames.
It should be noted that the block division information determined by the encoding side, as well as mode or parameter information for prediction, transform, quantization, entropy coding, loop filtering, and the like, are carried in the code stream as necessary. The decoding side parses the code stream and, based on the existing information, determines the same block division information and the same prediction, transform, quantization, entropy coding, and loop filtering mode or parameter information as the encoding side, ensuring that the decoded image obtained by the encoding side is identical to that obtained by the decoding side.
The above is a basic flow of a video codec under a block-based hybrid coding framework, and as technology develops, some modules or steps of the framework or flow may be optimized.
An intra block copy technique and an intra string copy technique are described below.
Intra block copy (IBC) is an intra coding tool adopted in the HEVC screen content coding (SCC) extension that significantly improves the coding efficiency of screen content. IBC techniques have also been adopted in AVS3 and VVC to improve the performance of screen content coding. IBC exploits the spatial correlation of screen content video by predicting the pixels of the current block to be coded from the already-coded pixels of the current image, effectively saving the bits needed to code those pixels. As shown in fig. 4, the displacement between the current coding block and its reference block in IBC is called the block vector (BV). VVC predicts the BV using an AMVP mode similar to that in inter prediction and allows the block vector difference (BVD) to be encoded at 1- or 4-pixel resolution.
Intra string copy (ISC), also known as string copy intra prediction, divides a coding block into a series of pixel strings or unmatched pixels in some scan order (raster scan, round-trip scan, Zig-Zag scan, etc.). Similar to IBC, the current string finds a reference string of the same shape in the already-coded region of the current frame, derives the prediction information of the current string from it, obtains the residual signal of the current string from its original signal and prediction information, and encodes the residual signal. For example, fig. 5 is a schematic diagram of intra string copy according to an embodiment of the present application. As shown in fig. 5, the 28 pixels in white are string 1, the 35 pixels in light gray are string 2, and the 1 pixel in black is an unmatched pixel (an unmatched pixel is also referred to as an isolated point; its pixel value is coded directly rather than derived from a reference string's prediction value). The reference string of string 1 is to its left, and the displacement from string 1 to its reference string is represented by string vector 1. The reference string of string 2 is above it, and the displacement from string 2 to its reference string is represented by string vector 2.
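To make the copy operation concrete, the following is a minimal sketch (C++; the function name and signature are illustrative assumptions, and the string is assumed to stay within one raster row of a row-major luma buffer): each pixel of a string is predicted by copying the reconstructed sample displaced by the string vector.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical helper (names and signature are illustrative): predict one
// pixel string by copying from the reconstructed area displaced by its
// string vector (sv_x, sv_y). 'recon' holds already-reconstructed samples
// of the frame in row-major order with the given stride.
void predictString(const std::vector<uint8_t>& recon,
                   std::vector<uint8_t>& pred, int stride,
                   int start_x, int start_y, int length, int sv_x, int sv_y) {
    for (int k = 0; k < length; ++k) {
        // This sketch assumes the string stays within one raster row;
        // real scans may be raster, round-trip, or Zig-Zag across rows.
        int x = start_x + k, y = start_y;
        pred[y * stride + x] = recon[(y + sv_y) * stride + (x + sv_x)];
    }
}
```

For string 1 in fig. 5 the string vector points to the left, and for string 2 it points upward.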
The intra string copy technique needs to encode, for each string in the current coding block, the corresponding string vector (SV), the string length, a flag indicating whether there is a matching string, and so on. The string vector represents the displacement from the string to be encoded to its reference string; the string length is the number of pixels the string contains.
The equivalent string and unit basis vector string mode is a sub-mode of intra string copy and was adopted into the AVS3 standard in October 2020. Similar to intra string copy, this mode divides an encoding/decoding block into a series of pixel strings or unmatched pixels in a certain scanning order, where the type of a pixel string can be an equivalent string or a unit basis vector string. An equivalent string is characterized by all pixels in the string having the same prediction value. A unit basis vector string (also referred to as a unit offset string, copy-above string, etc.) is characterized by a displacement vector of (0, -1); it is simple to implement, and each pixel of the string uses the pixel above it as the prediction value of the current pixel. The equivalent string mode needs to encode the type, length, and prediction value information of each string of the current coding block into the code stream; correspondingly, the decoding end derives the prediction samples from the information decoded from the code stream.
The prediction value may be encoded by one of the following methods: 1) directly coding the prediction value; 2) constructing a reference pixel candidate list L1 and coding the index of the prediction value in L1; 3) constructing a first reference pixel list L0, deriving the reference pixel candidate list L1 from L0 based on the reuse flag reuse_flag, and coding reuse_flag together with the index of the prediction value in L1. In the current equivalent string implementation, the prediction value is encoded using method 3). It should be noted that in the above methods it is not the prediction values that are recorded in the lists but the positions of the reference pixels in the image.
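For intuition, method 3) can be sketched as follows (C++; the types and names, such as RefPos and deriveL1, are illustrative assumptions rather than the normative AVS3 syntax or process). The decoder keeps exactly those entries of L0 whose decoded reuse flag is set to form L1, and a coded index then selects the predictor within L1; consistent with the note above, the lists hold reference pixel positions rather than values.

```cpp
#include <vector>

struct RefPos { int x, y; };  // position of a reference pixel in the image

// Sketch: derive the reference pixel candidate list L1 from the first
// reference pixel list L0 using the decoded reuse flags, then resolve a
// decoded index against L1. Names and signatures are illustrative only.
std::vector<RefPos> deriveL1(const std::vector<RefPos>& l0,
                             const std::vector<bool>& reuse_flag) {
    std::vector<RefPos> l1;
    for (size_t i = 0; i < l0.size(); ++i)
        if (reuse_flag[i])          // reuse_flag parsed from the code stream
            l1.push_back(l0[i]);
    return l1;
}

RefPos resolvePredictor(const std::vector<RefPos>& l1, int coded_index) {
    return l1[coded_index];         // index of the prediction value in L1
}
```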
The current equivalent string and unit basis vector string mode in AVS3 is implemented for video images in YUV4:2:0 format. Denote the current prediction sample matrices as pred_y, pred_cb, and pred_cr. The method derives prediction samples using a memory whose height is the LCU height, whose width is the image width, and which has 3 channels, denoted LcuRowBuf[ch][x][y]; the value of a pixel sample can then be determined from the channel of the color component, the horizontal coordinate, and the vertical coordinate.
The process of deriving the prediction samples in AVS3 is as follows:
Step 1: for a CU in equivalent string and unit basis vector string mode, divide the CU into a series of equivalent strings, unit basis vector strings, or unmatched pixels, and derive the prediction samples of each part in turn as follows, with the coordinates of the current pixel denoted (x, y):
Case 1: if the type of the current string in which the current pixel is located is an equivalent string, perform the following steps 11 to 13:
Step 11: decode the code stream to obtain the value of the reference pixel of the current pixel point (for example, decoded directly from the code stream or determined via a point vector), and parse comp_flag from the code stream, where comp_flag indicates whether the chroma component value of the current pixel point is present;
Step 12: assign the Y component LcuRowBuf[0][x][y] of the current position (x, y);
Step 13: if the chroma component value of the current position (x, y) is present, assign the Cb component LcuRowBuf[1][x][y] and the Cr component LcuRowBuf[2][x][y] of the current position (x, y);
Case 2: if the type of the string in which the current pixel is located is unmatched pixel, perform the following steps 21 and 22:
Step 21: if both coordinates of the current pixel are integer multiples of 2 (that is, x % 2 == 0 && y % 2 == 0), decode the code stream to obtain the pixel values LcuRowBuf[0][x][y], LcuRowBuf[1][x][y], and LcuRowBuf[2][x][y] of the Y, Cb, and Cr components;
Step 22: if the coordinates of the current pixel point are not both integer multiples of 2, decode the code stream to obtain the pixel value of the Y component LcuRowBuf[0][x][y], and set the Cb and Cr components LcuRowBuf[1][x][y] and LcuRowBuf[2][x][y] to 0.
Case 3: if the type of the string in which the current pixel point is located is the unit basis vector string, use the pixel value of the pixel point adjacent above as the value of the reference pixel of the current pixel point, that is, LcuRowBuf[0][x][y] = LcuRowBuf[0][x][y-1], LcuRowBuf[1][x][y] = LcuRowBuf[1][x][y-1], and LcuRowBuf[2][x][y] = LcuRowBuf[2][x][y-1];
Step 2: after the prediction samples of the whole CU are derived, downsample the prediction samples of the chrominance components: a chroma sample matrix of size w×h is downsampled to a chroma sample matrix of size w/2 × h/2. Specifically, the non-zero pixels in each 2×2 sub-block of the whole CU's prediction samples are averaged to obtain the downsampled chroma sample value.
Step 3: derive the values of pred_y, pred_cb, and pred_cr according to the positions of the pixel points in the CU: for the luma prediction sample, pred_y is the pixel value in LcuRowBuf[0]; pred_cb is the downsampled chroma sample value from LcuRowBuf[1]; and pred_cr is the downsampled chroma sample value from LcuRowBuf[2].
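Steps 1 to 3 can be condensed into the following sketch (C++, assuming 8-bit samples and that string type and decoded values are already available from parsing; a simplified illustration of filling the YUV4:4:4-sized buffer LcuRowBuf and of the 2×2 chroma averaging, not a normative implementation):

```cpp
#include <cstdint>
#include <vector>

enum class StringType { EquivalentString, UnmatchedPixel, UnitBasisVectorString };

// buf[ch][x][y]: the YUV4:4:4-sized intermediate buffer (LcuRowBuf above).
using Buf = std::vector<std::vector<std::vector<uint8_t>>>;

// Sketch of step 1 for one pixel at (x, y). The decoded values are assumed
// to have been parsed from the code stream already (hypothetical arguments).
void derivePixel(Buf& buf, int x, int y, StringType type,
                 const uint8_t decoded[3], bool chroma_present) {
    switch (type) {
    case StringType::EquivalentString:        // case 1
        buf[0][x][y] = decoded[0];            // step 12: Y component
        if (chroma_present) {                 // step 13: Cb/Cr if present
            buf[1][x][y] = decoded[1];
            buf[2][x][y] = decoded[2];
        }
        break;
    case StringType::UnmatchedPixel:          // case 2
        if (x % 2 == 0 && y % 2 == 0) {       // step 21: Y, Cb and Cr
            buf[0][x][y] = decoded[0];
            buf[1][x][y] = decoded[1];
            buf[2][x][y] = decoded[2];
        } else {                              // step 22: Y only, chroma = 0
            buf[0][x][y] = decoded[0];
            buf[1][x][y] = 0;
            buf[2][x][y] = 0;
        }
        break;
    case StringType::UnitBasisVectorString:   // case 3: copy the pixel above
        for (int ch = 0; ch < 3; ++ch)        // (assumes y > 0)
            buf[ch][x][y] = buf[ch][x][y - 1];
        break;
    }
}

// Sketch of step 2: average the non-zero chroma samples in one 2x2
// sub-block to obtain a single downsampled chroma sample.
uint8_t downsample2x2(const Buf& buf, int ch, int x, int y) {
    int sum = 0, cnt = 0;
    for (int dx = 0; dx < 2; ++dx)
        for (int dy = 0; dy < 2; ++dy)
            if (uint8_t v = buf[ch][x + dx][y + dy]) { sum += v; ++cnt; }
    return cnt ? static_cast<uint8_t>(sum / cnt) : 0;
}
```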
As can be seen from the above, in current AVS3, for video images in YUV4:2:0 or YUV4:2:2 format, a buffer of YUV4:4:4 size must be used when deriving prediction samples. The memory footprint is large, which is unfavorable for hardware implementation of decoding-end devices.
To solve this technical problem, in the present application at least one of a reference pixel index matrix, an unmatched pixel index matrix, and an unmatched pixel list is determined; the prediction sample of each pixel point is determined according to these structures, and the prediction sample of the image block is determined according to the prediction samples of the pixel points. There is no need to derive a YUV4:4:4 prediction sample and then downsample, so the memory required for deriving prediction samples is reduced, which facilitates hardware implementation.
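As a back-of-the-envelope illustration of the overhead removed (a sketch with assumed figures — a 64-pixel LCU height, a 1920-pixel image width, 8-bit samples; the sizes of the index structures that replace the 4:4:4 buffer depend on implementation choices such as index width and are not quantified here):

```cpp
#include <cstdio>

int main() {
    // Assumed example figures, not taken from the standard:
    const long h = 64, w = 1920;                  // LCU height, image width
    // Deriving at YUV4:4:4 size keeps 3 full-resolution sample channels:
    long yuv444 = 3 * h * w;                      // 368,640 samples
    // Prediction samples actually needed for YUV4:2:0 content:
    long yuv420 = h * w + 2 * (h / 2) * (w / 2);  // 184,320 samples
    std::printf("4:4:4 intermediate: %ld samples; 4:2:0 target: %ld samples\n",
                yuv444, yuv420);
    return 0;
}
```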
The technical solutions provided in the embodiments of the present application are described in detail below with reference to specific embodiments.
It should be noted that the method of the present application is applicable to an encoding end and a decoding end, and the process of determining the image block prediction samples at the encoding end and the decoding end is substantially similar. Here, the method for determining the image block prediction sample provided in the present application is described by taking a decoding side as an example, and the encoding side may refer to the decoding side.
Fig. 6 is a flowchart of a method for determining image block prediction samples according to an embodiment of the present application. The method is applied to the decoding end shown in fig. 1 or fig. 3 and, as shown in fig. 6, includes:
s610, obtaining an index of a reference pixel corresponding to at least one pixel point in the image block.
The execution subject of the embodiments of the present application includes, but is not limited to, the following devices: a decoder, or an apparatus for determining image block prediction samples, such as a desktop computer, a mobile computing device, a notebook (e.g., laptop) computer, a tablet computer, a set-top box, a handset such as a smartphone, a television, a camera, a display device, a digital media player, a video game console, or a vehicle-mounted computer.
At the encoding end, the image block is divided into a series of pixel strings according to a certain scanning sequence, and each pixel string comprises at least one pixel point.
The encoding end encodes the length, type, and prediction value information of each pixel string into the code stream. Therefore, when decoding the image block, the decoding end can decode from the code stream the type of the pixel string in which a pixel point of the image block is located, where a pixel string includes at least one pixel point.
Optionally, the type of the pixel string includes any one of: equivalent string, unmatched pixel, unit base vector string.
After the type of the pixel string to which each of the at least one pixel point belongs is decoded from the code stream, the value of the reference pixel of the pixel point is determined according to the type of the pixel string.
For each of the at least one pixel point, the index of the reference pixel corresponding to the pixel point is determined as follows:
in case 1, if the type of the pixel string to which the pixel point belongs is an equivalent string, the code stream is decoded to obtain the index of the reference pixel corresponding to the pixel string, and since the reference pixel value of each pixel point in the equivalent string is the same, the index of the reference pixel corresponding to the equivalent string to which the pixel point belongs can be determined as the index of the reference pixel corresponding to the pixel point.
In this manner, the value of the reference pixel corresponding to the pixel point can also be determined. For example, according to the index of the reference pixel corresponding to the pixel point, the value of the reference pixel corresponding to the index is searched in a preset first reference pixel list ref _ list, and the value of the reference pixel corresponding to the index is determined as the value of the reference pixel of the pixel point.
For example, the preset first reference pixel list ref_list is shown in Table 1:

Table 1

Index | Value of reference pixel
------|-------------------------
1     | A1
2     | A2
3     | A3
…     | …
Because all pixel points in an equivalent string have the same reference pixel value, if the type of the pixel string to which a pixel point belongs is an equivalent string, the encoding end encodes into the code stream the index of the reference pixel value corresponding to the pixel string. For example, if the reference pixel value corresponding to all pixel points in the pixel string is A2, whose index is 2, the encoding end encodes index 2 into the code stream. The decoding end decodes the code stream to obtain the index (here, 2) of the reference pixel value corresponding to the pixel string, looks up in Table 1 the reference pixel value A2 corresponding to index 2, and determines A2 as the reference pixel value corresponding to the pixel string.
Since the reference pixels corresponding to all pixel points in the pixel string have the same value, A2, it can be determined that the reference pixel value of the current pixel point is also A2.
Case 2: if the type of the pixel string to which the pixel point belongs is unmatched pixel, the pixel point itself may be referred to as an unmatched pixel. In this case, the code stream is decoded to obtain the predicted value of the pixel point, the predicted value is placed in the unmatched pixel list, and the index of the reference pixel corresponding to the pixel point is determined according to the position of the predicted value in the unmatched pixel list.
Specifically, if the type of the pixel string to which the pixel point belongs is unmatched pixel, the pixel string contains a single pixel point, such as the unmatched pixel shown in the black area of fig. 5. When encoding, the encoding end encodes the predicted value of the unmatched pixel into the code stream. Therefore, if the pixel point is an unmatched pixel, the decoding end can decode its predicted value directly from the code stream, place the predicted value in the unmatched pixel list, and determine the index of the reference pixel corresponding to the pixel point according to the position of the predicted value in the unmatched pixel list.
In case 3, if the type of the pixel string to which the pixel belongs is a unit basis vector string, the value of the decoded pixel adjacent above the pixel is obtained, an index corresponding to the value of the decoded pixel is searched in a preset first reference pixel list, and the index is determined as the index of the reference pixel corresponding to the pixel.
Specifically, each pixel point in the unit basis vector string uses the value of the decoded pixel point adjacent above it as the value of its reference pixel. Therefore, if the type of the pixel string to which the pixel point belongs is a unit basis vector string, the decoding end obtains the value of the decoded pixel point adjacent above the pixel point, queries the index corresponding to that value in the preset first reference pixel list, and determines it as the index of the reference pixel corresponding to the pixel point.
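The lookup for case 3 can be sketched as follows; decoded is assumed to be the matrix of already decoded pixel values, indexed as decoded[row][col]:

```python
# A sketch of case 3 (unit basis vector string): each pixel point takes the
# value of the decoded pixel adjacent above it, and its index is recovered by
# a reverse lookup in the preset first reference pixel list.
def index_for_unit_basis_vector(decoded, ref_list, row, col):
    above_value = decoded[row - 1][col]      # decoded pixel adjacent above
    return ref_list.index(above_value) + 1   # 1-based, as in Table 1
```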
In some embodiments, the index of the reference pixel may also be understood as an index of the reference pixel location.
S620, determining at least one of a reference pixel index matrix, an unmatched pixel index matrix and an unmatched pixel list according to the index of the reference pixel corresponding to each pixel point in at least one pixel point.
The reference pixel index matrix ref_index includes an index of the reference pixel corresponding to each pixel point in the at least one pixel point.
The unmatched pixel list unmatched_pixel_list includes the correspondence between the value of an unmatched pixel in the at least one pixel point and the index of that unmatched pixel.
The unmatched pixel index matrix unmatched_pixel_index includes, for each unmatched pixel in the at least one pixel point, its index in the unmatched pixel list unmatched_pixel_list.
It should be noted that the size of the reference pixel index matrix and the size of the unmatched pixel index matrix are determined by the number of pixel points in the at least one pixel point. For example, if the at least one pixel point includes all pixel points in the image block, both matrices have the same size as the image block, for example 8 × 8. If the at least one pixel point includes two rows of pixel points of the image block, both matrices have the same size as the two rows, for example 2 × 8.
In one example, it is assumed that the image block is as shown in fig. 7A and the at least one pixel point is as shown in fig. 7B. The reference pixel index matrix is as shown in fig. 7C, where each element represents the index of the reference pixel corresponding to the pixel point at that position, and C1 indicates that the pixel point at that position is an unmatched pixel. Fig. 7D shows the unmatched pixel index matrix, where each element indicates whether the pixel point at that position is an unmatched pixel: a value of 0 indicates that the pixel point at that position is not an unmatched pixel, and a non-zero value, such as c1 shown in fig. 7D, indicates that it is.
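The construction in S620 can be sketched as follows, assuming C1 is the marker value for unmatched pixels and per_pixel_index holds the per-pixel indexes obtained in S610; both names, the marker value, and the unmatched_values lookup are illustrative assumptions:

```python
# A sketch of S620: build the reference pixel index matrix, the unmatched
# pixel index matrix and the unmatched pixel list from the per-pixel indexes.
C1 = -1  # hypothetical marker for an unmatched pixel; the actual value may differ

def build_structures(per_pixel_index, unmatched_values):
    h, w = len(per_pixel_index), len(per_pixel_index[0])
    ref_index = [row[:] for row in per_pixel_index]        # fig. 7C
    unmatched_pixel_index = [[0] * w for _ in range(h)]    # fig. 7D, 0 = matched
    unmatched_pixel_list = []
    for r in range(h):
        for c in range(w):
            if per_pixel_index[r][c] == C1:                # unmatched pixel here
                unmatched_pixel_list.append(unmatched_values[(r, c)])
                unmatched_pixel_index[r][c] = len(unmatched_pixel_list)  # non-zero
    return ref_index, unmatched_pixel_index, unmatched_pixel_list
```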
In some embodiments, the first reference pixel list ref_list includes the chrominance values and the luminance value of each reference pixel, for example the luminance Y and the chrominance Cb and Cr values of each reference pixel.
In some embodiments, the first reference pixel list ref_list includes position information of the reference pixel, so that the value of the reference pixel can be obtained according to the position information of the reference pixel.
Optionally, the position information of the reference pixel is mainly used to confirm the position of the reference pixel, and if the reference pixel is in the current region, the position information of the reference pixel may be the coordinate of the reference pixel in the current region. If the reference pixel is in the current LCU row, the location information of the reference pixel at this time may be the coordinate of the reference pixel in the current LCU row.
In some embodiments, the first reference pixel list corresponds to a luma sub-list Y_only_list, and each element in the luma sub-list is used to indicate whether the corresponding reference pixel in the first reference pixel list has a non-zero luma value and zero chroma values. For example, Y_only_list records, for each element of ref_list, whether only Y is present, with the chroma values Cb and Cr set to 0.
In some embodiments, the unmatched pixel list corresponds to a luma sub-list of unmatched pixels ump_y_only_list, and each element in ump_y_only_list is used to indicate whether the corresponding unmatched pixel in the unmatched pixel list has a non-zero luma value and zero chroma values. For example, ump_y_only_list records, for each element of unmatched_pixel_list, whether only Y is present, with the chroma values Cb and Cr set to 0.
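A sketch of the two luma-only flag lists, assuming each entry of the source list is a (Y, Cb, Cr) tuple; the example values are made up for illustration:

```python
# Y_only_list[i] (and likewise ump_y_only_list[i]) records whether element i
# carries only a luma value, i.e. Y != 0 and Cb = Cr = 0. Purely illustrative.
def build_y_only_flags(pixel_list):
    return [int(y != 0 and cb == 0 and cr == 0) for (y, cb, cr) in pixel_list]

ref_list = [(120, 60, 60), (90, 0, 0)]          # example values, not normative
Y_only_list = build_y_only_flags(ref_list)      # -> [0, 1]
```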
S630, aiming at each pixel point in at least one pixel point, determining a prediction sample of the pixel point according to at least one of the reference pixel index matrix, the unmatched pixel index matrix and the unmatched pixel list.
And S640, determining a prediction sample of the image block according to the prediction sample of the pixel point.
Since the process of determining the prediction sample is the same for each pixel point in the at least one pixel point, for convenience of description one pixel point is taken as an example, and the other pixel points are handled in the same way.
According to the method and the device of the present application, the prediction sample of the pixel point is determined from at least one of the reference pixel index matrix, the unmatched pixel index matrix and the unmatched pixel list, and the prediction sample of the image block is determined from the prediction samples of the pixel points. Compared with the prior art, in which a memory of size YUV4:4:4 is used and down-sampling is performed after the prediction samples are derived, this reduces the amount of memory occupied in the process of determining the prediction samples of the image block.
With reference to a specific example, a process of determining a prediction sample of the pixel point according to at least one of the reference pixel index matrix, the unmatched pixel index matrix, and the unmatched pixel list in S630 is described in detail below.
Fig. 8 is a flowchart of another method for determining image block prediction samples according to an embodiment of the present application, and as shown in fig. 8, the step S630 includes:
S701, determining the type of the pixel point;
S702, determining a prediction sample of the pixel point according to the type of the pixel point and at least one of a reference pixel index matrix, an unmatched pixel index matrix and an unmatched pixel list.
In some embodiments, the determining the type of the pixel point in S701 includes, but is not limited to, the following modes:
In the first mode, according to the position information of the pixel point, the index of the reference pixel corresponding to the pixel point is queried in the reference pixel index matrix, and the type of the pixel point is determined according to that index. For example, if the value of the index of the reference pixel corresponding to the pixel point is the first value, the type of the pixel point is determined to be an unmatched pixel; illustratively, the first value is C1 in fig. 7C, that is, when the value of the index of the reference pixel corresponding to the pixel point is C1, the type of the pixel point is determined to be an unmatched pixel. For another example, if the index of the reference pixel corresponding to the pixel point is the same as the indexes of the reference pixels corresponding to the plurality of pixel points adjacent to it, the type of the pixel point is determined to be an equivalent string. For another example, if the index of the reference pixel corresponding to the pixel point is different from the indexes of the reference pixels corresponding to the plurality of pixel points adjacent to it, the type of the pixel point is determined to be a unit basis vector string.
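A sketch of this first mode, assuming C1 marks an unmatched pixel and treating the left and upper neighbours as the "adjacent" pixel points; which neighbours are consulted is an assumption here:

```python
# A sketch of the first mode of S701: classify a pixel point from the
# reference pixel index matrix.
def pixel_type(ref_index, row, col, C1=-1):
    idx = ref_index[row][col]
    if idx == C1:
        return "unmatched pixel"
    neighbours = []
    if col > 0:
        neighbours.append(ref_index[row][col - 1])
    if row > 0:
        neighbours.append(ref_index[row - 1][col])
    if neighbours and all(n == idx for n in neighbours):
        return "equivalent string"        # same index as adjacent pixel points
    return "unit basis vector string"     # index differs from adjacent ones
```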
In the second mode, the type of the pixel string to which each pixel point in the at least one pixel point belongs is obtained, and the type of the string to which the pixel point belongs is determined as the type of the pixel point, so as to obtain a pixel type matrix that includes the type of each pixel point in the at least one pixel point. The type corresponding to the pixel point can then be queried in the pixel type matrix at a later stage according to the position information of the pixel point.
The prediction samples of the pixel point comprise a luma prediction sample and/or a chroma prediction sample, where the luma prediction sample can be understood as the prediction sample of the pixel point under the luma component and the chroma prediction sample as the prediction sample of the pixel point under the chroma component. In some embodiments, the prediction sample in the luma component is also referred to as the luma prediction value, and the prediction sample in the chroma component is also referred to as the chroma prediction value.
The process of determining luma prediction samples for a pixel point is first described.
If the prediction samples of the pixel include the luminance prediction sample, the step S702 includes the following steps:
S702-A1, if the type of the pixel point is an equivalent string or a unit basis vector string, determining a brightness prediction sample of the pixel point according to the position information of the pixel point, a reference pixel index matrix and a preset first reference pixel list;
S702-A2, if the type of the pixel point is an unmatched pixel, determining a brightness prediction sample of the pixel point according to the position information of the pixel point, the unmatched pixel index matrix and the unmatched pixel list.
In a possible implementation manner of the foregoing S702-A1, the determining the luminance prediction sample of the pixel according to the position information of the pixel, the reference pixel index matrix, and the preset first reference pixel list includes: according to the position information of the pixel point, determining the index of the reference pixel corresponding to the pixel point from the reference pixel index matrix; inquiring the value of the reference pixel corresponding to the pixel point in the first reference pixel list according to the index of the reference pixel corresponding to the pixel point; and determining the brightness value of the reference pixel corresponding to the pixel point as the brightness prediction sample value of the pixel point. Optionally, the brightness value of the reference pixel corresponding to the pixel point may also be processed, for example, multiplied by a preset coefficient, and the processed brightness value is determined as the brightness prediction sample value of the pixel point.
In a possible implementation manner of the foregoing S702-A2, determining the luma prediction sample of the pixel point according to the position information of the pixel point, the unmatched pixel index matrix and the unmatched pixel list may include: according to the position information of the pixel point, determining the index of the unmatched pixel corresponding to the pixel point from the unmatched pixel index matrix; inquiring the value of the unmatched pixel corresponding to the pixel point in the unmatched pixel list according to that index; and determining the brightness value of the unmatched pixel corresponding to the pixel point as the brightness prediction sample value of the pixel point. Optionally, the brightness value of the unmatched pixel corresponding to the pixel point may also be processed, for example, multiplied by a preset coefficient, and the processed brightness value is determined as the brightness prediction sample value of the pixel point.
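Pulling S702-A1 and S702-A2 together, a sketch of the luma derivation, reusing the type strings from the earlier sketch and assuming entries of both lists are (Y, Cb, Cr) tuples with 1-based indexes; all of these conventions are illustrative:

```python
# A sketch of S702-A1/S702-A2: derive the luma prediction sample of a pixel
# point from its type and position.
def luma_prediction_sample(ptype, row, col, ref_index, ref_list,
                           unmatched_pixel_index, unmatched_pixel_list):
    if ptype in ("equivalent string", "unit basis vector string"):   # S702-A1
        idx = ref_index[row][col]
        return ref_list[idx - 1][0]               # luma component Y
    idx = unmatched_pixel_index[row][col]         # S702-A2: unmatched pixel
    return unmatched_pixel_list[idx - 1][0]
```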
The following describes the process of determining the chroma prediction sample of the pixel.
If the prediction samples of the pixel include chroma prediction samples, as shown in fig. 9, the step S702 includes the following steps:
S801, determining a color sampling mode of the image block and position information of a chroma prediction sample of the pixel point.
During encoding, an encoding end divides a current image to be encoded into a plurality of image blocks, and encodes each image block, and correspondingly, a decoding end decodes each image block in the current image. Therefore, the color sampling mode of the image block according to the embodiment of the present application is the color sampling mode of the image in which the image block is located.
In some embodiments, the color format of the video image in the embodiments of the present application is YUV. YUV is mainly used to optimize the transmission of color video signals; compared with the transmission of RGB video signals, its greatest advantage is that it occupies very little bandwidth (RGB requires three independent video signals to be transmitted simultaneously). Here, "Y" represents brightness (Luminance or Luma), i.e., the gray-scale value, while "U" and "V" represent chrominance (Chroma), which describes the color and saturation of the image and is used to specify the color of a pixel. "Luminance" is established from the RGB input signals by superimposing specific parts of the RGB signals together. "Chrominance" defines two aspects of color, hue and saturation, represented by Cr and Cb respectively: Cr reflects the difference between the red part of the RGB input signal and the luminance value of the RGB signal, and Cb reflects the difference between the blue part of the RGB input signal and the luminance value of the RGB signal. The importance of the YUV color space is that its luminance signal Y and chrominance signals U, V are separate: if there is only a Y component and no U, V components, the image is a black-and-white gray-scale image. The YUV space was adopted by color television to solve the compatibility problem between color and black-and-white television sets by means of the luminance signal Y, so that black-and-white sets can also receive color television signals.
The storage format of a YUV code stream is closely related to its color sampling mode. There are three mainstream color sampling modes: YUV4:4:4, YUV4:2:2 and YUV4:2:0. In the ratio N1:N2:N3, the numbers denote the relative sampling rates in the horizontal direction: N1 denotes the number of Y samples in odd and even rows, N2 denotes the number of U and V samples in odd rows, and N3 denotes the number of U and V samples in even rows.
Where YUV4:4:4 indicates that the chroma channel is not downsampled, i.e., there is a set of UV samples for each Y sample, as shown in fig. 10A.
As shown in fig. 10B, YUV4:2:2 denotes 2:1 horizontal downsampling without vertical downsampling, i.e. four Y samples for every two U or V samples in each scan line, so that every two Y samples share a set of UV components.
As shown in fig. 10C, YUV4:2:0 denotes 2:1 horizontal downsampling and 2:1 vertical downsampling, i.e. every four Y samples share a set of UV samples.
It should be noted that the color sampling mode according to the embodiment of the present application includes, but is not limited to, those shown in fig. 10A, 10B, and 10C. Optionally, a color sampling mode for down-sampling the luminance component is further included, such as YUV3:2:0, YUV3:2:2, and the like.
In some embodiments, the color sampling mode of the image block is a default, e.g., the color sampling mode of the image block defaults to YUV4:2:0 at both the encoding end and the decoding end.
In some embodiments, the encoding end carries the color sampling mode information of the image block in the code stream, so that the decoding end can obtain the color sampling mode of the image block by decoding the code stream.
In the present application, the size of the chroma prediction sample of the pixel point is related to the color sampling mode of the image block. When the color sampling mode is YUV444, the size of the chroma prediction sample of the pixel point is the same as the size of the luma prediction sample of the pixel point, and the position of the chroma prediction sample is the position of the luma prediction sample. When the color sampling mode is YUV422, one chroma prediction sample of the pixel point corresponds to 2 luma prediction samples. If the position coordinates of the at least one pixel point are as shown in fig. 7B, the size of the chroma prediction samples corresponding to the at least one pixel point is as shown in fig. 11A; for example, when the position coordinate of the pixel point is (1,2), the position coordinate of its chroma prediction sample is (0,2). When the color sampling mode is YUV420, one chroma prediction sample of the pixel point corresponds to 4 luma prediction samples. If the position coordinates of the at least one pixel point are as shown in fig. 7B, the size of the chroma prediction samples corresponding to the at least one pixel point is as shown in fig. 11B; for example, when the position coordinate of the pixel point is (1,2), the position coordinate of its chroma prediction sample is (0,2).
S802, according to the color sampling mode of the image block and the position information of the chroma prediction samples of the pixel points, the position information of M brightness prediction samples corresponding to the chroma prediction samples of the pixel points is determined, wherein M is a positive integer.
In some embodiments, if the color sampling pattern is YUV4:2:0, then M is 4; if the color sampling mode is YUV4:2:2, then M is 2; if the color sampling mode is YUV4:4:4, then M is 1.
Specifically, M luminance prediction samples corresponding to the chrominance prediction samples of the pixel point can be determined according to the color sampling mode of the image block, for example, if the color sampling mode of the image block is YUV444, the chrominance prediction samples of the pixel point correspond to one luminance prediction sample; if the color sampling mode of the image block is YUV422, the chroma prediction sample of the pixel point corresponds to two brightness prediction samples; if the color sampling mode of the image block is YUV420, the chroma prediction samples of the pixel point correspond to 4 luma prediction samples.
For example, if the color sampling mode of the image block is YUV422 and the position coordinate of the pixel point is (1,2), the position coordinate of the chroma prediction sample of the pixel point obtained from fig. 11A is (0,2); this chroma prediction sample corresponds to 2 luma prediction samples, whose position coordinates are (0,2) and (1,2). If the color sampling mode of the image block is YUV420 and the position coordinate of the pixel point is (1,2), the position coordinate of the chroma prediction sample obtained from fig. 11B is (0,2); this chroma prediction sample corresponds to 4 luma prediction samples, whose position coordinates are (0,2), (1,2), (0,3) and (1,3).
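A sketch of S802, following the coordinate convention of the example above, in which the chroma prediction sample position is expressed in luma coordinates and anchored at the top-left luma sample of its group (this convention is inferred from the example, not stated normatively):

```python
# Map a chroma prediction sample position (cx, cy) to the positions of its
# M corresponding luma prediction samples for the three sampling modes.
def luma_positions(mode, cx, cy):
    if mode == "YUV444":                                   # M = 1
        return [(cx, cy)]
    if mode == "YUV422":                                   # M = 2
        return [(cx, cy), (cx + 1, cy)]
    if mode == "YUV420":                                   # M = 4
        return [(cx, cy), (cx + 1, cy),
                (cx, cy + 1), (cx + 1, cy + 1)]
    raise ValueError("unsupported color sampling mode")

# Usage: luma_positions("YUV420", 0, 2) yields (0,2), (1,2), (0,3), (1,3),
# matching the example above.
```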
And S803, determining a value of a reference pixel corresponding to the position information of at least one brightness prediction sample in the M brightness prediction samples according to the position information and the pixel point types of the M brightness prediction samples and at least one of the reference pixel index matrix, the unmatched pixel index matrix and the unmatched pixel list.
In this application, when the types of the pixel points are different, the process of determining the value of the reference pixel corresponding to the position information of at least one brightness prediction sample in the M brightness prediction samples is different according to the position information of the M brightness prediction samples and at least one of the reference pixel index matrix, the unmatched pixel index matrix and the unmatched pixel list.
In some embodiments, if the type of the pixel point is an equivalent string or a unit basis vector string, then the step S803 includes the step S803-A: determining the value of a reference pixel corresponding to the position information of at least one brightness prediction sample in the M brightness prediction samples according to the position information of the M brightness prediction samples, the reference pixel index matrix and a preset first reference pixel list.
For example, according to the position information of the M luma prediction samples, an index of a reference pixel corresponding to the position information of at least one luma prediction sample in the M luma prediction samples is looked up in a reference pixel index matrix; and determining the value of the reference pixel corresponding to the position information of at least one brightness prediction sample in the M brightness prediction samples from the first reference pixel list according to the index of the reference pixel corresponding to the position information of at least one brightness prediction sample in the M brightness prediction samples.
In some embodiments, if the type of the pixel is an unmatched pixel, the step S803 includes the step S803-B: and determining the value of the reference pixel corresponding to the position information of at least one brightness prediction sample in the M brightness prediction samples according to the position information of the M brightness prediction samples, the unmatched pixel index matrix and the unmatched pixel list.
For example, according to the position information of the M luma prediction samples, the first luma prediction sample among the M luma prediction samples whose position information is the same as that of the pixel point is determined; according to the position information of the first luma prediction sample, the index of the reference pixel corresponding to that position information is determined in the unmatched pixel index matrix or the reference pixel index matrix; and the value of the reference pixel corresponding to the position information of the first luma prediction sample is determined in the unmatched pixel list according to that index.
In this example, if M is greater than or equal to 2, a second luma prediction sample associated with the position information of the pixel point is further included in the M luma prediction samples, and at this time, the method according to this embodiment of the present application further includes: according to the position information of the second brightness prediction sample, searching the index of the reference pixel corresponding to the position information of the second brightness prediction sample in the reference pixel index matrix; and determining the value of the reference pixel corresponding to the position information of the second brightness prediction sample from the first reference pixel list according to the index of the reference pixel corresponding to the position information of the second brightness prediction sample.
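A sketch of S803-B, under the indexing assumptions of the earlier sketches; in particular, positions are (x, y) pairs while the matrices are indexed as matrix[y][x], which is an assumption here:

```python
# A sketch of S803-B for an unmatched pixel point at position pixel_pos:
# the first luma prediction sample (at the pixel point's own position) reads
# from the unmatched pixel list; any second sample reads from ref_list.
def reference_values_for_unmatched(positions, pixel_pos, ref_index, ref_list,
                                   unmatched_pixel_index, unmatched_pixel_list):
    values = []
    for (x, y) in positions:                   # positions of the M luma samples
        if (x, y) == pixel_pos:                # first luma prediction sample
            idx = unmatched_pixel_index[y][x]
            values.append(unmatched_pixel_list[idx - 1])
        else:                                  # second luma prediction sample
            idx = ref_index[y][x]
            values.append(ref_list[idx - 1])
    return values
```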
S804, according to the value of the reference pixel corresponding to the position information of at least one brightness prediction sample in the M brightness prediction samples, determining the chroma prediction sample value of the pixel point.
The above manners for determining the chroma prediction sample value of the pixel point in S804 include, but are not limited to, the following:
In the first mode, the value of the reference pixel corresponding to the position information of one target luma prediction sample among the M luma prediction samples is determined, and the chroma prediction sample value of the pixel point is determined according to the chroma value of the reference pixel corresponding to the position information of the target luma prediction sample.
Optionally, the target luma prediction sample may be any luma prediction sample among M luma prediction samples.
Optionally, the target luma prediction sample is a luma prediction sample located at the upper left corner of the M luma prediction samples.
In some embodiments, after the position information of the target luma prediction sample is obtained, the chroma value of the pixel point is determined according to the chroma value of the reference pixel corresponding to the position information of the target luma prediction sample.
In some embodiments, after the position information of the target luma prediction sample is obtained, it is first determined whether the chroma value of the reference pixel corresponding to that position information is 0.
If the chroma value of the reference pixel corresponding to the position information of the target luma prediction sample is not 0, the chroma value of the pixel point is determined according to the chroma value of the reference pixel at that position information.
If the chroma value of the reference pixel corresponding to the position information of the target luma prediction sample is 0, the decoded chroma prediction sample value adjacent above the chroma prediction sample of the pixel point is determined as the chroma prediction sample value of the pixel point; for example, if the position coordinate of the chroma prediction sample of the pixel point is (0,2), the value of the chroma prediction sample at (0,0) is determined as the chroma prediction sample value of the pixel point. Alternatively, the M luma prediction samples corresponding to a chroma prediction sample adjacent to the chroma prediction sample of the pixel point are obtained, and the chroma value of the reference pixel corresponding to the position information of the top-left luma prediction sample among those M luma prediction samples is determined as the chroma prediction value of the pixel point.
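A sketch of this first mode, assuming the reference pixel value is a (Y, Cb, Cr) tuple and above_chroma_sample is the already decoded chroma prediction sample adjacent above (both assumptions for illustration):

```python
# A sketch of the first mode of S804: use the chroma value of the reference
# pixel at the target (e.g. top-left) luma prediction sample; fall back to the
# decoded chroma sample adjacent above when that chroma value is 0.
def chroma_mode1(target_reference_pixel, above_chroma_sample):
    _, cb, cr = target_reference_pixel
    if (cb, cr) != (0, 0):
        return (cb, cr)              # non-zero chroma: use it directly
    return above_chroma_sample       # zero chroma: fall back to the sample above
```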
In a second mode, if the first reference pixel list includes the position information of the reference pixel, a target luma prediction sample is determined from the M luma prediction samples. Then, according to the position information of the target luma prediction sample: if the coordinate of the target luma prediction sample is located in a preset position list, the chroma value of the reference pixel corresponding to that coordinate is determined as the chroma prediction sample value of the pixel point; or, if it is determined that a first pixel point corresponding to the coordinate of the target luma prediction sample belongs to an equivalent string and the first pixel point is a pixel point whose first coordinate value in the equivalent string is an integer multiple of 2, the chroma value of the reference pixel corresponding to the coordinate of the target luma prediction sample is determined as the chroma prediction sample value of the pixel point.
Wherein, the preset position list includes at least one of the following: a first reference pixel list and a second reference pixel list.
The second reference pixel list may be understood as an updated list of the first reference pixel list.
In some embodiments, if the reference pixel corresponding to the coordinate of the target luma prediction sample is not in the preset position list, or the first pixel point corresponding to that coordinate does not belong to an equivalent string, or the first pixel point belongs to an equivalent string but is not a pixel point whose first coordinate value in the equivalent string is an integer multiple of 2, the method further includes: downsampling the chroma values of the reference pixels corresponding to the position information of at least one luma prediction sample among the M luma prediction samples to obtain the chroma prediction sample value of the pixel point. For example, the average of the chroma values of the reference pixels corresponding to the position information of at least one luma prediction sample among the M luma prediction samples is used as the chroma prediction sample value of the pixel point; or, the average of the non-zero chroma values among those chroma values is used as the chroma prediction sample value of the pixel point.
In the third mode, the chroma values of the reference pixels corresponding to the position information of at least one of the M luma prediction samples are downsampled to obtain the chroma prediction sample value of the pixel point. For example, the average of the chroma values of the reference pixels corresponding to the position information of at least one luma prediction sample among the M luma prediction samples is used as the chroma prediction sample value of the pixel point; or, the average of the non-zero chroma values among those chroma values is used as the chroma prediction sample value of the pixel point.
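A sketch of the averaging used in the third mode (the same downsampling is reused as the fallback of the second mode); integer averaging is an assumption here:

```python
# Downsample the chroma values of the reference pixels of the M luma
# prediction samples: plain average, or average of the non-zero values only.
def chroma_downsample(chroma_values, nonzero_only=False):
    vals = [c for c in chroma_values if c != 0] if nonzero_only else list(chroma_values)
    if not vals:
        return 0                     # all chroma values are zero: predict 0
    return sum(vals) // len(vals)    # integer average as the prediction value
```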
It should be understood that fig. 6-11B are only examples of the present application and should not be construed as limiting the present application.
The preferred embodiments of the present application have been described in detail with reference to the accompanying drawings, however, the present application is not limited to the details of the above embodiments, and various simple modifications can be made to the technical solution of the present application within the technical concept of the present application, and these simple modifications are all within the protection scope of the present application. For example, the various features described in the foregoing detailed description may be combined in any suitable manner without contradiction, and various combinations that may be possible are not described in this application in order to avoid unnecessary repetition. For example, various embodiments of the present application may be arbitrarily combined with each other, and the same should be considered as the disclosure of the present application as long as the concept of the present application is not violated.
It should also be understood that, in the various method embodiments of the present application, the sequence numbers of the above-mentioned processes do not imply the order of execution, and the order of execution of the processes should be determined by their functions and inherent logic, and should not limit the implementation processes of the embodiments of the present application. In addition, in the embodiment of the present application, the term "and/or" is only one kind of association relationship describing an associated object, and means that three kinds of relationships may exist. Specifically, a and/or B may represent: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter associated objects are in an "or" relationship.
Method embodiments of the present application are described in detail above with reference to fig. 6-11B, and apparatus embodiments of the present application are described in detail below with reference to fig. 12-13.
Fig. 12 is a schematic block diagram of an apparatus for determining image block prediction samples provided in an embodiment of the present application, where the apparatus may belong to a decoding end, for example, a decoding device. Alternatively, the apparatus may also belong to an encoding end, such as an encoding device.
As shown in fig. 12, the apparatus 10 for determining prediction samples of an image block may include:
the acquiring unit 11 is configured to acquire an index of a reference pixel corresponding to each pixel in at least one pixel in the image block;
a first determining unit 12, configured to determine at least one of a reference pixel index matrix, an unmatched pixel index matrix, and an unmatched pixel list according to an index of a reference pixel corresponding to each pixel point in the at least one pixel point;
a second determining unit 13, configured to determine, for each pixel point of the at least one pixel point, a prediction sample of the pixel point according to at least one of the reference pixel index matrix, the unmatched pixel index matrix, and the unmatched pixel list;
a third determining unit 14, configured to determine a prediction sample of the image block according to the prediction sample of the pixel point;
the reference pixel index matrix includes an index of a reference pixel corresponding to each pixel point in the at least one pixel point, the unmatched pixel list includes a corresponding relationship between a value of an unmatched pixel in the at least one pixel point and an index of an unmatched pixel, and the unmatched pixel index matrix includes an index of an unmatched pixel in the at least one pixel point in the unmatched pixel list.
In some embodiments, the type of the pixel string to which the pixel point belongs includes any one of: equivalent string, unmatched pixel, unit base vector string.
In some embodiments, the second determining unit 13 is specifically configured to determine the type of the pixel point; and determining a prediction sample of the pixel point according to the type of the pixel point and at least one of the reference pixel index matrix, the unmatched pixel index matrix and the unmatched pixel list.
In some embodiments, the second determining unit 13 is specifically configured to query, according to the position information of the pixel point, an index of a reference pixel corresponding to the pixel point in the reference pixel index matrix; and determining the type of the pixel point according to the index of the reference pixel corresponding to the pixel point.
In some embodiments, the second determining unit 13 is specifically configured to determine that the type of the pixel is an unmatched pixel if the index value of the reference pixel corresponding to the pixel is a first numerical value.
In some embodiments, the second determining unit 13 is specifically configured to obtain a type of a pixel string to which each pixel point in the at least one pixel point belongs; determining the type of the vector string to which the pixel point belongs as the type of the pixel point to obtain a pixel type matrix, wherein the pixel type matrix comprises the type of each pixel point in the at least one pixel point; and inquiring the type corresponding to the pixel point in the pixel type matrix according to the position information of the pixel point.
In some embodiments, if the prediction sample of the pixel point includes a luminance prediction sample, the second determining unit 13 is specifically configured to determine the luminance prediction sample of the pixel point according to the position information of the pixel point, the reference pixel index matrix and a preset first reference pixel list if the type of the pixel point is an equivalent string or a unit base vector string; and if the type of the pixel point is an unmatched pixel, determining a brightness prediction sample of the pixel point according to the position information of the pixel point, the unmatched pixel index matrix and the unmatched pixel list.
In some embodiments, the second determining unit 13 is specifically configured to determine, according to the position information of the pixel point, an index of a reference pixel corresponding to the pixel point from the reference pixel index matrix; inquiring the value of the reference pixel corresponding to the pixel point in the first reference pixel list according to the index of the reference pixel corresponding to the pixel point; and determining the brightness value of the reference pixel corresponding to the pixel point as the brightness prediction sample value of the pixel point.
In some embodiments, the second determining unit 13 is specifically configured to determine, according to the position information of the pixel point, an index of an unmatched pixel corresponding to the pixel point from the unmatched pixel index matrix; inquiring the value of the unmatched pixel corresponding to the pixel point in the unmatched pixel list according to the index of the unmatched pixel corresponding to the pixel point; and determining the brightness value of the unmatched pixel corresponding to the pixel point as the brightness prediction sample value of the pixel point.
In some embodiments, if the prediction samples of the pixel point include chroma prediction samples, the second determining unit 13 is specifically configured to determine a color sampling mode of the image block and position information of the chroma prediction samples of the pixel point; determining the position information of M brightness prediction samples corresponding to the chroma prediction samples of the pixel points according to the color sampling mode of the image block and the position information of the chroma prediction samples of the pixel points, wherein M is a positive integer; determining a value of a reference pixel corresponding to the position information of at least one brightness prediction sample in the M brightness prediction samples according to the position information of the M brightness prediction samples and the types of the pixel points, and at least one of the reference pixel index matrix, the unmatched pixel index matrix and the unmatched pixel list; and determining the chroma prediction sample value of the pixel point according to the value of the reference pixel corresponding to the position information of at least one brightness prediction sample in the M brightness prediction samples.
In some embodiments, the second determining unit 13 is specifically configured to determine, if the type of the pixel point is an equal-valued string or a unit basis vector string, a value of a reference pixel corresponding to position information of at least one luminance prediction sample in the M luminance prediction samples according to the position information of the M luminance prediction samples, the reference pixel index matrix, and the first reference pixel list; and if the type of the pixel point is an unmatched pixel, determining a value of a reference pixel corresponding to the position information of at least one brightness prediction sample in the M brightness prediction samples according to the position information of the M brightness prediction samples, the unmatched pixel index matrix and the unmatched pixel list.
In some embodiments, the second determining unit 13 is specifically configured to query, in the reference pixel index matrix, an index of a reference pixel corresponding to the position information of at least one luma prediction sample in the M luma prediction samples according to the position information of the M luma prediction samples; and determining the value of the reference pixel corresponding to the position information of at least one brightness prediction sample in the M brightness prediction samples from the first reference pixel list according to the index of the reference pixel corresponding to the position information of at least one brightness prediction sample in the M brightness prediction samples.
In some embodiments, the second determining unit 13 is specifically configured to determine, according to the position information of the M luminance prediction samples, a first luminance prediction sample in the M luminance prediction samples that is the same as the position information of the pixel point; according to the position information of the first brightness prediction sample, determining the index of a reference pixel corresponding to the position information of the first brightness prediction sample in the unmatched pixel index matrix or the reference pixel index matrix; and determining the value of the reference pixel corresponding to the position information of the first luma prediction sample in the unmatched pixel list according to the index of the reference pixel corresponding to the position information of the first luma prediction sample.
In some embodiments, if the M luminance prediction samples further include a second luminance prediction sample corresponding to the position information of the pixel point, the second determining unit 13 is further configured to query, according to the position information of the second luminance prediction sample, an index of a reference pixel corresponding to the position information of the second luminance prediction sample in the reference pixel index matrix; and determining the value of the reference pixel corresponding to the position information of the second brightness prediction sample from the first reference pixel list according to the index of the reference pixel corresponding to the position information of the second brightness prediction sample.
In some embodiments, the second determining unit 13 is specifically configured to determine a value of a reference pixel corresponding to the position information of one target luma prediction sample in the M luma prediction samples; and determining the chroma prediction sample value of the pixel point according to the chroma value of the reference pixel corresponding to the position information of the target brightness prediction sample.
In some embodiments, the second determining unit 13 is specifically configured to determine the chroma prediction sample value of the pixel point according to the chroma value of the reference pixel of the position information of the target luma prediction sample if the chroma value of the reference pixel corresponding to the position information of the target luma prediction sample is not 0.
In some embodiments, if the chroma value of the reference pixel corresponding to the position information of the target luma prediction sample is 0, the second determining unit 13 is further configured to determine a decoded chroma prediction sample value adjacent to the chroma prediction sample of the pixel point as the chroma prediction sample value of the pixel point.
In some embodiments, if the first reference pixel list includes position information of the reference pixel, the second determining unit 13 is specifically configured to determine a target luma prediction sample from the M luma prediction samples; and, according to the position information of the target luma prediction sample, if the coordinate of the target luma prediction sample is determined to be located in a preset position list, or if the first pixel point corresponding to the coordinate of the target luma prediction sample belongs to an equivalent string and is a pixel point whose first coordinate value in the equivalent string is an integer multiple of 2, determine the chroma value of the reference pixel corresponding to the coordinate of the target luma prediction sample as the chroma prediction sample value of the pixel point, wherein the preset position list includes the position information of pixel points whose chroma prediction samples are not subjected to downsampling.
In some embodiments, if the reference pixel corresponding to the coordinate of the target luminance prediction sample is not in the preset position list, or the first pixel point corresponding to the coordinate of the target luminance prediction sample does not belong to the equivalent string, or the first pixel point belongs to the equivalent string and the first pixel point is not a pixel point of which the first coordinate value in the equivalent string is an integral multiple of 2, the second determining unit 13 is further configured to down-sample the chroma value of the reference pixel corresponding to the position information of at least one luminance prediction sample in the M luminance prediction samples, so as to obtain the chroma prediction sample value of the pixel point.
Optionally, the target luma prediction sample is a luma prediction sample located at the top left corner of the M luma prediction samples.
In some embodiments, the second determining unit 13 is specifically configured to perform downsampling on a chroma value of a reference pixel corresponding to the position information of at least one of the M luma prediction samples to obtain a chroma prediction sample value of the pixel point.
In some embodiments, the second determining unit 13 is specifically configured to take an average value of chrominance values of reference pixels corresponding to the position information of at least one luminance prediction sample in the M luminance prediction samples as the chrominance prediction sample value of the pixel point; or, taking an average value of non-zero chrominance values in the chrominance values of the reference pixels corresponding to the position information of at least one luminance prediction sample in the M luminance prediction samples as the chrominance prediction sample value of the pixel point.
In some embodiments, if the color sampling pattern is YUV4:2:0, then M is 4; if the color sampling mode is YUV4:2:2, then M is 2; if the color sampling mode is YUV4:4:4, then M is 1.
In some embodiments, the first reference pixel list comprises chrominance and luminance values of reference pixels.
In some embodiments, the first reference pixel list corresponds to a luminance sub-list, and each element in the luminance sub-list is used to indicate whether a corresponding reference pixel in the first reference pixel list has a luminance value different from zero and a chrominance value of zero.
In some embodiments, the list of unmatched pixels corresponds to a luminance sublist of unmatched pixels, and each element in the luminance sublist of unmatched pixels is used to indicate whether the corresponding unmatched pixel in the list of unmatched pixels has a luminance value different from zero and a chrominance value of zero.
It is to be understood that the apparatus embodiments and the method embodiments may correspond to each other, and similar descriptions can be found in the method embodiments. To avoid repetition, details are not repeated here. Specifically, the apparatus shown in fig. 12 may execute the method embodiment corresponding to the decoding end, and the foregoing and other operations and/or functions of each module in the apparatus respectively implement the corresponding flows of the method embodiment corresponding to the decoding end; for brevity, they are not described here again.
The apparatus of the embodiments of the present application is described above in terms of functional modules with reference to the accompanying drawings. It should be understood that the functional modules may be implemented by hardware, by instructions in software, or by a combination of hardware and software modules. Specifically, the steps of the method embodiments in the present application may be implemented by integrated logic circuits of hardware in a processor and/or instructions in the form of software; the steps of the methods disclosed in connection with the embodiments of the present application may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. Alternatively, the software modules may be located in a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, registers, or another storage medium known in the art. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method embodiments in combination with its hardware.
Fig. 13 is a schematic block diagram of an electronic device 30 provided in an embodiment of the present application, where the electronic device 30 may be a decoding device or an encoding device.
As shown in fig. 13, the electronic device 30 may be a video decoder according to an embodiment of the present application, and the electronic device 30 may include:
a memory 33 and a processor 32, the memory 33 being arranged to store a computer program 34 and to transfer the program code 34 to the processor 32. In other words, the processor 32 may call and run the computer program 34 from the memory 33 to implement the method in the embodiment of the present application.
For example, the processor 32 may be configured to perform the steps of the method 200 described above according to instructions in the computer program 34.
In some embodiments of the present application, the processor 32 may include, but is not limited to:
general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, and the like.
In some embodiments of the present application, the memory 33 includes, but is not limited to:
volatile memory and/or non-volatile memory. The non-volatile Memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash Memory. Volatile Memory can be Random Access Memory (RAM), which acts as external cache Memory. By way of example, and not limitation, many forms of RAM are available, such as Static random access memory (Static RAM, SRAM), Dynamic random access memory (Dynamic RAM, DRAM), Synchronous Dynamic random access memory (Synchronous DRAM, SDRAM), Double Data Rate Synchronous Dynamic random access memory (DDR SDRAM), Enhanced Synchronous SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct bus RAM (DR RAM).
In some embodiments of the present application, the computer program 34 may be divided into one or more units, which are stored in the memory 33 and executed by the processor 32 to perform the methods provided herein. The one or more elements may be a series of computer program instruction segments capable of performing certain functions, which are used to describe the execution of the computer program 34 in the electronic device 30.
As shown in fig. 13, the electronic device 30 may further include:
a transceiver 33, the transceiver 33 being connectable to the processor 32 or the memory 33.
The processor 32 may control the transceiver 33 to communicate with other devices, and specifically, may transmit information or data to the other devices or receive information or data transmitted by the other devices. The transceiver 33 may include a transmitter and a receiver. The transceiver 33 may further include antennas, and the number of antennas may be one or more.
It should be understood that the various components in the electronic device 30 are connected by a bus system that includes a power bus, a control bus, and a status signal bus in addition to a data bus.
The present application also provides a computer storage medium having stored thereon a computer program which, when executed by a computer, enables the computer to perform the method of the above-described method embodiments. Alternatively, the present application also provides a computer program product containing instructions, which when executed by a computer, cause the computer to execute the method of the above method embodiment.
When implemented in software, the above embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in accordance with the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., Digital Video Disc (DVD)), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the unit is only one logical functional division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment. For example, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present application, and all the changes or substitutions should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (31)

1. A method for determining prediction samples for an image block, comprising:
obtaining an index of a reference pixel corresponding to each pixel point in at least one pixel point in the image block;
determining at least one of a reference pixel index matrix, an unmatched pixel index matrix and an unmatched pixel list according to the index of the reference pixel corresponding to each pixel point in the at least one pixel point;
for each pixel point in the at least one pixel point, determining a prediction sample of the pixel point according to at least one of the reference pixel index matrix, the unmatched pixel index matrix and the unmatched pixel list;
determining a prediction sample of the image block according to the prediction sample of the pixel point;
the reference pixel index matrix comprises an index of a reference pixel corresponding to each pixel point in the at least one pixel point, the unmatched pixel list comprises a corresponding relation between a value of an unmatched pixel in the at least one pixel point and an index of the unmatched pixel, and the unmatched pixel index matrix comprises an index of the unmatched pixel in the at least one pixel point in the unmatched pixel list.
2. The method according to claim 1, wherein the type of the pixel string to which the pixel point belongs comprises any one of: equivalent string, unmatched pixel, unit base vector string.
3. The method of claim 2, wherein determining the predicted sample of the pixel point according to at least one of the reference pixel index matrix, the unmatched pixel index matrix, and the unmatched pixel list comprises:
determining the type of the pixel point;
and determining a prediction sample of the pixel point according to the type of the pixel point and at least one of the reference pixel index matrix, the unmatched pixel index matrix and the unmatched pixel list.
4. The method of claim 3, wherein said determining the type of the pixel point comprises:
querying the index of the reference pixel corresponding to the pixel point in the reference pixel index matrix according to the position information of the pixel point;
and determining the type of the pixel point according to the index of the reference pixel corresponding to the pixel point.
5. The method according to claim 4, wherein the determining the type of the pixel point according to the index of the reference pixel corresponding to the pixel point comprises:
and if the index value of the reference pixel corresponding to the pixel point is a first numerical value, determining that the type of the pixel point is an unmatched pixel.
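A minimal sketch of the type test of claims 4-5 follows; the concrete "first numerical value" is codec-defined, and the -1 used here is only an assumption of the sketch.

```python
UNMATCHED_INDEX_VALUE = -1  # stands in for the "first numerical value" of claim 5 (assumed)

def pixel_type_from_index(x, y, ref_pixel_index):
    """Claims 4-5 as a sketch: query the reference pixel index at the pixel
    point's position and classify the pixel point from it."""
    if ref_pixel_index[y][x] == UNMATCHED_INDEX_VALUE:
        return "unmatched pixel"
    # The claims do not state here how an equivalent string is told apart from
    # a unit base vector string by the index alone; this branch is a placeholder.
    return "equivalent string or unit base vector string"
```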
6. The method of claim 3, further comprising:
obtaining the type of a pixel string to which each pixel point in the at least one pixel point belongs;
determining the type of the pixel string to which the pixel point belongs as the type of the pixel point to obtain a pixel type matrix, wherein the pixel type matrix comprises the type of each pixel point in the at least one pixel point;
the determining the type of the pixel point includes:
and querying the type corresponding to the pixel point in the pixel type matrix according to the position information of the pixel point.
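Claim 6 can be pictured as a table build followed by a lookup, as in the sketch below; the (type, positions) representation of a pixel string is an assumption of the sketch.

```python
def build_pixel_type_matrix(pixel_strings, height, width):
    """Claim 6 as a sketch: copy each pixel string's type onto every pixel
    point it covers. `pixel_strings` is assumed to be an iterable of
    (string_type, positions) pairs, with positions given as (x, y) tuples."""
    type_matrix = [[None] * width for _ in range(height)]
    for string_type, positions in pixel_strings:
        for x, y in positions:
            type_matrix[y][x] = string_type
    return type_matrix

def pixel_type_from_matrix(x, y, type_matrix):
    """The query step of claim 6: read the type at the pixel point's position."""
    return type_matrix[y][x]
```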
7. The method of claim 3, wherein if the prediction sample of the pixel point comprises a luma prediction sample, the determining the prediction sample of the pixel point according to the type of the pixel point and at least one of the reference pixel index matrix, the unmatched pixel index matrix and the unmatched pixel list comprises:
if the type of the pixel point is an equivalent string or a unit base vector string, determining a luma prediction sample of the pixel point according to the position information of the pixel point, the reference pixel index matrix and a first reference pixel list;
and if the type of the pixel point is an unmatched pixel, determining a luma prediction sample of the pixel point according to the position information of the pixel point, the unmatched pixel index matrix and the unmatched pixel list.
8. The method of claim 7, wherein the determining the luma prediction sample of the pixel point according to the position information of the pixel point, the reference pixel index matrix and the first reference pixel list comprises:
determining, according to the position information of the pixel point, the index of the reference pixel corresponding to the pixel point from the reference pixel index matrix;
querying the value of the reference pixel corresponding to the pixel point in the first reference pixel list according to the index of the reference pixel corresponding to the pixel point;
and determining the luma value of the reference pixel corresponding to the pixel point as the luma prediction sample value of the pixel point.
9. The method of claim 7, wherein the determining the luma prediction sample of the pixel point according to the position information of the pixel point, the unmatched pixel index matrix and the unmatched pixel list comprises:
determining, according to the position information of the pixel point, the index of the unmatched pixel corresponding to the pixel point from the unmatched pixel index matrix;
querying the value of the unmatched pixel corresponding to the pixel point in the unmatched pixel list according to the index of the unmatched pixel corresponding to the pixel point;
and determining the luma value of the unmatched pixel corresponding to the pixel point as the luma prediction sample value of the pixel point.
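A non-normative sketch of the luma branch of claims 7-9 follows; the dictionary layout of the list entries and all names are assumptions of the sketch.

```python
def luma_prediction_sample(x, y, pixel_type, ref_pixel_index, first_ref_list,
                           unmatched_index, unmatched_list):
    """Claims 7-9 as a sketch: pick the luma prediction sample by pixel type."""
    if pixel_type in ("equivalent string", "unit base vector string"):
        ref = first_ref_list[ref_pixel_index[y][x]]        # claim 8
    else:                                                   # unmatched pixel, claim 9
        ref = unmatched_list[unmatched_index[y][x]]
    return ref["luma"]  # the luma component of the looked-up value (assumed layout)
```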
10. The method of claim 3, wherein if the prediction sample of the pixel point comprises a chroma prediction sample, the determining the prediction sample of the pixel point according to the type of the pixel point and at least one of the reference pixel index matrix, the unmatched pixel index matrix and the unmatched pixel list comprises:
determining the color sampling mode of the image block and the position information of the chroma prediction sample of the pixel point;
determining the position information of M luma prediction samples corresponding to the chroma prediction sample of the pixel point according to the color sampling mode of the image block and the position information of the chroma prediction sample of the pixel point, wherein M is a positive integer;
determining a value of a reference pixel corresponding to the position information of at least one luma prediction sample in the M luma prediction samples according to the position information of the M luma prediction samples, the type of the pixel point and at least one of the reference pixel index matrix, the unmatched pixel index matrix and the unmatched pixel list;
and determining the chroma prediction sample value of the pixel point according to the value of the reference pixel corresponding to the position information of at least one luma prediction sample in the M luma prediction samples.
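The position mapping recited in claim 10 can be sketched as below, with M taken from claim 23; the exact geometric layout of the co-sited luma samples is an assumption of the sketch.

```python
def luma_positions_for_chroma(cx, cy, color_sampling_mode):
    """Claim 10 as a sketch: the M luma prediction sample positions that
    correspond to the chroma prediction sample at (cx, cy)."""
    if color_sampling_mode == "YUV4:2:0":   # M = 4 (claim 23)
        x, y = 2 * cx, 2 * cy
        return [(x, y), (x + 1, y), (x, y + 1), (x + 1, y + 1)]
    if color_sampling_mode == "YUV4:2:2":   # M = 2
        return [(2 * cx, cy), (2 * cx + 1, cy)]
    return [(cx, cy)]                       # YUV4:4:4, M = 1
```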
11. The method according to claim 10, wherein the determining a value of a reference pixel corresponding to the position information of at least one luma prediction sample of the M luma prediction samples according to the position information of the M luma prediction samples, the type of the pixel point, and at least one of the reference pixel index matrix, the unmatched pixel index matrix and the unmatched pixel list comprises:
if the type of the pixel point is an equivalent string or a unit base vector string, determining a value of a reference pixel corresponding to the position information of at least one luma prediction sample in the M luma prediction samples according to the position information of the M luma prediction samples, the reference pixel index matrix and a first reference pixel list;
and if the type of the pixel point is an unmatched pixel, determining a value of a reference pixel corresponding to the position information of at least one luma prediction sample in the M luma prediction samples according to the position information of the M luma prediction samples, the unmatched pixel index matrix and the unmatched pixel list.
12. The method according to claim 11, wherein the determining the value of the reference pixel corresponding to the position information of at least one luma prediction sample among the M luma prediction samples according to the position information of the M luma prediction samples, the reference pixel index matrix and the first reference pixel list comprises:
querying, according to the position information of the M luma prediction samples, the index of the reference pixel corresponding to the position information of at least one luma prediction sample in the M luma prediction samples in the reference pixel index matrix;
and determining the value of the reference pixel corresponding to the position information of at least one luma prediction sample in the M luma prediction samples from the first reference pixel list according to the index of the reference pixel corresponding to the position information of the at least one luma prediction sample.
13. The method according to claim 11, wherein the determining a value of a reference pixel corresponding to the position information of at least one luma prediction sample in the M luma prediction samples according to the position information of the M luma prediction samples, the unmatched pixel index matrix and the unmatched pixel list comprises:
determining, according to the position information of the M luma prediction samples, a first luma prediction sample among the M luma prediction samples whose position is the same as that of the pixel point;
determining, according to the position information of the first luma prediction sample, the index of the reference pixel corresponding to the position information of the first luma prediction sample in the unmatched pixel index matrix or the reference pixel index matrix;
and determining the value of the reference pixel corresponding to the position information of the first luma prediction sample in the unmatched pixel list according to the index of the reference pixel corresponding to the position information of the first luma prediction sample.
14. The method of claim 13, wherein if a second luma prediction sample associated with the location information of the pixel point is further included in the M luma prediction samples, the method further comprises:
querying, according to the position information of the second luma prediction sample, the index of the reference pixel corresponding to the position information of the second luma prediction sample in the reference pixel index matrix;
and determining the value of the reference pixel corresponding to the position information of the second luma prediction sample from the first reference pixel list according to the index of the reference pixel corresponding to the position information of the second luma prediction sample.
15. The method according to any one of claims 10-14, wherein the determining the chroma prediction sample value of the pixel point according to the value of the reference pixel corresponding to the position information of at least one of the M luma prediction samples comprises:
determining a value of a reference pixel corresponding to the position information of a target luma prediction sample in the M luma prediction samples;
and determining the chroma prediction sample value of the pixel point according to the chroma value of the reference pixel corresponding to the position information of the target luma prediction sample.
16. The method of claim 15, wherein the determining the chroma prediction sample value of the pixel point according to the chroma value of the reference pixel corresponding to the position information of the target luma prediction sample comprises:
and if the chroma value of the reference pixel corresponding to the position information of the target luma prediction sample is not 0, determining the chroma prediction sample value of the pixel point according to the chroma value of the reference pixel corresponding to the position information of the target luma prediction sample.
17. The method of claim 16, wherein if the chroma value of the reference pixel corresponding to the position information of the target luma prediction sample is 0, the method further comprises:
and determining the decoded chroma prediction sample value adjacent above the chroma prediction sample of the pixel point as the chroma prediction sample value of the pixel point.
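Claims 16-17 reduce to a non-zero test with a fallback, sketched below under the assumption that both candidate values have already been looked up.

```python
def chroma_from_target(target_chroma_value, above_decoded_chroma_value):
    """Claims 16-17 as a sketch: keep the target reference pixel's chroma
    value when it is non-zero (claim 16), otherwise fall back to the decoded
    chroma prediction sample adjacent above (claim 17)."""
    if target_chroma_value != 0:
        return target_chroma_value
    return above_decoded_chroma_value
```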
18. The method according to any one of claims 10-14, wherein the determining the chroma prediction sample value of the pixel point according to the value of the reference pixel corresponding to the position information of at least one of the M luma prediction samples comprises:
determining a target luma prediction sample from the M luma prediction samples;
according to the position information of the target luma prediction sample, if it is determined that the coordinate of the target luma prediction sample is located in a preset position list, or that a first pixel point corresponding to the coordinate of the target luma prediction sample belongs to an equivalent string and is a pixel point whose first coordinate value is an integral multiple of 2 in the equivalent string, determining the chroma value of the reference pixel corresponding to the coordinate of the target luma prediction sample as the chroma prediction sample value of the pixel point, wherein the preset position list comprises the position information of pixel points whose chroma prediction samples are not subjected to downsampling.
19. The method of claim 18, wherein if the reference pixel corresponding to the coordinate of the target luma prediction sample is not in a preset position list, or if the first pixel corresponding to the coordinate of the target luma prediction sample does not belong to an equivalent string, or if the first pixel belongs to an equivalent string and the first pixel is not a pixel whose first coordinate value in the equivalent string is an integer multiple of 2, the method further comprises:
and downsampling the chroma value of the reference pixel corresponding to the position information of at least one luma prediction sample in the M luma prediction samples to obtain the chroma prediction sample value of the pixel point.
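A sketch of the decision recited in claims 18-19 follows. Claim 18 tests the coordinate against the preset position list while claim 19 speaks of the reference pixel; the sketch uses the coordinate form and takes the first coordinate value to be x.

```python
def chroma_by_position_rule(coord, preset_position_list, in_equivalent_string,
                            direct_chroma_value, chroma_values_to_downsample):
    """Claims 18-19 as a sketch: copy the chroma value directly when the
    coordinate is in the preset position list, or when the pixel point belongs
    to an equivalent string and its first coordinate value is a multiple of 2;
    otherwise downsample (here a simple mean, one of the claim 22 variants)."""
    x, _y = coord
    if coord in preset_position_list or (in_equivalent_string and x % 2 == 0):
        return direct_chroma_value
    return sum(chroma_values_to_downsample) / len(chroma_values_to_downsample)
```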
20. The method according to any of claims 15-19, wherein the target luma prediction sample is a luma prediction sample located at the top-left corner of the M luma prediction samples.
21. The method according to claim 10, wherein the determining the chroma prediction sample value of the pixel point according to the value of the reference pixel corresponding to the position information of at least one luma prediction sample in the M luma prediction samples comprises:
and downsampling the chroma value of the reference pixel corresponding to the position information of at least one luma prediction sample in the M luma prediction samples to obtain the chroma prediction sample value of the pixel point.
22. The method according to claim 19 or 21, wherein the downsampling the chroma value of the reference pixel corresponding to the position information of at least one luma prediction sample in the M luma prediction samples to obtain the chroma prediction sample value of the pixel point comprises:
taking the average value of the chroma values of the reference pixels corresponding to the position information of at least one luma prediction sample in the M luma prediction samples as the chroma prediction sample value of the pixel point; or,
taking the average value of the non-zero chroma values among the chroma values of the reference pixels corresponding to the position information of at least one luma prediction sample in the M luma prediction samples as the chroma prediction sample value of the pixel point.
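The two averaging variants of claim 22 are sketched side by side below; integer rounding, which a codec specification would normally pin down, is ignored here.

```python
def downsample_chroma(chroma_values, skip_zeros=False):
    """Claim 22 as a sketch: average all chroma values, or only the non-zero
    ones when `skip_zeros` is set (the two variants recited in the claim)."""
    values = [c for c in chroma_values if c != 0] if skip_zeros else list(chroma_values)
    return sum(values) / len(values) if values else 0
```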
23. The method according to any one of claims 10-14, wherein if the color sampling mode is YUV4:2:0, M is 4; if the color sampling mode is YUV4:2:2, M is 2; and if the color sampling mode is YUV4:4:4, M is 1.
24. The method of claim 7, wherein the first reference pixel list comprises chroma values and luma values of reference pixels; or, the first reference pixel list comprises position information of the reference pixels.
25. The method according to any one of claims 1-14, wherein the first reference pixel list corresponds to a luma sub-list, and each element in the luma sub-list is used to indicate whether a corresponding reference pixel in the first reference pixel list has a luma value different from zero and a chroma value of zero.
26. The method according to any one of claims 1-14, wherein the unmatched pixel list corresponds to a luminance sub-list of unmatched pixels, and each element in the luminance sub-list of unmatched pixels is used to indicate whether the corresponding unmatched pixel in the unmatched pixel list has a luminance value different from zero and a chrominance value of zero.
27. The method of claim 18, wherein the preset position list comprises at least one of: a first reference pixel list and a second reference pixel list, the second reference pixel list being an updated list of the first reference pixel list.
28. An apparatus for determining prediction samples for an image block, comprising:
an acquisition unit, configured to acquire an index of a reference pixel corresponding to each pixel point in at least one pixel point in the image block;
a first determining unit, configured to determine at least one of a reference pixel index matrix, an unmatched pixel index matrix and an unmatched pixel list according to the index of the reference pixel corresponding to each pixel point in the at least one pixel point;
a second determining unit, configured to determine, for each pixel point in the at least one pixel point, a prediction sample of the pixel point according to at least one of the reference pixel index matrix, the unmatched pixel index matrix and the unmatched pixel list;
a third determining unit, configured to determine a prediction sample of the image block according to the prediction sample of the pixel point;
the reference pixel index matrix comprises an index of a reference pixel corresponding to each pixel point in the at least one pixel point, the unmatched pixel list comprises a corresponding relation between a value of an unmatched pixel in the at least one pixel point and an index of the unmatched pixel, and the unmatched pixel index matrix comprises an index of the unmatched pixel in the at least one pixel point in the unmatched pixel list.
29. A decoding device, characterized in that the decoding device is configured to perform the method according to any one of claims 1 to 27.
30. An encoding device, characterized in that the encoding device is configured to perform the method according to any one of claims 1 to 27.
31. A computer-readable storage medium for storing a computer program which causes a computer to perform the method of any one of claims 1 to 27.
CN202110209565.0A 2021-02-24 2021-02-24 Image block prediction sample determining method and coding and decoding equipment Pending CN114979628A (en)

Priority Applications (1)

Application Number: CN202110209565.0A
Priority Date: 2021-02-24; Filing Date: 2021-02-24
Title: Image block prediction sample determining method and coding and decoding equipment

Publications (1)

Publication Number: CN114979628A; Publication Date: 2022-08-30

Family ID: 82974170

Family Applications (1)

Application Number: CN202110209565.0A (status: pending); Publication: CN114979628A (en)
Title: Image block prediction sample determining method and coding and decoding equipment

Country Status (1)

Country: CN; Document: CN114979628A (en)

Similar Documents

Publication Publication Date Title
US11611757B2 (en) Position dependent intra prediction combination extended with angular modes
TWI705694B (en) Slice level intra block copy and other video coding improvements
CN111819852B (en) Method and apparatus for residual symbol prediction in the transform domain
US20180249177A1 (en) Reference Frame Encoding Method and Apparatus, and Reference Frame Decoding Method and Apparatus
US20160182913A1 (en) Palette mode for subsampling format
US20160234494A1 (en) Restriction on palette block size in video coding
TW202005399A (en) Block-based adaptive loop filter (ALF) design and signaling
CN113411613B (en) Method for video coding image block, decoding device and coder/decoder
US20230042484A1 (en) Decoding method and coding method for unmatched pixel, decoder, and encoder
WO2023044868A1 (en) Video encoding method, video decoding method, device, system, and storage medium
CN113938679A (en) Image type determination method, device, equipment and storage medium
CN116614625A (en) Video coding method, device and medium
CN114979628A (en) Image block prediction sample determining method and coding and decoding equipment
CN114979629A (en) Image block prediction sample determining method and coding and decoding equipment
CN116760976B (en) Affine prediction decision method, affine prediction decision device, affine prediction decision equipment and affine prediction decision storage medium
WO2024092425A1 (en) Video encoding/decoding method and apparatus, and device and storage medium
WO2023197229A1 (en) Video coding/decoding method, apparatus, device and system and storage medium
WO2023236113A1 (en) Video encoding and decoding methods, apparatuses and devices, system, and storage medium
WO2023184250A1 (en) Video coding/decoding method, apparatus and system, device and storage medium
US20230319267A1 (en) Video coding method and video decoder
CN116405701A (en) Image filtering method, device, equipment and storage medium
CN116918326A (en) Video encoding and decoding method and system, video encoder and video decoder
CN116965020A (en) Video encoding and decoding method and system and video encoder and decoder
CN117121485A (en) Video encoding and decoding method and system and video encoder and decoder
CN116982312A (en) Video encoding and decoding method and system and video encoder and decoder

Legal Events

Code: PB01; Event: Publication
Code: SE01; Event: Entry into force of request for substantive examination
Code: REG; Event: Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40074376)