US11477465B2 - Colour component prediction method, encoder, decoder, and storage medium - Google Patents


Publication number
US11477465B2
US11477465B2
Authority
US
United States
Prior art keywords
reference sample
current block
block
sample set
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US17/454,612
Other languages
English (en)
Other versions
US20220070476A1 (en)
Inventor
Shuai Wan
Yanzhuo Ma
Junyan Huo
Haixin Wang
Fuzheng Yang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Assigned to GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD. reassignment GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HUO, JUNYAN, MA, YANZHUO, WAN, SHUAI, WANG, HAIXIN, YANG, FUZHENG
Publication of US20220070476A1
Priority to US17/942,679 (published as US11770542B2)
Application granted
Publication of US11477465B2
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/105: Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/11: Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N19/132: Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N19/147: Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N19/176: Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/184: Adaptive coding characterised by the coding unit, the unit being bits, e.g. of the compressed video stream
    • H04N19/186: Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N19/423: Implementation details or hardware specially adapted for video compression or decompression, characterised by memory arrangements
    • H04N19/433: Hardware specially adapted for motion estimation or compensation, characterised by techniques for memory access
    • H04N19/593: Predictive coding involving spatial prediction techniques
    • H04N19/70: Syntax aspects related to video coding, e.g. related to compression standards
    • H04N19/80: Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation

Definitions

  • Embodiments of the present disclosure relate to the technical field of picture processing, and particularly to a method for colour component prediction, an encoder, a decoder, and a storage medium.
  • VVC: Versatile Video Coding
  • MIP: Matrix-based Intra Prediction
  • FIG. 1 is a composition block diagram of a video coding system according to an embodiment of the present disclosure.
  • FIG. 2 is a composition block diagram of a video decoding system according to an embodiment of the present disclosure.
  • FIG. 3 is a flowchart of a method for colour component prediction according to an embodiment of the present disclosure.
  • FIG. 4A is a structure diagram of positions of reference samples according to an embodiment of the present disclosure.
  • FIG. 4B is a structure diagram of down-sampling processing of reference samples according to an embodiment of the present disclosure.
  • FIG. 5A is a structure diagram of buffer filling according to the related technical solution.
  • FIG. 5B is a structure diagram of another buffer filling according to the related technical solution.
  • FIG. 5C is a structure diagram of buffer filling according to an embodiment of the present disclosure.
  • FIG. 6A is a structure diagram of determination of input samples according to the related technical solution.
  • FIG. 6B is another structure diagram of determination of input samples according to an embodiment of the present disclosure.
  • FIG. 7 is a flowchart of another method for colour component prediction according to an embodiment of the present disclosure.
  • FIG. 8 is a structure diagram of generation of a predicted value according to an embodiment of the present disclosure.
  • FIG. 9 is a composition structure diagram of an encoder according to an embodiment of the present disclosure.
  • FIG. 10 is a specific hardware structure diagram of an encoder according to an embodiment of the present disclosure.
  • FIG. 11 is a composition structure diagram of a decoder according to an embodiment of the present disclosure.
  • FIG. 12 is a specific hardware structure diagram of a decoder according to an embodiment of the present disclosure.
  • a first colour component, a second colour component, and a third colour component are usually adopted to represent a Coding Block (CB).
  • the three colour components are a luma component, a blue chroma component, and a red chroma component respectively.
  • the luma component is usually represented by sign Y
  • the blue chroma component is usually represented by sign Cb or U
  • the red chroma component is usually represented by sign Cr or V. Therefore, the video picture may be represented in a YCbCr format, or may be represented in a YUV format.
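As an illustrative aside, the mapping between RGB and the Y, Cb, and Cr components named above can be sketched with the widely used BT.601 full-range matrix. The specific matrix is an assumption for illustration only; the patent does not prescribe any particular conversion.

```python
def rgb_to_ycbcr(r, g, b):
    """Convert full-range 8-bit RGB to YCbCr (BT.601 coefficients, assumed).

    Y is the luma component; Cb and Cr are the blue and red chroma
    components referred to in the text.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return round(y), round(cb), round(cr)
```

For example, pure white maps to maximum luma with neutral chroma, which is why grey-scale content carries no chroma information.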
  • the first colour component may be the luma component
  • the second colour component may be the blue chroma component
  • the third colour component may be the red chroma component.
  • input data of MIP prediction may include the reference samples in the row above and in the left column of the current block, the MIP mode applied to the current block (which may be represented as modeId), information of a width and a height of the current block, whether transposition is needed, and the like; output data of MIP prediction may include a predicted value of the current block.
  • the MIP process may specifically include four steps: configuring an MIP core parameter, acquiring a reference sample, constructing input samples, and generating a predicted value. After the four steps, the predicted value of the current block may be obtained.
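A hedged, toy sketch of how these steps might fit together is shown below. Step 2 (acquiring the reference samples) is assumed already done and the samples are passed in; every function body is a simplified stand-in, not the actual VVC MIP rules, and the uniform-averaging weight matrix is a placeholder for the trained matrices a real MIP mode would select.

```python
def configure_core_parameters(width, height):
    # Step 1 (stand-in): choose how many averaged boundary samples to
    # keep per side; small blocks keep fewer, echoing the spirit of
    # the VVC design without reproducing its exact size rules.
    return 2 if width == 4 and height == 4 else 4

def construct_input_samples(ref_top, ref_left, boundary_size):
    # Step 3 (stand-in): down-sample each reference side to
    # `boundary_size` entries by averaging equal-length groups.
    def downsample(ref, n):
        step = len(ref) // n
        return [sum(ref[i * step:(i + 1) * step]) / step for i in range(n)]
    return downsample(ref_top, boundary_size) + downsample(ref_left, boundary_size)

def generate_prediction(inputs, width, height):
    # Step 4 (stand-in): multiply the input samples by a weight matrix.
    # A uniform matrix makes every predicted sample the mean of the
    # inputs; a real MIP mode selects a trained matrix by mode index.
    n = len(inputs)
    weights = [[1.0 / n] * n for _ in range(width * height)]
    flat = [sum(w * s for w, s in zip(row, inputs)) for row in weights]
    return [flat[r * width:(r + 1) * width] for r in range(height)]
```

Chaining the three stand-ins on a 4x4 block of flat reference samples returns a flat 4x4 prediction, which matches the intuition that MIP interpolates from the boundary.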
  • the embodiments of the present disclosure provide a method for colour component prediction.
  • a neighbouring reference sample set of a current block is determined, and a preset parameter value corresponding to the current block is determined, the neighbouring reference sample set including at least one reference sample; the neighbouring reference sample set and the preset parameter value are buffered to construct an input reference sample set; an input sample matrix is determined by means of a first preset calculation model based on the input reference sample set; and colour component prediction is performed on the current block according to the input sample matrix to obtain a prediction block of the current block.
  • the input sample matrix may be determined based on the input reference sample set and the first preset calculation model, while the derivation process of the input samples for matrix multiplication is also simplified, so that the derivation process of the input sample matrix is unified, and the solutions of the embodiments of the present disclosure no longer depend on the type of the current block and can also realize parallel processing, thereby reducing the calculation complexity.
  • Embodiments of the present disclosure provide a method for colour component prediction, an encoder, a decoder, and a storage medium, which may simplify the derivation process of input samples for matrix multiplication, and further can reduce the time complexity.
  • the embodiments of the present disclosure provide a method for colour component prediction, which may be applied to an encoder and include the following operations.
  • a neighbouring reference sample set of a current block is determined, and a preset parameter value corresponding to the current block is determined, the neighbouring reference sample set including at least one reference sample.
  • the neighbouring reference sample set and the preset parameter value are buffered to construct an input reference sample set.
  • An input sample matrix is determined by means of a first preset calculation model based on the input reference sample set.
  • Colour component prediction is performed on the current block according to the input sample matrix to obtain a prediction block of the current block.
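The buffering of the preset parameter value alongside the reference samples can be sketched as follows. This is a hedged illustration: the choice of 1 << (bit_depth - 1) (the mid-grey sample value) as the preset value, and the single differencing formula standing in for the "first preset calculation model", are plausible readings consistent with the VVC MIP design, not a literal transcription of the claims.

```python
def build_input_sample_matrix(ref_samples, bit_depth=10):
    # Buffer the preset parameter value at position 0, followed by the
    # neighbouring reference samples: together they form the input
    # reference sample set.  The preset value used here is an
    # illustrative assumption (the mid-grey level for this bit depth).
    preset = 1 << (bit_depth - 1)
    buffer = [preset] + list(ref_samples)
    # One differencing formula now covers every block size, so the
    # derivation no longer branches on the type of the current block.
    return [buffer[i + 1] - buffer[0] for i in range(len(ref_samples))]
```

Because each output entry depends only on two buffer positions, the entries can be computed in parallel, which is the complexity benefit the text describes.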
  • the embodiments of the present disclosure provide a method for colour component prediction, which may be applied to a decoder and include the following operations.
  • a neighbouring reference sample set of a current block is determined, and a preset parameter value corresponding to the current block is determined, the neighbouring reference sample set including at least one reference sample.
  • the neighbouring reference sample set and the preset parameter value are buffered to construct an input reference sample set.
  • An input sample matrix is determined by means of a first preset calculation model based on the input reference sample set.
  • Colour component prediction is performed on the current block according to the input sample matrix to obtain a prediction block of the current block.
  • an encoder which may include a first determination unit, a first buffer unit, and a first prediction unit.
  • the first determination unit is configured to determine a neighbouring reference sample set of a current block, and determine a preset parameter value corresponding to the current block, the neighbouring reference sample set including at least one reference sample.
  • the first buffer unit is configured to buffer the neighbouring reference sample set and the preset parameter value to construct an input reference sample set.
  • the first determination unit is further configured to determine an input sample matrix by means of a first preset calculation model based on the input reference sample set.
  • the first prediction unit is configured to perform colour component prediction on the current block according to the input sample matrix to obtain a prediction block of the current block.
  • an encoder which may include a first memory and a first processor.
  • the first memory may be configured to store a computer program capable of running in the first processor.
  • the first processor may be configured to run the computer program to execute the method as described in the first aspect.
  • the embodiments of the present disclosure provide a decoder, which may include a second determination unit, a second buffer unit, and a second prediction unit.
  • the second determination unit is configured to determine a neighbouring reference sample set of a current block, and determine a preset parameter value corresponding to the current block, the neighbouring reference sample set including at least one reference sample.
  • the second buffer unit is configured to buffer the neighbouring reference sample set and the preset parameter value to construct an input reference sample set.
  • the second determination unit is further configured to determine an input sample matrix by means of a first preset calculation model based on the input reference sample set.
  • the second prediction unit is configured to perform colour component prediction on the current block according to the input sample matrix to obtain a prediction block of the current block.
  • the embodiments of the present disclosure provide a decoder, which may include a second memory and a second processor.
  • the second memory may be configured to store a computer program capable of running in the second processor.
  • the second processor may be configured to run the computer program to execute the method as described in the second aspect.
  • the embodiments of the present disclosure provide a computer storage medium, which may store a colour component prediction program.
  • the colour component prediction program may be executed by a first processor to implement the method as described in the first aspect, or by a second processor to implement the method as described in the second aspect.
  • the embodiments of the present disclosure provide a method for colour component prediction, an encoder, a decoder, and a storage medium.
  • a neighbouring reference sample set of a current block is determined, and a preset parameter value corresponding to the current block is determined; the neighbouring reference sample set and the preset parameter value are buffered to construct an input reference sample set; an input sample matrix is determined by means of a first preset calculation model based on the input reference sample set; and colour component prediction is performed on the current block according to the input sample matrix to obtain a prediction block of the current block.
  • the input sample matrix may be determined based on the input reference sample set and the first preset calculation model, while the derivation process of the input samples for matrix multiplication is also simplified, so that the derivation process of the input sample matrix is unified, and the solutions of the embodiments of the present disclosure no longer depend on the type of current block and can realize parallel processing, thereby reducing the calculation complexity.
  • FIG. 1 is a composition block diagram of a video coding system according to an embodiment of the present disclosure.
  • the video coding system 100 includes a transform and quantization unit 101 , an intra estimation unit 102 , an intra prediction unit 103 , a motion compensation unit 104 , a motion estimation unit 105 , an inverse transform and inverse quantization unit 106 , a filter control analysis unit 107 , a filter unit 108 , a coding unit 109 , a decoded picture buffer unit 110 , etc.
  • the filter unit 108 may implement deblocking filtering and Sample Adaptive Offset (SAO) filtering.
  • the coding unit 109 may implement header information coding and Context-based Adaptive Binary Arithmetic Coding (CABAC).
  • a video coding block may be obtained by division of a Coding Tree Unit (CTU), and then residual sample information obtained by intra or inter prediction is processed through the transform and quantization unit 101 to transform the video coding block, including transforming the residual information from a sample domain to a transform domain and quantizing an obtained transform coefficient to further reduce a bit rate.
  • the intra estimation unit 102 and the intra prediction unit 103 are configured to perform intra prediction on the video coding block. Specifically, the intra estimation unit 102 and the intra prediction unit 103 are configured to determine an intra prediction mode to be adopted to code the video coding block.
  • the motion compensation unit 104 and the motion estimation unit 105 are configured to execute inter prediction coding on the received video coding block relative to one or more blocks in one or more reference frames to provide temporal prediction information.
  • Motion estimation executed by the motion estimation unit 105 is a process of generating a motion vector.
  • a motion of the video coding block may be estimated according to the motion vector, and then the motion compensation unit 104 executes motion compensation based on the motion vector determined by the motion estimation unit 105 .
  • the intra prediction unit 103 is further configured to provide selected intra predicted data to the coding unit 109 , and the motion estimation unit 105 also sends calculated motion vector data to the coding unit 109 .
  • the inverse transform and inverse quantization unit 106 is configured to reconstruct the video coding block, namely a residual block is reconstructed in the sample domain, an artifact with a blocking effect in the reconstructed residual block is removed through the filter control analysis unit 107 and the filter unit 108 , and then the reconstructed residual block is added to a predictive block in a frame of the decoded picture buffer unit 110 to generate a reconstructed video coding block.
  • the coding unit 109 is configured to code various coding parameters and quantized transform coefficients. In a CABAC-based coding algorithm, a context may be based on neighbouring coding blocks and configured to code information indicating the determined intra prediction mode to output a bitstream of the video signal.
  • the decoded picture buffer unit 110 is configured to store the reconstructed video coding block as a prediction reference. As video pictures are coded, new reconstructed video coding blocks may be continuously generated, and all these reconstructed video coding blocks may be stored in the decoded picture buffer unit 110 .
  • FIG. 2 is a composition structure diagram of a video decoding system according to an embodiment of the present disclosure.
  • the video decoding system 200 includes a decoding unit 201 , an inverse transform and inverse quantization unit 202 , an intra prediction unit 203 , a motion compensation unit 204 , a filter unit 205 , a decoded picture buffer unit 206 , etc.
  • the decoding unit 201 may implement header information decoding and CABAC decoding.
  • the filter unit 205 may implement deblocking filtering and SAO filtering. After coding processing shown in FIG. 1 is performed on an input video signal, a bitstream of the video signal is output.
  • the bitstream is input to the video decoding system 200 , and passes through the decoding unit 201 at first to obtain a decoded transform coefficient.
  • a residual block is generated in a sample domain by processing of the inverse transform and inverse quantization unit 202 for the transform coefficient.
  • the intra prediction unit 203 may be configured to generate predicted data of a current video decoding block based on a determined intra prediction mode and data of a previous decoded block from a present frame or picture.
  • the motion compensation unit 204 is configured to analyze a motion vector and another associated syntactic element to determine prediction information for the video decoding block and generate a predictive block of the video decoding block that is currently decoded by use of the prediction information.
  • the residual block from the inverse transform and inverse quantization unit 202 and the corresponding predictive block generated by the intra prediction unit 203 or the motion compensation unit 204 are summed to form a decoded video block.
  • An artifact with a blocking effect in the decoded video signal may be removed through the filter unit 205 to improve the video quality.
  • the decoded video block is stored in the decoded picture buffer unit 206 .
  • the decoded picture buffer unit 206 is configured to store a reference picture for subsequent intra prediction or motion compensation, and is further configured to output the video signal; the recovered original video signal is thus obtained.
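The decoder-side summation described above, i.e. the residual block recovered by inverse quantization and inverse transform added to the predictive block, with the result clipped to the valid sample range, can be sketched as:

```python
def reconstruct_block(residual, prediction, bit_depth=8):
    # Decoded video block = residual block + predictive block, with
    # each sample clipped to [0, 2^bit_depth - 1] so overshoot from
    # quantization noise cannot leave the representable range.
    max_val = (1 << bit_depth) - 1
    return [[min(max(r + p, 0), max_val)
             for r, p in zip(res_row, pred_row)]
            for res_row, pred_row in zip(residual, prediction)]
```

In-loop filtering (deblocking, SAO) would then run on the clipped result before it enters the decoded picture buffer.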
  • the method for colour component prediction in the embodiment of the present disclosure is mainly applied to the intra prediction unit 103 shown in FIG. 1 and the intra prediction unit 203 shown in FIG. 2 . That is, the method for colour component prediction of the embodiments of the present disclosure may be applied to not only a video coding system but also a video decoding system, and may even be applied to the video coding system and the video decoding system at the same time. No specific limits are made in the embodiments of the present disclosure.
  • the “current block” specifically refers to a current CB in intra prediction
  • the “current block” specifically refers to a current decoding block in intra prediction
  • Referring to FIG. 3, a flowchart of a method for colour component prediction according to an embodiment of the present disclosure is shown. As shown in FIG. 3, the method may include the following operations.
  • a neighbouring reference sample set of a current block is determined, and a preset parameter value corresponding to the current block is determined, the neighbouring reference sample set including at least one reference sample.
  • each current to-be-coded picture block may be called a coding block.
  • each coding block may include a first colour component, a second colour component, and a third colour component.
  • the current block is a coding block of which the first colour component, the second colour component, or the third colour component is currently to be predicted in the video picture.
  • first colour component prediction is performed on the current block
  • a first colour component is a luma component, that is, a to-be-predicted colour component is the luma component
  • the current block may also be called a luma block
  • the second colour component is a chroma component, that is, a to-be-predicted colour component is the chroma component
  • the current block may also be called a chroma block.
  • the neighbouring reference sample set may be obtained by filtering reference samples in the left neighbouring region and the top neighbouring region of the current block, by filtering reference samples in the left neighbouring region and the bottom-left neighbouring region of the current block, or by filtering reference samples in the top neighbouring region and the right neighbouring region of the current block, which is not specifically limited in the embodiment of the present disclosure.
  • the operation that a neighbouring reference sample set of a current block is determined may include the following operations.
  • a reference sample neighbouring to at least one side of the current block is acquired, the at least one side of the current block including at least one of a top side, a top-right side, a left side, or a bottom-left side.
  • the neighbouring reference sample set of the current block is determined according to the acquired reference sample.
  • the at least one side of the current block may be the top side (also referred to as a top line), may also be the top-right side (also referred to as a top-right line), or the left side (also referred to as a left column), or the bottom-left side (also referred to as a bottom-left column), and even may be a combination of two sides, such as the top side and the left side, which is not limited in the embodiment of the present disclosure.
  • the operation that a neighbouring reference sample set of a current block is determined may include the following operations.
  • the reference sample neighbouring to the at least one side of the current block is acquired, the at least one side including the top side and/or the left side.
  • the neighbouring reference sample set of the current block is determined according to the acquired reference sample.
  • the at least one side of the current block may include the left side of the current block and/or the top side of the current block, namely the at least one side of the current block may refer to the top side of the current block, or may refer to the left side of the current block, or may even refer to the top side and left side of the current block. No specific limits are made in the embodiment of the present disclosure.
  • the operation that a neighbouring reference sample set of a current block is determined may include the following operations.
  • First filtering processing is performed on the reference sample neighbouring to the at least one side of the current block to determine a reference sample neighbouring to the at least one side.
  • the neighbouring reference sample set of the current block is formed according to the acquired reference sample.
  • the method may further include the following operation.
  • the first filtering processing includes down-sampling filtering or low-pass filtering.
  • the neighbouring reference sample set may be obtained by filtering a reference sample neighbouring to the left side of the current block and a reference sample neighbouring to the top side of the current block.
  • the neighbouring reference sample set may be obtained by filtering a reference sample neighbouring to the left side of the current block.
  • the neighbouring reference sample set may be obtained by filtering a reference sample neighbouring to the top side of the current block.
  • the filtering may refer to down-sampling filtering, or may refer to low-pass filtering, which is not specifically limited in the embodiment of the present disclosure.
  • a reference sample of the MIP technology may be a reconstructed value of a reference sample in the line above and neighbouring to the current block and a reconstructed value of a reference sample in the left column neighbouring to the current block; that is, the reference samples of the current block may be obtained from the reference samples respectively corresponding to the top side and the left side of the current block.
  • Referring to FIG. 4A, a diagram of positions of reference samples according to an embodiment of the present disclosure is shown.
  • the reference samples corresponding to the top side of the current block are samples filled with gray, which may be represented by refT.
  • the reference samples corresponding to the left side of the current block are samples filled with slashes, which may be represented by refL.
  • the reference samples of the current block may include refT and refL, and the neighbouring reference sample set is obtained by filtering refT and refL. It is important to note that ineffective positions (e.g., boundaries of a picture) may be filled by the same method as that of acquiring reference samples in the traditional intra prediction technology.
  • the current block may be classified into one of three types according to its size, which may be recorded as mipSizeId. Specifically, different types of the current block correspond to different numbers of sample points included in the neighbouring reference sample set and different numbers of matrix multiplication output sample points.
  • the operation that the neighbouring reference sample set of the current block is determined according to the acquired reference sample may include the following operations.
  • Sampling positions of the reference samples are determined based on the at least one side of the current block.
  • Reference samples corresponding to the sampling positions are selected from the acquired reference samples, and the selected reference samples form the neighbouring reference sample set.
  • the operation that sampling positions of the reference samples are determined based on the at least one side of the current block may include the following operation.
  • Down-sampling processing is performed on the at least one side of the current block to determine the sampling positions.
  • boundarySize is related to mipSizeId of the current block.
  • the down-sampling ratio of each reference side may be represented by bDwn, which may be calculated according to formula (1).
  • every bDwn reference samples are subjected to an average operation.
  • Each obtained average value serves as a sample point of the down-sampled reference samples redS, which may be calculated according to formula (2).
  • S may be substituted with W and H respectively, where W represents the top side, and H represents the left side.
  • top-side reference samples redT obtained by down-sampling top-side reference samples refT of the current block
  • left-side reference samples redL obtained by down-sampling left-side reference samples refL of the current block
  • Referring to FIG. 4B, taking a 4×4 current block as an example, the redL obtained by down-sampling the left side includes two reference samples, namely a reference sample 1 and a reference sample 2; the redT obtained by down-sampling the top side includes two reference samples, namely a reference sample 3 and a reference sample 4.
  • the neighbouring reference sample set of the current block includes four reference samples.
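The per-side down-sampling described above can be sketched as follows. Since formulas (1) and (2) are not reproduced in this excerpt, the sketch assumes the standard MIP-style behaviour (bDwn equals the side length divided by boundarySize, and each group of bDwn samples is replaced by its rounded average); the function and variable names are illustrative.

```python
def downsample_boundary(ref, boundary_size):
    """Down-sample one side's reference samples to boundary_size points.

    Assumed formula (1): bDwn = len(ref) / boundary_size (a power of two).
    Assumed formula (2): each output sample is the rounded average of a
    group of bDwn consecutive reference samples.
    """
    b_dwn = len(ref) // boundary_size        # down-sampling ratio of this side
    log2_b = b_dwn.bit_length() - 1          # log2(bDwn)
    red = []
    for x in range(boundary_size):
        total = sum(ref[x * b_dwn + i] for i in range(b_dwn))
        # rounded average: add half the divisor before shifting
        red.append((total + (b_dwn >> 1)) >> log2_b)
    return red
```

For a 4×4 current block with four top-side samples and boundarySize 2, each pair of samples is averaged into one, yielding the two redT samples of the example above.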
  • a bit depth value (which may be represented by BitDepth) corresponding to a to-be-predicted colour component of the current block is also required.
  • Assuming that the to-be-predicted colour component is a luma component, a luma bit depth of the current block may be obtained; or, assuming that the to-be-predicted colour component is a chroma component, a chroma bit depth of the current block may be obtained, so that the preset parameter value of the current block is obtained.
  • the operation that a preset parameter value corresponding to the current block is determined may include the following operations.
  • a bit depth value corresponding to a to-be-predicted colour component of the current block is acquired.
  • the preset parameter value may be represented as 1 << (BitDepth − 1) after the BitDepth corresponding to the to-be-predicted colour component of the current block is acquired.
  • the obtained neighbouring reference sample set and the preset parameter value of the current block may be buffered to construct an input reference sample set.
  • an initial input reference sample set may be constructed first; one bit is added at the end of an initial buffer to buffer the preset parameter value to obtain an input reference sample set, which facilitates the subsequent construction of an input sample matrix.
  • the operation that the neighbouring reference sample set and the preset parameter value are buffered to construct an input reference sample set may include the following operations.
  • the neighbouring reference sample set is buffered to obtain an initial input reference sample set.
  • the preset parameter value is buffered by using a data unit after the initial input reference sample set to obtain the input reference sample set.
  • the operation that the neighbouring reference sample set is buffered to obtain an initial input reference sample set may include the following operations.
  • a value of a transposition processing indication flag is determined by using a Rate Distortion Optimization (RDO) manner.
  • when the value of the transposition processing indication flag is equal to 0, a reference sample is stored in a buffer, such that a reference sample corresponding to the top side of the current block in the neighbouring reference sample set is stored ahead of a reference sample corresponding to the left side of the current block in the neighbouring reference sample set, and the buffer is determined as the initial input reference sample set.
  • when the value of the transposition processing indication flag is equal to 1, the reference sample is stored in a buffer, such that the reference sample corresponding to the top side of the current block in the neighbouring reference sample set is stored after the reference sample corresponding to the left side of the current block in the neighbouring reference sample set, transposition processing is performed on the buffer, and the transposed buffer is determined as the initial input reference sample set.
  • the value of the transposition processing indication flag may be determined by RDO. For example, a first cost value when transposition processing is performed and a second cost value when transposition processing is not performed are calculated respectively. If the first cost value is less than the second cost value, it may be determined that the value of the transposition processing indication flag is equal to 1; then the reference samples corresponding to the top side in the neighbouring reference sample set may be stored after the reference samples corresponding to the left side in the neighbouring reference sample set, that is, transposition processing is required. If the first cost value is no less than the second cost value, it may be determined that the value of the transposition processing indication flag is equal to 0; then the reference sample corresponding to the top side of the current block in the neighbouring reference sample set may be stored ahead of the reference sample corresponding to the left side of the current block in the neighbouring reference sample set, that is, transposition processing is not required.
  • the determined value of the transposition processing indication flag needs to be written in a bitstream to facilitate subsequent parsing processing on the decoder side.
  • the operation that the neighbouring reference sample set is buffered to obtain an initial input reference sample set may include the following operations.
  • a bitstream is parsed to obtain a value of a transposition processing indication flag.
  • when the value of the transposition processing indication flag is equal to 0, a reference sample is stored in a buffer, such that a reference sample corresponding to the top side of the current block in the neighbouring reference sample set is stored ahead of a reference sample corresponding to the left side of the current block in the neighbouring reference sample set, and the buffer is determined as the initial input reference sample set.
  • when the value of the transposition processing indication flag is equal to 1, reference samples are stored in a buffer, such that the reference samples corresponding to the top side in the neighbouring reference sample set are stored after the reference samples corresponding to the left side in the neighbouring reference sample set, transposition processing is performed on the buffer, and the transposed buffer is determined as the initial input reference sample set.
  • the value of the transposition processing indication flag may be directly obtained by parsing the bitstream; then, it is determined whether to perform transposition processing on the buffer according to the value of the transposition processing indication flag.
  • the buffer may be represented by pTemp
  • redL includes a reference sample 1 and a reference sample 2
  • redT includes a reference sample 3 and a reference sample 4; thus, the buffer order in pTemp is the reference sample 3, the reference sample 4, the reference sample 1, and the reference sample 2. Since the reference samples corresponding to the top side of the current block are all stored ahead of the reference samples corresponding to the left side of the current block, transposition is omitted here, and the resulting buffer is the initial input reference sample set.
  • redL includes a reference sample 1 and a reference sample 2
  • redT includes a reference sample 3 and a reference sample 4
  • the buffer order in pTemp is the reference sample 1, the reference sample 2, the reference sample 3, and the reference sample 4. Since the reference samples corresponding to the top side are all stored after the reference samples corresponding to the left side, transposition is required here, and the transposed buffer is determined as the initial input reference sample set.
  • a data unit may be expanded after the initial input reference sample set.
  • the data unit is configured to buffer a preset parameter value, i.e., to store 1 << (BitDepth − 1), as shown in FIG. 5C.
  • Referring to FIG. 5C, still taking a 4×4 current block as an example, four values are stored in the initial input reference sample set, namely the reference samples obtained by down-sampling.
  • five values are stored in the input reference sample set, i.e., in addition to the four reference samples obtained by down-sampling, the preset parameter value is further stored.
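The buffering steps above (ordering the down-sampled sides by the transposition processing indication flag, then appending the preset parameter value) can be sketched as follows; the sketch assumes that storing the left-side samples first corresponds to the transposed buffer, and all function and argument names are illustrative.

```python
def build_input_reference_set(red_t, red_l, is_transposed, bit_depth):
    """Buffer redT/redL into pTemp and append the preset parameter value.

    When is_transposed == 0, top-side samples are stored ahead of
    left-side samples; when is_transposed == 1, the transposed order
    (left side first) is buffered. One extra data unit at the end
    stores the preset parameter value 1 << (BitDepth - 1).
    """
    if is_transposed == 0:
        p_temp = list(red_t) + list(red_l)   # top side ahead of left side
    else:
        p_temp = list(red_l) + list(red_t)   # transposed buffer order
    p_temp.append(1 << (bit_depth - 1))      # preset parameter value
    return p_temp
```

With the 4×4 example above (redT holding reference samples 3 and 4, redL holding 1 and 2) and a bit depth of 8, the resulting input reference sample set holds five values, ending with 128.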
  • an input sample matrix is determined by means of a first preset calculation model based on the input reference sample set.
  • input samples are matrix vectors to be subjected to matrix multiplication.
  • in the current solution, the input samples are determined by an initial buffer (represented by pTemp), the type (represented by mipSizeId) of the current block, a bit depth value (represented by BitDepth) corresponding to a to-be-predicted colour component, and the number of input samples, and finally an x th input sample (represented by P[x]) in the input sample matrix is obtained.
  • the initial buffer may be expanded to the input reference sample set and used to store 1 << (BitDepth − 1), so that the derivation process of input samples is no longer related to the type mipSizeId of the current block, and the derivation process of input samples for matrix multiplication is unified.
  • the input samples may be determined only through the input reference sample set (still represented by pTemp) and the number of input samples, so that an i th input sample (represented by p[i]) in the input sample matrix is acquired.
  • the operation that an input sample matrix is determined by means of a first preset calculation model based on the input reference sample set may include the following operations.
  • An i th input sample is calculated by means of the first preset calculation model according to a sample corresponding to the (i+1) th position and a sample corresponding to the 0 th position in the input reference sample set, where i is an integer greater than or equal to 0 and less than N, N representing the number of elements contained in the input sample matrix.
  • the input sample matrix is formed according to N input samples obtained by calculation.
  • the operation that an i th input sample is calculated by means of a first preset calculation model may include the following operation.
  • a subtraction operation is performed by means of the first preset calculation model to obtain the i th input sample.
  • the method may further include the following operations.
  • the minuend of the subtraction operation is set to be equal to the sample corresponding to the (i+1) th position in the input reference sample set; and the subtrahend of the subtraction operation is set to be equal to the sample corresponding to the 0 th position in the input reference sample set.
  • N is the number of input samples (which may also be represented by inSize), and the number of input samples is the number of elements contained in the input sample matrix;
  • pTemp[0] represents the sample corresponding to the 0 th position
  • pTemp[i+1] represents the sample corresponding to the (i+1) th position
  • p[i] represents the i th input sample.
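The unified first preset calculation model described above reduces to a single subtraction per input sample; a minimal sketch (function and argument names are illustrative):

```python
def derive_input_samples(p_temp, in_size):
    """Unified derivation of the input sample matrix:
    p[i] = pTemp[i + 1] - pTemp[0] for 0 <= i < inSize.

    Because the last data unit of pTemp already holds the preset
    parameter value 1 << (BitDepth - 1), this derivation no longer
    depends on the type mipSizeId of the current block.
    """
    # minuend: sample at position i + 1; subtrahend: sample at position 0
    return [p_temp[i + 1] - p_temp[0] for i in range(in_size)]
```

For the 4×4 example, pTemp holds the four down-sampled reference samples followed by the preset parameter value, and inSize is 4.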
  • colour component prediction is performed on the current block according to the input sample matrix to obtain a prediction block of the current block.
  • a temporary predicted value of at least one sample in the MIP block may be calculated first; then clipping processing, transposition processing, and up-sampling processing are carried out in sequence to finally obtain the prediction block of the current block.
  • Assuming that the to-be-predicted colour component is a luma component, the current block may be a current luma block, and finally a luma prediction block of the current luma block may be obtained, in which a luma predicted value of at least one sample is provided; or, assuming that the to-be-predicted colour component is a chroma component, the current block may be a current chroma block, and finally a chroma prediction block of the current chroma block may be obtained, in which a chroma predicted value of at least one sample is provided. No limits are made thereto in the embodiment of the present disclosure.
  • the step that colour component prediction is performed on the current block according to the input sample matrix to obtain a prediction block of the current block may include the following steps.
  • an MIP block of the current block is obtained according to the input sample matrix, the MIP block including a predicted sample at at least part of sample positions in the current block.
  • a weight matrix represented by mWeight
  • a shift factor represented by sW
  • an offset factor represented by fO
  • the operation that the MIP block of the current block is obtained according to the input sample matrix may include the following operations.
  • a weight matrix, a shift factor, and an offset factor corresponding to the current block are acquired.
  • Matrix multiplication processing is performed on the input sample matrix, the weight matrix, the shift factor, and the offset factor by means of a second preset calculation model to calculate the MIP block.
  • a weight matrix table is pre-established, and the weight matrix table is stored in the encoder or decoder.
  • a weight matrix mWeight[x][y] that the current block needs to use may be determined by looking up the table.
  • a shift factor table is also pre-established, as shown in Table 1, as well as an offset factor table, as shown in Table 2; the shift factor table and the offset factor table are also stored in the encoder or decoder.
  • the shift factor sW and the offset factor fO that need to be used for the current block may also be determined by looking up the tables.
  • after the weight matrix mWeight[x][y], the shift factor sW, and the offset factor fO are determined by looking up the tables, the MIP block predMip[x][y] may be calculated.
  • the second preset calculation model is shown below.
  • predSize represents a side length of the MIP block predMip.
  • the temporary predicted value of at least one sample in the MIP block predMip may be calculated to obtain the MIP block.
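Since the second preset calculation model itself is not reproduced in this excerpt, the matrix multiplication step can be sketched following the common VVC-style MIP formulation; the exact formula of the embodiment may differ in detail (e.g. the weight-matrix indexing), and all names (`matrix_multiply_mip`, `p_temp0`, etc.) are illustrative.

```python
def matrix_multiply_mip(p, m_weight, s_w, f_o, pred_size, p_temp0):
    """VVC-style sketch of the matrix-multiplication step:

    predMip[x][y] = (((sum_i mWeight[i][y*predSize + x] * p[i]) + oW) >> sW)
                    + pTemp[0],
    with the rounding/offset term oW = (1 << (sW - 1)) - fO * sum_i p[i].
    """
    o_w = (1 << (s_w - 1)) - f_o * sum(p)
    pred_mip = [[0] * pred_size for _ in range(pred_size)]
    for y in range(pred_size):
        for x in range(pred_size):
            acc = sum(m_weight[i][y * pred_size + x] * p[i]
                      for i in range(len(p)))
            pred_mip[x][y] = ((acc + o_w) >> s_w) + p_temp0
    return pred_mip
```

The result is the temporary predicted value of each sample in the predSize×predSize MIP block, before clipping.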
  • the temporary predicted value of at least one sample in the MIP block may be subjected to clipping processing. Specifically, if the temporary predicted value is less than 0, it can be set to 0; if the temporary predicted value is greater than (1 << BitDepth) − 1, it can be set to (1 << BitDepth) − 1, so that the range of the predicted value can be clipped between 0 and (1 << BitDepth) − 1.
  • the predicted value of at least one sample in the MIP block may be obtained, and the range of the predicted value is between 0 and (1 << BitDepth) − 1; then it is determined whether transposition processing is required according to the transposition processing indication flag isTransposed, so that the final MIP block is determined.
  • the operation that whether to perform transposition processing on the MIP block is judged may include the following operations.
  • a first cost value when transposition processing is performed on the MIP block and a second cost value when transposition processing is not performed on the MIP block are respectively calculated by using an RDO manner.
  • when the first cost value is less than the second cost value, it is determined to perform transposition processing on the MIP block; or when the first cost value is no less than the second cost value, it is determined not to perform transposition processing on the MIP block.
  • the operation that whether to perform transposition processing on the MIP block is judged may include the following operations.
  • a bitstream is parsed to obtain a value of a transposition processing indication flag.
  • the transposition processing indication flag is represented by isTransposed, and whether the MIP block needs to be transposed may be judged according to the value of isTransposed. Specifically, on the encoder side, if the first cost value is less than the second cost value, the value of isTransposed is 1, and it can be determined that the MIP block needs to be transposed; or, if the first cost value is no less than the second cost value, the value of isTransposed is 0, and it can be determined that the MIP block does not need to be transposed.
  • the value of the transposition processing indication flag may be obtained by parsing the bitstream; if the value of isTransposed is parsed to be 1, then it can be determined that the MIP block needs to be transposed; or, if the value of isTransposed is parsed to be 0, it can be determined that the MIP block does not need to be transposed.
  • when the value of isTransposed is 0, the MIP block predMip may be directly used to perform subsequent steps, i.e., to perform S 406 and judge whether the size of the MIP block is the same as the size of the current block; when the value of isTransposed is 1, it indicates that the MIP block needs to be transposed, and transposition processing may be performed with the following formula:
  • predMip[x][y]=predTemp[y][x]
  • the transposed MIP block may be obtained after the transposition processing and serves as the MIP block. Then, S 406 is also performed to judge whether the size of the MIP block is the same as the size of the current block.
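The clipping and optional transposition steps above can be sketched together; the sketch assumes isTransposed equal to 1 means "transpose", as described, and the function and argument names are illustrative.

```python
def clip_and_maybe_transpose(pred_mip, bit_depth, is_transposed):
    """Clip each temporary predicted value to [0, (1 << BitDepth) - 1],
    then transpose the square MIP block when isTransposed indicates so.
    """
    hi = (1 << bit_depth) - 1
    clipped = [[min(max(v, 0), hi) for v in row] for row in pred_mip]
    if is_transposed:
        # predMip[x][y] = predTemp[y][x]
        clipped = [list(col) for col in zip(*clipped)]
    return clipped
```

The clipped (and possibly transposed) block is then compared against the current block's size to decide whether up-sampling is needed.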
  • a prediction block of the current block is set to be equal to the MIP block, the prediction block containing a predicted sample at all sample positions in the current block.
  • the second filtering processing may include up-sampling filtering or low-pass filtering.
  • the size of the MIP block only includes two types: a 4×4 MIP block and an 8×8 MIP block, and thus the size of the current block may be the same as or different from the size of the MIP block.
  • when the sizes differ, the current block cannot be completely filled with the samples of the MIP block, so an up-sampling operation on the MIP block may be required for generation of a final predicted value; that is, by judging whether the size of the MIP block is the same as the size of the current block, it may be determined whether to perform up-sampling processing on the MIP block.
  • the current block may be filled with the MIP block directly, that is, there are no vacant samples in the filled current block, and a predicted value of each sample in the current block may be directly set to a predicted value of each sample in the MIP block, as shown below.
  • predSamples[x][y] represents a predicted value corresponding to a sample at position coordinates [x][y] in the current block
  • predMip[x][y] represents a predicted value corresponding to a sample at position coordinates [x][y] in the MIP block.
  • the MIP block predMip[x][y] may be directly used as the prediction block predSamples[x][y] of the current block.
  • the prediction block of the current block may be obtained.
  • the method may further include the following operations.
  • a horizontal up-sampling factor and a vertical up-sampling factor corresponding to the current block are determined.
  • According to the horizontal up-sampling factor and the vertical up-sampling factor, a predicted value of a to-be-filled sample position in the current block is determined by means of a third preset calculation model to obtain the prediction block of the current block, the to-be-filled sample position being a sample position in the current block different from a sample position in the MIP block.
  • the MIP block predMip[x][y] needs to be subjected to up-sampling in a linear interpolation mode.
  • the width of the current block is nTbW
  • the height of the current block is nTbH
  • the horizontal up-sampling factor (represented by upHor) may be calculated.
  • the vertical up-sampling factor (represented by upVer) may be calculated. The specific calculation formula is as follows.
  • the current block needs to be filled according to the horizontal up-sampling factor upHor and the vertical up-sampling factor upVer, that is, the up-sampling operation is performed.
  • the specific up-sampling manner is to first fill the positions predSamples[x][−1] corresponding to the line above the current block with the top reference samples refT, and then fill the positions predSamples[−1][y] corresponding to the left column of the current block with the left reference samples refL. Then, according to formula (10), for the sample positions to be filled in the current block, for example, a vacant position between predicted values at corresponding positions, or a vacant position between a reference sample and a predicted value filling a corresponding position, horizontal interpolation is performed followed by vertical interpolation, so that an up-sampling result predSamples[x][y] of the current block is finally obtained.
  • the predSamples[x][y] is a predicted value of the current block according to the MIP mode.
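The up-sampling factors, and the rounded linear interpolation applied between known samples, can be sketched as follows. Since the factor formulas and formula (10) are not reproduced in this excerpt, the sketch assumes upHor = nTbW / predSize and upVer = nTbH / predSize and MIP-style rounded interpolation; all names are illustrative.

```python
def upsample_factors(n_tb_w, n_tb_h, pred_size):
    """Assumed factor derivation: upHor = nTbW / predSize,
    upVer = nTbH / predSize (both exact integer ratios)."""
    return n_tb_w // pred_size, n_tb_h // pred_size

def linear_interp(a, b, pos, factor):
    """Rounded linear interpolation of position pos (1 <= pos < factor)
    between two known samples a and b, as used when filling the vacant
    positions between predicted values (horizontal pass, then vertical)."""
    return ((factor - pos) * a + pos * b + factor // 2) // factor
```

For example, up-sampling a 4×4 MIP block into an 8×8 current block uses factors (2, 2), so one interpolated sample is filled between each pair of known samples per direction.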
  • When the method for colour component prediction is applied to the encoder side, it can be used to calculate the predicted value of at least one sample in the current block.
  • a residual corresponding to the at least one sample is calculated according to a difference value between a true value and a predicted value of the at least one sample in the current block, and the obtained residual is written in a bitstream.
  • after the value of the transposition processing indication flag (isTransposed) is determined, the value of isTransposed also needs to be written in the bitstream, and then the bitstream is transmitted from the encoder side to the decoder side.
  • the value of isTransposed may be determined by parsing the bitstream, and then whether transposition processing is required is determined.
  • the method for colour component prediction may also be used to calculate the predicted value of at least one sample in the current block, the residual corresponding to the at least one sample may be directly obtained by parsing the bitstream, and thus, according to the predicted value and residual of the at least one sample in the current block, the true value of the at least one sample in the current block can be obtained.
  • the embodiment provides a method for colour component prediction, which is applied to an encoder or a decoder.
  • a neighbouring reference sample set of a current block is determined, and a preset parameter value corresponding to the current block is determined, the neighbouring reference sample set including at least one reference sample; the neighbouring reference sample set and the preset parameter value are buffered to construct an input reference sample set; an input sample matrix is determined by means of a first preset calculation model based on the input reference sample set; and colour component prediction is performed on the current block according to the input sample matrix to obtain a prediction block of the current block.
  • the input sample matrix may be determined based on the input reference sample set and the first preset calculation model, while the derivation process of the input samples for matrix multiplication is also simplified, so that the derivation process of the input sample matrix is unified, and the solutions of the embodiments of the present disclosure no longer depend on the type of the current block and can further realize parallel processing, thereby reducing the calculation complexity.
  • the encoder 90 includes a first determination unit 901, a first buffer unit 902, and a first prediction unit 903.
  • the first determination unit 901 is configured to determine a neighbouring reference sample set of a current block, and determine a preset parameter value corresponding to the current block, the neighbouring reference sample set including at least one reference sample.
  • the first buffer unit 902 is configured to buffer the neighbouring reference sample set and the preset parameter value to construct an input reference sample set.
  • the first determination unit 901 is further configured to determine an input sample matrix by means of a first preset calculation model based on the input reference sample set.
  • the first prediction unit 903 is configured to perform colour component prediction on the current block according to the input sample matrix to obtain a prediction block of the current block.
  • the encoder 90 may further include a first acquisition unit 904, configured to acquire a reference sample neighbouring to at least one side of the current block, the at least one side of the current block including at least one of a top side, a top-right side, a left side, or a bottom-left side.
  • the first determination unit 901 is configured to determine a neighbouring reference sample set of the current block according to the acquired reference sample.
  • the encoder 90 may further include a first processing unit 905, configured to perform first filtering processing on a reference sample neighbouring to at least one side of the current block to determine a reference sample neighbouring to the at least one side.
  • the first determination unit 901 is configured to form the neighbouring reference sample set of the current block according to the acquired reference sample.
  • the first filtering processing includes down-sampling filtering or low-pass filtering.
  • the first acquisition unit 904 is further configured to acquire a bit depth value corresponding to a to-be-predicted colour component of the current block.
  • the first processing unit 905 is further configured to convert 1 to a binary value, and left-shift the binary value by (BitDepth − 1) binary digits to obtain the preset parameter value.
  • the first buffer unit 902 is configured to buffer the neighbouring reference sample set to obtain an initial input reference sample set, and buffer the preset parameter value by using a data unit after the initial input reference sample set, to obtain the input reference sample set.
  • the first determination unit 901 is further configured to determine a value of a transposition processing indication flag by using an RDO manner.
  • the first buffer unit 902 is specifically configured to, when the value of the transposition processing indication flag is equal to 0, store a reference sample in a buffer such that a reference sample corresponding to the top side of the current block in the neighbouring reference sample set is stored ahead of a reference sample corresponding to the left side of the current block in the neighbouring reference sample set, and determine the buffer as the initial input reference sample set; or when the value of the transposition processing indication flag is equal to 1, store the reference sample in a buffer such that the reference sample corresponding to the top side of the current block in the neighbouring reference sample set is stored after the reference sample corresponding to the left side of the current block in the neighbouring reference sample set, perform transposition processing on the buffer, and determine the transposed buffer as the initial input reference sample set.
  • the encoder 90 may further include a first calculation unit 906, which is configured to calculate an i th input sample by means of a first preset calculation model according to a sample corresponding to the (i+1) th position and a sample corresponding to the 0 th position in the input reference sample set, where i is an integer greater than or equal to 0 and less than N, N representing the number of elements contained in the input sample matrix.
  • the first determination unit 901 is configured to form the input sample matrix according to N input samples obtained by calculation.
  • the first calculation unit 906 is specifically configured to perform a subtraction operation by means of the first preset calculation model to obtain the i th input sample.
  • the first calculation unit 906 is specifically configured to set the minuend of the subtraction operation to be equal to the sample corresponding to the (i+1) th position in the reference sample set, and set the subtrahend of the subtraction operation to be equal to the sample corresponding to the 0 th position in the reference sample set.
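The subtraction model described in the two bullets above can be sketched in a few lines; representing the input reference sample set as a Python list is an assumption for illustration.

```python
def derive_input_samples(ref_set, n):
    # i-th input sample = sample at position (i+1) (minuend) minus
    # sample at position 0 (subtrahend) of the input reference sample set.
    return [ref_set[i + 1] - ref_set[0] for i in range(n)]
```

For instance, a reference set [128, 130, 140, 120] with N = 3 gives the input samples [2, 12, -8].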
  • the first acquisition unit 904 is further configured to obtain an MIP block of the current block according to the input sample matrix, the MIP block including a predicted sample at at least part of sample positions in the current block.
  • the first processing unit 905 is further configured to, when one of a width and a height of the MIP block is different from that of the current block, perform second filtering processing on the MIP block to obtain a prediction block of the current block; or, when both of a width and a height of the MIP block are the same as those of the current block, set a prediction block of the current block to be equal to the MIP block, the prediction block containing a predicted sample at all sample positions in the current block.
  • the first acquisition unit 904 is specifically configured to perform clipping processing on the predicted sample in the MIP block to obtain the MIP block of the current block.
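The clipping processing above can be sketched as follows; the clipping range [0, 2^BitDepth − 1] is the usual video-coding sample range and is assumed here rather than taken from the text.

```python
def clip_mip_block(mip_block, bit_depth):
    # Constrain every predicted sample to the valid range
    # [0, (1 << bit_depth) - 1] for the given bit depth.
    hi = (1 << bit_depth) - 1
    return [[min(max(sample, 0), hi) for sample in row] for row in mip_block]
```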
  • the encoder 90 may also include a first judgment unit 907 , which is configured to judge whether to perform transposition processing on the MIP block, and when a judgment result is “yes”, to perform transposition processing on the predicted sample in the MIP block, and determine the transposed MIP block as the MIP block of the current block.
  • the first calculation unit 906 is further configured to calculate a first cost value when transposition processing is performed on the MIP block and a second cost value when transposition processing is not performed on the MIP block.
  • the first judgment unit 907 is specifically configured to, when the first cost value is no less than the second cost value, determine not to perform transposition processing on the MIP block.
  • the second filtering processing includes up-sampling filtering or low-pass filtering.
  • the first acquisition unit 904 is further configured to acquire a weight matrix, a shift factor, and an offset factor corresponding to the current block.
  • the first calculation unit 906 is further configured to perform matrix multiplication processing on the input sample matrix, the weight matrix, the shift factor, and the offset factor by means of a second preset calculation model, to calculate the MIP block.
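One possible shape of the second preset calculation model is sketched below. The rounding by the offset factor, the right shift by the shift factor, and the addition of a base sample value follow the general style of matrix-based intra prediction but are assumptions here, not the normative formula.

```python
def mip_matrix_multiply(input_samples, weight_matrix, shift_factor,
                        offset_factor, base):
    # Each predicted MIP sample is a weighted sum of the input samples,
    # rounded by the offset factor, down-shifted by the shift factor, and
    # offset by a base value (assumed to be the position-0 reference sample).
    out = []
    for row in weight_matrix:
        acc = sum(w * p for w, p in zip(row, input_samples))
        out.append(((acc + offset_factor) >> shift_factor) + base)
    return out
```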
  • the first determination unit 901 is further configured to determine a horizontal up-sampling factor and a vertical up-sampling factor corresponding to the current block;
  • the first calculation unit 906 is further configured to determine, according to the MIP block, the horizontal up-sampling factor and the vertical up-sampling factor, a predicted value of a to-be-filled sample position in the current block by means of a third preset calculation model to obtain a prediction block of the current block, the to-be-filled sample position being a sample position in the current block different from a sample position in the MIP block.
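The filling of to-be-filled sample positions by up-sampling can be illustrated in one dimension; the boundary handling here (repeating the first known sample) and the pure linear interpolation are simplifications of the third preset calculation model, not its normative definition.

```python
def upsample_1d(samples, factor):
    # Fill the positions between consecutive known MIP samples by linear
    # interpolation with an integer up-sampling factor.
    out, prev = [], samples[0]
    for cur in samples:
        for k in range(1, factor + 1):
            out.append((prev * (factor - k) + cur * k) // factor)
        prev = cur
    return out
```

For example, two known samples [4, 8] with an up-sampling factor of 2 yield [4, 4, 6, 8], applying the same procedure per row or per column for the horizontal and vertical factors respectively.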
  • a unit may be part of a circuit, part of a processor, part of a program or software, and the like; it may also be modular or non-modular.
  • each component in the embodiment may be integrated into one processing unit, each unit may also exist independently, and two or more units may also be integrated into one unit.
  • the integrated unit may be implemented in a hardware form and may also be implemented in the form of a software function module.
  • when implemented in the form of a software function module and sold or used not as an independent product, the integrated unit may be stored in a computer-readable storage medium.
  • based on such an understanding, the technical solutions of the embodiments substantially, or the parts thereof making contributions to the conventional art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a plurality of instructions configured to enable a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the method in the embodiments.
  • the abovementioned storage medium includes various media capable of storing program codes, such as a USB flash disk, a mobile hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
  • the embodiment of the present disclosure provides a computer storage medium, which is applied to an encoder 90 , and stores a colour component prediction program.
  • the colour component prediction program is executed by a first processor to implement any method as described in the abovementioned embodiments.
  • referring to FIG. 10 , a specific hardware structure example of the encoder 90 is shown, which may include a first communication interface 1001 , a first memory 1002 , and a first processor 1003 . Each component is coupled together through a first bus system 1004 . It can be understood that the first bus system 1004 is configured to implement connection communication between these components.
  • the first bus system 1004 includes a data bus, and further includes a power bus, a control bus, and a state signal bus. However, for clear description, various buses in FIG. 10 are marked as the first bus system 1004 .
  • the first communication interface 1001 is configured to receive and send a signal in a process of receiving and sending information with another external network element.
  • the first memory 1002 is configured to store a computer program capable of running in the first processor 1003 .
  • the first processor 1003 is configured to run the computer program to execute the following operations.
  • a neighbouring reference sample set of the current block is determined, and a preset parameter value corresponding to the current block is determined, the neighbouring reference sample set including at least one reference sample.
  • the neighbouring reference sample set and the preset parameter value are buffered to construct an input reference sample set.
  • An input sample matrix is determined by means of a first preset calculation model based on the input reference sample set.
  • Colour component prediction is performed on the current block according to the input sample matrix to obtain a prediction block of the current block.
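The four operations above can be chained as in the following end-to-end sketch. The helper structure, the placement of the preset value at the end of the buffered reference set, and the exact rounding in the matrix multiplication are all illustrative assumptions rather than the normative process.

```python
def predict_block(ref_top, ref_left, bit_depth, weights, shift, offset):
    preset = 1 << (bit_depth - 1)                        # preset parameter value
    ref_set = list(ref_top) + list(ref_left) + [preset]  # input reference sample set
    n = len(weights[0])
    # first preset calculation model: subtraction against the position-0 sample
    p = [ref_set[i + 1] - ref_set[0] for i in range(n)]
    # matrix multiplication with rounding shift and base offset
    return [((sum(w * x for w, x in zip(row, p)) + offset) >> shift)
            + ref_set[0] for row in weights]
```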
  • the first memory 1002 in the embodiment of the present disclosure may be a volatile memory or a nonvolatile memory, or may include both the volatile and nonvolatile memories.
  • the nonvolatile memory may be a ROM, a PROM, an Erasable PROM (EPROM), an EEPROM, or a flash memory.
  • the volatile memory may be a RAM, and is used as an external high-speed cache.
  • RAMs in various forms may be adopted, such as a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synchlink DRAM (SLDRAM), and a Direct Rambus RAM (DR RAM).
  • the first processor 1003 may be an integrated circuit chip with a signal processing capability. In an implementation process, each step of the method may be completed by an integrated logic circuit of hardware in the first processor 1003 or an instruction in a software form.
  • the processor 1003 may be a universal processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute each method, step, and logical block diagram disclosed in the embodiments of the present disclosure.
  • the universal processor may be a microprocessor or the processor may also be any conventional processor, etc.
  • the steps of the method disclosed in combination with the embodiments of the present disclosure may be directly embodied to be executed and completed by a hardware decoding processor or executed and completed by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a mature storage medium in this field such as a RAM, a flash memory, a ROM, a PROM or EEPROM, and a register.
  • the storage medium is located in the first memory 1002 .
  • the first processor 1003 reads information in the first memory 1002 and completes the steps of the method in combination with hardware.
  • the processing unit may be implemented in one or more ASICs, DSPs, DSP Devices (DSPDs), PLDs, FPGAs, universal processors, controllers, microcontrollers, microprocessors, other electronic units configured to execute the functions in the present disclosure or combinations thereof.
  • the technology of the present disclosure may be implemented through the modules (for example, processes and functions) executing the functions in the present disclosure.
  • a software code may be stored in the memory and executed by the processor.
  • the memory may be implemented in the processor or outside the processor.
  • the first processor 1003 is further configured to run the computer program to execute any method in the abovementioned embodiments.
  • the embodiment provides an encoder.
  • the encoder may include a first determination unit, a first buffer unit, and a first prediction unit.
  • the first determination unit is configured to determine a neighbouring reference sample set of the current block and determine a preset parameter value corresponding to the current block.
  • the first buffer unit is configured to buffer the neighbouring reference sample set and the preset parameter value to construct an input reference sample set; the first determination unit is further configured to determine an input sample matrix by means of a first preset calculation model based on the input reference sample set; and the first prediction unit is configured to perform colour component prediction on the current block according to the input sample matrix to obtain a prediction block of the current block.
  • the decoder 110 includes a second determination unit 1101 , a second buffer unit 1102 , and a second prediction unit 1103 .
  • the second determination unit 1101 is configured to determine a neighbouring reference sample set of a current block, and determine a preset parameter value corresponding to the current block, the neighbouring reference sample set including at least one reference sample.
  • the second buffer unit 1102 is configured to buffer the neighbouring reference sample set and the preset parameter value to construct an input reference sample set.
  • the second determination unit 1101 is further configured to determine an input sample matrix by means of a first preset calculation model based on the input reference sample set.
  • the second prediction unit 1103 is configured to perform colour component prediction on the current block according to the input sample matrix to obtain a prediction block of the current block.
  • the decoder 110 may further include a second acquisition unit 1104 , configured to acquire a reference sample neighbouring to at least one side of the current block, the at least one side of the current block including at least one of a top side, a top-right side, a left side, or a bottom-left side.
  • the second determination unit 1101 is configured to determine a neighbouring reference sample set of the current block according to the acquired reference sample.
  • the decoder 110 may further include a second processing unit 1105 , configured to perform first filtering processing on the reference sample neighbouring to at least one side of the current block to determine a reference sample neighbouring to the at least one side.
  • the second determination unit 1101 is configured to form the neighbouring reference sample set of the current block according to the acquired reference samples.
  • the first filtering processing includes down-sampling filtering or low-pass filtering.
  • the second acquisition unit 1104 is further configured to acquire a bit depth value corresponding to a to-be-predicted colour component of the current block.
  • the second processing unit 1105 is further configured to convert 1 to a binary value, and left-shift the binary value by (the bit depth minus 1) binary digits to obtain the preset parameter value.
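In code, the preset parameter value described above is a single left shift; it equals half of the sample range for the given bit depth (the mid-grey value).

```python
def preset_parameter_value(bit_depth):
    # 1 left-shifted by (bit_depth - 1) binary digits.
    return 1 << (bit_depth - 1)
```

For example, an 8-bit colour component gives 128 and a 10-bit component gives 512.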
  • the second buffer unit 1102 is configured to buffer the neighbouring reference sample to obtain an initial input reference sample set, and buffer the preset parameter value by using a data unit after the initial input reference sample set to obtain the input reference sample set.
  • the decoder 110 may further include a parsing unit 1106 configured to parse a bitstream to obtain a value of a transposition processing indication flag.
  • the second buffer unit 1102 is specifically configured to, when the value of the transposition processing indication flag is equal to 0, store a reference sample in a buffer, such that a reference sample corresponding to the top side of the current block in the neighbouring reference sample set is stored ahead of a reference sample corresponding to the left side of the current block in the neighbouring reference sample set, and determine the buffer as the initial input reference sample set; or when the value of the transposition processing indication flag is equal to 1, store the reference sample in a buffer, such that the reference sample corresponding to the top side of the current block in the neighbouring reference sample set is stored after the reference sample corresponding to the left side of the current block in the neighbouring reference sample set, perform transposition processing on the buffer, and determine the transposed buffer as the initial input reference sample set.
  • the decoder 110 may further include a second calculation unit 1107 which is configured to calculate an i th input sample by means of a first preset calculation model according to a sample corresponding to the (i+1) th position and a sample corresponding to the 0 th position in the reference sample set, where i is an integer greater than or equal to 0 and less than N, N representing the number of elements contained in the input sample matrix.
  • the second determination unit 1101 is configured to form the input sample matrix according to N input samples obtained by calculation.
  • the second calculation unit 1107 is specifically configured to perform a subtraction operation by means of the first preset calculation model to obtain the i th input sample.
  • the second calculation unit 1107 is specifically configured to set the minuend of the subtraction operation to be equal to the sample corresponding to the (i+1) th position in the reference sample set, and set the subtrahend of the subtraction operation to be equal to the sample corresponding to the 0 th position in the reference sample set.
  • the second acquisition unit 1104 is further configured to obtain an MIP block of the current block according to the input sample matrix, the MIP block including a predicted sample at at least part of sample positions in the current block.
  • the second processing unit 1105 is further configured to, when one of a width and a height of the MIP block is different from that of the current block, perform second filtering processing on the MIP block to obtain a prediction block of the current block; or, when both of a width and a height of the MIP block are the same as those of the current block, set a prediction block of the current block to be equal to the MIP block, the prediction block containing a predicted sample at all sample positions in the current block.
  • the second acquisition unit 1104 is specifically configured to perform clipping processing on the predicted sample in the MIP block to obtain the MIP block of the current block.
  • the decoder 110 may also include a second judgment unit 1108 , which is configured to judge whether to perform transposition processing on the MIP block, and when a judgment result is “yes”, to perform transposition processing on the predicted sample in the MIP block, and determine the transposed MIP block as the MIP block of the current block.
  • the parsing unit 1106 is specifically configured to parse a bitstream to obtain a value of a transposition processing indication flag.
  • the second judgment unit 1108 is specifically configured to judge, according to the value of the transposition processing indication flag, whether to perform transposition processing on the MIP block.
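The judgment above can be sketched as follows, with the MIP block represented as a list of rows; this matrix representation is an assumption for illustration.

```python
def maybe_transpose_mip(mip_block, flag):
    # flag == 1: transpose the predicted samples (rows become columns);
    # flag == 0: leave the MIP block unchanged.
    if flag == 0:
        return mip_block
    return [list(col) for col in zip(*mip_block)]
```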
  • the second filtering processing includes up-sampling filtering or low-pass filtering.
  • the second acquisition unit 1104 is further configured to acquire a weight matrix, a shift factor, and an offset factor corresponding to the current block.
  • the second calculation unit 1107 is further configured to perform matrix multiplication processing on the input sample matrix, the weight matrix, the shift factor, and the offset factor by means of a second preset calculation model to calculate the MIP block.
  • the second determination unit 1101 is further configured to determine a horizontal up-sampling factor and a vertical up-sampling factor corresponding to the current block.
  • the second calculation unit 1107 is further configured to determine, according to the MIP block, the horizontal up-sampling factor and the vertical up-sampling factor, a predicted value of a to-be-filled sample position in the current block by means of a third preset calculation model to obtain a prediction block of the current block, the to-be-filled sample position being a sample position in the current block different from a sample position in the MIP block.
  • a unit may be part of a circuit, part of a processor, part of a program or software, and the like; it may also be modular or non-modular.
  • each component in the embodiment may be integrated into one processing unit, each unit may also exist independently, and two or more units may also be integrated into one unit.
  • the integrated unit may be implemented in a hardware form and may also be implemented in the form of a software function module.
  • when implemented in the form of a software functional module and sold or used not as an independent product, the integrated unit may be stored in a computer-readable storage medium. Based on such an understanding, the embodiment provides a computer storage medium, which is applied to a decoder 110 and stores a colour component prediction program. The colour component prediction program is executed by a second processor to implement any method as described in the abovementioned embodiments.
  • referring to FIG. 12 , a specific hardware structure example of the decoder 110 according to the embodiment of the present disclosure is shown, which may include a second communication interface 1201 , a second memory 1202 , and a second processor 1203 . Each component is coupled together through a second bus system 1204 . It can be understood that the second bus system 1204 is configured to implement connection communication between these components.
  • the second bus system 1204 includes a data bus, and further includes a power bus, a control bus, and a state signal bus. However, for clear description, various buses in FIG. 12 are marked as the second bus system 1204 .
  • the second communication interface 1201 is configured to receive and send a signal in a process of receiving and sending information with another external network element.
  • the second memory 1202 is configured to store a computer program capable of running in the second processor 1203 .
  • the second processor 1203 is configured to run the computer program to execute the following operations.
  • a neighbouring reference sample set of a current block is determined, and a preset parameter value corresponding to the current block is determined, the neighbouring reference sample set including at least one reference sample.
  • the neighbouring reference sample set and the preset parameter value are buffered to construct an input reference sample set.
  • An input sample matrix is determined by means of a first preset calculation model based on the input reference sample set.
  • Colour component prediction is performed on the current block according to the input sample matrix to obtain a prediction block of the current block.
  • the second processor 1203 is further configured to run the computer program to execute any method in the abovementioned embodiments.
  • the second memory 1202 has a hardware function similar to that of the first memory 1002 and the second processor 1203 has a hardware function similar to that of the first processor 1003 . Elaborations are omitted herein.
  • sequence numbers of the embodiments of the present disclosure are adopted not to represent superiority-inferiority of the embodiments but only for description.
  • a neighbouring reference sample set of a current block is determined, and a preset parameter value corresponding to the current block is determined; the neighbouring reference sample set and the preset parameter value are buffered to construct an input reference sample set; an input sample matrix is determined by means of a first preset calculation model based on the input reference sample set; and colour component prediction is performed on the current block according to the input sample matrix to obtain a prediction block of the current block.
  • the input sample matrix may be determined based on the input reference sample set and the first preset calculation model, while the derivation process of the input samples for matrix multiplication is also simplified, so that the derivation process of the input sample matrix is unified, and the solutions of the embodiments of the present disclosure no longer depend on the type of the current block and can also realize parallel processing, thereby reducing the calculation complexity.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
US17/454,612 2019-12-19 2021-11-11 Colour component prediction method, encoder, decoder, and storage medium Active US11477465B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/942,679 US11770542B2 (en) 2019-12-19 2022-09-12 Colour component prediction method, encoder, and decoder

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/126710 WO2021120122A1 (zh) 2019-12-19 2019-12-19 图像分量预测方法、编码器、解码器以及存储介质

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/126710 Continuation WO2021120122A1 (zh) 2019-12-19 2019-12-19 图像分量预测方法、编码器、解码器以及存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/942,679 Continuation US11770542B2 (en) 2019-12-19 2022-09-12 Colour component prediction method, encoder, and decoder

Publications (2)

Publication Number Publication Date
US20220070476A1 US20220070476A1 (en) 2022-03-03
US11477465B2 true US11477465B2 (en) 2022-10-18

Family

ID=76477025

Family Applications (2)

Application Number Title Priority Date Filing Date
US17/454,612 Active US11477465B2 (en) 2019-12-19 2021-11-11 Colour component prediction method, encoder, decoder, and storage medium
US17/942,679 Active US11770542B2 (en) 2019-12-19 2022-09-12 Colour component prediction method, encoder, and decoder

Family Applications After (1)

Application Number Title Priority Date Filing Date
US17/942,679 Active US11770542B2 (en) 2019-12-19 2022-09-12 Colour component prediction method, encoder, and decoder

Country Status (6)

Country Link
US (2) US11477465B2 (zh)
EP (1) EP3955574A4 (zh)
JP (1) JP2023510666A (zh)
KR (1) KR20220112668A (zh)
CN (2) CN113439440A (zh)
WO (1) WO2021120122A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230007279A1 (en) * 2019-12-19 2023-01-05 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Colour component prediction method, encoder, and decoder

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3984228A4 (en) * 2019-06-14 2023-03-29 Telefonaktiebolaget Lm Ericsson (Publ) SAMPLE VALUE CLIPPING ON MIP REDUCED PREDICTION
US11973952B2 (en) 2019-06-14 2024-04-30 Telefonaktiebolaget Lm Ericsson (Publ) Simplified downsampling for matrix based intra prediction
WO2023197194A1 (zh) * 2022-04-12 2023-10-19 Oppo广东移动通信有限公司 编解码方法、装置、编码设备、解码设备以及存储介质

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130114695A1 (en) * 2011-11-07 2013-05-09 Qualcomm Incorporated Signaling quantization matrices for video coding
US20160088302A1 (en) * 2014-09-19 2016-03-24 Futurewei Technologies, Inc. Method and apparatus for non-uniform mapping for quantization matrix coefficients between different sizes of quantization matrices in image/video coding
CN106254883A (zh) 2016-08-02 2016-12-21 青岛海信电器股份有限公司 一种视频解码中的反变换方法和装置
WO2019077197A1 (en) 2017-10-16 2019-04-25 Nokia Technologies Oy METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR VIDEO ENCODING AND DECODING
US20190306498A1 (en) 2018-04-02 2019-10-03 Tencent America LLC Method and apparatus for video decoding using multiple line intra prediction
US20200344468A1 (en) * 2019-04-25 2020-10-29 Mediatek Inc. Method and Apparatus of Matrix based Intra Prediction in Image and Video Processing
US20200359050A1 (en) * 2019-05-09 2020-11-12 Qualcomm Incorporated Reference sampling for matrix intra prediction mode
WO2020239018A1 (en) * 2019-05-31 2020-12-03 Beijing Bytedance Network Technology Co., Ltd. Restricted upsampling process in matrix-based intra prediction
US20210321090A1 (en) * 2019-04-12 2021-10-14 Beijing Bytedance Network Technology Co., Ltd. Most probable mode list construction for matrix-based intra prediction

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130098122A (ko) * 2012-02-27 2013-09-04 세종대학교산학협력단 영상 부호화/복호화 장치 및 영상을 부호화/복호화하는 방법
SG10201808973XA (en) * 2012-04-13 2018-11-29 Mitsubishi Electric Corp Image encoding device, image decoding device, image encoding method and image decoding method
EP3522538A4 (en) * 2016-09-30 2020-07-29 LG Electronics Inc. -1- IMAGE PROCESSING METHOD AND DEVICE THEREFOR
FI20175006A1 (en) * 2017-01-03 2019-02-15 Nokia Technologies Oy Video and image coding using wide-angle intra-prediction
CN110463201B (zh) * 2017-03-22 2023-12-19 韩国电子通信研究院 使用参考块的预测方法和装置
KR20220112668A (ko) * 2019-12-19 2022-08-11 광동 오포 모바일 텔레커뮤니케이션즈 코포레이션 리미티드 이미지 요소 예측 방법, 인코더, 디코더 및 저장 매체

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130114695A1 (en) * 2011-11-07 2013-05-09 Qualcomm Incorporated Signaling quantization matrices for video coding
US20160088302A1 (en) * 2014-09-19 2016-03-24 Futurewei Technologies, Inc. Method and apparatus for non-uniform mapping for quantization matrix coefficients between different sizes of quantization matrices in image/video coding
CN106663209A (zh) 2014-09-19 2017-05-10 华为技术有限公司 用于非均匀映射图像/视频编码中不同尺寸的量化矩阵之间的量化矩阵系数的方法和装置
CN106254883A (zh) 2016-08-02 2016-12-21 青岛海信电器股份有限公司 一种视频解码中的反变换方法和装置
WO2019077197A1 (en) 2017-10-16 2019-04-25 Nokia Technologies Oy METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR VIDEO ENCODING AND DECODING
US20190306498A1 (en) 2018-04-02 2019-10-03 Tencent America LLC Method and apparatus for video decoding using multiple line intra prediction
US20190364273A1 (en) 2018-04-02 2019-11-28 Tencent America LLC Method and apparatus for video coding
US20210321090A1 (en) * 2019-04-12 2021-10-14 Beijing Bytedance Network Technology Co., Ltd. Most probable mode list construction for matrix-based intra prediction
US20200344468A1 (en) * 2019-04-25 2020-10-29 Mediatek Inc. Method and Apparatus of Matrix based Intra Prediction in Image and Video Processing
US20200359050A1 (en) * 2019-05-09 2020-11-12 Qualcomm Incorporated Reference sampling for matrix intra prediction mode
WO2020239018A1 (en) * 2019-05-31 2020-12-03 Beijing Bytedance Network Technology Co., Ltd. Restricted upsampling process in matrix-based intra prediction
WO2020239017A1 (en) * 2019-05-31 2020-12-03 Beijing Bytedance Network Technology Co., Ltd. One-step downsampling process in matrix-based intra prediction

Non-Patent Citations (14)

* Cited by examiner, † Cited by third party
Title
Benjamin Bross et al: "Versatile Video Coding (Draft 7)", JVET-P2001-vE, Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 16th Meeting: Geneva, CH, Oct. 1-11, 2019.
Biatek (Qualcomm) T et al: "Non-CE3: Simplified MIP with reduced memory footprint", 16. JVET Meeting: Oct. 1, 2019-Oct. 11, 2019; Geneva; (The Joint Video Exploration Team of ISO/IEC JTC1/SC29/WG11 and ITU-T SG. 16), No. JVET-P0194 ; m50156 Oct. 6, 2019 (Oct. 6, 2019), XP030216568, Retrieved from the Internet: URL:http://phenix. int-evry.fr/jvet/doc_end_user/documents/16_Geneva/wg11/JVET-P0194-v3. zip JVET-P0194-v3. docx [retrieved on Oct. 6, 2019] *the whole document* .
Chen, Jianle et al. "Algorithm description for Versatile Video Coding and Test Model 5 (VTM 5)", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Geneva, CH, Mar. 19-27, 2019, Document: JVET-N1002-v2, Mar. 27, 2019, entire document.
Geert Van der Auwera et al. "Description of Core Experiment 3 (CE3): Intra Prediction and Mode Coding", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 15th Meeting: Gothenburg, SE, Jul. 3-12, 2019, Document: JVET-O2023-v3, Jul. 12, 2019, entire document.
Helle (Fraunhofer) P et al: "Variations of the 8-bit implementation of MIP", 127. MPEG Meeting; Jul. 8, 2019-Jul. 12, 2019; Gothenburg; (Motion Picture Expert Group or ISO/IEC JTC1/SC29/WG11), No. m48606, Jul. 7, 2019 (Jul. 7, 2019), XP030222145, Retrieved from the Internet: URL:http://phenix.int-evry.fr/mpeg/doc_end_user/documents/127_Gothenburg/wg11/m48606-JVET-O0481-v2-JVET-O0481-v2.zip JVET-O0481-v2/JVET-O0481-v2.docx [retrieved on Jul. 7, 2019] *the whole document*.
International Search Report in the international application No. PCT/CN2019/126710, dated Sep. 2, 2020.
Nishi (Panasonic) T et al: "AHG9: Unified signalling of PTL and HRD parameters in VPS", 17. JVET Meeting: Jan. 7, 2020-Jan. 17, 2020; Brussels; (The Joint Video Exploration Team of ISO/IEC JTC1/SC29/WG11 and ITU-T SG.16), No. JVET-Q0047; m51619, Dec. 18, 2019 (Dec. 18, 2019), XP030222423, Retrieved from the Internet: URL:http://phenix.int-evry.fr/jvet/doc_end_user/documents/17_Brussels/wg11/JVET-Q0047-v1.zip JVET-Q0047_based_on_JVET-P2001-vE.docx [retrieved on Dec. 18, 2019] *clauses 7.3.9.5, 7.4.10.5 and 8.4.5.2.1*.
P. HELLE (FRAUNHOFER), J. PFAFF (FRAUNHOFER), T. HINZ, P. MERKLE (FRAUNHOFER), B. STALLENBERGER, M. SCHÄFER, H. SCHWARZ, D. M: "Variations of the 8-bit implementation of MIP", 127. MPEG MEETING; 20190708 - 20190712; GOTHENBURG; (MOTION PICTURE EXPERT GROUP OR ISO/IEC JTC1/SC29/WG11), 7 July 2019 (2019-07-07), XP030222145
Pfaff, Jonathan et al. "CE3: Affine linear weighted intra prediction (test 1.2.1, test 1.2.2)", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 13th Meeting: Marrakech, MA, Jan. 9-18, 2019, Document: JVET-M0043, Jan. 18, 2019 (Jan. 18, 2019), entire document.
Pfaff, Jonathan et al. "CE3: Affine linear weighted intra prediction (CE3-4.1, CE3-4.2)", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Geneva, CH, Mar. 19-27, 2019, Document: JVET-N0217, Mar. 27, 2019 (Mar. 27, 2019), part 1, sections 1.1-1.4.
Supplementary European Search Report in the European application No. 19956859.3, dated Aug. 19, 2022.
T. BIATEK (QUALCOMM), A.K. RAMASUBRAMONIAN, G. VAN DER AUWERA, M. KARCZEWICZ (QUALCOMM): "Non-CE3: Simplified MIP with reduced memory footprint", 16. JVET MEETING; 20191001 - 20191011; GENEVA; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), 6 October 2019 (2019-10-06), XP030216568
T. NISHI (PANASONIC), K. ABE (PANASONIC), V. DRUGEON (PANASONIC): "AHG9: Unified signalling of PTL and HRD parameters in VPS", 17. JVET MEETING; 20200107 - 20200117; BRUSSELS; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), 18 December 2019 (2019-12-18), XP030222423
Wang, B. et al. "Non-CE3: Simplifications of Intra Mode Coding for Matrix-based Intra Prediction", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 15th Meeting: Gothenburg, SE, Jul. 3-12, 2019, Document: JVET-O0170-v1, Jul. 12, 2019 (Jul. 12, 2019), entire document.

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230007279A1 (en) * 2019-12-19 2023-01-05 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Colour component prediction method, encoder, and decoder
US11770542B2 (en) * 2019-12-19 2023-09-26 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Colour component prediction method, encoder, and decoder

Also Published As

Publication number Publication date
WO2021120122A1 (zh) 2021-06-24
US20230007279A1 (en) 2023-01-05
KR20220112668A (ko) 2022-08-11
CN113891082B (zh) 2023-06-09
CN113439440A (zh) 2021-09-24
CN113891082A (zh) 2022-01-04
EP3955574A4 (en) 2022-09-21
JP2023510666A (ja) 2023-03-15
US11770542B2 (en) 2023-09-26
US20220070476A1 (en) 2022-03-03
EP3955574A1 (en) 2022-02-16

Similar Documents

Publication Publication Date Title
US11477465B2 (en) Colour component prediction method, encoder, decoder, and storage medium
TWI705694B (zh) 片級內部區塊複製及其他視訊寫碼改善
US11930181B2 (en) Method for colour component prediction, encoder, decoder and storage medium
US11843781B2 (en) Encoding method, decoding method, and decoder
US11683478B2 (en) Prediction method for decoding and apparatus, and computer storage medium
US20220014772A1 (en) Method for picture prediction, encoder, and decoder
US20220014765A1 (en) Method for picture prediction, encoder, decoder, and storage medium

Legal Events

Date Code Title Description
AS Assignment

Owner name: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WAN, SHUAI;MA, YANZHUO;HUO, JUNYAN;AND OTHERS;REEL/FRAME:058107/0657

Effective date: 20210922

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: AWAITING TC RESP., ISSUE FEE NOT PAID

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE