CN112313950A - Method and apparatus for predicting video image component, and computer storage medium
- Publication number: CN112313950A (application number CN201880094931.9A)
- Authority: CN (China)
- Prior art keywords: image component, value, pixel point, reference value, coding block
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The embodiments of the present application disclose a method and an apparatus for predicting a video image component, and a computer storage medium. The method includes: acquiring a first image component reconstruction value, a first image component adjacent reference value and a second image component adjacent reference value corresponding to a coding block, where the first image component reconstruction value represents the reconstruction value of the first image component for at least one pixel point of the coding block, and the first image component adjacent reference value and the second image component adjacent reference value respectively represent, for each adjacent pixel point among the adjacent reference pixels of the coding block, the reference value of the first image component and the reference value of the second image component; determining model parameters according to the acquired first image component reconstruction value, first image component adjacent reference value and second image component adjacent reference value; and acquiring, according to the model parameters, a second image component predicted value corresponding to each pixel point in the coding block.
Description
The embodiments of the present application relate to the technical field of video encoding and decoding, and in particular to a method and an apparatus for predicting a video image component, and a computer storage medium.
As people's requirements on video display quality increase, new video application forms such as high-definition and ultra-high-definition video have emerged. As high-resolution, high-quality video viewing becomes ever more widespread, the demands on video compression technology also keep growing. H.265/High Efficiency Video Coding (HEVC) is currently the latest international video compression standard; its compression performance is about 50% better than that of the previous-generation standard H.264/Advanced Video Coding (AVC), but it still cannot meet the needs of rapidly developing video applications, especially new ones such as ultra-high-definition and Virtual Reality (VR) video.
The ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group established the Joint Video Exploration Team (JVET) in 2015 to develop the next-generation video coding standard. The Joint Exploration Test Model (JEM) is the common reference software platform on which the different coding tools are verified. In April 2018, JVET formally named the next-generation video coding standard Versatile Video Coding (VVC), with VTM as its corresponding test model. A prediction method based on a linear model has already been integrated into the JEM and VTM reference software, whereby the chroma component can derive its prediction value from the luma component through a linear model. However, with the way the linear model is constructed, the accuracy of the calculated chroma prediction value is low.
Disclosure of Invention
In order to solve the foregoing technical problems, embodiments of the present application provide a method and an apparatus for predicting a video image component, and a computer storage medium, which can effectively improve the accuracy of video image component prediction, making the predicted value of the video image component closer to its original value and thereby saving coding bit rate.
The technical solutions of the embodiments of the present application can be implemented as follows:
In a first aspect, an embodiment of the present application provides a method for predicting a video image component, where the method includes:
acquiring a first image component reconstruction value, a first image component adjacent reference value and a second image component adjacent reference value corresponding to a coding block; where the first image component reconstruction value represents the reconstruction value of the first image component for at least one pixel point of the coding block, and the first image component adjacent reference value and the second image component adjacent reference value respectively represent, for each adjacent pixel point among the adjacent reference pixels of the coding block, the reference value of the first image component and the reference value of the second image component;
determining a model parameter according to the obtained first image component reconstruction value, the first image component adjacent reference value and the second image component adjacent reference value;
and acquiring a second image component predicted value corresponding to each pixel point in the coding block according to the model parameter.
In a second aspect, an embodiment of the present application provides an apparatus for predicting a video image component, where the apparatus for predicting a video image component includes: an acquisition section, a determination section, and a prediction section;
the acquisition part is configured to acquire a first image component reconstruction value, a first image component adjacent reference value and a second image component adjacent reference value corresponding to a coding block; where the first image component reconstruction value represents the reconstruction value of the first image component for at least one pixel point of the coding block, and the first image component adjacent reference value and the second image component adjacent reference value respectively represent, for each adjacent pixel point among the adjacent reference pixels of the coding block, the reference value of the first image component and the reference value of the second image component;
the determining part is configured to determine a model parameter according to the acquired first image component reconstruction value, the first image component adjacent reference value and the second image component adjacent reference value;
and the prediction part is configured to acquire a second image component prediction value corresponding to each pixel point in the coding block according to the model parameter.
In a third aspect, an embodiment of the present application provides an apparatus for predicting a video image component, where the apparatus for predicting a video image component includes: a memory and a processor;
the memory for storing a computer program operable on the processor;
the processor is configured to perform the steps of the method of the first aspect when running the computer program.
In a fourth aspect, the present application provides a computer storage medium storing a prediction program for a video image component, which when executed by at least one processor implements the steps of the method of the first aspect.
The embodiments of the present application provide a method and an apparatus for predicting a video image component, and a computer storage medium. A first image component reconstruction value, a first image component adjacent reference value and a second image component adjacent reference value corresponding to a coding block are acquired, where the first image component reconstruction value represents the reconstruction value of the first image component for at least one pixel point of the coding block, and the first image component adjacent reference value and the second image component adjacent reference value respectively represent, for each adjacent pixel point among the adjacent reference pixels of the coding block, the reference value of the first image component and the reference value of the second image component; model parameters are determined according to the acquired first image component reconstruction value, first image component adjacent reference value and second image component adjacent reference value; and a second image component predicted value corresponding to each pixel point in the coding block is acquired according to the model parameters. Because the determination of the model parameters considers not only the first image component adjacent reference values and the second image component adjacent reference values, but also the first image component reconstruction values, the second image component predicted value is closer to the second image component original value; this effectively improves the prediction accuracy of the video image component, makes the predicted value of the video image component closer to its original value, and saves coding bit rate.
Fig. 1A to fig. 1C are schematic structural diagrams of video image sampling formats in the related art;
fig. 2A and 2B are schematic diagrams of sampling of a first image component adjacent reference value and a second image component adjacent reference value of a coding block in a related art scheme;
fig. 3A to 3C are schematic structural diagrams of a principle of a CCLM preset model in a related art scheme;
FIG. 4 is a schematic diagram of grouping of adjacent reference values of a first image component and adjacent reference values of a second image component in MMLM prediction mode according to the related art;
fig. 5 is a schematic distribution diagram of each pixel point and adjacent reference pixels in a coding block according to an embodiment of the present disclosure;
fig. 6 is a block diagram illustrating a video coding system according to an embodiment of the present application;
fig. 7 is a block diagram illustrating a video decoding system according to an embodiment of the present application;
fig. 8 is a flowchart illustrating a method for predicting video image components according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram illustrating a prediction apparatus for video image components according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram illustrating another apparatus for predicting video image components according to an embodiment of the present disclosure;
fig. 11 is a schematic structural diagram illustrating a prediction apparatus for a video image component according to an embodiment of the present application;
fig. 12 is a schematic hardware configuration diagram of an apparatus for predicting a video image component according to an embodiment of the present disclosure.
So that the manner in which the features and elements of the present embodiments can be understood in detail, a more particular description of the embodiments, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings.
In a video image, a first image component, a second image component and a third image component are generally adopted to characterize a coding block; wherein the three image components are a luminance component, a blue chrominance component and a red chrominance component, respectively, and specifically, the luminance component is generally represented by the symbol Y, the blue chrominance component is generally represented by the symbol Cb, and the red chrominance component is generally represented by the symbol Cr.
In the embodiments of the present application, the first image component may be the luminance component Y, the second image component may be the blue chrominance component Cb, and the third image component may be the red chrominance component Cr, although the embodiments of the present application do not specifically limit this. The currently common sampling format is the YCbCr format, which includes the following types, shown in fig. 1A to 1C respectively, where the crosses (×) in the figures represent the first image component sampling points and the circles (○) represent the second image component or third image component sampling points. The YCbCr format includes:
4:4:4 format: as shown in fig. 1A, the second image component or the third image component is not down-sampled; for every 4 consecutive pixel points on each scan line, 4 first image component samples, 4 second image component samples and 4 third image component samples are taken;
4:2:2 format: as shown in fig. 1B, the second image component or the third image component is down-sampled 2:1 horizontally relative to the first image component, with no vertical down-sampling; for every 4 consecutive pixel points on each scan line, 4 first image component samples, 2 second image component samples and 2 third image component samples are taken;
4:2:0 format: as shown in fig. 1C, the second image component or the third image component is down-sampled 2:1 both horizontally and vertically relative to the first image component; for every 2×2 block of pixel points, 4 first image component samples, 1 second image component sample and 1 third image component sample are taken.
In the case of a video image adopting a YCbCr format of 4:2:0, if a first image component of the video image is a coding block of 2N × 2N size, the corresponding second image component or third image component is a coding block of N × N size, where N is the side length of the coding block. In the embodiment of the present application, the following description will be given by taking the 4:2:0 format as an example, but the technical solution of the embodiment of the present application is also applicable to other sampling formats.
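As an illustrative aid to the sampling-format relationships just described (not part of the patent text; the function name and interface are assumptions for illustration), the correspondence between luma and chroma block dimensions can be sketched as follows:

```python
def chroma_block_size(luma_w, luma_h, sampling):
    """Return the (width, height) of the chroma (Cb/Cr) block that
    corresponds to a luma block under a given YCbCr sampling format."""
    if sampling == "4:4:4":  # chroma not down-sampled
        return luma_w, luma_h
    if sampling == "4:2:2":  # 2:1 horizontal down-sampling only
        return luma_w // 2, luma_h
    if sampling == "4:2:0":  # 2:1 down-sampling both horizontally and vertically
        return luma_w // 2, luma_h // 2
    raise ValueError("unknown sampling format: " + sampling)

# A 2Nx2N luma coding block corresponds to an NxN chroma block in 4:2:0:
assert chroma_block_size(16, 16, "4:2:0") == (8, 8)
```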
In the next-generation video coding standard H.266, in order to further improve coding performance and coding efficiency, Cross-Component Prediction (CCP) has been extended and improved, and Cross-Component Linear Model prediction (CCLM) has been proposed. In H.266, CCLM implements prediction between the first image component and the second image component, between the first image component and the third image component, and between the second image component and the third image component. The following description takes prediction from the first image component to the second image component as an example, but the technical solutions of the embodiments of the present application are also applicable to prediction of other image components.
It is to be understood that, in order to reduce redundancy between the first image component and the second image component, in the prediction mode using CCLM, the first image component and the second image component are of the same coding block, and the second image component is predicted based on the first image component reconstruction value of the same coding block, for example, using a preset model as shown in equation (1):
$$\mathrm{Pred}_C[i,j] = \alpha \cdot \mathrm{Rec}_Y[i,j] + \beta \tag{1}$$
where i and j are the position coordinates of a sampling point in the coding block, i denoting the horizontal direction and j the vertical direction; Pred_C[i,j] is the second image component predicted value of the sampling point at position [i,j] in the coding block; Rec_Y[i,j] is the (down-sampled) first image component reconstruction value of the sampling point at position [i,j] in the same coding block; and α and β are the model parameters of the preset model. They can be derived by minimising the regression error between the first image component adjacent reference values and the second image component adjacent reference values around the coding block, as calculated by equation (2):

$$\alpha = \frac{2N \sum_{n} Y(n)\,C(n) - \sum_{n} Y(n) \sum_{n} C(n)}{2N \sum_{n} Y(n)^2 - \left(\sum_{n} Y(n)\right)^2}, \qquad \beta = \frac{\sum_{n} C(n) - \alpha \sum_{n} Y(n)}{2N} \tag{2}$$

where Y(n) denotes the down-sampled first image component adjacent reference values on the left and upper sides, C(n) denotes the second image component adjacent reference values on the left and upper sides, N is the side length of the second image component coding block, and n = 1, 2, …, 2N. Fig. 2A and 2B show schematic diagrams of the sampling of the first image component adjacent reference values and the second image component adjacent reference values of a coding block in the related art: in fig. 2A, a bold box highlights the first image component coding block 21, and grey filled circles indicate its adjacent reference values Y(n); in fig. 2B, a bold box highlights the second image component coding block 22, and grey filled circles indicate its adjacent reference values C(n). Fig. 2A shows a first image component coding block of size 2N×2N; for a video image in the 4:2:0 format, a first image component of size 2N×2N corresponds to a second image component of size N×N, shown as 22 in fig. 2B. That is, fig. 2A and 2B depict the first image component sampling and the second image component sampling of one and the same coding block. For a square coding block, equation (2) can be applied directly; for a non-square coding block, the adjacent samples of the longer edge are first down-sampled so that their number equals that of the shorter edge. α and β do not need to be transmitted, since they can be calculated with equation (2) at the decoder; the embodiments of the present application do not specifically limit this.
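For illustration only (a minimal sketch, not the normative derivation in the reference software; the function name and the plain-Python arithmetic are assumptions), the least-squares fit of equation (2) over the down-sampled neighbouring reference samples could be written as:

```python
def derive_cclm_params(y_ref, c_ref):
    """Least-squares fit of the linear model C = alpha * Y + beta over
    the neighbouring reference samples, as in equation (2)."""
    assert len(y_ref) == len(c_ref) and len(y_ref) > 0
    n = len(y_ref)
    sum_y = sum(y_ref)
    sum_c = sum(c_ref)
    sum_yc = sum(y * c for y, c in zip(y_ref, c_ref))
    sum_yy = sum(y * y for y in y_ref)
    denom = n * sum_yy - sum_y * sum_y
    if denom == 0:  # flat neighbourhood: fall back to a constant predictor
        return 0.0, sum_c / n
    alpha = (n * sum_yc - sum_y * sum_c) / denom
    beta = (sum_c - alpha * sum_y) / n
    return alpha, beta
```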
Fig. 3A to 3C show schematic diagrams of the principle of the CCLM preset model in the related art. As shown in fig. 3A to 3C, a, b and c are adjacent reference values of the first image component, and A, B and C are the corresponding adjacent reference values of the second image component; e is the first image component reconstruction value of a certain pixel point in the coding block, and E is the second image component predicted value of that pixel point. Using all the first image component adjacent reference values Y(n) and second image component adjacent reference values C(n) of the coding block, α and β can be calculated according to equation (2), and the preset model can be established from the calculated α and β according to equation (1), as shown in fig. 3C; substituting the first image component reconstruction value e of a certain pixel point in the coding block into the preset model of equation (1) then yields the second image component predicted value E of that pixel point.
In JEM there are currently two CCLM prediction modes: the single-model CCLM prediction mode, and the Multiple-Model CCLM (MMLM) prediction mode. As the names imply, the single-model CCLM mode uses only one preset model to predict the second image component from the first image component, whereas the MMLM mode uses multiple preset models for that prediction. For example, in the MMLM prediction mode, the first image component adjacent reference values and the second image component adjacent reference values of the coding block are divided into two groups, each of which can be used independently as a training set for deriving the model parameters of a preset model; that is, each group yields a pair of model parameters α and β. The first image component reconstruction values of the coding block can also be grouped by the same classification rule as the first image component adjacent reference values, and the corresponding model parameters α and β are then applied to each group to establish the preset models.
Fig. 4 is a schematic diagram of the grouping of the first image component adjacent reference values and second image component adjacent reference values in the MMLM prediction mode in the related art. The threshold is a set value used to decide how the multiple preset models are established, and is obtained by averaging the first image component adjacent reference values Y(n). As can be seen from fig. 4, with the threshold, denoted Threshold, as the boundary point: if an adjacent pixel point's first image component adjacent reference value is less than or equal to the threshold, it is classified into the first group; if its first image component adjacent reference value is greater than the threshold, it is classified into the second group. The model parameters α1 and β1 of the first preset model can be derived from the first image component adjacent reference values and second image component adjacent reference values of the first group, for example α1 = 2, β1 = 1; the model parameters α2 and β2 of the second preset model can be derived from those of the second group, for example α2 = 1/2, β2 = 1. The first preset model M1 and the second preset model M2 thus established are shown in formula (3):

$$\mathrm{Pred1}_C[i,j] = \alpha_1 \cdot \mathrm{Rec}_Y[i,j] + \beta_1, \quad \mathrm{Rec}_Y[i,j] \le \mathrm{Threshold}$$
$$\mathrm{Pred2}_C[i,j] = \alpha_2 \cdot \mathrm{Rec}_Y[i,j] + \beta_2, \quad \mathrm{Rec}_Y[i,j] > \mathrm{Threshold} \tag{3}$$

where Rec_Y[i,j] is the first image component reconstruction value of the pixel point at position [i,j] in the coding block, Pred1_C[i,j] is the second image component predicted value obtained for that pixel point with the first preset model M1, and Pred2_C[i,j] is the second image component predicted value obtained for that pixel point with the second preset model M2.
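As a sketch of the MMLM grouping just described (assumptions: the derive_cclm_params helper from the earlier sketch, the mean-valued threshold, and that both groups are non-empty):

```python
def derive_mmlm_params(y_ref, c_ref):
    """Split the neighbouring reference samples at the mean luma value
    and fit one linear model per group (two-model MMLM)."""
    threshold = sum(y_ref) / len(y_ref)
    group1 = [(y, c) for y, c in zip(y_ref, c_ref) if y <= threshold]
    group2 = [(y, c) for y, c in zip(y_ref, c_ref) if y > threshold]
    models = [derive_cclm_params([y for y, _ in g], [c for _, c in g])
              for g in (group1, group2)]  # assumes neither group is empty
    return threshold, models

def predict_mmlm(rec_y, threshold, models):
    """Apply formula (3): choose the model by comparing the luma
    reconstruction value against the threshold."""
    (a1, b1), (a2, b2) = models
    return a1 * rec_y + b1 if rec_y <= threshold else a2 * rec_y + b2
```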
In both modes, the model parameters α and β of a preset model are calculated from the first image component adjacent reference values and the second image component adjacent reference values of the coding block; specifically, α and β are obtained by minimising the regression error between the first image component adjacent reference values and the second image component adjacent reference values, as shown in equation (2) above.
However, because the spatial texture of an image often changes from region to region, the distribution characteristics of the pixel points in different regions differ; for example, some pixel points have high luminance while others have low luminance. If the model parameters of the preset model are constructed simply from the adjacent reference pixels, the consideration is insufficiently comprehensive and the constructed model parameters are not optimal, so the second image component predicted value obtained with the preset model is not accurate enough.
In the embodiments of the present application, the prediction method for a video image component proposes constructing the model parameters of the preset model based on the first image component reconstruction values and second image component temporary values of the coding block, where a second image component temporary value is obtained according to the degree of similarity between a first image component reconstruction value and the first image component adjacent reference values of the coding block, so that the constructed model parameters are as close as possible to the optimal model parameters. Referring to fig. 5, a distribution diagram of the pixel points and adjacent reference pixels in a coding block provided by an embodiment of the present application is shown; as shown in fig. 5, the adjacent reference pixels of the coding block are mainly of high luminance, while the pixels inside the coding block are mainly of low and medium luminance. The present application considers not only the first image component adjacent reference values and second image component adjacent reference values of the adjacent reference pixels, but also the degree of similarity between the first image component reconstruction values and the first image component adjacent reference values, so that the constructed model parameters are as close as possible to the optimal model parameters. In the embodiments of the present application, the first image component reconstruction values of the coding block participate not only in the application of the preset model but also in the calculation of the model parameters, so that the resulting second image component predicted value is still closer to the second image component original value. The technical solutions of the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings.
Referring to fig. 6, an example of a block diagram of the components of a video encoding system is shown. As shown in fig. 6, the video encoding system 600 includes components such as transform and quantization 601, intra estimation 602, intra prediction 603, motion compensation 604, motion estimation 605, inverse transform and inverse quantization 606, filter control analysis 607, deblocking filtering and Sample Adaptive Offset (SAO) filtering 608, header information coding and Context-based Adaptive Binary Arithmetic Coding (CABAC) 609, and decoded picture buffer 610. For an input original video signal, a video coding block can be obtained through the division of a Coding Tree Unit (CTU); the video coding block is then processed by transform and quantization 601, which transforms the residual information from the pixel domain to the transform domain and quantizes the resulting transform coefficients, to further reduce the bit rate. Intra estimation 602 and intra prediction 603 perform intra prediction of the video coding block; in particular, intra estimation 602 and intra prediction 603 are used to determine the intra prediction mode to be used to encode the video coding block. Motion compensation 604 and motion estimation 605 perform inter-prediction encoding of the received video coding block relative to one or more blocks in one or more reference frames, to provide temporal prediction; the motion estimation performed by motion estimation 605 is the process of generating motion vectors that estimate the motion of the video coding block, after which motion compensation 604 performs motion compensation based on the motion vectors determined by motion estimation 605. After determining the intra prediction mode, intra prediction 603 also provides the selected intra prediction data to header information coding and CABAC 609, and motion estimation 605 likewise sends the calculated motion vector data to header information coding and CABAC 609. Furthermore, inverse transform and inverse quantization 606 are used for the reconstruction of the video coding block: a residual block is reconstructed in the pixel domain, blocking artifacts of this reconstructed residual block are removed through filter control analysis 607 and deblocking and SAO filtering 608, and the reconstructed residual block is then added to a predictive block in the frame of decoded picture buffer 610 to generate a reconstructed video coding block. Header information coding and CABAC 609 encodes the quantized transform coefficients; in a CABAC-based coding algorithm, the context may be based on adjacent coding blocks and may be used to encode information indicating the determined intra prediction mode, outputting the code stream of the video signal. The decoded picture buffer 610 stores reconstructed video coding blocks; as video picture coding proceeds, new reconstructed video coding blocks are continuously generated and stored in the decoded picture buffer 610.
Referring to fig. 7, an example of a block diagram of a video decoding system is shown; as shown in fig. 7, the video decoding system 700 includes a header information decoding and CABAC decoding 701, an inverse transform and inverse quantization 702, an intra prediction 703, a motion compensation 704, a deblocking filtering and SAO filtering 705, and a decoded image buffer 706; after the input video signal is subjected to the encoding processing of fig. 6, a code stream of the video signal is output; the code stream is input into the video decoding system 700, and is first subjected to header information decoding and CABAC decoding 701 to obtain a decoded transform coefficient; processing the transform coefficients by inverse transform and inverse quantization 702 to produce a block of residuals in the pixel domain; intra prediction 703 may be used to generate prediction data for a current video decoded block based on the determined intra prediction mode and data from previously decoded blocks of the current frame or picture; motion compensation 704 is the determination of prediction information for a video decoded block by parsing motion vectors and other associated syntax elements and using the prediction information to generate a predictive block for the video decoded block being decoded; forming a decoded video block by summing the residual block from inverse transform and inverse quantization 702 with the corresponding predictive block generated by motion compensation 704; the decoded video signal may be passed through deblocking and SAO filtering 705 to remove blocking artifacts, which may improve video quality; the decoded video blocks are then stored in a decoded picture buffer 706, and the decoded picture buffer 706 stores reference pictures for subsequent motion compensation and also stores the decoded output video signal, i.e. the restored original video signal.
The embodiment of the present application is mainly applied to the intra prediction 603 portion shown in fig. 6 and the intra prediction 703 portion shown in fig. 7; that is, the embodiment of the present application may simultaneously function as an encoding system and a decoding system, and the embodiment of the present application is not particularly limited to this.
Based on the application scenario example of fig. 6 or fig. 7, referring to fig. 8, a flow of a method for predicting a video image component provided by an embodiment of the present application is shown, where the method may include:
S801: acquiring a first image component reconstruction value, a first image component adjacent reference value and a second image component adjacent reference value corresponding to a coding block; where the first image component reconstruction value represents the reconstruction value of the first image component for at least one pixel point of the coding block, and the first image component adjacent reference value and the second image component adjacent reference value respectively represent, for each adjacent pixel point among the adjacent reference pixels of the coding block, the reference value of the first image component and the reference value of the second image component;
S802: determining model parameters according to the acquired first image component reconstruction value, the first image component adjacent reference value and the second image component adjacent reference value;
S803: acquiring a second image component predicted value corresponding to each pixel point in the coding block according to the model parameters.
It should be noted that the coding block is a current coding block to be subjected to second image component prediction or third image component prediction; the first image component reconstruction value is used for representing a reconstruction value of a first image component corresponding to at least one pixel point in the coding block, the first image component adjacent reference value is used for representing a reference value of a first image component corresponding to an adjacent reference pixel point of the coding block, and the second image component adjacent reference value is used for representing a reference value of a second image component corresponding to an adjacent reference pixel point of the coding block.
In the technical solution shown in fig. 8, a first image component reconstruction value, a first image component adjacent reference value and a second image component adjacent reference value corresponding to a coding block are acquired, where the first image component reconstruction value represents the reconstruction value of the first image component for at least one pixel point of the coding block, and the first image component adjacent reference value and the second image component adjacent reference value respectively represent, for each adjacent pixel point among the adjacent reference pixels of the coding block, the reference value of the first image component and the reference value of the second image component; model parameters are determined according to the acquired first image component reconstruction value, first image component adjacent reference value and second image component adjacent reference value; and a second image component predicted value corresponding to each pixel point in the coding block is acquired according to the model parameters. In the embodiments of the present application, the determination of the model parameters considers not only the first image component adjacent reference values and the second image component adjacent reference values, but also the first image component reconstruction values, so that the second image component predicted value is closer to the second image component original value; this can effectively improve the prediction accuracy of the video image component, make the predicted value of the video image component closer to its original value, and thereby save coding bit rate.
Understandably, the optimal model parameters of the preset model can be analysed from the standpoint of mathematical theory. For the second image component predicted value corresponding to each pixel point in the coding block, it is generally desired that the prediction residual between the second image component original value and the second image component predicted value be as small as possible, so the objective function to be optimised is

$$\min_{\alpha,\beta} \sum_{i,j} \left( C[i,j] - \mathrm{Pred}_C[i,j] \right)^2 \tag{4}$$

where i and j are the position coordinates of a pixel point in the coding block, i denoting the horizontal direction and j the vertical direction; Rec_Y[i,j] is the first image component reconstruction value of the pixel point at position [i,j] in the coding block; C[i,j] is the second image component original value of the pixel point at position [i,j]; and Pred_C[i,j] is the second image component predicted value of the pixel point at position [i,j]. The optimal model parameters α_opt and β_opt of the preset model can then be obtained by the least squares method, as shown in equation (5):

$$\alpha_{\mathrm{opt}} = \frac{M \sum_{i,j} \mathrm{Rec}_Y[i,j]\, C[i,j] - \sum_{i,j} \mathrm{Rec}_Y[i,j] \sum_{i,j} C[i,j]}{M \sum_{i,j} \mathrm{Rec}_Y[i,j]^2 - \left(\sum_{i,j} \mathrm{Rec}_Y[i,j]\right)^2}, \qquad \beta_{\mathrm{opt}} = \frac{\sum_{i,j} C[i,j] - \alpha_{\mathrm{opt}} \sum_{i,j} \mathrm{Rec}_Y[i,j]}{M} \tag{5}$$

where M = N×N is the number of pixel points in the coding block. As can be seen from equation (5), linear regression over the N×N first image component reconstruction values Rec_Y[i,j] within the coding block and the second image component original values C[i,j] yields the optimal model parameters α_opt and β_opt in the minimum mean-square-error sense.
However, the calculation of the optimal model parameters α_opt and β_opt uses the second image component original values of the coding block, which are not available at the decoding end, so α_opt and β_opt would have to be transmitted, introducing additional bit overhead; in addition, the model parameters α_opt and β_opt are large in magnitude and span, and the cost of transmitting them may not be recovered by the gain in prediction performance. Therefore, in the embodiments of the present application, a second image component temporary value is constructed based on the degree of similarity between the first image component reconstruction values and the first image component adjacent reference values of the coding block, and this constructed second image component temporary value is used in place of the second image component original value of each pixel point in the coding block; in this way, near-optimal model parameters can be obtained without introducing additional bit overhead.
Based on the technical solution shown in fig. 8, in a possible implementation manner, the determining a model parameter according to the obtained first image component reconstructed value, the first image component adjacent reference value, and the second image component adjacent reference value includes:
acquiring a second image component temporary value according to the first image component reconstruction value, the first image component adjacent reference value and the second image component adjacent reference value; wherein the second image component temporary value characterizes a temporary value of the second image component corresponding to at least one pixel point of the coding block;
and determining model parameters according to the first image component reconstruction value and the acquired second image component temporary value.
It should be noted that, in the embodiment of the present application, the determination of the model parameter not only considers the adjacent reference value of the first image component and the adjacent reference value of the second image component, but also considers the reconstructed value of the first image component; according to the similarity between the first image component reconstruction value and the first image component adjacent reference value, a matching pixel point of each pixel point in the coding block can be obtained, and the second image component temporary value is obtained according to the second image component adjacent reference value corresponding to the matching pixel point; that is to say, the second image component temporary value is obtained based on the second image component adjacent reference value corresponding to the matching pixel point of at least one pixel point of the coding block in the adjacent reference pixels of the coding block.
As can be understood, for the obtaining of the second image component temporary value, in a possible implementation manner, the obtaining of the second image component temporary value according to the first image component reconstruction value, the first image component neighboring reference value and the second image component neighboring reference value includes:
calculating the difference value of any one of the adjacent reference values of the first image component and the first image component reconstruction value corresponding to each pixel point of the coding block;
according to the result of the difference calculation, acquiring a matched pixel point of each pixel point from the adjacent reference pixels of the coding block;
and taking the second image component adjacent reference value corresponding to the matched pixel point as a second image component temporary value corresponding to each pixel point.
Optionally, when only 1 neighboring pixel point corresponding to the neighboring reference value of the first image component with the smallest difference is obtained according to the result of the difference calculation, in the above implementation manner, specifically, the obtaining a matching pixel point of each pixel point from the neighboring reference pixels of the coding block according to the result of the difference calculation includes:
acquiring an adjacent pixel point corresponding to the adjacent reference value of the first image component with the minimum difference according to the result of the difference calculation;
and taking the adjacent pixel points as matching pixel points of each pixel point.
Optionally, when, according to a result of the difference calculation, there are a plurality of adjacent pixel points corresponding to the adjacent reference value of the first image component with the smallest difference, in the above implementation manner, specifically, the obtaining, according to the result of the difference calculation, a matching pixel point of each pixel point from the adjacent reference pixels of the coding block includes:
acquiring an adjacent pixel point set corresponding to the adjacent reference value of the first image component with the minimum difference according to the difference calculation result;
and calculating the distance value between each adjacent pixel point and each pixel point in the adjacent pixel point set, and selecting the adjacent pixel point with the minimum distance value as a matching pixel point of each pixel point.
In the CCLM prediction mode, suppose the first image component reconstruction value of the pixel point at position [i,j] in the coding block is Rec_Y[i,j]. The first image component adjacent reference values of the coding block are searched for the value closest to Rec_Y[i,j], i.e. the one whose difference from Rec_Y[i,j] is smallest. If only 1 adjacent pixel point corresponds to this closest first image component adjacent reference value, that adjacent pixel point is the matching pixel point of the pixel point at [i,j]. If several adjacent pixel points correspond to the closest first image component adjacent reference value, i.e. a set of adjacent pixel points is obtained, then the distance value between each adjacent pixel point in the set and the pixel point needs to be calculated, and the adjacent pixel point with the smallest distance value is selected as the matching pixel point of the pixel point at [i,j]. After the matching pixel point is obtained, the second image component adjacent reference value of the matching pixel point can be taken as the second image component temporary value of the pixel point at [i,j], denoted C′[i,j]. Once the second image component temporary values of all pixel points of the coding block have been found, the values C′[i,j] can be used in place of the original values C[i,j] to calculate the model parameters, and the model parameters are then used to construct the second image component predicted values. With this method, a set of values relatively close to the second image component original values of the coding block can be constructed.
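A rough sketch of this closest-value matching (not the patent's normative procedure; the data layout and the Manhattan tie-break metric are assumptions), with the adjacent reference pixels given as (position, Y reference, C reference) tuples:

```python
def temp_chroma_by_matching(rec_y, pos, neighbours):
    """For the pixel at `pos` with luma reconstruction value `rec_y`,
    pick the neighbouring reference pixel whose luma reference value is
    closest (ties broken by spatial distance) and return its chroma
    reference, which becomes the temporary value C'[i, j]."""
    def key(entry):
        n_pos, y_ref, _ = entry
        diff = abs(y_ref - rec_y)                               # luma difference first
        dist = abs(n_pos[0] - pos[0]) + abs(n_pos[1] - pos[1])  # then distance
        return (diff, dist)
    return min(neighbours, key=key)[2]
```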
It is understood that, for the acquisition of the temporary value of the second image component, besides the temporary value of the second image component obtained by the above matching method using the closest pixel, the temporary value of the second image component can be obtained by a construction method such as interpolation; therefore, in another possible implementation manner, the obtaining a second image component temporary value according to the first image component reconstruction value, the first image component neighboring reference value, and the second image component neighboring reference value includes:
calculating the difference value of any one of the adjacent reference values of the first image component and the first image component reconstruction value corresponding to each pixel point of the coding block;
according to the result of the difference calculation, a first matching pixel point and a second matching pixel point of each pixel point are obtained from the adjacent reference pixels of the coding block; the first matching pixel point represents an adjacent pixel point corresponding to a first image component adjacent reference value which is larger than the first image component reconstruction value and has the minimum difference value in the first image component adjacent reference values, and the second matching pixel point represents an adjacent pixel point corresponding to a first image component adjacent reference value which is smaller than the first image component reconstruction value and has the minimum difference value in the first image component adjacent reference values;
and carrying out interpolation operation according to the second image component adjacent reference value corresponding to the first matching pixel point and the second image component adjacent reference value corresponding to the second matching pixel point to obtain a second image component temporary value corresponding to each pixel point.
In the CCLM prediction mode, suppose the first image component reconstruction value of the pixel point at position [i,j] in the coding block is Rec_Y[i,j], denoted Y_C. Following the principle that the first image component adjacent reference value should be as close as possible to the first image component reconstruction value, one would like to find a first image component adjacent reference value equal to Y_C among the first image component adjacent reference values, but such a value cannot in general be found there directly. In this case, the first image component adjacent reference values of the coding block can be searched: first, the first image component adjacent reference value Y_1 that is greater than Y_C and closest to Y_C is obtained; the adjacent pixel point corresponding to Y_1 is the first matching pixel point, and the second image component adjacent reference value corresponding to the first matching pixel point is C_1. Then, the first image component adjacent reference value Y_2 that is smaller than Y_C and closest to Y_C is obtained; the adjacent pixel point corresponding to Y_2 is the second matching pixel point, and the second image component adjacent reference value corresponding to the second matching pixel point is C_2. That is, the first matching pixel point and the second matching pixel point found by the search are those of the pixel point at position [i,j]. Interpolation can then be carried out with Y_C, Y_1, C_1, Y_2 and C_2: the second image component value corresponding to Y_C is obtained as C_C = C_2 + (Y_C − Y_2) × (C_1 − C_2)/(Y_1 − Y_2), and this value can be taken as the second image component temporary value of the pixel point at [i,j], denoted C′[i,j]. Once the second image component temporary values of all pixel points of the coding block have been found, the values C′[i,j] can be used in place of the original values C[i,j] to calculate the model parameters, and the model parameters are then used to construct the second image component predicted values. With this method, a set of values relatively close to the second image component original values of the coding block can be constructed.
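A sketch of the interpolation variant (again illustrative only; the fallback for values outside the reference range is an assumption, since the text defines only the bracketing case):

```python
def temp_chroma_by_interpolation(rec_y, neighbours):
    """Interpolate a temporary chroma value for luma value `rec_y`
    between the two bracketing neighbouring reference samples:
    C' = C2 + (Yc - Y2) * (C1 - C2) / (Y1 - Y2)."""
    exact = [c for _, y, c in neighbours if y == rec_y]
    if exact:
        return exact[0]
    above = [(y, c) for _, y, c in neighbours if y > rec_y]
    below = [(y, c) for _, y, c in neighbours if y < rec_y]
    if not above or not below:  # outside the range: take the closest sample
        pool = above or below
        return min(pool, key=lambda yc: abs(yc[0] - rec_y))[1]
    y1, c1 = min(above, key=lambda yc: yc[0])  # Y1: closest value above Yc
    y2, c2 = max(below, key=lambda yc: yc[0])  # Y2: closest value below Yc
    return c2 + (rec_y - y2) * (c1 - c2) / (y1 - y2)
```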
In the embodiment of the present application, for obtaining the temporary value of the second image component, not only the temporary value of the second image component may be obtained by using the matching method using the closest pixel, but also the temporary value of the second image component may be obtained by using an interpolation method, or even a part of pixels may be used to obtain the temporary value of the second image component by using the matching method using the closest pixel and a part of pixels together by using the interpolation method; the embodiments of the present application are not particularly limited.
In the embodiments of the present application, for obtaining the second image component temporary value, the search range for the matching pixel point may also be expanded or reduced. For example, the search range may be limited to adjacent pixel points whose row or column differs from that of the pixel point whose second image component temporary value is to be determined by no more than n, where n is an integer greater than 1; the search range may also be expanded to the pixel points of m further adjacent rows and/or m further adjacent columns, where m is an integer greater than 1; even the position information of the pixel points of other coded blocks in the lower-left or upper-right region may be used. The embodiments of the present application do not specifically limit this.
It can be understood that after all the temporary values of the second image components corresponding to the encoded blocks are obtained, the model parameters can be determined; in a possible implementation manner, the determining a model parameter according to the acquired first image component reconstructed value and the acquired second image component temporary value includes:
inputting the first image component reconstruction value and the second image component temporary value into a first preset factor calculation model to obtain a first model parameter;
and inputting the first model parameter, the first image component reconstruction value and the second image component temporary value into a second preset factor calculation model to obtain a second model parameter.
It should be noted that the model parameters include a first model parameter and a second model parameter. After all the second image component temporary values corresponding to the coding block have been obtained, linear regression by the least squares method is still performed, and the first model parameter α′ and the second model parameter β′ of the preset model can be obtained as shown in equation (6):

$$\alpha' = \frac{M \sum_{i,j} \mathrm{Rec}_Y[i,j]\, C'[i,j] - \sum_{i,j} \mathrm{Rec}_Y[i,j] \sum_{i,j} C'[i,j]}{M \sum_{i,j} \mathrm{Rec}_Y[i,j]^2 - \left(\sum_{i,j} \mathrm{Rec}_Y[i,j]\right)^2}, \qquad \beta' = \frac{\sum_{i,j} C'[i,j] - \alpha' \sum_{i,j} \mathrm{Rec}_Y[i,j]}{M} \tag{6}$$

where the sums run over all pixel points of the coding block and M is the number of those pixel points.
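Tying the pieces together (an illustrative sketch only, reusing the assumed helpers from the earlier sketches):

```python
def derive_model_params(block_rec_y, neighbours):
    """Build a temporary chroma value C'[i, j] for every pixel of the
    block, then fit the preset model over (Rec_Y, C') as in equation (6)."""
    rec_flat, temp_flat = [], []
    for pos, rec_y in block_rec_y.items():  # {(i, j): Rec_Y[i, j]}
        rec_flat.append(rec_y)
        temp_flat.append(temp_chroma_by_matching(rec_y, pos, neighbours))
    return derive_cclm_params(rec_flat, temp_flat)  # (alpha', beta')
```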
as can be understood, after the first model parameter and the second model parameter are obtained, the second image component prediction value corresponding to each pixel point in the coding block can be obtained according to the established preset model; therefore, in the foregoing implementation manner, specifically, the obtaining, according to the model parameter, the second image component prediction value corresponding to each pixel point in the coding block includes:
establishing a preset model based on the first model parameter and the second model parameter; the preset model is used for representing a calculation relation between a first image component reconstruction value and a second image component predicted value corresponding to each pixel point in the coding block;
and obtaining a second image component predicted value corresponding to each pixel point in the coding block according to the preset model and the first image component reconstruction value corresponding to each pixel point in the coding block.
It should be noted that, from the obtained first model parameter α′ and second model parameter β′, the established preset model is shown in equation (7):

$$\mathrm{Pred}_C[i,j] = \alpha' \cdot \mathrm{Rec}_Y[i,j] + \beta' \tag{7}$$

For the pixel point at position [i,j], substituting the obtained first image component reconstruction value Rec_Y[i,j] into the preset model of equation (7) yields the second image component predicted value Pred_C[i,j] of the pixel point at [i,j].
In the embodiments of the present application, a second image component temporary value is constructed to replace the second image component original value of the coding block; the first model parameter and the second model parameter are then calculated from the first image component reconstruction values and the second image component temporary values, and are used to establish the preset model. The deviation between the preset model thus established and the expected model is therefore small, the second image component predicted value obtained with the preset model is closer to the second image component original value, and the prediction accuracy of the video image component is improved.
In the embodiment of the present application, in addition to obtaining the second image component predicted value according to the first model parameter, the second model parameter and the established preset model, the obtained second image component temporary value may be directly used as the second image component predicted value; the embodiment of the present application is not particularly limited to this. If the temporary value of the second image component is directly used as the predicted value of the second image component, the first model parameter and the second model parameter do not need to be calculated and a preset model does not need to be established, so that the calculation amount of the second image component prediction is greatly reduced.
Based on the technical solution shown in fig. 8, in a possible implementation manner, the method further includes:
acquiring a second image component reconstruction value and a third image component adjacent reference value corresponding to the coding block; the second image component reconstruction value represents a second image component reconstruction value corresponding to at least one pixel point of the coding block, and the third image component adjacent reference value represents a third image component reference value corresponding to each adjacent pixel point in the adjacent reference pixels of the coding block;
determining sub-model parameters according to the obtained second image component reconstruction value, the second image component adjacent reference value and the third image component adjacent reference value;
and acquiring a third image component predicted value corresponding to each pixel point in the coding block according to the sub-model parameters.
In the foregoing implementation manner, specifically, the determining sub-model parameters according to the obtained second image component reconstruction value, the second image component adjacent reference value, and the third image component adjacent reference value includes:
acquiring a third image component temporary value according to the second image component reconstruction value, the second image component adjacent reference value and the third image component adjacent reference value; the temporary value of the third image component is obtained based on the adjacent reference value of the third image component corresponding to the matched pixel point of at least one pixel point of the coding block in the adjacent reference pixels of the coding block;
and determining sub-model parameters according to the second image component reconstruction value and the acquired third image component temporary value.
It should be noted that, in the embodiment of the present application, in addition to the prediction from the first image component to the second image component, the prediction from the second image component to the third image component, or from the third image component to the second image component may also be performed; the prediction method of the third image component to the second image component is similar to the prediction method of the second image component to the third image component, and the following description will be given by taking the prediction of the second image component to the third image component as an example in the embodiment of the present application.
Specifically, after the second image component reconstruction value of the coding block and the third image component adjacent reference value of the coding block are obtained, the same method as is used above for determining the second image component temporary value is applied in combination with the second image component adjacent reference value; for example, a matching method based on the closest pixel may be adopted, or an interpolation method may be adopted, so that the third image component temporary value is obtained. Here, based on the second image component reconstruction value and the third image component temporary value, the determined sub-model parameters include a first sub-model parameter and a second sub-model parameter; a sub-preset model can be established according to the first sub-model parameter and the second sub-model parameter; and the third image component predicted value corresponding to each pixel point in the coding block is obtained according to the sub-preset model and the second image component reconstruction value corresponding to each pixel point in the coding block.
In the embodiment of the present application, it is assumed that the second image component reconstruction value of the coding block is denoted by Rec_Cb and the third image component temporary value of the coding block is denoted by Tmp_Cr; the first sub-model parameter α* and the second sub-model parameter β* in the sub-preset model are then obtained by the same least square method as used above for α' and β', with Rec_Cb in place of the first image component reconstruction value and Tmp_Cr in place of the second image component temporary value.
After the first sub-model parameter α* and the second sub-model parameter β* are obtained, a sub-preset model can be established; the established sub-preset model is shown as formula (9):

Pred_Cr[i,j] = α* · Rec_Cb[i,j] + β*        (9)

Thus, for a pixel point with position coordinates [i,j] in the coding block, according to the obtained second image component reconstruction value Rec_Cb[i,j] and the sub-preset model described in the above formula (9), the third image component predicted value Pred_Cr[i,j] corresponding to that pixel point can be obtained.
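Reusing the hypothetical helpers from the sketch above, and assuming arrays of sampled Cb reconstruction values, Cr temporary values, and the block's Cb reconstruction values are available (all names illustrative), the sub-preset model could be exercised as:

```python
# illustrative only: fit the sub-preset model from second component (Cb)
# reconstruction values and third component (Cr) temporary values, then
# predict the third image component for the whole coding block
alpha_s, beta_s = derive_model_parameters(rec_cb_samples, tmp_cr_samples)
pred_cr_block = predict_second_component(rec_cb_block, alpha_s, beta_s)
```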
It is understood that the above prediction method, described for the CCLM prediction mode, is also applicable to the MMLM prediction mode; as the name implies, in the MMLM prediction mode a plurality of preset models are used to predict the second image component from the first image component. Therefore, based on the technical solution shown in fig. 8, in a possible implementation manner, the method further includes:
obtaining at least one threshold value according to a first image component reconstruction value corresponding to at least one pixel point of the coding block;
grouping according to the comparison result of the first image component reconstruction value and the at least one threshold value to obtain at least two groups of first image component reconstruction values and second image component temporary values;
and determining model parameters according to each group of the at least two groups of first image component reconstruction values and second image component temporary values to obtain at least two groups of model parameters.
In the foregoing implementation manner, specifically, the obtaining a second image component prediction value corresponding to each pixel point in the coding block according to the model parameter includes:
establishing at least two preset models based on the obtained at least two groups of model parameters; the at least two preset models and the at least two groups of model parameters have corresponding relations;
according to the comparison result of the first image component reconstruction value and the at least one threshold value, selecting a preset model corresponding to each pixel point in the coding block from the at least two preset models;
and acquiring a second image component predicted value corresponding to each pixel point in the coding block according to the preset model corresponding to each pixel point in the coding block and the first image component reconstruction value.
It should be noted that the threshold serves as the classification basis for the first image component reconstruction values of the coding block: it is a set value used to indicate that a plurality of preset models are to be established, and its size is related to the first image component reconstruction values corresponding to the coding block. Specifically, the threshold may be obtained by calculating the average value of the first image component reconstruction values corresponding to the coding block, or by calculating the median value of the first image component reconstruction values corresponding to the coding block; this is not specifically limited in the embodiment of the present application.
In the embodiment of the present application, it is assumed that a Mean value is calculated according to the first image component reconstruction values corresponding to at least one pixel point of the coding block, as shown in equation (10):

Mean = (1/M) · Σ Rec_L[i,j]        (10)

where Mean represents the mean value of the first image component reconstruction values Rec_L[i,j] corresponding to the coding block, the sum runs over the sampled pixel points, and M represents the sampling number of the first image component reconstruction values corresponding to the coding block.
After the Mean value is obtained through calculation, it can be directly used as a threshold, and two preset models can be established by using this threshold; however, it should be noted that the embodiment of the present application is not limited to establishing only two preset models. Here, in this embodiment, the average value Mean of the first image component reconstruction values corresponding to the coding block is taken as the threshold, that is, Threshold = Mean; the first image component reconstruction values corresponding to the coding block are compared with the threshold, so that two groups of first image component reconstruction values and second image component temporary values can be obtained. According to the two groups of first image component reconstruction values and second image component temporary values, the first model parameter α1' and the second model parameter β1' of a first preset model, and the first model parameter α2' and the second model parameter β2' of a second preset model, can be derived respectively; combining equation (11), a first preset model M1' and a second preset model M2' are established:

Pred1_C[i,j] = α1' · Rec_L[i,j] + β1',  if Rec_L[i,j] ≤ Threshold
Pred2_C[i,j] = α2' · Rec_L[i,j] + β2',  if Rec_L[i,j] > Threshold        (11)

After the first preset model M1' and the second preset model M2' are obtained, the first image component reconstruction value Rec_L[i,j] of each pixel point in the coding block is compared with Threshold. If Rec_L[i,j] ≤ Threshold, the first preset model M1' is selected, and the second image component predicted value Pred1_C[i,j] corresponding to the pixel point with position coordinates [i,j] in the coding block is obtained according to the first preset model; if Rec_L[i,j] > Threshold, the second preset model M2' is selected, and the second image component predicted value Pred2_C[i,j] corresponding to the pixel point with position coordinates [i,j] in the coding block is obtained according to the second preset model.
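As a hedged illustration of this two-model selection, the following sketch reuses the hypothetical derive_model_parameters helper from the earlier sketch; the grouping rule follows the comparison with the threshold described above, and all names are illustrative:

```python
import numpy as np

def mmlm_predict(rec_l_samples, tmp_c_samples, rec_l_block):
    """Illustrative two-model (MMLM-style) prediction sketch."""
    rec_l_samples = np.asarray(rec_l_samples, dtype=np.float64)
    tmp_c_samples = np.asarray(tmp_c_samples, dtype=np.float64)
    block = np.asarray(rec_l_block, dtype=np.float64)

    threshold = rec_l_samples.mean()  # Mean used as the threshold, as in equation (10)
    low = rec_l_samples <= threshold  # first group: values less than or equal to the threshold

    if low.all() or not low.any():
        # all samples fall on one side of the threshold: fall back to a single model
        alpha, beta = derive_model_parameters(rec_l_samples, tmp_c_samples)
        return alpha * block + beta

    a1, b1 = derive_model_parameters(rec_l_samples[low], tmp_c_samples[low])
    a2, b2 = derive_model_parameters(rec_l_samples[~low], tmp_c_samples[~low])

    # per-pixel model selection inside the coding block, as in equation (11)
    return np.where(block <= threshold, a1 * block + b1, a2 * block + b2)
```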
In the embodiment of the application, after the Mean value is calculated according to the first image component reconstruction values corresponding to the coding block, the first image component reconstruction values are compared with the Mean value, with Mean as the demarcation point: if a first image component reconstruction value is less than or equal to the threshold, it is assigned to a first group; if a first image component reconstruction value is greater than the threshold, it is assigned to a second group. This yields a first-group set of first image component reconstruction values and a second-group set of first image component reconstruction values. In order to simplify the establishment of the preset model, for example according to the principle of "determining a line from two points", the median calculation may then be performed on the first-group set to obtain the median of the first group (so that only the reconstruction value corresponding to one pixel point remains in the first group), and the median calculation is likewise performed on the second-group set to obtain the median of the second group (so that only the reconstruction value corresponding to one pixel point remains in the second group). From the value corresponding to the single remaining pixel point of the first group and the value corresponding to the single remaining pixel point of the second group, the first image component reconstruction value and the second image component temporary value corresponding to these two pixel points can be obtained; the model parameters can thus be determined, the preset model can be established according to the model parameters, and the second image component predicted value can be obtained according to the established preset model. This prediction method can also greatly reduce the amount of computation for the second image component prediction.
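Under the "determining a line from two points" principle just described, with the two remaining representative pixel points denoted (L_0, C_0) and (L_1, C_1), where L is the first image component reconstruction value and C the second image component temporary value (notation assumed for illustration, and assuming L_1 ≠ L_0), the model parameters reduce to:

$$
\alpha = \frac{C_1 - C_0}{L_1 - L_0}, \qquad \beta = C_0 - \alpha L_0
$$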
This embodiment provides a method for predicting a video image component: a first image component reconstruction value, a first image component adjacent reference value and a second image component adjacent reference value corresponding to a coding block are acquired, where the first image component reconstruction value represents a reconstruction value of at least one pixel point of the coding block corresponding to a first image component, and the first image component adjacent reference value and the second image component adjacent reference value respectively represent a reference value of the first image component and a reference value of the second image component corresponding to each adjacent pixel point in the adjacent reference pixels of the coding block; model parameters are determined according to the obtained first image component reconstruction value, first image component adjacent reference value and second image component adjacent reference value; and a second image component predicted value corresponding to each pixel point in the coding block is acquired according to the model parameters. In the embodiment of the application, the determination of the model parameters considers not only the first image component adjacent reference value and the second image component adjacent reference value but also the first image component reconstruction value, so that the second image component predicted value is closer to the second image component original value; the prediction accuracy of the video image component can thus be effectively improved, and the coding rate is further saved.
Based on the same inventive concept of the foregoing embodiments, referring to fig. 9, which shows a composition of a prediction apparatus 90 for a video image component provided in an embodiment of the present application, the prediction apparatus 90 for a video image component may include: an acquisition section 901, a determination section 902, and a prediction section 903; wherein,
the acquiring part 901 is configured to acquire a first image component reconstruction value, a first image component adjacent reference value and a second image component adjacent reference value corresponding to the coding block; the first image component reconstruction value represents a reconstruction value of at least one pixel point of the coding block corresponding to a first image component, and the first image component adjacent reference value and the second image component adjacent reference value respectively represent a reference value of the first image component and a reference value of the second image component corresponding to each adjacent pixel point in the adjacent reference pixels of the coding block;
the determining part 902 is configured to determine a model parameter according to the acquired first image component reconstruction value, the first image component adjacent reference value and the second image component adjacent reference value;
the predicting part 903 is configured to obtain a second image component predicted value corresponding to each pixel point in the coding block according to the model parameter.
In the above solution, the obtaining part 901 is further configured to obtain a second image component temporary value according to the first image component reconstructed value, the first image component adjacent reference value and the second image component adjacent reference value; wherein the second image component temporary value represents the temporary value of at least one pixel point of the coding block corresponding to the second image component;
the determining part 902 is configured to determine model parameters based on the first image component reconstructed value and the acquired second image component temporary value.
In the above scheme, referring to fig. 10, the prediction apparatus 90 for a video image component further comprises a calculating part 904, wherein,
the calculating part 904 is configured to perform difference calculation on any one of the adjacent reference values of the first image component and the reconstructed value of the first image component corresponding to each pixel point of the coding block;
the obtaining part 901 is further configured to obtain a matching pixel point of each pixel point from the reference pixels adjacent to the coding block according to the result of the difference calculation; and taking the second image component adjacent reference value corresponding to the matched pixel point as a second image component temporary value corresponding to each pixel point.
In the above solution, the obtaining part 901 is configured to obtain, according to the result of the difference calculation, an adjacent pixel point corresponding to the adjacent reference value of the first image component with the smallest difference; and taking the adjacent pixel points as the matching pixel points of each pixel point.
In the above solution, the obtaining part 901 is configured to obtain, according to the result of the difference calculation, an adjacent pixel point set corresponding to the adjacent reference value of the first image component with the smallest difference;
the calculating part 904 is further configured to calculate a distance value between each neighboring pixel point in the set of neighboring pixel points and each pixel point;
the obtaining portion 901 is further configured to select an adjacent pixel point with the smallest distance value as a matching pixel point of each pixel point.
In the above solution, the calculating part 904 is configured to perform, for each pixel point of the coding block, a difference calculation between any one of the first image component adjacent reference values and the first image component reconstruction value corresponding to that pixel point;
the obtaining part 901 is further configured to obtain a first matching pixel and a second matching pixel of each pixel from the reference pixels adjacent to the coding block according to the result of the difference calculation; the first matching pixel point represents an adjacent pixel point corresponding to a first image component adjacent reference value which is larger than the first image component reconstruction value and has the minimum difference value in the first image component adjacent reference values, and the second matching pixel point represents an adjacent pixel point corresponding to a first image component adjacent reference value which is smaller than the first image component reconstruction value and has the minimum difference value in the first image component adjacent reference values; and carrying out interpolation operation according to the second image component adjacent reference value corresponding to the first matching pixel point and the second image component adjacent reference value corresponding to the second matching pixel point to obtain a second image component temporary value corresponding to each pixel point.
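By way of illustration only, the interpolation operation could be sketched as a linear interpolation weighted by distances in the first image component; the linear form and all names below are assumptions, not taken from the present application:

```python
def interpolate_temporary_value(rec_l, ref_l_above, ref_c_above, ref_l_below, ref_c_below):
    """Illustrative linear interpolation of the second image component temporary value.

    rec_l:       first image component reconstruction value of the current pixel point
    ref_l_above: first component adjacent reference value of the first matching pixel
                 (larger than rec_l, with the minimum difference)
    ref_c_above: second component adjacent reference value of the first matching pixel
    ref_l_below: first component adjacent reference value of the second matching pixel
                 (smaller than rec_l, with the minimum difference)
    ref_c_below: second component adjacent reference value of the second matching pixel
    """
    if ref_l_above == ref_l_below:
        return (ref_c_above + ref_c_below) / 2.0  # degenerate case: simple average
    # weight by where rec_l falls between the two first-component reference values
    w = (rec_l - ref_l_below) / (ref_l_above - ref_l_below)
    return (1.0 - w) * ref_c_below + w * ref_c_above
```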
In the above solution, the model parameters include a first model parameter and a second model parameter, and the obtaining part 901 is further configured to input the first image component reconstructed value and the second image component temporary value into a first preset factor calculation model to obtain the first model parameter;
the acquiring section 901 is further configured to input the first model parameter, the first image component reconstructed value, and the second image component temporary value into a second preset factor calculation model, so as to obtain the second model parameter.
In the above scheme, referring to fig. 11, the prediction apparatus 90 for a video image component further comprises an establishing part 905, wherein,
the establishing part 905 is configured to establish a preset model based on the first model parameter and the second model parameter; the preset model is used for representing a calculation relation between a first image component reconstruction value and a second image component predicted value corresponding to each pixel point in the coding block;
the predicting part 903 is further configured to obtain a second image component predicted value corresponding to each pixel point in the coding block according to the preset model and the first image component reconstructed value corresponding to each pixel point in the coding block.
In the above solution, the obtaining part 901 is further configured to obtain a second image component reconstructed value and a third image component adjacent reference value corresponding to the coding block; the second image component reconstruction value represents a second image component reconstruction value corresponding to at least one pixel point of the coding block, and the third image component adjacent reference value represents a third image component reference value corresponding to each adjacent pixel point in the adjacent reference pixels of the coding block;
the determining part 902 is further configured to determine sub-model parameters according to the obtained second image component reconstruction value, the second image component adjacent reference value and the third image component adjacent reference value;
the predicting part 903 is further configured to obtain a third image component predicted value corresponding to each pixel point in the coding block according to the sub-model parameter.
In the above solution, the obtaining part 901 is configured to obtain a third image component temporary value according to the second image component reconstructed value, the second image component adjacent reference value and the third image component adjacent reference value; the temporary value of the third image component is obtained based on the adjacent reference value of the third image component corresponding to the matched pixel point of at least one pixel point of the coding block in the adjacent reference pixels of the coding block;
the determining part 902 is configured to determine sub-model parameters based on the second image component reconstructed value and the acquired third image component temporary value.
In the above solution, the obtaining part 901 is further configured to obtain at least one threshold according to a first image component reconstruction value corresponding to at least one pixel point of the coding block; grouping according to the comparison result of the first image component reconstruction value and the at least one threshold value to obtain at least two groups of first image component reconstruction values and second image component temporary values; and determining model parameters according to each group of the at least two groups of the first image component reconstruction values and the second image component temporary values to obtain at least two groups of model parameters.
In the above scheme, the establishing part 905 is further configured to establish at least two preset models based on the obtained at least two sets of model parameters; the at least two preset models and the at least two groups of model parameters have corresponding relations;
the predicting part 903 is further configured to select a preset model corresponding to each pixel point in the coding block from the at least two preset models according to a comparison result between the first image component reconstruction value and the at least one threshold; and acquiring a second image component predicted value corresponding to each pixel point in the coding block according to the preset model corresponding to each pixel point in the coding block and the first image component reconstruction value.
It is understood that, in this embodiment, a "part" may be part of a circuit, part of a processor, or part of a program or software, etc.; it may also be a unit, and it may be modular or non-modular.
In addition, the components in this embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated unit can be realized in the form of hardware or in the form of a software functional module.
Based on such understanding, the technical solution of this embodiment, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method of this embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Accordingly, the present embodiment provides a computer storage medium storing a prediction program for a video image component, which when executed by at least one processor implements the steps of the method described above in the solution shown in fig. 8.
Based on the composition of the prediction apparatus 90 for video image components and the computer storage medium, referring to fig. 12, which shows a specific hardware structure of the prediction apparatus 90 for video image components provided in the embodiment of the present application, the prediction apparatus may include: a network interface 1201, a memory 1202, and a processor 1203; the various components are coupled together by a bus system 1204. It is understood that the bus system 1204 is used to enable connective communication between these components. The bus system 1204 includes a power bus, a control bus, and a status signal bus, in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 1204 in fig. 12. The network interface 1201 is used for receiving and sending signals in the process of receiving and sending information with other external network elements;
a memory 1202 for storing a computer program operable on the processor 1203;
a processor 1203, configured to perform the following steps when running the computer program:
acquiring a first image component reconstruction value, a first image component adjacent reference value and a second image component adjacent reference value corresponding to a coding block; the first image component reconstruction value represents a reconstruction value of at least one pixel point of the coding block corresponding to a first image component, and the first image component adjacent reference value and the second image component adjacent reference value respectively represent a reference value of the first image component and a reference value of the second image component corresponding to each adjacent pixel point in the adjacent reference pixels of the coding block;
determining a model parameter according to the obtained first image component reconstruction value, the first image component adjacent reference value and the second image component adjacent reference value;
and acquiring a second image component predicted value corresponding to each pixel point in the coding block according to the model parameter.
It will be appreciated that the memory 1202 in this embodiment can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), SyncLink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 1202 of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The processor 1203 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 1203 or by instructions in the form of software. The processor 1203 may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 1202, and the processor 1203 reads the information in the memory 1202 and completes the steps of the above method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the Processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, as another embodiment, the processor 1203 is further configured to execute the steps of the method for predicting a video image component in the technical solution shown in fig. 8 when the computer program is run.
It should be noted that: the technical solutions described in the embodiments of the present application can be arbitrarily combined without conflict.
The above description is only of specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any change or substitution that can be easily conceived by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
In the embodiment of the application, a first image component reconstruction value, a first image component adjacent reference value and a second image component adjacent reference value corresponding to a coding block are obtained; the first image component reconstruction value represents a reconstruction value of at least one pixel point of the coding block corresponding to a first image component, and the first image component adjacent reference value and the second image component adjacent reference value respectively represent a reference value of the first image component and a reference value of the second image component corresponding to each adjacent pixel point in the adjacent reference pixels of the coding block; a model parameter is determined according to the obtained first image component reconstruction value, the first image component adjacent reference value and the second image component adjacent reference value; and a second image component predicted value corresponding to each pixel point in the coding block is acquired according to the model parameter. Therefore, the prediction accuracy of the video image component can be effectively improved, the predicted value of the video image component is closer to the original value of the video image component, and the coding rate is further saved.
Claims (15)
- A method of prediction of a video image component, the method comprising:
acquiring a first image component reconstruction value, a first image component adjacent reference value and a second image component adjacent reference value corresponding to a coding block; the first image component reconstruction value represents a reconstruction value of at least one pixel point of the coding block corresponding to a first image component, and the first image component adjacent reference value and the second image component adjacent reference value respectively represent a reference value of the first image component and a reference value of the second image component corresponding to each adjacent pixel point in the adjacent reference pixels of the coding block;
determining a model parameter according to the obtained first image component reconstruction value, the first image component adjacent reference value and the second image component adjacent reference value;
and acquiring a second image component predicted value corresponding to each pixel point in the coding block according to the model parameter.
- The method of claim 1, wherein said determining model parameters from the acquired first image component reconstruction value, the first image component neighboring reference value, and the second image component neighboring reference value comprises:
acquiring a second image component temporary value according to the first image component reconstruction value, the first image component adjacent reference value and the second image component adjacent reference value; wherein the second image component temporary value represents the temporary value of at least one pixel point of the coding block corresponding to the second image component;
and determining model parameters according to the first image component reconstruction value and the acquired second image component temporary value.
- The method of claim 2, wherein said obtaining a second image component temporary value from said first image component reconstructed value, said first image component neighboring reference value and said second image component neighboring reference value comprises:
calculating the difference value of any one of the adjacent reference values of the first image component and the first image component reconstruction value corresponding to each pixel point of the coding block;
according to the result of the difference calculation, acquiring a matched pixel point of each pixel point from the adjacent reference pixels of the coding block;
and taking the second image component adjacent reference value corresponding to the matched pixel point as a second image component temporary value corresponding to each pixel point.
- The method of claim 3, wherein the obtaining a matching pixel point of each pixel point from the reference pixels adjacent to the coding block according to the result of the difference calculation comprises:
acquiring an adjacent pixel point corresponding to the adjacent reference value of the first image component with the minimum difference according to the result of the difference calculation;
and taking the adjacent pixel point as the matching pixel point of each pixel point.
- The method of claim 3, wherein the obtaining a matching pixel point of each pixel point from the reference pixels adjacent to the coding block according to the result of the difference calculation comprises:
acquiring an adjacent pixel point set corresponding to the adjacent reference value of the first image component with the minimum difference according to the difference calculation result;
and calculating the distance value between each adjacent pixel point in the adjacent pixel point set and each pixel point, and selecting the adjacent pixel point with the minimum distance value as the matching pixel point of each pixel point.
- The method of claim 2, wherein said obtaining a second image component temporary value from said first image component reconstructed value, said first image component neighboring reference value and said second image component neighboring reference value comprises:
calculating the difference value of any one of the adjacent reference values of the first image component and the first image component reconstruction value corresponding to each pixel point of the coding block;
according to the result of the difference calculation, acquiring a first matching pixel point and a second matching pixel point of each pixel point from the adjacent reference pixels of the coding block; the first matching pixel point represents the adjacent pixel point corresponding to the first image component adjacent reference value which is larger than the first image component reconstruction value and has the minimum difference among the first image component adjacent reference values, and the second matching pixel point represents the adjacent pixel point corresponding to the first image component adjacent reference value which is smaller than the first image component reconstruction value and has the minimum difference among the first image component adjacent reference values;
and performing an interpolation operation according to the second image component adjacent reference value corresponding to the first matching pixel point and the second image component adjacent reference value corresponding to the second matching pixel point to obtain a second image component temporary value corresponding to each pixel point.
- The method of claim 2, wherein the model parameters comprise first model parameters and second model parameters, and the determining model parameters from the acquired first image component reconstructed values and the acquired second image component temporary values comprises:
inputting the first image component reconstruction value and the second image component temporary value into a first preset factor calculation model to obtain a first model parameter;
and inputting the first model parameter, the first image component reconstruction value and the second image component temporary value into a second preset factor calculation model to obtain the second model parameter.
- The method of claim 7, wherein the obtaining a second image component prediction value corresponding to each pixel point in the coding block according to the model parameters comprises:
establishing a preset model based on the first model parameter and the second model parameter; the preset model is used for representing a calculation relation between the first image component reconstruction value and the second image component predicted value corresponding to each pixel point in the coding block;
and obtaining a second image component predicted value corresponding to each pixel point in the coding block according to the preset model and the first image component reconstruction value corresponding to each pixel point in the coding block.
- The method of any of claims 1 to 8, wherein the method further comprises:
acquiring a second image component reconstruction value and a third image component adjacent reference value corresponding to the coding block; the second image component reconstruction value represents a second image component reconstruction value corresponding to at least one pixel point of the coding block, and the third image component adjacent reference value represents a third image component reference value corresponding to each adjacent pixel point in the adjacent reference pixels of the coding block;
determining sub-model parameters according to the obtained second image component reconstruction value, the second image component adjacent reference value and the third image component adjacent reference value;
and acquiring a third image component predicted value corresponding to each pixel point in the coding block according to the sub-model parameters.
- The method of claim 9, wherein determining sub-model parameters from the obtained second image component reconstructed value, the second image component neighboring reference value, and the third image component neighboring reference value comprises:
acquiring a third image component temporary value according to the second image component reconstruction value, the second image component adjacent reference value and the third image component adjacent reference value; the temporary value of the third image component is obtained based on the adjacent reference value of the third image component corresponding to the matched pixel point of at least one pixel point of the coding block in the adjacent reference pixels of the coding block;
and determining sub-model parameters according to the second image component reconstruction value and the acquired third image component temporary value.
- The method of any one of claims 1 to 10, wherein the method further comprises:
obtaining at least one threshold value according to a first image component reconstruction value corresponding to at least one pixel point of the coding block;
grouping according to the comparison result of the first image component reconstruction value and the at least one threshold value to obtain at least two groups of first image component reconstruction values and second image component temporary values;
and determining model parameters according to each group of the at least two groups of first image component reconstruction values and second image component temporary values to obtain at least two groups of model parameters.
- The method according to claim 11, wherein said obtaining a second image component prediction value corresponding to each pixel point in the coded block according to the model parameters comprises:
establishing at least two preset models based on the obtained at least two groups of model parameters; the at least two preset models and the at least two groups of model parameters have corresponding relations;
according to the comparison result of the first image component reconstruction value and the at least one threshold value, selecting a preset model corresponding to each pixel point in the coding block from the at least two preset models;
and acquiring a second image component predicted value corresponding to each pixel point in the coding block according to the preset model corresponding to each pixel point in the coding block and the first image component reconstruction value.
- A prediction device of a video image component, the prediction device of the video image component comprising: an acquisition section, a determination section, and a prediction section; wherein
the acquisition part is configured to acquire a first image component reconstruction value, a first image component adjacent reference value and a second image component adjacent reference value corresponding to the coding block; the first image component reconstruction value represents a reconstruction value of at least one pixel point of the coding block corresponding to a first image component, and the first image component adjacent reference value and the second image component adjacent reference value respectively represent a reference value of the first image component and a reference value of the second image component corresponding to each adjacent pixel point in the adjacent reference pixels of the coding block;
the determining part is configured to determine a model parameter according to the acquired first image component reconstruction value, the first image component adjacent reference value and the second image component adjacent reference value;
and the prediction part is configured to acquire a second image component predicted value corresponding to each pixel point of the coding block according to the model parameter.
- A prediction apparatus for a video image component, wherein the prediction apparatus for the video image component comprises: a memory and a processor;
the memory for storing a computer program operable on the processor;
the processor, when executing the computer program, is adapted to perform the steps of the method of any of claims 1 to 12.
- A computer storage medium, wherein the computer storage medium stores a prediction program for a video image component, which when executed by at least one processor implements the steps of the method of any one of claims 1 to 12.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2018/107109 WO2020056767A1 (en) | 2018-09-21 | 2018-09-21 | Video image component prediction method and apparatus, and computer storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112313950A | 2021-02-02 |
CN112313950B CN112313950B (en) | 2023-06-02 |
Family
ID=69888083
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201880094931.9A Active CN112313950B (en) | 2018-09-21 | 2018-09-21 | Video image component prediction method, device and computer storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112313950B (en) |
WO (1) | WO2020056767A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112784830B (en) * | 2021-01-28 | 2024-08-27 | 联想(北京)有限公司 | Character recognition method and device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107079166A (en) * | 2014-10-28 | 2017-08-18 | 联发科技(新加坡)私人有限公司 | The method that guided crossover component for Video coding is predicted |
CN107211121B (en) * | 2015-01-22 | 2020-10-23 | 联发科技(新加坡)私人有限公司 | Video encoding method and video decoding method |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130114706A1 (en) * | 2011-10-25 | 2013-05-09 | Canon Kabushiki Kaisha | Method and apparatus for processing components of an image |
CN103379321A (en) * | 2012-04-16 | 2013-10-30 | 华为技术有限公司 | Prediction method and prediction device for video image component |
CN103533374A (en) * | 2012-07-06 | 2014-01-22 | 乐金电子(中国)研究开发中心有限公司 | Method and device for video encoding and decoding |
JP6005572B2 (en) * | 2013-03-28 | 2016-10-12 | Kddi株式会社 | Moving picture encoding apparatus, moving picture decoding apparatus, moving picture encoding method, moving picture decoding method, and program |
CN106688237A (en) * | 2014-09-30 | 2017-05-17 | 凯迪迪爱通信技术有限公司 | Moving image coding device, moving image decoding device, moving image compression and transmission system, moving image coding method, moving image decoding method, and program |
CN105306944A (en) * | 2015-11-30 | 2016-02-03 | 哈尔滨工业大学 | Chrominance component prediction method in hybrid video coding standard |
US20180077426A1 (en) * | 2016-09-15 | 2018-03-15 | Qualcomm Incorporated | Linear model chroma intra prediction for video coding |
CN107580222A (en) * | 2017-08-01 | 2018-01-12 | 北京交通大学 | A kind of image or method for video coding based on Linear Model for Prediction |
Non-Patent Citations (2)
Title |
---|
JIANLE CHEN: "Algorithm Description of Joint Exploration Test Model 5", Joint Video Exploration Team (JVET) *
ZHANG TAO: "Research on Efficient Intra-frame Coding Technology in Video Compression", China Doctoral Dissertations Full-text Database *
Also Published As
Publication number | Publication date |
---|---|
WO2020056767A1 (en) | 2020-03-26 |
CN112313950B (en) | 2023-06-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108848376B (en) | Video encoding method, video decoding method, video encoding device, video decoding device and computer equipment | |
WO2021004152A1 (en) | Image component prediction method, encoder, decoder, and storage medium | |
CN113784128B (en) | Image prediction method, encoder, decoder, and storage medium | |
EP2536147A1 (en) | Predictive coding method for motion vector, predictive decoding method for motion vector, video coding device, video decoding device, and programs therefor | |
CN112534817A (en) | Method and apparatus for predicting video image component, and computer storage medium | |
CN113439440A (en) | Image component prediction method, encoder, decoder, and storage medium | |
CN113068025B (en) | Decoding prediction method, device and computer storage medium | |
CN112313950B (en) | Video image component prediction method, device and computer storage medium | |
CN113766233B (en) | Image prediction method, encoder, decoder, and storage medium | |
CN113395520B (en) | Decoding prediction method, device and computer storage medium | |
US11843724B2 (en) | Intra prediction method and apparatus, and computer-readable storage medium | |
WO2024207136A1 (en) | Encoding/decoding method, code stream, encoder, decoder and storage medium | |
CN113840144B (en) | Image component prediction method, encoder, decoder, and computer storage medium | |
RU2805048C2 (en) | Image prediction method, encoder and decoder | |
CN113261279A (en) | Method for determining prediction value, encoder, decoder, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||