CN117915100A - Chroma component prediction method, device and equipment

Chroma component prediction method, device and equipment

Info

Publication number
CN117915100A
Authority
CN
China
Prior art keywords
image block
target image
value
prediction
chroma
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211255923.2A
Other languages
Chinese (zh)
Inventor
Zhou Chuan
Lyu Zhuoyi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to PCT/CN2023/123385 priority Critical patent/WO2024078416A1/en
Publication of CN117915100A publication Critical patent/CN117915100A/en
Pending legal-status Critical Current


Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application discloses a chrominance component prediction method, apparatus, and device, which belong to the technical field of encoding and decoding. The chrominance component prediction method provided by the embodiments of the application includes: determining template pixels corresponding to a target image block based on a target identifier corresponding to the target image block, where the template pixels are at least part of the pixels in a pixel area adjacent to the target image block; fitting a first chrominance reconstruction value, a first chrominance prediction value, and a first luminance value corresponding to the template pixels to obtain model parameters, where the model parameters are parameters of a chrominance component prediction model corresponding to the target image block, and the chrominance component prediction model is determined based on the target identifier; and determining a target chrominance prediction value corresponding to the target image block based on the model parameters and a second chrominance prediction value and a second luminance value corresponding to the target image block.

Description

Chroma component prediction method, device and equipment
Cross Reference to Related Applications
The present application claims priority from Chinese patent application No. 202211248936.7, filed in China on October 12, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The application belongs to the technical field of coding and decoding, and particularly relates to a method, a device and equipment for predicting a chrominance component.
Background
In the related art, an intra-frame prediction mode and an inter-component linear prediction mode (Cross-Component Linear prediction Mode, CCLM) are generally used to determine the chrominance component prediction value corresponding to an image block. Specifically, one chroma prediction value corresponding to the image block may be determined based on the intra-frame prediction mode, another chroma prediction value corresponding to the image block may be determined based on the inter-component linear prediction mode, and chroma component fusion may be performed on the two chroma prediction values using a preset weight combination to obtain the final chroma prediction value.
In the above process, the two chroma prediction values are fused only through a preset weight combination to obtain the final chroma prediction value; however, the number of available weight combinations is limited, which results in lower accuracy of the final chroma prediction value.
Disclosure of Invention
The embodiments of the application provide a chrominance component prediction method, apparatus, and device, which can solve the problem of low accuracy of the chrominance prediction value in the existing scheme.
In a first aspect, a chrominance component prediction method is provided, including:
determining template pixels corresponding to a target image block based on a target identifier corresponding to the target image block, where the template pixels are at least part of the pixels in a pixel area adjacent to the target image block;
fitting a first chrominance reconstruction value, a first chrominance prediction value, and a first luminance value corresponding to the template pixels to obtain model parameters, where the model parameters are parameters of a chrominance component prediction model corresponding to the target image block, and the chrominance component prediction model is determined based on the target identifier;
and determining a target chrominance prediction value corresponding to the target image block based on the model parameters and a second chrominance prediction value and a second luminance value corresponding to the target image block.
In a second aspect, there is provided a chrominance component prediction apparatus comprising:
a first determining module, used for determining template pixels corresponding to a target image block based on a target identifier corresponding to the target image block, where the template pixels are at least part of the pixels in a pixel area adjacent to the target image block;
a fitting module, used for fitting a first chrominance reconstruction value, a first chrominance prediction value, and a first luminance value corresponding to the template pixels to obtain model parameters, where the model parameters are parameters of a chrominance component prediction model corresponding to the target image block, and the chrominance component prediction model is determined based on the target identifier;
and a second determining module, used for determining a target chrominance prediction value corresponding to the target image block based on the model parameters and a second chrominance prediction value and a second luminance value corresponding to the target image block.
In a third aspect, there is provided a terminal comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, there is provided a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, a chip is provided, the chip comprising a processor and a communication interface, the communication interface being coupled to the processor, the processor being configured to execute programs or instructions for implementing the method according to the first aspect.
In a sixth aspect, there is provided a computer program/program product stored in a storage medium, the computer program/program product being executed by at least one processor to carry out the steps of the method according to the first aspect.
In the embodiment of the application, the template pixels corresponding to a target image block are determined based on the target identifier corresponding to the target image block, where the template pixels are at least part of the pixels in a pixel area adjacent to the target image block; a first chrominance reconstruction value, a first chrominance prediction value, and a first luminance value corresponding to the template pixels are fitted to obtain model parameters, where the model parameters are parameters of a chrominance component prediction model corresponding to the target image block, and the chrominance component prediction model is determined based on the target identifier; and a target chrominance prediction value corresponding to the target image block is determined based on the model parameters and a second chrominance prediction value and a second luminance value corresponding to the target image block.
In the related art, chroma component fusion is performed on two chroma prediction values corresponding to an image block through a preset weight combination, and the limited number of weight combinations reduces the precision of the chroma prediction value. In the embodiment of the application, the first chrominance reconstruction value, the first chrominance prediction value, and the first luminance value corresponding to the template pixels are fitted: the linear relationship between the first chrominance prediction value and the first luminance value is adjusted so that they are fitted towards the first chrominance reconstruction value, the model parameters are thereby obtained, and the target chrominance prediction value is determined based on the model parameters. In this determination process, the target chrominance prediction value is strongly correlated with the fitting result of the first chrominance reconstruction value, the first chrominance prediction value, and the first luminance value, and is not constrained by a limited number of weight combinations, so the precision of the chrominance prediction value is improved.
Drawings
FIG. 1 is a schematic diagram of a template pixel and image block in the related art;
FIG. 2 is a flowchart of a chrominance component prediction method according to an embodiment of the present application;
FIG. 3 is one of the schematic diagrams of a template pixel and image block provided by an embodiment of the present application;
FIG. 4 is a second schematic diagram of a template pixel and image block provided by an embodiment of the present application;
Fig. 5 is a block diagram of a chrominance component prediction apparatus provided in an embodiment of the present application;
fig. 6 is a block diagram of a communication device according to an embodiment of the present application;
fig. 7 is a schematic diagram of a hardware structure of a terminal according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are derived by a person skilled in the art based on the embodiments of the application, fall within the scope of protection of the application.
The terms "first", "second", and the like in the description and the claims are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application are capable of operating in sequences other than those illustrated or described herein. Moreover, objects distinguished by "first" and "second" are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and the claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
In the related art, one chroma prediction value corresponding to an image block is determined based on an intra-frame prediction mode, another chroma prediction value corresponding to the image block is determined based on an inter-component linear prediction mode, and chroma component fusion is performed on the two chroma prediction values by using a preset weight combination to obtain a final chroma prediction value.
Specifically, the final chroma prediction value of an image block can be determined by the following formula (1):
pred=(w0*pred0+w1*pred1+(1<<(shift-1)))>>shift (1)
where pred denotes the final chroma prediction value, pred0 denotes the chroma prediction value determined based on the intra-frame prediction mode, pred1 denotes the chroma prediction value determined based on the inter-component linear prediction mode, and w0 and w1 denote the weight combination; for example, (w0, w1) may be {1,3}, {3,1}, or {2,2}.
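As an illustrative sketch (not normative codec code, and the function name is an assumption), equation (1) maps directly to integer arithmetic; the default shift of 2 matches the example weight combinations, which sum to 4:

```python
def fuse_chroma(pred0: int, pred1: int, w0: int, w1: int, shift: int = 2) -> int:
    """Equation (1): weighted fusion of two chroma predictions.

    The (1 << (shift - 1)) term rounds to nearest before the right shift.
    """
    return (w0 * pred0 + w1 * pred1 + (1 << (shift - 1))) >> shift
```

With (w0, w1) = {2,2} this is a rounded average of the two predictions; with {1,3} or {3,1} one prediction dominates.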
Optionally, the inter-component linear prediction modes include an inter-component linear single model prediction mode and an inter-component linear multi-model prediction mode.
For inter-component linear single model prediction modes, the chroma prediction value may be determined by the following equation (2).
predC(i,j) = α·rec′L(i,j) + β (2)
where predC(i,j) represents the chrominance prediction value determined based on the inter-component linear single model prediction mode, rec′L(i,j) represents the luminance value of the image block, and α and β represent the model parameters, which may be obtained by performing a minimum linear mean square error calculation on the reconstructed luminance values and reconstructed chrominance values of the template pixels.
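The minimum linear mean-square-error derivation of α and β can be sketched as a least-squares fit over the template pixels' reconstructed values; this is an illustrative floating-point approximation (names are assumptions), not the normative integer derivation a codec would use:

```python
import numpy as np

def fit_cclm_params(rec_luma, rec_chroma):
    """Fit alpha and beta of equation (2) by least squares:
    minimize sum((alpha * rec_luma + beta - rec_chroma)^2) over template pixels."""
    luma = np.asarray(rec_luma, dtype=np.float64)
    chroma = np.asarray(rec_chroma, dtype=np.float64)
    A = np.stack([luma, np.ones_like(luma)], axis=1)  # design matrix [luma | 1]
    (alpha, beta), *_ = np.linalg.lstsq(A, chroma, rcond=None)
    return float(alpha), float(beta)
```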
As shown in fig. 1, the template pixels include a row of pixels located above and adjacent to the image block, and a column of pixels located to the left of and adjacent to the image block. Optionally, the template pixels comprise only the row of pixels located above and adjacent to the image block, or only the column of pixels located to the left of and adjacent to the image block.
For the inter-component linear multi-model prediction mode, the luminance values of the image block can be divided into two classes according to the average luminance value of the image block, a separate set of parameters α and β is derived for each class, and the chrominance prediction value is calculated by equation (2).
However, in the above process, chroma component fusion is performed on the chroma prediction values only through a preset weight combination, and the number of weight combinations is limited, which results in lower accuracy of the final chroma prediction value.
In order to solve the above technical problems, an embodiment of the present application provides a chrominance component prediction method. The chrominance component prediction method provided by the embodiment of the application is described in detail below through some embodiments and application scenarios thereof with reference to the accompanying drawings.
Referring to fig. 2, fig. 2 is a flowchart of a chrominance component prediction method according to an embodiment of the present application. The chroma component prediction method provided in the present embodiment includes the following steps:
S201, determining template pixels corresponding to a target image block based on a target identifier corresponding to the target image block.
In this step, a target identifier corresponding to a target image block may be obtained through a code stream, where the target identifier is used to characterize a template pixel corresponding to the target image block, so as to determine the template pixel corresponding to the target image block. Wherein the template pixels are at least part of the pixels in the pixel area adjacent to the target image block. It should be noted that, the intra-chroma prediction mode corresponding to the target image block may also be obtained through the code stream, and then the second chroma prediction value corresponding to the target image block may be determined based on the intra-chroma prediction mode.
Optionally, the template pixels are adjacent to the target image block and are located above and/or to the left of the target image block; or alternatively
the template pixels are located above and/or to the left within the interior of the target image block.
As shown in fig. 3, in an alternative embodiment the template pixels are adjacent to the target image block and are located above and/or to the left of the target image block. As shown in fig. 4, in another alternative embodiment the template pixels are located above and/or to the left within the interior of the target image block.
Optionally, the target identifier is further used for characterizing a chrominance component prediction model, and the chrominance component prediction model corresponding to the target image block can be determined through the target identifier. It should be understood that the above-described chrominance component prediction models may be classified into linear models and nonlinear models, or into single models and multiple models, that is, chrominance component prediction models include, but are not limited to, inter-component linear prediction single models, inter-component linear prediction multiple models, inter-component nonlinear prediction single models, and inter-component nonlinear prediction multiple models.
S202, fitting the first chrominance reconstruction value, the first chrominance prediction value and the first luminance value corresponding to the template pixels to obtain model parameters.
In this step, an alternative embodiment is: the model parameters are obtained by fitting the first chrominance reconstruction value, the first chrominance prediction value, and the first luminance value using a matrix decomposition method including, but not limited to, LDL matrix decomposition, QR matrix decomposition, and LU matrix decomposition. It should be understood that the template pixel is a reconstructed pixel, and the first chrominance reconstruction value corresponding to the template pixel may be directly obtained.
Another alternative embodiment is: and carrying out minimum linear mean square error calculation on the first chrominance reconstruction value, the first chrominance predicted value and the first brightness value to obtain model parameters.
It should be noted that, the above-mentioned process of fitting the first chrominance reconstruction value, the first chrominance prediction value and the first luminance value is to adjust a linear relationship between the first chrominance prediction value and the first luminance value so that the first chrominance prediction value and the first luminance value are fitted towards the first chrominance reconstruction value, that is, the first chrominance prediction value and the first luminance value are made to approach the first chrominance reconstruction value by adjusting the first chrominance prediction value and the first luminance value.
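The fitting described above can be sketched with ordinary least squares in place of an explicit LDL/QR/LU solve (for a well-conditioned system the solutions coincide); all names are illustrative and the floating-point arithmetic is an assumption:

```python
import numpy as np

def fit_model_params(rec_c, pred_c, luma, mid_value):
    """Fit (a0, a1, a2) so that a0*luma + a1*pred_c + a2*mid_value
    approximates the chroma reconstruction rec_c over the template pixels,
    matching the three-term structure of formula (3)."""
    luma = np.asarray(luma, dtype=np.float64)
    pred_c = np.asarray(pred_c, dtype=np.float64)
    rec_c = np.asarray(rec_c, dtype=np.float64)
    # Design matrix: one column per model term, including the constant midValue.
    A = np.stack([luma, pred_c, np.full_like(luma, mid_value)], axis=1)
    params, *_ = np.linalg.lstsq(A, rec_c, rcond=None)
    return params  # [a0, a1, a2]
```

The same pattern extends to the nonlinear model by adding a squared-luma column to the design matrix.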
And S203, determining a target chroma prediction value corresponding to the target image block based on the model parameter, the second chroma prediction value corresponding to the target image block and the second brightness value.
In this step, a second chroma prediction value corresponding to the target image block may be determined based on the chroma intra prediction mode characterized by the target identifier; and further determining a target chroma prediction value corresponding to the target image block based on the model parameter, the second chroma prediction value and a second luminance value corresponding to the target image block. For specific embodiments, reference is made to the following examples.
In the embodiment of the application, the template pixels corresponding to the target image block are determined based on the target identifier corresponding to the target image block, where the template pixels are at least part of the pixels in a pixel area adjacent to the target image block; the first chrominance reconstruction value, the first chrominance prediction value, and the first luminance value corresponding to the template pixels are fitted to obtain the model parameters, where the model parameters are parameters of the chrominance component prediction model corresponding to the target image block, and the chrominance component prediction model is determined based on the target identifier; and the target chrominance prediction value corresponding to the target image block is determined based on the model parameters and the second chrominance prediction value and second luminance value corresponding to the target image block. In the related art, chroma component fusion is performed on two chroma prediction values corresponding to an image block through a preset weight combination, and the limited number of weight combinations reduces the precision of the chroma prediction value. In the embodiment of the application, by contrast, the linear relationship between the first chrominance prediction value and the first luminance value is adjusted so that they are fitted towards the first chrominance reconstruction value; the model parameters are thereby obtained, and the target chrominance prediction value is determined based on the model parameters.
During the determination of the target chrominance prediction value, the target chrominance prediction value is strongly correlated with the fitting result of the first chrominance reconstruction value, the first chrominance prediction value, and the first luminance value, and is not constrained by a limited number of weight combinations, so the precision of the chrominance prediction value is improved.
Optionally, the determining, based on the model parameter, the second chroma prediction value and the second luma value corresponding to the target image block, the target chroma prediction value corresponding to the target image block includes:
and calculating the model parameters, a second chroma predicted value corresponding to the target image block and a second brightness value by using the chroma component prediction model to obtain the target chroma predicted value corresponding to the target image block.
As described above, the target identification may characterize a chrominance component prediction model, which includes an inter-component linear prediction model or an inter-component nonlinear prediction model.
In this embodiment, the model parameter, the second chroma prediction value corresponding to the target image block, and the second luminance value corresponding to the target image block may be used as inputs of a chroma component prediction model, and the chroma component prediction model calculates the model parameter, the second chroma prediction value, and the second luminance value, and outputs the target chroma prediction value corresponding to the target image block.
An alternative embodiment is that the chrominance component prediction model is an inter-component linear prediction single model, and in this embodiment, the target chrominance prediction value can be determined through the formula (3).
predC(i,j)=α0·rec′L(i,j)+α1·pred′C(i,j)+α2·midValue (3)
where predC(i,j) represents the target chroma prediction value, α0, α1 and α2 represent the model parameters, rec′L(i,j) represents the second luminance value, pred′C(i,j) represents the second chrominance prediction value, and midValue represents the median value corresponding to the target image block.
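Applying formula (3) to one sample is then a direct evaluation; this floating-point sketch ignores the fixed-point scaling a real codec would apply to the fitted parameters (the function name is illustrative):

```python
def predict_chroma_linear(a0, a1, a2, luma, pred_c, mid_value):
    """Formula (3): predC = a0*luma + a1*pred_c + a2*midValue,
    evaluated per sample of the target image block."""
    return a0 * luma + a1 * pred_c + a2 * mid_value
```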
Another alternative embodiment is that the chrominance component prediction model is an inter-component nonlinear prediction single model; in this embodiment, the target chrominance prediction value may be determined by formula (4).
predC(i,j) = α0·(((rec′L(i,j))^2 + midValue) >> bitDepth) + α1·rec′L(i,j) + α2·pred′C(i,j) + α3·midValue (4)
where predC(i,j) represents the target chroma prediction value, α0, α1, α2 and α3 represent the model parameters, rec′L(i,j) represents the second luminance value, pred′C(i,j) represents the second chrominance prediction value, bitDepth represents the video bit depth corresponding to the target image block, and midValue represents the median value corresponding to the target image block.
The median value is associated with the video bit depth; optionally, when the video bit depth is 10, the median value is 512.
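A sketch of the nonlinear evaluation in formula (4); the shift by bitDepth keeps the squared-luma term in the same numeric range as the linear terms, and midValue = 1 << (bitDepth - 1) (512 at bit depth 10, matching the value stated above). Parameter scaling is again left in floating point and names are illustrative:

```python
def predict_chroma_nonlinear(a0, a1, a2, a3, luma, pred_c, bit_depth=10):
    """Formula (4): nonlinear prediction with a squared-luma term."""
    mid_value = 1 << (bit_depth - 1)             # 512 for 10-bit video
    sq = (luma * luma + mid_value) >> bit_depth  # ((rec')^2 + midValue) >> bitDepth
    return a0 * sq + a1 * luma + a2 * pred_c + a3 * mid_value
```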
For the inter-component linear prediction multi-model, the second luminance values corresponding to the target image block can be divided into two classes according to the average luminance value corresponding to the target image block, and the target chrominance prediction value is calculated by formula (3) for each class.
For the inter-component nonlinear prediction multi-model, the second luminance values corresponding to the target image block can be divided into two classes according to the average luminance value corresponding to the target image block, and the target chrominance prediction value is calculated by formula (4) for each class.
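The two-class split used by the multi-model modes can be sketched as follows: samples are partitioned by comparison with the average luma, and each class then receives its own fitted parameter set and its own evaluation of formula (3) or (4). This is a hypothetical sketch; the text specifies only the mean comparison, not tie-breaking or any further rule:

```python
def split_by_mean_luma(luma_values):
    """Partition sample indices into two classes by comparing each
    luma value with the average, as described for the multi-model modes."""
    mean = sum(luma_values) / len(luma_values)
    low = [i for i, v in enumerate(luma_values) if v <= mean]
    high = [i for i, v in enumerate(luma_values) if v > mean]
    return low, high
```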
Example 1:
If the target identifier includes 0, chroma component fusion is not performed on the target image block.
If the target identifier includes 1, the template pixels are located above and to the left of the target image block, and chroma component fusion is performed using the inter-component linear prediction multi-model of the related art.
If the target identifier includes 2, the template pixels are located above and to the left of the target image block, and chroma component fusion is performed using the inter-component linear prediction single model provided by this embodiment.
If the target identifier includes 3, the template pixels are located above and to the left of the target image block, and chroma component fusion is performed using the inter-component linear prediction multi-model provided by this embodiment.
Example 2:
If the target identifier includes 0, chroma component fusion is not performed on the target image block.
If the target identifier includes 1, the template pixels are located above and to the left of the target image block, or above and to the left within the interior of the target image block, and chroma component fusion is performed using the inter-component linear prediction multi-model provided by the embodiment of the application.
If the target identifier includes 2, the template pixels are located above and to the left of the target image block, or above and to the left within the interior of the target image block, and chroma component fusion is performed using the inter-component nonlinear prediction multi-model provided by the embodiment of the application.
Example 3:
If the target identifier includes 0, chroma component fusion is not performed on the target image block.
If the target identifier includes 1, the template pixels are located above and to the left of the target image block, or above and to the left within the interior of the target image block, and chroma component fusion is performed using the inter-component linear prediction multi-model provided by the embodiment of the application.
If the target identifier includes 2, the template pixels are located above and to the left of the target image block, or above and to the left within the interior of the target image block, and chroma component fusion is performed using the inter-component linear prediction single model provided by the embodiment of the application.
Example 4:
If the target identifier includes 0, chroma component fusion is not performed on the target image block.
If the target identifier includes 1, the chroma component prediction model is a linear model; if the target identifier does not include 1, the chroma component prediction model is a nonlinear model.
If the target identifier includes 2, the template pixels are located above and to the left of the target image block, or above and to the left within the interior of the target image block, and chroma component fusion is performed using the inter-component prediction multi-model provided by the embodiment of the application.
If the target identifier includes 3, the template pixels are located above and to the left of the target image block, or above and to the left within the interior of the target image block, and chroma component fusion is performed using the inter-component prediction single model provided by the embodiment of the application.
If the target identifier includes 4, the template pixels are located above the target image block, or above within the interior of the target image block, and chroma component fusion is performed using the inter-component prediction single model provided by the embodiment of the application.
If the target identifier includes 5, the template pixels are located to the left of the target image block, or to the left within the interior of the target image block, and chroma component fusion is performed using the inter-component prediction single model provided by the embodiment of the application.
If the target identifier includes 6, the template pixels are located above the target image block, or above within the interior of the target image block, and chroma component fusion is performed using the inter-component prediction multi-model provided by the embodiment of the application.
If the target identifier includes 7, the template pixels are located to the left of the target image block, or to the left within the interior of the target image block, and chroma component fusion is performed using the inter-component prediction multi-model provided by the embodiment of the application.
For example, if the target identifier includes 1 and 7, the template pixels are located to the left of the target image block, or to the left within the interior of the target image block, and chroma component fusion is performed using the inter-component linear prediction multi-model provided by the embodiment of the application.
For example, if the target identifier is 5, the template pixels are located to the left of the target image block, or to the left within the interior of the target image block, and chroma component fusion is performed using the inter-component nonlinear prediction single model provided by the embodiment of the application.
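Under Example 4's semantics, decoding the target identifier could look like the following sketch; the mapping tables and function name are purely illustrative reconstructions of the example, not a normative syntax:

```python
# Identifier value -> template-pixel position (per Example 4).
TEMPLATE_POSITION = {
    2: "above and left", 3: "above and left",
    4: "above", 6: "above",
    5: "left", 7: "left",
}
MULTI_MODEL_VALUES = {2, 6, 7}  # values 3, 4, 5 select the single model

def parse_target_identifier(values):
    """Interpret a set of identifier values according to Example 4."""
    if 0 in values:
        return None  # no chroma component fusion for this block
    is_linear = 1 in values  # 1 present -> linear model; absent -> nonlinear
    position_value = next(v for v in sorted(values) if v >= 2)
    return {
        "linear": is_linear,
        "template": TEMPLATE_POSITION[position_value],
        "multi_model": position_value in MULTI_MODEL_VALUES,
    }
```

This reproduces the two worked examples above: {1, 7} yields a linear multi-model with left template pixels, and {5} yields a nonlinear single model with left template pixels.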
Optionally, before the fitting of the first chrominance reconstruction value, the first chrominance prediction value, and the first luminance value corresponding to the template pixels, the method includes:
performing intra-frame prediction on the template pixels according to the chroma intra-frame prediction mode corresponding to the template pixels, and determining the first chroma prediction value; and
performing intra-frame prediction on the template pixels according to the luma intra-frame prediction mode corresponding to the template pixels, and determining the first luminance value; or determining the first luminance value according to the luminance reconstruction value of the template pixels.
In this embodiment, optionally, the first reference pixel may be determined by performing intra-frame prediction on the template pixel based on the chroma intra-frame prediction mode corresponding to the template pixel, and the chroma value of the first reference pixel may be determined as the first chroma prediction value corresponding to the template pixel.
An alternative embodiment is: the second reference pixel may be determined by intra-prediction of the template pixel based on a luminance intra-prediction mode corresponding to the template pixel, and the first luminance value corresponding to the template pixel may be determined based on a luminance value of the second reference pixel.
Another alternative embodiment is: and obtaining a brightness reconstruction value of the template pixel, and determining a first brightness value corresponding to the template pixel according to the brightness reconstruction value.
Optionally, the determining of the first luminance value includes:
And determining whether to downsample the first brightness value according to the video sampling format corresponding to the target image block. For example, when the video sampling format is YUV444, downsampling is not required; when the video sampling format is YUV420, downsampling is required.
And under the condition that downsampling is needed, the first brightness value corresponding to the target image block is the brightness value after downsampling.
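A minimal sketch of this decision follows. That YUV444 needs no downsampling and YUV420 does comes from the text; the 2x2 averaging filter used to bring the luminance plane to chroma resolution is a common choice but an assumption here, as the embodiment does not mandate a specific filter.

```python
# Decide whether the luminance samples must be downsampled to chroma
# resolution, based on the video sampling format. The averaging filter
# is illustrative; any conforming downsampling filter could be used.

def maybe_downsample_luma(luma, fmt: str):
    """Return luma samples at chroma resolution for the given sampling format."""
    if fmt == "YUV444":
        return luma  # chroma and luma planes have the same resolution
    if fmt == "YUV420":
        h, w = len(luma), len(luma[0])
        # Average each 2x2 neighbourhood of luma samples (with rounding).
        return [[(luma[2 * y][2 * x] + luma[2 * y][2 * x + 1]
                  + luma[2 * y + 1][2 * x] + luma[2 * y + 1][2 * x + 1] + 2) >> 2
                 for x in range(w // 2)]
                for y in range(h // 2)]
    raise ValueError(f"unsupported sampling format: {fmt}")
```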
Optionally, before determining the target chroma prediction value corresponding to the target image block based on the model parameter and the second chroma prediction value and the second luminance value corresponding to the target image block, the method includes:
Performing intra-frame prediction on the target image block according to a chroma intra-frame prediction mode corresponding to the target image block, and determining the second chroma prediction value;
Performing intra-frame prediction on the target image block according to a brightness intra-frame prediction mode corresponding to the target image block, and determining the second brightness value; or determining the second luminance value according to the luminance reconstruction value of the target image block.
In this embodiment, alternatively, the chroma intra prediction mode and the luma intra prediction mode corresponding to the target image block may be obtained through a code stream. And carrying out intra prediction on the target image block based on a chroma intra prediction mode corresponding to the target image block to determine a third reference pixel, and determining the chroma value of the third reference pixel as a second chroma prediction value corresponding to the target image block.
An alternative embodiment is: the fourth reference pixel may be determined by intra-prediction of the target image block based on a luminance intra-prediction mode corresponding to the target image block, and the second luminance value corresponding to the target image block may be determined based on a luminance value of the fourth reference pixel.
Another alternative embodiment is: and obtaining a brightness reconstruction value of the target image block, and determining a second brightness value corresponding to the target image block according to the brightness reconstruction value.
Optionally, the determining of the second luminance value includes:
And determining whether to downsample the second brightness value according to the video sampling format corresponding to the target image block. For example, when the video sampling format is YUV444, downsampling is not required; when the video sampling format is YUV420, downsampling is required.
And in the case that downsampling is required, the second brightness value corresponding to the target image block is the brightness value after downsampling.
In the chrominance component prediction method provided by the embodiment of the application, the execution subject may be a chrominance component prediction apparatus. In the embodiment of the present application, the chrominance component prediction apparatus provided in the embodiment of the present application is described by taking, as an example, the chrominance component prediction apparatus performing the chrominance component prediction method.
As shown in fig. 5, an embodiment of the present application further provides a chrominance component prediction apparatus 500, including:
A first determining module 501, configured to determine, based on a target identifier corresponding to a target image block, a template pixel corresponding to the target image block;
The fitting module 502 is configured to fit a first chrominance reconstruction value, a first chrominance prediction value and a first luminance value corresponding to the template pixel to obtain a model parameter;
A second determining module 503, configured to determine a target chroma prediction value corresponding to the target image block based on the model parameter, and a second chroma prediction value and a second luminance value corresponding to the target image block.
Optionally, the second determining module 503 is specifically configured to:
and calculating the model parameters, a second chroma predicted value corresponding to the target image block and a second brightness value by using the chroma component prediction model to obtain the target chroma predicted value corresponding to the target image block.
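The calculation performed by the second determining module can be sketched as follows. The passage does not fix the exact model form; an inter-component linear model of the assumed form C_target = a * L + b * C_pred + c is used here purely for illustration (the nonlinear variant mentioned in claim 2 would use a different combining function).

```python
# Hedged sketch: apply fitted model parameters (a, b, c) to the second
# luminance value and second chroma prediction value of the target image
# block. The linear form below is an assumption, not the normative model.

def apply_linear_model(a: float, b: float, c: float,
                       luma2: float, chroma_pred2: float) -> float:
    """Combine the second luminance and second chroma prediction values
    into the target chroma prediction value."""
    return a * luma2 + b * chroma_pred2 + c
```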
Optionally, the chrominance component prediction apparatus 500 further includes:
A third determining module, configured to perform intra-frame prediction on the template pixel according to a chroma intra-frame prediction mode corresponding to the template pixel, and determine the first chroma prediction value;
A fourth determining module, configured to perform intra-frame prediction on the template pixel according to a luminance intra-frame prediction mode corresponding to the template pixel, and determine the first luminance value; or determining the first brightness value according to the brightness reconstruction value of the template pixel.
Optionally, the chrominance component prediction apparatus 500 further includes:
a fifth determining module, configured to perform intra-frame prediction on the target image block according to a chroma intra-frame prediction mode corresponding to the target image block, and determine the second chroma prediction value;
A sixth determining module, configured to perform intra-frame prediction on the target image block according to a luminance intra-frame prediction mode corresponding to the target image block, and determine the second luminance value; or determining the second luminance value according to the luminance reconstruction value of the target image block.
Optionally, the template pixels are adjacent to the target image block and located above and/or to the left of the target image block; or alternatively
The template pixels are located above and/or to the left of the interior of the target image block.
In the related art, chroma component fusion is performed on two chroma prediction values corresponding to an image block through a preset weight combination, and the weight combinations are limited in number, which reduces the precision of the chroma component prediction value. In the embodiment of the application, the first chrominance reconstruction value, the first chrominance prediction value and the first luminance value corresponding to the template pixels are fitted, and the linear relation between the first chrominance prediction value and the first luminance value is adjusted so that they are fitted toward the first chrominance reconstruction value, thereby obtaining the model parameters, and the target chroma prediction value is determined based on the model parameters. In the determining process of the target chroma prediction value, the target chroma prediction value is strongly correlated with the fitting result of the first chrominance reconstruction value, the first chrominance prediction value and the first luminance value, and is not limited by the number of weight combinations, so that the precision of the chroma component prediction value is improved.
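The fitting described above can be illustrated as a least-squares problem over the template pixels: the (first luminance value, first chroma prediction value) pairs are fitted toward the first chroma reconstruction values. The linear form C = a*L + b*C_pred + c and the normal-equation solver below are assumptions for this sketch; the embodiment does not prescribe a particular fitting procedure.

```python
# Illustrative least-squares fit of the model parameters (a, b, c) that
# map template-pixel luminance L and chroma prediction P toward the chroma
# reconstruction R, i.e. minimise ||a*L + b*P + c - R||^2. Solved via the
# 3x3 normal equations with Gaussian elimination (partial pivoting).

def fit_model_params(luma, chroma_pred, chroma_recon):
    rows = [[l, p, 1.0] for l, p in zip(luma, chroma_pred)]
    n = 3
    # Normal equations: (A^T A) x = A^T r, with columns [L, P, 1].
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    atr = [sum(r[i] * v for r, v in zip(rows, chroma_recon)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda k: abs(ata[k][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atr[col], atr[piv] = atr[piv], atr[col]
        for k in range(col + 1, n):
            f = ata[k][col] / ata[col][col]
            for j in range(col, n):
                ata[k][j] -= f * ata[col][j]
            atr[k] -= f * atr[col]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (atr[i] - sum(ata[i][j] * x[j] for j in range(i + 1, n))) / ata[i][i]
    return x[0], x[1], x[2]
```

With exact linear data the fit recovers the generating parameters; on real template pixels it returns the parameters that best explain the reconstruction values, which is what "fitting toward the first chrominance reconstruction value" denotes here.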
The embodiment of the device corresponds to the embodiment of the chrominance component prediction method shown in fig. 2, and each implementation process and implementation manner in the embodiment of the method can be applied to the embodiment of the device, and the same technical effects can be achieved.
The chroma component prediction apparatus in the embodiment of the present application may be an electronic device, for example, an electronic device with an operating system, or may be a component in an electronic device, for example, an integrated circuit or a chip. The electronic device may be a terminal, or may be other devices than a terminal. By way of example, the terminals may include, but are not limited to, the types of terminals listed above, other devices may be servers, network attached storage (Network Attached Storage, NAS), etc., and embodiments of the present application are not limited in detail.
Optionally, as shown in fig. 6, the embodiment of the present application further provides a communication device 600, including a processor 601 and a memory 602, where the memory 602 stores a program or instructions executable on the processor 601. For example, when the communication device 600 is a terminal, the program or instructions, when executed by the processor 601, implement the steps of the above-mentioned chrominance component prediction method embodiment and achieve the same technical effects.
The embodiment of the present application also provides a terminal, including a processor 601 and a communication interface, where the processor 601 is configured to perform the following operations:
determining template pixels corresponding to a target image block based on a target identifier corresponding to the target image block;
Fitting a first chrominance reconstruction value, a first chrominance prediction value and a first brightness value which correspond to the template pixels to obtain model parameters;
and determining a target chroma prediction value corresponding to the target image block based on the model parameter, the second chroma prediction value corresponding to the target image block and the second brightness value.
The terminal embodiment corresponds to the terminal-side method embodiment, and each implementation process and implementation manner of the method embodiment can be applied to the terminal embodiment, and the same technical effects can be achieved. Specifically, fig. 7 is a schematic diagram of a hardware structure of a terminal for implementing an embodiment of the present application.
The terminal 700 includes, but is not limited to: radio frequency unit 701, network module 702, audio output unit 703, input unit 704, sensor 705, display unit 706, user input unit 707, interface unit 708, memory 709, and processor 710.
Those skilled in the art will appreciate that the terminal 700 may further include a power source (e.g., a battery) for powering the various components, and that the power source may be logically coupled to the processor 710 via a power management system so as to perform functions such as managing charging, discharging, and power consumption via the power management system. The terminal structure shown in fig. 7 does not constitute a limitation of the terminal, and the terminal may include more or less components than shown, or may combine certain components, or may be arranged in different components, which will not be described in detail herein.
It should be appreciated that in embodiments of the present application, the input unit 704 may include a graphics processor (Graphics Processing Unit, GPU) 7041 and a microphone 7042, with the graphics processor 7041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 707 includes at least one of a touch panel 7071 and other input devices 7072. The touch panel 7071 is also referred to as a touch screen. The touch panel 7071 may include two parts, a touch detection device and a touch controller. Other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
In the embodiment of the present application, after receiving downlink data from a network side device, the radio frequency unit 701 may transmit the downlink data to the processor 710 for processing; the radio frequency unit 701 may also send uplink data to the network side device. Typically, the radio frequency unit 701 includes, but is not limited to, an antenna, an amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
The memory 709 may be used to store software programs or instructions and various data. The memory 709 may mainly include a first storage area storing programs or instructions and a second storage area storing data, wherein the first storage area may store an operating system, application programs or instructions (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like. Further, the memory 709 may include volatile memory or nonvolatile memory, or the memory 709 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), a synchronous dynamic random access memory (SDRAM), a double data rate synchronous dynamic random access memory (DDR SDRAM), an enhanced synchronous dynamic random access memory (ESDRAM), a synch-link dynamic random access memory (SLDRAM), or a direct rambus random access memory (DRRAM). The memory 709 in embodiments of the application includes, but is not limited to, these and any other suitable types of memory.
Processor 710 may include one or more processing units; optionally, processor 710 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, and the like, and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 710.
The processor 710 is configured to perform the following operations:
determining template pixels corresponding to a target image block based on a target identifier corresponding to the target image block;
Fitting a first chrominance reconstruction value, a first chrominance prediction value and a first brightness value which correspond to the template pixels to obtain model parameters;
and determining a target chroma prediction value corresponding to the target image block based on the model parameter, the second chroma prediction value corresponding to the target image block and the second brightness value.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the above-mentioned chroma component prediction method embodiment, and can achieve the same technical effects, and in order to avoid repetition, the description is omitted here.
Wherein the processor is a processor in the terminal described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
The embodiment of the application further provides a chip, which comprises a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is used for running programs or instructions to realize the processes of the embodiment of the chrominance component prediction method, and can achieve the same technical effect, so that repetition is avoided, and the description is omitted here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, or the like.
The embodiments of the present application further provide a computer program/program product stored in a storage medium, where the computer program/program product is executed by at least one processor to implement the respective processes of the embodiments of the chrominance component prediction method, and achieve the same technical effects, and are not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general hardware platform, or alternatively by means of hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application may be embodied, in essence or in the part contributing to the prior art, in the form of a computer software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the methods according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.

Claims (12)

1. A method of chrominance component prediction, comprising:
Determining template pixels corresponding to a target image block based on a target identifier corresponding to the target image block; the template pixels are at least part of pixels of a pixel area adjacent to the target image block;
fitting a first chrominance reconstruction value, a first chrominance prediction value and a first brightness value which correspond to the template pixels to obtain model parameters; the model parameters are parameters of a chrominance component prediction model corresponding to the target image block, and the chrominance component prediction model is determined based on the target identifier;
and determining a target chroma prediction value corresponding to the target image block based on the model parameter, the second chroma prediction value corresponding to the target image block and the second brightness value.
2. The method of claim 1, wherein determining the target chroma prediction value for the target image block based on the model parameter and the second chroma prediction value and the second luma value for the target image block comprises:
Calculating the model parameters, a second chroma prediction value corresponding to the target image block and a second brightness value by using the chroma component prediction model to obtain a target chroma prediction value corresponding to the target image block; the chrominance component prediction model includes an inter-component linear prediction model or an inter-component nonlinear prediction model.
3. The method of claim 1, wherein before the fitting of the first chrominance reconstruction value, the first chrominance prediction value and the first luminance value corresponding to the template pixels, the method comprises:
performing intra-frame prediction on the template pixels according to the chroma intra-frame prediction modes corresponding to the template pixels, and determining the first chroma prediction value;
Performing intra-frame prediction on the template pixels according to the brightness intra-frame prediction modes corresponding to the template pixels, and determining the first brightness value; or determining the first brightness value according to the brightness reconstruction value of the template pixel.
4. The method according to claim 1, wherein before determining the target chroma prediction value corresponding to the target image block based on the model parameter and the second chroma prediction value and the second luma value corresponding to the target image block, the method comprises:
Performing intra-frame prediction on the target image block according to a chroma intra-frame prediction mode corresponding to the target image block, and determining the second chroma prediction value;
Performing intra-frame prediction on the target image block according to a brightness intra-frame prediction mode corresponding to the target image block, and determining the second brightness value; or determining the second luminance value according to the luminance reconstruction value of the target image block.
5. The method according to any of claims 1-4, wherein the template pixels are adjacent to the target image block and located above and/or to the left of the target image block; or alternatively
The template pixels are located above and/or to the left of the interior of the target image block.
6. A chrominance component prediction apparatus, comprising:
The first determining module is used for determining template pixels corresponding to the target image block based on the target identifier corresponding to the target image block; the template pixels are at least part of pixels of a pixel area adjacent to the target image block;
The fitting module is used for fitting the first chrominance reconstruction value, the first chrominance prediction value and the first brightness value corresponding to the template pixels to obtain model parameters; the model parameters are parameters of a chrominance component prediction model corresponding to the target image block, and the chrominance component prediction model is determined based on the target identifier;
and the second determining module is used for determining a target chroma prediction value corresponding to the target image block based on the model parameter, a second chroma prediction value corresponding to the target image block and a second brightness value.
7. The apparatus of claim 6, wherein the second determining module is specifically configured to:
Calculating the model parameters, a second chroma prediction value corresponding to the target image block and a second brightness value by using the chroma component prediction model to obtain a target chroma prediction value corresponding to the target image block; the chrominance component prediction model includes an inter-component linear prediction model or an inter-component nonlinear prediction model.
8. The apparatus of claim 6, wherein the apparatus further comprises:
A third determining module, configured to perform intra-frame prediction on the template pixel according to a chroma intra-frame prediction mode corresponding to the template pixel, and determine the first chroma prediction value;
A fourth determining module, configured to perform intra-frame prediction on the template pixel according to a luminance intra-frame prediction mode corresponding to the template pixel, and determine the first luminance value; or determining the first brightness value according to the brightness reconstruction value of the template pixel.
9. The apparatus of claim 6, wherein the apparatus further comprises:
a fifth determining module, configured to perform intra-frame prediction on the target image block according to a chroma intra-frame prediction mode corresponding to the target image block, and determine the second chroma prediction value;
A sixth determining module, configured to perform intra-frame prediction on the target image block according to a luminance intra-frame prediction mode corresponding to the target image block, and determine the second luminance value; or determining the second luminance value according to the luminance reconstruction value of the target image block.
10. The apparatus according to any of claims 6-9, wherein the template pixels are adjacent to the target image block and located above and/or to the left of the target image block; or alternatively
The template pixels are located above and/or to the left of the interior of the target image block.
11. A terminal comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the chrominance component prediction method of any one of claims 1 to 5.
12. A readable storage medium, characterized in that it has stored thereon a program or instructions which, when executed by a processor, implement the steps of the chrominance component prediction method according to any one of claims 1 to 5.
CN202211255923.2A 2022-10-12 2022-10-13 Chroma component prediction method, device and equipment Pending CN117915100A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2023/123385 WO2024078416A1 (en) 2022-10-12 2023-10-08 Chromaticity component prediction method and apparatus, and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211248936 2022-10-12
CN2022112489367 2022-10-12

Publications (1)

Publication Number Publication Date
CN117915100A true CN117915100A (en) 2024-04-19

Family

ID=90689644

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211255923.2A Pending CN117915100A (en) 2022-10-12 2022-10-13 Chroma component prediction method, device and equipment

Country Status (1)

Country Link
CN (1) CN117915100A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination