WO2020258053A1 - Image component prediction method and apparatus, and computer storage medium - Google Patents


Info

Publication number
WO2020258053A1
Authority
WO
WIPO (PCT)
Prior art keywords
image component
reference pixel
point
adjacent
value
Prior art date
Application number
PCT/CN2019/092859
Other languages
French (fr)
Chinese (zh)
Inventor
霍俊彦 (HUO Junyan)
马彦卓 (MA Yanzhuo)
万帅 (WAN Shuai)
杨付正 (YANG Fuzheng)
李新伟 (LI Xinwei)
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd. (Oppo广东移动通信有限公司)
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd. (Oppo广东移动通信有限公司)
Priority to CN201980084801.1A priority Critical patent/CN113196770A/en
Priority to PCT/CN2019/092859 priority patent/WO2020258053A1/en
Publication of WO2020258053A1 publication Critical patent/WO2020258053A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding

Definitions

  • the embodiments of the present application relate to the field of video coding and decoding technologies, and in particular to an image component prediction method, device, and computer storage medium.
  • H.265/High Efficiency Video Coding (HEVC) can no longer meet the needs of rapidly developing video applications.
  • JVET: Joint Video Exploration Team
  • VVC: Versatile Video Coding
  • VTM: the VVC reference software test model, into which an image component prediction method based on a prediction model has been integrated; through this method, the chrominance component can be predicted from the luminance component of the current coding block (CB).
  • CB: coding block
  • the embodiments of the application provide an image component prediction method, device, and computer storage medium. By optimizing the fitting points used in the derivation of the model parameters, the robustness of prediction is improved, the constructed prediction model is more accurate, and the codec prediction performance of video images is improved.
  • an embodiment of the present application provides an image component prediction method, the method includes:
  • acquiring N adjacent reference pixels corresponding to the to-be-predicted image component of the coding block in the video image; wherein the N adjacent reference pixels are reference pixels adjacent to the coding block, and N is a preset integer value;
  • comparing the N first image component values corresponding to the N adjacent reference pixels to determine a first reference pixel subset and a second reference pixel subset; wherein the first reference pixel subset includes the two adjacent reference pixels corresponding to the smallest and second-smallest first image component values among the N first image component values;
  • the second reference pixel subset includes the two adjacent reference pixels corresponding to the largest and second-largest first image component values among the N first image component values;
  • calculating the mean of the N adjacent reference pixels to obtain a first mean point; comparing the first image component values of the first reference pixel subset and/or the second reference pixel subset against the first mean point to determine two fitting points; determining model parameters based on the two fitting points, and obtaining a prediction model corresponding to the image component to be predicted according to the model parameters; wherein the prediction model is used to perform prediction processing on the image component to be predicted, to obtain the predicted value corresponding to the image component to be predicted.
  • an embodiment of the present application provides an image component prediction device.
  • the image component prediction device includes: an acquisition unit, a comparison unit, a calculation unit, and a prediction unit, wherein:
  • the acquiring unit is configured to acquire N adjacent reference pixels corresponding to the image component to be predicted of the coding block in the video image; wherein the N adjacent reference pixels are reference pixels adjacent to the coding block, and N is a preset integer value;
  • the comparison unit is configured to compare the N first image component values corresponding to the N adjacent reference pixels to determine a first reference pixel subset and a second reference pixel subset; wherein the first reference pixel subset includes the two adjacent reference pixels corresponding to the smallest and second-smallest first image component values among the N first image component values, and the second reference pixel subset includes the two adjacent reference pixels corresponding to the largest and second-largest first image component values;
  • the calculation unit is configured to calculate the mean of the N adjacent reference pixels to obtain a first mean point;
  • the comparison unit is further configured to compare the first image component values of the first reference pixel subset and/or the second reference pixel subset against the first mean point, and determine two fitting points;
  • the prediction unit is configured to determine model parameters based on the two fitting points, and obtain a prediction model corresponding to the image component to be predicted according to the model parameters; wherein the prediction model is used to perform prediction processing on the image component to be predicted, to obtain the predicted value corresponding to the image component to be predicted.
  • an embodiment of the present application provides an image component prediction device, the image component prediction device including: a memory and a processor;
  • the memory is used to store a computer program that can run on the processor;
  • the processor is configured to execute the method described in the first aspect when running the computer program.
  • an embodiment of the present application provides a computer storage medium, the computer storage medium stores an image component prediction program, and when the image component prediction program is executed by at least one processor, the method described in the first aspect is implemented.
  • the embodiments of the present application provide an image component prediction method, device, and computer storage medium.
  • N adjacent reference pixels are acquired, where the N adjacent reference pixels are reference pixels adjacent to the coding block and N is a preset integer value; the N first image component values corresponding to the N adjacent reference pixels are then compared to determine a first reference pixel subset and a second reference pixel subset, where the first reference pixel subset includes the two adjacent reference pixels corresponding to the smallest and second-smallest first image component values, and the second reference pixel subset includes the two adjacent reference pixels corresponding to the largest and second-largest first image component values; the mean of the N adjacent reference pixels is calculated to obtain a first mean point; the first image component values of the first reference pixel subset and/or the second reference pixel subset are then compared against the first mean point to determine the two fitting points.
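The subset construction and mean-point computation summarized above can be sketched as follows (an illustrative sketch only; the exact rule by which the first mean point is compared against the subsets to select the two fitting points is detailed later in the application, and the function name and tuple layout are assumptions):

```python
def prepare_fitting_candidates(points):
    """points: list of N (luma, chroma) adjacent reference pixels.

    Builds the first subset (two smallest luma values), the second subset
    (two largest luma values), and the mean point over all N pixels.
    """
    pts = sorted(points, key=lambda p: p[0])   # order by first-component value
    subset_min = pts[:2]                       # smallest and second-smallest
    subset_max = pts[-2:]                      # second-largest and largest
    n = len(points)
    mean_point = (sum(p[0] for p in points) / n,
                  sum(p[1] for p in points) / n)
    return subset_min, subset_max, mean_point
```

The mean point aggregates all N reference pixels, so an outlier in either subset can later be detected by how far it sits from this average.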
  • Figure 1 is a schematic diagram of the distribution of an effective adjacent area provided by a related technical solution
  • Figure 2 is a schematic diagram of the distribution of selected areas in one of three modes provided by related technical solutions
  • Figure 3 is a schematic flow chart of a traditional solution for deriving model parameters provided by related technical solutions
  • Fig. 4A is a schematic diagram of a prediction model under a traditional scheme provided by a related technical scheme
  • FIG. 4B is a schematic diagram of a prediction model under another traditional solution provided by the related technical solution.
  • FIG. 5 is a schematic diagram of the composition of a video encoding system provided by an embodiment of the application.
  • FIG. 6 is a schematic diagram of the composition of a video decoding system provided by an embodiment of the application.
  • FIG. 7 is a schematic flowchart of an image component prediction method provided by an embodiment of the application.
  • FIG. 8A is a schematic structural diagram of adjacent reference pixel selection in the INTRA_LT_CCLM mode according to an embodiment of the application;
  • FIG. 8B is a schematic structural diagram of adjacent reference pixel selection in the INTRA_L_CCLM mode according to an embodiment of the application;
  • FIG. 8C is a schematic structural diagram of adjacent reference pixel selection in the INTRA_T_CCLM mode according to an embodiment of the application.
  • FIG. 9 is a schematic flowchart of another image component prediction method provided by an embodiment of the application.
  • FIG. 10 is a schematic flowchart of a model parameter derivation solution provided by an embodiment of the application.
  • FIG. 11 is a schematic diagram of comparison between a prediction model provided by an embodiment of the present application and a traditional solution
  • FIG. 12 is a schematic diagram of comparison between another solution of the application provided by an embodiment of the application and a prediction model under the traditional solution;
  • FIG. 13 is a schematic diagram of comparison between another solution of the application provided by an embodiment of the application and a prediction model under a traditional solution;
  • FIG. 14 is a schematic diagram of a structure for determining a fitting point provided by an embodiment of the application.
  • FIG. 15 is a schematic structural diagram of a prediction model provided by an embodiment of this application.
  • FIG. 16 is a schematic diagram of the composition structure of an image component prediction apparatus provided by an embodiment of the application.
  • FIG. 17 is a schematic diagram of a specific hardware structure of an image component prediction apparatus provided by an embodiment of the application.
  • FIG. 19 is a schematic diagram of the composition structure of a decoder provided by an embodiment of the application.
  • the first image component, the second image component, and the third image component are generally used to characterize the coding block; among them, the three image components are a luminance component, a blue chrominance component, and a red chrominance component.
  • the luminance component is usually represented by the symbol Y
  • the blue chrominance component is usually represented by the symbol Cb or U
  • the red chrominance component is usually represented by the symbol Cr or V; in this way, the video image can be represented in YCbCr format or YUV format.
  • the first image component may be a luminance component
  • the second image component may be a blue chrominance component
  • the third image component may be a red chrominance component
  • the cross-component prediction technology mainly includes the Cross-Component Linear Model prediction (CCLM) mode and the Multi-Directional Linear Model prediction (MDLM) mode. Whether the model parameters are derived in the CCLM mode or in the MDLM mode, the corresponding prediction model can realize prediction between image components, such as first-to-second, second-to-first, first-to-third, third-to-first, second-to-third, or third-to-second image component prediction.
  • CCLM cross-component linear model prediction
  • MDLM Multi-Directional Linear Model Prediction
  • the CCLM mode is used in VVC.
  • the first image component and the second image component belong to the same coding block; that is, the predicted value of the second image component is constructed according to the reconstructed value of the first image component of the same coding block, as shown in equation (1): Pred_C[i,j] = α × Pred_L[i,j] + β (1)
  • i,j represent the position coordinates of the pixel in the coding block
  • i represents the horizontal direction
  • j represents the vertical direction
  • Pred C [i,j] represents the predicted value of the second image component for the pixel at location coordinate [i,j] in the coding block
  • Pred L [i,j] represents the reconstructed value of the first image component corresponding to the pixel with the position coordinate [i,j] in the same coding block (downsampled)
  • α and β represent the model parameters.
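As a minimal illustration of equation (1), assuming floating-point model parameters and clipping to the valid component range (the actual codec uses fixed-point arithmetic):

```python
def cclm_predict(rec_luma, alpha, beta, bit_depth=8):
    """Apply the CCLM linear model Pred_C[i,j] = alpha * Pred_L[i,j] + beta.

    rec_luma: 2-D list of down-sampled reconstructed luma values.
    Returns the predicted chroma block, clipped to [0, 2**bit_depth - 1].
    """
    max_val = (1 << bit_depth) - 1
    return [[min(max_val, max(0, round(alpha * y + beta))) for y in row]
            for row in rec_luma]
```

The clip keeps the predicted chroma inside the representable range, mirroring what a real encoder or decoder would do after applying the linear model.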
  • its adjacent areas may include a left adjacent area, an upper adjacent area, a lower left adjacent area, and an upper right adjacent area.
  • three cross-component linear model prediction modes can be included: the intra CCLM mode using the left and upper adjacent areas (denoted INTRA_LT_CCLM), the intra CCLM mode using the left and lower-left adjacent areas (denoted INTRA_L_CCLM), and the intra CCLM mode using the upper and upper-right adjacent areas (denoted INTRA_T_CCLM).
  • each mode can select a preset number (such as 4) of adjacent reference pixels for the derivation of the model parameters α and β; the biggest difference between these three modes is that the selection regions of the adjacent reference pixels used to derive α and β are different.
  • the length of the upper selection region corresponding to the adjacent reference pixels is W'
  • the length of the left selection region corresponding to the adjacent reference pixels is H'
  • FIG. 1 shows a schematic diagram of the distribution of effective adjacent areas provided by related technical solutions.
  • the left side adjacent area, the lower left side adjacent area, the upper side adjacent area, and the upper right side adjacent area are all valid.
  • the selection areas for the three modes are shown in Figure 2.
  • (a) in Figure 2 shows the selection area of the INTRA_LT_CCLM mode, including the left adjacent area and the upper adjacent area;
  • (b) shows the selection area of the INTRA_L_CCLM mode, including the left adjacent area and the lower-left adjacent area;
  • (c) shows the selection area of the INTRA_T_CCLM mode, including the upper adjacent area and the upper-right adjacent area.
  • reference points for deriving model parameters can be selected in the selection area.
  • the reference points selected in this way can be called adjacent reference pixels, and usually the number of adjacent reference pixels is at most 4; for a W×H coding block of a given size, the positions of the adjacent reference pixels are generally fixed.
  • the model parameters are currently determined according to the schematic flow diagram of the traditional solution for deriving the model parameters shown in FIG. 3.
  • in the process shown in FIG. 3, assuming that the image component to be predicted is a chrominance component, the process may include:
  • when the number of effective pixels is 0, steps S303 and S304 are executed; when the number of effective pixels is 2, step S305 is executed, and then steps S306-S309 are executed sequentially; when the number of effective pixels is 4, steps S306-S309 are executed sequentially.
  • the predicted values Pred C [i,j] corresponding to the chrominance components of all pixels in the current coding block can be filled with the default values of the chrominance components;
  • the default value is the middle value of the chrominance component.
  • the intermediate value can be calculated as 1 << (BitDepthC - 1).
  • assuming the current video is an 8-bit video, the component range corresponding to the chrominance component is 0-255, and the intermediate value is 128; at this time, the preset component value can be 128. Assuming the current video is a 10-bit video, the component range corresponding to the chrominance component is 0-1023, and the intermediate value is 512; at this time, the preset component value can be 512.
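The default-fill value above follows directly from the chroma bit depth (a one-line sketch):

```python
def chroma_default(bit_depth_c):
    # Middle value of the chroma component range, used to fill Pred_C
    # when no effective adjacent reference pixels are available.
    return 1 << (bit_depth_c - 1)
```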
  • the principle of "two points determine a straight line" is used to construct a prediction model; the two points here can be called fitting points.
  • two adjacent reference pixels with larger luminance values and two adjacent reference pixels with smaller luminance values are obtained after four comparisons; a mean point (which can be denoted mean_max) is then computed from the two adjacent reference pixels with larger luminance components, and another mean point (which can be denoted mean_min) is computed from the two adjacent reference pixels with smaller luminance components, yielding the two mean points mean_max and mean_min; mean_max and mean_min are then used as the two fitting points to derive the model parameters (which can be expressed by α and β), and finally the prediction model is built according to the model parameters and chrominance component prediction is performed based on it.
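The traditional derivation described above can be sketched as follows (a simplified floating-point illustration; the VTM implementation uses integer arithmetic and a look-up table for the division):

```python
def derive_model_traditional(points):
    """points: list of 4 (luma, chroma) pairs from adjacent reference pixels.

    Averages the two largest-luma and two smallest-luma points into
    mean_max and mean_min, then fits a line through those two points.
    Returns (alpha, beta) such that pred_c = alpha * luma + beta.
    """
    pts = sorted(points, key=lambda p: p[0])    # sort by luma value
    mean_min = ((pts[0][0] + pts[1][0]) / 2, (pts[0][1] + pts[1][1]) / 2)
    mean_max = ((pts[2][0] + pts[3][0]) / 2, (pts[2][1] + pts[3][1]) / 2)
    if mean_max[0] == mean_min[0]:              # degenerate case: flat luma
        return 0.0, mean_min[1]
    alpha = (mean_max[1] - mean_min[1]) / (mean_max[0] - mean_min[0])
    beta = mean_min[1] - alpha * mean_min[0]
    return alpha, beta
```

Because each fitting point is the average of only two reference pixels, a single outlier shifts the fitted line noticeably, which is exactly the robustness problem the application targets.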
  • the gray diagonal line indicates the prediction model constructed according to the two fitting points; the gray dotted line indicates the prediction model fitted by the Least Mean Square (LMS) method over the adjacent reference pixels.
  • FIG. 4B is a schematic diagram of the prediction model under a traditional scheme; as can be seen from Figure 4B, there is a certain deviation between the gray slanted line and the gray dotted line, that is, the prediction model constructed by the traditional scheme cannot fit the distribution of these 4 adjacent reference pixels well.
  • the current traditional scheme lacks robustness in the process of model parameter derivation, and cannot fit well when the four adjacent reference pixels are unevenly distributed.
  • an embodiment of the present application provides an image component prediction method, which obtains N adjacent reference pixels for the image components to be predicted of the coding block in the video image.
  • the N adjacent reference pixels are reference pixels adjacent to the coding block, and N is a preset integer value; the N first image component values corresponding to the N adjacent reference pixels are compared to determine a first reference pixel subset and a second reference pixel subset, where the first reference pixel subset includes the two adjacent reference pixels corresponding to the smallest and second-smallest first image component values, and the second reference pixel subset includes the two adjacent reference pixels corresponding to the largest and second-largest first image component values; the mean of the N adjacent reference pixels is calculated to obtain a first mean point; the first image component values of the first reference pixel subset and/or the second reference pixel subset are compared against the first mean point to determine the two fitting points; model parameters are determined based on the two fitting points, and the prediction model is obtained accordingly.
  • the video encoding system 500 includes a transform and quantization unit 501, an intra-frame estimation unit 502, an intra-frame prediction unit 503, a motion compensation unit 504, a motion estimation unit 505, a filtering unit 508, an encoding unit 509, a decoded image buffer unit 510, and so on.
  • the filtering unit 508 may implement deblocking filtering and Sample Adaptive Offset (SAO) filtering
  • the encoding unit 509 can implement header information encoding and Context-based Adaptive Binary Arithmetic Coding (CABAC).
  • a video coding block can be obtained by dividing the coding tree unit (CTU); the residual pixel information obtained after intra or inter prediction is then processed by the transform and quantization unit 501, which transforms the video coding block, including transforming the residual information from the pixel domain to the transform domain, and quantizes the resulting transform coefficients to further reduce the bit rate;
  • the intra-frame estimation unit 502 and the intra-frame prediction unit 503 are used to perform intra prediction on the video coding block; specifically, the intra estimation unit 502 and the intra prediction unit 503 are used to determine the intra prediction mode to be used to encode the video coding block;
  • the motion compensation unit 504 and the motion estimation unit 505 are used to perform inter-frame predictive coding of the received video coding block relative to one or more blocks in one or more reference frames to provide temporal prediction information;
  • the motion estimation performed by the motion estimation unit 505 is a process of generating motion vectors; the motion vectors can estimate the motion of the video coding block, and motion compensation is then performed based on the motion vectors;
  • the context content can be based on adjacent coding blocks and can be used to encode information indicating the determined intra prediction mode, and to output the code stream of the video signal; the decoded image buffer unit 510 is used to store reconstructed video coding blocks for prediction reference. As the encoding of the video image progresses, new reconstructed video coding blocks are continuously generated, and these reconstructed video coding blocks are stored in the decoded image buffer unit 510.
  • the video decoding system 600 includes a decoding unit 601, an inverse transform and inverse quantization unit 602, an intra-frame prediction unit 603, a motion compensation unit 604, a filtering unit 605, a decoded image buffer unit 606, and so on; the decoding unit 601 can implement header information decoding and CABAC decoding, and the filtering unit 605 can implement deblocking filtering and SAO filtering.
  • after the code stream of the video signal is output, it is input into the video decoding system 600 and first passes through the decoding unit 601 to obtain the decoded transform coefficients;
  • the inverse transform and inverse quantization unit 602 processes the transform coefficients to generate residual blocks in the pixel domain;
  • the intra prediction unit 603 can be used to generate the prediction data of the current video decoding block based on the determined intra prediction mode and data from previously decoded blocks of the current frame or picture;
  • the motion compensation unit 604 determines the prediction information for the video decoding block by analyzing motion vectors and other associated syntax elements, and uses the prediction information to generate the predictive block of the video decoding block being decoded; a decoded video block is formed by summing the residual block from the inverse transform and inverse quantization unit 602 with the corresponding predictive block generated by the intra prediction unit 603 or the motion compensation unit 604; the decoded video signal then passes through the filtering unit 605 to remove blocking artifacts and improve video quality;
  • the image component prediction method in the embodiments of this application is mainly applied to the intra prediction unit 503 shown in FIG. 5 and the intra prediction unit 603 shown in FIG. 6, and specifically to the CCLM prediction part of intra prediction. That is, the image component prediction method in the embodiments of this application can be applied to a video encoding system, a video decoding system, or both at the same time; the embodiments of this application do not specifically limit this.
  • when the method is applied to the intra prediction unit 503 part, the "coding block in the video image" specifically refers to the current coding block in intra prediction; when the method is applied to the intra prediction unit 603 part, the "coding block in the video image" specifically refers to the current decoded block in intra prediction.
  • FIG. 7 shows a schematic flowchart of an image component prediction method provided by an embodiment of the present application.
  • the method may include:
  • S701 Acquire N adjacent reference pixels corresponding to the image component to be predicted of the coding block in the video image
  • N adjacent reference pixels are reference pixels adjacent to the coding block, and N is a preset integer value, which may also be referred to as a preset number.
  • the video image can be divided into multiple coding blocks, and each coding block can include a first image component, a second image component, and a third image component; the coding block in this embodiment of the application is the current block to be encoded in the video image.
  • when the first image component needs to be predicted by the prediction model, the image component to be predicted is the first image component; when the second image component needs to be predicted by the prediction model, the image component to be predicted is the second image component; when the third image component needs to be predicted by the prediction model, the image component to be predicted is the third image component.
  • N may generally be 4, but the embodiment of the present application does not specifically limit it.
  • the obtaining N adjacent reference pixels corresponding to the image component to be predicted of the coding block in the video image may include:
  • S701-1 Obtain a first reference pixel set corresponding to the image component to be predicted of the coding block in the video image
  • for the INTRA_LT_CCLM mode, the first reference pixel set is composed of the adjacent reference pixels in the left adjacent area and the upper adjacent area of the coding block, as shown in Figure 2(a); for the INTRA_L_CCLM mode, the first reference pixel set is composed of the adjacent reference pixels in the left adjacent area and the lower-left adjacent area, as shown in Figure 2(b); for the INTRA_T_CCLM mode, the first reference pixel set is composed of the adjacent reference pixels in the upper adjacent area and the upper-right adjacent area of the coding block, as shown in Figure 2(c).
  • the acquiring the first reference pixel set corresponding to the image component to be predicted of the coding block in the video image may include:
  • acquiring reference pixels adjacent to at least one side of the coding block, wherein the at least one side includes the left side of the coding block and/or the upper side of the coding block; and composing the acquired reference pixels into a first reference pixel set corresponding to the image component to be predicted.
  • at least one side of the coding block can refer to the upper side of the coding block, the left side of the coding block, or even both the upper and left sides of the coding block, which is not specifically limited in the embodiments of the present application.
  • when both the left adjacent area and the upper adjacent area are effective areas, the first reference pixel set can be composed of the reference pixels adjacent to the left side of the coding block together with the reference pixels adjacent to the upper side of the coding block.
  • when the left adjacent area is an effective area and the upper adjacent area is an invalid area, the first reference pixel set can be composed of the reference pixels adjacent to the left side of the coding block; when the left adjacent area is an invalid area and the upper adjacent area is an effective area, the first reference pixel set can be composed of the reference pixels adjacent to the upper side of the coding block.
  • the acquiring the first reference pixel set corresponding to the image component to be predicted of the coding block in the video image may include:
  • acquiring reference pixels in a reference row or a reference column adjacent to the coding block, and composing them into a first reference pixel set corresponding to the image component to be predicted.
  • the reference row or reference column adjacent to the encoding block can refer to the reference row adjacent to the upper side of the encoding block, or the reference column adjacent to the left side of the encoding block, or even It refers to a reference row or a reference column adjacent to other sides of the coding block, which is not specifically limited in the embodiment of the present application.
  • the reference row adjacent to the coding block will be described below taking the reference row adjacent to the upper side as an example, and the reference column adjacent to the coding block will be described taking the reference column adjacent to the left side as an example.
  • the reference pixels in the reference row adjacent to the coding block may include reference pixels adjacent to the upper side and the upper-right side (also referred to as the adjacent reference pixels corresponding to the upper side and the upper-right side), where the upper side represents the upper side of the coding block, and the upper-right side represents a segment of the same length as the height of the current coding block, extending horizontally rightward from the upper side; the reference pixels in the reference column adjacent to the coding block may also include reference pixels adjacent to the left side and the lower-left side (also referred to as the adjacent reference pixels corresponding to the left side and the lower-left side), where the left side represents the left side of the coding block, and the lower-left side represents a segment of the same length as the width of the current coding block, extending vertically downward from the left side; however, the embodiment of the present application does not specifically limit this.
  • the first reference pixel set at this time may be composed of reference pixels in the reference column adjacent to the coding block;
  • the first reference pixel set at this time may be composed of reference pixels in the reference row adjacent to the coding block.
  • S701-2 Perform screening processing on the first reference pixel set to obtain a second reference pixel set; wherein, the second reference pixel set includes N adjacent reference pixels;
  • in the first reference pixel set, there may be some unimportant reference pixels (for example, reference pixels with poor correlation) or some abnormal reference pixels; in order to ensure the accuracy of the prediction model, these reference pixels need to be removed to obtain the second reference pixel set. The number of effective pixels contained in the second reference pixel set is usually 4 in practical applications, but the embodiment of the application does not specifically limit this.
  • the filtering process on the first reference pixel set to obtain the second reference pixel set may include:
  • based on the positions of the pixels to be selected, the corresponding adjacent reference pixels are selected from the first reference pixel set, and the selected adjacent reference pixels form a second reference pixel set;
  • the second reference pixel set includes N adjacent reference pixels.
  • the image component intensity can be represented by image component values, such as brightness value, chroma value, etc.; here, the larger the image component value, the higher the image component intensity.
•   The screening of the first reference pixel set can be based on the positions of the reference pixels to be selected, or on the intensity of the image component (such as the luminance value or chrominance value), so that the filtered reference pixels to be selected form a second reference pixel set. The following takes the position of the reference pixel to be selected as an example.
  • the selection method of selecting at most 4 adjacent reference pixels is as follows:
•   2 adjacent reference pixels to be selected can be filtered out in the upper selection area W', and their corresponding positions are S[W'/4, -1] and S[3W'/4, -1]; 2 adjacent reference pixels to be selected can be screened out in the left selection area H', and their corresponding positions are S[-1, H'/4] and S[-1, 3H'/4]; these 4 adjacent reference pixels to be selected form a second reference pixel set, as shown in FIG. 8A.
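•   The position-based selection above can be sketched as follows (a hypothetical helper, assuming the selection-area widths W' and H' are given; integer division stands in for the shift used in practice):

```python
# Hypothetical sketch of position-based selection: given the widths W' and H'
# of the upper and left selection areas, pick the 4 candidate positions
# S[W'/4, -1], S[3W'/4, -1], S[-1, H'/4] and S[-1, 3H'/4] described above.

def candidate_positions(w_sel, h_sel):
    """Return the 4 (x, y) positions of the adjacent reference pixels
    to be selected; y = -1 is the row above the block and x = -1 is
    the column to its left."""
    upper = [(w_sel // 4, -1), (3 * w_sel // 4, -1)]
    left = [(-1, h_sel // 4), (-1, 3 * h_sel // 4)]
    return upper + left
```

For an 8x8 selection area this yields S[2, -1], S[6, -1], S[-1, 2] and S[-1, 6].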
  • the adjacent area on the left side and the adjacent area on the upper side of the coding block are both effective, and in order to maintain the same resolution of the luminance component and the chrominance component, the luminance component needs to be down-sampled, so that The down-sampled luminance component and chrominance component have the same resolution.
  • the adjacent area on the left side and the adjacent area on the lower left side of the coding block are both effective, and in order to maintain the same resolution of the luminance component and the chrominance component, the luminance component still needs to be down-sampled, so The down-sampled luminance component and chrominance component have the same resolution.
  • the upper adjacent area and the upper right adjacent area of the coding block are both effective, and in order to maintain the same resolution of the luminance component and the chrominance component, the luminance component still needs to be down-sampled, so that The down-sampled luminance component and chrominance component have the same resolution.
•   By filtering the first reference pixel set, the second reference pixel set can be obtained; the second reference pixel set generally includes a preset number of adjacent reference pixels, such as 4 adjacent reference pixels. The preset number of adjacent reference pixels are then grouped by comparison, so that the two fitting points determined subsequently are more accurate.
  • S702 Compare the N first image component values corresponding to the N adjacent reference pixels, and determine a first reference pixel subset and a second reference pixel subset;
•   It should be noted that each adjacent reference pixel corresponds to a first image component value and a second image component value; in this way, the preset number of adjacent reference pixels corresponds to a preset number of first image component values, and after the comparison process, the first reference pixel subset and the second reference pixel subset can be determined according to the first image component values.
  • the comparing the N first image component values corresponding to the N adjacent reference pixels to determine the first reference pixel subset and the second reference pixel subset may include :
•   Specifically, the preset number of first image component values are compared multiple times to obtain a first array composed of the smallest first image component value and the second smallest first image component value, and a second array composed of the largest first image component value and the second largest first image component value.
  • the first reference pixel subset and the second reference pixel subset are derived from the preset number of first image component values (for example, four adjacent reference pixels need to be compared four times) .
•   The first reference pixel subset includes the two adjacent reference pixels corresponding to the smallest first image component value and the second smallest first image component value among the preset number of first image component values;
•   the second reference pixel subset includes the two adjacent reference pixels corresponding to the largest first image component value and the second largest first image component value among the preset number of first image component values.
•   Specifically, the preset number of adjacent reference pixels are numbered first and then initially grouped: the first image component value corresponding to number i is stored in the first array, and the first image component value corresponding to number i+1 is stored in the second array, where i is an integer greater than or equal to 0. Then the first image component values in the first array and those in the second array are compared multiple times, so that the smallest first image component value and the second smallest first image component value end up in the first array, and the largest first image component value and the second largest first image component value end up in the second array.
  • the first reference pixel subset and the second reference pixel subset are obtained.
  • FIG. 9 shows a schematic flowchart of another image component prediction method provided by an embodiment of the present application.
  • the method may further include:
•   S901: Based on the first image component value and the second image component value corresponding to each adjacent reference pixel point in the first reference pixel subset, obtain the second mean value corresponding to the first image component and the second mean value corresponding to the second image component, to obtain the second mean point;
•   S902: Based on the first image component value and the second image component value corresponding to each adjacent reference pixel point in the second reference pixel subset, obtain the third mean value corresponding to the first image component and the third mean value corresponding to the second image component, to obtain the third mean point.
•   Specifically, the mean of the first image component values corresponding to the adjacent reference pixels in the first reference pixel subset is calculated to obtain the second mean value of the first image component (which can be represented by luma_min); the mean of the second image component values corresponding to the adjacent reference pixels in the first reference pixel subset is calculated to obtain the second mean value of the second image component (which can be represented by chroma_min).
•   The second mean point can be represented by mean_min; here, the first image component of the second mean point is luma_min, and the second image component of the second mean point is chroma_min.
•   Similarly, the mean of the first image component values corresponding to the adjacent reference pixels in the second reference pixel subset is calculated to obtain the third mean value of the first image component (which can be represented by luma_max); the mean of the second image component values corresponding to the adjacent reference pixels in the second reference pixel subset is calculated to obtain the third mean value of the second image component (which can be represented by chroma_max).
•   The third mean point can be represented by mean_max; here, the first image component of the third mean point is luma_max, and the second image component of the third mean point is chroma_max.
  • this step may include:
•   S903: Based on the first image component value and the second image component value corresponding to each adjacent reference pixel in the preset number of adjacent reference pixels, obtain the first mean value corresponding to the first image component and the first mean value corresponding to the second image component, to obtain the first mean point;
•   Specifically, the mean of the first image component values corresponding to the N adjacent reference pixels is calculated to obtain the first mean value of the first image component (which can be represented by meanL); the mean of the second image component values corresponding to the N adjacent reference pixels is calculated to obtain the first mean value of the second image component (which can be represented by meanC).
•   The first mean point can be represented by mean; here, the first image component of the first mean point is meanL, and the second image component of the first mean point is meanC.
•   Alternatively, the first mean point can be obtained by averaging the second mean point and the third mean point:
•   meanL can be obtained by averaging luma_min and luma_max;
•   meanC can be obtained by averaging chroma_min and chroma_max.
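•   As an illustrative sketch (hypothetical component values; floating-point arithmetic rather than the normative integer arithmetic), the first mean point can be computed either directly over the 4 adjacent reference pixels or by averaging the second and third mean points:

```python
# Sketch: the first mean point (meanL, meanC) over N = 4 adjacent reference
# pixels, computed two equivalent ways. Pixel values are hypothetical.

def mean_point(lumas, chromas):
    n = len(lumas)
    return sum(lumas) / n, sum(chromas) / n

lumas = [10, 20, 60, 70]     # two smaller-luma pixels, then two larger ones
chromas = [5, 9, 21, 25]

meanL, meanC = mean_point(lumas, chromas)                  # over all 4 pixels

luma_min, chroma_min = mean_point(lumas[:2], chromas[:2])  # second mean point
luma_max, chroma_max = mean_point(lumas[2:], chromas[2:])  # third mean point

# Both routes agree because the two subsets have the same size (2 and 2):
# meanL == (luma_min + luma_max) / 2 and meanC == (chroma_min + chroma_max) / 2
```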
  • S704 Perform a comparison of first image component values on the first reference pixel subset and/or the second reference pixel subset through the first average point, and determine two fitting points;
  • the two fitting points include the first fitting point and the second fitting point.
  • the second comparison process is performed on the first reference pixel subset and/or the second reference pixel subset through the first average point.
•   After the comparison, three cases arise: the first image component corresponding to the first mean point is smaller than the second smallest first image component value in the first reference pixel subset; or the first image component corresponding to the first mean point is greater than the second largest first image component value in the second reference pixel subset; or the first image component corresponding to the first mean point is greater than or equal to the second smallest first image component value in the first reference pixel subset and less than or equal to the second largest first image component value in the second reference pixel subset. For these three cases, the determined first fitting point and second fitting point are different, so that the robustness of CCLM prediction can be improved.
•   Further, in some embodiments, the method may further include:
•   By comparing the first image component values corresponding to the two adjacent reference pixels in the first reference pixel subset, the smallest first image component value and the second smallest first image component value can be determined; the pixel corresponding to the smallest first image component value is taken as the first adjacent reference pixel, and the pixel corresponding to the second smallest first image component value is taken as the second adjacent reference pixel. By comparing the first image component values corresponding to the two adjacent reference pixels in the second reference pixel subset, the largest first image component value and the second largest first image component value can be determined; the pixel corresponding to the second largest first image component value is taken as the third adjacent reference pixel, and the pixel corresponding to the largest first image component value is taken as the fourth adjacent reference pixel.
•   In some embodiments, performing the comparison of first image component values on the first reference pixel subset and/or the second reference pixel subset through the first mean point to determine the two fitting points may include:
•   It should be noted that the first image component of the first mean point may first be compared with the second smallest first image component value in the first reference pixel subset and then with the second largest first image component value in the second reference pixel subset; or it may first be compared with the second largest first image component value in the second reference pixel subset and then with the second smallest first image component value in the first reference pixel subset; the order of comparison is not specifically limited in the embodiment of the present application.
  • the first image component of the first average point is the first average value corresponding to the first image component.
•   For step S904: if the first image component of the first mean point is greater than or equal to the second smallest first image component value in the first reference pixel subset, step S905 is executed; otherwise, step S907 is executed, in which the first fitting point is the first adjacent reference pixel point and the second fitting point is the third mean point.
•   For step S905: if the first image component of the first mean point is less than or equal to the second largest first image component value in the second reference pixel subset, step S906 is executed, in which the first fitting point is the second mean point and the second fitting point is the third mean point; otherwise, step S908 is executed, in which the first fitting point is the second mean point and the second fitting point is the fourth adjacent reference pixel point.
•   It can be seen that after the second comparison process, the preset number of adjacent reference pixels can be divided into three cases; for these three cases, different pairs of fitting points are used to construct the prediction model, which can improve the robustness of CCLM prediction.
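•   The three-way selection in steps S904 to S908 can be sketched as follows (illustrative names; each point is a (luma, chroma) pair, and p_min and p_max denote the first and fourth adjacent reference pixels):

```python
# Sketch of the second comparison process: pick the two fitting points
# according to where the mean luma value meanL falls relative to the
# second smallest and second largest luma values. Names are illustrative.

def select_fitting_points(meanL, p_min, luma_2nd_min,
                          mean_min, mean_max, luma_2nd_max, p_max):
    """Return the two fitting points for the three cases of S904-S908."""
    if meanL < luma_2nd_min:        # S907: mean falls below the smaller group
        return p_min, mean_max
    if meanL <= luma_2nd_max:       # S906: mean lies between the two groups
        return mean_min, mean_max
    return mean_min, p_max          # S908: mean falls above the larger group
```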
•   It should be noted that strict and non-strict comparisons may be interchanged: step S905 may instead be executed when the first image component of the first mean point is greater than the second smallest first image component value in the first reference pixel subset, and step S907 when it is less than or equal to that value; likewise, step S906 may be executed when the first image component of the first mean point is less than the second largest first image component value in the second reference pixel subset, and step S908 when it is greater than or equal to that value.
  • S705 Determine model parameters based on the two fitting points, and obtain a prediction model corresponding to the image component to be predicted according to the model parameters; wherein, the prediction model is used to predict the image component to be predicted Processing to obtain the predicted value corresponding to the image component to be predicted;
•   Specifically, the model parameters can be determined according to the first fitting point and the second fitting point; here, the model parameters include the first model parameter (which can be denoted by α) and the second model parameter (which can be denoted by β). Assuming that the image component to be predicted is a chrominance component, according to the model parameters α and β, the prediction model corresponding to the chrominance component shown in formula (1) can be obtained.
  • the model parameters include a first model parameter and a second model parameter
  • the determining the model parameter based on the two fitting points may include:
•   Specifically, the first model parameter α can be calculated according to the first preset factor calculation model, as shown in formula (2): α = (chroma_2 − chroma_1) / (luma_2 − luma_1), where (luma_1, chroma_1) and (luma_2, chroma_2) are the first fitting point and the second fitting point;
•   the second model parameter β can be calculated according to the calculation model combining the first fitting point (luma_1, chroma_1) and the second preset factor, as shown in formula (3): β = chroma_1 − α × luma_1;
•   alternatively, the second model parameter β can be calculated according to the calculation model combining the first mean point (meanL, meanC) and the second preset factor, as shown in formula (4): β = meanC − α × meanL.
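•   A sketch of the "two points determine a straight line" derivation (floating-point division for clarity; the standard uses an integer approximation of the division, and the degenerate fallback shown here is an illustrative assumption):

```python
# Sketch: derive the slope alpha and intercept beta of the linear prediction
# model from two fitting points, each a (luma, chroma) pair.

def derive_model_parameters(fit1, fit2):
    (luma1, chroma1), (luma2, chroma2) = fit1, fit2
    if luma2 == luma1:              # degenerate case: fall back to a flat model
        return 0.0, chroma1
    alpha = (chroma2 - chroma1) / (luma2 - luma1)
    beta = chroma1 - alpha * luma1  # equivalently: meanC - alpha * meanL
    return alpha, beta

alpha, beta = derive_model_parameters((15, 7), (65, 23))
# alpha = (23 - 7) / (65 - 15) = 0.32, beta = 7 - 0.32 * 15 = 2.2
```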
•   In this way, a prediction model can be constructed. Assuming that the image component to be predicted is a chrominance component, the prediction model corresponding to the chrominance component can be obtained according to the model parameters (α and β), as shown in formula (1); then the prediction model is used to perform prediction processing on the chrominance component to obtain the predicted value corresponding to the chrominance component.
•   In this way, two fitting points can be determined; according to the principle that "two points determine a straight line", the slope of the line (i.e., the first model parameter) and the intercept of the line (i.e., the second model parameter) can be determined, so that the prediction model corresponding to the image component to be predicted can be obtained based on these two model parameters, in order to obtain the predicted value corresponding to the image component to be predicted for each pixel in the coding block.
•   In this way, the prediction model corresponding to the chrominance component shown in formula (1) can be obtained; then the prediction model shown in formula (1) performs prediction processing on the chrominance component of each pixel in the coding block, so that the predicted value corresponding to the chrominance component of each pixel can be obtained.
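•   The prediction step of formula (1) can be sketched as follows (the clamping bound max_val and the rounding are illustrative assumptions for 8-bit samples):

```python
# Sketch of applying the linear prediction model: for each pixel, the
# predicted chroma is alpha * (reconstructed luma) + beta, clamped to the
# valid sample range. The 8-bit clamp bound is an illustrative assumption.

def predict_chroma(rec_luma_block, alpha, beta, max_val=255):
    return [[max(0, min(max_val, round(alpha * l + beta)))
             for l in row]
            for row in rec_luma_block]

pred = predict_chroma([[15, 65], [40, 100]], alpha=0.32, beta=2.2)
```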
  • This embodiment provides a method for predicting image components.
•   First, N adjacent reference pixels are obtained, where the N adjacent reference pixels are reference pixels adjacent to the encoding block and N is a preset integer value. Then, the N first image component values corresponding to the N adjacent reference pixels are compared to determine the first reference pixel subset and the second reference pixel subset: the first reference pixel subset includes the two adjacent reference pixels corresponding to the smallest first image component value and the second smallest first image component value among the preset number of first image component values, and the second reference pixel subset includes the two adjacent reference pixels corresponding to the largest first image component value and the second largest first image component value. The average of the N adjacent reference pixels is calculated to obtain the first mean point, and the comparison of first image component values is performed on the first reference pixel subset and/or the second reference pixel subset through the first mean point to determine the two fitting points, from which the model parameters and the prediction model are obtained.
  • the number of adjacent reference pixels used for derivation of the model parameters is generally 4; that is, the preset number may be 4. A detailed description will be given below taking the preset number equal to 4 as an example.
  • the image component to be predicted is a chrominance component
•   For the first model parameter α: when 0 adjacent reference pixels are obtained, α can be directly set to 0 and the second model parameter β set to a default value (which can also be the median value of the chrominance component); when 2 adjacent reference pixels are obtained, the 2 adjacent reference pixels can be duplicated to obtain 4 adjacent reference pixels, and the prediction model is then constructed according to the operation on 4 adjacent reference pixels.
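•   A hypothetical sketch of these degenerate cases (the helper name, the (luma, chroma) pair representation, and the 8-bit mid-value default are assumptions):

```python
# Sketch: with 0 available adjacent reference pixels, alpha = 0 and beta
# defaults to the chrominance mid-value; with 2 pixels, the pair is
# duplicated to 4 so the regular 4-pixel derivation can run unchanged.

def normalize_reference_pixels(pixels, bit_depth=8):
    """pixels: list of (luma, chroma) pairs with 0, 2 or 4 entries.
    Returns (pixels_to_use, fixed_model): exactly one of the two is None."""
    if len(pixels) == 0:
        default_beta = 1 << (bit_depth - 1)  # mid-value, e.g. 128 for 8 bits
        return None, (0, default_beta)       # fixed (alpha, beta), no derivation
    if len(pixels) == 2:
        pixels = pixels * 2                  # duplicate the pair to 4 pixels
    return pixels, None
```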
•   In the embodiment of the present application, two fitting points can be determined through 4 comparisons and mean points, so as to derive the model parameters using the principle that "two points determine a straight line";
  • a prediction model corresponding to the image component to be predicted can be constructed according to the model parameters to obtain the predicted value corresponding to the image component to be predicted.
  • the image component to be predicted is a chrominance component, and the chrominance component is predicted by the luminance component. It is assumed that the numbers of the obtained 4 adjacent reference pixels are 0, 1, 2, and 3 respectively.
•   Through four comparisons, the 2 adjacent reference pixels with larger brightness values (including the pixel with the largest brightness value and the pixel with the next largest brightness value) and the 2 adjacent reference pixels with smaller brightness values (including the pixel with the smallest brightness value and the pixel with the next smallest brightness value) can be selected.
•   Two arrays minIdx[2] and maxIdx[2] can be set to store the two sets of adjacent reference pixels respectively. Initially, the adjacent reference pixels numbered 0 and 2 are put into minIdx[2], and the adjacent reference pixels numbered 1 and 3 are put into maxIdx[2], as shown below:
•   minIdx[2] = {0, 2}, maxIdx[2] = {1, 3}
•   Then, through the four comparison steps below, the two adjacent reference pixels with smaller brightness values are stored in minIdx[2] and the two adjacent reference pixels with larger brightness values are stored in maxIdx[2]:
•   Step1: if (L[minIdx[0]] > L[minIdx[1]]) swap(minIdx[0], minIdx[1])
•   Step2: if (L[maxIdx[0]] > L[maxIdx[1]]) swap(maxIdx[0], maxIdx[1])
•   Step3: if (L[minIdx[0]] > L[maxIdx[1]]) swap(minIdx, maxIdx)
•   Step4: if (L[minIdx[1]] > L[maxIdx[0]]) swap(minIdx[1], maxIdx[0])
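•   The four steps above, transcribed into runnable Python for clarity (L maps a pixel number to its brightness value; the sample values are hypothetical):

```python
# Transcription of Step1-Step4: after the four conditional swaps, minIdx
# holds the numbers of the two smaller-luma pixels and maxIdx the numbers
# of the two larger-luma pixels. Step3 swaps the two groups wholesale.

def group_by_luma(L):
    minIdx, maxIdx = [0, 2], [1, 3]                  # initial grouping
    if L[minIdx[0]] > L[minIdx[1]]:                  # Step1
        minIdx[0], minIdx[1] = minIdx[1], minIdx[0]
    if L[maxIdx[0]] > L[maxIdx[1]]:                  # Step2
        maxIdx[0], maxIdx[1] = maxIdx[1], maxIdx[0]
    if L[minIdx[0]] > L[maxIdx[1]]:                  # Step3: swap whole groups
        minIdx, maxIdx = maxIdx, minIdx
    if L[minIdx[1]] > L[maxIdx[0]]:                  # Step4
        minIdx[1], maxIdx[0] = maxIdx[0], minIdx[1]
    return minIdx, maxIdx

minIdx, maxIdx = group_by_luma([70, 10, 20, 60])     # hypothetical luma values
```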
•   In this way, the two adjacent reference pixels with smaller brightness values can be obtained, their brightness values represented by luma0_min and luma1_min and their chromaticity values by chroma0_min and chroma1_min; at the same time, the two adjacent reference pixels with larger brightness values can also be obtained, their brightness values represented by luma0_max and luma1_max and their chromaticity values by chroma0_max and chroma1_max.
•   For the 2 adjacent reference pixels with smaller brightness values, the mean point can be obtained, represented by mean_min; the brightness value of the mean point mean_min is luma_min and its chroma value is chroma_min. For the 2 adjacent reference pixels with larger brightness values, the mean point can be obtained, represented by mean_max; the brightness value of the mean point mean_max is luma_max and its chroma value is chroma_max. The details are as follows:
•   luma_min = (luma0_min + luma1_min + 1) >> 1, chroma_min = (chroma0_min + chroma1_min + 1) >> 1
•   luma_max = (luma0_max + luma1_max + 1) >> 1, chroma_max = (chroma0_max + chroma1_max + 1) >> 1
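•   The averaging uses the usual "add 1 then shift" integer rounding: (x + y + 1) >> 1 is (x + y)/2 rounded to the nearest integer, with ties rounded up. A minimal sketch with hypothetical sample values:

```python
# Sketch of the rounded integer average used for the mean points.

def avg2(x, y):
    return (x + y + 1) >> 1

luma_min = avg2(10, 20)      # hypothetical luma values of the smaller pair
chroma_min = avg2(5, 9)
luma_max = avg2(60, 71)      # (60 + 71 + 1) >> 1 = 66, i.e. 65.5 rounded up
chroma_max = avg2(21, 25)
```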
•   The model parameter α is the slope in the prediction model;
•   the model parameter β is the intercept in the prediction model.
•   However, the traditional prediction model lacks robustness: when the distribution of the 4 adjacent reference pixels is not uniform, the constructed prediction model cannot accurately fit these 4 adjacent reference pixels, as illustrated by the comparison schematic diagram of the prediction model shown in FIG. 4B.
  • the first reference pixel subset and/or the second reference pixel subset may be subjected to a second comparison process through the brightness average value, so as to divide the preset number of adjacent reference pixels into multiple situations.
  • two different fitting points are used to construct a predictive model.
  • FIG. 10 shows a schematic flowchart of a model parameter derivation solution provided by an embodiment of the present application. As shown in Figure 10, the process may include:
  • S1006 Determine whether meanL is greater than or equal to the second minimum brightness value
•   It should be noted that the comparisons of steps S1006 and S1009 have no fixed order; that is, the comparison of step S1006 may be performed first and then that of step S1009, or the comparison of step S1009 may be performed first and then that of step S1006; the embodiment of the present application does not specifically limit this. For example, if the comparison of step S1009 is performed first, the flow is modified as follows.
•   After the third pixel corresponding to the second maximum brightness value and the fourth pixel corresponding to the maximum brightness value are determined through one comparison, it is determined whether meanL is less than or equal to the second maximum brightness value. When meanL is greater than the second maximum brightness value, the fourth pixel corresponding to the maximum brightness value and the mean point mean_min are used as the two fitting points to construct the prediction model. When meanL is less than or equal to the second maximum brightness value, the first pixel corresponding to the minimum brightness value and the second pixel corresponding to the second minimum brightness value are determined through one comparison, and it is then determined whether meanL is greater than or equal to the second minimum brightness value: when meanL is less than the second minimum brightness value, the first pixel corresponding to the minimum brightness value and the mean point mean_max are used as the two fitting points to construct the prediction model; when meanL is greater than or equal to the second minimum brightness value, the mean points mean_min and mean_max are used as the two fitting points to construct the prediction model. Finally, the chrominance component is predicted according to the constructed prediction model.
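•   The reordered flow just described can be sketched as follows (illustrative names; each point is a (luma, chroma) pair). It selects the same fitting points as the original comparison order in all three cases:

```python
# Sketch of the reordered flow: the comparison against the second maximum
# brightness value is performed first, then (only if needed) the comparison
# against the second minimum brightness value.

def fitting_points_reordered(meanL, p_min, luma_2nd_min,
                             mean_min, mean_max, luma_2nd_max, p_max):
    if meanL > luma_2nd_max:        # mean above the larger group
        return mean_min, p_max
    if meanL >= luma_2nd_min:       # mean between the two groups
        return mean_min, mean_max
    return p_min, mean_max          # mean below the smaller group
```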
•   Here, the principle that "two points determine a straight line" is used to construct the prediction model; the two points are called fitting points. After the 4 adjacent reference pixels are obtained, their average value (which can be represented by mean) can be calculated, and this average value is then used in the second comparison process to determine the fitting points in the different cases.
  • the image component to be predicted is a chrominance component
  • the chrominance component is predicted by the luminance component.
•   Specifically, the four adjacent reference pixels are compared four times, from which the 2 adjacent reference pixels with larger brightness values (including the pixel with the largest brightness value and the pixel with the next largest brightness value) and the 2 adjacent reference pixels with smaller brightness values (including the pixel with the smallest brightness value and the pixel with the next smallest brightness value) can be selected.
•   Two arrays minIdx[2] and maxIdx[2] can be set to store the two sets of adjacent reference pixels respectively. Initially, the adjacent reference pixels numbered 0 and 2 are put into minIdx[2], and the adjacent reference pixels numbered 1 and 3 are put into maxIdx[2], as shown below:
•   minIdx[2] = {0, 2}, maxIdx[2] = {1, 3}
•   Then, through the four comparison steps below, the two adjacent reference pixels with smaller brightness values are stored in minIdx[2] and the two adjacent reference pixels with larger brightness values are stored in maxIdx[2]:
•   Step1: if (L[minIdx[0]] > L[minIdx[1]]) swap(minIdx[0], minIdx[1])
•   Step2: if (L[maxIdx[0]] > L[maxIdx[1]]) swap(maxIdx[0], maxIdx[1])
•   Step3: if (L[minIdx[0]] > L[maxIdx[1]]) swap(minIdx, maxIdx)
•   Step4: if (L[minIdx[1]] > L[maxIdx[0]]) swap(minIdx[1], maxIdx[0])
•   In this way, the two adjacent reference pixels with smaller brightness values can be obtained, their brightness values represented by luma0_min and luma1_min and their chroma values by chroma0_min and chroma1_min; at the same time, the two adjacent reference pixels with larger brightness values can also be obtained, their brightness values represented by luma0_max and luma1_max and their chromaticity values by chroma0_max and chroma1_max.
•   For the 2 adjacent reference pixels with smaller brightness values, the mean point can be obtained, represented by mean_min; the brightness value of the mean point mean_min is luma_min and its chroma value is chroma_min. For the 2 adjacent reference pixels with larger brightness values, the mean point can be obtained, represented by mean_max; the brightness value of the mean point mean_max is luma_max and its chroma value is chroma_max. The details are as follows:
•   luma_min = (luma0_min + luma1_min + 1) >> 1, chroma_min = (chroma0_min + chroma1_min + 1) >> 1
•   luma_max = (luma0_max + luma1_max + 1) >> 1, chroma_max = (chroma0_max + chroma1_max + 1) >> 1
•   IV. Determine whether meanL is greater than or equal to the brightness value of the second smallest point;
•   V. Compare the mean brightness meanL of the four adjacent reference pixels with the brightness value of the second largest point (fourth comparison):
•   Then the model parameters can be derived from these two fitting points using "two points determine a straight line". Assuming the first fitting point (luma_1, chroma_1) and the second fitting point (luma_2, chroma_2), the model parameters α and β can be calculated by formula (6): α = (chroma_2 − chroma_1) / (luma_2 − luma_1), β = chroma_1 − α × luma_1.
  • the chrominance value of the current coding block can be predicted through the prediction model.
•   In this way, the prediction model corresponding to the chrominance component can be obtained according to the model parameters, as shown in formula (1); then the prediction model is used to perform prediction processing on the chrominance component to obtain the predicted value corresponding to the chrominance component.
•   Compared with the traditional scheme, the embodiment of the present application adds one averaging operation and at least two and at most four comparison operations, so that the four adjacent reference pixels are divided into three cases; for each of these three cases, two different fitting points are used to construct the prediction model, thereby improving the robustness of CCLM prediction.
•   In the embodiment of the present application, the preset number is 4, so these 4 adjacent reference pixels may fall into 3 situations: the mean brightness value meanL lies between the brightness value of the second largest point and the brightness value of the second smallest point; or the mean brightness value meanL is less than the brightness value of the second smallest point; or the mean brightness value meanL is greater than the brightness value of the second largest point. That is, the four adjacent reference pixels can show a uniform distribution trend or a non-uniform distribution trend. These three situations are described in detail below.
  • the four adjacent reference pixel points are uniformly distributed.
•   FIG. 11 shows a schematic diagram comparing a prediction model provided by an embodiment of the present application with the traditional solution.
•   In FIG. 11, the 4 black dots are the 4 adjacent reference pixels;
•   the 2 gray dots are the mean point corresponding to the two larger adjacent reference pixels among the 4 adjacent reference pixels and the mean point corresponding to the two smaller adjacent reference pixels, that is, the two fitting points obtained by the traditional scheme;
•   the gray oblique line is the prediction model constructed according to the traditional scheme;
•   the bold black dashed line is the brightness average value of the 4 adjacent reference pixels, the two bold black circles are the two fitting points, and the bold black oblique line is the prediction model constructed according to the scheme of the embodiment of the present application;
•   the gray dotted line is the fitting line of the distribution of the four adjacent reference pixels.
•   As can be seen, the gray oblique line and the bold black oblique line overlap and are relatively close to the gray dotted line; that is, the prediction model constructed by the scheme of the embodiment of the present application is the same as that of the traditional scheme, and both can accurately fit the distribution of these 4 adjacent reference pixels.
  • the four adjacent reference pixel points are non-uniformly distributed.
•   FIG. 12 shows a schematic diagram comparing another prediction model provided by an embodiment of the present application with the traditional scheme.
•   In FIG. 12, the 4 black dots are the 4 adjacent reference pixels;
•   the 2 gray dots are the mean point corresponding to the two larger adjacent reference pixels among the 4 adjacent reference pixels and the mean point corresponding to the two smaller adjacent reference pixels, that is, the two fitting points obtained by the traditional scheme;
•   the gray oblique line is the prediction model constructed according to the traditional scheme;
•   the bold black dashed line is the brightness average value of the 4 adjacent reference pixels, the two bold black circles are the two fitting points, and the bold black oblique line is the prediction model constructed according to the scheme of the embodiment of the present application;
•   the gray dotted line is the fitting line of the distribution of the four adjacent reference pixels.
•   As can be seen, the bold black oblique line is closer to the gray dotted line; that is, the prediction model constructed by the scheme of the embodiment of the present application fits the distribution of the four adjacent reference pixels better than that constructed by the traditional scheme.
• Case 3: the four adjacent reference pixels are non-uniformly distributed in the other direction.
• FIG. 13 shows a schematic comparison between a further prediction model provided by an embodiment of the present application and that of the traditional solution. In FIG. 13, the 4 black dots are the 4 adjacent reference pixels; the 2 gray dots are the mean point of the 2 larger adjacent reference pixels and the mean point of the 2 smaller adjacent reference pixels, that is, the two fitting points obtained by the traditional scheme; the gray diagonal line is the prediction model constructed according to the traditional scheme; the bold black dashed line is the brightness average value of the 4 adjacent reference pixels; the 2 bold black circles are the two fitting points of the present scheme; the bold black diagonal line is the prediction model constructed according to the scheme of the embodiment of the application; and the gray dotted line represents the actual distribution of the 4 adjacent reference pixels. In this case as well, the bold black diagonal line is closer to the gray dotted line than the gray diagonal line is; that is, the prediction model constructed by the scheme of the embodiment of the present application conforms better to the distribution of the 4 adjacent reference pixels than that constructed by the traditional solution.
• The test sequences required by JVET were tested under the common test conditions; the average BD-rate changes on the Y, Cb and Cr components are -0.03%, -0.18% and -0.17% respectively, which shows that the solution of the embodiment of the present application brings a certain improvement in prediction performance at the cost of only a small increase in complexity.
• Furthermore, the 4 adjacent reference pixels can also be divided into two categories according to their brightness average value; for example, the category whose brightness is below the average is denoted L, and the category whose brightness is above the average is denoted R. When one category contains 3 pixels, the fitting point of that category need not be the mean point of all 3 pixels; instead, 2 pixels may be arbitrarily selected from the 3 and their mean point used.
• Further, the method may also include: determining the model parameters based on the first fitting point and the second fitting point, and obtaining the prediction model corresponding to the image component to be predicted according to the model parameters; wherein the prediction model is used to perform prediction processing on the image component to be predicted, so as to obtain the predicted value corresponding to the image component to be predicted.
• Optionally, determining the first fitting point based on the third reference pixel subset and determining the second fitting point based on the fourth reference pixel subset may include: selecting a part of the adjacent reference pixels from the third reference pixel subset, performing average calculation on this part of adjacent reference pixels, and using the calculated mean point as the first fitting point; and selecting a part of the adjacent reference pixels from the fourth reference pixel subset, performing average calculation on this part of adjacent reference pixels, and using the calculated mean point as the second fitting point.
• Optionally, determining the first fitting point based on the third reference pixel subset and determining the second fitting point based on the fourth reference pixel subset may include: selecting one adjacent reference pixel from the third reference pixel subset as the first fitting point; and selecting one adjacent reference pixel from the fourth reference pixel subset as the second fitting point.
• Optionally, when N is 4, determining the first fitting point based on the third reference pixel subset and determining the second fitting point based on the fourth reference pixel subset includes:
• if the third reference pixel subset includes 3 adjacent reference pixels and the fourth reference pixel subset includes 1 adjacent reference pixel, selecting 2 adjacent reference pixels from the third reference pixel subset, performing average calculation on the selected 2 adjacent reference pixels, using the calculated mean point as the first fitting point, and using the 1 adjacent reference pixel in the fourth reference pixel subset as the second fitting point;
• if the third reference pixel subset includes 1 adjacent reference pixel and the fourth reference pixel subset includes 3 adjacent reference pixels, selecting 2 adjacent reference pixels from the fourth reference pixel subset, performing average calculation on the selected 2 adjacent reference pixels, using the calculated mean point as the second fitting point, and using the 1 adjacent reference pixel in the third reference pixel subset as the first fitting point.
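The unbalanced 3-vs-1 grouping above can be illustrated with a short sketch (names hypothetical; pixels are (luma, chroma) pairs, and which group supplies the first or second fitting point follows the text above):

```python
def fitting_points(group3, group1):
    """Given a 3-vs-1 split of the 4 neighbouring reference pixels,
    pick two fitting points: the mean point of 2 pixels chosen from the
    3-pixel group, and the single pixel of the other group.
    Illustrative sketch only.
    """
    a, b = group3[0], group3[1]          # any 2 of the 3 pixels
    mean_pt = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
    single_pt = group1[0]                # the lone pixel is used as-is
    return mean_pt, single_pt
```

Here the first two pixels of the 3-pixel group stand in for the "arbitrarily selected" pair; a fixed selection rule (smallest or largest two, as discussed below) could be substituted without changing the structure.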
• It should be noted that the number of reference pixels selected here may be a power of two, so that the average can be computed with a shift operation. For example, after dividing the 4 adjacent reference pixels into two categories (such as L and R) according to the brightness average value, if one category contains 3 pixels, then instead of arbitrarily selecting 2 of the 3 pixels to compute the mean point, a fixed rule can be used: if the L category contains 3 pixels, select the 2 pixels with the smaller brightness values; if the R category contains 3 pixels, select the 2 pixels with the larger brightness values.
• Alternatively, one pixel can be arbitrarily selected from the L category as the first fitting point, and one pixel can be arbitrarily selected from the R category as the second fitting point.
• At this time, the two fitting points can be used to derive both model parameters α and β; alternatively, the two fitting points can be used to derive the first model parameter α while the mean point of the 4 pixels is used to derive the second model parameter β. In this process, the operation of computing separate mean points for the L and R categories can be omitted.
• FIG. 14 shows a schematic structural diagram for determining fitting points provided by an embodiment of the present application. For the adjacent reference pixels, the luminance mean value L_mean can be obtained, and the adjacent reference pixels can then be divided into the L category and the R category using L_mean. The mean points (L_Lmean, C_Lmean) and (L_Rmean, C_Rmean) of the two categories can be used as the two fitting points; the first model parameter α is derived from these two fitting points, and the overall mean point (L_mean, C_mean) is used to derive the second model parameter β:
• α = (C_Rmean - C_Lmean) / (L_Rmean - L_Lmean), β = C_mean - α · L_mean
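The derivation just described can be sketched in floating point as follows (an illustrative sketch only; the actual VTM implementation uses integer and shift arithmetic, and all names here are hypothetical):

```python
def derive_model(pixels):
    """Derive linear-model parameters (alpha, beta) as described:
    split neighbouring reference pixels (luma, chroma) into an L class
    (luma below the mean) and an R class (luma at or above the mean),
    use the two class mean points for alpha, and the overall mean point
    for beta. Floating-point sketch only.
    """
    n = len(pixels)
    l_mean = sum(p[0] for p in pixels) / n
    c_mean = sum(p[1] for p in pixels) / n
    left = [p for p in pixels if p[0] < l_mean]
    right = [p for p in pixels if p[0] >= l_mean]
    if not left or not right:              # degenerate: flat luma
        return 0.0, c_mean                 # predict the mean chroma
    lL = sum(p[0] for p in left) / len(left)
    cL = sum(p[1] for p in left) / len(left)
    lR = sum(p[0] for p in right) / len(right)
    cR = sum(p[1] for p in right) / len(right)
    alpha = (cR - cL) / (lR - lL)          # slope from the 2 fitting points
    beta = c_mean - alpha * l_mean         # offset from the overall mean point
    return alpha, beta
```

The degenerate branch mirrors the special case discussed below, where all pixels fall into one category and the chroma mean is used directly as the prediction.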
• The above method can also be applied to the case of at most 4 valid adjacent reference pixels in VTM5.0. After obtaining the 4 adjacent reference pixels through VTM5.0, first obtain the brightness average value of the 4 pixels, use this average to divide the 4 pixels into two categories, and then use the mean points of the two categories as the two fitting points to derive the model parameters α and β.
• In a special case, if these 4 pixels cannot be divided into two categories according to the brightness average value, that is, when the number of pixels in the L category or in the R category is 0, the chromaticity average value of these 4 pixels can be directly used as the chromaticity prediction value.
• Alternatively, the mean points of the two categories can be used as the two fitting points to derive the first model parameter α, while the mean point of the 4 pixels is used to derive the second model parameter β. Here, the second model parameter β may also be derived directly from the mean of the two category mean points that have already been obtained. Note, however, that in VTM5.0 the two mean points are rounded down to integers; therefore, averaging these two mean points again may give a result different from directly averaging the 4 adjacent reference pixels, which makes the prediction performance slightly different.
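The rounding effect mentioned above can be checked with a small arithmetic example (hypothetical sample values; integer floor division stands in for the rounding-down in VTM5.0):

```python
# Two class mean points rounded down to integers, then averaged again,
# versus a direct average of all 4 values.
pixels = [3, 4, 10, 11]                  # e.g. 4 neighbouring component values
mean_lo = (3 + 4) // 2                   # floor -> 3 (true mean is 3.5)
mean_hi = (10 + 11) // 2                 # floor -> 10 (true mean is 10.5)
avg_of_means = (mean_lo + mean_hi) // 2  # -> 6
direct_avg = sum(pixels) // 4            # (3 + 4 + 10 + 11) // 4 -> 7
assert avg_of_means != direct_avg        # 6 != 7: the two routes differ
```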
• FIG. 15 shows a schematic structural diagram of a prediction model provided by an embodiment of the present application. In FIG. 15, the gray diagonal line represents the slope calculated from the two fitting points, that is, the first model parameter α; the gray line is then translated, and the amount of translation is the second model parameter β. The bold black diagonal line is the final prediction model.
• In addition, following the JVET-N0524 proposal, the two points with the largest and smallest brightness values among all adjacent reference pixels corresponding to the coding block can be used as the two fitting points to derive the first model parameter α, while the mean value of all adjacent reference pixels is used to derive the second model parameter β. This approach can also be applied to the case of at most 4 valid adjacent reference pixels in VTM5.0: the mean point of the 2 larger points and the mean point of the 2 smaller points among the 4 pixels are used as the two fitting points to derive the first model parameter α, and the mean point of these 4 pixels is then used to derive the second model parameter β.
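The max/min fitting-point variant described above can be sketched as follows (a floating-point illustration under the same assumptions as before; not the integer implementation of any specific proposal):

```python
def derive_model_minmax(pixels):
    """Variant described above: use the points with the largest and
    smallest luma as the two fitting points for alpha, and the mean of
    all neighbouring reference pixels for beta. Sketch only.
    """
    n = len(pixels)
    lo = min(pixels, key=lambda p: p[0])   # smallest-luma point
    hi = max(pixels, key=lambda p: p[0])   # largest-luma point
    if hi[0] == lo[0]:                     # flat luma: fall back to mean
        return 0.0, sum(p[1] for p in pixels) / n
    alpha = (hi[1] - lo[1]) / (hi[0] - lo[0])
    l_mean = sum(p[0] for p in pixels) / n
    c_mean = sum(p[1] for p in pixels) / n
    beta = c_mean - alpha * l_mean         # offset from the overall mean point
    return alpha, beta
```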
• This embodiment provides an image component prediction method. According to the distribution of the preset number of adjacent reference pixels, three cases can be distinguished, and in each case two suitable fitting points are used to construct the prediction model, thereby improving the robustness of CCLM prediction. That is to say, by optimizing the fitting points used in the model parameter derivation, the constructed prediction model can be made more accurate, which can also improve the coding and decoding prediction performance of video images.
  • FIG. 16 shows a schematic diagram of the composition structure of an image component prediction apparatus 160 provided by an embodiment of the present application.
  • the image component prediction device 160 may include: an acquisition unit 1601, a comparison unit 1602, a calculation unit 1603, and a prediction unit 1604, where
• the acquiring unit 1601 is configured to acquire N adjacent reference pixels corresponding to the image component to be predicted of the encoding block in the video image; wherein the N adjacent reference pixels are reference pixels adjacent to the encoding block, and N is a preset integer value;
• the comparing unit 1602 is configured to compare the N first image component values corresponding to the N adjacent reference pixels to determine a first reference pixel subset and a second reference pixel subset; wherein the preset number of adjacent reference pixels corresponds to a preset number of first image component values; the first reference pixel subset includes the two adjacent reference pixels corresponding to the smallest first image component value and the second smallest first image component value among the preset number of first image component values; and the second reference pixel subset includes the two adjacent reference pixels corresponding to the largest first image component value and the second largest first image component value among the preset number of first image component values;
  • the calculation unit 1603 is configured to calculate the average value of the N adjacent reference pixel points to obtain a first average value point
  • the comparison unit 1602 is further configured to compare the values of the first image component of the first reference pixel subset and/or the second reference pixel subset through the first average point, and determine two fitting points;
  • the prediction unit 1604 is configured to determine model parameters based on the two fitting points, and obtain a prediction model corresponding to the image component to be predicted according to the model parameters; wherein, the prediction model is used to Prediction processing of the image component to be predicted to obtain the predicted value corresponding to the image component to be predicted.
  • the image component prediction device 160 may further include a screening unit 1605,
  • the obtaining unit 1601 is further configured to obtain a first reference pixel set corresponding to the image component to be predicted of the coding block in the video image;
  • the screening unit 1605 is configured to perform screening processing on the first reference pixel set to obtain a second reference pixel set; wherein, the second reference pixel set includes N adjacent reference pixels.
  • the acquiring unit 1601 is specifically configured to acquire reference pixels adjacent to at least one side of the encoding block; wherein, the at least one side includes the left side of the encoding block and/or the The upper side of the coding block; and based on the reference pixels, a first reference pixel set corresponding to the image component to be predicted is formed.
  • the acquiring unit 1601 is specifically configured to acquire reference pixels in a reference row or a reference column adjacent to the encoding block; wherein, the reference row is defined by the upper side of the encoding block And the rows adjacent to the upper right side, the reference column is composed of the columns adjacent to the left side and the lower left side of the coding block; and based on the reference pixels, the waiting Predict the first reference pixel set corresponding to the image component.
  • the screening unit 1605 is specifically configured to determine the position of the pixel to be selected based on the pixel position and/or image component intensity corresponding to each adjacent reference pixel in the first reference pixel set; and Determining the position of the pixel to be selected, selecting adjacent reference pixels corresponding to the position of the pixel to be selected from the first reference pixel set, and composing the selected adjacent reference pixels into a second reference pixel set; Wherein, the second reference pixel set includes N adjacent reference pixels.
  • the acquiring unit 1601 is further configured to acquire a preset number of first image component values based on the preset number of adjacent reference pixels;
  • the comparing unit 1602 is specifically configured to perform multiple comparisons on a preset number of first image component values to obtain a first array composed of the smallest first image component value and the second smallest first image component value and the largest first image A second array composed of component values and the next largest first image component value; and placing two adjacent reference pixels corresponding to the first array into the first reference pixel subset to obtain the first reference pixel subset And placing two adjacent reference pixels corresponding to the second array into a second reference pixel subset to obtain the second reference pixel subset.
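The selection performed by the comparing unit can be illustrated as follows (a sketch that simply sorts the four values; a fixed comparison pattern would be used in practice, and the names are hypothetical):

```python
def split_subsets(vals):
    """Partition 4 first-image-component values into the {min, 2nd-min}
    pair (first array) and the {2nd-max, max} pair (second array).
    Sketch only; the bookkeeping of pixel indices is omitted.
    """
    assert len(vals) == 4
    s = sorted(vals)                 # ascending order
    return [s[0], s[1]], [s[2], s[3]]
```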
• In the above solution, the obtaining unit 1601 is further configured to obtain the first image component value and the second image component value corresponding to each of the N adjacent reference pixels; and to obtain the first image component average value corresponding to the N first image component values and the second image component average value corresponding to the N second image component values, so as to obtain the first average point.
• In the above solution, the obtaining unit 1601 is further configured to: based on the first image component value and the second image component value corresponding to each adjacent reference pixel in the first reference pixel subset, obtain the first image component average value and the second image component average value of the first reference pixel subset, so as to obtain the second average point; and based on the first image component value and the second image component value corresponding to each adjacent reference pixel in the second reference pixel subset, obtain the first image component average value corresponding to the multiple first image components and the second image component average value corresponding to the multiple second image components in the second reference pixel subset, so as to obtain the third average point.
• the comparing unit 1602 is specifically configured to: compare the first image component of the first mean point with the second smallest first image component value in the first reference pixel subset; when the first image component of the first mean point is greater than or equal to the second smallest first image component value in the first reference pixel subset, compare the first image component of the first mean point with the second largest first image component value in the second reference pixel subset; and when the first image component of the first mean point is less than or equal to the second largest first image component value in the second reference pixel subset, use the second mean point as the first fitting point and the third mean point as the second fitting point; wherein the two fitting points include the first fitting point and the second fitting point.
• In the above solution, the comparison unit 1602 is further configured to compare the first image component values corresponding to the two adjacent reference pixels in the first reference pixel subset; and to determine, according to the comparison result, the smallest first image component value and the second smallest first image component value among the two first image component values, using the pixel corresponding to the smallest first image component value as the first adjacent reference pixel and the pixel corresponding to the second smallest first image component value as the second adjacent reference pixel;
• the comparing unit 1602 is further configured to compare the first image component values corresponding to the two adjacent reference pixels in the second reference pixel subset; and to determine, according to the comparison result, the second largest first image component value and the largest first image component value among the two first image component values, using the pixel corresponding to the second largest first image component value as the third adjacent reference pixel and the pixel corresponding to the largest first image component value as the fourth adjacent reference pixel.
  • the comparing unit 1602 is further configured to use the first adjacent reference pixel as the first fitting point when the first image component of the first mean point is less than the second smallest first image component value, and The third mean point is used as the second fitting point.
  • the comparing unit 1602 is further configured to use the second average point as the first fitting point when the first image component of the first average point is greater than the second largest first image component value, and the fourth phase The adjacent reference pixel is used as the second fitting point.
• In the above solution, the calculation unit 1603 is further configured to obtain the first model parameter through a first preset factor calculation model based on the first fitting point and the second fitting point; and to obtain the second model parameter through a second preset factor calculation model based on the first model parameter and the first fitting point.
  • the calculation unit 1603 is further configured to obtain the second model parameter through a second preset factor calculation model based on the first model parameter and the first average point.
  • the image component prediction device 160 may further include a grouping unit 1606 and a determining unit 1607, where:
  • the grouping unit 1606 is configured to perform grouping processing on the N adjacent reference pixels through a first average point to obtain a third reference pixel subset and a fourth reference pixel subset;
  • the determining unit 1607 is configured to determine the first fitting point based on the third reference pixel subset; determine the second fitting point based on the fourth reference pixel subset;
  • the prediction unit 1604 is specifically configured to determine model parameters based on the first fitting point and the second fitting point, and obtain a prediction model corresponding to the image component to be predicted according to the model parameters; wherein, The prediction model is used to implement prediction processing on the image component to be predicted, so as to obtain the predicted value corresponding to the image component to be predicted.
  • the determining unit 1607 is specifically configured to select a part of adjacent reference pixels from the third reference pixel subset, perform average calculation on the part of adjacent reference pixels, and calculate the average value obtained by the calculation. As the first fitting point; and selecting a part of adjacent reference pixel points from the fourth reference pixel subset, performing average calculation on the part of adjacent reference pixels, and using the calculated average point as the first Two fitting points.
  • the determining unit 1607 is specifically configured to select one of the adjacent reference pixel points from the third reference pixel subset as the first fitting point; and from the fourth reference pixel subset One of the adjacent reference pixels is selected as the second fitting point.
  • the value of N is 4; the determining unit 1607 is specifically configured to: if the third reference pixel subset includes 3 adjacent reference pixels, the fourth reference pixel subset includes 1 Adjacent reference pixels, select two adjacent reference pixels from the third reference pixel subset, perform average calculation on the selected two adjacent reference pixels, and use the calculated average point as the first Fitting point, using one adjacent reference pixel in the fourth reference pixel subset as the second fitting point; and if the third reference pixel subset includes one adjacent reference pixel, the first The four reference pixel subset includes 3 adjacent reference pixels, then 2 adjacent reference pixels are selected from the fourth reference pixel subset, and the average value of the selected 2 adjacent reference pixels is calculated to obtain The mean point of is used as the second fitting point, and an adjacent reference pixel point in the third reference pixel subset is used as the first fitting point.
  • the prediction unit 1604 is specifically configured to perform prediction processing on the image component to be predicted for each pixel in the coding block based on the prediction model to obtain the image component to be predicted for each pixel. Predictive value.
  • a "unit" may be a part of a circuit, a part of a processor, a part of a program, or software, etc., of course, may also be a module, or may be non-modular.
  • the various components in this embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be realized in the form of hardware or software function module.
  • the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it can be stored in a computer readable storage medium.
• Based on this understanding, the technical solution of this embodiment, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the method described in this embodiment.
• The aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or other media that can store program codes.
  • this embodiment provides a computer storage medium that stores an image component prediction program that implements the method described in any one of the foregoing embodiments when the image component prediction program is executed by at least one processor.
  • FIG. 17 shows the specific hardware structure of the image component prediction device 160 provided by the embodiment of the present application, which may include: a network interface 1701, a memory 1702, and a processor 1703:
  • the components are coupled together through the bus system 1704.
  • the bus system 1704 is used to implement connection and communication between these components.
  • the bus system 1704 also includes a power bus, a control bus, and a status signal bus.
• For clarity, the various buses are marked as the bus system 1704 in FIG. 17.
  • the network interface 1701 is used to receive and send signals in the process of sending and receiving information with other external network elements;
  • the memory 1702 is configured to store computer programs that can run on the processor 1703;
• the processor 1703 is configured to execute, when running the computer program:
• obtaining N adjacent reference pixels corresponding to the image component to be predicted of the encoding block in the video image; wherein the N adjacent reference pixels are reference pixels adjacent to the encoding block, and N is a preset integer value;
• comparing the N first image component values corresponding to the N adjacent reference pixels to determine a first reference pixel subset and a second reference pixel subset; wherein the first reference pixel subset includes the two adjacent reference pixels corresponding to the smallest first image component value and the second smallest first image component value among the preset number of first image component values, and the second reference pixel subset includes the two adjacent reference pixels corresponding to the largest first image component value and the second largest first image component value among the preset number of first image component values;
• calculating the average value of the N adjacent reference pixels to obtain a first average point, and comparing the first image component values of the first reference pixel subset and/or the second reference pixel subset through the first average point to determine two fitting points;
• determining the model parameters based on the two fitting points, and obtaining the prediction model corresponding to the image component to be predicted according to the model parameters; wherein the prediction model is used to perform prediction processing on the image component to be predicted, so as to obtain the predicted value corresponding to the image component to be predicted.
  • the memory 1702 in the embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory can be read-only memory (Read-Only Memory, ROM), programmable read-only memory (Programmable ROM, PROM), erasable programmable read-only memory (Erasable PROM, EPROM), and electrically available Erase programmable read-only memory (Electrically EPROM, EEPROM) or flash memory.
  • the volatile memory may be a random access memory (Random Access Memory, RAM), which is used as an external cache.
• By way of example but not limitation, many forms of RAM are available, such as static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDRSDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchronous link dynamic random access memory (Synchlink DRAM, SLDRAM), and direct Rambus random access memory (Direct Rambus RAM, DRRAM).
  • the processor 1703 may be an integrated circuit chip with signal processing capabilities. In the implementation process, the steps of the foregoing method can be completed by hardware integrated logic circuits in the processor 1703 or instructions in the form of software.
  • the aforementioned processor 1703 may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (ASIC), a ready-made programmable gate array (Field Programmable Gate Array, FPGA) or other Programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
• The software module can be located in a storage medium mature in the field, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory 1702, and the processor 1703 reads the information in the memory 1702, and completes the steps of the foregoing method in combination with its hardware.
  • the embodiments described herein can be implemented by hardware, software, firmware, middleware, microcode, or a combination thereof.
  • the processing unit can be implemented in one or more Application Specific Integrated Circuits (ASIC), Digital Signal Processing (DSP), Digital Signal Processing Equipment (DSP Device, DSPD), programmable Logic device (Programmable Logic Device, PLD), Field-Programmable Gate Array (Field-Programmable Gate Array, FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, and others for performing the functions described in this application Electronic unit or its combination.
  • the technology described herein can be implemented through modules (such as procedures, functions, etc.) that perform the functions described herein.
  • the software codes can be stored in the memory and executed by the processor.
  • the memory can be implemented in the processor or external to the processor.
  • the processor 1703 is further configured to execute the method described in any one of the foregoing embodiments when the computer program is running.
  • FIG. 18 shows a schematic diagram of the composition structure of an encoder provided by an embodiment of the present application.
  • the encoder 180 may at least include the image component prediction device 160 described in any of the foregoing embodiments.
  • FIG. 19 shows a schematic diagram of the composition structure of a decoder provided by an embodiment of the present application.
• the decoder 190 may at least include the image component prediction device 160 described in any of the foregoing embodiments.
• In this way, N adjacent reference pixels corresponding to the image component to be predicted of the coding block in the video image are obtained, where the N adjacent reference pixels are reference pixels adjacent to the coding block and N is a preset integer value; then the N first image component values corresponding to the N adjacent reference pixels are compared to determine a first reference pixel subset and a second reference pixel subset, where the first reference pixel subset includes the two adjacent reference pixels corresponding to the smallest first image component value and the second smallest first image component value among the preset number of first image component values, and the second reference pixel subset includes the two adjacent reference pixels corresponding to the largest first image component value and the second largest first image component value; the average value of the N adjacent reference pixels is calculated to obtain a first mean point; the first image component values of the first reference pixel subset and/or the second reference pixel subset are compared through the first mean point to determine two fitting points; and the model parameters are then determined based on the two fitting points, so as to obtain the prediction model corresponding to the image component to be predicted.

Abstract

An image component prediction method and apparatus, and a computer storage medium. The method comprises: obtaining N adjacent reference pixel points corresponding to an image component to be predicted of a coding block in a video image (S701), wherein N is a preset integer value; comparing N first image component values corresponding to the N adjacent reference pixel points, and determining a first reference pixel subset and a second reference pixel subset (S702), wherein the first reference pixel subset comprises the two adjacent reference pixel points corresponding to a minimum first image component value and a second-minimum first image component value, and the second reference pixel subset comprises the two adjacent reference pixel points corresponding to a maximum first image component value and a second-maximum first image component value; calculating a mean value of the N adjacent reference pixel points to obtain a first mean value point; comparing the first image component values of the first reference pixel subset and/or the second reference pixel subset by means of the first mean value point, and determining two fitting points (S704); and determining a model parameter according to the two fitting points, and obtaining, according to the model parameter, a prediction model corresponding to the image component to be predicted.

Description

Image component prediction method, device, and computer storage medium

Technical Field

The embodiments of the present application relate to the field of video coding and decoding technologies, and in particular to an image component prediction method, an image component prediction device, and a computer storage medium.

Background Art

As requirements for video display quality increase, new video application forms such as high-definition and ultra-high-definition video have emerged. H.265/High Efficiency Video Coding (HEVC) can no longer meet the needs of rapidly developing video applications, so the Joint Video Exploration Team (JVET) proposed the next-generation video coding standard H.266/Versatile Video Coding (VVC); its corresponding test model is the VVC reference software test platform (VVC Test Model, VTM).

In the VTM, an image component prediction method based on a prediction model has already been integrated, through which the chrominance component can be predicted from the luminance component of the current coding block (CB). However, when the prediction model is constructed, unreasonable selection of the adjacent reference pixels used for model parameter derivation makes the prediction model inaccurate and degrades the coding and decoding prediction performance for video images.
Summary of the Invention

The embodiments of the present application provide an image component prediction method, device, and computer storage medium. By optimizing the fitting points used for model parameter derivation, the robustness of the prediction is improved, so that the constructed prediction model is more accurate and the coding and decoding prediction performance of video images is improved.

The technical solutions of the embodiments of the present application can be implemented as follows:

In a first aspect, an embodiment of the present application provides an image component prediction method, the method including:

acquiring N adjacent reference pixels corresponding to an image component to be predicted of a coding block in a video image, where the N adjacent reference pixels are reference pixels adjacent to the coding block and N is a preset integer value;

comparing the N first image component values corresponding to the N adjacent reference pixels to determine a first reference pixel subset and a second reference pixel subset, where a preset number of adjacent reference pixels correspond to a preset number of first image component values; the first reference pixel subset includes the two adjacent reference pixels corresponding to the smallest first image component value and the second-smallest first image component value among the preset number of first image component values, and the second reference pixel subset includes the two adjacent reference pixels corresponding to the largest first image component value and the second-largest first image component value among the preset number of first image component values;

calculating the average of the N adjacent reference pixels to obtain a first mean point;

comparing the first image component values of the first reference pixel subset and/or the second reference pixel subset through the first mean point to determine two fitting points;

determining model parameters based on the two fitting points, and obtaining, according to the model parameters, a prediction model corresponding to the image component to be predicted, where the prediction model is used to perform prediction processing on the image component to be predicted so as to obtain a predicted value corresponding to the image component to be predicted.
In a second aspect, an embodiment of the present application provides an image component prediction device, including an acquisition unit, a comparison unit, a calculation unit, and a prediction unit, where:

the acquisition unit is configured to acquire N adjacent reference pixels corresponding to an image component to be predicted of a coding block in a video image, where the N adjacent reference pixels are reference pixels adjacent to the coding block and N is a preset integer value;

the comparison unit is configured to compare the N first image component values corresponding to the N adjacent reference pixels to determine a first reference pixel subset and a second reference pixel subset, where a preset number of adjacent reference pixels correspond to a preset number of first image component values; the first reference pixel subset includes the two adjacent reference pixels corresponding to the smallest and second-smallest first image component values among the preset number of first image component values, and the second reference pixel subset includes the two adjacent reference pixels corresponding to the largest and second-largest first image component values;

the calculation unit is configured to calculate the average of the N adjacent reference pixels to obtain a first mean point;

the comparison unit is further configured to compare the first image component values of the first reference pixel subset and/or the second reference pixel subset through the first mean point to determine two fitting points;

the prediction unit is configured to determine model parameters based on the two fitting points and obtain, according to the model parameters, a prediction model corresponding to the image component to be predicted, where the prediction model is used to perform prediction processing on the image component to be predicted so as to obtain a predicted value corresponding to the image component to be predicted.
In a third aspect, an embodiment of the present application provides an image component prediction device, including a memory and a processor;

the memory is configured to store a computer program that can run on the processor;

the processor is configured to execute the method described in the first aspect when running the computer program.

In a fourth aspect, an embodiment of the present application provides a computer storage medium that stores an image component prediction program, and the image component prediction program, when executed by at least one processor, implements the method described in the first aspect.
The embodiments of the present application provide an image component prediction method, device, and computer storage medium. First, for an image component to be predicted of a coding block in a video image, N adjacent reference pixels are acquired, where the N adjacent reference pixels are reference pixels adjacent to the coding block and N is a preset integer value. The N first image component values corresponding to the N adjacent reference pixels are then compared to determine a first reference pixel subset and a second reference pixel subset, where the first reference pixel subset includes the two adjacent reference pixels corresponding to the smallest and second-smallest first image component values among the preset number of first image component values, and the second reference pixel subset includes the two adjacent reference pixels corresponding to the largest and second-largest first image component values. The average of the N adjacent reference pixels is calculated to obtain a first mean point; the first image component values of the first reference pixel subset and/or the second reference pixel subset are compared through the first mean point to determine two fitting points; model parameters are then determined based on the two fitting points, and the prediction model corresponding to the image component to be predicted is obtained according to the model parameters, so as to obtain the predicted value corresponding to the image component to be predicted. In this way, after the second comparison process, the preset number of adjacent reference pixels can be divided into multiple cases, and different pairs of fitting points are used to construct the prediction model for these cases, which improves the robustness of CCLM prediction. That is, by optimizing the fitting points used for model parameter derivation, the constructed prediction model is more accurate, and the coding and decoding prediction performance of video images is improved.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the distribution of effective adjacent areas provided by a related technical solution;
FIG. 2 is a schematic diagram of the distribution of selection areas in three modes provided by a related technical solution;
FIG. 3 is a schematic flowchart of a traditional model parameter derivation scheme provided by a related technical solution;
FIG. 4A is a schematic diagram of a prediction model under a traditional scheme provided by a related technical solution;
FIG. 4B is a schematic diagram of a prediction model under another traditional scheme provided by a related technical solution;
FIG. 5 is a schematic block diagram of the composition of a video encoding system provided by an embodiment of the present application;
FIG. 6 is a schematic block diagram of the composition of a video decoding system provided by an embodiment of the present application;
FIG. 7 is a schematic flowchart of an image component prediction method provided by an embodiment of the present application;
FIG. 8A is a schematic structural diagram of adjacent reference pixel selection in the INTRA_LT_CCLM mode provided by an embodiment of the present application;
FIG. 8B is a schematic structural diagram of adjacent reference pixel selection in the INTRA_L_CCLM mode provided by an embodiment of the present application;
FIG. 8C is a schematic structural diagram of adjacent reference pixel selection in the INTRA_T_CCLM mode provided by an embodiment of the present application;
FIG. 9 is a schematic flowchart of another image component prediction method provided by an embodiment of the present application;
FIG. 10 is a schematic flowchart of a model parameter derivation scheme provided by an embodiment of the present application;
FIG. 11 is a schematic comparison diagram of prediction models under the scheme of the present application and the traditional scheme, provided by an embodiment of the present application;
FIG. 12 is another schematic comparison diagram of prediction models under the scheme of the present application and the traditional scheme, provided by an embodiment of the present application;
FIG. 13 is yet another schematic comparison diagram of prediction models under the scheme of the present application and the traditional scheme, provided by an embodiment of the present application;
FIG. 14 is a schematic structural diagram of determining fitting points provided by an embodiment of the present application;
FIG. 15 is a schematic structural diagram of a prediction model provided by an embodiment of the present application;
FIG. 16 is a schematic diagram of the composition structure of an image component prediction device provided by an embodiment of the present application;
FIG. 17 is a schematic diagram of a specific hardware structure of an image component prediction device provided by an embodiment of the present application;
FIG. 18 is a schematic diagram of the composition structure of an encoder provided by an embodiment of the present application;
FIG. 19 is a schematic diagram of the composition structure of a decoder provided by an embodiment of the present application.
Detailed Description

In order to understand the features and technical content of the embodiments of the present application in more detail, the implementation of the embodiments of the present application is described below in conjunction with the accompanying drawings, which are provided for reference and illustration only and are not intended to limit the embodiments of the present application.

In a video image, a coding block is generally characterized by a first image component, a second image component, and a third image component; these three image components are a luminance component, a blue chrominance component, and a red chrominance component, respectively. Specifically, the luminance component is usually represented by the symbol Y, the blue chrominance component by Cb or U, and the red chrominance component by Cr or V; in this way, the video image can be expressed in YCbCr format or in YUV format.

In the embodiments of the present application, the first image component may be the luminance component, the second image component may be the blue chrominance component, and the third image component may be the red chrominance component, although the embodiments of the present application are not limited thereto.
In current video image or video encoding and decoding processes, cross-component prediction techniques mainly include the Cross-component Linear Model Prediction (CCLM) mode and the Multi-Directional Linear Model Prediction (MDLM) mode. Whether the model parameters are derived in the CCLM mode or in the MDLM mode, the corresponding prediction model can realize prediction between image components, such as from the first image component to the second image component, the second image component to the first image component, the first image component to the third image component, the third image component to the first image component, the second image component to the third image component, or the third image component to the second image component.

Taking the prediction from the first image component to the second image component as an example, in order to reduce the redundancy between the first and second image components, the CCLM mode is used in VVC. In this case, the first and second image components belong to the same coding block, i.e., the predicted value of the second image component is constructed from the reconstructed value of the first image component of the same coding block, as shown in equation (1):
Pred C[i,j]=α·Rec L[i,j]+β  (1) Pred C [i,j]=α·Rec L [i,j]+β (1)
where i and j represent the position coordinates of a pixel in the coding block, i being the horizontal direction and j the vertical direction; Pred_C[i,j] represents the second image component predicted value corresponding to the pixel with position coordinates [i,j] in the coding block; Rec_L[i,j] represents the (down-sampled) first image component reconstructed value corresponding to the pixel with position coordinates [i,j] in the same coding block; and α and β represent the model parameters.
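As a concrete illustration of equation (1), the following minimal Python sketch applies the linear model to a block of down-sampled luma reconstruction values. The function name, the use of a floating-point α, and the plain clipping are illustrative assumptions; an actual VVC implementation uses integer arithmetic with a right shift.

```python
def cclm_predict(rec_l, alpha, beta, bit_depth=8):
    """Apply Pred_C[i,j] = alpha * Rec_L[i,j] + beta over a 2-D block.

    rec_l: 2-D list of down-sampled luma reconstruction values.
    The result is clipped to the valid chroma range for the bit depth.
    """
    max_val = (1 << bit_depth) - 1
    return [[min(max(int(alpha * y + beta), 0), max_val) for y in row]
            for row in rec_l]

# Example: a 2x2 block with alpha = 0.5 and beta = 16.
pred_c = cclm_predict([[100, 200], [60, 300]], 0.5, 16)
```

For instance, with α = 0.5 and β = 16, a luma sample of 100 maps to the chroma prediction 66, and values exceeding the chroma range are clipped.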
For a coding block, its adjacent areas may include a left adjacent area, an upper adjacent area, a lower-left adjacent area, and an upper-right adjacent area. In VVC, three cross-component linear model prediction modes may be included, namely: the intra CCLM mode using the left and upper neighbours (denoted INTRA_LT_CCLM), the intra CCLM mode using the left and lower-left neighbours (denoted INTRA_L_CCLM), and the intra CCLM mode using the upper and upper-right neighbours (denoted INTRA_T_CCLM). In each of these three modes, a preset number (for example, 4) of adjacent reference pixels can be selected for the derivation of the model parameters α and β; the biggest difference between the three modes is that the selection areas for the adjacent reference pixels used to derive α and β are different.

Specifically, for a coding block of size W×H corresponding to the second image component, assume that the upper selection area corresponding to the adjacent reference pixels is W' and the left selection area is H'; then:

for the INTRA_LT_CCLM mode, the adjacent reference pixels can be selected from the upper adjacent area and the left adjacent area, i.e., W'=W and H'=H;

for the INTRA_L_CCLM mode, the adjacent reference pixels can be selected from the left adjacent area and the lower-left adjacent area, i.e., H'=W+H, with W'=0;

for the INTRA_T_CCLM mode, the adjacent reference pixels can be selected from the upper adjacent area and the upper-right adjacent area, i.e., W'=W+H, with H'=0.
It should be noted that in the latest VVC reference software, VTM5.0, at most W pixels are stored for the upper-right adjacent area and at most H pixels for the lower-left adjacent area. Therefore, although the range of the selection areas of the INTRA_L_CCLM and INTRA_T_CCLM modes is defined as W+H, in practice the selection area of the INTRA_L_CCLM mode is limited to H+H and that of the INTRA_T_CCLM mode is limited to W+W; thus:

for the INTRA_L_CCLM mode, the adjacent reference pixels can be selected from the left adjacent area and the lower-left adjacent area, with H'=min{W+H, H+H};

for the INTRA_T_CCLM mode, the adjacent reference pixels can be selected from the upper adjacent area and the upper-right adjacent area, with W'=min{W+H, W+W}.
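The selection ranges above can be summarised in a small sketch. The function below illustrates the W'/H' rules exactly as stated in this description, including the VTM5.0 restriction; the function name is an assumption and this is not normative VVC code.

```python
def cclm_selection_range(mode, w, h):
    """Return (W', H') for a W x H chroma coding block in the given CCLM mode."""
    if mode == "INTRA_LT_CCLM":
        return w, h                       # upper and left adjacent areas
    if mode == "INTRA_L_CCLM":
        return 0, min(w + h, h + h)       # left and lower-left, capped at H+H
    if mode == "INTRA_T_CCLM":
        return min(w + h, w + w), 0       # upper and upper-right, capped at W+W
    raise ValueError("unknown CCLM mode: " + mode)
```

For a 16×8 block, for example, INTRA_L_CCLM is capped at H+H=16 rather than W+H=24, while INTRA_T_CCLM keeps the full W+H=24 because it is below the W+W=32 cap.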
Refer to FIG. 1, which shows a schematic diagram of the distribution of effective adjacent areas provided by a related technical solution. In FIG. 1, the left, lower-left, upper, and upper-right adjacent areas are all valid. On the basis of FIG. 1, the selection areas for the three modes are shown in FIG. 2, where (a) shows the selection area of the INTRA_LT_CCLM mode, including the left and upper adjacent areas; (b) shows the selection area of the INTRA_L_CCLM mode, including the left and lower-left adjacent areas; and (c) shows the selection area of the INTRA_T_CCLM mode, including the upper and upper-right adjacent areas. After the selection areas of the three modes are determined, the reference points used for model parameter derivation can be selected within them. The reference points selected in this way can be called adjacent reference pixels; usually there are at most 4 of them, and for a W×H coding block of a given size, the positions of the adjacent reference pixels are generally fixed.
After the adjacent reference pixels are obtained, the model parameters are currently determined, according to the number of valid pixels, by the flow of the traditional model parameter derivation scheme shown in FIG. 3. According to the flow shown in FIG. 3, and assuming that the image component to be predicted is the chrominance component, the flow may include:

S301: obtaining adjacent reference pixels from the selection area;

S302: determining the number of valid pixels among the adjacent reference pixels;

S303: setting the first model parameter to 0 and the second model parameter to a default value;

S304: filling the chrominance predicted values with the default value;

S305: obtaining four valid pixels by "copying";

S306: after 4 comparisons, obtaining the two pixels with larger luminance component values and the two pixels with smaller luminance component values;

S307: calculating two mean points;

S308: using the two mean points as fitting points to derive the first model parameter and the second model parameter;

S309: performing prediction processing of the chrominance component according to the constructed prediction model.
It should be noted that when the number of valid pixels is 0, steps S303 and S304 are performed; when the number of valid pixels is 2, step S305 is performed, followed by steps S306-S309 in order; and when the number of valid pixels is 4, steps S306-S309 are performed in order.

It should also be noted that when there are 0 valid pixels, the predicted values Pred_C[i,j] corresponding to the chrominance components of all pixels in the current coding block can all be filled with the default value of the chrominance component; in the embodiments of the present application, this default value is the middle value of the chrominance component range. Assuming that the bit depth of the chrominance component is denoted BitDepthC, the middle value is calculated as 1<<(BitDepthC-1). For example, if the current video image is 8-bit video, the chrominance component range is 0-255 and the middle value is 128, so the preset component value may be 128; if the current video image is 10-bit video, the chrominance component range is 0-1023 and the middle value is 512, so the preset component value may be 512.
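The default fill value 1<<(BitDepthC-1) from the paragraph above can be checked with a one-line helper; the function name is an assumption.

```python
def chroma_default(bit_depth_c):
    """Middle value of the chroma range for a given bit depth: 1 << (BitDepthC - 1)."""
    return 1 << (bit_depth_c - 1)
```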
Understandably, the prediction model is constructed using the principle that "two points determine a straight line"; the two points here can be called fitting points. In the current traditional scheme, after four valid pixels are obtained, the two adjacent reference pixels with larger luminance values and the two with smaller luminance values are obtained after 4 comparisons. A mean point (denoted mean_max) is computed from the two adjacent reference pixels with larger luminance values, and another mean point (denoted mean_min) from the two with smaller luminance values, giving the two mean points mean_max and mean_min. These two mean points are then used as the two fitting points to derive the model parameters (denoted α and β); finally, the prediction model is constructed from the model parameters, and the prediction processing of the chrominance component is performed according to this prediction model.
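A sketch of the traditional derivation just described: from four (luma, chroma) reference pairs, average the two smaller-luma points into mean_min and the two larger-luma points into mean_max, then fit α and β through those two mean points. The floating-point arithmetic and the degenerate-case handling here are simplifications for illustration; VVC itself uses integer arithmetic.

```python
def derive_params(points):
    """points: list of four (luma, chroma) pairs; returns (alpha, beta)."""
    pts = sorted(points)                     # ascending luma
    lo = pts[:2]                             # two smaller-luma points
    hi = pts[2:]                             # two larger-luma points
    mean_min = (sum(p[0] for p in lo) / 2, sum(p[1] for p in lo) / 2)
    mean_max = (sum(p[0] for p in hi) / 2, sum(p[1] for p in hi) / 2)
    if mean_max[0] == mean_min[0]:           # degenerate: flat luma values
        return 0.0, mean_min[1]
    alpha = (mean_max[1] - mean_min[1]) / (mean_max[0] - mean_min[0])
    beta = mean_min[1] - alpha * mean_min[0]
    return alpha, beta
```

For four evenly spread points such as (10,20), (20,30), (30,40), (40,50), the line through the two mean points has α = 1 and β = 10, matching the distribution well, as in FIG. 4A.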
That is, when deriving the model parameters, taking 4 adjacent reference pixels as an example, the mean of the 2 larger adjacent reference pixels and the mean of the 2 smaller adjacent reference pixels are currently used as the two fitting points. When these 4 adjacent reference pixels are fairly evenly distributed, FIG. 4A shows a schematic diagram of the prediction model under the traditional scheme; the horizontal axis represents the luminance value (denoted Luma) and the vertical axis the chrominance value (denoted Chroma). The 4 black dots are the 4 adjacent reference pixels, and the 2 grey dots are the mean point of the 2 larger adjacent reference pixels and the mean point of the 2 smaller adjacent reference pixels (i.e., the two fitting points). The grey diagonal line represents the prediction model constructed from these two fitting points, and the grey dotted line represents the prediction model fitted to the 4 adjacent reference pixels using the Least Mean Square (LMS) algorithm. As can be seen from FIG. 4A, the grey diagonal line is close to the grey dotted line, i.e., the prediction model constructed by the traditional scheme can fit the distribution of these 4 adjacent reference pixels fairly accurately.

However, when the 4 adjacent reference pixels are non-uniformly distributed, the prediction model constructed by the traditional scheme cannot accurately fit their distribution, as shown in FIG. 4B, a schematic diagram of the prediction model under another traditional scheme. As can be seen from FIG. 4B, there is a certain deviation between the grey diagonal line and the grey dotted line, i.e., the prediction model constructed by the traditional scheme does not fit the distribution of the 4 adjacent reference pixels well. In other words, the current traditional scheme lacks robustness in the model parameter derivation process and cannot fit well when the 4 adjacent reference pixels are unevenly distributed.
In order to improve the robustness of CCLM prediction and improve coding and decoding performance, an embodiment of the present application provides an image component prediction method. For the image component to be predicted of a coding block in a video image, N adjacent reference pixels are acquired, where the N adjacent reference pixels are reference pixels adjacent to the coding block and N is a preset integer value; the N first image component values corresponding to the N adjacent reference pixels are compared to determine a first reference pixel subset and a second reference pixel subset, where the first reference pixel subset includes the two adjacent reference pixels corresponding to the smallest and the second smallest of the preset number of first image component values, and the second reference pixel subset includes the two adjacent reference pixels corresponding to the largest and the second largest of the preset number of first image component values; the mean of the N adjacent reference pixels is calculated to obtain a first mean point; the first mean point is compared, in terms of first image component value, with the first reference pixel subset and/or the second reference pixel subset to determine two fitting points; model parameters are determined based on the two fitting points, and a prediction model corresponding to the image component to be predicted is obtained according to the model parameters, so as to obtain a predicted value corresponding to the image component to be predicted. In this way, after the second comparison process, the preset number of adjacent reference pixels can be divided into multiple cases, and a different pair of fitting points is used to construct the prediction model in each case, which improves the robustness of CCLM prediction; that is, by optimizing the fitting points used in model parameter derivation, the constructed prediction model is more accurate, and the coding and decoding prediction performance for the video image is improved.
The embodiments of the present application will be described in detail below with reference to the drawings.
Referring to FIG. 5, which shows an example block diagram of a video encoding system provided by an embodiment of the present application, the video encoding system 500 includes a transform and quantization unit 501, an intra estimation unit 502, an intra prediction unit 503, a motion compensation unit 504, a motion estimation unit 505, an inverse transform and inverse quantization unit 506, a filter control analysis unit 507, a filtering unit 508, an encoding unit 509, a decoded image buffer unit 510, and the like. The filtering unit 508 can implement deblocking filtering and Sample Adaptive Offset (SAO) filtering, and the encoding unit 509 can implement header information encoding and Context-based Adaptive Binary Arithmetic Coding (CABAC). For an input original video signal, a video coding block can be obtained by partitioning a Coding Tree Unit (CTU); the residual pixel information obtained after intra or inter prediction is then processed by the transform and quantization unit 501, which transforms the residual information from the pixel domain to the transform domain and quantizes the resulting transform coefficients to further reduce the bit rate. The intra estimation unit 502 and the intra prediction unit 503 are used to perform intra prediction on the video coding block; specifically, the intra estimation unit 502 and the intra prediction unit 503 determine the intra prediction mode to be used to encode the video coding block. The motion compensation unit 504 and the motion estimation unit 505 are used to perform inter prediction coding of the received video coding block relative to one or more blocks in one or more reference frames to provide temporal prediction information; the motion estimation performed by the motion estimation unit 505 is the process of generating a motion vector that estimates the motion of the video coding block, and the motion compensation unit 504 then performs motion compensation based on the motion vector determined by the motion estimation unit 505. After determining the intra prediction mode, the intra prediction unit 503 also provides the selected intra prediction data to the encoding unit 509, and the motion estimation unit 505 likewise sends the calculated motion vector data to the encoding unit 509. In addition, the inverse transform and inverse quantization unit 506 is used for reconstruction of the video coding block: a residual block is reconstructed in the pixel domain, blocking artifacts are removed from the reconstructed residual block by the filter control analysis unit 507 and the filtering unit 508, and the reconstructed residual block is then added to a predictive block in a frame of the decoded image buffer unit 510 to generate a reconstructed video coding block. The encoding unit 509 is used to encode various coding parameters and the quantized transform coefficients; in the CABAC-based coding algorithm, the context content may be based on adjacent coding blocks and may be used to encode information indicating the determined intra prediction mode, outputting the bitstream of the video signal. The decoded image buffer unit 510 is used to store the reconstructed video coding blocks for prediction reference. As the encoding of the video image proceeds, new reconstructed video coding blocks are continuously generated, and these reconstructed video coding blocks are all stored in the decoded image buffer unit 510.
Referring to FIG. 6, which shows an example block diagram of a video decoding system provided by an embodiment of the present application, the video decoding system 600 includes a decoding unit 601, an inverse transform and inverse quantization unit 602, an intra prediction unit 603, a motion compensation unit 604, a filtering unit 605, a decoded image buffer unit 606, and the like. The decoding unit 601 can implement header information decoding and CABAC decoding, and the filtering unit 605 can implement deblocking filtering and SAO filtering. After the input video signal undergoes the encoding process of FIG. 5, the bitstream of the video signal is output; the bitstream is input into the video decoding system 600 and first passes through the decoding unit 601 to obtain the decoded transform coefficients, which are processed by the inverse transform and inverse quantization unit 602 to generate a residual block in the pixel domain. The intra prediction unit 603 can be used to generate prediction data for the current video decoding block based on the determined intra prediction mode and data from previously decoded blocks of the current frame or picture. The motion compensation unit 604 determines prediction information for the video decoding block by parsing motion vectors and other associated syntax elements, and uses the prediction information to generate a predictive block for the video decoding block being decoded. A decoded video block is formed by summing the residual block from the inverse transform and inverse quantization unit 602 with the corresponding predictive block generated by the intra prediction unit 603 or the motion compensation unit 604. The decoded video signal passes through the filtering unit 605 to remove blocking artifacts, which improves the video quality; the decoded video blocks are then stored in the decoded image buffer unit 606, which stores reference images for subsequent intra prediction or motion compensation and is also used for output of the video signal, i.e., the restored original video signal is obtained.
The image component prediction method in the embodiments of this application is mainly applied in the intra prediction unit 503 shown in FIG. 5 and the intra prediction unit 603 shown in FIG. 6, and is specifically applied to the CCLM prediction part of intra prediction. That is to say, the image component prediction method in the embodiments of this application can be applied to a video encoding system, to a video decoding system, or even to both a video encoding system and a video decoding system at the same time; the embodiments of this application do not impose a specific limitation. When the method is applied to the intra prediction unit 503, the "coding block in the video image" specifically refers to the current coding block in intra prediction; when the method is applied to the intra prediction unit 603, the "coding block in the video image" specifically refers to the current decoding block in intra prediction.
Based on the application scenario example of FIG. 5 or FIG. 6, refer to FIG. 7, which shows a schematic flowchart of an image component prediction method provided by an embodiment of the present application. As shown in FIG. 7, the method may include:
S701: Acquire N adjacent reference pixels corresponding to the image component to be predicted of a coding block in a video image.
It should be noted that the N adjacent reference pixels here are reference pixels adjacent to the coding block, and N is a preset integer value, which may also be referred to as a preset number. The video image can be divided into multiple coding blocks, and each coding block may include a first image component, a second image component, and a third image component; the coding block in the embodiments of this application is the current block of the video image to be encoded. When the first image component needs to be predicted by the prediction model, the image component to be predicted is the first image component; when the second image component needs to be predicted by the prediction model, the image component to be predicted is the second image component; when the third image component needs to be predicted by the prediction model, the image component to be predicted is the third image component.
In this way, a preset number of adjacent reference pixels can be obtained from the reference pixels adjacent to the coding block to determine the fitting points for deriving the model parameters. In addition, the value of N may generally be 4, but the embodiments of the present application do not impose a specific limitation.
In some embodiments, for S701, the acquiring of the N adjacent reference pixels corresponding to the image component to be predicted of the coding block in the video image may include:
S701-1: Acquire a first reference pixel set corresponding to the image component to be predicted of the coding block in the video image.
It should be noted that when the left adjacent region, the lower-left adjacent region, the upper adjacent region, and the upper-right adjacent region are all valid regions: for the INTRA_LT_CCLM mode, the first reference pixel set is composed of the adjacent reference pixels in the left adjacent region and the upper adjacent region of the coding block, as shown in FIG. 2(a); for the INTRA_L_CCLM mode, the first reference pixel set is composed of the adjacent reference pixels in the left adjacent region and the lower-left adjacent region of the coding block, as shown in FIG. 2(b); for the INTRA_T_CCLM mode, the first reference pixel set is composed of the adjacent reference pixels in the upper adjacent region and the upper-right adjacent region of the coding block, as shown in FIG. 2(c).
In some embodiments, optionally, for S701-1, the acquiring of the first reference pixel set corresponding to the image component to be predicted of the coding block in the video image may include:
acquiring reference pixels adjacent to at least one side of the coding block, where the at least one side includes the left side of the coding block and/or the upper side of the coding block;
composing, based on the reference pixels, the first reference pixel set corresponding to the image component to be predicted.
It should be noted that the at least one side of the coding block may refer to the upper side of the coding block, the left side of the coding block, or even both the upper side and the left side of the coding block; the embodiments of this application do not impose a specific limitation.
In this way, for the INTRA_LT_CCLM mode: when the left adjacent region and the upper adjacent region are both valid regions, the first reference pixel set may be composed of the reference pixels adjacent to the left side of the coding block and the reference pixels adjacent to the upper side of the coding block; when the left adjacent region is a valid region and the upper adjacent region is an invalid region, the first reference pixel set may be composed of the reference pixels adjacent to the left side of the coding block; when the left adjacent region is an invalid region and the upper adjacent region is a valid region, the first reference pixel set may be composed of the reference pixels adjacent to the upper side of the coding block.
In some embodiments, optionally, for S701-1, the acquiring of the first reference pixel set corresponding to the image component to be predicted of the coding block in the video image may include:
acquiring reference pixels in a reference row or a reference column adjacent to the coding block, where the reference row is composed of the rows adjacent to the upper side and the upper-right side of the coding block, and the reference column is composed of the columns adjacent to the left side and the lower-left side of the coding block;
composing, based on the reference pixels, the first reference pixel set corresponding to the image component to be predicted.
It should be noted that the reference row or reference column adjacent to the coding block may refer to the reference row adjacent to the upper side of the coding block, the reference column adjacent to the left side of the coding block, or even a reference row or reference column adjacent to another side of the coding block; the embodiments of this application do not impose a specific limitation. For convenience of description, in the embodiments of this application, the reference row adjacent to the coding block will be described using the reference row adjacent to the upper side as an example, and the reference column adjacent to the coding block will be described using the reference column adjacent to the left side as an example.
The reference pixels in the reference row adjacent to the coding block may include the reference pixels adjacent to the upper side and the upper-right side (also referred to as the adjacent reference pixels corresponding to the upper side and the upper-right side), where the upper side refers to the upper side of the coding block, and the upper-right side refers to the segment extending horizontally to the right from the upper side of the coding block with a length equal to the height of the current coding block. The reference pixels in the reference column adjacent to the coding block may include the reference pixels adjacent to the left side and the lower-left side (also referred to as the adjacent reference pixels corresponding to the left side and the lower-left side), where the left side refers to the left side of the coding block, and the lower-left side refers to the segment extending vertically downward from the left side of the coding block with a length equal to the width of the current coding block; however, the embodiments of this application do not impose a specific limitation either.
In this way, for the INTRA_L_CCLM mode, when the left adjacent region and the lower-left adjacent region are valid regions, the first reference pixel set may be composed of the reference pixels in the reference column adjacent to the coding block; for the INTRA_T_CCLM mode, when the upper adjacent region and the upper-right adjacent region are valid regions, the first reference pixel set may be composed of the reference pixels in the reference row adjacent to the coding block.
S701-2: Perform a screening process on the first reference pixel set to obtain a second reference pixel set, where the second reference pixel set includes N adjacent reference pixels.
It should be noted that the first reference pixel set may contain some unimportant reference pixels (for example, reference pixels with poor correlation) or some abnormal reference pixels. To ensure the accuracy of the prediction model, these reference pixels need to be removed, thereby obtaining the second reference pixel set. The number of valid pixels contained in the second reference pixel set is usually selected as 4 in practical applications, but the embodiments of this application do not impose a specific limitation.
In some embodiments, for S701-2, performing the screening process on the first reference pixel set to obtain the second reference pixel set may include:
determining the positions of pixels to be selected based on the pixel position and/or image component intensity corresponding to each adjacent reference pixel in the first reference pixel set;
selecting, according to the determined positions of the pixels to be selected, the adjacent reference pixels corresponding to those positions from the first reference pixel set, and composing the selected adjacent reference pixels into the second reference pixel set, where the second reference pixel set includes N adjacent reference pixels.
It should be noted that the image component intensity can be represented by an image component value, such as a luma value or a chroma value; here, the larger the image component value, the higher the image component intensity. In this way, the screening of the first reference pixel set can be performed according to the positions of the reference pixels to be selected, or according to the image component intensity (such as the luma value or chroma value), and the screened-out reference pixels to be selected compose the second reference pixel set. The following description takes the positions of the reference pixels to be selected as an example.
Exemplarily, assume that the positions of the reference pixels in the upper selection region W' are S[0,-1], ..., S[W'-1,-1], and the positions of the reference pixels in the left selection region H' are S[-1,0], ..., S[-1,H'-1]; the screening method that selects at most 4 adjacent reference pixels is then as follows:
For the INTRA_LT_CCLM mode, when the upper adjacent region and the left adjacent region are both valid, 2 adjacent reference pixels to be selected can be screened out in the upper selection region W', at positions S[W'/4,-1] and S[3W'/4,-1], and 2 adjacent reference pixels to be selected can be screened out in the left selection region H', at positions S[-1,H'/4] and S[-1,3H'/4]; these 4 adjacent reference pixels to be selected compose the second reference pixel set, as shown in FIG. 8A. In FIG. 8A, the left adjacent region and the upper adjacent region of the coding block are both valid; moreover, in order to keep the luma component and the chroma component at the same resolution, the luma component needs to be down-sampled so that the down-sampled luma component and the chroma component have the same resolution.
For the INTRA_L_CCLM mode, when only the left adjacent region and the lower-left adjacent region are valid, 4 adjacent reference pixels to be selected can be screened out in the left selection region H', at positions S[-1,H'/8], S[-1,3H'/8], S[-1,5H'/8], and S[-1,7H'/8]; these 4 adjacent reference pixels to be selected compose the second reference pixel set, as shown in FIG. 8B. In FIG. 8B, the left adjacent region and the lower-left adjacent region of the coding block are both valid; likewise, in order to keep the luma component and the chroma component at the same resolution, the luma component still needs to be down-sampled so that the down-sampled luma component and the chroma component have the same resolution.
For the INTRA_T_CCLM mode, when only the upper adjacent region and the upper-right adjacent region are valid, 4 adjacent reference pixels to be selected can be screened out in the upper selection region W', at positions S[W'/8,-1], S[3W'/8,-1], S[5W'/8,-1], and S[7W'/8,-1]; these 4 adjacent reference pixels to be selected compose the second reference pixel set, as shown in FIG. 8C. In FIG. 8C, the upper adjacent region and the upper-right adjacent region of the coding block are both valid; likewise, in order to keep the luma component and the chroma component at the same resolution, the luma component still needs to be down-sampled so that the down-sampled luma component and the chroma component have the same resolution.
In this way, by screening the first reference pixel set, the second reference pixel set can be obtained, and the second reference pixel set generally includes a preset number of adjacent reference pixels, such as 4 adjacent reference pixels; this preset number of adjacent reference pixels are then grouped by comparison, so that the two fitting points determined subsequently are more accurate.
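The three position-screening rules above can be sketched in a few lines; `select_positions` is a hypothetical helper (not from the application text) that returns the S[x, y] coordinates described for FIG. 8A to FIG. 8C, with `W_` and `H_` standing for the selection-region sizes W' and H':

```python
# Screening positions for at most 4 adjacent reference pixels per CCLM mode.
# Coordinates follow the S[x, y] convention: y = -1 is the upper row,
# x = -1 is the left column (hypothetical sketch, integer division assumed).

def select_positions(mode, W_, H_):
    if mode == "INTRA_LT_CCLM":
        # 2 positions from the upper row, 2 from the left column
        return [(W_ // 4, -1), (3 * W_ // 4, -1),
                (-1, H_ // 4), (-1, 3 * H_ // 4)]
    if mode == "INTRA_L_CCLM":
        # all 4 positions from the left column (upper region unavailable)
        return [(-1, H_ // 8), (-1, 3 * H_ // 8),
                (-1, 5 * H_ // 8), (-1, 7 * H_ // 8)]
    if mode == "INTRA_T_CCLM":
        # all 4 positions from the upper row (left region unavailable)
        return [(W_ // 8, -1), (3 * W_ // 8, -1),
                (5 * W_ // 8, -1), (7 * W_ // 8, -1)]
    raise ValueError(mode)

print(select_positions("INTRA_LT_CCLM", 8, 8))
```

For an 8x8 selection region in INTRA_LT_CCLM mode this yields positions S[2,-1], S[6,-1], S[-1,2], and S[-1,6], matching the W'/4 and 3W'/4 (and H'/4, 3H'/4) rule.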
S702: Compare the N first image component values corresponding to the N adjacent reference pixels, and determine a first reference pixel subset and a second reference pixel subset.
It should be noted that each adjacent reference pixel corresponds to a first image component value and a second image component value; in this way, the preset number of adjacent reference pixels corresponds to a preset number of first image component values, and after the first comparison process, the first reference pixel subset and the second reference pixel subset can be determined according to the magnitudes of the first image component values.
In some embodiments, for S702, comparing the N first image component values corresponding to the N adjacent reference pixels to determine the first reference pixel subset and the second reference pixel subset may include:
obtaining a preset number of first image component values based on the preset number of adjacent reference pixels;
performing multiple comparisons on the preset number of first image component values to obtain a first array composed of the smallest first image component value and the second smallest first image component value, and a second array composed of the largest first image component value and the second largest first image component value;
putting the two adjacent reference pixels corresponding to the first array into the first reference pixel subset to obtain the first reference pixel subset;
putting the two adjacent reference pixels corresponding to the second array into the second reference pixel subset to obtain the second reference pixel subset.
It should be noted that after the preset number of first image component values are obtained, the first reference pixel subset and the second reference pixel subset are obtained by performing multiple comparisons on the preset number of first image component values (for example, 4 adjacent reference pixels require four comparisons). The first reference pixel subset includes the two adjacent reference pixels corresponding to the smallest first image component value and the second smallest first image component value among the preset number of first image component values, and the second reference pixel subset includes the two adjacent reference pixels corresponding to the largest first image component value and the second largest first image component value among the preset number of first image component values.
Specifically, the preset number of adjacent reference pixels may first be numbered and then initially grouped: the first array stores the first image component value corresponding to number i, and the second array stores the first image component value corresponding to number i+1, where i is an integer greater than or equal to 0; the first image component values in the first array are then compared with those in the second array multiple times, so that the first array stores the smallest and the second smallest first image component values, and the second array stores the largest and the second largest first image component values, thereby obtaining the first reference pixel subset and the second reference pixel subset.
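The grouping described above can be sketched as follows for N = 4; `group_min_max` is a hypothetical helper name, and the four compare/swap steps are one plausible realization of the "four comparisons" mentioned above (the min group is paired as numbers 0 and 2, the max group as numbers 1 and 3):

```python
# Four-comparison grouping of 4 first-image-component (e.g. luma) values into a
# min pair and a max pair, tracked by index so the corresponding reference
# pixels can be placed into the two subsets (illustrative sketch).

def group_min_max(luma):
    assert len(luma) == 4
    min_idx = [0, 2]  # initial grouping: candidate "first array" indices
    max_idx = [1, 3]  # initial grouping: candidate "second array" indices
    # comparison 1: order the min-group pair
    if luma[min_idx[0]] > luma[min_idx[1]]:
        min_idx[0], min_idx[1] = min_idx[1], min_idx[0]
    # comparison 2: order the max-group pair
    if luma[max_idx[0]] > luma[max_idx[1]]:
        max_idx[0], max_idx[1] = max_idx[1], max_idx[0]
    # comparison 3: if the min group's smallest exceeds the max group's largest,
    # the two groups are swapped wholesale
    if luma[min_idx[0]] > luma[max_idx[1]]:
        min_idx, max_idx = max_idx, min_idx
    # comparison 4: exchange the middle elements if they are out of order
    if luma[min_idx[1]] > luma[max_idx[0]]:
        min_idx[1], max_idx[0] = max_idx[0], min_idx[1]
    return min_idx, max_idx

print(group_min_max([90, 40, 70, 110]))
```

After the four comparisons, `min_idx` indexes the smallest and second-smallest values (the first array) and `max_idx` the largest and second-largest (the second array).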
Further, after the first reference pixel subset and the second reference pixel subset are obtained, the second mean point of the first reference pixel subset and the third mean point of the second reference pixel subset can also be calculated. Therefore, in some embodiments, refer to FIG. 9, which shows a schematic flowchart of another image component prediction method provided by an embodiment of this application. As shown in FIG. 9, after S702, the method may further include:
S901: based on the first image component value and the second image component value corresponding to each adjacent reference pixel in the first reference pixel subset, obtaining the second mean of the first image component and the second mean of the second image component, thereby obtaining the second mean point;
S902: based on the first image component value and the second image component value corresponding to each adjacent reference pixel in the second reference pixel subset, obtaining the third mean of the first image component and the third mean of the second image component, thereby obtaining the third mean point.
That is, the mean of the first image component values corresponding to the adjacent reference pixels in the first reference pixel subset is calculated, which may be called the second mean of the first image component (denoted luma_min); the mean of the second image component values corresponding to the adjacent reference pixels in the first reference pixel subset is calculated, which may be called the second mean of the second image component (denoted chroma_min). The second mean point may be denoted mean_min; its first image component is luma_min and its second image component is chroma_min.
Similarly, the mean of the first image component values corresponding to the adjacent reference pixels in the second reference pixel subset is calculated, which may be called the third mean of the first image component (denoted luma_max); the mean of the second image component values corresponding to the adjacent reference pixels in the second reference pixel subset is calculated, which may be called the third mean of the second image component (denoted chroma_max). The third mean point may be denoted mean_max; its first image component is luma_max and its second image component is chroma_max.
S703: calculating the mean of the N adjacent reference pixels to obtain a first mean point;
It should be noted that the first mean point is obtained by averaging the N adjacent reference pixels. Therefore, in some embodiments, as shown in FIG. 9, S703 may include:
S903: based on the first image component value and the second image component value corresponding to each of the preset number of adjacent reference pixels, obtaining the first mean of the first image component and the first mean of the second image component, thereby obtaining the first mean point;
That is, the mean of the first image component values corresponding to the N adjacent reference pixels is calculated, which may be called the first mean of the first image component (denoted meanL); the mean of the second image component values corresponding to the N adjacent reference pixels is calculated, which may be called the first mean of the second image component (denoted meanC). The first mean point may be denoted mean; its first image component is meanL and its second image component is meanC. It should be noted that the first mean point may also be obtained by averaging the second mean point and the third mean point: for example, meanL may be obtained by averaging luma_min and luma_max, and meanC by averaging chroma_min and chroma_max.
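Step S903 and the alternative computation just described can be illustrated as follows (a Python sketch with names of our choosing; plain floating-point means are used for clarity, whereas a codec implementation would use integer arithmetic):

```python
def first_mean_point(pixels):
    """Mean over all N neighbours (step S903): returns (meanL, meanC).
    pixels is a list of (luma, chroma) pairs."""
    n = len(pixels)
    return (sum(p[0] for p in pixels) / n,
            sum(p[1] for p in pixels) / n)

def first_mean_point_from_extremes(mean_min_pt, mean_max_pt):
    """Alternative: average the second and third mean points
    component-wise to obtain the first mean point."""
    return ((mean_min_pt[0] + mean_max_pt[0]) / 2,
            (mean_min_pt[1] + mean_max_pt[1]) / 2)
```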
S704: comparing first image component values between the first mean point and the first reference pixel subset and/or the second reference pixel subset to determine two fitting points;
It should be noted that the two fitting points include a first fitting point and a second fitting point. When the second comparison process is performed on the first reference pixel subset and/or the second reference pixel subset using the first mean point, three cases may arise: the first image component of the first mean point is smaller than the second smallest first image component value in the first reference pixel subset; the first image component of the first mean point is larger than the second largest first image component value in the second reference pixel subset; or the first image component of the first mean point is less than or equal to the second largest first image component value in the second reference pixel subset (while being greater than or equal to the second smallest value). The first fitting point and the second fitting point determined in these three cases are different, which can improve the robustness of CCLM prediction.
Further, in some embodiments, before S704, the method may further include:
comparing the first image component values corresponding to the two adjacent reference pixels in the first reference pixel subset;
determining, according to the comparison result, the smallest first image component value and the second smallest first image component value of the two, taking the pixel corresponding to the smallest first image component value as a first adjacent reference pixel and the pixel corresponding to the second smallest first image component value as a second adjacent reference pixel.
Further, in some embodiments, the method may further include:
comparing the first image component values corresponding to the two adjacent reference pixels in the second reference pixel subset;
determining, according to the comparison result, the second largest first image component value and the largest first image component value of the two, taking the pixel corresponding to the second largest first image component value as a third adjacent reference pixel and the pixel corresponding to the largest first image component value as a fourth adjacent reference pixel.
In this way, by comparing the first image component values of the two adjacent reference pixels in the first reference pixel subset, the smallest and the second smallest first image component values can be determined, with the pixel corresponding to the smallest value taken as the first adjacent reference pixel and the pixel corresponding to the second smallest value taken as the second adjacent reference pixel; by comparing the first image component values of the two adjacent reference pixels in the second reference pixel subset, the largest and the second largest first image component values can be determined, with the pixel corresponding to the second largest value taken as the third adjacent reference pixel and the pixel corresponding to the largest value taken as the fourth adjacent reference pixel.
Further, in some embodiments, as shown in FIG. 9, for S704, comparing first image component values between the first mean point and the first reference pixel subset and/or the second reference pixel subset to determine the two fitting points may include:
S904: comparing the first image component of the first mean point with the second smallest first image component value in the first reference pixel subset;
S905: when the first image component of the first mean point is greater than or equal to the second smallest first image component value in the first reference pixel subset, comparing the first image component of the first mean point with the second largest first image component value in the second reference pixel subset;
S906: when the first image component of the first mean point is less than or equal to the second largest first image component value in the second reference pixel subset, taking the second mean point as the first fitting point and the third mean point as the second fitting point; wherein the two fitting points include the first fitting point and the second fitting point.
Further, after S904, the method may further include:
S907: when the first image component of the first mean point is smaller than the second smallest first image component value, taking the first adjacent reference pixel as the first fitting point and the third mean point as the second fitting point.
Further, after S905, the method may further include:
S908: when the first image component of the first mean point is greater than the second largest first image component value, taking the second mean point as the first fitting point and the fourth adjacent reference pixel as the second fitting point.
It should be noted that, for steps S904 and S905, the first image component of the first mean point may first be compared with the second smallest first image component value in the first reference pixel subset and then with the second largest first image component value in the second reference pixel subset; alternatively, the comparison with the second largest first image component value in the second reference pixel subset may be performed first, followed by the comparison with the second smallest first image component value in the first reference pixel subset. The order of comparison is not specifically limited in the embodiments of this application.
It should also be noted that the first image component of the first mean point is exactly the first mean of the first image component. For step S904, if the first image component of the first mean point is greater than or equal to the second smallest first image component value in the first reference pixel subset, step S905 is executed; otherwise step S907 is executed, in which case the first fitting point is the first adjacent reference pixel and the second fitting point is the third mean point. For step S905, if the first image component of the first mean point is less than or equal to the second largest first image component value in the second reference pixel subset, step S906 is executed, in which case the first fitting point is the second mean point and the second fitting point is the third mean point; otherwise step S908 is executed, in which case the first fitting point is the second mean point and the second fitting point is the fourth adjacent reference pixel. It can be seen that, after the second comparison process, the preset number of adjacent reference pixels can be divided into three cases, and using a different pair of fitting points to construct the prediction model for each case can improve the robustness of CCLM prediction.
It should also be noted that the handling of the "equal to" boundary is not limited. For example, step S905 may instead be executed when the first image component of the first mean point is strictly greater than the second smallest first image component value in the first reference pixel subset, with step S907 executed when it is less than or equal to that value; or step S906 may be executed when the first image component of the first mean point is strictly less than the second largest first image component value in the second reference pixel subset, with step S908 executed when it is greater than or equal to that value.
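The three-way case split of steps S904 to S908 can be sketched as follows (an illustrative Python sketch; the names are ours, and the ">=" / "<=" boundary choices follow the first variant described above):

```python
def select_fitting_points(mean_pt, subset_min, subset_max,
                          mean_min_pt, mean_max_pt):
    """Return (first_fitting_point, second_fitting_point).

    All points are (luma, chroma) pairs. subset_min holds the two
    smallest-luma neighbours in ascending luma order; subset_max holds
    the two largest-luma neighbours in ascending luma order."""
    next_smallest_luma = subset_min[1][0]   # second smallest luma
    next_largest_luma = subset_max[0][0]    # second largest luma
    if mean_pt[0] < next_smallest_luma:     # case S907
        return subset_min[0], mean_max_pt
    if mean_pt[0] <= next_largest_luma:     # case S906
        return mean_min_pt, mean_max_pt
    return mean_min_pt, subset_max[1]       # case S908
```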
S705: determining model parameters based on the two fitting points, and obtaining, according to the model parameters, the prediction model corresponding to the image component to be predicted; wherein the prediction model is used to perform prediction processing on the image component to be predicted so as to obtain the predicted value corresponding to the image component to be predicted;
It should be noted that after the first fitting point and the second fitting point are obtained, the model parameters can be determined from them; here, the model parameters include a first model parameter (denoted α) and a second model parameter (denoted β). Assuming that the image component to be predicted is the chroma component, the prediction model corresponding to the chroma component, as shown in equation (1), can be obtained from the model parameters α and β.
In some embodiments, for S705, the model parameters include the first model parameter and the second model parameter, and determining the model parameters based on the two fitting points may include:
S705-1: obtaining the first model parameter through a first preset factor calculation model based on the first fitting point and the second fitting point;
S705-2: obtaining the second model parameter through a second preset factor calculation model based on the first model parameter and the first fitting point;
Further, in some embodiments, after S705-1, the method may further include:
S705-3: obtaining the second model parameter through the second preset factor calculation model based on the first model parameter and the first mean point.
It should be noted that after the first fitting point (luma_1, chroma_1) and the second fitting point (luma_2, chroma_2) are obtained, the first model parameter α can be calculated according to the first preset factor calculation model, as shown in equation (2):

α = (chroma_2 - chroma_1) / (luma_2 - luma_1)    (2)
After the first model parameter α is obtained, the second model parameter β can be calculated by combining the first fitting point (luma_1, chroma_1) with the second preset factor calculation model, as shown in equation (3):
β = chroma_1 - α × luma_1    (3)
Alternatively, the second model parameter β can be calculated by combining the first mean point (meanL, meanC) with the second preset factor calculation model, as shown in equation (4):
β = meanC - α × meanL    (4)
In this way, after the first model parameter α and the second model parameter β are obtained, the prediction model can be constructed. Assuming that the image component to be predicted is the chroma component, the prediction model corresponding to the chroma component can be obtained from the model parameters (α and β), as shown in equation (1); the prediction model is then used to perform prediction processing on the chroma component to obtain the predicted value corresponding to the chroma component.
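Equations (2) to (4) and the linear model of equation (1) can be illustrated together in a floating-point Python sketch (names are ours; an actual codec derives α with integer arithmetic, and the zero-denominator fallback below is our assumption for the degenerate case):

```python
def derive_model_params(p1, p2, mean_point=None):
    """Return (alpha, beta) per equations (2)-(4).

    p1 and p2 are the (luma, chroma) fitting points; if mean_point is
    given as (meanL, meanC), beta is taken from eq. (4) instead of (3)."""
    luma1, chroma1 = p1
    luma2, chroma2 = p2
    if luma2 == luma1:
        alpha = 0.0   # degenerate case: flat model (our assumption)
    else:
        alpha = (chroma2 - chroma1) / (luma2 - luma1)   # eq. (2)
    if mean_point is not None:
        beta = mean_point[1] - alpha * mean_point[0]    # eq. (4)
    else:
        beta = chroma1 - alpha * luma1                  # eq. (3)
    return alpha, beta

def predict_chroma(luma, alpha, beta):
    """Equation (1): predicted chroma = alpha * luma + beta."""
    return alpha * luma + beta
```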
Further, in some embodiments, after S705, the method may further include:
performing prediction processing on the image component to be predicted for each pixel in the coding block based on the prediction model, to obtain the predicted value corresponding to the image component to be predicted for each pixel.
It should be noted that after the second comparison process is performed on the first reference pixel subset and/or the second reference pixel subset using the first mean point, the two fitting points can be determined. According to the principle that two points determine a straight line, the slope of the line (i.e., the first model parameter) and its intercept (i.e., the second model parameter) can then be determined, so that the prediction model corresponding to the image component to be predicted can be obtained from these two model parameters, yielding the predicted value corresponding to the image component to be predicted for each pixel in the coding block. For example, assuming that the image component to be predicted is the chroma component, the prediction model corresponding to the chroma component shown in equation (1) can be obtained from the first model parameter α and the second model parameter β; the prediction model of equation (1) is then used to perform prediction processing on the chroma component of each pixel in the coding block, so that the predicted value corresponding to the chroma component of each pixel can be obtained.
This embodiment provides an image component prediction method. For the image component to be predicted of a coding block in a video image, N adjacent reference pixels are obtained, where the N adjacent reference pixels are reference pixels adjacent to the coding block and N is a preset integer value. The N first image component values corresponding to the N adjacent reference pixels are then compared to determine the first reference pixel subset and the second reference pixel subset, where the first reference pixel subset includes the two adjacent reference pixels corresponding to the smallest and the second smallest first image component values among the preset number of first image component values, and the second reference pixel subset includes the two adjacent reference pixels corresponding to the largest and the second largest first image component values. The mean of the N adjacent reference pixels is calculated to obtain the first mean point; first image component values are compared between the first mean point and the first reference pixel subset and/or the second reference pixel subset to determine two fitting points; and the model parameters are determined based on the two fitting points, from which the prediction model corresponding to the image component to be predicted is obtained, yielding the predicted value corresponding to the image component to be predicted. In this way, after the second comparison process, the preset number of adjacent reference pixels can be divided into multiple cases, and a different pair of fitting points is used to construct the prediction model for each case, thereby improving the robustness of CCLM prediction. That is, by optimizing the fitting points used in model parameter derivation, the constructed prediction model can be made more accurate, and the coding and decoding prediction performance of the video image is improved.
In another embodiment of this application, in practical applications, the number of adjacent reference pixels used for model parameter derivation is generally 4; that is, the preset number may be 4. A detailed description is given below taking a preset number of 4 as an example.
In the process shown in FIG. 3, assume that the image component to be predicted is the chroma component. When 0 adjacent reference pixels are obtained, the first model parameter α can be set directly to 0 and the second model parameter β to a default value (which may be the chroma middle value); when 2 adjacent reference pixels are obtained, the 2 adjacent reference pixels can be duplicated to obtain 4 adjacent reference pixels, after which the prediction model is constructed following the 4-pixel procedure. That is, after 4 adjacent reference pixels are obtained, the two fitting points can be determined through four comparisons and mean-point calculations, and the model parameters are then derived using the principle that two points determine a straight line; in this way, the prediction model corresponding to the image component to be predicted can be constructed from the model parameters to obtain the predicted value corresponding to the image component to be predicted.
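The fallback handling just described (0 or 2 available neighbours) can be sketched as follows. This is an illustrative sketch under the assumption of a given bit depth, with the chroma middle value taken as 1 << (bit_depth - 1), e.g. 128 for 8-bit content; the function name and return convention are ours:

```python
def normalize_neighbours(neighbours, bit_depth=8):
    """Return (pixels, degenerate_model).

    With 0 neighbours the model degenerates to alpha = 0 and beta set
    to the chroma mid-value, returned as degenerate_model; with 2
    neighbours they are duplicated so that the regular 4-pixel
    derivation can run unchanged (degenerate_model is None)."""
    if not neighbours:
        return [], (0, 1 << (bit_depth - 1))
    if len(neighbours) == 2:
        neighbours = neighbours * 2   # duplicate the 2 points to get 4
    return neighbours, None
```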
Specifically, assume that the image component to be predicted is the chroma component and that it is predicted from the luma component. Suppose the 4 obtained adjacent reference pixels are numbered 0, 1, 2 and 3. By comparing these 4 adjacent reference pixels four times, the 2 adjacent reference pixels with larger luma values (the pixel with the largest luma value and the pixel with the second largest luma value) and the 2 adjacent reference pixels with smaller luma values (the pixel with the smallest luma value and the pixel with the second smallest luma value) can be selected. Further, two arrays minIdx[2] and maxIdx[2] can be set up to store the two groups of adjacent reference pixels. Initially, the adjacent reference pixels numbered 0 and 2 are put into minIdx[2], and the adjacent reference pixels numbered 1 and 3 are put into maxIdx[2], as shown below:
Init: minIdx[2] = {0, 2}, maxIdx[2] = {1, 3}
After that, through four comparisons, minIdx[2] ends up holding the 2 adjacent reference pixels with smaller luma values and maxIdx[2] the 2 adjacent reference pixels with larger luma values, as follows:
Step 1: if (L[minIdx[0]] > L[minIdx[1]]) swap(minIdx[0], minIdx[1])
Step 2: if (L[maxIdx[0]] > L[maxIdx[1]]) swap(maxIdx[0], maxIdx[1])
Step 3: if (L[minIdx[0]] > L[maxIdx[1]]) swap(minIdx, maxIdx)
Step 4: if (L[minIdx[1]] > L[maxIdx[0]]) swap(minIdx[1], maxIdx[0])
In this way, the 2 adjacent reference pixels with smaller luma values are obtained, with luma values denoted luma0_min and luma1_min and chroma values denoted chroma0_min and chroma1_min; likewise, the 2 adjacent reference pixels with larger luma values are obtained, with luma values denoted luma0_max and luma1_max and chroma values denoted chroma0_max and chroma1_max. Further, averaging the 2 smaller adjacent reference pixels gives the mean point denoted mean_min, whose luma value is luma_min and chroma value is chroma_min; averaging the 2 larger adjacent reference pixels gives the mean point denoted mean_max, whose luma value is luma_max and chroma value is chroma_max, as follows:
luma_min = (luma0_min + luma1_min + 1) >> 1
luma_max = (luma0_max + luma1_max + 1) >> 1
chroma_min = (chroma0_min + chroma1_min + 1) >> 1
chroma_max = (chroma0_max + chroma1_max + 1) >> 1
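The four comparison steps and the rounded averages above can be combined into one sketch (illustrative Python mirroring the minIdx/maxIdx pseudocode; (x + y + 1) >> 1 is the rounding integer average):

```python
def partition_and_average(luma, chroma):
    """Split 4 neighbours into min/max pairs with the four comparisons
    above, then return the rounded mean points (mean_min, mean_max).

    luma and chroma are 4-element lists indexed by pixel number."""
    min_idx, max_idx = [0, 2], [1, 3]                # initial grouping
    if luma[min_idx[0]] > luma[min_idx[1]]:          # Step 1
        min_idx[0], min_idx[1] = min_idx[1], min_idx[0]
    if luma[max_idx[0]] > luma[max_idx[1]]:          # Step 2
        max_idx[0], max_idx[1] = max_idx[1], max_idx[0]
    if luma[min_idx[0]] > luma[max_idx[1]]:          # Step 3
        min_idx, max_idx = max_idx, min_idx
    if luma[min_idx[1]] > luma[max_idx[0]]:          # Step 4
        min_idx[1], max_idx[0] = max_idx[0], min_idx[1]

    def avg(a, b):
        return (a + b + 1) >> 1                      # rounding average

    mean_min = (avg(luma[min_idx[0]], luma[min_idx[1]]),
                avg(chroma[min_idx[0]], chroma[min_idx[1]]))
    mean_max = (avg(luma[max_idx[0]], luma[max_idx[1]]),
                avg(chroma[max_idx[0]], chroma[max_idx[1]]))
    return mean_min, mean_max
```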
That is, after the two mean points mean_min = (luma_min, chroma_min) and mean_max = (luma_max, chroma_max) are obtained, these two mean points can be used as the two fitting points, and the model parameters can be obtained from them through the "two points determine a straight line" calculation. Specifically, the model parameters α and β can be calculated by equation (5):
α = (chroma_max - chroma_min) / (luma_max - luma_min)
β = chroma_min - α × luma_min    (5)
where the model parameter α is the slope of the prediction model and the model parameter β is its intercept. In this way, after the model parameters are derived, the prediction model corresponding to the chroma component can be obtained from them, as shown in equation (1); the prediction model is then used to perform prediction processing on the chroma component to obtain the predicted value corresponding to the chroma component.
In this way, because the current traditional scheme always uses the mean point of the two larger and the mean point of the two smaller of the 4 adjacent reference pixels as the two fitting points to construct the prediction model, the prediction model lacks robustness; when the 4 adjacent reference pixels are unevenly distributed, the constructed prediction model cannot accurately fit their distribution, as in the comparison diagram of prediction models shown in FIG. 4B.
In the embodiments of the present application, on the basis of retaining the two mean points mean_min and mean_max derived by the traditional scheme, the luma mean of the 4 adjacent reference pixels is first calculated. After the luma mean of the 4 adjacent reference pixels is obtained, a second comparison process may be performed on the first reference pixel subset and/or the second reference pixel subset with this luma mean, so as to divide the preset number of adjacent reference pixels into multiple cases; for these cases, different pairs of fitting points are used to construct the prediction model.
Refer to FIG. 10, which shows a schematic flowchart of a model parameter derivation solution provided by an embodiment of the present application. As shown in FIG. 10, the flow may include:
S1001: Obtain 4 adjacent reference pixels;
S1002: Through 4 comparisons, obtain the two pixels with the larger luma values and the two pixels with the smaller luma values;
S1003: Calculate the two mean points mean_min and mean_max;
S1004: Calculate the luma mean meanL of the 4 adjacent reference pixels;
S1005: Through one comparison, determine the first pixel corresponding to the minimum luma value and the second pixel corresponding to the second-minimum luma value;
S1006: Determine whether meanL is greater than or equal to the second-minimum luma value;
S1007: When meanL is less than the second-minimum luma value, use the first pixel (minimum luma value) and the mean point mean_max as the two fitting points to construct the prediction model;
S1008: When meanL is greater than or equal to the second-minimum luma value, determine, through one comparison, the third pixel corresponding to the second-maximum luma value and the fourth pixel corresponding to the maximum luma value;
S1009: Determine whether meanL is less than or equal to the second-maximum luma value;
S1010: When meanL is greater than the second-maximum luma value, use the fourth pixel (maximum luma value) and the mean point mean_min as the two fitting points to construct the prediction model;
S1011: When meanL is less than or equal to the second-maximum luma value, use the mean points mean_min and mean_max as the two fitting points to construct the prediction model;
S1012: Perform prediction processing of the chroma component according to the constructed prediction model.
It should be noted that the comparisons in steps S1006 and S1009 have no required order; that is, the comparison of step S1006 may be performed first and then that of step S1009, or the comparison of step S1009 first and then that of step S1006; the embodiments of the present application do not specifically limit this. For example, if the comparison of step S1009 is performed first, the flow is modified as follows: after the third pixel corresponding to the second-maximum luma value and the fourth pixel corresponding to the maximum luma value are determined through one comparison, it is determined whether meanL is less than or equal to the second-maximum luma value; when meanL is greater than the second-maximum luma value, the fourth pixel (maximum luma value) and the mean point mean_min are used as the two fitting points to construct the prediction model; when meanL is less than or equal to the second-maximum luma value, the first pixel corresponding to the minimum luma value and the second pixel corresponding to the second-minimum luma value are determined through one comparison; it is then determined whether meanL is greater than or equal to the second-minimum luma value; when meanL is less than the second-minimum luma value, the first pixel (minimum luma value) and the mean point mean_max are used as the two fitting points to construct the prediction model; when meanL is greater than or equal to the second-minimum luma value, the mean points mean_min and mean_max are used as the two fitting points to construct the prediction model; finally, prediction processing of the chroma component is performed according to the constructed prediction model.
It should also be noted that the prediction model is constructed using the principle that "two points determine a straight line"; the two points here may be called fitting points. After the 4 adjacent reference pixels are obtained, their mean (which may be denoted mean) can be calculated, and the second comparison process is then performed with this mean to determine the fitting points in the different cases.
Specifically, it is assumed that the image component to be predicted is the chroma component, and that the chroma component is predicted from the luma component. The following steps are performed.
In the first step, assuming the four obtained adjacent reference pixels are numbered 0, 1, 2 and 3, four comparisons are performed on them, from which the 2 adjacent reference pixels with the larger luma values (which may include the pixel with the maximum luma value and the pixel with the second-maximum luma value) and the 2 adjacent reference pixels with the smaller luma values (which may include the pixel with the minimum luma value and the pixel with the second-minimum luma value) can be selected. Further, two arrays minIdx[2] and maxIdx[2] may be set to store the two groups of adjacent reference pixels; initially, the adjacent reference pixels numbered 0 and 2 are placed in minIdx[2] and those numbered 1 and 3 are placed in maxIdx[2], as follows:
Init: minIdx[2] = {0, 2}, maxIdx[2] = {1, 3}
In the second step, through four comparisons, minIdx[2] is made to store the 2 adjacent reference pixels with the smaller luma values and maxIdx[2] the 2 adjacent reference pixels with the larger luma values, specifically as follows:
Step 1: if (L[minIdx[0]] > L[minIdx[1]]) swap(minIdx[0], minIdx[1])
Step 2: if (L[maxIdx[0]] > L[maxIdx[1]]) swap(maxIdx[0], maxIdx[1])
Step 3: if (L[minIdx[0]] > L[maxIdx[1]]) swap(minIdx, maxIdx)
Step 4: if (L[minIdx[1]] > L[maxIdx[0]]) swap(minIdx[1], maxIdx[0])
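The four fixed comparisons above can be rendered in Python as follows (a sketch mirroring the pseudo-code, not the VTM source; the function name is an assumption):

```python
def split_min_max(luma):
    """Partition the 4 luma samples (indexed 0..3) into the two smaller and the
    two larger values using the four comparisons Step 1 to Step 4.
    Returns (min_idx, max_idx), each a list of 2 indices."""
    min_idx, max_idx = [0, 2], [1, 3]                      # Init
    if luma[min_idx[0]] > luma[min_idx[1]]:                # Step 1
        min_idx[0], min_idx[1] = min_idx[1], min_idx[0]
    if luma[max_idx[0]] > luma[max_idx[1]]:                # Step 2
        max_idx[0], max_idx[1] = max_idx[1], max_idx[0]
    if luma[min_idx[0]] > luma[max_idx[1]]:                # Step 3
        min_idx, max_idx = max_idx, min_idx
    if luma[min_idx[1]] > luma[max_idx[0]]:                # Step 4
        min_idx[1], max_idx[0] = max_idx[0], min_idx[1]
    return min_idx, max_idx
```

After these four comparisons, min_idx always holds the indices of the two smallest luma values and max_idx those of the two largest, although neither pair is necessarily sorted internally.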
In the third step, the two adjacent reference pixels with the smaller luma values can be obtained; their luma values are denoted luma0_min and luma1_min, and the corresponding chroma values are denoted chroma0_min and chroma1_min. Likewise, the two adjacent reference pixels with the larger luma values can be obtained; their luma values are denoted luma0_max and luma1_max, and the corresponding chroma values are denoted chroma0_max and chroma1_max. Further, averaging the two smaller adjacent reference pixels yields a mean point denoted mean_min, whose luma value is luma_min and whose chroma value is chroma_min; averaging the two larger adjacent reference pixels yields a second mean point denoted mean_max, whose luma value is luma_max and whose chroma value is chroma_max. Specifically:
luma_min = (luma0_min + luma1_min + 1) >> 1
luma_max = (luma0_max + luma1_max + 1) >> 1
chroma_min = (chroma0_min + chroma1_min + 1) >> 1
chroma_max = (chroma0_max + chroma1_max + 1) >> 1
In the fourth step, after the two mean points mean_min(luma_min, chroma_min) and mean_max(luma_max, chroma_max) are obtained, the following module can be executed:
Ⅰ. Calculate the luma mean meanL of the four adjacent reference pixels numbered 0, 1, 2 and 3. Note that it can be obtained either by summing the luma values of the four adjacent reference pixels and dividing by four, or by (luma_min + luma_max + 1) >> 1;
Ⅱ. Compare luma0_min and luma1_min to determine the minimum point and the second-minimum point (first comparison);
Ⅲ. Compare the luma mean meanL of the four adjacent reference pixels with the luma value of the second-minimum point (second comparison):
If meanL is greater than or equal to the luma value of the second-minimum point,
go to Ⅳ;
otherwise,
determine the first fitting point to be the minimum point and the second fitting point to be mean_max(luma_max, chroma_max), exit this module, and go to the fifth step;
Ⅳ. Compare luma0_max and luma1_max to determine the maximum point and the second-maximum point (third comparison);
Ⅴ. Compare the luma mean meanL of the four adjacent reference pixels with the luma value of the second-maximum point (fourth comparison):
If meanL is less than or equal to the luma value of the second-maximum point,
determine the first fitting point to be mean_min(luma_min, chroma_min) and the second fitting point to be mean_max(luma_max, chroma_max), exit this module, and go to the fifth step;
otherwise,
determine the first fitting point to be mean_min(luma_min, chroma_min) and the second fitting point to be the maximum point, exit this module, and go to the fifth step.
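Modules Ⅰ to Ⅴ above can be sketched as follows (an illustration of the decision logic only, not VTM code; the helper signature is an assumption, and `sorted` on a 2-element list stands in for the single comparison of modules Ⅱ and Ⅳ):

```python
def select_fitting_points(pts_min, pts_max, mean_min, mean_max):
    """Choose the two fitting points per modules I-V.

    pts_min: the two smaller-luma reference pixels as (luma, chroma) pairs;
    pts_max: the two larger-luma ones; mean_min / mean_max: their mean points.
    """
    meanL = (mean_min[0] + mean_max[0] + 1) >> 1          # module I
    lo, second_lo = sorted(pts_min)                       # module II
    if meanL < second_lo[0]:                              # module III, "otherwise" branch
        return lo, mean_max                               # minimum point + mean_max
    second_hi, hi = sorted(pts_max)                       # module IV
    if meanL <= second_hi[0]:                             # module V
        return mean_min, mean_max                         # uniform case
    return mean_min, hi                                   # mean_min + maximum point
```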
In the fifth step, the model parameters can be derived from the two fitting points using the "two points determine a straight line" principle. Given the first fitting point (luma_1, chroma_1) and the second fitting point (luma_2, chroma_2), the model parameters α and β can be calculated by formula (6):
α = (chroma_2 - chroma_1) / (luma_2 - luma_1)        (6)
β = chroma_1 - α · luma_1
In the sixth step, after the model parameters are derived, the chroma values of the current coding block can be predicted through the prediction model.
That is, after the two model parameters (α and β) are derived, the prediction model corresponding to the chroma component can be obtained according to the model parameters, as shown in formula (1); the prediction model is then used to perform prediction processing on the chroma component to obtain the predicted value corresponding to the chroma component.
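Applying the derived model to the current block can be sketched as follows (formula (1) is not reproduced in this passage; the sketch assumes the usual CCLM mapping chroma = α · luma + β applied to the reconstructed, downsampled luma samples, with the result clipped to the valid chroma range; the function name and the rounding and clipping details are assumptions):

```python
def predict_chroma_block(luma_block, alpha, beta, bit_depth=8):
    """Predict each chroma sample from its co-located luma sample via the
    linear model, clipping to [0, 2^bit_depth - 1]."""
    max_val = (1 << bit_depth) - 1
    return [[min(max_val, max(0, round(alpha * l + beta))) for l in row]
            for row in luma_block]
```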
Compared with the traditional scheme, before the two fitting points are derived this process adds one averaging operation and at least two and at most four comparison operations; the 4 adjacent reference pixels can thereby be divided into 3 cases, and for these 3 cases different pairs of fitting points are used to construct the prediction model, thereby improving the robustness of CCLM prediction.
In yet another embodiment of the present application, the preset number is 4. Thus, there may be 3 cases for these 4 adjacent reference pixels: the luma mean meanL lies between the luma value of the second-maximum point and that of the second-minimum point, or meanL is less than the luma value of the second-minimum point, or meanL is greater than the luma value of the second-maximum point. That is, the 4 adjacent reference pixels may exhibit a uniform or a non-uniform distribution. These three cases are described in detail below.
Optionally, in some embodiments, when the luma mean meanL lies between the luma value of the second-maximum point and that of the second-minimum point, the 4 adjacent reference pixels are uniformly distributed. Refer to FIG. 11, which shows a schematic comparison, provided by an embodiment of the present application, between the prediction models under the scheme of the present application and under the traditional scheme. As shown in FIG. 11, the 4 black dots are the 4 adjacent reference pixels; the 2 gray dots are the mean point of the 2 larger and the mean point of the 2 smaller adjacent reference pixels (i.e., the two fitting points obtained by the traditional scheme), so the gray slanted line is the prediction model constructed according to the traditional scheme; the bold black dashed line marks the luma mean of the 4 adjacent reference pixels, the two bold black circles are the two fitting points, and the bold black slanted line is the prediction model constructed according to the scheme of this embodiment; the gray dotted line is the prediction model fitted to the 4 adjacent reference pixels using the LMS algorithm. As can be seen from FIG. 11, the two fitting points are mean_min(luma_min, chroma_min) and mean_max(luma_max, chroma_max), so the gray slanted line coincides with the bold black slanted line and is close to the gray dotted line; that is, the scheme of this embodiment constructs the same prediction model as the traditional scheme, and both can accurately fit the distribution of the 4 adjacent reference pixels.
Optionally, in some embodiments, when the luma mean meanL is less than the luma value of the second-minimum point, the 4 adjacent reference pixels are non-uniformly distributed. Refer to FIG. 12, which shows another schematic comparison, provided by an embodiment of the present application, between the prediction models under the scheme of the present application and under the traditional scheme. As shown in FIG. 12, the 4 black dots are the 4 adjacent reference pixels; the 2 gray dots are the mean point of the 2 larger and the mean point of the 2 smaller adjacent reference pixels (i.e., the two fitting points obtained by the traditional scheme), so the gray slanted line is the prediction model constructed according to the traditional scheme; the bold black dashed line marks the luma mean of the 4 adjacent reference pixels, the two bold black circles are the two fitting points, and the bold black slanted line is the prediction model constructed according to the scheme of this embodiment; the gray dotted line is the prediction model fitted to the 4 adjacent reference pixels using the LMS algorithm. As can be seen from FIG. 12, the two fitting points are the minimum point and mean_max(luma_max, chroma_max), so the bold black slanted line is closer to the gray dotted line; that is, the prediction model constructed by the scheme of this embodiment matches the distribution of the 4 adjacent reference pixels better than that constructed by the traditional scheme.
Optionally, in some embodiments, when the luma mean meanL is greater than the luma value of the second-maximum point, the 4 adjacent reference pixels are non-uniformly distributed. Refer to FIG. 13, which shows yet another schematic comparison, provided by an embodiment of the present application, between the prediction models under the scheme of the present application and under the traditional scheme. As shown in FIG. 13, the 4 black dots are the 4 adjacent reference pixels; the 2 gray dots are the mean point of the 2 larger and the mean point of the 2 smaller adjacent reference pixels (i.e., the two fitting points obtained by the traditional scheme), so the gray slanted line is the prediction model constructed according to the traditional scheme; the bold black dashed line marks the luma mean of the 4 adjacent reference pixels, the two bold black circles are the two fitting points, and the bold black slanted line is the prediction model constructed according to the scheme of this embodiment; the gray dotted line is the prediction model fitted to the 4 adjacent reference pixels using the LMS algorithm. As can be seen from FIG. 13, the two fitting points are mean_min(luma_min, chroma_min) and the maximum point, so the bold black slanted line is closer to the gray dotted line; that is, the prediction model constructed by the scheme of this embodiment matches the distribution of the 4 adjacent reference pixels better than that constructed by the traditional scheme.
As can be seen from FIG. 12 or FIG. 13, when the 4 adjacent reference pixels are unevenly distributed, the linear model constructed by the current traditional scheme lacks robustness and cannot fit these 4 adjacent reference pixels well, making the prediction performance inaccurate. In the embodiments of the present application, by optimizing the fitting points used in model parameter derivation, CCLM prediction can be made more robust. Specifically, at the cost of one mean calculation and at least two and at most four comparisons, the embodiments divide the distribution of the 4 adjacent reference points into three cases and use different fitting points to construct the prediction model for each case, thereby improving the robustness of CCLM prediction. Moreover, using different fitting points to derive the model parameters under the different distributions of these 4 adjacent reference pixels avoids the "divide by 3" operation, while other division operations, such as "divide by 2" and "divide by 4", can be entirely replaced by shifts, which also saves computational complexity. For example, based on the latest VVC reference software VTM5.0, under the All Intra configuration and the common test conditions on the test sequences required by JVET, the average BD-rate changes on the Y, Cb and Cr components are -0.03%, -0.18% and -0.17% respectively, which shows that the scheme of the embodiments of the present application brings a certain prediction performance improvement at a very small increase in complexity.
In still another embodiment of the present application, after the 4 adjacent reference pixels are obtained according to the latest VVC reference software VTM5.0, these 4 adjacent reference pixels may also be divided into two classes according to their luma mean; for example, the left class is denoted L and the right class is denoted R. After classification, when the number of pixels in class L or class R is 3, calculating the mean of that class would require a "divide by 3" operation; in the embodiments of the present application, the fitting point of that class may instead be the mean point of 2 pixels arbitrarily selected from the 3 pixels, rather than the mean point of all 3. This removes the "division" operation from the process of determining the fitting points (to be precise, the "divide by 3" operation is avoided, while the "divide by 2" and "divide by 4" operations can be replaced by shifts), thereby saving computational complexity. Therefore, in some embodiments, after S701, the method may further include:
grouping the preset number of adjacent reference pixels by the first mean point to obtain a third reference pixel subset and a fourth reference pixel subset;
determining a first fitting point based on the third reference pixel subset, and determining a second fitting point based on the fourth reference pixel subset;
determining model parameters based on the first fitting point and the second fitting point, and obtaining, according to the model parameters, the prediction model corresponding to the image component to be predicted; wherein the prediction model is used to perform prediction processing on the image component to be predicted, so as to obtain the predicted value corresponding to the image component to be predicted.
Further, determining the first fitting point based on the third reference pixel subset and determining the second fitting point based on the fourth reference pixel subset may include:
selecting some adjacent reference pixels from the third reference pixel subset, calculating the mean of the selected pixels, and using the calculated mean point as the first fitting point;
selecting some adjacent reference pixels from the fourth reference pixel subset, calculating the mean of the selected pixels, and using the calculated mean point as the second fitting point.
Further, determining the first fitting point based on the third reference pixel subset and determining the second fitting point based on the fourth reference pixel subset may include:
selecting one adjacent reference pixel from the third reference pixel subset as the first fitting point;
selecting one adjacent reference pixel from the fourth reference pixel subset as the second fitting point.
Further, assuming the preset number is 4, determining the first fitting point based on the third reference pixel subset and determining the second fitting point based on the fourth reference pixel subset includes:
if the third reference pixel subset includes 3 adjacent reference pixels and the fourth reference pixel subset includes 1 adjacent reference pixel, selecting 2 adjacent reference pixels from the third reference pixel subset, calculating the mean of the selected 2 pixels, using the calculated mean point as the first fitting point, and using the 1 adjacent reference pixel in the fourth reference pixel subset as the second fitting point;
if the third reference pixel subset includes 1 adjacent reference pixel and the fourth reference pixel subset includes 3 adjacent reference pixels, selecting 2 adjacent reference pixels from the fourth reference pixel subset, calculating the mean of the selected 2 pixels, using the calculated mean point as the second fitting point, and using the 1 adjacent reference pixel in the third reference pixel subset as the first fitting point.
It should be noted that the number of reference pixels selected here may be a power of two. For example, after the 4 adjacent reference pixels are divided into two classes (e.g., class L and class R) according to the luma mean, if the number of pixels in class L or class R is 3, then instead of arbitrarily selecting 2 of the 3 pixels to compute the mean point, it is also possible to select the 2 pixels with the smaller luma values if the 3 pixels are in class L, or the 2 pixels with the larger luma values if the 3 pixels are in class R.
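One possible realization of this rule can be sketched as follows (a hypothetical helper; the classification threshold, the rounded shift averages, and the handling of a class of size 1 are assumptions based on the description above):

```python
def fitting_points_lr(pixels):
    """Classify 4 (luma, chroma) reference pixels into class L (luma below the
    mean) and class R (luma at or above the mean), then pick each class's
    fitting point without any "divide by 3": a 3-pixel class contributes the
    mean of only 2 of its pixels (the 2 smaller-luma ones in L, the 2
    larger-luma ones in R)."""
    meanL = (sum(l for l, _ in pixels) + 2) >> 2           # luma mean of the 4 pixels
    left = sorted(p for p in pixels if p[0] < meanL)
    right = sorted(p for p in pixels if p[0] >= meanL)
    if not left or not right:
        return None   # cannot split into two classes; caller falls back to the chroma mean

    def avg2(a, b):   # rounded average via shift, no division
        return ((a[0] + b[0] + 1) >> 1, (a[1] + b[1] + 1) >> 1)

    def fit(group, take_small):
        if len(group) == 1:
            return group[0]
        pair = group[:2] if take_small else group[-2:]     # picks 2 of 3 when needed
        return avg2(*pair)

    return fit(left, True), fit(right, False)
```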
In addition, after the 4 adjacent reference pixels are divided into two classes (e.g., class L and class R) according to the luma mean, it is also possible to arbitrarily select 1 pixel from class L as the first fitting point and 1 pixel from class R as the second fitting point. In this way, the two fitting points may be used to construct the model parameters α and β; alternatively, the two fitting points may be used to derive the first model parameter α, and the mean point of the 4 pixels used to derive the second model parameter β. In the latter process, the operation of computing a mean point for each of class L and class R can be omitted.
Refer to FIG. 14, which shows a schematic structural diagram for determining fitting points provided by an embodiment of the present application. As shown in FIG. 14, after all adjacent reference pixels are obtained, their luma mean L_mean can be calculated, and these adjacent reference pixels are then divided into class L and class R using L_mean. The mean points (L_Lmean, C_Lmean) and (L_Rmean, C_Rmean) of the two classes can be used as the two fitting points, from which the first model parameter α is derived, while the mean point (L_mean, C_mean) is used to derive the second model parameter β:
α = (C Rmean − C Lmean ) / (L Rmean − L Lmean )
β = C mean − α × L mean
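A floating-point sketch of the derivation above: α from the two class-mean fitting points, β anchored on the overall mean point. The codec itself uses integer arithmetic with shifts; the function name is illustrative:

```python
def derive_model_params(l_mean_pt, r_mean_pt, overall_mean_pt):
    """alpha from the two class-mean fitting points, beta anchored on the
    overall mean point, giving the linear model C = alpha * L + beta."""
    l_luma, l_chroma = l_mean_pt
    r_luma, r_chroma = r_mean_pt
    m_luma, m_chroma = overall_mean_pt
    alpha = (r_chroma - l_chroma) / (r_luma - l_luma)
    beta = m_chroma - alpha * m_luma
    return alpha, beta

alpha, beta = derive_model_params((32, 70), (96, 120), (64, 95))
print(alpha, beta)  # 0.78125 45.0
```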
还需要说明的是，上述方式也可以应用于VTM5.0中的最多4个有效的相邻参考像素点的情况。例如，在通过VTM5.0获取到4个相邻参考像素点之后，首先获取4个像素点的亮度均值，用均值将这4个像素点分为两类，然后将两类的均值点作为两个拟合点来推导模型参数α和β。当这4个像素点按照亮度均值无法分为两类，即L类中像素点个数为0或者R类中像素点个数为0时，可以直接使用这4个像素点的色度均值作为色度预测值。另外，还可以使用这两类的均值点作为两个拟合点推导第一模型参数α，并使用4个像素点的均值点推导第二模型参数β。It should also be noted that the above approach can be applied to the case of at most 4 valid adjacent reference pixels in VTM5.0. For example, after 4 adjacent reference pixels are obtained through VTM5.0, the luma mean of the 4 pixels is first computed and used to divide them into two classes, and the mean points of the two classes then serve as the two fitting points for deriving the model parameters α and β. When the 4 pixels cannot be divided into two classes by the luma mean, i.e. when the number of pixels in the L class or in the R class is 0, the chroma mean of the 4 pixels can be used directly as the chroma prediction value. Alternatively, the mean points of the two classes can be used as the two fitting points to derive the first model parameter α, while the mean point of the 4 pixels is used to derive the second model parameter β.
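A sketch of this 4-pixel classification, including the degenerate fallback to the chroma mean; floating-point only (the VTM code is integer-based) and all names are illustrative:

```python
def classify_and_fit(points):
    """Split 4 (luma, chroma) neighbours into L/R classes by their luma mean
    and fit C = alpha * L + beta from the two class-mean points.

    Degenerate case (one class empty): return a flat model whose constant
    term is the chroma mean, used directly as the prediction.
    """
    mean = lambda pts, i: sum(p[i] for p in pts) / len(pts)
    luma_mean, chroma_mean = mean(points, 0), mean(points, 1)
    l_cls = [p for p in points if p[0] < luma_mean]
    r_cls = [p for p in points if p[0] >= luma_mean]
    if not l_cls or not r_cls:           # cannot split into two classes
        return 0.0, chroma_mean
    l_pt = (mean(l_cls, 0), mean(l_cls, 1))
    r_pt = (mean(r_cls, 0), mean(r_cls, 1))
    alpha = (r_pt[1] - l_pt[1]) / (r_pt[0] - l_pt[0])
    beta = chroma_mean - alpha * luma_mean
    return alpha, beta

print(classify_and_fit([(10, 20), (20, 30), (60, 70), (70, 80)]))  # (1.0, 10.0)
print(classify_and_fit([(50, 64)] * 4))                            # (0.0, 64.0)
```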
本申请的再一实施例中，在根据已经获取的两个拟合点来推导第一模型参数α的基础上，还可以直接使用已经获取的两个均值点的均值点推导第二模型参数β。基于VTM5.0，两个均值点是向下取整为整型，此时对这两个均值点再次求取均值，与对4个相邻参考像素点直接求取均值所得的结果可能会不同，从而使得预测性能略有差别。In yet another embodiment of the present application, on the basis of deriving the first model parameter α from the two fitting points that have been obtained, the second model parameter β may also be derived directly from the mean point of those two mean points. In VTM5.0 the two mean points are rounded down to integers, so taking the mean of these two mean points again may differ from directly averaging the 4 adjacent reference pixels, which makes the prediction performance slightly different.
参见图15，其示出了本申请实施例提供的一种预测模型的结构示意图。如图15所示，灰色斜线表示基于两个拟合点所计算出的斜率，即第一模型参数α；然后将灰色斜线进行平移，该平移量为第二模型参数β，所得的加粗黑色斜线即为最终的预测模型。具体地，基于JVET-N0524提案，可以在VTM4.0中CCLM技术的基础上，使用编码块对应的所有相邻参考点中的亮度值最大和最小的两个点作为两个拟合点推导第一模型参数α，同时使用所有相邻参考像素点的均值点推导第二模型参数β。除此之外，JVET-N0524提案也可以应用于VTM5.0中的最多4个有效的相邻参考像素点的情况，这时候可以使用4个像素点中两个较大点的均值点和两个较小点的均值点作为两个拟合点推导第一模型参数α，然后使用这4个像素点的均值点推导第二模型参数β。Referring to FIG. 15, which shows a schematic diagram of a prediction model according to an embodiment of the present application: the gray oblique line represents the slope computed from the two fitting points, i.e. the first model parameter α; the gray line is then translated, the translation amount being the second model parameter β, and the resulting thick black oblique line is the final prediction model. Specifically, based on the JVET-N0524 proposal, on top of the CCLM technique in VTM4.0, the two points with the largest and smallest luma values among all adjacent reference points corresponding to the coding block can be used as the two fitting points to derive the first model parameter α, while the mean point of all adjacent reference pixels is used to derive the second model parameter β. In addition, the JVET-N0524 proposal can also be applied to the case of at most 4 valid adjacent reference pixels in VTM5.0; in that case, the mean point of the two larger-luma points and the mean point of the two smaller-luma points among the 4 pixels are used as the two fitting points to derive the first model parameter α, and the mean point of the 4 pixels is then used to derive the second model parameter β.
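A hedged sketch of the JVET-N0524 idea just described: α from the neighbours with the largest and smallest luma, β from the mean of all neighbours. Floating-point only; the function name is an assumption for illustration:

```python
def fit_minmax(points):
    """alpha from the extreme-luma neighbours, beta from the overall mean."""
    lo = min(points, key=lambda p: p[0])   # smallest-luma neighbour
    hi = max(points, key=lambda p: p[0])   # largest-luma neighbour
    n = len(points)
    luma_mean = sum(p[0] for p in points) / n
    chroma_mean = sum(p[1] for p in points) / n
    alpha = (hi[1] - lo[1]) / (hi[0] - lo[0])
    beta = chroma_mean - alpha * luma_mean
    return alpha, beta

print(fit_minmax([(10, 20), (20, 30), (60, 70), (70, 80)]))  # (1.0, 10.0)
```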
本实施例提供了一种图像分量预测方法，通过本实施例的技术方案，在经过第二比较处理之后，可以将预设数量的相邻参考像素点分成3种情况，针对这3种情况将会使用不同的两个拟合点来构建预测模型，从而提高了CCLM预测的鲁棒性；也就是说，通过对模型参数推导所使用的拟合点进行优化，可以使得构建的预测模型更准确，还能够提升视频图像的编解码预测性能。This embodiment provides an image component prediction method. With the technical solution of this embodiment, after the second comparison processing, a preset number of adjacent reference pixels can fall into 3 cases, and for these 3 cases two different fitting points are used to construct the prediction model, which improves the robustness of CCLM prediction; that is, by optimizing the fitting points used in deriving the model parameters, the constructed prediction model becomes more accurate, and the coding and decoding prediction performance for video images can also be improved.
基于前述实施例相同的发明构思,参见图16,其示出了本申请实施例提供的一种图像分量预测装置160的组成结构示意图。该图像分量预测装置160可以包括:获取单元1601、比较单元1602、计算单元1603和预测单元1604,其中,Based on the same inventive concept as the foregoing embodiment, refer to FIG. 16, which shows a schematic diagram of the composition structure of an image component prediction apparatus 160 provided by an embodiment of the present application. The image component prediction device 160 may include: an acquisition unit 1601, a comparison unit 1602, a calculation unit 1603, and a prediction unit 1604, where
所述获取单元1601，配置为获取视频图像中编码块的待预测图像分量对应的N个相邻参考像素点；其中，所述N个相邻参考像素点为与所述编码块相邻的参考像素点，N为预设的整数值；The acquiring unit 1601 is configured to acquire N adjacent reference pixels corresponding to the image component to be predicted of a coding block in a video image; wherein the N adjacent reference pixels are reference pixels adjacent to the coding block, and N is a preset integer value;
所述比较单元1602，配置为比较所述N个相邻参考像素点对应的N个第一图像分量值，确定第一参考像素子集和第二参考像素子集；其中，预设数量的相邻参考像素点对应预设数量的第一图像分量值，所述第一参考像素子集中包括：在预设数量的第一图像分量值中最小第一图像分量值和次最小第一图像分量值所对应的两个相邻参考像素点，所述第二参考像素子集中包括：在预设数量的第一图像分量值中最大第一图像分量值和次最大第一图像分量值所对应的两个相邻参考像素点；The comparing unit 1602 is configured to compare the N first image component values corresponding to the N adjacent reference pixels to determine a first reference pixel subset and a second reference pixel subset; wherein a preset number of adjacent reference pixels correspond to a preset number of first image component values, the first reference pixel subset includes the two adjacent reference pixels corresponding to the smallest and second-smallest first image component values among the preset number of first image component values, and the second reference pixel subset includes the two adjacent reference pixels corresponding to the largest and second-largest first image component values among the preset number of first image component values;
所述计算单元1603,配置为计算所述N个相邻参考像素点的均值,得到第一均值点;The calculation unit 1603 is configured to calculate the average value of the N adjacent reference pixel points to obtain a first average value point;
所述比较单元1602,还配置为通过所述第一均值点对所述第一参考像素子集和/或第二参考像素子集进行第一图像分量值的比较,确定两个拟合点;The comparison unit 1602 is further configured to compare the values of the first image component of the first reference pixel subset and/or the second reference pixel subset through the first average point, and determine two fitting points;
所述预测单元1604,配置为基于所述两个拟合点,确定模型参数,根据所述模型参数得到所述待预测图像分量对应的预测模型;其中,所述预测模型用于实现对所述待预测图像分量的预测处理,以得到所述待预测图像分量对应的预测值。The prediction unit 1604 is configured to determine model parameters based on the two fitting points, and obtain a prediction model corresponding to the image component to be predicted according to the model parameters; wherein, the prediction model is used to Prediction processing of the image component to be predicted to obtain the predicted value corresponding to the image component to be predicted.
在上述方案中,参见图16,该图像分量预测装置160还可以包括筛选单元1605,In the above solution, referring to FIG. 16, the image component prediction device 160 may further include a screening unit 1605,
所述获取单元1601,还配置为获取视频图像中编码块的待预测图像分量对应的第一参考像素集合;The obtaining unit 1601 is further configured to obtain a first reference pixel set corresponding to the image component to be predicted of the coding block in the video image;
所述筛选单元1605,配置为对所述第一参考像素集合进行筛选处理,得到第二参考像素集合;其中,所述第二参考像素集合包括有N个相邻参考像素点。The screening unit 1605 is configured to perform screening processing on the first reference pixel set to obtain a second reference pixel set; wherein, the second reference pixel set includes N adjacent reference pixels.
在上述方案中,所述获取单元1601,具体配置为获取与所述编码块至少一个边相邻的参考像素点;其中,所述至少一个边包括所述编码块的左侧边和/或所述编码块的上侧边;以及基于所述参考像素点,组成所述待预测图像分量对应的第一参考像素集合。In the above solution, the acquiring unit 1601 is specifically configured to acquire reference pixels adjacent to at least one side of the encoding block; wherein, the at least one side includes the left side of the encoding block and/or the The upper side of the coding block; and based on the reference pixels, a first reference pixel set corresponding to the image component to be predicted is formed.
在上述方案中，所述获取单元1601，具体配置为获取与所述编码块相邻的参考行或者参考列中的参考像素点；其中，所述参考行是由所述编码块的上侧边以及右上侧边所相邻的行组成的，所述参考列是由所述编码块的左侧边以及左下侧边所相邻的列组成的；以及基于所述参考像素点，组成所述待预测图像分量对应的第一参考像素集合。In the above solution, the acquiring unit 1601 is specifically configured to acquire reference pixels in a reference row or a reference column adjacent to the coding block, wherein the reference row consists of the rows adjacent to the upper side and the upper-right side of the coding block, and the reference column consists of the columns adjacent to the left side and the lower-left side of the coding block; and to form, based on the reference pixels, the first reference pixel set corresponding to the image component to be predicted.
在上述方案中,所述筛选单元1605,具体配置为基于所述第一参考像素集合中每个相邻参考像素点对应的像素位置和/或图像分量强度,确定待选择像素点位置;以及根据确定的待选择像素点位置,从所述第一参考像素集合中选取与所述待选择像素点位置对应的相邻参考像素点,将选取得到的相邻参考像素点组成第二参考像素集合;其中,所述第二参考像素集合包括有N个相邻参考像素点。In the above solution, the screening unit 1605 is specifically configured to determine the position of the pixel to be selected based on the pixel position and/or image component intensity corresponding to each adjacent reference pixel in the first reference pixel set; and Determining the position of the pixel to be selected, selecting adjacent reference pixels corresponding to the position of the pixel to be selected from the first reference pixel set, and composing the selected adjacent reference pixels into a second reference pixel set; Wherein, the second reference pixel set includes N adjacent reference pixels.
在上述方案中,所述获取单元1601,还配置为基于所述预设数量的相邻参考像素点,获取预设数量的第一图像分量值;In the above solution, the acquiring unit 1601 is further configured to acquire a preset number of first image component values based on the preset number of adjacent reference pixels;
所述比较单元1602，具体配置为对预设数量的第一图像分量值进行多次比较，得到最小第一图像分量值和次最小第一图像分量值所组成的第一数组以及最大第一图像分量值和次最大第一图像分量值所组成的第二数组；以及将第一数组所对应的两个相邻参考像素点放入第一参考像素子集中，得到所述第一参考像素子集；以及将第二数组所对应的两个相邻参考像素点放入第二参考像素子集中，得到所述第二参考像素子集。The comparing unit 1602 is specifically configured to perform multiple comparisons on the preset number of first image component values to obtain a first array consisting of the smallest and second-smallest first image component values and a second array consisting of the largest and second-largest first image component values; to place the two adjacent reference pixels corresponding to the first array into the first reference pixel subset to obtain the first reference pixel subset; and to place the two adjacent reference pixels corresponding to the second array into the second reference pixel subset to obtain the second reference pixel subset.
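The comparisons described above can be sketched as follows; for brevity this illustrative helper uses sorting, whereas the reference software performs explicit pairwise compare-and-swap operations:

```python
def split_extremes(values):
    """Return the indices of the (min, 2nd-min) pair and the (max, 2nd-max)
    pair among the first-image-component values of the 4 neighbours."""
    idx = sorted(range(len(values)), key=lambda i: values[i])
    first_array = [idx[0], idx[1]]     # smallest and second-smallest
    second_array = [idx[-1], idx[-2]]  # largest and second-largest
    return first_array, second_array

print(split_extremes([50, 10, 90, 30]))  # ([1, 3], [2, 0])
```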
在上述方案中，所述获取单元1601，还配置为基于所述N个相邻参考像素点中每个相邻参考像素点对应的第一图像分量值和第二图像分量值，获得N个第一图像分量对应的第一图像分量均值和N个第二图像分量对应的第二图像分量均值，得到所述第一均值点。In the above solution, the acquiring unit 1601 is further configured to obtain, based on the first image component value and the second image component value corresponding to each of the N adjacent reference pixels, the first image component mean of the N first image components and the second image component mean of the N second image components, so as to obtain the first mean point.
在上述方案中，所述获取单元1601，还配置为基于所述第一参考像素子集中每个相邻参考像素点对应的第一图像分量值和第二图像分量值，获得所述第一参考像素子集中多个第一图像分量对应的第一图像分量均值和多个第二图像分量对应的第二图像分量均值，得到第二均值点；以及基于所述第二参考像素子集中每个相邻参考像素点对应的第一图像分量值和第二图像分量值，获得所述第二参考像素子集中多个第一图像分量对应的第一图像分量均值和多个第二图像分量对应的第二图像分量均值，得到第三均值点。In the above solution, the acquiring unit 1601 is further configured to obtain, based on the first image component value and the second image component value corresponding to each adjacent reference pixel in the first reference pixel subset, the first image component mean of the multiple first image components and the second image component mean of the multiple second image components in the first reference pixel subset, so as to obtain a second mean point; and to obtain, based on the first image component value and the second image component value corresponding to each adjacent reference pixel in the second reference pixel subset, the first image component mean of the multiple first image components and the second image component mean of the multiple second image components in the second reference pixel subset, so as to obtain a third mean point.
在上述方案中，所述比较单元1602，具体配置为将第一均值点的第一图像分量与所述第一参考像素子集中次最小第一图像分量值进行比较；以及当第一均值点的第一图像分量大于或等于所述第一参考像素子集中次最小第一图像分量值时，将第一均值点的第一图像分量与所述第二参考像素子集中次最大第一图像分量值进行比较；以及当第一均值点的第一图像分量小于或等于所述第二参考像素子集中次最大第一图像分量值时，将第二均值点作为第一拟合点，将第三均值点作为第二拟合点；其中，所述两个拟合点包括第一拟合点和第二拟合点。In the above solution, the comparing unit 1602 is specifically configured to compare the first image component of the first mean point with the second-smallest first image component value in the first reference pixel subset; when the first image component of the first mean point is greater than or equal to the second-smallest first image component value in the first reference pixel subset, to compare the first image component of the first mean point with the second-largest first image component value in the second reference pixel subset; and when the first image component of the first mean point is less than or equal to the second-largest first image component value in the second reference pixel subset, to take the second mean point as the first fitting point and the third mean point as the second fitting point; wherein the two fitting points include the first fitting point and the second fitting point.
在上述方案中，所述比较单元1602，还配置为对所述第一参考像素子集中两个相邻参考像素点各自对应的第一图像分量值进行比较；以及根据比较的结果，确定两个第一图像分量值中的最小第一图像分量值和次最小第一图像分量值，并将最小第一图像分量值对应的像素点作为第一相邻参考像素点，次最小第一图像分量值对应的像素点作为第二相邻参考像素点；In the above solution, the comparing unit 1602 is further configured to compare the first image component values respectively corresponding to the two adjacent reference pixels in the first reference pixel subset; and, according to the comparison result, to determine the smallest and second-smallest of the two first image component values, taking the pixel corresponding to the smallest first image component value as a first adjacent reference pixel and the pixel corresponding to the second-smallest first image component value as a second adjacent reference pixel;
所述比较单元1602，还配置为对所述第二参考像素子集中两个相邻参考像素点各自对应的第一图像分量值进行比较；以及根据比较的结果，确定两个第一图像分量值中的次最大第一图像分量值和最大第一图像分量值，并将次最大第一图像分量值对应的像素点作为第三相邻参考像素点，最大第一图像分量值对应的像素点作为第四相邻参考像素点。The comparing unit 1602 is further configured to compare the first image component values respectively corresponding to the two adjacent reference pixels in the second reference pixel subset; and, according to the comparison result, to determine the second-largest and largest of the two first image component values, taking the pixel corresponding to the second-largest first image component value as a third adjacent reference pixel and the pixel corresponding to the largest first image component value as a fourth adjacent reference pixel.
在上述方案中，所述比较单元1602，还配置为当第一均值点的第一图像分量小于次最小第一图像分量值时，将第一相邻参考像素点作为第一拟合点，将第三均值点作为第二拟合点。In the above solution, the comparing unit 1602 is further configured to, when the first image component of the first mean point is less than the second-smallest first image component value, take the first adjacent reference pixel as the first fitting point and the third mean point as the second fitting point.
在上述方案中，所述比较单元1602，还配置为当第一均值点的第一图像分量大于次最大第一图像分量值时，将第二均值点作为第一拟合点，将第四相邻参考像素点作为第二拟合点。In the above solution, the comparing unit 1602 is further configured to, when the first image component of the first mean point is greater than the second-largest first image component value, take the second mean point as the first fitting point and the fourth adjacent reference pixel as the second fitting point.
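The three cases handled by the comparing unit can be sketched together; an illustrative floating-point helper with the 4 neighbours given as (luma, chroma) pairs:

```python
def choose_fitting_points(pts):
    """Pick the two fitting points by comparing the overall luma mean with
    the 2nd-smallest and 2nd-largest luma among the 4 neighbours."""
    s = sorted(pts, key=lambda p: p[0])            # ordered by luma
    lo_pair, hi_pair = s[:2], s[2:]                # {min, 2nd-min}, {2nd-max, max}
    mean = lambda q: (sum(p[0] for p in q) / len(q),
                      sum(p[1] for p in q) / len(q))
    luma_mean = mean(pts)[0]
    if luma_mean < lo_pair[1][0]:      # below the 2nd-smallest luma
        return s[0], mean(hi_pair)     # min point + mean of the larger pair
    if luma_mean > hi_pair[0][0]:      # above the 2nd-largest luma
        return mean(lo_pair), s[3]     # mean of the smaller pair + max point
    return mean(lo_pair), mean(hi_pair)

p1, p2 = choose_fitting_points([(10, 20), (20, 30), (60, 70), (70, 80)])
print(p1, p2)  # (15.0, 25.0) (65.0, 75.0)
```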
在上述方案中,所述计算单元1603,还配置为基于所述第一拟合点和所述第二拟合点,通过第一预设因子计算模型获得所述第一模型参数;以及基于所述第一模型参数以及所述第一拟合点,通过第二预设因子计算模型获得所述第二模型参数。In the above solution, the calculation unit 1603 is further configured to obtain the first model parameter based on the first fitting point and the second fitting point, by calculating a model with a first preset factor; and For the first model parameter and the first fitting point, the second model parameter is obtained through a second preset factor calculation model.
在上述方案中,所述计算单元1603,还配置为基于所述第一模型参数以及所述第一均值点,通过第二预设因子计算模型获得所述第二模型参数。In the above solution, the calculation unit 1603 is further configured to obtain the second model parameter through a second preset factor calculation model based on the first model parameter and the first average point.
在上述方案中,参见图16,该图像分量预测装置160还可以包括分组单元1606和确定单元1607,其中,In the above solution, referring to FIG. 16, the image component prediction device 160 may further include a grouping unit 1606 and a determining unit 1607, where:
所述分组单元1606,配置为通过第一均值点对所述N个相邻参考像素点进行分组处理,得到第三参考像素子集和第四参考像素子集;The grouping unit 1606 is configured to perform grouping processing on the N adjacent reference pixels through a first average point to obtain a third reference pixel subset and a fourth reference pixel subset;
所述确定单元1607,配置为基于第三参考像素子集,确定第一拟合点;基于第四参考像素子集确定第二拟合点;The determining unit 1607 is configured to determine the first fitting point based on the third reference pixel subset; determine the second fitting point based on the fourth reference pixel subset;
所述预测单元1604,具体配置为基于所述第一拟合点和所述第二拟合点,确定模型参数,根据所述模型参数得到所述待预测图像分量对应的预测模型;其中,所述预测模型用于实现对所述待预测图像分量的预测处理,以得到所述待预测图像分量对应的预测值。The prediction unit 1604 is specifically configured to determine model parameters based on the first fitting point and the second fitting point, and obtain a prediction model corresponding to the image component to be predicted according to the model parameters; wherein, The prediction model is used to implement prediction processing on the image component to be predicted, so as to obtain the predicted value corresponding to the image component to be predicted.
在上述方案中，所述确定单元1607，具体配置为从所述第三参考像素子集中选取部分相邻参考像素点，对所述部分相邻参考像素点进行均值计算，将计算得到的均值点作为所述第一拟合点；以及从所述第四参考像素子集中选取部分相邻参考像素点，对所述部分相邻参考像素点进行均值计算，将计算得到的均值点作为所述第二拟合点。In the above solution, the determining unit 1607 is specifically configured to select some adjacent reference pixels from the third reference pixel subset, compute the mean of those adjacent reference pixels, and take the computed mean point as the first fitting point; and to select some adjacent reference pixels from the fourth reference pixel subset, compute the mean of those adjacent reference pixels, and take the computed mean point as the second fitting point.
在上述方案中,所述确定单元1607,具体配置为从所述第三参考像素子集中选取其中一个相邻参考像素点作为所述第一拟合点;以及从所述第四参考像素子集中选取其中一个相邻参考像素点作为所述第二拟合点。In the above solution, the determining unit 1607 is specifically configured to select one of the adjacent reference pixel points from the third reference pixel subset as the first fitting point; and from the fourth reference pixel subset One of the adjacent reference pixels is selected as the second fitting point.
在上述方案中，N的取值为4；所述确定单元1607，具体配置为若所述第三参考像素子集包括3个相邻参考像素点，所述第四参考像素子集包括1个相邻参考像素点，则从所述第三参考像素子集中选取2个相邻参考像素点，对选取的2个相邻参考像素点进行均值计算，将计算得到的均值点作为所述第一拟合点，将所述第四参考像素子集中的1个相邻参考像素点作为第二拟合点；以及若所述第三参考像素子集包括1个相邻参考像素点，所述第四参考像素子集包括3个相邻参考像素点，则从所述第四参考像素子集中选取2个相邻参考像素点，对选取的2个相邻参考像素点进行均值计算，将计算得到的均值点作为所述第二拟合点，将所述第三参考像素子集中的1个相邻参考像素点作为第一拟合点。In the above solution, the value of N is 4; the determining unit 1607 is specifically configured such that, if the third reference pixel subset includes 3 adjacent reference pixels and the fourth reference pixel subset includes 1 adjacent reference pixel, 2 adjacent reference pixels are selected from the third reference pixel subset, the mean of the 2 selected pixels is computed, the resulting mean point is taken as the first fitting point, and the single adjacent reference pixel in the fourth reference pixel subset is taken as the second fitting point; and, if the third reference pixel subset includes 1 adjacent reference pixel and the fourth reference pixel subset includes 3 adjacent reference pixels, 2 adjacent reference pixels are selected from the fourth reference pixel subset, the mean of the 2 selected pixels is computed, the resulting mean point is taken as the second fitting point, and the single adjacent reference pixel in the third reference pixel subset is taken as the first fitting point.
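For the 3/1 split just described, a minimal sketch that arbitrarily averages the first 2 points of the 3-point subset; all names are illustrative assumptions:

```python
def fit_points_3_1(third_subset, fourth_subset):
    """Fitting points for N = 4 when the grouping yields a 3/1 split:
    average 2 points of the 3-point subset; use the lone point directly."""
    avg2 = lambda a, b: ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    if len(third_subset) == 3 and len(fourth_subset) == 1:
        return avg2(third_subset[0], third_subset[1]), fourth_subset[0]
    if len(third_subset) == 1 and len(fourth_subset) == 3:
        return third_subset[0], avg2(fourth_subset[0], fourth_subset[1])
    raise ValueError("expected a 3/1 split of the 4 neighbours")

print(fit_points_3_1([(10, 20), (20, 30), (30, 40)], [(90, 100)]))
# ((15.0, 25.0), (90, 100))
```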
在上述方案中,所述预测单元1604,具体配置为基于所述预测模型对所述编码块中每个像素点的待预测图像分量进行预测处理,得到每个像素点的待预测图像分量对应的预测值。In the above solution, the prediction unit 1604 is specifically configured to perform prediction processing on the image component to be predicted for each pixel in the coding block based on the prediction model to obtain the image component to be predicted for each pixel. Predictive value.
可以理解地,在本实施例中,“单元”可以是部分电路、部分处理器、部分程序或软件等等,当然也可以是模块,还可以是非模块化的。而且在本实施例中的各组成部分可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。It can be understood that, in this embodiment, a "unit" may be a part of a circuit, a part of a processor, a part of a program, or software, etc., of course, may also be a module, or may be non-modular. Moreover, the various components in this embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above-mentioned integrated unit can be realized in the form of hardware or software function module.
所述集成的单元如果以软件功能模块的形式实现并非作为独立的产品进行销售或使用时，可以存储在一个计算机可读取存储介质中，基于这样的理解，本实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来，该计算机软件产品存储在一个存储介质中，包括若干指令用以使得一台计算机设备(可以是个人计算机，服务器，或者网络设备等)或processor(处理器)执行本实施例所述方法的全部或部分步骤。而前述的存储介质包括：U盘、移动硬盘、只读存储器(Read Only Memory，ROM)、随机存取存储器(Random Access Memory，RAM)、磁碟或者光盘等各种可以存储程序代码的介质。If the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this embodiment, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method described in this embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
因此,本实施例提供了一种计算机存储介质,该计算机存储介质存储有图像分量预测程序,所述图像分量预测程序被至少一个处理器执行时实现前述实施例中任一项所述的方法。Therefore, this embodiment provides a computer storage medium that stores an image component prediction program that implements the method described in any one of the foregoing embodiments when the image component prediction program is executed by at least one processor.
基于上述图像分量预测装置160的组成以及计算机存储介质，参见图17，其示出了本申请实施例提供的图像分量预测装置160的具体硬件结构，可以包括：网络接口1701、存储器1702和处理器1703；各个组件通过总线系统1704耦合在一起。可理解，总线系统1704用于实现这些组件之间的连接通信。总线系统1704除包括数据总线之外，还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见，在图17中将各种总线都标为总线系统1704。其中，网络接口1701，用于在与其他外部网元之间进行收发信息过程中，信号的接收和发送；Based on the composition of the image component prediction device 160 and the computer storage medium described above, refer to FIG. 17, which shows the specific hardware structure of the image component prediction device 160 provided by an embodiment of the present application, which may include a network interface 1701, a memory 1702, and a processor 1703; the components are coupled together through a bus system 1704. It can be understood that the bus system 1704 is used to implement connection and communication between these components. In addition to a data bus, the bus system 1704 also includes a power bus, a control bus, and a status signal bus. However, for clarity of description, the various buses are all marked as the bus system 1704 in FIG. 17. The network interface 1701 is used to receive and send signals in the process of exchanging information with other external network elements;
存储器1702,用于存储能够在处理器1703上运行的计算机程序;The memory 1702 is configured to store computer programs that can run on the processor 1703;
处理器1703,用于在运行所述计算机程序时,执行:The processor 1703 is configured to execute: when the computer program is running:
获取视频图像中编码块的待预测图像分量对应的N个相邻参考像素点;其中,所述N个相邻参考像素点为与所述编码块相邻的参考像素点,N为预设的整数值;Acquire N adjacent reference pixels corresponding to the to-be-predicted image components of the encoding block in the video image; wherein, the N adjacent reference pixels are reference pixels adjacent to the encoding block, and N is a preset Integer value
比较所述N个相邻参考像素点对应的N个第一图像分量值，确定第一参考像素子集和第二参考像素子集；其中，预设数量的相邻参考像素点对应预设数量的第一图像分量值，所述第一参考像素子集中包括：在预设数量的第一图像分量值中最小第一图像分量值和次最小第一图像分量值所对应的两个相邻参考像素点，所述第二参考像素子集中包括：在预设数量的第一图像分量值中最大第一图像分量值和次最大第一图像分量值所对应的两个相邻参考像素点；comparing the N first image component values corresponding to the N adjacent reference pixels to determine a first reference pixel subset and a second reference pixel subset; wherein a preset number of adjacent reference pixels correspond to a preset number of first image component values, the first reference pixel subset includes the two adjacent reference pixels corresponding to the smallest and second-smallest first image component values among the preset number of first image component values, and the second reference pixel subset includes the two adjacent reference pixels corresponding to the largest and second-largest first image component values among the preset number of first image component values;
计算所述N个相邻参考像素点的均值,得到第一均值点;Calculating the average value of the N adjacent reference pixel points to obtain the first average value point;
通过所述第一均值点对所述第一参考像素子集和/或第二参考像素子集进行第一图像分量值的比较,确定两个拟合点;Comparing the values of the first image component to the first reference pixel subset and/or the second reference pixel subset by using the first average point to determine two fitting points;
基于所述两个拟合点,确定模型参数,根据所述模型参数得到所述待预测图像分量对应的预测模型;其中,所述预测模型用于实现对所述待预测图像分量的预测处理,以得到所述待预测图像分量对应的预测值。Based on the two fitting points, the model parameters are determined, and the prediction model corresponding to the image component to be predicted is obtained according to the model parameters; wherein, the prediction model is used to realize the prediction processing of the image component to be predicted, To obtain the predicted value corresponding to the image component to be predicted.
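Putting the processor steps together, a compact end-to-end sketch (floating-point, N = 4, helper names illustrative): classify the neighbours, derive α and β, then predict the component for each sample of the block, clipped to the sample range:

```python
def predict_block(neighbours, luma_block, bit_depth=8):
    """End-to-end sketch: derive C = alpha * L + beta from 4 (luma, chroma)
    neighbours, then predict the chroma of every sample, clipped to range."""
    mean = lambda pts, i: sum(p[i] for p in pts) / len(pts)
    lm, cm = mean(neighbours, 0), mean(neighbours, 1)
    lo = [p for p in neighbours if p[0] < lm]
    hi = [p for p in neighbours if p[0] >= lm]
    if not lo or not hi:                       # degenerate: flat model
        alpha, beta = 0.0, cm
    else:
        alpha = (mean(hi, 1) - mean(lo, 1)) / (mean(hi, 0) - mean(lo, 0))
        beta = cm - alpha * lm
    top = (1 << bit_depth) - 1
    return [[min(max(round(alpha * l + beta), 0), top) for l in row]
            for row in luma_block]

pred = predict_block([(10, 20), (20, 30), (60, 70), (70, 80)], [[10, 300]])
print(pred)  # [[20, 255]]
```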
可以理解,本申请实施例中的存储器1702可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDRSDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(Synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DRRAM)。本文描述的系统和方法的存储器1702旨在包括但不限于这些和任意其它适合类型的存储器。It can be understood that the memory 1702 in the embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory. Among them, the non-volatile memory can be read-only memory (Read-Only Memory, ROM), programmable read-only memory (Programmable ROM, PROM), erasable programmable read-only memory (Erasable PROM, EPROM), and electrically available Erase programmable read-only memory (Electrically EPROM, EEPROM) or flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), which is used as an external cache. By way of exemplary but not restrictive description, many forms of RAM are available, such as static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (Double Data Rate SDRAM, DDRSDRAM), Enhanced Synchronous Dynamic Random Access Memory (Enhanced SDRAM, ESDRAM), Synchronous Link Dynamic Random Access Memory (Synchlink DRAM, SLDRAM) And Direct Rambus RAM (DRRAM). The memory 1702 of the systems and methods described herein is intended to include, but is not limited to, these and any other suitable types of memory.
而处理器1703可能是一种集成电路芯片，具有信号的处理能力。在实现过程中，上述方法的各步骤可以通过处理器1703中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器1703可以是通用处理器、数字信号处理器(Digital Signal Processor，DSP)、专用集成电路(Application Specific Integrated Circuit，ASIC)、现成可编程门阵列(Field Programmable Gate Array，FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成，或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器，闪存、只读存储器，可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器1702，处理器1703读取存储器1702中的信息，结合其硬件完成上述方法的步骤。The processor 1703 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the foregoing method can be completed by integrated logic circuits of hardware in the processor 1703 or by instructions in the form of software. The aforementioned processor 1703 may be a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, which can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 1702, and the processor 1703 reads the information in the memory 1702 and completes the steps of the foregoing method in combination with its hardware.
可以理解，本文描述的这些实施例可以用硬件、软件、固件、中间件、微码或其组合来实现。对于硬件实现，处理单元可以实现在一个或多个专用集成电路(Application Specific Integrated Circuits，ASIC)、数字信号处理器(Digital Signal Processing，DSP)、数字信号处理设备(DSP Device，DSPD)、可编程逻辑设备(Programmable Logic Device，PLD)、现场可编程门阵列(Field-Programmable Gate Array，FPGA)、通用处理器、控制器、微控制器、微处理器、用于执行本申请所述功能的其它电子单元或其组合中。It can be understood that the embodiments described herein may be implemented by hardware, software, firmware, middleware, microcode, or a combination thereof. For a hardware implementation, the processing unit may be implemented in one or more application specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described in this application, or a combination thereof.
对于软件实现,可通过执行本文所述功能的模块(例如过程、函数等)来实现本文所述的技术。软件代码可存储在存储器中并通过处理器执行。存储器可以在处理器中或在处理器外部实现。For software implementation, the technology described herein can be implemented through modules (such as procedures, functions, etc.) that perform the functions described herein. The software codes can be stored in the memory and executed by the processor. The memory can be implemented in the processor or external to the processor.
可选地,作为另一个实施例,处理器1703还配置为在运行所述计算机程序时,执行前述实施例中任一项所述的方法。Optionally, as another embodiment, the processor 1703 is further configured to execute the method described in any one of the foregoing embodiments when the computer program is running.
参见图18,其示出了本申请实施例提供的一种编码器的组成结构示意图。如图18所示,编码器180至少可以包括前述实施例中任一项所述的图像分量预测装置160。Refer to FIG. 18, which shows a schematic diagram of the composition structure of an encoder provided by an embodiment of the present application. As shown in FIG. 18, the encoder 180 may at least include the image component prediction device 160 described in any of the foregoing embodiments.
参见图19，其示出了本申请实施例提供的一种解码器的组成结构示意图。如图19所示，解码器190至少可以包括前述实施例中任一项所述的图像分量预测装置160。Refer to FIG. 19, which shows a schematic diagram of the composition structure of a decoder provided by an embodiment of the present application. As shown in FIG. 19, the decoder 190 may at least include the image component prediction device 160 described in any of the foregoing embodiments.
需要说明的是，在本申请中，术语"包括"、"包含"或者其任何其他变体意在涵盖非排他性的包含，从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素，而且还包括没有明确列出的其他要素，或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下，由语句"包括一个……"限定的要素，并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。It should be noted that, in this application, the terms "include", "comprise", or any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further restrictions, an element defined by the phrase "including a..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
上述本申请实施例序号仅仅为了描述，不代表实施例的优劣。The serial numbers of the foregoing embodiments of the present application are for description only and do not represent the relative merits of the embodiments.
本申请所提供的几个方法实施例中所揭露的方法,在不冲突的情况下可以任意组合,得到新的方法实施例。The methods disclosed in the several method embodiments provided in this application can be combined arbitrarily without conflict to obtain new method embodiments.
本申请所提供的几个产品实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的产品实施例。The features disclosed in the several product embodiments provided in this application can be combined arbitrarily without conflict to obtain new product embodiments.
本申请所提供的几个方法或设备实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的方法实施例或设备实施例。The features disclosed in the several method or device embodiments provided in this application can be combined arbitrarily without conflict to obtain a new method embodiment or device embodiment.
以上所述，仅为本申请的具体实施方式，但本申请的保护范围并不局限于此，任何熟悉本技术领域的技术人员在本申请揭露的技术范围内，可轻易想到变化或替换，都应涵盖在本申请的保护范围之内。因此，本申请的保护范围应以所述权利要求的保护范围为准。The above are only specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.
工业实用性Industrial applicability
本申请实施例中，首先针对视频图像中编码块的待预测图像分量，获取N个相邻参考像素点，这里的N个相邻参考像素点为与所述编码块相邻的参考像素点，N为预设的整数值；然后比较所述N个相邻参考像素点对应的N个第一图像分量值，确定第一参考像素子集和第二参考像素子集，该第一参考像素子集中包括：在预设数量的第一图像分量值中最小第一图像分量值和次最小第一图像分量值所对应的两个相邻参考像素点，该第二参考像素子集中包括：在预设数量的第一图像分量值中最大第一图像分量值和次最大第一图像分量值所对应的两个相邻参考像素点；计算所述N个相邻参考像素点的均值，得到第一均值点；通过第一均值点对第一参考像素子集和/或第二参考像素子集进行第一图像分量值的比较，确定两个拟合点；再基于两个拟合点确定模型参数，根据模型参数得到待预测图像分量对应的预测模型，以得到待预测图像分量对应的预测值；这样，经过第二比较处理之后，可以将预设数量的相邻参考像素点分成多种情况，针对这多种情况使用不同的两个拟合点来构建预测模型，可以提高CCLM预测的鲁棒性；也就是说，通过对模型参数推导所使用的拟合点进行优化，从而使得构建的预测模型更准确，而且提升了视频图像的编解码预测性能。In the embodiments of the present application, for the image component to be predicted of a coding block in a video image, N adjacent reference pixels are first obtained, where the N adjacent reference pixels are reference pixels adjacent to the coding block and N is a preset integer value. The N first image component values corresponding to the N adjacent reference pixels are then compared to determine a first reference pixel subset and a second reference pixel subset, where the first reference pixel subset includes the two adjacent reference pixels corresponding to the smallest and the second-smallest first image component values among a preset number of first image component values, and the second reference pixel subset includes the two adjacent reference pixels corresponding to the largest and the second-largest first image component values. The mean of the N adjacent reference pixels is calculated to obtain a first mean point; the first mean point is compared, in terms of first image component values, with the first reference pixel subset and/or the second reference pixel subset to determine two fitting points; model parameters are then determined based on the two fitting points, and a prediction model corresponding to the image component to be predicted is obtained according to the model parameters, so as to obtain the predicted value corresponding to the image component to be predicted. In this way, after the second comparison process, the preset number of adjacent reference pixels can be divided into multiple cases, and different pairs of fitting points are used in these cases to construct the prediction model, which improves the robustness of CCLM prediction; that is, by optimizing the fitting points used for model parameter derivation, the constructed prediction model is more accurate, and the coding and decoding prediction performance of the video image is improved.
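As an illustration only (this sketch is not part of the original disclosure), the fitting-point selection summarized above can be written in Python; all function and variable names are hypothetical, and floating-point averages replace the integer rounding a real codec would use:

```python
def cclm_fit_points(neighbors):
    """Select two fitting points from N adjacent reference pixels.

    `neighbors` is a list of (first_component, second_component) pairs,
    e.g. (luma, chroma); all names here are illustrative only.
    """
    # Order by the first image component to find the two smallest and
    # the two largest values (first and second reference pixel subsets).
    ordered = sorted(neighbors, key=lambda p: p[0])
    low_pair = ordered[:2]      # smallest and second-smallest
    high_pair = ordered[-2:]    # second-largest and largest

    # First mean point: component-wise average over all N neighbors.
    n = len(neighbors)
    mean_pt = (sum(p[0] for p in neighbors) / n,
               sum(p[1] for p in neighbors) / n)

    # Second and third mean points: averages of the two subsets.
    low_mean = (sum(p[0] for p in low_pair) / 2,
                sum(p[1] for p in low_pair) / 2)
    high_mean = (sum(p[0] for p in high_pair) / 2,
                 sum(p[1] for p in high_pair) / 2)

    # Compare the first mean point against the two subsets.
    if mean_pt[0] < low_pair[1][0]:      # below the second-smallest value
        return low_pair[0], high_mean
    if mean_pt[0] > high_pair[0][0]:     # above the second-largest value
        return low_mean, high_pair[1]
    return low_mean, high_mean           # in between: the two subset means
```

The three branches correspond to the cases in which the first mean point falls below the first subset, above the second subset, or between the two, each yielding a different pair of fitting points.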

Claims (22)

  1. 一种图像分量预测方法,所述方法包括:An image component prediction method, the method includes:
    获取视频图像中编码块的待预测图像分量对应的N个相邻参考像素点；其中，所述N个相邻参考像素点为与所述编码块相邻的参考像素点，N为预设的整数值；Acquiring N adjacent reference pixels corresponding to the image component to be predicted of a coding block in a video image; wherein the N adjacent reference pixels are reference pixels adjacent to the coding block, and N is a preset integer value;
    比较所述N个相邻参考像素点对应的N个第一图像分量值，确定第一参考像素子集和第二参考像素子集；其中，所述第一参考像素子集中包括：在预设数量的第一图像分量值中最小第一图像分量值和次最小第一图像分量值所对应的两个相邻参考像素点，所述第二参考像素子集中包括：在预设数量的第一图像分量值中最大第一图像分量值和次最大第一图像分量值所对应的两个相邻参考像素点；Comparing the N first image component values corresponding to the N adjacent reference pixels to determine a first reference pixel subset and a second reference pixel subset; wherein the first reference pixel subset includes two adjacent reference pixels corresponding to the smallest first image component value and the second smallest first image component value among a preset number of first image component values, and the second reference pixel subset includes two adjacent reference pixels corresponding to the largest first image component value and the second largest first image component value among the preset number of first image component values;
    计算所述N个相邻参考像素点的均值,得到第一均值点;Calculating the average value of the N adjacent reference pixel points to obtain the first average value point;
    通过所述第一均值点对所述第一参考像素子集和/或第二参考像素子集进行第一图像分量值的比较,确定两个拟合点;Comparing the values of the first image component to the first reference pixel subset and/or the second reference pixel subset by using the first average point to determine two fitting points;
    基于所述两个拟合点，确定模型参数，根据所述模型参数得到所述待预测图像分量对应的预测模型；其中，所述预测模型用于实现对所述待预测图像分量的预测处理，以得到所述待预测图像分量对应的预测值。Based on the two fitting points, determining model parameters, and obtaining, according to the model parameters, a prediction model corresponding to the image component to be predicted; wherein the prediction model is used to implement prediction processing on the image component to be predicted, so as to obtain the predicted value corresponding to the image component to be predicted.
  2. 根据权利要求1所述的方法,其中,所述获取视频图像中编码块的待预测图像分量对应的N个相邻参考像素点,包括:The method according to claim 1, wherein the obtaining N adjacent reference pixels corresponding to the image component to be predicted of the coding block in the video image comprises:
    获取视频图像中编码块的待预测图像分量对应的第一参考像素集合;Acquiring the first reference pixel set corresponding to the image component to be predicted of the coding block in the video image;
    对所述第一参考像素集合进行筛选处理,得到第二参考像素集合;其中,所述第二参考像素集合包括有N个相邻参考像素点。The first reference pixel set is screened to obtain a second reference pixel set; wherein, the second reference pixel set includes N adjacent reference pixels.
  3. 根据权利要求2所述的方法,其中,所述获取视频图像中编码块的待预测图像分量对应的第一参考像素集合,包括:The method according to claim 2, wherein said obtaining the first reference pixel set corresponding to the image component to be predicted of the coding block in the video image comprises:
    获取与所述编码块至少一个边相邻的参考像素点;其中,所述至少一个边包括所述编码块的左侧边和/或所述编码块的上侧边;Acquiring reference pixels adjacent to at least one side of the coding block; wherein the at least one side includes the left side of the coding block and/or the upper side of the coding block;
    基于所述参考像素点,组成所述待预测图像分量对应的第一参考像素集合。Based on the reference pixel points, a first reference pixel set corresponding to the image component to be predicted is formed.
  4. 根据权利要求2所述的方法,其中,所述获取视频图像中编码块的待预测图像分量对应的第一参考像素集合,包括:The method according to claim 2, wherein said obtaining the first reference pixel set corresponding to the image component to be predicted of the coding block in the video image comprises:
    获取与所述编码块相邻的参考行或者参考列中的参考像素点；其中，所述参考行是由所述编码块的上侧边以及右上侧边所相邻的行组成的，所述参考列是由所述编码块的左侧边以及左下侧边所相邻的列组成的；Obtaining reference pixels in a reference row or a reference column adjacent to the coding block; wherein the reference row is composed of the rows adjacent to the upper side and the upper-right side of the coding block, and the reference column is composed of the columns adjacent to the left side and the lower-left side of the coding block;
    基于所述参考像素点,组成所述待预测图像分量对应的第一参考像素集合。Based on the reference pixel points, a first reference pixel set corresponding to the image component to be predicted is formed.
  5. 根据权利要求2至4任一项所述的方法,其中,所述对所述第一参考像素集合进行筛选处理,得到第二参考像素集合,包括:The method according to any one of claims 2 to 4, wherein the filtering process on the first reference pixel set to obtain a second reference pixel set comprises:
    基于所述第一参考像素集合中每个相邻参考像素点对应的像素位置和/或图像分量强度,确定待选择像素点位置;Determine the position of the pixel to be selected based on the pixel position and/or image component intensity corresponding to each adjacent reference pixel in the first reference pixel set;
    根据确定的待选择像素点位置，从所述第一参考像素集合中选取与所述待选择像素点位置对应的相邻参考像素点，将选取得到的相邻参考像素点组成第二参考像素集合；其中，所述第二参考像素集合包括有N个相邻参考像素点。According to the determined position of the pixel to be selected, selecting, from the first reference pixel set, adjacent reference pixels corresponding to the position of the pixel to be selected, and composing the selected adjacent reference pixels into a second reference pixel set; wherein the second reference pixel set includes N adjacent reference pixels.
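A possible reading of the screening in claims 2 and 5 is to keep N roughly evenly spaced positions from the first reference pixel set; the even-spacing rule below is an illustrative assumption, not the patent's mandated screening rule:

```python
def select_reference_pixels(candidates, n):
    """Pick n roughly evenly spaced entries from the candidate list
    (one possible position-based screening; illustrative only)."""
    if not candidates:
        return []
    n = min(n, len(candidates))
    step = max(len(candidates) // n, 1)   # spacing between kept positions
    start = step // 2                     # offset into the first interval
    return [candidates[start + i * step] for i in range(n)]
```

For a reference row of 8 candidates and N = 4, this keeps every second pixel starting from the second one.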
  6. 根据权利要求1所述的方法,其中,所述比较所述N个相邻参考像素点对应的N个第一图像分量值,确定第一参考像素子集和第二参考像素子集,包括:The method according to claim 1, wherein the comparing the N first image component values corresponding to the N adjacent reference pixels to determine the first reference pixel subset and the second reference pixel subset comprises:
    基于所述预设数量的相邻参考像素点,获取预设数量的第一图像分量值;Obtaining a preset number of first image component values based on the preset number of adjacent reference pixels;
    对所述预设数量的第一图像分量值进行多次比较，得到最小第一图像分量值和次最小第一图像分量值所组成的第一数组以及最大第一图像分量值和次最大第一图像分量值所组成的第二数组；Performing multiple comparisons on the preset number of first image component values to obtain a first array composed of the smallest first image component value and the second smallest first image component value, and a second array composed of the largest first image component value and the second largest first image component value;
    将所述第一数组所对应的两个相邻参考像素点放入第一参考像素子集中,得到所述第一参考像素子集;Putting two adjacent reference pixels corresponding to the first array into a first reference pixel subset to obtain the first reference pixel subset;
    将所述第二数组所对应的两个相邻参考像素点放入第二参考像素子集中,得到所述第二参考像素子集。Put two adjacent reference pixels corresponding to the second array into a second reference pixel subset to obtain the second reference pixel subset.
  7. 根据权利要求1所述的方法，其中，所述计算所述N个相邻参考像素点的均值，得到第一均值点，包括：The method according to claim 1, wherein the calculating the average value of the N adjacent reference pixel points to obtain the first average value point comprises:
    基于所述N个相邻参考像素点中每个相邻参考像素点对应的第一图像分量值和第二图像分量值，获得N个第一图像分量对应的第一图像分量均值和N个第二图像分量对应的第二图像分量均值，得到所述第一均值点。Based on the first image component value and the second image component value corresponding to each of the N adjacent reference pixels, obtaining a first image component mean of the N first image components and a second image component mean of the N second image components, so as to obtain the first mean point.
  8. 根据权利要求1所述的方法，其中，在所述比较所述N个相邻参考像素点对应的N个第一图像分量值，确定第一参考像素子集和第二参考像素子集之后，所述方法还包括：The method according to claim 1, wherein after the comparing the N first image component values corresponding to the N adjacent reference pixels to determine the first reference pixel subset and the second reference pixel subset, the method further comprises:
    基于所述第一参考像素子集中每个相邻参考像素点对应的第一图像分量值和第二图像分量值，获得所述第一参考像素子集中多个第一图像分量对应的第一图像分量均值和多个第二图像分量对应的第二图像分量均值，得到第二均值点；Based on the first image component value and the second image component value corresponding to each adjacent reference pixel in the first reference pixel subset, obtaining a first image component mean of the multiple first image components and a second image component mean of the multiple second image components in the first reference pixel subset, so as to obtain a second mean point;
    基于所述第二参考像素子集中每个相邻参考像素点对应的第一图像分量值和第二图像分量值，获得所述第二参考像素子集中多个第一图像分量对应的第一图像分量均值和多个第二图像分量对应的第二图像分量均值，得到第三均值点。Based on the first image component value and the second image component value corresponding to each adjacent reference pixel in the second reference pixel subset, obtaining a first image component mean of the multiple first image components and a second image component mean of the multiple second image components in the second reference pixel subset, so as to obtain a third mean point.
  9. 根据权利要求8所述的方法,其中,所述通过第一均值点对所述第一参考像素子集和/或第二参考像素子集进行第一图像分量值的比较,确定两个拟合点,包括:8. The method according to claim 8, wherein the first image component value is compared on the first reference pixel subset and/or the second reference pixel subset through the first mean point to determine two fittings Points, including:
    将第一均值点的第一图像分量与所述第一参考像素子集中次最小第一图像分量值进行比较;Comparing the first image component of the first mean point with the second smallest first image component value in the first reference pixel subset;
    当第一均值点的第一图像分量大于或等于所述第一参考像素子集中次最小第一图像分量值时，将第一均值点的第一图像分量与所述第二参考像素子集中次最大第一图像分量值进行比较；When the first image component of the first mean point is greater than or equal to the second smallest first image component value in the first reference pixel subset, comparing the first image component of the first mean point with the second largest first image component value in the second reference pixel subset;
    当第一均值点的第一图像分量小于或等于所述第二参考像素子集中次最大第一图像分量值时，将第二均值点作为第一拟合点，将第三均值点作为第二拟合点；其中，所述两个拟合点包括第一拟合点和第二拟合点。When the first image component of the first mean point is less than or equal to the second largest first image component value in the second reference pixel subset, taking the second mean point as the first fitting point and the third mean point as the second fitting point; wherein the two fitting points include the first fitting point and the second fitting point.
  10. 根据权利要求9所述的方法,其中,在所述将第一均值点的第一图像分量与所述第一参考像素子集中次最小第一图像分量值进行比较之前,所述方法还包括:The method according to claim 9, wherein before said comparing the first image component of the first mean point with the second smallest first image component value in the first reference pixel subset, the method further comprises:
    对所述第一参考像素子集中两个相邻参考像素点各自对应的第一图像分量值进行比较;Comparing the first image component values corresponding to two adjacent reference pixels in the first reference pixel subset;
    根据比较的结果，确定两个第一图像分量值中的最小第一图像分量值和次最小第一图像分量值，并将最小第一图像分量值对应的像素点作为第一相邻参考像素点，次最小第一图像分量值对应的像素点作为第二相邻参考像素点；According to the result of the comparison, determining the smallest first image component value and the second smallest first image component value of the two first image component values, and taking the pixel corresponding to the smallest first image component value as the first adjacent reference pixel and the pixel corresponding to the second smallest first image component value as the second adjacent reference pixel;
    相应的,在所述将第一均值点的第一图像分量与所述第二参考像素子集中次最大第一图像分量值进行比较之前,所述方法还包括:Correspondingly, before the comparing the first image component of the first average point with the second largest first image component value in the second reference pixel subset, the method further includes:
    对所述第二参考像素子集中两个相邻参考像素点各自对应的第一图像分量值进行比较;Comparing the first image component values corresponding to two adjacent reference pixels in the second reference pixel subset;
    根据比较的结果，确定两个第一图像分量值中的次最大第一图像分量值和最大第一图像分量值，并将次最大第一图像分量值对应的像素点作为第三相邻参考像素点，最大第一图像分量值对应的像素点作为第四相邻参考像素点。According to the result of the comparison, determining the second largest first image component value and the largest first image component value of the two first image component values, and taking the pixel corresponding to the second largest first image component value as the third adjacent reference pixel and the pixel corresponding to the largest first image component value as the fourth adjacent reference pixel.
  11. 根据权利要求10所述的方法,其中,在所述将第一均值点的第一图像分量与所述第一参考像素子集中次最小第一图像分量值进行比较之后,所述方法还包括:The method according to claim 10, wherein after the comparing the first image component of the first mean point with the second smallest first image component value in the first reference pixel subset, the method further comprises:
    当第一均值点的第一图像分量小于次最小第一图像分量值时,将第一相邻参考像素点作为第一拟合点,将第三均值点作为第二拟合点。When the first image component of the first average point is smaller than the second smallest first image component value, the first adjacent reference pixel point is used as the first fitting point, and the third average point is used as the second fitting point.
  12. 根据权利要求10所述的方法,其中,在所述将第一均值点的第一图像分量与所述第二参考像素子集中次最大第一图像分量值进行比较之后,所述方法还包括:The method according to claim 10, wherein after the comparing the first image component of the first mean point with the second largest first image component value in the second reference pixel subset, the method further comprises:
    当第一均值点的第一图像分量大于次最大第一图像分量值时,将第二均值点作为第一拟合点,将第四相邻参考像素点作为第二拟合点。When the first image component of the first average point is greater than the second largest first image component value, the second average point is taken as the first fitting point, and the fourth adjacent reference pixel point is taken as the second fitting point.
  13. 根据权利要求9至12任一项所述的方法,其中,所述基于所述两个拟合点,确定模型参数,包括:The method according to any one of claims 9 to 12, wherein the determining model parameters based on the two fitting points comprises:
    基于所述第一拟合点和所述第二拟合点,通过第一预设因子计算模型获得所述第一模型参数;Based on the first fitting point and the second fitting point, obtaining the first model parameter through a first preset factor calculation model;
    基于所述第一模型参数以及所述第一拟合点,通过第二预设因子计算模型获得所述第二模型参数。Based on the first model parameter and the first fitting point, the second model parameter is obtained through a second preset factor calculation model.
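The two preset factor calculation models in claim 13 correspond to the usual two-point linear fit; a sketch follows, with hypothetical names and floating-point arithmetic for clarity, whereas a codec would derive the slope with integer shifts and look-up tables:

```python
def cclm_model_params(fit0, fit1):
    """Two-point linear fit: return (alpha, beta) such that the predicted
    second image component is alpha * first_component + beta."""
    (x0, y0), (x1, y1) = fit0, fit1
    if x1 == x0:                      # degenerate: fall back to a flat model
        return 0.0, float(y0)
    alpha = (y1 - y0) / (x1 - x0)     # first model parameter
    beta = y0 - alpha * x0            # second model parameter
    return alpha, beta
```

Claim 14's variant computes the second model parameter from the first mean point instead of the first fitting point; only the (x0, y0) fed into the last step changes.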
  14. 根据权利要求13所述的方法,其中,在所述通过第一预设因子计算模型获得所述第一模型参数之后,所述方法还包括:The method according to claim 13, wherein, after the first model parameter is obtained through the first preset factor calculation model, the method further comprises:
    基于所述第一模型参数以及所述第一均值点,通过第二预设因子计算模型获得所述第二模型参数。Based on the first model parameter and the first average point, the second model parameter is obtained through a second preset factor calculation model.
  15. 根据权利要求1所述的方法,其中,在所述获取预设数量的相邻参考像素点之后,所述方法还包括:The method according to claim 1, wherein after said obtaining a preset number of adjacent reference pixels, the method further comprises:
    通过第一均值点对所述N个相邻参考像素点进行分组处理,得到第三参考像素子集和第四参考像素子集;Grouping the N adjacent reference pixel points by the first average point to obtain a third reference pixel subset and a fourth reference pixel subset;
    基于第三参考像素子集,确定第一拟合点;基于第四参考像素子集确定第二拟合点;Determine the first fitting point based on the third reference pixel subset; determine the second fitting point based on the fourth reference pixel subset;
    基于所述第一拟合点和所述第二拟合点,确定模型参数,根据所述模型参数得到所述待预测图像分量对应的预测模型;其中,所述预测模型用于实现对所述待预测图像分量的预测处理,以得到所述待预测图像分量对应的预测值。Based on the first fitting point and the second fitting point, the model parameters are determined, and the prediction model corresponding to the image component to be predicted is obtained according to the model parameters; wherein, the prediction model is used to realize the Prediction processing of the image component to be predicted to obtain the predicted value corresponding to the image component to be predicted.
  16. 根据权利要求15所述的方法,其中,所述基于第三参考像素子集,确定第一拟合点;基于第四参考像素子集确定第二拟合点,包括:The method according to claim 15, wherein the determining the first fitting point based on the third reference pixel subset; and determining the second fitting point based on the fourth reference pixel subset comprises:
    从所述第三参考像素子集中选取部分相邻参考像素点,对所述部分相邻参考像素点进行均值计算,将计算得到的均值点作为所述第一拟合点;Selecting part of adjacent reference pixels from the third reference pixel subset, performing average calculation on the part of adjacent reference pixels, and using the calculated average point as the first fitting point;
    从所述第四参考像素子集中选取部分相邻参考像素点,对所述部分相邻参考像素点进行均值计算,将计算得到的均值点作为所述第二拟合点。A part of adjacent reference pixels is selected from the fourth reference pixel subset, the average value of the part of adjacent reference pixels is calculated, and the calculated average point is used as the second fitting point.
  17. 根据权利要求15所述的方法,其中,所述基于第三参考像素子集,确定第一拟合点;基于第四参考像素子集确定第二拟合点,包括:The method according to claim 15, wherein the determining the first fitting point based on the third reference pixel subset; and determining the second fitting point based on the fourth reference pixel subset comprises:
    从所述第三参考像素子集中选取其中一个相邻参考像素点作为所述第一拟合点;Selecting one of the adjacent reference pixel points from the third reference pixel subset as the first fitting point;
    从所述第四参考像素子集中选取其中一个相邻参考像素点作为所述第二拟合点。One of the adjacent reference pixel points is selected from the fourth reference pixel subset as the second fitting point.
  18. 根据权利要求16所述的方法，其中，N的取值为4；所述基于第三参考像素子集，确定第一拟合点；基于第四参考像素子集确定第二拟合点，包括：The method according to claim 16, wherein the value of N is 4, and the determining the first fitting point based on the third reference pixel subset and the determining the second fitting point based on the fourth reference pixel subset include:
    若所述第三参考像素子集包括3个相邻参考像素点，所述第四参考像素子集包括1个相邻参考像素点，则从所述第三参考像素子集中选取2个相邻参考像素点，对选取的2个相邻参考像素点进行均值计算，将计算得到的均值点作为所述第一拟合点，将所述第四参考像素子集中的1个相邻参考像素点作为第二拟合点；If the third reference pixel subset includes three adjacent reference pixels and the fourth reference pixel subset includes one adjacent reference pixel, selecting two adjacent reference pixels from the third reference pixel subset, performing mean calculation on the selected two adjacent reference pixels, taking the calculated mean point as the first fitting point, and taking the one adjacent reference pixel in the fourth reference pixel subset as the second fitting point;
    若所述第三参考像素子集包括1个相邻参考像素点，所述第四参考像素子集包括3个相邻参考像素点，则从所述第四参考像素子集中选取2个相邻参考像素点，对选取的2个相邻参考像素点进行均值计算，将计算得到的均值点作为所述第二拟合点，将所述第三参考像素子集中的1个相邻参考像素点作为第一拟合点。If the third reference pixel subset includes one adjacent reference pixel and the fourth reference pixel subset includes three adjacent reference pixels, selecting two adjacent reference pixels from the fourth reference pixel subset, performing mean calculation on the selected two adjacent reference pixels, taking the calculated mean point as the second fitting point, and taking the one adjacent reference pixel in the third reference pixel subset as the first fitting point.
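Claims 15 to 18 describe an alternative grouping of the neighbors by the first mean point; the N = 4 cases of claim 18 can be sketched as follows (illustrative only — the claim does not specify which 2 of the 3 points in the larger group are averaged, so taking the first two is an assumption, and both groups are assumed non-empty):

```python
def fit_points_n4(neighbors, mean_first):
    """Group N = 4 (first, second) component pairs by the first mean
    point and derive the two fitting points per claim 18."""
    low = [p for p in neighbors if p[0] <= mean_first]
    high = [p for p in neighbors if p[0] > mean_first]

    def mean_of(points):
        # Component-wise average of a group of points.
        return (sum(p[0] for p in points) / len(points),
                sum(p[1] for p in points) / len(points))

    if len(low) == 3 and len(high) == 1:
        return mean_of(low[:2]), high[0]     # average 2 of the 3 low points
    if len(low) == 1 and len(high) == 3:
        return low[0], mean_of(high[:2])     # average 2 of the 3 high points
    return mean_of(low), mean_of(high)       # balanced 2/2 grouping
```

A 3/1 split therefore keeps the lone point as one fitting point and averages part of the larger group for the other.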
  19. 根据权利要求1至18任一项所述的方法,其中,在所述根据所述模型参数得到所述待预测图像分量对应的预测模型之后,所述方法还包括:The method according to any one of claims 1 to 18, wherein, after obtaining the prediction model corresponding to the image component to be predicted according to the model parameters, the method further comprises:
    基于所述预测模型对所述编码块中每个像素点的待预测图像分量进行预测处理,得到每个像素点的待预测图像分量对应的预测值。Performing prediction processing on the image component to be predicted for each pixel in the coding block based on the prediction model to obtain a predicted value corresponding to the image component to be predicted for each pixel.
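Claim 19 applies the model to every pixel of the coding block; a sketch follows, in which clipping the prediction to the valid sample range is a standard codec assumption not spelled out in the claim:

```python
def predict_block(first_component_block, alpha, beta, bit_depth=8):
    """Apply pred = alpha * value + beta to each pixel of the block and
    clip the result to [0, 2^bit_depth - 1]."""
    hi = (1 << bit_depth) - 1
    return [[min(max(round(alpha * v + beta), 0), hi)
             for v in row] for row in first_component_block]
```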
  20. 一种图像分量预测装置,所述图像分量预测装置包括:获取单元、比较单元、计算单元和预测单元,其中,An image component prediction device. The image component prediction device includes: an acquisition unit, a comparison unit, a calculation unit, and a prediction unit, wherein,
    所述获取单元，配置为获取视频图像中编码块的待预测图像分量对应的N个相邻参考像素点；其中，所述N个相邻参考像素点为与所述编码块相邻的参考像素点，N为预设的整数值；The acquiring unit is configured to acquire N adjacent reference pixels corresponding to the image component to be predicted of a coding block in a video image; wherein the N adjacent reference pixels are reference pixels adjacent to the coding block, and N is a preset integer value;
    所述比较单元，配置为比较所述N个相邻参考像素点对应的N个第一图像分量值，确定第一参考像素子集和第二参考像素子集；其中，所述第一参考像素子集中包括：在预设数量的第一图像分量值中最小第一图像分量值和次最小第一图像分量值所对应的两个相邻参考像素点，所述第二参考像素子集中包括：在预设数量的第一图像分量值中最大第一图像分量值和次最大第一图像分量值所对应的两个相邻参考像素点；The comparing unit is configured to compare the N first image component values corresponding to the N adjacent reference pixels to determine a first reference pixel subset and a second reference pixel subset; wherein the first reference pixel subset includes two adjacent reference pixels corresponding to the smallest first image component value and the second smallest first image component value among a preset number of first image component values, and the second reference pixel subset includes two adjacent reference pixels corresponding to the largest first image component value and the second largest first image component value among the preset number of first image component values;
    所述计算单元,配置为计算所述N个相邻参考像素点的均值,得到第一均值点;The calculation unit is configured to calculate the average value of the N adjacent reference pixel points to obtain a first average value point;
    所述比较单元，还配置为通过所述第一均值点对所述第一参考像素子集和/或第二参考像素子集进行第一图像分量值的第二比较处理，确定两个拟合点；The comparing unit is further configured to perform, by using the first mean point, a second comparison process of first image component values on the first reference pixel subset and/or the second reference pixel subset to determine two fitting points;
    所述预测单元，配置为基于所述两个拟合点，确定模型参数，根据所述模型参数得到所述待预测图像分量对应的预测模型；其中，所述预测模型用于实现对所述待预测图像分量的预测处理，以得到所述待预测图像分量对应的预测值。The prediction unit is configured to determine model parameters based on the two fitting points and obtain, according to the model parameters, a prediction model corresponding to the image component to be predicted; wherein the prediction model is used to implement prediction processing on the image component to be predicted, so as to obtain the predicted value corresponding to the image component to be predicted.
  21. 一种图像分量预测装置,其中,所述图像分量预测装置包括:存储器和处理器;An image component prediction device, wherein the image component prediction device includes: a memory and a processor;
    所述存储器,用于存储能够在所述处理器上运行的计算机程序;The memory is used to store a computer program that can run on the processor;
    所述处理器,用于在运行所述计算机程序时,执行如权利要求1至19任一项所述的方法。The processor is configured to execute the method according to any one of claims 1 to 19 when running the computer program.
  22. 一种计算机存储介质,其中,所述计算机存储介质存储有图像分量预测程序,所述图像分量预测程序被至少一个处理器执行时实现如权利要求1至19任一项所述的方法。A computer storage medium, wherein the computer storage medium stores an image component prediction program that implements the method according to any one of claims 1 to 19 when the image component prediction program is executed by at least one processor.
PCT/CN2019/092859 2019-06-25 2019-06-25 Image component prediction method and apparatus, and computer storage medium WO2020258053A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980084801.1A CN113196770A (en) 2019-06-25 2019-06-25 Image component prediction method, device and computer storage medium
PCT/CN2019/092859 WO2020258053A1 (en) 2019-06-25 2019-06-25 Image component prediction method and apparatus, and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/092859 WO2020258053A1 (en) 2019-06-25 2019-06-25 Image component prediction method and apparatus, and computer storage medium

Publications (1)

Publication Number Publication Date
WO2020258053A1 true WO2020258053A1 (en) 2020-12-30

Family

ID=74061179

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/092859 WO2020258053A1 (en) 2019-06-25 2019-06-25 Image component prediction method and apparatus, and computer storage medium

Country Status (2)

Country Link
CN (1) CN113196770A (en)
WO (1) WO2020258053A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120183057A1 (en) * 2011-01-14 2012-07-19 Samsung Electronics Co., Ltd. System, apparatus, and method for encoding and decoding depth image
GB2492394A (en) * 2011-06-30 2013-01-02 Canon Kk Image block encoding and decoding methods using symbol alphabet probabilistic distributions
CN104081436A (en) * 2012-01-19 2014-10-01 西门子公司 Methods and devices for pixel-prediction for compression of visual data
CN105069819A (en) * 2015-07-23 2015-11-18 西安交通大学 Predicted value compensation method based on MED predication algorithm
CN109510994A (en) * 2018-10-26 2019-03-22 西安科锐盛创新科技有限公司 Pixel-level component reference prediction method for compression of images
CN109561301A (en) * 2018-10-26 2019-04-02 西安科锐盛创新科技有限公司 A kind of prediction technique in video compress

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9919805D0 (en) * 1999-08-21 1999-10-27 Univ Manchester Video cording
JP2003091726A (en) * 2001-09-17 2003-03-28 Nhk Engineering Services Inc Reflection parameter acquiring device, reflection component separator, reflection parameter acquiring program and reflection component separation program
CN101505431B (en) * 2009-03-18 2014-06-25 北京中星微电子有限公司 Shadow compensation method and apparatus for image sensor
CN103260019B (en) * 2012-02-16 2018-09-07 乐金电子(中国)研究开发中心有限公司 Intra-frame image prediction decoding method and Video Codec
GB2580078A (en) * 2018-12-20 2020-07-15 Canon Kk Piecewise modeling for linear component sample prediction

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120183057A1 (en) * 2011-01-14 2012-07-19 Samsung Electronics Co., Ltd. System, apparatus, and method for encoding and decoding depth image
GB2492394A (en) * 2011-06-30 2013-01-02 Canon Kk Image block encoding and decoding methods using symbol alphabet probabilistic distributions
CN104081436A (en) * 2012-01-19 2014-10-01 西门子公司 Methods and devices for pixel-prediction for compression of visual data
CN105069819A (en) * 2015-07-23 2015-11-18 西安交通大学 Predicted value compensation method based on MED predication algorithm
CN109510994A (en) * 2018-10-26 2019-03-22 西安科锐盛创新科技有限公司 Pixel-level component reference prediction method for compression of images
CN109561301A (en) * 2018-10-26 2019-04-02 西安科锐盛创新科技有限公司 Prediction method for video compression

Also Published As

Publication number Publication date
CN113196770A (en) 2021-07-30

Similar Documents

Publication Publication Date Title
WO2022104498A1 (en) Intra-frame prediction method, encoder, decoder and computer storage medium
WO2021203394A1 (en) Loop filtering method and apparatus
WO2021134706A1 (en) Loop filtering method and device
US20230388491A1 (en) Colour component prediction method, encoder, decoder and storage medium
WO2021258841A1 (en) Inter-frame prediction method, coder, decoder, and computer storage medium
WO2020258052A1 (en) Image component prediction method and device, and computer storage medium
WO2020258053A1 (en) Image component prediction method and apparatus, and computer storage medium
WO2022077490A1 (en) Intra prediction method, encoder, decoder, and storage medium
WO2022174469A1 (en) Illumination compensation method, encoder, decoder, and storage medium
WO2020140214A1 (en) Prediction decoding method, device and computer storage medium
JP7448568B2 (en) Image component prediction method, apparatus and computer storage medium
WO2023197193A1 (en) Coding method and apparatus, decoding method and apparatus, and coding device, decoding device and storage medium
WO2024077569A1 (en) Encoding method, decoding method, code stream, encoder, decoder, and storage medium
WO2023197189A1 (en) Coding method and apparatus, decoding method and apparatus, and coding device, decoding device and storage medium
US11973936B2 (en) Image component prediction method and device, and computer storage medium
WO2023123736A1 (en) Communication method, apparatus, device, system, and storage medium
WO2024077562A1 (en) Coding method and apparatus, decoding method and apparatus, coder, decoder, code stream, and storage medium
WO2023193254A1 (en) Decoding method, encoding method, decoder, and encoder
WO2022140905A1 (en) Prediction methods, encoder, decoder, and storage medium
JP2024059916A (en) Method, apparatus and computer storage medium for predicting image components
WO2023122968A1 (en) Intra-frame prediction method, device and system, and storage medium
CN113347438B (en) Intra-frame prediction method and device, video encoding device and storage medium
WO2021134700A1 (en) Method and apparatus for video encoding and decoding
WO2020192180A1 (en) Image component prediction method, encoder, decoder, and computer storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19935048

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19935048

Country of ref document: EP

Kind code of ref document: A1