CN113596429A - Pixel point pair selection method, device and computer readable storage medium


Info

Publication number: CN113596429A (granted as CN113596429B)
Application number: CN202110855539.5A
Authority: CN (China)
Legal status: Granted; Active
Prior art keywords: prediction block, pixel point, reconstructed, pixel, chroma
Other languages: Chinese (zh)
Inventor: 王琦
Assignee: Peking University; MIGU Culture Technology Co Ltd
Application filed by Peking University and MIGU Culture Technology Co Ltd


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00: Details of colour television systems
    • H04N 9/77: Circuits for processing the brightness signal and the chrominance signal relative to each other, e.g. adjusting the phase of the brightness signal relative to the colour signal, correcting differential gain or differential phase

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention provides a pixel point pair selection method, a pixel point pair selection device and a computer readable storage medium. The method comprises the following steps: determining N pixel point sets corresponding to a chroma prediction block and N pixel point sets corresponding to a brightness (luma) prediction block, wherein the chroma prediction block corresponds to the brightness prediction block, and N is an integer greater than 1; and acquiring N groups of parameters according to the N pixel point sets corresponding to the chroma prediction block and the N pixel point sets corresponding to the brightness prediction block, wherein the N groups of parameters are used for predicting the chroma values of the chroma prediction block. The invention increases the number of parameter groups that can be acquired, thereby improving the accuracy of the acquired parameters.

Description

Pixel point pair selection method, device and computer readable storage medium
The present application is a divisional application of the application filed on August 27, 2019, with application number 201910796327.7 and the invention name 'parameter acquisition method, pixel point selection method and related equipment'.
Technical Field
The present invention relates to the field of video technologies, and in particular, to a method and an apparatus for selecting a pixel point pair, and a computer-readable storage medium.
Background
In the third-generation Audio Video coding Standard (3rd Generation Audio Video coding Standard, AVS3), a variety of prediction modes, including a chroma Two-Step Cross-component Prediction Mode (TSCPM), are used for chroma prediction, thereby improving the accuracy of chroma component prediction.
When the chroma components of a chroma prediction block are predicted in the TSCPM prediction mode, a linear model is used to exploit the linear relationship between the luma component and the chroma components. In the process of calculating the chroma components, the parameters (α and β) of the linear model need to be calculated, and the chroma components of the chroma prediction block are predicted based on the calculated parameters.
At present, only one group of parameters is obtained when the parameters corresponding to the linear model are calculated, and the accuracy of the calculated parameters is low.
Disclosure of Invention
Embodiments of the present invention provide a pixel point pair selection method, a device, and a computer-readable storage medium, so as to solve the problem that in the prior art, only one set of parameters is obtained when a parameter corresponding to a linear model is calculated, and the accuracy of the calculated parameter is low.
In order to solve the problems, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a parameter obtaining method, where the method includes:
determining N pixel point sets corresponding to a chroma prediction block and N pixel point sets corresponding to a brightness prediction block, wherein the chroma prediction block corresponds to the brightness prediction block, and N is an integer greater than 1;
and acquiring N groups of parameters according to the N pixel point sets corresponding to the chroma prediction block and the N pixel point sets corresponding to the brightness prediction block, wherein the N groups of parameters are used for predicting the chroma value of the chroma prediction block.
Optionally, after acquiring N sets of parameters corresponding to a prediction model according to the N pixel point sets corresponding to the chroma prediction block and the N pixel point sets corresponding to the luma prediction block, the method further includes:
selecting a target parameter group from the N groups of parameters, wherein the target parameter group is a group of parameters corresponding to the minimum coding cost in the N groups of parameters;
predicting chroma values of the chroma predicted block using the set of target parameters.
Optionally, the upper side and the left side of a first prediction block comprise reconstructed pixel points, and the first prediction block is the chroma prediction block or the brightness prediction block;
the N pixel point sets corresponding to the first prediction block comprise at least two of the following pixel point sets:
a first pixel point set, wherein the first pixel point set comprises reconstructed pixel points directly above the first prediction block and reconstructed pixel points directly to the left of the first prediction block;
a second set of pixels comprising only reconstructed pixels directly above the first prediction block;
a third pixel point set, wherein the third pixel point set comprises reconstructed pixel points at the first upper left or the upper right of the first prediction block, and the first upper left is the region to the left of the region directly above the first prediction block;
a fourth set of pixels comprising only reconstructed pixels directly to the left of the first prediction block;
a fifth pixel point set, wherein the fifth pixel point set comprises reconstructed pixel points at the second upper left or the lower left of the first prediction block, and the second upper left is the region above the region directly to the left of the first prediction block.
Optionally, a reconstructed pixel is included above the first prediction block, and a reconstructed pixel is not included on the left of the first prediction block, where the first prediction block is the chroma prediction block or the luma prediction block;
the N pixel point sets corresponding to the first prediction block comprise at least two of the following pixel point sets:
a second set of pixels comprising only reconstructed pixels directly above the first prediction block;
a third pixel point set, wherein the third pixel point set comprises reconstructed pixel points at the first upper left or the upper right of the first prediction block, and the first upper left is the region to the left of the region directly above the first prediction block;
a sixth set of pixels comprising non-reconstructed pixels to the left of the first prediction block.
Optionally, a reconstructed pixel is not included above the first prediction block, a reconstructed pixel is included on the left of the first prediction block, and the first prediction block is the chroma prediction block or the luma prediction block;
the N pixel point sets corresponding to the first prediction block comprise at least two of the following pixel point sets:
a fourth set of pixels comprising only reconstructed pixels directly to the left of the first prediction block;
a fifth pixel point set, wherein the fifth pixel point set comprises reconstructed pixel points at the second upper left or the lower left of the first prediction block, and the second upper left is the region above the region directly to the left of the first prediction block;
a seventh set of pixels comprising non-reconstructed pixels above the first prediction block.
Optionally, the third pixel point set includes any one of the following items:
reconstructed pixel points at the first upper left of the first prediction block;
reconstructed pixel points at the upper right of the first prediction block;
reconstructed pixel points at the first upper left and the upper right of the first prediction block;
reconstructed pixel points at the first upper left of and directly above the first prediction block;
reconstructed pixel points directly above and at the upper right of the first prediction block;
and reconstructed pixel points at the first upper left of, directly above, and at the upper right of the first prediction block.
Optionally, the fifth pixel point set includes any one of the following items:
reconstructed pixel points at the second upper left of the first prediction block;
reconstructed pixel points at the lower left of the first prediction block;
reconstructed pixel points at the second upper left and the lower left of the first prediction block;
reconstructed pixel points at the second upper left of and directly to the left of the first prediction block;
reconstructed pixel points directly to the left of and at the lower left of the first prediction block;
and reconstructed pixel points at the second upper left of, directly to the left of, and at the lower left of the first prediction block.
Optionally, the N pixel point sets corresponding to the first prediction block include first pixel points above the first prediction block, where the first prediction block is the chroma prediction block or the luma prediction block;
the first pixel points are: pixel points in the I rows of pixel points closest to the first prediction block among the P rows of pixel points above the first prediction block, wherein P is an integer greater than I, and I is a positive integer less than or equal to 4.
Optionally, the N pixel point sets corresponding to the first prediction block include a second pixel point on the left of the first prediction block, where the first prediction block is the chroma prediction block or the luma prediction block;
the second pixel points are: pixel points in the J columns of pixel points closest to the first prediction block among the Q columns of pixel points to the left of the first prediction block, wherein Q is an integer greater than J, and J is a positive integer less than or equal to 4.
Optionally, the first lengths corresponding to all the pixel points of the third pixel point set satisfy any one of the following:
the first length is no more than twice a width of the first prediction block;
the first length does not exceed a sum of a width and a height of the first prediction block.
Optionally, the first length L1 satisfies: K1 ≤ L1 ≤ K2;
wherein K1 is the total length of all pixel points included in the first target pixel points, the first target pixel points are R pixel points selected from the third pixel point set and are used for obtaining one of the N groups of parameters, and R is a positive integer; K2 is the total length of all reconstructed pixel points included above the first prediction block.
Optionally, the pixel points included in the first target pixel point are all reconstruction pixel points.
Optionally, the first length L1 satisfies: K3 < L1 ≤ K4;
k3 is the total length of all reconstructed pixels included above the first prediction block, and K4 is twice the width of the first prediction block or the sum of the width and height of the first prediction block.
Optionally, the first target pixel point includes an unreconstructed pixel point;
the first target pixel points are R pixel points selected from the third pixel point set, the first target pixel points are used for obtaining one group of parameters in the N groups of parameters, and R is a positive integer.
Optionally, the second lengths corresponding to all the pixel points of the fifth pixel point set satisfy any one of the following:
the second length is no more than twice a height of the first prediction block;
the second length does not exceed a sum of a width and a height of the first prediction block.
Optionally, the second length L2 satisfies: K5 ≤ L2 ≤ K6;
wherein K5 is the total length of all pixel points included in the second target pixel points, the second target pixel points are R pixel points selected from the fifth pixel point set and are used for obtaining one of the N groups of parameters, and R is a positive integer; K6 is the total length of all reconstructed pixel points included to the left of the first prediction block.
Optionally, the pixel points included by the second target pixel point are all reconstruction pixel points.
Optionally, the second length L2 satisfies: K7 < L2 ≤ K8;
wherein K7 is a total length of all reconstructed pixels included to the left of the first prediction block, and K8 is twice a height of the first prediction block or a sum of a width and a height of the first prediction block.
Optionally, the second target pixel point includes an unreconstructed pixel point;
the second target pixel points are R pixel points selected from the fifth pixel point set, the second target pixel points are used for obtaining one group of parameters in the N groups of parameters, and R is a positive integer.
In a second aspect, an embodiment of the present invention further provides a pixel point pair selection method, where the pixel point pair selection method includes:
determining a target reconstruction pixel point corresponding to a second prediction block, wherein the second prediction block comprises a chroma prediction block and a brightness prediction block corresponding to the chroma prediction block;
selecting R groups of reconstructed pixel point pairs from the target reconstructed pixel points, wherein the R groups of reconstructed pixel point pairs comprise R reconstructed pixel points corresponding to the chroma prediction block and R reconstructed pixel points corresponding to the luma prediction block, and R is a positive integer;
wherein the target reconstructed pixel points comprise: reconstructed pixel points directly above and at the upper right of the second prediction block; or, reconstructed pixel points directly to the left of and at the lower left of the second prediction block.
In a third aspect, an embodiment of the present invention further provides a pixel point pair selection method, where the pixel point pair selection method includes:
determining a third target pixel point corresponding to a second prediction block, wherein the second prediction block comprises a chroma prediction block and a brightness prediction block corresponding to the chroma prediction block;
selecting R groups of pixel point pairs from the third target pixel points, wherein the R groups of pixel point pairs comprise R pixel points corresponding to the chroma prediction block and R pixel points corresponding to the brightness prediction block, and R is a positive integer;
wherein the third target pixel point includes any one of:
in a case where both the upper side and the left side of the second prediction block include reconstructed pixel points, the third target pixel point includes: a reconstruction pixel point right above the second prediction block; or, a reconstructed pixel point right to the left of the second prediction block;
under the condition that a reconstruction pixel point is included above the second prediction block and a reconstruction pixel point is not included on the left side of the second prediction block, the third target pixel point comprises: an un-reconstructed pixel point to the left of the second prediction block;
under the condition that the reconstructed pixel point is not included above the second prediction block and the reconstructed pixel point is included on the left of the second prediction block, the third target pixel point comprises: and the non-reconstructed pixel point above the second prediction block.
In a fourth aspect, an embodiment of the present invention further provides a parameter obtaining device, where the parameter obtaining device includes:
a first determining module, configured to use N pixel sets corresponding to a chroma prediction block and N pixel sets corresponding to a luma prediction block, where the chroma prediction block corresponds to the luma prediction block, and N is an integer greater than 1;
and the acquisition module is used for acquiring N groups of parameters corresponding to a prediction model according to the N pixel point sets corresponding to the chroma prediction block and the N pixel point sets corresponding to the brightness prediction block, wherein the N groups of parameters are used for predicting the chroma value of the chroma prediction block.
In a fifth aspect, an embodiment of the present invention further provides a pixel point pair selection apparatus, where the pixel point pair selection apparatus includes:
a second determining module, configured to determine a target reconstructed pixel corresponding to a second prediction block, where the second prediction block includes a chroma prediction block and a luma prediction block corresponding to the chroma prediction block;
a second selecting module, configured to select R groups of reconstructed pixel point pairs from the target reconstructed pixel points, where the R groups of reconstructed pixel point pairs include R reconstructed pixel points corresponding to the chroma prediction block and R reconstructed pixel points corresponding to the luma prediction block, and R is a positive integer;
wherein the target reconstructed pixel points comprise: reconstructed pixel points directly above and at the upper right of the second prediction block; or, reconstructed pixel points directly to the left of and at the lower left of the second prediction block.
In a sixth aspect, an embodiment of the present invention further provides a pixel point pair selection apparatus, where the pixel point pair selection apparatus includes:
a third determining and obtaining module, configured to determine a third target pixel corresponding to a second prediction block, where the second prediction block includes a chroma prediction block and a luma prediction block corresponding to the chroma prediction block;
a third selecting module, configured to select R groups of pixel point pairs from the third target pixel points, where the R groups of pixel point pairs include R pixel points corresponding to the chroma prediction block and R pixel points corresponding to the luma prediction block, and R is a positive integer;
wherein the third target pixel point includes any one of:
in a case where both the upper side and the left side of the second prediction block include reconstructed pixel points, the third target pixel point includes: a reconstruction pixel point right above the second prediction block; or, a reconstructed pixel point right to the left of the second prediction block;
under the condition that a reconstruction pixel point is included above the second prediction block and a reconstruction pixel point is not included on the left side of the second prediction block, the third target pixel point comprises: an un-reconstructed pixel point to the left of the second prediction block;
under the condition that the reconstructed pixel point is not included above the second prediction block and the reconstructed pixel point is included on the left of the second prediction block, the third target pixel point comprises: and the non-reconstructed pixel point above the second prediction block.
In a seventh aspect, an embodiment of the present invention further provides a parameter obtaining apparatus, including: a transceiver, a memory, a processor, and a computer program stored on the memory and executable on the processor; the processor is configured to read a program in the memory to implement the steps in the parameter obtaining method according to the first aspect.
In an eighth aspect, an embodiment of the present invention further provides a pixel point pair selection apparatus, including: a transceiver, a memory, a processor, and a computer program stored on the memory and executable on the processor; the processor is configured to read a program in the memory to implement the steps in the pixel point pair selection method according to the second aspect or the steps in the pixel point pair selection method according to the third aspect.
In a ninth aspect, embodiments of the present invention further provide a computer-readable storage medium for storing a computer program, which when executed by a processor implements the steps in the parameter acquisition method according to the first aspect, or the steps in the pixel point pair selection method according to the second aspect, or the steps in the pixel point pair selection method according to the third aspect.
In the embodiment of the present invention, N sets of parameters may be obtained by N pixel point sets corresponding to the chroma prediction block and N pixel point sets corresponding to the luma prediction block, where N is an integer greater than 1, and the N sets of parameters are used to predict the chroma value of the chroma prediction block. Compared with the prior art that only one group of parameters can be acquired, the method and the device can increase the number of groups for acquiring the parameters, thereby improving the accuracy of the acquired parameters.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a schematic flow chart of a parameter obtaining method according to an embodiment of the present invention;
FIG. 2a is a schematic diagram of the upper orientation of a chroma prediction block provided by an embodiment of the present invention;
FIG. 2b is a diagram illustrating a left direction of a chroma prediction block according to an embodiment of the present invention;
fig. 3a is one of schematic diagrams of reconstructed pixel points around a chroma prediction block according to an embodiment of the present invention;
FIG. 3b is one of the schematic diagrams of reconstructed pixel points around a luma prediction block provided by the embodiment of the present invention;
FIG. 4a is a second schematic diagram of reconstructed pixel points around a chroma prediction block according to an embodiment of the present invention;
FIG. 4b is a second schematic diagram of reconstructed pixel points around the luma prediction block provided by the embodiment of the present invention;
FIG. 5a is a third schematic diagram of reconstructed pixel points around a chroma prediction block according to an embodiment of the present invention;
FIG. 5b is a third schematic diagram of reconstructed pixel points around a luma prediction block provided by an embodiment of the present invention;
FIG. 6a is a fourth schematic diagram of reconstructed pixel points around a chroma prediction block according to an embodiment of the present invention;
FIG. 6b is a fourth schematic diagram of reconstructed pixel points around the luma prediction block provided by the embodiment of the present invention;
FIG. 7a is a fifth schematic diagram of reconstructed pixel points around a chroma prediction block according to an embodiment of the present invention;
FIG. 7b is a fifth schematic diagram of reconstructed pixel points around a luma prediction block provided by an embodiment of the present invention;
FIG. 8a is a sixth schematic diagram of reconstructed pixel points around a chroma prediction block according to an embodiment of the present invention;
FIG. 8b is a sixth schematic diagram of reconstructed pixel points around a luma prediction block provided by an embodiment of the present invention;
FIG. 9 is a seventh exemplary illustration of reconstructed pixel points around a chroma prediction block according to an embodiment of the present invention;
fig. 10 is a flowchart of a pixel point pair selection method according to an embodiment of the present invention;
FIG. 11 is a second flowchart of a pixel point pair selection method according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of a parameter obtaining apparatus according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of a pixel point pair selection device according to an embodiment of the present invention;
fig. 14 is a second schematic structural diagram of a pixel point pair selection apparatus according to an embodiment of the present invention;
fig. 15 is a second schematic structural diagram of a parameter obtaining apparatus according to an embodiment of the present invention;
fig. 16 is a third schematic structural diagram of a pixel point pair selection apparatus according to an embodiment of the present invention;
fig. 17 is a fourth schematic structural diagram of a pixel point pair selection apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," and the like in this application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Further, as used herein, "and/or" means at least one of the connected objects, e.g., a and/or B and/or C, means 7 cases including a alone, B alone, C alone, and both a and B present, B and C present, both a and C present, and A, B and C present.
For convenience of understanding, the following description is directed to some of the aspects of the embodiments of the present invention:
there is strong correlation between different components of the video sequence, and the coding performance can be improved by utilizing the correlation between different components of the video sequence. In order to reduce redundant information between components, in TSCPM, a chroma component is predicted based on a reconstructed luma component at the same location, using the following linear model:
pred_C(i,j) = α · rec_L(i,j) + β
wherein pred_C refers to the chroma prediction block obtained by the linear calculation, rec_L refers to the luma component of the co-located luma coding block, and the parameters α and β are derived by minimizing the regression error between neighboring reconstructed luma and chroma samples, as follows:
α = ( N·Σ(L(n)·C(n)) - Σ L(n)·Σ C(n) ) / ( N·Σ(L(n)·L(n)) - Σ L(n)·Σ L(n) )
β = ( Σ C(n) - α·Σ L(n) ) / N
where L(n) represents the reconstructed luma samples of the left-neighboring and top-neighboring reconstructed pixels, C(n) represents the reconstructed chroma samples of the pixels neighboring the current chroma block on the left and the top, and N is the number of neighboring sample pairs. α and β do not need to be transmitted and are calculated in the same way at the decoder.
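For illustration only, the least-squares derivation above can be sketched as follows. This is a minimal sketch rather than the normative AVS3 procedure (which works with integer arithmetic); the function and variable names are chosen for this example.

```python
def compute_linear_params(luma_samples, chroma_samples):
    """Least-squares fit of pred_C = alpha * rec_L + beta over neighboring
    reconstructed luma/chroma sample pairs L(n), C(n)."""
    assert len(luma_samples) == len(chroma_samples) and luma_samples
    n = len(luma_samples)
    sum_l = sum(luma_samples)
    sum_c = sum(chroma_samples)
    sum_lc = sum(l * c for l, c in zip(luma_samples, chroma_samples))
    sum_ll = sum(l * l for l in luma_samples)
    denom = n * sum_ll - sum_l * sum_l
    alpha = 0.0 if denom == 0 else (n * sum_lc - sum_l * sum_c) / denom
    beta = (sum_c - alpha * sum_l) / n
    return alpha, beta

# Example: 4 neighboring reconstructed sample pairs, then prediction from co-located luma.
alpha, beta = compute_linear_params([100, 120, 140, 160], [60, 70, 80, 90])
pred_chroma = [alpha * rec_l + beta for rec_l in [110, 130]]  # pred_C = alpha * rec_L + beta
```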
In TSCPM, the availability of neighboring-block pixels is divided into 3 cases and 4 available pixel point pairs are obtained; α and β are calculated from the 4 available pixel point pairs, and after α and β are obtained, the chroma prediction value is derived from the reconstructed luma pixels according to the linear relationship between luma and chroma.
When selecting 4 pairs of available pixel points, the availability of the upper side pixel and the left side pixel needs to be considered, which is divided into the following 3 cases:
case one, if both the right upper side and right left side pixels of the current block are "available", 2 pixel point pairs are selected from the upper side and 2 pixel point pairs are selected from the left side.
Case two, if the current block is only available on the top side, then 4 pixel point pairs are all selected from the top side, the width of the selected position is: 0/4,1/4,2/4,3/4.
Case three, if only the left pixel is available for the current block, then 4 pixel point pairs are all selected from the right left, the selected position being the height: 0/4,1/4,2/4,3/4.
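The three availability cases can be summarized with a short sketch. The two offsets taken from each side in case one are an assumption of this example rather than a statement of the standard; the function name and coordinate convention are likewise illustrative.

```python
def select_four_pairs(top_available, left_available, width, height):
    """Positions (x, y), relative to the current block, of the 4 reference pixels used
    to form pixel point pairs.  Row y = -1 is the line directly above the block,
    column x = -1 is the column directly to its left."""
    if top_available and left_available:
        # Case one: 2 pairs from the upper side and 2 pairs from the left side.
        return ([(width * k // 4, -1) for k in (0, 2)] +
                [(-1, height * k // 4) for k in (0, 2)])
    if top_available:
        # Case two: all 4 pairs from the upper side, at 0/4, 1/4, 2/4, 3/4 of the width.
        return [(width * k // 4, -1) for k in range(4)]
    if left_available:
        # Case three: all 4 pairs from the left side, at 0/4, 1/4, 2/4, 3/4 of the height.
        return [(-1, height * k // 4) for k in range(4)]
    return []
```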
After the prediction mode of the chroma component is determined, the chroma coding scheme, without considering redundancy, is as shown in Table 1:
Table 1: Chroma coding modes
Mode index | Mode name  | Binarization
0          | DM         | 1
1          | DC         | 001
2          | Horizontal | 0001
3          | Vertical   | 00001
4          | Bilinear   | 00000
5          | TSCPM      | 01
Wherein, if binIdx is 0, context coding No. 0 is used; if binIdx is 1 and the mode is TSCPM, context coding No. 1 is used; otherwise, context coding No. 2 is used.
Binarizing a mode yields a bin string; as shown in Table 1, the bin string obtained by binarizing TSCPM is "01". binIdx identifies a bin within a bin string: a binIdx of 0 identifies the 1st bin in the bin string, a binIdx of 1 identifies the 2nd bin, and so on.
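The bin strings of Table 1 and the context rule described above can be expressed as a small lookup, shown below for illustration; the dictionary and function names are invented for this sketch.

```python
CHROMA_MODE_BINS = {  # mode name -> bin string, taken from Table 1
    "DM": "1", "DC": "001", "Horizontal": "0001",
    "Vertical": "00001", "Bilinear": "00000", "TSCPM": "01",
}

def context_index(bin_idx, mode):
    """Context coding number for the bin at position bin_idx of a chroma-mode bin string."""
    if bin_idx == 0:
        return 0
    if bin_idx == 1 and mode == "TSCPM":
        return 1
    return 2
```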
Referring to fig. 1, fig. 1 is a schematic flow chart of a parameter obtaining method according to an embodiment of the present invention. As shown in fig. 1, the parameter obtaining method according to the embodiment of the present invention may include the following steps:
step 101, determining N pixel point sets corresponding to a chroma prediction block and N pixel point sets corresponding to a brightness prediction block, wherein the chroma prediction block corresponds to the brightness prediction block, and N is an integer greater than 1.
It should be understood that N describes the number of the pixel point sets, each pixel point set includes a plurality of pixel points, and the pixel points included in the pixel point sets may be continuous or discontinuous. In addition, the pixel set corresponding to the chroma prediction block comprises a plurality of chroma pixels, so the pixel set corresponding to the chroma prediction block can be called as a chroma pixel set; the pixel set corresponding to the brightness prediction block comprises a plurality of brightness pixels, and therefore the pixel set corresponding to the brightness prediction block can be called a brightness pixel set.
The N chrominance pixel point sets and the N luminance pixel point sets have a one-to-one correspondence relationship. For the chroma pixel point set and the brightness pixel point set which have corresponding relations, the direction information of each chroma pixel point in the chroma pixel point set relative to the chroma prediction block is the same as the direction information of each brightness pixel point in the brightness pixel point set relative to the brightness prediction block. Such as: assuming that the chrominance pixel set 1 and the luminance pixel set 1 have a corresponding relationship, and each chrominance pixel in the chrominance pixel set 1 is right above the chrominance prediction block, correspondingly, each luminance pixel in the luminance pixel set 1 is right above the luminance prediction block.
Therefore, in the embodiment of the present invention, the chroma prediction block and the luminance prediction block may respectively correspond to at least two pixel point sets, so that compared with the prior art in which the chroma prediction block and the luminance prediction block both correspond to only one pixel point set, the acquisition of the pixel point sets corresponding to the chroma prediction block and the luminance prediction block is enriched.
Step 102, obtaining N groups of parameters corresponding to a prediction model according to N pixel point sets corresponding to the chroma prediction block and N pixel point sets corresponding to the brightness prediction block, wherein the N groups of parameters are used for predicting chroma values of the chroma prediction block.
In specific implementation, for each chrominance pixel point set and the corresponding luminance pixel point set, R chrominance pixel points can be selected from the chrominance pixel point set, R luminance pixel points are selected from the luminance pixel point set, and R groups of pixel point pairs are obtained, wherein each group of pixel point pairs comprises one chrominance pixel point and one luminance pixel point. And then, calculating a group of parameters alpha and beta by using the calculation formula of alpha and beta by using the R groups of pixel point pairs. In practical application, R may be 4, but it should be understood that the invention is not limited to the specific value of R, and the value of R may be determined according to actual requirements.
In a scenario where R is 4, the positions of the 4 pixel points selected from a pixel point set may be the 0/4, 1/4, 2/4 and 3/4 positions of that set, where the pixel point at position 0/4 may be understood as the initial pixel point of the set, and so on for the other positions. For example: if a certain pixel point set comprises 8 pixel points in a given direction, the 4 pixel points selected from the set may be the 1st, 3rd, 5th and 7th of the 8 pixel points.
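Assuming the relative positions are mapped to indices by simple integer division, the selection can be sketched as follows (the helper name is illustrative):

```python
def selected_indices(set_size, r=4):
    """Indices of the r pixel points taken at relative positions 0/r, 1/r, ..., (r-1)/r."""
    return [set_size * k // r for k in range(r)]

# An 8-pixel set yields indices [0, 2, 4, 6], i.e. the 1st, 3rd, 5th and 7th pixel points.
print(selected_indices(8))
```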
As can be seen from the above, a set of parameters can be obtained according to a chrominance pixel point set and a luminance pixel point set. Therefore, it can be understood that N sets of parameters can be obtained according to the N chrominance pixel point sets and the N luminance pixel point sets.
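Putting the two steps together, the derivation of the N parameter groups can be sketched as below. The sketch assumes the N chroma sets and the N luma sets are already paired up in corresponding order, and it uses a least-squares fit to stand in for the α/β computation; all names are illustrative.

```python
def fit_alpha_beta(luma, chroma):
    """Least-squares fit of chroma = alpha * luma + beta over paired samples."""
    n = len(luma)
    sl, sc = sum(luma), sum(chroma)
    slc = sum(l * c for l, c in zip(luma, chroma))
    sll = sum(l * l for l in luma)
    denom = n * sll - sl * sl
    alpha = 0.0 if denom == 0 else (n * slc - sl * sc) / denom
    return alpha, (sc - alpha * sl) / n

def derive_parameter_groups(chroma_sets, luma_sets, r=4):
    """For each of the N corresponding (chroma set, luma set) pairs, select R pixel
    point pairs and derive one (alpha, beta) group, giving N groups in total."""
    groups = []
    for chroma_set, luma_set in zip(chroma_sets, luma_sets):
        c_idx = [len(chroma_set) * k // r for k in range(r)]  # 0/R, 1/R, ... positions
        l_idx = [len(luma_set) * k // r for k in range(r)]
        chroma_sel = [chroma_set[i] for i in c_idx]
        luma_sel = [luma_set[i] for i in l_idx]
        groups.append(fit_alpha_beta(luma_sel, chroma_sel))
    return groups
```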
In the parameter obtaining method of this embodiment, N sets of parameters may be obtained by using N pixel sets corresponding to the chroma prediction block and N pixel sets corresponding to the luma prediction block, where N is an integer greater than 1, and the N sets of parameters are used to predict the chroma value of the chroma prediction block. Compared with the prior art that only one group of parameters can be acquired, the method and the device can increase the number of groups for acquiring the parameters, thereby improving the accuracy of the acquired parameters.
In the embodiment of the present invention, the chroma values of the chroma prediction block may be predicted by using any one of the N sets of parameters. Optionally, after acquiring N sets of parameters corresponding to a prediction model according to the N pixel point sets corresponding to the first prediction block, the method further includes:
selecting a target parameter group from the N groups of parameters, wherein the target parameter group is a group of parameters corresponding to the minimum coding cost in the N groups of parameters;
predicting chroma values of the chroma predicted block using the set of target parameters.
In specific implementation, the coding cost corresponding to each group of parameters can be calculated to obtain N coding costs. Then, the target parameter group corresponding to the minimum coding cost is selected from the N groups of parameters, and the chroma values of the chroma prediction block are calculated using the target parameter group.
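The selection of the target parameter group can be sketched as follows; the cost measure is deliberately left abstract, since the embodiment does not fix how the coding cost is computed, and the names are illustrative.

```python
def select_target_group(parameter_groups, coding_cost):
    """Return the (alpha, beta) group with the smallest coding cost among the N groups.

    parameter_groups: list of (alpha, beta) tuples.
    coding_cost: callable mapping an (alpha, beta) group to its coding cost,
                 e.g. a rate-distortion cost evaluated by the encoder.
    """
    return min(parameter_groups, key=coding_cost)
```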
In this way, compared with calculating the chroma prediction block using any of the N groups of parameters other than the target parameter group, the coding cost of the chroma prediction block can be reduced, the bit rate required for coding the chroma prediction block is reduced, the coding efficiency is improved, and a coding gain is obtained.
Embodiments of the present invention refer to several position regions of a first prediction block (the chroma prediction block or the luma prediction block): above, directly above, first upper left, upper right, left, directly left, second upper left, and lower left. To conveniently distinguish these regions, they are described with reference to fig. 2a and 2b:
In fig. 2a, the upper edge of the chroma prediction block is taken as the first reference line 21, and the region above the first reference line 21 is regarded as being above the chroma prediction block. Further, as shown in fig. 2a, the region above the chroma prediction block may be divided into three parts, specifically the first upper left, directly above, and the upper right.
In fig. 2b, the left edge of the chroma prediction block is taken as the second reference line 22, and the region to the left of the second reference line 22 is regarded as being to the left of the chroma prediction block. Further, as shown in fig. 2b, the region to the left of the chroma prediction block may be divided into three parts, specifically the second upper left, directly left, and the lower left.
In the embodiment of the present invention, the set of pixels corresponding to the first prediction block is related to the availability of the neighboring blocks of the first prediction block. Specifically, the set of pixels corresponding to the first prediction block is related to whether reconstructed pixel points are included above and to the left of the first prediction block. Therefore, based on the different determination results of whether the upper side and the left side of the first prediction block include the reconstructed pixel, the pixel sets corresponding to the first prediction block may be different.
Determination result one: both the upper side and the left side of the first prediction block include reconstructed pixel points.
In the embodiment of the present invention, as long as one of the three orientations above the first prediction block includes a reconstruction pixel, it can be considered that the reconstruction pixel is included above the first prediction block. Similarly, as long as one of the three orientations of the left side of the first prediction block includes a reconstructed pixel, it can be considered that the left side of the first prediction block includes a reconstructed pixel.
As shown in fig. 3a, the first upper left, directly above, second upper left and directly left of the first chroma pixel block 31 include reconstructed pixel points, while the upper right and lower left of the first chroma pixel block 31 do not. As shown in fig. 3b, the first upper left, directly above, second upper left and directly left of the first luma pixel block 32 each include reconstructed pixel points, while the upper right and lower left of the first luma pixel block 32 do not.
As shown in fig. 4a, the first upper left, directly above, upper right, second upper left, directly left and lower left of the second chroma pixel block 41 all include reconstructed pixel points. As shown in fig. 4b, the first upper left, directly above, second upper left, directly left and lower left of the second luma pixel block 42 each include reconstructed pixel points.
It should be understood that the distribution positions of the reconstructed pixel points in fig. 3a to 4b are merely examples, and do not limit the distribution positions of the reconstructed pixel points of the first prediction block under determination result one.
Corresponding to the first determination result, optionally, the N pixel point sets corresponding to the first prediction block may include at least two of the following pixel point sets:
a first pixel point set, wherein the first pixel point set comprises reconstructed pixel points directly above the first prediction block and reconstructed pixel points directly to the left of the first prediction block;
a second set of pixels comprising only reconstructed pixels directly above the first prediction block;
a third pixel point set, wherein the third pixel point set comprises reconstructed pixel points at the first upper left or the upper right of the first prediction block, and the first upper left is the region to the left of the region directly above the first prediction block;
a fourth set of pixels comprising only reconstructed pixels directly to the left of the first prediction block;
a fifth pixel point set, wherein the fifth pixel point set comprises reconstructed pixel points at the second upper left or the lower left of the first prediction block, and the second upper left is the region above the region directly to the left of the first prediction block.
In specific implementation, for a first pixel point set, the first pixel point set includes reconstructed pixel points above and to the left of a first prediction block, and the first pixel point set at least includes reconstructed pixel points directly above the first prediction block and reconstructed pixel points directly to the left of the first prediction block.
In an implementation manner, the first pixel set corresponding to the first prediction block may be composed of a reconstructed pixel right above the first prediction block and a reconstructed pixel right left of the first prediction block.
In another implementation, the first pixel point set corresponding to the first prediction block may further include reconstructed pixel points of at least one of a first upper left, an upper right, a second upper left, and a lower left of the first prediction block.
For the second pixel point set, it only includes the reconstructed pixel point right above the first prediction block, and therefore, the second pixel point set corresponding to the first prediction block may be composed of the reconstructed pixel point right above the first prediction block.
And for a third pixel point set, only including the pixel points above the first prediction block, and the third pixel point set at least including the reconstructed pixel points above the first left or above the right of the first prediction block.
Optionally, the third pixel point set may include any one of the following items:
reconstructed pixel points at the first upper left of the first prediction block;
reconstructed pixel points at the upper right of the first prediction block;
reconstructed pixel points at the first upper left and the upper right of the first prediction block;
reconstructed pixel points at the first upper left of and directly above the first prediction block;
reconstructed pixel points directly above and at the upper right of the first prediction block;
and reconstructed pixel points at the first upper left of, directly above, and at the upper right of the first prediction block.
In concrete implementation, the pixel points included in the third pixel point set need to be further determined by combining the distribution positions of the reconstructed pixel points above the first prediction block.
In a case where both the first upper left of and the region directly above the first prediction block include reconstructed pixel points and the upper right of the first prediction block does not include reconstructed pixel points, the third pixel point set may include any one of:
reconstructed pixel points at the first upper left of the first prediction block;
and reconstructed pixel points at the first upper left of and directly above the first prediction block.
In a case where the first upper left of, the region directly above, and the upper right of the first prediction block all include reconstructed pixel points, the third pixel point set may include any one of:
reconstructed pixel points at the first upper left of the first prediction block;
reconstructed pixel points at the upper right of the first prediction block;
reconstructed pixel points at the first upper left and the upper right of the first prediction block;
reconstructed pixel points at the first upper left of and directly above the first prediction block;
reconstructed pixel points directly above and at the upper right of the first prediction block;
and reconstructed pixel points at the first upper left of, directly above, and at the upper right of the first prediction block.
For the fourth pixel point set, only the reconstructed pixel points directly to the left of the first prediction block are included, and therefore, the fourth pixel point set corresponding to the first prediction block may be composed of the reconstructed pixel points directly to the left of the first prediction block.
And for a fifth pixel point set, the fifth pixel point set only comprises pixels on the left side of the first prediction block, and the fifth pixel point set at least comprises reconstructed pixels on the second upper left side or the lower left side of the first prediction block.
Optionally, the fifth pixel point set may include any one of the following items:
reconstructed pixel points at the second upper left of the first prediction block;
reconstructed pixel points at the lower left of the first prediction block;
reconstructed pixel points at the second upper left and the lower left of the first prediction block;
reconstructed pixel points at the second upper left of and directly to the left of the first prediction block;
reconstructed pixel points directly to the left of and at the lower left of the first prediction block;
and reconstructed pixel points at the second upper left of, directly to the left of, and at the lower left of the first prediction block.
In concrete implementation, the pixel points included in the fifth pixel point set need to be further determined by combining the distribution position of the reconstructed pixel point on the left side of the first prediction block.
In a case where both the second upper left of and the region directly to the left of the first prediction block include reconstructed pixel points and the lower left of the first prediction block does not include reconstructed pixel points, the fifth pixel point set may include any one of:
reconstructed pixel points at the second upper left of the first prediction block;
and reconstructed pixel points at the second upper left of and directly to the left of the first prediction block.
In a case where the second upper left of, the region directly to the left of, and the lower left of the first prediction block all include reconstructed pixel points, the fifth pixel point set may include any one of:
reconstructed pixel points at the second upper left of the first prediction block;
reconstructed pixel points at the lower left of the first prediction block;
reconstructed pixel points at the second upper left and the lower left of the first prediction block;
reconstructed pixel points at the second upper left of and directly to the left of the first prediction block;
reconstructed pixel points directly to the left of and at the lower left of the first prediction block;
and reconstructed pixel points at the second upper left of, directly to the left of, and at the lower left of the first prediction block.
As can be seen from the above, on the one hand, compared with the prior art that only the reconstructed pixel points right above and right left of the first prediction block are utilized, the embodiment of the present invention fully utilizes the reconstructed pixel points in each direction of the first prediction block, and improves the utilization rate of the reconstructed pixel points in each direction of the first prediction block. On the other hand, compared with the prior art that the reconstructed pixel points above and to the left of the first prediction block are required to be utilized when the reconstructed pixel points above and to the left of the first prediction block are both included, the embodiment of the present invention may only utilize the reconstructed pixel points above or to the left of the first prediction block, thereby improving the flexibility of utilizing the reconstructed pixel points.
Determination result two: reconstructed pixel points are included above the first prediction block, and no reconstructed pixel points are included to the left of the first prediction block.
As shown in fig. 5a, the first upper left, directly above, second upper left and directly left of the third chroma pixel block 51 include reconstructed pixel points, while the upper right and lower left of the third chroma pixel block 51 do not. As shown in fig. 5b, the first upper left, directly above, second upper left and directly left of the third luma pixel block 52 each include reconstructed pixel points, while the upper right and lower left of the third luma pixel block 52 do not.
As shown in fig. 6a, the first upper left, directly above, upper right, second upper left, directly left and lower left of the fourth chroma pixel block 61 all include reconstructed pixel points. As shown in fig. 6b, the first upper left, directly above, second upper left, directly left and lower left of the fourth luma pixel block 62 each include reconstructed pixel points.
It should be understood that the distribution positions of the reconstructed pixel points in fig. 5a to 6b are merely examples, and do not limit the distribution positions of the reconstructed pixel points of the first prediction block under determination result two.
For the second determination result, optionally, the N pixel point sets corresponding to the first prediction block include at least two of the following pixel point sets:
a second set of pixels comprising only reconstructed pixels directly above the first prediction block;
a third pixel point set, wherein the third pixel point set comprises reconstructed pixel points at the first upper left or the upper right of the first prediction block, and the first upper left is the region to the left of the region directly above the first prediction block;
a sixth set of pixels comprising non-reconstructed pixels to the left of the first prediction block.
It should be noted that the second pixel point set and the third pixel point set in the second determination result are the same as the second pixel point set and the third pixel point set in the first determination result, and the description in the first determination result may be specifically referred to, and details are not repeated here.
For the sixth pixel point set, the pixel information of the left non-reconstructed pixel point included in the sixth pixel point set may be filled using a preset filling rule.
As can be seen from the above, in the case where reconstructed pixel points are included above the first prediction block and no reconstructed pixel points are included to the left of the first prediction block, the embodiment of the present invention can fully utilize the reconstructed pixel points in each direction above the first prediction block, or the un-reconstructed pixel points to the left of the first prediction block. Therefore, compared with the prior art in which only the reconstructed pixel points directly above the first prediction block are utilized, the utilization rate of the pixel points around the first prediction block is improved.
Determination result three: no reconstructed pixel points are included above the first prediction block, and reconstructed pixel points are included to the left of the first prediction block.
As shown in fig. 7a, the first upper left, directly above, second upper left and directly left of the fifth chroma pixel block 71 each include reconstructed pixel points, while the upper right and lower left of the fifth chroma pixel block 71 do not. As shown in fig. 7b, the first upper left, directly above, second upper left and directly left of the fifth luma pixel block 72 include reconstructed pixel points, while the upper right and lower left of the fifth luma pixel block 72 do not.
As shown in fig. 8a, the first upper left, directly above, upper right, second upper left, directly left and lower left of the sixth chroma pixel block 81 all include reconstructed pixel points. As shown in fig. 8b, the first upper left, directly above, second upper left, directly left and lower left of the sixth luma pixel block 82 each include reconstructed pixel points.
It should be understood that the distribution positions of the reconstructed pixel points in fig. 7a to 8b are merely examples, and do not limit the distribution positions of the reconstructed pixel points of the first prediction block under determination result three.
For the third determination result, optionally, the N pixel point sets corresponding to the first prediction block include at least two of the following pixel point sets:
a fourth set of pixels comprising only reconstructed pixels to the right left of the first prediction block;
a fifth pixel point set, wherein the fifth pixel point set comprises reconstructed pixel points at the second upper left or the lower left of the first prediction block, and the second upper left is the region above the region directly to the left of the first prediction block;
a seventh set of pixels comprising non-reconstructed pixels above the first prediction block.
It should be noted that the fourth pixel point set and the fifth pixel point set in the third determination result are the same as the fourth pixel point set and the fifth pixel point set in the first determination result, and the description in the first determination result may be specifically referred to, and details are not repeated here.
For the seventh pixel point set, the pixel information of the un-reconstructed pixel points above the first prediction block included in the seventh pixel point set may be filled using a preset filling rule.
As can be seen from the above, in the case where no reconstructed pixel points are included above the first prediction block and reconstructed pixel points are included to the left of the first prediction block, the embodiment of the present invention can fully utilize the reconstructed pixel points in each direction to the left of the first prediction block, or the un-reconstructed pixel points above the first prediction block. Therefore, compared with the prior art in which only the reconstructed pixel points directly to the left of the first prediction block are utilized, the utilization rate of the pixel points around the first prediction block is improved.
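The dependence of the candidate pixel point sets on neighbour availability, across determination results one to three, can be summarized in a sketch. The set names follow the first to seventh pixel point sets defined above; which subsets an encoder actually evaluates as its N sets is a design choice, so this is illustrative only.

```python
def candidate_pixel_sets(top_reconstructed, left_reconstructed):
    """Names of the pixel point sets from which the N sets of a prediction block may be
    chosen, given whether reconstructed pixels exist above and/or to the left of it."""
    if top_reconstructed and left_reconstructed:   # determination result one
        return ["first", "second", "third", "fourth", "fifth"]
    if top_reconstructed:                          # determination result two
        return ["second", "third", "sixth"]
    if left_reconstructed:                         # determination result three
        return ["fourth", "fifth", "seventh"]
    return []
```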
In this embodiment of the present invention, optionally, the N pixel sets corresponding to the first prediction block include the first pixel above the first prediction block, and the first prediction block is the chroma prediction block or the luma prediction block;
the first pixel points are: pixel points in the I rows of pixel points closest to the first prediction block among the P rows of pixel points above the first prediction block, wherein P is an integer greater than I, and I is a positive integer less than or equal to 4.
In practical applications, I may be 4. In addition, under the condition that the first pixel point includes a plurality of pixel points, the plurality of pixel points may be pixel points in the same row above the first prediction block.
In this way, compared with acquiring parameters by using pixel points from the P lines of pixel points, acquiring parameters by using pixel points from the I lines of pixel points closest to the first prediction block leads to a lower coding cost corresponding to the acquired parameters, so that the utilization rate of the acquired parameters can be improved.
Optionally, the N pixel point sets corresponding to the first prediction block include a second pixel point on the left of the first prediction block, where the first prediction block is the chroma prediction block or the luma prediction block;
the second pixel point is as follows: pixel points in the J columns of pixel points closest to the first prediction block among the Q columns of pixel points on the left side of the first prediction block, wherein Q is an integer greater than J, and J is a positive integer less than or equal to 4.
In practical applications, J may be 4. In addition, in the case that the second pixel includes a plurality of pixels, the plurality of pixels may be pixels in the same column on the left side of the first prediction block.
In this way, compared with acquiring parameters by using pixel points from the Q columns of pixel points, acquiring parameters by using pixel points from the J columns of pixel points closest to the first prediction block leads to a lower coding cost corresponding to the acquired parameters, so that the utilization rate of the acquired parameters can be improved.
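As an aid to understanding, the selection of reference pixel points from only the rows and columns nearest to the prediction block can be sketched in Python as follows; the 2-D array layout, the coordinate convention and the function names are assumptions made purely for illustration and are not part of the described method:

    # Illustrative sketch; the picture is a 2-D list indexed picture[row][column],
    # and (y0, x0) is the top-left sample of the prediction block.
    def nearest_rows_above(picture, y0, x0, width, I=4):
        # The I rows of pixel points immediately above the block, farthest row first.
        rows = []
        for dy in range(I, 0, -1):
            y = y0 - dy
            if y >= 0:
                rows.append(picture[y][x0:x0 + width])
        return rows

    def nearest_columns_left(picture, y0, x0, height, J=4):
        # The J columns of pixel points immediately to the left of the block, farthest column first.
        cols = []
        for dx in range(J, 0, -1):
            x = x0 - dx
            if x >= 0:
                cols.append([picture[y][x] for y in range(y0, y0 + height)])
        return cols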
In the embodiment of the present invention, as can be seen from the foregoing, the third set of pixels may include reconstructed pixels at the first upper left or upper right of the first prediction block, and the fifth set of pixels may include reconstructed pixels at the second upper left or lower left of the first prediction block. In order to improve the utilization rate of the parameters obtained according to the third pixel point set and/or the fifth pixel point set, in the embodiment of the present invention, the first length L1 corresponding to all the pixel points of the third pixel point set and/or the second length L2 corresponding to all the pixel points of the fifth pixel point set may be defined.
In the embodiment of the present invention, the first prediction block is a chroma prediction block or a luma prediction block. Therefore, it should be understood that L1 (or L2) corresponding to the chroma prediction block is determined in the same manner as L1 (or L2) corresponding to the luma prediction block.
First, regarding the third pixel point set.
Optionally, the first lengths corresponding to all the pixel points of the third pixel point set satisfy any one of the following:
the first length is no more than twice a width of the first prediction block;
the first length does not exceed a sum of a width and a height of the first prediction block.
The first length may be understood as a total length of all pixels in the third pixel set, specifically, the first length L1 is a × y, where a is a number of pixels included in the third pixel set, and y is a length of one pixel. For example, if the third pixel point set corresponding to the chroma prediction block includes 8 pixel points, and the length of one pixel point is y, the first length L1 corresponding to all pixel points of the third pixel point set corresponding to the chroma prediction block is 8 y.
When the width of the first prediction block is W and the height of the first prediction block is H, L1 satisfies: L1 ≤ 2W; or L1 ≤ W + H.
Therefore, the encoding cost corresponding to the parameter acquired by using the third pixel point set is low, and the utilization rate of the acquired parameter can be improved.
Further, in the embodiment of the present invention, the third pixel point set may include reconstructed pixel points, that is, pixel points that have been reconstructed, or may include non-reconstructed pixel points, that is, pixel points that have not been reconstructed.
In case one, the third pixel point set only includes reconstructed pixel points and does not include non-reconstructed pixel points.
In case one, the first length L1 satisfies:
K1≤L1<K2;
k1 is the total length of all pixels included in a first target pixel, the first target pixel is R pixels selected from the third pixel set, the first target pixel is used to obtain one of the N sets of parameters, R is a positive integer, that is, K1 is R × y; k2 is the total length of all reconstructed pixel points included above the first prediction block.
It should be understood that, on the premise that L1 ≤ 2W or L1 ≤ W + H, L1 further satisfies K1 ≤ L1 < K2.
In the first case, all the pixels included in the first target pixel are reconstructed pixels.
Thus, L1 in case one can ensure that the pixels included in the first target pixel are reconstructed pixels, and R pixels can be selected.
And in the second case, the third pixel point set comprises reconstructed pixel points and non-reconstructed pixel points.
In case two, the first length L1 satisfies:
K3<L1≤K4;
k3 is the total length of all reconstructed pixels included above the first prediction block, and K4 is twice the width of the first prediction block or the sum of the width and height of the first prediction block.
Thus, L1 in case two can ensure that the third set of pixels includes the non-reconstructed pixels.
In the second case, the first target pixel point may include an unreconstructed pixel point or may not include an unreconstructed pixel point, and is specifically determined according to the distribution positions of the reconstructed pixel point and the unreconstructed pixel point in the third pixel point set and the selection mode for selecting the first target pixel point from the third pixel point set. The first target pixel points are R pixel points selected from the third pixel point set, the first target pixel points are used for obtaining one group of parameters in the N groups of parameters, and R is a positive integer.
It should be noted that, in the embodiment of the present invention, the pixel information of a non-reconstructed pixel point in a certain pixel point set may be filled with the pixel information of a third pixel point, where the third pixel point is the reconstructed pixel point closest to the non-reconstructed pixel point in the pixel point set.
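A minimal Python sketch of this nearest-reconstructed-pixel filling idea is given below; the data layout and the distance measure are assumptions for illustration, and the normative filling rule remains the one specified in the AVS3 standard:

    # Illustrative only; the normative padding rule is defined in AVS3.
    def fill_unreconstructed(values, reconstructed_flags):
        # values[i] is a pixel value, reconstructed_flags[i] tells whether it is reconstructed.
        # Each non-reconstructed entry is filled with the value of the nearest reconstructed entry.
        filled = list(values)
        recon_idx = [i for i, ok in enumerate(reconstructed_flags) if ok]
        for i, ok in enumerate(reconstructed_flags):
            if not ok and recon_idx:
                nearest = min(recon_idx, key=lambda j: abs(j - i))
                filled[i] = values[nearest]
        return filled

    # Example: the last four positions are not yet reconstructed.
    print(fill_unreconstructed([10, 12, 14, 16, 0, 0, 0, 0],
                               [True, True, True, True, False, False, False, False]))
    # -> [10, 12, 14, 16, 16, 16, 16, 16]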
For the convenience of understanding the above first and second cases, the following is exemplified in conjunction with fig. 9:
in fig. 9, the seventh chroma prediction block 91 includes 8 × 8 pixels, the width of the seventh chroma prediction block 91 is W, and the height of the seventh chroma prediction block 91 is H. There are 12 reconstructed pixel points and 4 non-reconstructed pixel points above the seventh chroma prediction block 91; that is, the total length of the reconstructed pixel points above the seventh chroma prediction block 91 is 1.5W, and the total length of the reconstructed pixel points and the non-reconstructed pixel points above the seventh chroma prediction block 91 is 2W. In addition, it is assumed that R takes a value of 4.
Thus, for fig. 9, in case one, L1 needs to satisfy: 4y ≤ L1 < 1.5W; in case two, L1 needs to satisfy: 1.5W < L1 ≤ 2W, or 1.5W < L1 ≤ W + H.
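The two ranges in the example of fig. 9 can be checked numerically with the short Python sketch below, in which the length y of one pixel point is normalised to 1 (so W = H = 8); the variable names are assumptions for illustration:

    # Normalise the length of one pixel point to y = 1.
    y = 1
    W = H = 8                 # 8x8 seventh chroma prediction block
    reconstructed_above = 12  # total length 1.5W
    total_above = 16          # total length 2W
    R = 4

    # Case one: only reconstructed pixel points, K1 <= L1 < K2.
    K1, K2 = R * y, reconstructed_above * y
    print(f"case one: {K1} <= L1 < {K2}")      # 4 <= L1 < 12, i.e. 4y <= L1 < 1.5W

    # Case two: reconstructed and non-reconstructed pixel points, K3 < L1 <= K4.
    K3 = reconstructed_above * y
    K4 = min(2 * W, W + H)    # here 2W and W + H coincide because W = H
    print(f"case two: {K3} < L1 <= {K4}")      # 12 < L1 <= 16, i.e. 1.5W < L1 <= 2W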
Second, regarding the fifth pixel point set.
Optionally, the second lengths corresponding to all the pixel points of the fifth pixel point set satisfy any one of the following:
the second length is no more than twice a height of the first prediction block;
the second length does not exceed a sum of a width and a height of the first prediction block.
Further, in the embodiment of the present invention, the fifth pixel point set may include reconstructed pixel points, that is, pixel points that have been reconstructed, or may include non-reconstructed pixel points, that is, pixel points that have not been reconstructed.
And in the third case, the fifth pixel point set only comprises reconstructed pixel points and does not comprise non-reconstructed pixel points.
In case three, the second length L2 satisfies:
K5≤L2<K6;
k5 is the total length of all pixels included in a second target pixel, the second target pixel is R pixels selected from the fifth pixel set, the second target pixel is used for obtaining one set of parameters in the N sets of parameters, and R is a positive integer; k6 is the total length of all reconstructed pixel points included to the left of the first prediction block.
In case three, all the pixels included in the second target pixel are reconstructed pixels.
Thus, in case three, L2 may ensure that all pixels included in the second target pixel are reconstructed pixels, and R pixels may be selected.
And in the fourth case, the fifth pixel point set comprises reconstructed pixel points and non-reconstructed pixel points.
In case four, the second length L2 satisfies:
K7<L2≤K8;
wherein K7 is a total length of all reconstructed pixels included to the left of the first prediction block, and K8 is twice a height of the first prediction block or a sum of a width and a height of the first prediction block.
Thus, L2 in case four can ensure that the fifth set of pixels includes the non-reconstructed pixels.
In the fourth case, the second target pixel point may include non-reconstructed pixel points or may not include non-reconstructed pixel points, which is specifically determined according to the distribution positions of the reconstructed pixel points and the non-reconstructed pixel points in the fifth pixel point set and the selection manner of selecting the second target pixel point from the fifth pixel point set. The second target pixel points are R pixel points selected from the fifth pixel point set, the second target pixel points are used for obtaining one group of parameters in the N groups of parameters, and R is a positive integer.
It should be noted that the definition of L2 in the fifth pixel point set is similar to the definition of L1 in the third pixel point set, and specific reference may be made to the description of the third pixel point set, which is not repeated herein.
It should be noted that, various optional implementations described in the embodiments of the present invention may be implemented in combination with each other or separately without conflict between the implementations, and the embodiments of the present invention are not limited in this respect.
For ease of understanding, examples are illustrated below:
First embodiment
Step one, when predicting the pixel value of the current chroma prediction block, counting the number of the reconstructed luma pixels which can be used on the upper side (including the upper right) and the left side (including the lower left) of the luma block corresponding to the current chroma block.
And step two, after the number of usable reconstructed pixels is obtained in the step one, selecting 4 pixel point pairs at corresponding positions of the upper luminance reconstructed pixel and the chrominance reconstructed pixel.
And step three, after the number of usable reconstructed pixels is obtained in the step one, selecting 4 pixel point pairs at corresponding positions of the left luminance reconstructed pixel and the left chrominance reconstructed pixel.
And step four, according to the 4 pixel point pairs obtained in step two and the 4 pixel point pairs obtained in step three respectively, obtaining, in the manner specified by the chroma two-step prediction mode, the parameters required by the chroma two-step prediction mode for calculating the chroma prediction value.
The first embodiment can significantly improve the performance of coding, especially the chroma component, without affecting the coding time, thereby improving the coding efficiency and bringing coding gain.
Second embodiment
Step one, when a predicted value of a chroma prediction block is obtained, counting the number of reconstructed brightness pixels on the upper side (including the upper right side) of a brightness block corresponding to a current chroma block, and recording the number as numLenT;
step two, when a predicted value of a chroma prediction block is obtained, counting the number of reconstructed luminance pixels on the left side (including the lower left) of the luminance block corresponding to the current chroma block, and recording the number as numLenL;
step three, through the calculation in step one and step two, respectively selecting 4 pixel point pairs from the reconstructed pixels around the current chrominance block and the reconstructed pixels around the corresponding luminance block, wherein the selected positions are the 0/4, 1/4, 2/4 and 3/4 positions of the available pixels.
Step four, the parameters α and β are calculated, in the manner specified in the AVS3 standard, from the upper 4 pixel point pairs and the left 4 pixel point pairs obtained by the calculation in step three, respectively. The predicted values of the corresponding chrominance block are then calculated according to the two groups of calculated parameters respectively, and the corresponding coding costs are calculated.
And step five, selecting the minimum coding cost among the two coding costs obtained in step four and the coding cost obtained by the original TSCPM, which uses both the upper side pixels and the left side pixels.
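The flow of the second embodiment can be sketched in Python as follows. The pixel point pairs are taken at the 0/4, 1/4, 2/4 and 3/4 positions as in step three; the derivation of α and β is shown here as an ordinary least-squares linear fit purely for illustration, whereas the actual derivation and the coding-cost computation are those specified by the AVS3 standard; all function and variable names, as well as the numeric values in the usage example, are assumptions:

    # Illustrative sketch of the second embodiment; not the normative AVS3 procedure.
    def select_pairs(luma_refs, chroma_refs, R=4):
        # Pick R pixel point pairs at the 0/4, 1/4, ..., (R-1)/R positions of the available pixels.
        pairs = []
        for k in range(R):
            li = k * len(luma_refs) // R
            ci = k * len(chroma_refs) // R
            pairs.append((luma_refs[li], chroma_refs[ci]))
        return pairs

    def derive_alpha_beta(pairs):
        # Least-squares linear fit chroma ~= alpha * luma + beta (illustrative only).
        n = len(pairs)
        sx = sum(l for l, _ in pairs)
        sy = sum(c for _, c in pairs)
        sxx = sum(l * l for l, _ in pairs)
        sxy = sum(l * c for l, c in pairs)
        denom = n * sxx - sx * sx
        alpha = (n * sxy - sx * sy) / denom if denom else 0.0
        beta = (sy - alpha * sx) / n
        return alpha, beta

    def best_mode(candidates):
        # candidates: {mode_name: coding_cost}; keep the mode with the minimum cost (step five).
        return min(candidates, key=candidates.get)

    # Usage with made-up reference pixels:
    top_pairs  = select_pairs([60, 64, 70, 72, 80, 84, 90, 92], [30, 32, 35, 36, 40, 42, 45, 46])
    left_pairs = select_pairs([58, 62, 66, 70], [29, 31, 33, 35])
    a_t, b_t = derive_alpha_beta(top_pairs)
    a_l, b_l = derive_alpha_beta(left_pairs)
    # ...compute the prediction and its coding cost for each parameter group, then:
    print(best_mode({"TSCPM": 101.0, "TSCPM_T": 98.5, "TSCPM_L": 99.2}))   # -> TSCPM_T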
Table one shows the test results of the present embodiment on the AVS3 test sequences, with the All Intra test configuration and Quantization Parameters (QP) of 27, 32, 38 and 45. The evaluation criterion is the BD-rate calculation method proposed by Bjontegaard.
Table one: general test results for the AVS3 test sequences
As can be seen from Table one, the present invention can improve the coding efficiency and bring coding gain.
In the embodiment of the present invention, optionally, when deriving the parameters α and β of the chroma two-step prediction mode, 4 pixel point pairs are selected, where the 4 pixel point pairs all come from the reference pixel information on the upper side (including the upper right) of the current chroma block and the reference pixels at the corresponding positions of the luma block corresponding to the chroma block.
Optionally, when deriving the parameters α and β of the chroma two-step prediction mode, 4 pixel point pairs are selected, where the 4 pixel point pairs all come from the reference pixel information of the same column on the left side (including the lower left) of the current chroma block and the reference pixels at the corresponding positions of the luma block corresponding to the chroma block.
Optionally, the upper reference pixels used include the upper left, the right above and the upper right of the current chrominance block and the corresponding luminance block.
Optionally, the left reference pixels used include the top left, right left, and bottom left of the current chroma block and the corresponding luma block.
Optionally, when the pixels used by the 4 pixel point pairs are reference pixel information of the upper side of the current chroma block and the corresponding luma block, the reference pixels are within 4 adjacent lines of the upper side of the current block.
Optionally, when the pixels used by the 4 pixel point pairs are reference pixel information of the upper side of the current chrominance block and the corresponding luminance block, the position of the reference pixel does not exceed twice the width of the current block or the sum of the width and height of the current block.
Optionally, when the pixels used by the 4 pixel point pairs are the reconstructed pixel information on the upper side of the current chrominance block and the corresponding luminance block, the number of pixels used may take any value that is greater than or equal to 4 and smaller than the number of reconstructed pixels.
Optionally, when the pixels used by the 4 pixel point pairs include reference pixel information that has not been reconstructed on the upper side of the current chroma block and the corresponding luma block, the pixel information of the non-reconstructed positions uses the filling rule for intra reference pixels specified in the AVS3 standard, and the number of pixels used is greater than the number of pixels that have been reconstructed and less than or equal to twice the width of the current block or the sum of the width and height of the current block.
Optionally, when the pixels used by the 4 pixel point pairs are reference pixel information of the left side of the current chroma block and the corresponding luma block, the reference pixels are within 4 adjacent columns of the left side of the current block.
Optionally, when the pixels used by the 4 pixel point pairs are reference pixel information on the left side of the current chrominance block and the corresponding luminance block, the position of the reference pixel does not exceed twice the height of the current block or the sum of the width and height of the current block.
Optionally, when the pixels used by the 4 pixel point pairs are the reconstructed pixel information on the left side of the current chrominance block and the corresponding luminance block, the number of pixels used may take any value that is greater than or equal to 4 and smaller than the number of reconstructed pixels.
Optionally, when the pixels used by the 4 pixel point pairs include reference pixel information that has not been reconstructed on the left side of the current chroma block and the corresponding luma block, the pixel information of the non-reconstructed positions uses the filling rule for intra reference pixels specified in the AVS3 standard, and the number of pixels used is greater than the number of pixels that have been reconstructed and less than or equal to twice the height of the current block or the sum of the width and height of the current block.
The embodiment of the invention mainly provides an optimization for the chroma two-step prediction of images and videos, and the technique is applied to the coding and decoding process of chroma component prediction. The aim is to derive the parameters α and β used in chroma two-step prediction at the encoding/decoding end, as far as possible, from the encoded/decoded information around the current coding block, thereby reducing the bit rate required for encoding and bringing coding gain while essentially not increasing the encoding time.
The embodiment of the invention also provides parameter acquisition equipment capable of executing the method embodiment. Because the principle of solving the problem of the parameter obtaining device is similar to the parameter obtaining method in the embodiment of the present invention, the implementation of the parameter obtaining device may refer to the implementation of the method, and repeated details are not repeated.
Referring to fig. 10, fig. 10 is a flowchart of a pixel point pair selection method according to an embodiment of the present invention. As shown in fig. 10, the method for selecting a pixel point pair according to an embodiment of the present invention may include the following steps:
step 1001, determining a target reconstruction pixel point corresponding to a second prediction block, where the second prediction block includes a chroma prediction block and a luma prediction block corresponding to the chroma prediction block.
Step 1002, selecting R groups of reconstructed pixel point pairs from the target reconstructed pixel points, where the R groups of reconstructed pixel point pairs include R reconstructed pixel points corresponding to the chroma prediction block and R reconstructed pixel points corresponding to the luma prediction block, where R is a positive integer;
wherein the target reconstruction pixel point comprises: reconstruction pixel points right above and at the upper right of the second prediction block; or, reconstructed pixel points at the right left and the lower left of the second prediction block.
In specific implementation, the method comprises the following two implementation modes:
in the first implementation manner, the target reconstructed pixel points include reconstructed pixel points right above and at the upper right of the second prediction block, that is, the target reconstructed pixel points include reconstructed pixel points right above and at the upper right of the chroma prediction block and reconstructed pixel points right above and at the upper right of the luma prediction block.
It can be seen that, in the first implementation manner, the R reconstructed pixel points corresponding to the chroma prediction block come from the reconstructed pixel points right above and at the upper right of the chroma prediction block, and the R reconstructed pixel points corresponding to the luminance prediction block come from the reconstructed pixel points right above and at the upper right of the luminance prediction block. Thus, the present embodiment provides a new way of selecting pairs of pixel points, compared to selecting pairs of pixel points only from directly above the chroma prediction block and the luma prediction block in the prior art.
In a second implementation manner, the target reconstructed pixel point includes reconstructed pixel points on the right left and the left lower of the second prediction block, that is, the target reconstructed pixel point includes reconstructed pixel points on the right left and the left lower of the chroma prediction block and reconstructed pixel points on the right left and the left lower of the luma prediction block.
It can be seen that, in the second implementation manner, R reconstructed pixel points corresponding to the chroma prediction block are from reconstructed pixel points on the right left and left lower sides of the chroma prediction block, and R reconstructed pixel points corresponding to the luminance prediction block are from reconstructed pixel points on the right left and left lower sides of the luminance prediction block. Thus, the present embodiment provides a new way of selecting pairs of pixel points, compared to selecting pairs of pixel points only from the positive left of the chroma prediction block and the luma prediction block as in the prior art.
In this embodiment, R reconstructed pixel points corresponding to the chroma prediction block may be any R reconstructed pixel points in reconstructed pixel points on the right left and the left lower of the chroma prediction block; the R reconstructed pixel points corresponding to the luminance prediction block may be any R reconstructed pixel points among reconstructed pixel points on the right left and the left lower of the luminance prediction block.
In case R is 4, optionally:
the R reconstruction pixel points corresponding to the chroma prediction block are as follows: among the reconstructed pixel points right above and at the upper right of the chroma prediction block, the reconstructed pixel points at positions 0/4, 1/4, 2/4 and 3/4;
the R reconstruction pixel points corresponding to the brightness prediction block are as follows: among the reconstructed pixel points right above and at the upper right of the luminance prediction block, the reconstructed pixel points at positions 0/4, 1/4, 2/4 and 3/4.
Illustratively, it is assumed that the right top and upper right of the chroma prediction block include 8 reconstructed pixels, and the right top and upper right of the luma prediction block include 16 reconstructed pixels.
Then, in this example, the R reconstructed pixel points corresponding to the chroma prediction block are: the 1st, 3rd, 5th and 7th reconstructed pixel points among the reconstructed pixel points right above and at the upper right of the chroma prediction block.
The R reconstruction pixel points corresponding to the brightness prediction block are: the 1st, 5th, 9th and 13th reconstructed pixel points among the reconstructed pixel points right above and at the upper right of the luminance prediction block.
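For this example, the selected indices can be verified with a short Python computation (0-based indices internally, reported 1-based to match the text; this is an illustration only):

    # Positions 0/4, 1/4, 2/4, 3/4 among the available reconstructed pixel points.
    for total in (8, 16):
        picks = [k * total // 4 for k in range(4)]
        print(total, [i + 1 for i in picks])   # 1-based: 8 -> [1, 3, 5, 7], 16 -> [1, 5, 9, 13]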
In this embodiment, after R groups of reconstructed pixel point pairs are obtained, α and β may be calculated in the manner specified by TSCPM, and the subsequent steps are unchanged. For R groups of reconstructed pixel point pairs obtained in different modes, the corresponding chroma value prediction modes are different.
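Once a group of parameters α and β is available, the chroma prediction itself amounts to a per-sample linear mapping. The Python sketch below illustrates this step only; the clipping range (an assumed 10-bit depth) and the omission of the luma down-sampling actually performed by the two-step mode are simplifications for illustration:

    def predict_chroma(rec_luma_block, alpha, beta, bit_depth=10):
        # Apply chroma = alpha * luma + beta sample by sample (illustrative only).
        hi = (1 << bit_depth) - 1
        return [[min(max(int(alpha * l + beta), 0), hi) for l in row] for row in rec_luma_block]

    print(predict_chroma([[100, 120], [140, 160]], alpha=0.5, beta=10))
    # -> [[60, 70], [80, 90]]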
As can be seen from the above, the present embodiment provides two new methods for selecting pixel point pairs, and therefore, the present embodiment adds two chrominance value prediction modes. For the R groups of pixel point pairs obtained in the first implementation manner, the corresponding chroma prediction mode may be denoted as TSCPM_T; for the R groups of pixel point pairs obtained in the second implementation manner, the corresponding chroma prediction mode may be denoted as TSCPM_L.
When determining the prediction mode of the chroma component, the chroma coding modes, without considering redundancy, can be as shown in Table three:
table three: chroma coding mode
Mode index    Mode name     Binarization (bin string)
0             DM            1
1             DC            00001
2             Horizontal    000001
3             Vertical      0000001
4             Bilinear      0000000
5             TSCPM         01
6             TSCPM_L       001
7             TSCPM_T       0001
As can be seen from table three, the bin string corresponding to the TSCPM _ L is increased by one bin as compared with the bin string corresponding to the TSCPM; the bin string for TSCPM _ T is increased by two bins compared to the bin string for TSCPM.
In the present embodiment, for TSCPM _ L and TSCPM _ T, the position whose binIdx is equal to 0 may adopt the same context as the position where TSCPM binIdx is equal to 0, and the position whose binIdx is equal to 1 may adopt the same context as the position where TSCPM binIdx is equal to 1.
The encoding method of the newly added bin is specifically described as follows:
optionally, after selecting R groups of reconstructed pixel point pairs from the target reconstructed pixel points, the method further includes:
coding a first bit bin by adopting a target coding mode, wherein the first bin is a bin with a first bin string increased relative to a second bin string, the first bin string is a bin string generated by carrying out binarization on a prediction mode corresponding to the target reconstruction pixel point, and the second bin string is a bin string generated by carrying out binarization on a chroma two-step prediction mode TSCPM; the target coding mode is as follows: a context encoding mode; or, Bypass the Bypass encoding mode.
Further, when the target coding scheme is a context coding scheme, the context model corresponding to the target coding scheme is: context model number 1; or, a newly established context model.
As can be seen, in this embodiment, for TSCPM_T:
in the first mode, the newly added bin (i.e., the bin with binIdx equal to 3) can be coded using context number 1;
in the second approach, the newly added bin (i.e., the bin with binIdx equal to 3) may use Bypass coding.
For TSCPM _ L:
in the first mode, the newly added bins (i.e., the bin with binIdx equal to 3 and the bin with binIdx equal to 4) can be coded using context No. 1;
in the second mode, newly added bins (i.e., a bin whose binIdx is equal to 3 and a bin whose binIdx is equal to 4) may be encoded using Bypass.
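Following the bin strings of Table three and the context-sharing rule described above, the coding choice for each bin can be sketched in Python as follows; representing the context models and bypass coding as plain strings is an assumption for illustration, and the actual entropy coding is performed by the arithmetic coding engine specified in AVS3:

    # Bin strings taken from Table three.
    BIN_STRINGS = {"TSCPM": "01", "TSCPM_L": "001", "TSCPM_T": "0001"}

    def coding_methods(mode, new_bin_method="context_1"):
        # For each bin of the mode's bin string, report how it would be coded:
        # bins shared with TSCPM reuse the TSCPM contexts; newly added bins use
        # either context model No. 1 or bypass coding.
        shared = len(BIN_STRINGS["TSCPM"])
        methods = []
        for bin_idx in range(len(BIN_STRINGS[mode])):
            if bin_idx < shared:
                methods.append(f"same context as TSCPM binIdx {bin_idx}")
            else:
                methods.append(new_bin_method)          # "context_1" or "bypass"
        return methods

    print(coding_methods("TSCPM_L"))                          # one newly added bin
    print(coding_methods("TSCPM_T", new_bin_method="bypass")) # two newly added bins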
In the scenario where the newly added bin is encoded in the first manner, the test performance is shown in table four:
table four: testing performance
From the above, the newly added pixel point pair selection modes make full use of the information of the available pixels around the second prediction block, so that the gain of the chrominance component is significantly improved, and the luminance component also obtains a slight gain.
Referring to fig. 11, fig. 11 is a second flowchart of a pixel point pair selection method according to an embodiment of the present invention. As shown in fig. 11, the method for selecting a pixel point pair according to an embodiment of the present invention may include the following steps:
step 1101, determining a third target pixel point corresponding to a second prediction block, where the second prediction block includes a chroma prediction block and a luma prediction block corresponding to the chroma prediction block.
Step 1102, selecting R groups of pixel point pairs from the third target pixel points, where the R groups of pixel point pairs include R pixel points corresponding to the chroma prediction block and R pixel points corresponding to the luma prediction block, and R is a positive integer.
Wherein the third target pixel point includes any one of:
in a case where both the upper side and the left side of the second prediction block include reconstructed pixel points, the third target pixel point includes: a reconstruction pixel point right above the second prediction block; or, a reconstructed pixel point right to the left of the second prediction block;
under the condition that a reconstruction pixel point is included above the second prediction block and a reconstruction pixel point is not included on the left side of the second prediction block, the third target pixel point comprises: an un-reconstructed pixel point to the left of the second prediction block;
under the condition that the reconstructed pixel point is not included above the second prediction block and the reconstructed pixel point is included on the left of the second prediction block, the third target pixel point comprises: and the non-reconstructed pixel point above the second prediction block.
As can be seen from the above, in the case where the reconstructed pixel points are included both above and to the left of the second prediction block, the R groups of pixel point pairs are from:
a reconstruction pixel point right above the second prediction block; or,
and the reconstructed pixel point at the right left side of the second prediction block.
It can be seen that, compared with the prior art in which pixel point pairs are selected from the positions directly above and directly to the left of the chroma prediction block and the luma prediction block, the present embodiment selects pixel point pairs only from the positions directly above or directly to the left of the chroma prediction block and the luma prediction block, and provides a new way of selecting pixel point pairs.
And under the condition that reconstructed pixels are included above the second prediction block and reconstructed pixels are not included on the left of the second prediction block, R groups of pixel point pairs come from: an un-reconstructed pixel point to the left of the second prediction block.
It can be seen that, compared with the prior art in which the pixel point pair is selected from the right above the chroma prediction block and the luma prediction block, the present embodiment selects the pixel point pair from the left of the chroma prediction block and the luma prediction block, providing a new way of selecting the pixel point pair.
And under the condition that the reconstructed pixel point is not included above the second prediction block and the reconstructed pixel point is included on the left side of the second prediction block, R groups of pixel point pairs come from: and the non-reconstructed pixel point above the second prediction block.
It can be seen that, compared with the prior art in which pixel point pairs are selected from the right left of the chroma prediction block and the luma prediction block, the present embodiment selects pixel point pairs from the upper side of the chroma prediction block and the luma prediction block, providing a new way of selecting pixel point pairs.
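The availability-based choice of where the R groups of pixel point pairs come from can be summarised by the following Python sketch; the descriptive return strings, the preference flag used when both sides are available, and the behaviour when neither side is available are assumptions for illustration:

    def third_target_source(above_reconstructed, left_reconstructed, prefer_above=True):
        # Decide which neighbouring pixel points the R groups of pixel point pairs are taken from.
        if above_reconstructed and left_reconstructed:
            # Both sides available: use only one of them.
            return "reconstructed pixel points right above" if prefer_above \
                   else "reconstructed pixel points at the right left"
        if above_reconstructed and not left_reconstructed:
            return "non-reconstructed pixel points on the left (filled by the padding rule)"
        if left_reconstructed and not above_reconstructed:
            return "non-reconstructed pixel points above (filled by the padding rule)"
        return "no neighbouring pixel points available"

    print(third_target_source(True, True))
    print(third_target_source(True, False))
    print(third_target_source(False, True))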
Optionally, after selecting R groups of pixel point pairs from the third target pixel point, the method further includes:
and coding a second bin by adopting a context coding mode, wherein the second bin is a bin with a third bin string increased relative to the second bin string, the third bin string is a bin string generated by carrying out binarization on the prediction mode corresponding to the third target pixel point, and the second bin string is a bin string generated by carrying out binarization on the chroma two-step prediction mode TSCPM.
Further, the context model corresponding to the context coding mode is: context model number 1; or, a newly established context model.
It can be seen that, in this embodiment, for the newly added bin, context coding No. 1 may be used.
Referring to fig. 12, fig. 12 is a block diagram of a parameter obtaining apparatus according to an embodiment of the present invention. As shown in fig. 12, the parameter acquisition apparatus 1200 may include:
a first determining module 1201, configured to obtain N pixel sets corresponding to a chroma prediction block and N pixel sets corresponding to a luma prediction block, where the chroma prediction block corresponds to the luma prediction block, and N is an integer greater than 1;
an obtaining module 1202, configured to obtain N sets of parameters according to N pixel sets corresponding to the chroma prediction block and N pixel sets corresponding to the luma prediction block, where the N sets of parameters are used to predict a chroma value of the chroma prediction block.
Optionally, the parameter acquiring apparatus 1200 further includes:
a first selecting module, configured to, after the obtaining module obtains N sets of parameters according to the N pixel point sets corresponding to the chroma prediction block and the N pixel point sets corresponding to the luma prediction block, select a target parameter set from the N sets of parameters, where the target parameter set is the set of parameters corresponding to the minimum coding cost among the N sets of parameters;
a prediction module configured to predict chroma values of the chroma prediction block using the set of target parameters.
Optionally, the upper side and the left side of a first prediction block comprise reconstructed pixel points, and the first prediction block is the chroma prediction block or the brightness prediction block;
the N pixel point sets corresponding to the first prediction block comprise at least two of the following pixel point sets:
a first pixel point set, wherein the first pixel point set comprises a reconstruction pixel point right above the first prediction block and a reconstruction pixel point right left of the first prediction block;
a second set of pixels comprising only reconstructed pixels directly above the first prediction block;
a third pixel point set, where the third pixel point set includes a reconstructed pixel point at the first upper left or the upper right of the first prediction block, and the first upper left is on the left of the right above of the first prediction block;
a fourth set of pixels comprising only reconstructed pixels to the right left of the first prediction block;
a fifth pixel point set, where the fifth pixel point set includes reconstructed pixel points at a second upper left side or a lower left side of the first prediction block, and the second upper left side is above a right left side of the first prediction block.
Optionally, a reconstructed pixel is included above the first prediction block, and a reconstructed pixel is not included on the left of the first prediction block, where the first prediction block is the chroma prediction block or the luma prediction block;
the N pixel point sets corresponding to the first prediction block comprise at least two of the following pixel point sets:
a second set of pixels comprising only reconstructed pixels directly above the first prediction block;
a third pixel point set, where the third pixel point set includes a reconstructed pixel point at the first upper left or the upper right of the first prediction block, and the first upper left is on the left of the right above of the first prediction block;
a sixth set of pixels comprising non-reconstructed pixels to the left of the first prediction block.
Optionally, a reconstructed pixel is not included above the first prediction block, a reconstructed pixel is included on the left of the first prediction block, and the first prediction block is the chroma prediction block or the luma prediction block;
the N pixel point sets corresponding to the first prediction block comprise at least two of the following pixel point sets:
a fourth set of pixels comprising only reconstructed pixels to the right left of the first prediction block;
a fifth pixel point set, where the fifth pixel point set includes a reconstructed pixel point at a second upper left side or a lower left side of the first prediction block, and the second upper left side is above a right left side of the first prediction block;
a seventh set of pixels comprising non-reconstructed pixels above the first prediction block.
Optionally, the third pixel point set includes any one of the following items:
a reconstruction pixel point at the first upper left of the first prediction block;
a reconstruction pixel point at the upper right of the first prediction block;
reconstructing pixel points at the first upper left and upper right of the first prediction block;
reconstruction pixel points at the first upper left and right above of the first prediction block;
reconstruction pixel points right above and at the upper right of the first prediction block;
and reconstruction pixel points at the first upper left, right above and the upper right of the first prediction block.
Optionally, the fifth pixel point set includes any one of the following items:
a second upper left reconstructed pixel of the first prediction block;
a reconstructed pixel point at the lower left of the first prediction block;
reconstructing pixel points at the second upper left and lower left of the first prediction block;
reconstruction pixel points at the second upper left and the right left of the first prediction block;
reconstructing pixel points on the right left and the left lower of the first prediction block;
and the reconstructed pixel points at the second upper left, right left and lower left of the first prediction block.
Optionally, the N pixel point sets corresponding to the first prediction block include first pixel points above the first prediction block, where the first prediction block is the chroma prediction block or the luma prediction block;
the first pixel point is as follows: and pixel points in the I-line pixel points closest to the first prediction block in the P-line pixel points above the first prediction block, wherein P is an integer larger than I, and I is a positive integer smaller than or equal to 4.
Optionally, the N pixel point sets corresponding to the first prediction block include a second pixel point on the left of the first prediction block, where the first prediction block is the chroma prediction block or the luma prediction block;
the second pixel point is as follows: pixel points in the J columns of pixel points closest to the first prediction block among the Q columns of pixel points on the left side of the first prediction block, wherein Q is an integer greater than J, and J is a positive integer less than or equal to 4.
Optionally, the first lengths corresponding to all the pixel points of the third pixel point set satisfy any one of the following:
the first length is no more than twice a width of the first prediction block;
the first length does not exceed a sum of a width and a height of the first prediction block.
Optionally, the first length L1 satisfies: K1 ≤ L1 < K2;
k1 is the total length of all pixels included in a first target pixel, the first target pixel is R pixels selected from the third pixel set, the first target pixel is used for obtaining one set of parameters in the N sets of parameters, and R is a positive integer; k2 is the total length of all reconstructed pixel points included above the first prediction block.
Optionally, the pixel points included in the first target pixel point are all reconstruction pixel points.
Optionally, the first length L1 satisfies: K3 < L1 ≤ K4;
k3 is the total length of all reconstructed pixels included above the first prediction block, and K4 is twice the width of the first prediction block or the sum of the width and height of the first prediction block.
Optionally, the first target pixel point includes an unreconstructed pixel point;
the first target pixel points are R pixel points selected from the third pixel point set, the first target pixel points are used for obtaining one group of parameters in the N groups of parameters, and R is a positive integer.
Optionally, the second lengths corresponding to all the pixel points of the fifth pixel point set satisfy any one of the following:
the second length is no more than twice a height of the first prediction block;
the second length does not exceed a sum of a width and a height of the first prediction block.
Optionally, the second length L2 satisfies: K5 ≤ L2 < K6;
k5 is the total length of all pixels included in a second target pixel, the second target pixel is R pixels selected from the fifth pixel set, the second target pixel is used for obtaining one set of parameters in the N sets of parameters, and R is a positive integer; k6 is the total length of all reconstructed pixel points included to the left of the first prediction block.
Optionally, the pixel points included by the second target pixel point are all reconstruction pixel points.
Optionally, the second length L2 satisfies: K7 < L2 ≤ K8;
wherein K7 is a total length of all reconstructed pixels included to the left of the first prediction block, and K8 is twice a height of the first prediction block or a sum of a width and a height of the first prediction block.
Optionally, the second target pixel point includes an unreconstructed pixel point;
the second target pixel points are R pixel points selected from the fifth pixel point set, the second target pixel points are used for obtaining one group of parameters in the N groups of parameters, and R is a positive integer.
The parameter obtaining apparatus 1200 provided in the embodiment of the present invention may execute the parameter obtaining method embodiment, and the implementation principle and the technical effect are similar, which are not described herein again.
In the embodiment of the present invention, N sets of parameters may be obtained by N pixel point sets corresponding to the chroma prediction block and N pixel point sets corresponding to the luma prediction block, where N is an integer greater than 1, and the N sets of parameters are used to predict the chroma value of the chroma prediction block. Compared with the prior art that only one group of parameters can be acquired, the method and the device can increase the number of groups for acquiring the parameters, thereby improving the accuracy of the acquired parameters.
Referring to fig. 13, fig. 13 is a first structural diagram of a pixel point pair selection device according to an embodiment of the present invention. As shown in fig. 13, the pixel point pair selection apparatus 1300 includes:
a second determining module 1301, configured to determine a target reconstructed pixel corresponding to a second prediction block, where the second prediction block includes a chroma prediction block and a luma prediction block corresponding to the chroma prediction block;
a second selecting module 1302, configured to select R groups of reconstructed pixel point pairs from the target reconstructed pixel points, where the R groups of reconstructed pixel point pairs include R reconstructed pixel points corresponding to the chroma prediction block and R reconstructed pixel points corresponding to the luma prediction block, and R is a positive integer;
wherein the target reconstruction pixel point comprises: reconstruction pixel points right above and at the upper right of the second prediction block; or, reconstructed pixel points at the right left and the lower left of the second prediction block.
Optionally, R is 4;
the R reconstruction pixel points corresponding to the chroma prediction block are as follows: among the reconstructed pixel points right above and at the upper right of the chroma prediction block, the reconstructed pixel points at positions 0/4, 1/4, 2/4 and 3/4;
the R reconstruction pixel points corresponding to the brightness prediction block are as follows: among the reconstructed pixel points right above and at the upper right of the luminance prediction block, the reconstructed pixel points at positions 0/4, 1/4, 2/4 and 3/4.
Optionally, the pixel point pair selecting apparatus 1300 further includes:
a first coding module, configured to code a first bit bin in a target coding manner, where the first bin is a bin in which a first bin string is increased relative to a second bin string, the first bin string is a bin string generated by binarizing a prediction mode corresponding to the target reconstruction pixel point, and the second bin string is a bin string generated by binarizing a chroma two-step prediction mode TSCPM;
the target coding mode is as follows: a context encoding mode; or, Bypass the Bypass encoding mode.
Optionally, when the target encoding manner is a context encoding manner, the context model corresponding to the target encoding manner is: context model number 1; or, a newly established context model.
The pixel pair selection apparatus 1300 according to the embodiment of the present invention may implement the pixel pair selection method embodiment corresponding to fig. 10, which has similar implementation principles and technical effects, and is not described herein again.
In the embodiment of the present invention, referring to fig. 14, fig. 14 is a second structural diagram of a pixel point pair selection device provided in the embodiment of the present invention. As shown in fig. 14, the pixel point pair selection apparatus 1400 includes:
a third determining module 1401, configured to determine a third target pixel corresponding to a second prediction block, where the second prediction block includes a chroma prediction block and a luma prediction block corresponding to the chroma prediction block;
a third selecting module 1402, configured to select R groups of pixel point pairs from the third target pixel points, where the R groups of pixel point pairs include R pixels corresponding to the chroma prediction block and R pixels corresponding to the luma prediction block, and R is a positive integer;
wherein the third target pixel point includes any one of:
in a case where both the upper side and the left side of the second prediction block include reconstructed pixel points, the third target pixel point includes: a reconstruction pixel point right above the second prediction block; or, a reconstructed pixel point right to the left of the second prediction block;
under the condition that a reconstruction pixel point is included above the second prediction block and a reconstruction pixel point is not included on the left side of the second prediction block, the third target pixel point comprises: an un-reconstructed pixel point to the left of the second prediction block;
under the condition that the reconstructed pixel point is not included above the second prediction block and the reconstructed pixel point is included on the left of the second prediction block, the third target pixel point comprises: and the non-reconstructed pixel point above the second prediction block.
Optionally, the pixel point pair selecting device 1400 further includes:
and the second coding module is used for coding a second bin by adopting a context coding mode, wherein the second bin is a bin with a third bin string increased relative to the second bin string, the third bin string is a bin string generated by binarizing the prediction mode corresponding to the third target pixel point, and the second bin string is a bin string generated by binarizing the chroma two-step prediction mode TSCPM.
Optionally, the context model corresponding to the context coding mode is: context model number 1; or, a newly established context model.
The pixel point pair selection device 1400 provided by the embodiment of the present invention can execute the pixel point pair selection method embodiment corresponding to fig. 11, which has similar implementation principles and technical effects, and is not described herein again.
Referring to fig. 15, fig. 15 is a second structural diagram of a parameter obtaining apparatus according to an embodiment of the present invention. As shown in fig. 15, the parameter acquisition apparatus 1500 may include:
the processor 1501, which is used to read the program in the memory 1502, executes the following processes:
determining N pixel point sets corresponding to a chroma prediction block and N pixel point sets corresponding to a brightness prediction block, wherein the chroma prediction block corresponds to the brightness prediction block, and N is an integer greater than 1;
and acquiring N groups of parameters according to the N pixel point sets corresponding to the chroma prediction block and the N pixel point sets corresponding to the brightness prediction block, wherein the N groups of parameters are used for predicting the chroma value of the chroma prediction block.
A transceiver 1503 for receiving and transmitting data under the control of the processor 1501.
In fig. 15, among other things, the bus architecture may include any number of interconnected buses and bridges, with one or more processors, represented by processor 1501, being linked together with various circuits of memory, represented by memory 1502. The bus architecture may also link together various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. The bus interface provides an interface. The transceiver 1503 may be a plurality of elements, including a transmitter and a receiver, providing a means for communicating with various other apparatus over a transmission medium. The processor 1501 is responsible for managing the bus architecture and general processing, and the memory 1502 may store data used by the processor 1501 in performing operations.
Optionally, the processor 1501 is further configured to read the computer program, and execute the following steps:
selecting a target parameter group from the N groups of parameters, wherein the target parameter group is a group of parameters corresponding to the minimum coding cost in the N groups of parameters;
predicting chroma values of the chroma predicted block using the set of target parameters.
Optionally, the upper side and the left side of a first prediction block comprise reconstructed pixel points, and the first prediction block is the chroma prediction block or the brightness prediction block;
the N pixel point sets corresponding to the first prediction block comprise at least two of the following pixel point sets:
a first pixel point set, wherein the first pixel point set comprises a reconstruction pixel point right above the first prediction block and a reconstruction pixel point right left of the first prediction block;
a second set of pixels comprising only reconstructed pixels directly above the first prediction block;
a third pixel point set, where the third pixel point set includes a reconstructed pixel point at the first upper left or the upper right of the first prediction block, and the first upper left is on the left of the right above of the first prediction block;
a fourth set of pixels comprising only reconstructed pixels to the right left of the first prediction block;
a fifth pixel point set, where the fifth pixel point set includes reconstructed pixel points at a second upper left side or a lower left side of the first prediction block, and the second upper left side is above a right left side of the first prediction block.
Optionally, a reconstructed pixel is included above the first prediction block, and a reconstructed pixel is not included on the left of the first prediction block, where the first prediction block is the chroma prediction block or the luma prediction block;
the N pixel point sets corresponding to the first prediction block comprise at least two of the following pixel point sets:
a second set of pixels comprising only reconstructed pixels directly above the first prediction block;
a third pixel point set, where the third pixel point set includes a reconstructed pixel point at the first upper left or the upper right of the first prediction block, and the first upper left is on the left of the right above of the first prediction block;
a sixth set of pixels comprising non-reconstructed pixels to the left of the first prediction block.
Optionally, a reconstructed pixel is not included above the first prediction block, a reconstructed pixel is included on the left of the first prediction block, and the first prediction block is the chroma prediction block or the luma prediction block;
the N pixel point sets corresponding to the first prediction block comprise at least two of the following pixel point sets:
a fourth set of pixels comprising only reconstructed pixels to the right left of the first prediction block;
a fifth pixel point set, where the fifth pixel point set includes a reconstructed pixel point at a second upper left side or a lower left side of the first prediction block, and the second upper left side is above a right left side of the first prediction block;
a seventh set of pixels comprising non-reconstructed pixels above the first prediction block.
Optionally, the third pixel point set includes any one of the following items:
a reconstruction pixel point at the first upper left of the first prediction block;
a reconstruction pixel point at the upper right of the first prediction block;
reconstructing pixel points at the first upper left and upper right of the first prediction block;
reconstruction pixel points at the first upper left and right above of the first prediction block;
reconstruction pixel points right above and at the upper right of the first prediction block;
and reconstruction pixel points at the first upper left, right above and the upper right of the first prediction block.
Optionally, the fifth pixel point set includes any one of the following items:
a second upper left reconstructed pixel of the first prediction block;
a reconstructed pixel point at the lower left of the first prediction block;
reconstructing pixel points at the second upper left and lower left of the first prediction block;
reconstruction pixel points at the second upper left and the right left of the first prediction block;
reconstructing pixel points on the right left and the left lower of the first prediction block;
and the reconstructed pixel points at the second upper left, right left and lower left of the first prediction block.
Optionally, the N pixel point sets corresponding to the first prediction block include first pixel points above the first prediction block, where the first prediction block is the chroma prediction block or the luma prediction block;
the first pixel point is as follows: and pixel points in the I-line pixel points closest to the first prediction block in the P-line pixel points above the first prediction block, wherein P is an integer larger than I, and I is a positive integer smaller than or equal to 4.
Optionally, the first prediction block is the chroma prediction block or the luma prediction block, where the N pixel point sets corresponding to the first prediction block include a second pixel point on the left of the first prediction block;
the second pixel point is as follows: pixel points in the J columns of pixel points closest to the first prediction block among the Q columns of pixel points on the left side of the first prediction block, wherein Q is an integer greater than J, and J is a positive integer less than or equal to 4.
Optionally, the first lengths corresponding to all the pixel points of the third pixel point set satisfy any one of the following:
the first length is no more than twice a width of the first prediction block;
the first length does not exceed a sum of a width and a height of the first prediction block.
Optionally, the first length L1 satisfies: K1 ≤ L1 < K2;
k1 is the total length of all pixels included in a first target pixel, the first target pixel is R pixels selected from the third pixel set, the first target pixel is used for obtaining one set of parameters in the N sets of parameters, and R is a positive integer; k2 is the total length of all reconstructed pixel points included above the first prediction block.
Optionally, the pixel points included in the first target pixel point are all reconstruction pixel points.
Optionally, the first length L1 satisfies: L1 is greater than K3 and less than or equal to K4;
wherein K3 is the total length of all reconstructed pixel points above the first prediction block, and K4 is twice the width of the first prediction block or the sum of the width and the height of the first prediction block.
Optionally, the first target pixel point includes an unreconstructed pixel point;
the first target pixel points are R pixel points selected from the third pixel point set, the first target pixel points are used for obtaining one group of parameters in the N groups of parameters, and R is a positive integer.
Optionally, the second lengths corresponding to all the pixel points of the fifth pixel point set satisfy any one of the following:
the second length is no more than twice a height of the first prediction block;
the second length does not exceed a sum of a width and a height of the first prediction block.
Optionally, the second length L2 satisfies: L2 is greater than or equal to K5 and less than or equal to K6;
wherein K5 is the total length of all pixel points included in second target pixel points, the second target pixel points are R pixel points selected from the fifth pixel point set and are used for obtaining one group of parameters in the N groups of parameters, and R is a positive integer; K6 is the total length of all reconstructed pixel points on the left of the first prediction block.
Optionally, the pixel points included by the second target pixel point are all reconstruction pixel points.
Optionally, the second length L2 satisfies: L2 is greater than K7 and less than or equal to K8;
wherein K7 is the total length of all reconstructed pixel points on the left of the first prediction block, and K8 is twice the height of the first prediction block or the sum of the width and the height of the first prediction block.
Optionally, the second target pixel point includes an unreconstructed pixel point;
the second target pixel points are R pixel points selected from the fifth pixel point set, the second target pixel points are used for obtaining one group of parameters in the N groups of parameters, and R is a positive integer.
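The length constraints above can be read as a pair of simple checks. The sketch below is not from the patent and its variable names are illustrative; it mirrors the first-length case: an upper bound of either twice the block width or width plus height, and the observation that once L1 exceeds the total length of reconstructed samples above the block, the selected target pixel points necessarily include un-reconstructed ones. The second length L2 would be handled symmetrically using the block height and the left neighbour.

```python
def first_length_within_bound(l1, block_w, block_h, use_double_width=True):
    # Upper bound on the first length: 2 * width, or width + height.
    bound = 2 * block_w if use_double_width else block_w + block_h
    return l1 <= bound

def needs_unreconstructed_above(l1, reconstructed_len_above):
    # If L1 exceeds the total length of reconstructed samples above the block,
    # the first target pixel points must include un-reconstructed samples.
    return l1 > reconstructed_len_above

# Example for an 8x4 block with 10 reconstructed samples available above.
print(first_length_within_bound(12, 8, 4), needs_unreconstructed_above(12, 10))  # True True
```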
The parameter obtaining device provided in the embodiment of the present invention may implement the above-mentioned parameter obtaining method embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
Referring to fig. 16, fig. 16 is a third structural diagram of a pixel pair selection apparatus according to an embodiment of the present invention. As shown in fig. 16, the pixel pair selection device 1600 may include:
a processor 1601 for reading the program in the memory 1602, and executing the following processes:
determining a target reconstruction pixel point corresponding to a second prediction block, wherein the second prediction block comprises a chroma prediction block and a brightness prediction block corresponding to the chroma prediction block;
selecting R groups of reconstructed pixel point pairs from the target reconstructed pixel points, wherein the R groups of reconstructed pixel point pairs comprise R reconstructed pixel points corresponding to the chroma prediction block and R reconstructed pixel points corresponding to the luma prediction block, and R is a positive integer;
wherein the target reconstruction pixel points comprise: reconstructed pixel points directly above and at the upper right of the second prediction block; or, reconstructed pixel points directly to the left of and at the lower left of the second prediction block.
A transceiver 1603, configured to receive and transmit data under the control of the processor 1601.
In fig. 16, the bus architecture may include any number of interconnected buses and bridges, with one or more processors represented by the processor 1601 and various memory circuits represented by the memory 1602 linked together. The bus architecture may also link together various other circuits such as peripherals, voltage regulators and power management circuits, which are well known in the art and therefore are not described further herein. The bus interface provides an interface. The transceiver 1603 may be a plurality of elements, i.e., a transmitter and a receiver, providing a unit for communicating with various other apparatuses over a transmission medium. The processor 1601 is responsible for managing the bus architecture and general processing, and the memory 1602 may store data used by the processor 1601 in performing operations.
Optionally, R is 4;
the R reconstructed pixel points corresponding to the chroma prediction block are: among the reconstructed pixel points directly above and to the upper right of the chroma prediction block, the reconstructed pixel points at positions 0/4, 1/4, 2/4 and 3/4;
the R reconstruction pixel points corresponding to the brightness prediction block are as follows: among the reconstructed pixels directly above and to the upper right of the luminance prediction block, the reconstructed pixels at positions 0/4, 1/4, 2/4, and 3/4.
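A minimal sketch (not part of the patent) of the R = 4 selection just described: the reconstructed row directly above plus its upper-right extension is assumed to be available as one flat list per component, and the fractional positions 0/4, 1/4, 2/4 and 3/4 are mapped to indices into that list; all names are illustrative.

```python
def select_top_pixel_pairs(luma_row, chroma_row, r=4):
    """Pick r reconstructed (luma, chroma) pairs at positions 0/r, 1/r, ..., (r-1)/r
    along the row directly above and to the upper right of the prediction block.
    The two rows are assumed to be aligned and of equal length (e.g. already
    downsampled to the chroma grid)."""
    n = min(len(luma_row), len(chroma_row))
    if n == 0:
        return []
    pairs = []
    for k in range(r):
        idx = (k * n) // r          # index corresponding to position k/r
        pairs.append((luma_row[idx], chroma_row[idx]))
    return pairs

# An 8-sample row yields the samples at indices 0, 2, 4 and 6.
print(select_top_pixel_pairs(list(range(8)), list(range(100, 108))))
```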
Optionally, the processor 1601 is further configured to read the computer program, and execute the following steps:
coding a first bin by adopting a target coding mode, wherein the first bin is the bin that a first bin string adds relative to a second bin string, the first bin string is a bin string generated by binarizing the prediction mode corresponding to the target reconstruction pixel points, and the second bin string is a bin string generated by binarizing the chroma two-step prediction mode TSCPM;
the target coding mode is: a context coding mode; or, a Bypass coding mode.
Optionally, when the target encoding manner is a context encoding manner, the context model corresponding to the target encoding manner is: context model number 1; or, a newly established context model.
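As a rough illustration of the signalling just described (a single extra bin appended to the TSCPM bin string, coded either with a context model such as context model 1 or in bypass), here is a toy sketch. The `ToyBinEncoder` class and every method name are inventions of this sketch, standing in for whatever arithmetic-coding primitives a real encoder exposes; how the shared TSCPM bins themselves are coded is not specified here.

```python
class ToyBinEncoder:
    """Records which bins are written and how; a stand-in for an arithmetic coder."""
    def __init__(self):
        self.log = []
    def encode_bin_context(self, value, ctx_id):
        self.log.append(("ctx", ctx_id, value))
    def encode_bin_bypass(self, value):
        self.log.append(("bypass", value))

def signal_prediction_mode(enc, tscpm_bins, enhanced_mode, use_context, ctx_id=1):
    # Write the bins shared with plain TSCPM, then the one additional bin that
    # distinguishes the enhanced mode, using a context model or bypass coding.
    for b in tscpm_bins:
        enc.encode_bin_bypass(b)                    # coding of the shared bins is illustrative only
    extra_bin = 1 if enhanced_mode else 0
    if use_context:
        enc.encode_bin_context(extra_bin, ctx_id)   # e.g. context model 1, or a newly created model
    else:
        enc.encode_bin_bypass(extra_bin)

enc = ToyBinEncoder()
signal_prediction_mode(enc, tscpm_bins=[1, 0], enhanced_mode=True, use_context=True)
print(enc.log)   # [('bypass', 1), ('bypass', 0), ('ctx', 1, 1)]
```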
The pixel pair selection device provided in the embodiment of the present invention may implement the above-mentioned pixel pair selection method embodiment corresponding to fig. 10, and the implementation principle and technical effect are similar, and this embodiment is not described herein again.
Referring to fig. 17, fig. 17 is a fourth structural diagram of a pixel pair selection apparatus according to an embodiment of the present invention. As shown in fig. 17, the pixel pair selection apparatus 1700 may include:
the processor 1701, which reads the program in the memory 1702, executes the following processes:
determining a third target pixel point corresponding to a second prediction block, wherein the second prediction block comprises a chroma prediction block and a brightness prediction block corresponding to the chroma prediction block;
selecting R groups of pixel point pairs from the third target pixel points, wherein the R groups of pixel point pairs comprise R pixel points corresponding to the chroma prediction block and R pixel points corresponding to the brightness prediction block, and R is a positive integer;
wherein the third target pixel point includes any one of:
in a case where both the upper side and the left side of the second prediction block include reconstructed pixel points, the third target pixel point includes: a reconstruction pixel point right above the second prediction block; or, a reconstructed pixel point right to the left of the second prediction block;
under the condition that a reconstruction pixel point is included above the second prediction block and a reconstruction pixel point is not included on the left side of the second prediction block, the third target pixel point comprises: an un-reconstructed pixel point to the left of the second prediction block;
under the condition that the reconstructed pixel point is not included above the second prediction block and the reconstructed pixel point is included on the left of the second prediction block, the third target pixel point comprises: and the non-reconstructed pixel point above the second prediction block.
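A minimal sketch (names and return labels are purely illustrative, not from the patent) of the three availability cases above, i.e. how the source of the third target pixel points could be chosen from the reconstruction status of the block's neighbours.

```python
def third_target_source(above_reconstructed, left_reconstructed):
    """Decide where the third target pixel points are taken from, following the
    three cases listed above for the second prediction block."""
    if above_reconstructed and left_reconstructed:
        # Both neighbours reconstructed: use the row directly above, or the
        # column directly to the left.
        return "reconstructed_above_or_left"
    if above_reconstructed and not left_reconstructed:
        # Only the top neighbour reconstructed: this option takes the
        # un-reconstructed pixel points on the left.
        return "unreconstructed_left"
    if left_reconstructed:
        # Only the left neighbour reconstructed: take the un-reconstructed
        # pixel points above.
        return "unreconstructed_above"
    return None   # neither neighbour available: not covered by this option

print(third_target_source(True, False))   # unreconstructed_left
```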
A transceiver 1703, configured to receive and transmit data under the control of the processor 1701.
In fig. 17, the bus architecture may include any number of interconnected buses and bridges, with one or more processors represented by the processor 1701 and various memory circuits represented by the memory 1702 linked together. The bus architecture may also link together various other circuits such as peripherals, voltage regulators and power management circuits, which are well known in the art and therefore are not described further herein. The bus interface provides an interface. The transceiver 1703 may be a plurality of elements, i.e., a transmitter and a receiver, providing a unit for communicating with various other apparatuses over a transmission medium. The processor 1701 is responsible for managing the bus architecture and general processing, and the memory 1702 may store data used by the processor 1701 in performing operations.
Optionally, the processor 1701 is further configured to read the computer program and execute the following steps:
and coding a second bin by adopting a context coding mode, wherein the second bin is the bin that a third bin string adds relative to the second bin string, the third bin string is a bin string generated by binarizing the prediction mode corresponding to the third target pixel points, and the second bin string is a bin string generated by binarizing the chroma two-step prediction mode TSCPM.
Optionally, the context model corresponding to the context coding mode is: context model number 1; or, a newly established context model.
The pixel pair selection device provided in the embodiment of the present invention may implement the above-mentioned pixel pair selection method embodiment corresponding to fig. 11, and the implementation principle and technical effect are similar, and details of this embodiment are not described herein again.
Furthermore, a computer-readable storage medium of an embodiment of the present invention stores a computer program.
In one case, the computer program is executable by a processor to perform the steps of:
acquiring N pixel point sets corresponding to a chroma prediction block and N pixel point sets corresponding to a brightness prediction block, wherein the chroma prediction block corresponds to the brightness prediction block, and N is an integer greater than 1;
and obtaining N groups of parameters according to the N pixel point sets corresponding to the chroma prediction block and the N pixel point sets corresponding to the brightness prediction block, wherein the N groups of parameters are used for predicting the chroma value of the chroma prediction block.
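The text above does not spell out how a group of parameters is computed from one pair of pixel point sets; a common choice for this kind of cross-component model is a linear mapping C = a * Y + b fitted over the co-located neighbour samples. The sketch below uses an ordinary least-squares fit purely as an illustration of one way that step could be realised; it is not asserted to be the derivation used by the invention, and all names are illustrative.

```python
def derive_linear_params(luma_samples, chroma_samples):
    """Fit chroma = a * luma + b over one pixel point set pair
    (least-squares fit, chosen here only for illustration)."""
    n = len(luma_samples)
    assert n == len(chroma_samples) and n > 0
    mean_y = sum(luma_samples) / n
    mean_c = sum(chroma_samples) / n
    var_y = sum((y - mean_y) ** 2 for y in luma_samples)
    if var_y == 0:
        return 0.0, mean_c                      # flat luma neighbourhood: predict the mean chroma
    cov = sum((y - mean_y) * (c - mean_c)
              for y, c in zip(luma_samples, chroma_samples))
    a = cov / var_y
    b = mean_c - a * mean_y
    return a, b

# One (a, b) pair per pixel point set pair yields the N groups of parameters.
print(derive_linear_params([10, 20, 30, 40], [5, 9, 13, 17]))   # (0.4, 1.0)
```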
Optionally, after obtaining N sets of parameters according to the N pixel point sets corresponding to the chroma prediction block and the N pixel point sets corresponding to the luma prediction block, the computer program may be further executed by the processor to implement the following steps:
selecting a target parameter group from the N groups of parameters, wherein the target parameter group is a group of parameters corresponding to the minimum coding cost in the N groups of parameters;
predicting chroma values of the chroma predicted block using the set of target parameters.
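A small sketch of selecting the target parameter group with the smallest coding cost and using it to predict the chroma block from the co-located luma block. It is not from the patent: the sum-of-absolute-differences cost stands in for whatever rate-distortion cost the encoder actually uses, and the function names are assumptions of this sketch.

```python
def predict_chroma(luma_block, params):
    """Apply one parameter group (a, b) to every luma sample: C = a * Y + b."""
    a, b = params
    return [[a * y + b for y in row] for row in luma_block]

def sad_cost(pred_block, src_block):
    """Sum of absolute differences against the source chroma (illustrative cost)."""
    return sum(abs(p - s) for pr, sr in zip(pred_block, src_block) for p, s in zip(pr, sr))

def select_target_params(param_groups, luma_block, src_chroma_block):
    """Keep the parameter group whose prediction has the smallest cost."""
    return min(param_groups,
               key=lambda p: sad_cost(predict_chroma(luma_block, p), src_chroma_block))

luma = [[10, 20], [30, 40]]
chroma_src = [[5, 9], [13, 17]]
groups = [(0.4, 1.0), (0.5, 0.0)]
best = select_target_params(groups, luma, chroma_src)
print(best, predict_chroma(luma, best))   # (0.4, 1.0) and the corresponding prediction
```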
Optionally, the upper side and the left side of a first prediction block comprise reconstructed pixel points, and the first prediction block is the chroma prediction block or the brightness prediction block;
the N pixel point sets corresponding to the first prediction block comprise at least two of the following pixel point sets:
a first pixel point set, wherein the first pixel point set comprises a reconstructed pixel point directly above the first prediction block and a reconstructed pixel point directly to the left of the first prediction block;
a second set of pixels comprising only reconstructed pixels directly above the first prediction block;
a third pixel point set, where the third pixel point set includes a reconstructed pixel point at the first upper left or upper right of the first prediction block, and the first upper left is the region to the left of the area directly above the first prediction block;
a fourth set of pixels comprising only reconstructed pixels directly to the left of the first prediction block;
a fifth pixel point set, where the fifth pixel point set includes reconstructed pixel points at the second upper left or lower left of the first prediction block, and the second upper left is the region above the area directly to the left of the first prediction block.
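To make the five candidate sets above concrete, here is a sketch that enumerates them as coordinate lists for a block whose top-left sample is (x0, y0). How far the upper-right, lower-left and "upper left" regions extend is not fixed by the text above, so the extents used here (one block width or height each) are assumptions of this sketch only.

```python
def candidate_pixel_sets(x0, y0, w, h):
    """Coordinate lists for the first to fifth candidate pixel point sets
    (case where both the row above and the column to the left are reconstructed)."""
    above       = [(x0 + i, y0 - 1) for i in range(w)]        # directly above
    left        = [(x0 - 1, y0 + j) for j in range(h)]        # directly to the left
    upper_right = [(x0 + w + i, y0 - 1) for i in range(w)]    # upper-right extension
    lower_left  = [(x0 - 1, y0 + h + j) for j in range(h)]    # lower-left extension
    first_ul    = [(x0 - 1 - i, y0 - 1) for i in range(w)]    # left of the row directly above
    second_ul   = [(x0 - 1, y0 - 1 - j) for j in range(h)]    # above the column directly left
    return {
        "first":  above + left,
        "second": above,
        "third":  first_ul + upper_right,
        "fourth": left,
        "fifth":  second_ul + lower_left,
    }

sets = candidate_pixel_sets(8, 8, 4, 4)
print(len(sets["first"]), len(sets["third"]))   # 8 8
```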
Optionally, a reconstructed pixel is included above the first prediction block, and a reconstructed pixel is not included on the left of the first prediction block, where the first prediction block is the chroma prediction block or the luma prediction block;
the N pixel point sets corresponding to the first prediction block comprise at least two of the following pixel point sets:
a second set of pixels comprising only reconstructed pixels directly above the first prediction block;
a third pixel point set, where the third pixel point set includes a reconstructed pixel point at the first upper left or upper right of the first prediction block, and the first upper left is the region to the left of the area directly above the first prediction block;
a sixth set of pixels comprising non-reconstructed pixels to the left of the first prediction block.
Optionally, a reconstructed pixel is not included above the first prediction block, a reconstructed pixel is included on the left of the first prediction block, and the first prediction block is the chroma prediction block or the luma prediction block;
the N pixel point sets corresponding to the first prediction block comprise at least two of the following pixel point sets:
a fourth set of pixels comprising only reconstructed pixels directly to the left of the first prediction block;
a fifth pixel point set, where the fifth pixel point set includes a reconstructed pixel point at a second upper left or a lower left of the first prediction block, and the second upper left is the region above the area directly to the left of the first prediction block;
a seventh set of pixels comprising non-reconstructed pixels above the first prediction block.
Optionally, the third pixel point set includes any one of the following items:
a reconstructed pixel point at the first upper left of the first prediction block;
a reconstructed pixel point at the upper right of the first prediction block;
reconstructed pixel points at the first upper left and the upper right of the first prediction block;
reconstructed pixel points at the first upper left of and directly above the first prediction block;
reconstructed pixel points directly above and at the upper right of the first prediction block;
and reconstructed pixel points at the first upper left of, directly above, and at the upper right of the first prediction block.
Optionally, the fifth pixel point set includes any one of the following items:
a reconstructed pixel point at the second upper left of the first prediction block;
a reconstructed pixel point at the lower left of the first prediction block;
reconstructed pixel points at the second upper left and the lower left of the first prediction block;
reconstructed pixel points at the second upper left of and directly to the left of the first prediction block;
reconstructed pixel points directly to the left of and at the lower left of the first prediction block;
and reconstructed pixel points at the second upper left of, directly to the left of, and at the lower left of the first prediction block.
Optionally, the N pixel point sets corresponding to the first prediction block include first pixel points above the first prediction block, where the first prediction block is the chroma prediction block or the luma prediction block;
the first pixel points are: pixel points located in the I rows of pixel points closest to the first prediction block, out of the P rows of pixel points above the first prediction block, wherein P is an integer greater than I, and I is a positive integer less than or equal to 4.
Optionally, the N pixel point sets corresponding to the first prediction block include a second pixel point on the left of the first prediction block, where the first prediction block is the chroma prediction block or the luma prediction block;
the second pixel points are: pixel points located in the J columns of pixel points closest to the first prediction block, out of the Q columns of pixel points on the left of the first prediction block, wherein Q is an integer greater than J, and J is a positive integer less than or equal to 4.
Optionally, the first lengths corresponding to all the pixel points of the third pixel point set satisfy any one of the following:
the first length is no more than twice a width of the first prediction block;
the first length does not exceed a sum of a width and a height of the first prediction block.
Optionally, the first length L1 satisfies: L1 is greater than or equal to K1 and less than or equal to K2;
wherein K1 is the total length of all pixel points included in first target pixel points, the first target pixel points are R pixel points selected from the third pixel point set and are used for obtaining one group of parameters in the N groups of parameters, and R is a positive integer; K2 is the total length of all reconstructed pixel points above the first prediction block.
Optionally, the pixel points included in the first target pixel point are all reconstruction pixel points.
Optionally, the first length L1 satisfies: L1 is greater than K3 and less than or equal to K4;
wherein K3 is the total length of all reconstructed pixel points above the first prediction block, and K4 is twice the width of the first prediction block or the sum of the width and the height of the first prediction block.
Optionally, the first target pixel point includes an unreconstructed pixel point;
the first target pixel points are R pixel points selected from the third pixel point set, the first target pixel points are used for obtaining one group of parameters in the N groups of parameters, and R is a positive integer.
Optionally, the second lengths corresponding to all the pixel points of the fifth pixel point set satisfy any one of the following:
the second length is no more than twice a height of the first prediction block;
the second length does not exceed a sum of a width and a height of the first prediction block.
Optionally, the second length L2 satisfies: L2 is greater than or equal to K5 and less than or equal to K6;
wherein K5 is the total length of all pixel points included in second target pixel points, the second target pixel points are R pixel points selected from the fifth pixel point set and are used for obtaining one group of parameters in the N groups of parameters, and R is a positive integer; K6 is the total length of all reconstructed pixel points on the left of the first prediction block.
Optionally, the pixel points included by the second target pixel point are all reconstruction pixel points.
Optionally, the second length L2 satisfies: L2 is greater than K7 and less than or equal to K8;
wherein K7 is the total length of all reconstructed pixel points on the left of the first prediction block, and K8 is twice the height of the first prediction block or the sum of the width and the height of the first prediction block.
Optionally, the second target pixel point includes an unreconstructed pixel point;
the second target pixel points are R pixel points selected from the fifth pixel point set, the second target pixel points are used for obtaining one group of parameters in the N groups of parameters, and R is a positive integer.
In case two, the computer program is executable by a processor to implement the steps of:
acquiring a target reconstruction pixel point corresponding to a second prediction block, wherein the second prediction block comprises a chroma prediction block and a brightness prediction block corresponding to the chroma prediction block;
selecting R groups of reconstructed pixel point pairs from the target reconstructed pixel points, wherein the R groups of reconstructed pixel point pairs comprise R reconstructed pixel points corresponding to the chroma prediction block and R reconstructed pixel points corresponding to the luma prediction block, and R is a positive integer;
wherein the target reconstruction pixel points comprise: reconstructed pixel points directly above and at the upper right of the second prediction block; or, reconstructed pixel points directly to the left of and at the lower left of the second prediction block.
Optionally, R is 4;
the R reconstructed pixel points corresponding to the chroma prediction block are: among the reconstructed pixel points directly above and to the upper right of the chroma prediction block, the reconstructed pixel points at positions 0/4, 1/4, 2/4 and 3/4;
the R reconstruction pixel points corresponding to the brightness prediction block are as follows: among the reconstructed pixels directly above and to the upper right of the luminance prediction block, the reconstructed pixels at positions 0/4, 1/4, 2/4, and 3/4.
Optionally, after selecting R groups of reconstructed pixel point pairs from the target reconstructed pixel points, the computer program may be further executed by the processor to implement the following steps:
coding a first bin by adopting a target coding mode, wherein the first bin is the bin that a first bin string adds relative to a second bin string, the first bin string is a bin string generated by binarizing the prediction mode corresponding to the target reconstruction pixel points, and the second bin string is a bin string generated by binarizing the chroma two-step prediction mode TSCPM;
the target coding mode is: a context coding mode; or, a Bypass coding mode.
Optionally, when the target encoding manner is a context encoding manner, the context model corresponding to the target encoding manner is: context model number 1; or, a newly established context model.
In case three, the computer program is executable by a processor to implement the steps of:
acquiring a third target pixel point corresponding to a second prediction block, wherein the second prediction block comprises a chroma prediction block and a brightness prediction block corresponding to the chroma prediction block;
selecting R groups of pixel point pairs from the third target pixel points, wherein the R groups of pixel point pairs comprise R pixel points corresponding to the chroma prediction block and R pixel points corresponding to the brightness prediction block, and R is a positive integer;
wherein the third target pixel point includes any one of:
in a case where both the upper side and the left side of the second prediction block include reconstructed pixel points, the third target pixel point includes: a reconstruction pixel point right above the second prediction block; or, a reconstructed pixel point right to the left of the second prediction block;
under the condition that a reconstruction pixel point is included above the second prediction block and a reconstruction pixel point is not included on the left side of the second prediction block, the third target pixel point comprises: an un-reconstructed pixel point to the left of the second prediction block;
under the condition that the reconstructed pixel point is not included above the second prediction block and the reconstructed pixel point is included on the left of the second prediction block, the third target pixel point comprises: and the non-reconstructed pixel point above the second prediction block.
Optionally, after selecting R groups of pixel point pairs from the third target pixel points, the computer program may be further executed by the processor to implement the following step:
coding a second bin by adopting a context coding mode, wherein the second bin is the bin that a third bin string adds relative to the second bin string, the third bin string is a bin string generated by binarizing the prediction mode corresponding to the third target pixel points, and the second bin string is a bin string generated by binarizing the chroma two-step prediction mode TSCPM.
Optionally, the context model corresponding to the context coding mode is: context model number 1; or, a newly established context model.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be physically included alone, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) to execute some steps of the parameter obtaining method according to various embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (11)

1. A pixel point pair selection method, the method comprising:
determining a target reconstruction pixel point corresponding to a second prediction block, wherein the second prediction block comprises a chroma prediction block and a brightness prediction block corresponding to the chroma prediction block;
selecting R groups of reconstructed pixel point pairs from the target reconstructed pixel points, wherein the R groups of reconstructed pixel point pairs comprise R reconstructed pixel points corresponding to the chroma prediction block and R reconstructed pixel points corresponding to the luma prediction block, and R is a positive integer;
wherein the target reconstruction pixel points comprise: reconstructed pixel points directly above and at the upper right of the second prediction block; or, reconstructed pixel points directly to the left of and at the lower left of the second prediction block.
2. The method of claim 1, wherein R is 4;
the R reconstructed pixel points corresponding to the chroma prediction block are: among the reconstructed pixel points directly above and to the upper right of the chroma prediction block, the reconstructed pixel points at positions 0/4, 1/4, 2/4 and 3/4;
the R reconstruction pixel points corresponding to the brightness prediction block are as follows: among the reconstructed pixels directly above and to the upper right of the luminance prediction block, the reconstructed pixels at positions 0/4, 1/4, 2/4, and 3/4.
3. The method of claim 1, wherein after said selecting R sets of pairs of reconstructed pixel points from said target reconstructed pixel points, the method further comprises:
coding a first bin by adopting a target coding mode, wherein the first bin is the bin that a first bin string adds relative to a second bin string, the first bin string is a bin string generated by binarizing the prediction mode corresponding to the target reconstruction pixel points, and the second bin string is a bin string generated by binarizing the chroma two-step prediction mode TSCPM;
the target coding mode is: a context coding mode; or, a Bypass coding mode.
4. The method according to claim 3, wherein, when the target coding scheme is a context coding scheme, the context model corresponding to the target coding scheme is: context model number 1; or, a newly established context model.
5. A pixel point pair selection method, the method comprising:
determining a third target pixel point corresponding to a second prediction block, wherein the second prediction block comprises a chroma prediction block and a brightness prediction block corresponding to the chroma prediction block;
selecting R groups of pixel point pairs from the third target pixel points, wherein the R groups of pixel point pairs comprise R pixel points corresponding to the chroma prediction block and R pixel points corresponding to the brightness prediction block, and R is a positive integer;
wherein the third target pixel point includes any one of:
in a case where both the upper side and the left side of the second prediction block include reconstructed pixel points, the third target pixel point includes: a reconstruction pixel point right above the second prediction block; or, a reconstructed pixel point right to the left of the second prediction block;
under the condition that a reconstruction pixel point is included above the second prediction block and a reconstruction pixel point is not included on the left side of the second prediction block, the third target pixel point comprises: an un-reconstructed pixel point to the left of the second prediction block;
under the condition that the reconstructed pixel point is not included above the second prediction block and the reconstructed pixel point is included on the left of the second prediction block, the third target pixel point comprises: and the non-reconstructed pixel point above the second prediction block.
6. The method of claim 5, wherein after selecting R sets of pixel point pairs from the third target pixel point, the method further comprises:
and coding a second bin by adopting a context coding mode, wherein the second bin is the bin that a third bin string adds relative to the second bin string, the third bin string is a bin string generated by binarizing the prediction mode corresponding to the third target pixel points, and the second bin string is a bin string generated by binarizing the chroma two-step prediction mode TSCPM.
7. The method according to claim 6, wherein the context model corresponding to the context coding scheme is: context model number 1; or, a newly established context model.
8. A pixel point pair selection device, characterized in that the pixel point pair selection device comprises:
a second determining module, configured to determine a target reconstructed pixel corresponding to a second prediction block, where the second prediction block includes a chroma prediction block and a luma prediction block corresponding to the chroma prediction block;
a second selecting module, configured to select R groups of reconstructed pixel point pairs from the target reconstructed pixel points, where the R groups of reconstructed pixel point pairs include R reconstructed pixel points corresponding to the chroma prediction block and R reconstructed pixel points corresponding to the luma prediction block, and R is a positive integer;
wherein the target reconstruction pixel points comprise: reconstructed pixel points directly above and at the upper right of the second prediction block; or, reconstructed pixel points directly to the left of and at the lower left of the second prediction block.
9. A pixel point pair selection device, characterized in that the pixel point pair selection device comprises:
a third determining module, configured to determine a third target pixel corresponding to a second prediction block, where the second prediction block includes a chroma prediction block and a luma prediction block corresponding to the chroma prediction block;
a third selecting module, configured to select R groups of pixel point pairs from the third target pixel points, where the R groups of pixel point pairs include R pixel points corresponding to the chroma prediction block and R pixel points corresponding to the luma prediction block, and R is a positive integer;
wherein the third target pixel point includes any one of:
in a case where both the upper side and the left side of the second prediction block include reconstructed pixel points, the third target pixel point includes: a reconstruction pixel point right above the second prediction block; or, a reconstructed pixel point right to the left of the second prediction block;
under the condition that a reconstruction pixel point is included above the second prediction block and a reconstruction pixel point is not included on the left side of the second prediction block, the third target pixel point comprises: an un-reconstructed pixel point to the left of the second prediction block;
under the condition that the reconstructed pixel point is not included above the second prediction block and the reconstructed pixel point is included on the left of the second prediction block, the third target pixel point comprises: and the non-reconstructed pixel point above the second prediction block.
10. A pixel point pair selection apparatus comprising: a transceiver, a memory, a processor, and a computer program stored on the memory and executable on the processor; characterized in that the processor is configured to read the program in the memory to implement the steps in the pixel point pair selection method according to any one of claims 1 to 4 or the steps in the pixel point pair selection method according to any one of claims 5 to 7.
11. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the steps in the pixel point pair selection method according to any one of claims 1 to 4 or the steps in the pixel point pair selection method according to any one of claims 5 to 7.
CN202110855539.5A 2019-08-27 2019-08-27 Pixel point pair selection method, device and computer readable storage medium Active CN113596429B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110855539.5A CN113596429B (en) 2019-08-27 2019-08-27 Pixel point pair selection method, device and computer readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910796327.7A CN110557621B (en) 2019-08-27 2019-08-27 Parameter acquisition method, pixel point pair selection method and related equipment
CN202110855539.5A CN113596429B (en) 2019-08-27 2019-08-27 Pixel point pair selection method, device and computer readable storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201910796327.7A Division CN110557621B (en) 2019-08-27 2019-08-27 Parameter acquisition method, pixel point pair selection method and related equipment

Publications (2)

Publication Number Publication Date
CN113596429A true CN113596429A (en) 2021-11-02
CN113596429B CN113596429B (en) 2023-04-14

Family

ID=68738336

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110855539.5A Active CN113596429B (en) 2019-08-27 2019-08-27 Pixel point pair selection method, device and computer readable storage medium
CN201910796327.7A Active CN110557621B (en) 2019-08-27 2019-08-27 Parameter acquisition method, pixel point pair selection method and related equipment

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201910796327.7A Active CN110557621B (en) 2019-08-27 2019-08-27 Parameter acquisition method, pixel point pair selection method and related equipment

Country Status (2)

Country Link
CN (2) CN113596429B (en)
WO (1) WO2021036462A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113596429B (en) * 2019-08-27 2023-04-14 咪咕文化科技有限公司 Pixel point pair selection method, device and computer readable storage medium
CN113497937B (en) * 2020-03-20 2023-09-05 Oppo广东移动通信有限公司 Image encoding method, image decoding method and related devices

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101193305A (en) * 2006-11-21 2008-06-04 安凯(广州)软件技术有限公司 Inter-frame prediction data storage and exchange method for video coding and decoding chip
CN101605263A (en) * 2009-07-09 2009-12-16 杭州士兰微电子股份有限公司 Method of intra-prediction and device
US20100091860A1 (en) * 2008-10-10 2010-04-15 Igor Anisimov System and method for low-latency processing of intra-frame video pixel block prediction
CN103096055A (en) * 2011-11-04 2013-05-08 华为技术有限公司 Image signal intra-frame prediction and decoding method and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105306944B (en) * 2015-11-30 2018-07-06 哈尔滨工业大学 Chromatic component Forecasting Methodology in hybrid video coding standard
US10419757B2 (en) * 2016-08-31 2019-09-17 Qualcomm Incorporated Cross-component filter
US11025903B2 (en) * 2017-01-13 2021-06-01 Qualcomm Incorporated Coding video data using derived chroma mode
CN109274969B (en) * 2017-07-17 2020-12-22 华为技术有限公司 Method and apparatus for chroma prediction
CN109005408B (en) * 2018-08-01 2020-05-29 北京奇艺世纪科技有限公司 Intra-frame prediction method and device and electronic equipment
CN113596429B (en) * 2019-08-27 2023-04-14 咪咕文化科技有限公司 Pixel point pair selection method, device and computer readable storage medium

Also Published As

Publication number Publication date
CN110557621A (en) 2019-12-10
CN110557621B (en) 2022-06-14
WO2021036462A1 (en) 2021-03-04
CN113596429B (en) 2023-04-14

Similar Documents

Publication Publication Date Title
US20240098275A1 (en) Video encoding and decoding method
CN103141103B (en) The method and apparatus of processing video data
CN101389021B (en) Video encoding/decoding method and apparatus
JP5480775B2 (en) Video compression method
US10091526B2 (en) Method and apparatus for motion vector encoding/decoding using spatial division, and method and apparatus for image encoding/decoding using same
US11979580B2 (en) Method and apparatus for encoding or decoding video data in FRUC mode with reduced memory accesses
US8718140B1 (en) Encoding video data
CN101584218B (en) Method and apparatus for encoding and decoding based on intra prediction
US20140119439A1 (en) Method and apparatus of intra mode coding
CN103260018B (en) Intra-frame image prediction decoding method and Video Codec
CN104853209A (en) Image coding and decoding method and device
CN103067704B (en) A kind of method for video coding of skipping in advance based on coding unit level and system
CN103096055A (en) Image signal intra-frame prediction and decoding method and device
CN104994386A (en) Method and apparatus for encoding and decoding image through intra prediction
CN110557621B (en) Parameter acquisition method, pixel point pair selection method and related equipment
KR20190122638A (en) Apparatus and method for intra prediction coding/decoding based on adaptive candidate modes
CN116506608A (en) Chroma intra prediction method and device, and computer storage medium
CN103051896B (en) Mode skipping-based video frequency coding method and mode skipping-based video frequency coding system
CN102196253A (en) Video coding method and device based on frame type self-adaption selection
TW202032995A (en) Encoding and decoding a picture
CN113365080B (en) Encoding and decoding method, device and storage medium for string coding technology
Song et al. Unified depth intra coding for 3D video extension of HEVC
CN103430543A (en) Method for reconstructing and coding image block
CN102364948B (en) Method for two-way compensation of video coding in merging mode
CN104539967A (en) Inter-frame prediction method in mixed video coding standard

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant