WO2020056767A1 - Method and apparatus for predicting a video image component, and computer storage medium


Info

Publication number
WO2020056767A1
Authority
WO
WIPO (PCT)
Prior art keywords
image component
value
reference value
pixel point
coding block
Prior art date
Application number
PCT/CN2018/107109
Other languages
English (en)
Chinese (zh)
Inventor
霍俊彦
马彦卓
柴小燕
万帅
杨付正
Original Assignee
Oppo广东移动通信有限公司
Priority date
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Priority to CN201880094931.9A (CN112313950B)
Priority to PCT/CN2018/107109
Publication of WO2020056767A1



Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102: Methods or arrangements using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103: Selection of coding mode or of prediction mode
    • H04N19/169: Methods or arrangements using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186: the unit being a colour or a chrominance component

Definitions

  • the embodiments of the present application relate to the technical field of video encoding and decoding, and in particular, to a method, a device, and a computer storage medium for predicting video image components.
  • H.265/High Efficiency Video Coding (HEVC) is the latest international video compression standard. The compression performance of H.265/HEVC is about 50% better than that of the previous-generation video coding standard, H.264/Advanced Video Coding (AVC), but it still cannot meet the needs of rapidly developing video applications, especially new video applications such as high definition and virtual reality (VR).
  • JEM (Joint Exploration Test Model)
  • VVC (Versatile Video Coding)
  • The embodiments of the present application are expected to provide a method, a device, and a computer storage medium for predicting a video image component, which can effectively improve the prediction accuracy of the video image component and make the predicted values of the video image component closer to its original values, thereby saving the coding rate.
  • an embodiment of the present application provides a method for predicting a video image component, where the method includes:
  • acquiring a first image component reconstruction value, a first image component neighboring reference value, and a second image component neighboring reference value corresponding to a coding block, where the first image component reconstruction value represents the reconstructed value of the first image component corresponding to at least one pixel of the coding block, and the first image component neighboring reference value and the second image component neighboring reference value respectively represent the reference value of the first image component and the reference value of the second image component corresponding to each neighboring pixel among the neighboring reference pixels of the coding block;
  • determining a model parameter according to the acquired first image component reconstruction value, first image component neighboring reference value, and second image component neighboring reference value; and
  • obtaining, according to the model parameter, a second image component prediction value corresponding to each pixel in the coding block.
  • an embodiment of the present application provides a device for predicting a video image component.
  • the device for predicting a video image component includes: an acquiring part, a determining part, and a predicting part;
  • the acquiring part is configured to acquire a first image component reconstruction value, a first image component neighboring reference value, and a second image component neighboring reference value corresponding to a coding block, where the first image component reconstruction value represents the reconstructed value of the first image component corresponding to at least one pixel of the coding block, and the first image component neighboring reference value and the second image component neighboring reference value respectively represent the reference value of the first image component and the reference value of the second image component corresponding to each neighboring pixel among the neighboring reference pixels of the coding block;
  • the determining part is configured to determine a model parameter according to the acquired first image component reconstruction value, first image component neighboring reference value, and second image component neighboring reference value;
  • the predicting part is configured to obtain, according to the model parameter, a second image component prediction value corresponding to each pixel in the coding block.
  • an embodiment of the present application provides a device for predicting a video image component.
  • the device for predicting a video image component includes: a memory and a processor;
  • the memory is configured to store a computer program capable of running on the processor
  • the processor is configured to execute the steps of the method according to the first aspect when the computer program is run.
  • An embodiment of the present application provides a computer storage medium that stores a prediction program for a video image component, where the prediction program for a video image component is executed by at least one processor to implement the steps of the method according to the first aspect.
  • Embodiments of the present application provide a method, a device, and a computer storage medium for predicting a video image component. A first image component reconstruction value, a first image component neighboring reference value, and a second image component neighboring reference value corresponding to a coding block are acquired, where the first image component reconstruction value represents the reconstructed value of the first image component corresponding to at least one pixel of the coding block, and the first image component neighboring reference value and the second image component neighboring reference value respectively represent the reference value of the first image component and the reference value of the second image component corresponding to each neighboring pixel among the neighboring reference pixels of the coding block. A model parameter is determined according to the first image component reconstruction value, the first image component neighboring reference value, and the second image component neighboring reference value, and a second image component prediction value corresponding to each pixel in the coding block is obtained according to the model parameter.
  • In other words, the determination of the model parameters in the embodiments of the present application considers not only the first image component neighboring reference value and the second image component neighboring reference value but also the first image component reconstruction value, so that the constructed model parameters are closer to the optimal model parameters and the resulting second image component prediction values are closer to the original values of the second image component, which saves the coding rate.
  • FIG. 1A to FIG. 1C are schematic structural diagrams of video image sampling formats in related technical solutions;
  • FIG. 2A and FIG. 2B are schematic diagrams of sampling a first image component neighboring reference value and a second image component neighboring reference value of a coding block in a related technical solution;
  • FIG. 3A to FIG. 3C are schematic structural diagrams of a CCLM preset model in a related technical solution;
  • FIG. 4 is a grouping schematic diagram of the first image component neighboring reference value and the second image component neighboring reference value in the MMLM prediction mode in a related technical solution;
  • FIG. 5 is a schematic diagram of the distribution of pixels and neighboring reference pixels in a coding block according to an embodiment of the present application;
  • FIG. 6 is a schematic block diagram of a video encoding system according to an embodiment of the present application;
  • FIG. 7 is a schematic block diagram of a video decoding system according to an embodiment of the present application;
  • FIG. 8 is a schematic flowchart of a method for predicting a video image component according to an embodiment of the present application;
  • FIG. 9 is a schematic structural diagram of an apparatus for predicting a video image component according to an embodiment of the present application;
  • FIG. 10 is a schematic structural diagram of another apparatus for predicting a video image component according to an embodiment of the present application;
  • FIG. 11 is a schematic structural diagram of another apparatus for predicting a video image component according to an embodiment of the present application;
  • FIG. 12 is a schematic diagram of a specific hardware structure of an apparatus for predicting a video image component according to an embodiment of the present application.
  • a first image component, a second image component, and a third image component are generally used to characterize a coding block; wherein the three image components are a luminance component, a blue chrominance component, and a red chrominance component, respectively.
  • the luminance component is usually expressed by the symbol Y
  • the blue chrominance component is usually expressed by the symbol Cb
  • the red chrominance component is usually expressed by the symbol Cr.
  • In the embodiments of the present application, the first image component may be the luminance component Y, the second image component may be the blue chrominance component Cb, and the third image component may be the red chrominance component Cr.
  • the currently commonly used sampling format is the YCbCr format.
  • The YCbCr format includes the following types, as shown in FIG. 1A to FIG. 1C, where the crosses (×) in the figures represent the sampling points of the first image component and the circles (○) represent the sampling points of the second image component or the third image component.
  • The YCbCr format includes a 4:4:4 format, a 4:2:2 format, and a 4:2:0 format.
  • When the video image uses the 4:2:0 YCbCr format, if the first image component of the video image is a coding block of size 2N×2N, the corresponding second image component or third image component is a coding block of size N×N, where N is the side length of the coding block.
  • The following description takes the 4:2:0 format as an example, but the technical solutions of the embodiments of the present application are also applicable to other sampling formats.
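As a concrete illustration of the 4:2:0 size relationship described above, the following sketch down-samples a 2N×2N first-image-component (luma) block to the N×N resolution of the chrominance components. The simple 2×2 averaging filter is an illustrative assumption (codecs may use other down-sampling filters), and the function name and nested-list data layout are likewise hypothetical, not taken from the embodiments.

```python
def downsample_420(luma):
    """Down-sample a 2N x 2N luma block to N x N by 2x2 averaging.

    2x2 averaging is one simple choice of down-sampling filter, used
    here only to illustrate the 4:2:0 size relationship between the
    first image component and the second/third image components.
    """
    n2 = len(luma)                       # 2N
    assert n2 % 2 == 0 and all(len(row) == n2 for row in luma)
    n = n2 // 2
    out = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = (luma[2 * i][2 * j] + luma[2 * i][2 * j + 1]
                 + luma[2 * i + 1][2 * j] + luma[2 * i + 1][2 * j + 1])
            out[i][j] = (s + 2) // 4     # rounded integer average
    return out
```

For a 4×4 first-image-component block this yields a 2×2 block, matching the 2N×2N to N×N relationship above.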
  • Cross-Component Linear Model (CCLM) prediction: in H.266, CCLM implements prediction from the first image component to the second image component, from the first image component to the third image component, and between the second image component and the third image component. Below, prediction from the first image component to the second image component is described as an example, but the technical solutions of the embodiments of the present application can also be applied to the prediction of other image components.
  • In the CCLM prediction mode, the first image component and the second image component belong to the same coding block, and the second image component is predicted based on the first image component reconstruction value of the same coding block, for example using a preset model as shown in formula (1):
  • Pred_C[i, j] = α × Rec_Y[i, j] + β  (1)
  • where i and j represent the position coordinates of a sampling point in the coding block, i being the horizontal direction and j the vertical direction; Pred_C[i, j] represents the predicted value of the second image component for the sampling point at position [i, j] in the coding block; Rec_Y[i, j] represents the (down-sampled) reconstructed value of the first image component for the sampling point at position [i, j] in the same coding block; and α and β are the model parameters of the preset model. These model parameters can be derived by minimizing the regression error between the first image component neighboring reference values and the second image component neighboring reference values around the coding block, as calculated using formula (2):
  • α = (N·Σ(Y(n)·C(n)) − ΣY(n)·ΣC(n)) / (N·Σ(Y(n)²) − (ΣY(n))²), β = (ΣC(n) − α·ΣY(n)) / N  (2)
  • where Y(n) represents the down-sampled neighboring reference values of the first image component on the left and upper sides, C(n) represents the neighboring reference values of the second image component on the left and upper sides, and N is the number of neighboring reference samples.
  • Refer to FIG. 2A and FIG. 2B, which are schematic diagrams of sampling the first image component neighboring reference values and the second image component neighboring reference values of a coding block in a related technical solution. In FIG. 2A, the bold larger box highlights the first image component coding block 21, and the gray solid circles indicate the neighboring reference values Y(n) of the first image component coding block 21; in FIG. 2B, the gray solid circles indicate the neighboring reference values C(n) of the second image component coding block 22. FIG. 2A shows a first image component coding block 21 of size 2N×2N; for a 4:2:0 video image, the second image component corresponding to a 2N×2N first image component has size N×N, as shown by 22 in FIG. 2B. That is, FIG. 2A and FIG. 2B are schematic diagrams of the coding blocks obtained by sampling the first image component and the second image component, respectively, of the same coding block.
  • For square coding blocks, formula (2) can be applied directly; for non-square coding blocks, the neighboring samples of the longer edge are first down-sampled to obtain a number of samples equal to that of the shorter edge.
  • The model parameters α and β do not need to be transmitted; they can also be calculated using formula (2) at the decoder, which is not specifically limited in the embodiments of the present application.
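The derivation of α and β by formula (2) and their use in formula (1), including the decoder-side calculation just mentioned, can be sketched as follows. The function names and the plain-list data layout are illustrative assumptions, as is the flat-neighborhood fallback when the denominator of formula (2) is zero.

```python
def derive_cclm_params(y_ref, c_ref):
    """Least-squares model parameters per formula (2).

    y_ref: down-sampled first image component neighboring reference
           values Y(n) on the left and upper sides.
    c_ref: second image component neighboring reference values C(n)
           at the same positions.
    """
    n = len(y_ref)
    sum_y = sum(y_ref)
    sum_c = sum(c_ref)
    sum_yc = sum(y * c for y, c in zip(y_ref, c_ref))
    sum_yy = sum(y * y for y in y_ref)
    denom = n * sum_yy - sum_y * sum_y
    if denom == 0:                 # flat neighborhood: assumed fallback
        return 0.0, sum_c / n
    alpha = (n * sum_yc - sum_y * sum_c) / denom
    beta = (sum_c - alpha * sum_y) / n
    return alpha, beta

def predict_chroma(rec_y, alpha, beta):
    """Apply formula (1): Pred_C[i][j] = alpha * Rec_Y[i][j] + beta."""
    return [[alpha * v + beta for v in row] for row in rec_y]
```

Because the derivation uses only reconstructed neighboring samples, the decoder can run the same two functions and reproduce α and β without any transmitted parameters.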
  • FIG. 3A to FIG. 3C are schematic diagrams illustrating the principle of a CCLM preset model in related technical solutions; a, b, and c are neighboring reference values of the first image component, and A, B, and C are the corresponding neighboring reference values of the second image component; e is the first image component reconstruction value of a pixel in the coding block, and E is the second image component prediction value of that pixel.
  • Using the first image component neighboring reference values Y(n) and the second image component neighboring reference values C(n), α and β can be calculated according to formula (2); a preset model is then established according to the calculated α and β and formula (1), as shown in FIG. 3C. Substituting the first image component reconstruction value e of a pixel in the coding block into the preset model of formula (1) yields the second image component prediction value E of that pixel.
  • There are currently two CCLM prediction modes: a single-model CCLM prediction mode and a multiple-model CCLM (Multiple Model CCLM, MMLM) prediction mode.
  • The single-model CCLM prediction mode uses only one preset model to predict the second image component from the first image component, whereas the MMLM prediction mode uses multiple preset models to predict the second image component from the first image component. For example, in the MMLM prediction mode, the first image component neighboring reference values and the second image component neighboring reference values of the coding block are divided into two groups, and each group can be used separately as a training set for deriving the model parameters of a preset model, so that each group yields a set of model parameters α and β. The first image component reconstruction values of the coding block can also be grouped according to the same classification as the first image component neighboring reference values, and the corresponding model parameters α and β are used to establish the preset models.
  • FIG. 4 shows a grouping schematic diagram of the first image component neighboring reference values and the second image component neighboring reference values in the MMLM prediction mode in the related technical solution; the threshold is a set value indicating the classification basis for establishing the multiple preset models, and is obtained by averaging the neighboring reference values Y(n) of the first image component.
  • Denote the threshold by Threshold and use it as the dividing point: if a first image component neighboring reference value is less than or equal to Threshold, it is divided into the first group; if a first image component neighboring reference value is greater than Threshold, it is divided into the second group. The model parameters of the first preset model M1 are derived from the first group, and the model parameters of the second preset model M2 from the second group, so that:
  • Pred_1C[i, j] = α1 × Rec_Y[i, j] + β1, if Rec_Y[i, j] ≤ Threshold; Pred_2C[i, j] = α2 × Rec_Y[i, j] + β2, if Rec_Y[i, j] > Threshold
  • where Rec_Y[i, j] represents the reconstructed value of the first image component for the pixel at position [i, j] in the coding block; Pred_1C[i, j] represents the predicted value of the second image component obtained for the pixel at [i, j] according to the first preset model M1; and Pred_2C[i, j] represents the predicted value of the second image component obtained for the pixel at [i, j] according to the second preset model M2.
  • Here, the first image component neighboring reference values and second image component neighboring reference values of the coding block are used to calculate the model parameters α and β of the preset model; specifically, α and β are obtained by minimizing the regression error between the first image component neighboring reference values and the second image component neighboring reference values, as shown in formula (2) above.
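The MMLM grouping described above can be sketched as follows: the neighboring reference samples are split at Threshold (the mean of Y(n)), one pair of model parameters is fitted per group with formula (2), and each reconstructed first-image-component value in the block selects model M1 or M2 by the same threshold. Function names and the single-group fallback when all reference values are equal are illustrative assumptions.

```python
def fit_linear(ys, cs):
    """Least-squares alpha, beta per formula (2) for one group."""
    n = len(ys)
    sy, sc = sum(ys), sum(cs)
    syc = sum(y * c for y, c in zip(ys, cs))
    syy = sum(y * y for y in ys)
    d = n * syy - sy * sy
    if d == 0:
        return 0.0, sc / n
    a = (n * syc - sy * sc) / d
    return a, (sc - a * sy) / n

def mmlm_predict(y_ref, c_ref, rec_y):
    """Two-model MMLM prediction with Threshold = mean of Y(n)."""
    threshold = sum(y_ref) / len(y_ref)
    g1 = [(y, c) for y, c in zip(y_ref, c_ref) if y <= threshold]
    g2 = [(y, c) for y, c in zip(y_ref, c_ref) if y > threshold]
    if not g2:                       # all references equal: single group
        g2 = g1
    a1, b1 = fit_linear([y for y, _ in g1], [c for _, c in g1])
    a2, b2 = fit_linear([y for y, _ in g2], [c for _, c in g2])
    # Each pixel selects model M1 or M2 by comparing Rec_Y to Threshold.
    return [[(a1 * v + b1) if v <= threshold else (a2 * v + b2)
             for v in row] for row in rec_y]
```

With two clearly separated luminance clusters in the neighborhood, the two fitted models capture each cluster's luma-to-chroma relationship separately, which a single model of formula (1) cannot.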
  • However, the spatial texture of an image often varies, and the distribution characteristics of pixels differ between regions; for example, some pixels have high luminance while others have low luminance. If only the neighboring reference pixels are used to construct the model parameters of the preset model, the information considered is not comprehensive enough, the constructed model parameters are not optimal, and the second image component prediction value obtained from the preset model is therefore not accurate enough.
  • To this end, the embodiments of the present application propose a method for predicting a video image component that constructs the model parameters of the preset model based on the first image component reconstruction values and second image component temporary values of the coding block, where the second image component temporary values are obtained according to the degree of similarity between the first image component reconstruction values of the coding block and the first image component neighboring reference values, so that the constructed model parameters are as close as possible to the optimal model parameters. Referring to FIG. 5, which shows the distribution of the pixels and neighboring reference pixels of a coding block provided in an embodiment of the present application, the neighboring reference pixels of the coding block are mainly of high luminance, while the pixels inside the coding block are mainly of medium-to-low luminance.
  • Because the embodiments of the present application consider not only the first image component neighboring reference values and second image component neighboring reference values corresponding to the neighboring reference pixels but also the degree of similarity between the first image component reconstruction values and the first image component neighboring reference values, the constructed model parameters are as close to the optimal model parameters as possible.
  • Moreover, the first image component reconstruction values of the coding block participate not only in the application of the preset model but also in the calculation of the model parameters, which further makes the second image component prediction values obtained by the embodiments of the present application closer to the original values of the second image component.
  • The video encoding system 600 includes components such as transform and quantization 601, intra estimation 602, intra prediction 603, motion compensation 604, motion estimation, filter control analysis 607, deblocking filtering and SAO filtering 608, header information encoding and CABAC 609, and decoded image buffer 610. For an input original video signal, a video coding block can be obtained by partitioning a coding tree unit (CTU); transform and quantization 601 then transforms the video coding block, including transforming the residual information from the pixel domain to the transform domain and quantizing the resulting transform coefficients to further reduce the bit rate. Intra estimation 602 and intra prediction 603 are used to perform intra prediction on the video coding block. After inverse quantization and inverse transformation, the residual block is reconstructed in the pixel domain; the reconstructed residual block is filtered through filter control analysis 607 and deblocking filtering and SAO filtering 608 to remove blocking artifacts, and is then added to a predictive block in a frame of the decoded image buffer 610 to produce a reconstructed video coding block. Header information encoding and CABAC 609 is used to encode the quantized transform coefficients; the context can be based on neighboring coding blocks and can be used to encode information indicating the determined intra prediction mode, after which the bitstream of the video signal is output. The decoded image buffer 610 is used to store the reconstructed video coding blocks; as video image encoding progresses, new reconstructed video coding blocks are continuously generated and stored in the decoded image buffer 610.
  • The video decoding system 700 includes components such as header information decoding and CABAC decoding 701, inverse transform and inverse quantization 702, intra prediction 703, motion compensation 704, deblocking filtering and SAO filtering 705, and decoded image buffer 706. After the input video signal undergoes the encoding processing of FIG. 6, the bitstream of the video signal is output. The bitstream is input into the video decoding system 700 and first passes through header information decoding and CABAC decoding 701 to obtain the decoded transform coefficients; the transform coefficients are processed by inverse transform and inverse quantization 702 to generate a residual block in the pixel domain. Intra prediction 703 can be used to generate prediction data for the current video decoded block based on the determined intra prediction mode and data from previously decoded blocks of the current frame or picture; motion compensation 704 determines prediction information for the video decoded block by parsing motion vectors and other associated syntax elements, and uses the prediction information to generate the predictive block of the video decoded block being decoded. The residual block from inverse transform and inverse quantization 702 is summed with the corresponding predictive block generated by motion compensation 704 to form a decoded video block; the decoded video signal passes through deblocking filtering and SAO filtering 705 to remove blocking artifacts and improve the video quality; the decoded video block is then stored in the decoded image buffer 706.
  • The embodiments of the present application are mainly applied to the intra prediction part 603 shown in FIG. 6 and the intra prediction part 703 shown in FIG. 7; that is, the embodiments of the present application can work on both the encoding system and the decoding system, which is not specifically limited in the embodiments of the present application.
  • Referring to FIG. 8, which illustrates a schematic flowchart of a method for predicting a video image component according to an embodiment of the present application, the method may include:
  • S801: Obtain a first image component reconstruction value, a first image component neighboring reference value, and a second image component neighboring reference value corresponding to a coding block, where the first image component reconstruction value represents the reconstructed value of the first image component corresponding to at least one pixel of the coding block, and the first image component neighboring reference value and the second image component neighboring reference value respectively represent the reference value of the first image component and the reference value of the second image component corresponding to each neighboring pixel among the neighboring reference pixels of the coding block;
  • S802: Determine a model parameter according to the acquired first image component reconstruction value, first image component neighboring reference value, and second image component neighboring reference value;
  • S803: Obtain, according to the model parameter, a second image component prediction value corresponding to each pixel in the coding block.
  • The coding block is the current coding block for which second image component prediction or third image component prediction is to be performed. The first image component reconstruction value characterizes the reconstructed value of the first image component corresponding to at least one pixel in the coding block; the first image component neighboring reference value characterizes the reference value of the first image component corresponding to the neighboring reference pixels of the coding block; and the second image component neighboring reference value characterizes the reference value of the second image component corresponding to the neighboring reference pixels of the coding block.
  • In the embodiment of the present application, the first image component reconstruction value, the first image component neighboring reference value, and the second image component neighboring reference value corresponding to the coding block are obtained, where the first image component reconstruction value represents the reconstructed value of the first image component corresponding to at least one pixel of the coding block, and the first image component neighboring reference value and the second image component neighboring reference value respectively represent the reference value of the first image component and the reference value of the second image component corresponding to each neighboring pixel among the neighboring reference pixels of the coding block.
  • Further, the optimal model parameters of the preset model can be analyzed from the perspective of mathematical theory. In video encoding and decoding, the difference between the original value of the second image component and the predicted value of the second image component is generally expected to be as small as possible.
  • where i and j represent the position coordinates of a pixel in the coding block, i being the horizontal direction and j the vertical direction; C[i, j] is the original value of the second image component for the pixel at position [i, j] in the coding block; and C_Pred[i, j] is the predicted value of the second image component for the pixel at position [i, j] in the coding block.
  • The optimal model parameters α_opt and β_opt of the preset model can be obtained by the least squares method, minimizing Σ(C[i, j] − C_Pred[i, j])² over all pixels of the coding block, as shown in equation (5): α_opt = (M·Σ(Rec_Y[i, j]·C[i, j]) − ΣRec_Y[i, j]·ΣC[i, j]) / (M·Σ(Rec_Y[i, j]²) − (ΣRec_Y[i, j])²), β_opt = (ΣC[i, j] − α_opt·ΣRec_Y[i, j]) / M, where M is the number of pixels in the coding block.
  • In the embodiment of the present application, since the original values of the second image component are not available for this derivation, the second image component temporary values are constructed based on the similarity between the first image component reconstruction values of the coding block and the first image component neighboring reference values, and the constructed second image component temporary values replace the original values of the second image component corresponding to each pixel in the coding block; in this case, near-optimal model parameters can be obtained without introducing additional bit overhead.
  • Optionally, determining the model parameter according to the acquired first image component reconstruction value, first image component neighboring reference value, and second image component neighboring reference value includes:
  • obtaining a second image component temporary value according to the first image component reconstruction value, the first image component neighboring reference value, and the second image component neighboring reference value; and
  • determining the model parameter according to the first image component reconstruction value and the obtained second image component temporary value.
  • That is to say, the determination of the model parameters in the embodiments of the present application considers not only the first image component neighboring reference value and the second image component neighboring reference value but also the first image component reconstruction value. According to the degree of similarity between the first image component reconstruction value and the first image component neighboring reference values, a matching pixel can be obtained for each pixel in the coding block, and the second image component temporary value is the second image component neighboring reference value corresponding to the matching pixel; in other words, the second image component temporary value of at least one pixel of the coding block is obtained from the second image component neighboring reference value of the corresponding neighboring reference pixel of the coding block.
  • In a possible implementation, obtaining the second image component temporary value according to the first image component reconstruction value, the first image component neighboring reference value, and the second image component neighboring reference value includes: performing a difference calculation between the first image component reconstruction value of each pixel in the coding block and the first image component neighboring reference values, obtaining a matching pixel of each pixel from the neighboring reference pixels of the coding block according to the result of the difference calculation, and using the second image component neighboring reference value corresponding to the matching pixel as the second image component temporary value of each pixel.
  • Optionally, obtaining, according to the result of the difference calculation, the matching pixel of each pixel from the neighboring reference pixels of the coding block includes: using the neighboring pixel whose first image component neighboring reference value has the smallest difference from the first image component reconstruction value of the pixel as the matching pixel of that pixel.
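The closest-pixel matching just described can be sketched as follows: for each pixel in the coding block, the neighboring reference pixel whose first image component reference value has the smallest absolute difference from the pixel's first image component reconstruction value is taken as the matching pixel, and its second image component reference value becomes the temporary value. The function name and data layout are illustrative assumptions.

```python
def temporary_values_by_matching(rec_y, y_ref, c_ref):
    """Second image component temporary values via closest-pixel matching.

    rec_y: in-block first image component reconstruction values (rows).
    y_ref: first image component neighboring reference values Y(n).
    c_ref: second image component neighboring reference values C(n),
           aligned with y_ref position by position.
    """
    out = []
    for row in rec_y:
        out_row = []
        for v in row:
            # Index of the neighboring reference sample whose first image
            # component value is closest to this pixel's reconstruction.
            k = min(range(len(y_ref)), key=lambda n: abs(y_ref[n] - v))
            out_row.append(c_ref[k])
        out.append(out_row)
    return out
```

The temporary values produced this way can then stand in for the original second image component values when the model parameters are calculated.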
  • the The result of the difference calculation, and obtaining a matching pixel point of each pixel point from the adjacent reference pixels of the coding block includes:
  • the temporary value of the second image component may be obtained by using a construction method such as interpolation. Therefore, in another possible implementation manner, the first image component reconstruction value, the first image component adjacent reference value, and the second image component adjacent reference value are used to obtain a first Two image component temporary values, including:
  • wherein the first matching pixel point represents the adjacent pixel point corresponding to the first image component adjacent reference value that, among the first image component adjacent reference values, is greater than the first image component reconstruction value and has the smallest difference from it;
  • and the second matching pixel point represents the adjacent pixel point corresponding to the first image component adjacent reference value that, among the first image component adjacent reference values, is smaller than the first image component reconstruction value and has the smallest difference from it;
  • Specifically, the search can be performed among the first image component adjacent reference values of the coding block. First, a first image component reference value Y1 that is greater than the first image component reconstruction value Rec_L[i, j] and closest to it is obtained; the adjacent pixel point corresponding to Y1 is the first matching pixel point, and the second image component reference value corresponding to the first matching pixel point is C1. Then, a first image component reference value Y2 that is smaller than Rec_L[i, j] and closest to it is obtained; the adjacent pixel point corresponding to Y2 is the second matching pixel point, and the second image component reference value corresponding to the second matching pixel point is C2. An interpolation between (Y1, C1) and (Y2, C2) then gives the second image component temporary value corresponding to the pixel point whose position coordinate is [i, j], denoted C'[i, j]. When the second image component temporary values of all pixel points of the coding block have been found, C'[i, j] can be used in place of the original second image component values to calculate the model parameters, and the model parameters are then used to construct the second image component predicted value.
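The interpolation variant can be sketched similarly (hypothetical names again; the fallback to the single closest neighbor when no reference value lies above or below the reconstruction value is an assumption, since the source does not specify that corner case):

```python
# Hypothetical sketch of the interpolation variant: Y1/C1 come from the first
# matching pixel (closest luma reference value above the reconstruction value),
# Y2/C2 from the second matching pixel (closest value below it); the temporary
# chroma value C'[i, j] is linearly interpolated between them.

def interpolated_chroma(rec, ref_luma, ref_chroma):
    above = [(y, c) for y, c in zip(ref_luma, ref_chroma) if y > rec]
    below = [(y, c) for y, c in zip(ref_luma, ref_chroma) if y < rec]
    if not above or not below:
        # no reference value on one side: fall back to the closest neighbor
        _, c = min(zip(ref_luma, ref_chroma), key=lambda p: abs(p[0] - rec))
        return c
    y1, c1 = min(above)   # closest reference value greater than rec
    y2, c2 = max(below)   # closest reference value smaller than rec
    return c2 + (c1 - c2) * (rec - y2) / (y1 - y2)

print(interpolated_chroma(110, [90, 120, 150], [10, 20, 30]))  # about 16.67
```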
  • a set of original values of the second image component corresponding to the coding block can also be constructed.
  • It should be noted that, for obtaining the second image component temporary value, not only the closest-pixel matching method but also the interpolation method can be used; it is even possible to apply the closest-pixel matching method to some pixel points and the interpolation method to the others; this embodiment of the present application does not specifically limit it.
  • the search range of the matching pixel point can also be expanded or reduced.
  • For example, the search range can be limited to rows and columns whose coordinates differ from those of the pixel point whose second image component temporary value is to be determined by no more than n, where n is an integer greater than 1; the search range can also be extended to the pixel position information of m adjacent rows and/or m adjacent columns, where m is an integer greater than 1; pixel position information of other coding blocks in the lower-left or upper-right region may also be used; this embodiment of the present application does not specifically limit it.
  • After the second image component temporary value is obtained, the model parameters may be determined. In a possible implementation manner, determining the model parameters according to the acquired first image component reconstruction value and the acquired second image component temporary value includes:
  • the first model parameter, the first image component reconstruction value, and the second image component temporary value are input into a second preset factor calculation model to obtain a second model parameter.
  • The model parameters include the first model parameter and the second model parameter. After the second image component temporary values corresponding to all pixel points of the coding block are obtained, linear regression is performed using the least squares method to obtain the first model parameter α' and the second model parameter β' of the preset model, as follows:
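The formulas for α' and β' are not legible in this text (they were rendered as images in the source). Assuming the standard ordinary-least-squares fit of the temporary second-component values C' against the first-component reconstruction values L, the derivation can be sketched as:

```python
# Assumed standard least-squares formulas (not copied from the patent):
#   alpha' = (N*sum(L*C') - sum(L)*sum(C')) / (N*sum(L^2) - sum(L)^2)
#   beta'  = (sum(C') - alpha'*sum(L)) / N

def least_squares_params(rec_luma, temp_chroma):
    n = len(rec_luma)
    s_l = sum(rec_luma)
    s_c = sum(temp_chroma)
    s_ll = sum(l * l for l in rec_luma)
    s_lc = sum(l * c for l, c in zip(rec_luma, temp_chroma))
    denom = n * s_ll - s_l * s_l
    alpha = (n * s_lc - s_l * s_c) / denom if denom else 0.0
    beta = (s_c - alpha * s_l) / n
    return alpha, beta

print(least_squares_params([1, 2, 3], [3, 5, 7]))  # exactly linear data -> (2.0, 1.0)
```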
  • After the model parameters are determined, the second image component prediction value corresponding to each pixel point in the coding block can be obtained according to the established preset model. Therefore, in the above implementation manner, obtaining the second image component prediction value corresponding to each pixel point in the coding block according to the model parameters specifically includes:
  • a preset model is established based on the first model parameter and the second model parameter, wherein the preset model is used to characterize the prediction relationship between the first image component reconstruction value and the second image component prediction value corresponding to each pixel point in the coding block;
  • according to the preset model and the first image component reconstruction value corresponding to each pixel point in the coding block, the second image component prediction value corresponding to each pixel point in the coding block is obtained.
  • Specifically, the second image component prediction value Pred_C[i, j] corresponding to the pixel point whose position coordinate is [i, j] can be obtained.
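Assuming the preset model has the linear form Pred_C[i, j] = α'·Rec_L[i, j] + β' suggested by the model parameters above, applying it per pixel can be sketched as (hypothetical function name):

```python
def predict_chroma(rec_luma, alpha, beta):
    """Apply the linear preset model per pixel: Pred_C = alpha * Rec_L + beta."""
    return [alpha * l + beta for l in rec_luma]

print(predict_chroma([100, 120], 0.5, 8))  # -> [58.0, 68.0]
```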
  • In this way, second image component temporary values are constructed to take the place of the original second image component values corresponding to the coding block, and the first model parameter and the second model parameter are then calculated according to the first image component reconstruction values and these constructed values. Establishing the preset model with the first model parameter and the second model parameter makes the deviation of the established preset model from the expected model smaller, so that the second image component prediction value is closer to the original second image component value, thereby improving the prediction accuracy of the video image component.
  • It should be noted that the obtained second image component temporary value may also be used directly as the second image component prediction value; this embodiment of the present application does not specifically limit this. If the second image component temporary value is used directly as the second image component prediction value, the calculation of the first model parameter and the second model parameter and the establishment of a preset model are not required, which greatly reduces the calculation amount of second image component prediction.
  • the method further includes:
  • the third image component neighboring reference value represents a third image component reference value corresponding to each neighboring pixel point in the neighboring reference pixels of the coding block;
  • a third image component prediction value corresponding to each pixel in the coding block is obtained.
  • Determining the sub-model parameters according to the acquired second image component reconstruction value, the second image component adjacent reference value, and the third image component adjacent reference value includes:
  • It should be noted that, in this embodiment of the present application, in addition to the prediction of the second image component from the first image component, the prediction from the second image component to the third image component, or from the third image component to the second image component, may also be performed. Since the prediction method from the third image component to the second image component is similar to the prediction method from the second image component to the third image component, this embodiment of the present application takes the prediction from the second image component to the third image component as an example.
  • the same method as used for determining the temporary value of the second image component is adopted.
  • the closest pixel matching method may be used, or the interpolation method may also be used, so as to obtain the temporary value of the third image component.
  • The determined sub-model parameters include a first sub-model parameter and a second sub-model parameter; a sub-preset model can be established according to the first sub-model parameter and the second sub-model parameter. According to the sub-preset model and the second image component reconstruction value corresponding to each pixel point in the coding block, the third image component prediction value corresponding to each pixel point in the coding block can be obtained.
  • the first sub-model parameter ⁇ * and the second sub-model parameter ⁇ * in the sub-preset model are as follows:
  • After the first sub-model parameter α* and the second sub-model parameter β* are obtained, a sub-preset model can be established.
  • The established sub-preset model is shown in equation (9).
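Equation (9) itself is not legible in this text. Given that the sub-preset model maps the second image component reconstruction value to the third image component prediction value with the sub-model parameters α* and β*, it presumably has the same linear form as the main preset model, along the lines of:

```latex
% Hypothetical reconstruction of equation (9): the sub-preset model maps the
% second-component reconstruction value to the third-component prediction.
\mathrm{Pred}_{C_3}[i,j] = \alpha^{*} \cdot \mathrm{Rec}_{C_2}[i,j] + \beta^{*}
```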
  • It should be noted that the above prediction method applied to the CCLM prediction mode is also applicable to the MMLM prediction mode; as the name implies, the MMLM prediction mode uses multiple preset models to realize the prediction of the second image component from the first image component. Therefore, based on the technical solution shown in FIG. 8, in a possible implementation manner, the method further includes:
  • the model parameters are determined separately according to each of the at least two sets of the first image component reconstruction values and the second image component temporary values to obtain at least two sets of model parameters.
  • obtaining the predicted value of the second image component corresponding to each pixel in the coding block according to the model parameter includes:
  • a second image component prediction value corresponding to each pixel in the encoding block is obtained.
  • the threshold is the classification basis of the reconstructed value of the first image component of the coding block.
  • the threshold is a setting value used to indicate the establishment of multiple preset models.
  • The size of the threshold is related to the first image component reconstruction values corresponding to the coding block. Specifically, the threshold may be obtained by calculating the average value of the first image component reconstruction values corresponding to the coding block, or by calculating the median value of the first image component reconstruction values corresponding to the coding block; this embodiment of the present application does not specifically limit it.
  • the mean Mean is calculated according to the reconstruction value of the first image component corresponding to at least one pixel of the coding block and Equation (10):
  • Mean represents the average value of the reconstructed value of the first image component corresponding to the coding block
  • M represents the number of samples of the reconstruction values of the first image component corresponding to the coding block.
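Equation (10) itself is not legible in this text; from the definitions of Mean and M above, it is presumably the ordinary sample mean:

```latex
% Presumed form of equation (10): Mean is the average of the M first-image-
% component (luma) reconstruction samples of the coding block.
\mathrm{Mean} = \frac{1}{M} \sum_{[i,j]} \mathrm{Rec}_{L}[i,j]
```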
  • the mean Mean can be directly used as a threshold, and two preset models can be established by using the threshold; however, it should be noted that the embodiment of the present application is not limited to only establishing two preset models.
  • Assuming that two preset models are established, the first model parameter α1' and the second model parameter β1' of a first preset model, and the first model parameter α2' and the second model parameter β2' of a second preset model, can be derived respectively; in combination with formula (11), a first preset model M1' and a second preset model M2' are established:
  • After the first preset model M1' and the second preset model M2' are obtained, the first image component reconstruction value Rec_L[i, j] of each pixel point in the coding block is compared with the threshold. If Rec_L[i, j] is less than or equal to the threshold, the first preset model M1' is selected, and the second image component prediction value Pred_1C[i, j] corresponding to the pixel point at position [i, j] of the coding block is obtained according to the first preset model; otherwise, the second preset model M2' is selected, and the second image component prediction value Pred_2C[i, j] corresponding to the pixel point at position [i, j] is obtained according to the second preset model.
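The two-model selection just described can be sketched as follows (hypothetical names; each preset model is an (α, β) pair, and pixels whose luma reconstruction value is at or below the threshold use the first preset model):

```python
def mmlm_predict(rec_luma, models, threshold):
    """models = [(alpha1, beta1), (alpha2, beta2)]; pick a model per pixel by
    comparing the luma reconstruction value with the threshold."""
    out = []
    for l in rec_luma:
        a, b = models[0] if l <= threshold else models[1]
        out.append(a * l + b)
    return out

print(mmlm_predict([10, 200], [(1, 0), (0.5, 5)], 100))  # -> [10, 105.0]
```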
  • For another example, the first image component reconstruction values are compared with the mean value Mean, with Mean used as the dividing point: if a first image component reconstruction value is less than or equal to the threshold, it is divided into the first group; if a first image component reconstruction value is greater than the threshold, it is divided into the second group. In this way, the first image component reconstruction value set of the first group and the first image component reconstruction value set of the second group can be obtained.
  • Further, a median calculation can also be performed on the first image component reconstruction value set of the first group to obtain the median value of the first group (it should be noted that this leaves the reconstruction value corresponding to only one pixel point in the first group); the median calculation is then performed on the first image component reconstruction value set of the second group to obtain the median value of the second group (it should be noted that this likewise leaves the reconstruction value corresponding to only one pixel point in the second group). According to the first image component reconstruction values and the second image component temporary values respectively corresponding to these two pixel points, the model parameters can be determined, a preset model is established according to the model parameters, and the second image component prediction value can be obtained according to the established preset model. This prediction method can also greatly reduce the calculation amount of second image component prediction.
  • This embodiment provides a method for predicting a video image component. A first image component reconstruction value, a first image component adjacent reference value, and a second image component adjacent reference value corresponding to a coding block are obtained, wherein the first image component reconstruction value represents the reconstruction value of the first image component corresponding to at least one pixel point of the coding block, and the first image component adjacent reference value and the second image component adjacent reference value respectively represent the reference value of the first image component and the reference value of the second image component corresponding to each adjacent pixel point among the adjacent reference pixels of the coding block. The model parameters are determined according to the acquired first image component reconstruction value, the first image component adjacent reference value, and the second image component adjacent reference value, and the second image component prediction value corresponding to each pixel point in the coding block is obtained according to the model parameters. In this embodiment of the present application, the model parameters are thus determined by also taking the first image component reconstruction values into account, so that the prediction accuracy of the video image component can be effectively improved.
  • Referring to FIG. 9, it illustrates the composition of a video image component prediction device 90 provided in an embodiment of the present application.
  • The video image component prediction device 90 may include: an obtaining section 901, a determining section 902, and a prediction section 903;
  • The obtaining section 901 is configured to obtain a first image component reconstruction value, a first image component adjacent reference value, and a second image component adjacent reference value corresponding to a coding block; wherein the first image component reconstruction value represents the reconstruction value of the first image component corresponding to at least one pixel point of the coding block, and the first image component adjacent reference value and the second image component adjacent reference value respectively represent the reference value of the first image component and the reference value of the second image component corresponding to each adjacent pixel point among the adjacent reference pixels of the coding block;
  • the determining section 902 is configured to determine a model parameter according to the acquired first image component reconstruction value, the first image component neighboring reference value, and the second image component neighboring reference value;
  • the prediction section 903 is configured to obtain a prediction value of a second image component corresponding to each pixel in the coding block according to the model parameter.
  • The obtaining section 901 is further configured to obtain a second image component temporary value according to the first image component reconstruction value, the first image component adjacent reference value, and the second image component adjacent reference value; wherein the second image component temporary value represents the temporary value of the second image component corresponding to at least one pixel point of the coding block;
  • the determining section 902 is configured to determine a model parameter according to the first image component reconstruction value and the acquired second image component temporary value.
  • the video image component prediction device 90 further includes a calculation section 904, where:
  • The calculation section 904 is configured to perform, for each pixel point of the coding block, a difference calculation between each of the first image component adjacent reference values and the first image component reconstruction value corresponding to that pixel point;
  • The obtaining section 901 is further configured to obtain a matching pixel point of each pixel point from the adjacent reference pixels of the coding block according to the result of the difference calculation, and to use the second image component adjacent reference value corresponding to the matching pixel point as the second image component temporary value corresponding to each pixel point.
  • The obtaining section 901 is configured to obtain, according to the result of the difference calculation, the adjacent pixel point corresponding to the first image component adjacent reference value with the smallest difference, and to use that adjacent pixel point as the matching pixel point of each pixel point.
  • The obtaining section 901 is configured to obtain, according to the result of the difference calculation, a set of adjacent pixel points corresponding to the first image component adjacent reference values with the smallest difference;
  • the calculation section 904 is further configured to calculate a distance value between each adjacent pixel point in the adjacent pixel point set and each of the pixel points;
  • the obtaining section 901 is further configured to select the adjacent pixel point with the smallest distance value as a matching pixel point of each pixel point.
  • The calculation section 904 is configured to perform, for each pixel point of the coding block, a difference calculation between each of the first image component adjacent reference values and the first image component reconstruction value corresponding to that pixel point;
  • The obtaining section 901 is further configured to obtain a first matching pixel point and a second matching pixel point of each pixel point from the adjacent reference pixels of the coding block according to the result of the difference calculation; wherein the first matching pixel point represents the adjacent pixel point corresponding to the first image component adjacent reference value that, among the first image component adjacent reference values, is greater than the first image component reconstruction value and has the smallest difference from it, and the second matching pixel point represents the adjacent pixel point corresponding to the first image component adjacent reference value that, among the first image component adjacent reference values, is smaller than the first image component reconstruction value and has the smallest difference from it; and to perform an interpolation operation on the second image component adjacent reference value corresponding to the first matching pixel point and the second image component adjacent reference value corresponding to the second matching pixel point to obtain the second image component temporary value corresponding to each pixel point.
  • the model parameters include a first model parameter and a second model parameter
  • The obtaining section 901 is further configured to input the first image component reconstruction value and the second image component temporary value into a first preset factor calculation model to obtain the first model parameter;
  • the obtaining section 901 is further configured to input the first model parameter, the first image component reconstruction value, and the second image component temporary value into a second preset factor calculation model to obtain the second Model parameters.
  • the video image component prediction device 90 further includes an establishing section 905, where:
  • The establishing section 905 is configured to establish a preset model based on the first model parameter and the second model parameter; wherein the preset model is used to characterize the prediction relationship between the first image component reconstruction value and the second image component prediction value corresponding to each pixel point in the coding block;
  • The prediction section 903 is further configured to obtain the second image component prediction value corresponding to each pixel point in the coding block according to the preset model and the first image component reconstruction value corresponding to each pixel point in the coding block.
  • The obtaining section 901 is further configured to obtain a second image component reconstruction value and a third image component adjacent reference value corresponding to the coding block; wherein the second image component reconstruction value represents the reconstruction value of the second image component corresponding to at least one pixel point of the coding block, and the third image component adjacent reference value represents the reference value of the third image component corresponding to each adjacent pixel point among the adjacent reference pixels of the coding block;
  • the determining section 902 is further configured to determine a sub-model parameter according to the acquired second image component reconstruction value, the second image component neighboring reference value, and the third image component neighboring reference value;
  • the prediction section 903 is further configured to obtain a predicted value of a third image component corresponding to each pixel in the coding block according to the sub-model parameters.
  • The obtaining section 901 is configured to obtain a third image component temporary value according to the second image component reconstruction value, the second image component adjacent reference value, and the third image component adjacent reference value; wherein the third image component temporary value is obtained based on the third image component adjacent reference value corresponding to the matching pixel point, among the adjacent reference pixels of the coding block, of at least one pixel point of the coding block;
  • the determining section 902 is configured to determine a sub-model parameter according to the second image component reconstruction value and the acquired third image component temporary value.
  • The obtaining section 901 is further configured to obtain at least one threshold based on the first image component reconstruction value corresponding to at least one pixel point of the coding block; to group the first image component reconstruction values and the second image component temporary values according to comparison results between the first image component reconstruction values and the at least one threshold, so as to obtain at least two sets of first image component reconstruction values and second image component temporary values; and to determine the model parameters separately according to each of the at least two sets of first image component reconstruction values and second image component temporary values, so as to obtain at least two sets of model parameters.
  • the establishing section 905 is further configured to establish at least two preset models based on the obtained at least two sets of model parameters; wherein the at least two preset models and the at least two sets of models Parameters have a corresponding relationship;
  • The prediction section 903 is further configured to select, from the at least two preset models, a preset model corresponding to each pixel point in the coding block according to the comparison result between the first image component reconstruction value and the at least one threshold; and to obtain the second image component prediction value corresponding to each pixel point in the coding block according to the preset model corresponding to each pixel point in the coding block and the first image component reconstruction value.
  • The “part” may be a part of a circuit, a part of a processor, a part of a program or software, and so on; of course, it may also be a unit, and it may be modular or non-modular.
  • each component in this embodiment may be integrated into one processing unit, or each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or in the form of software functional modules.
  • If the integrated unit is implemented in the form of a software functional module and is sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • Based on such an understanding, the technical solution of this embodiment essentially, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to perform all or part of the steps of the method described in this embodiment.
  • The foregoing storage media include: USB flash drives, removable hard disks, read-only memory (ROM, Read Only Memory), random access memory (RAM, Random Access Memory), magnetic disks, optical disks, and other media that can store program codes.
  • Accordingly, this embodiment provides a computer storage medium that stores a prediction program for a video image component; when the prediction program for the video image component is executed by at least one processor, the steps of the method in the technical solution shown in FIG. 8 are implemented.
  • Based on the composition of the video image component prediction device 90 and the computer storage medium described above, FIG. 12 shows a specific hardware structure of the video image component prediction device 90 provided in an embodiment of the present application, which may include: a network interface 1201, a memory 1202, and a processor 1203; the various components are coupled together through a bus system 1204. It can be understood that the bus system 1204 is configured to implement connection and communication between these components.
  • the bus system 1204 includes a power bus, a control bus, and a status signal bus in addition to the data bus. However, for the sake of clarity, various buses are marked as the bus system 1204 in FIG. 12.
  • the network interface 1201 is used to receive and send signals during the process of sending and receiving information with other external network elements.
  • the memory 1202 is configured to store a computer program capable of running on the processor 1203;
  • the processor 1203 is configured to, when running the computer program, execute:
  • obtaining a first image component reconstruction value, a first image component adjacent reference value, and a second image component adjacent reference value corresponding to a coding block, wherein the first image component reconstruction value represents the reconstruction value of the first image component corresponding to at least one pixel point of the coding block, and the first image component adjacent reference value and the second image component adjacent reference value respectively represent the reference value of the first image component and the reference value of the second image component corresponding to each adjacent pixel point among the adjacent reference pixels of the coding block; determining model parameters according to the acquired first image component reconstruction value, the first image component adjacent reference value, and the second image component adjacent reference value; and obtaining, according to the model parameters, the second image component prediction value corresponding to each pixel point in the coding block.
  • the memory 1202 in the embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
  • the volatile memory may be Random Access Memory (RAM), which is used as an external cache.
  • By way of example and not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchlink dynamic random access memory (SLDRAM), and direct Rambus random access memory (DRRAM).
  • the memory 1202 of the systems and methods described herein is intended to include, but is not limited to, these and any other suitable types of memory.
  • the processor 1203 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 1203 or an instruction in the form of software.
  • The above-mentioned processor 1203 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in combination with the embodiments of the present application may be directly implemented by a hardware decoding processor, or may be performed by using a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a mature storage medium such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, or an electrically erasable programmable memory, a register, and the like.
  • the storage medium is located in the memory 1202, and the processor 1203 reads the information in the memory 1202 and completes the steps of the foregoing method in combination with its hardware.
  • the embodiments described herein may be implemented by hardware, software, firmware, middleware, microcode, or a combination thereof.
  • For hardware implementation, the processing unit may be implemented in one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, or other electronic units for performing the functions described in this application, or a combination thereof.
  • the techniques described herein can be implemented through modules (e.g., procedures, functions, etc.) that perform the functions described herein.
  • Software codes may be stored in a memory and executed by a processor.
  • the memory may be implemented in the processor or external to the processor.
  • the processor 1203 is further configured to execute the steps of the method for predicting a video image component in the technical solution shown in FIG. 8 when the computer program is run.
  • the first image component reconstruction value, the first image component adjacent reference value, and the second image component adjacent reference value corresponding to the coding block are obtained; the first image component reconstruction value represents the reconstruction value of the first image component corresponding to at least one pixel point of the coding block, and the first image component adjacent reference value and the second image component adjacent reference value respectively represent the reference value of the first image component and the reference value of the second image component corresponding to each adjacent pixel point among the adjacent reference pixels of the coding block;
  • the model parameters are determined according to the acquired first image component reconstruction value, first image component adjacent reference value, and second image component adjacent reference value;
  • the second image component prediction value corresponding to each pixel point in the coding block is obtained according to the model parameters; the prediction accuracy of the video image component is thereby effectively improved, the prediction value is brought closer to the original value of the video image component, and the coding rate is saved.
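One common realization of the steps above is a cross-component linear model (as in CCLM-style chroma-from-luma prediction, also seen in the cited family documents on linear-model prediction): the second component of each pixel is predicted as pred_C = α·rec_L + β, with α and β fitted by least squares over the adjacent reference samples. The sketch below is illustrative only; the function names `fit_linear_model` and `predict_block` and the plain least-squares fit are assumptions, not taken from the patent, which may derive the parameters differently.

```python
def fit_linear_model(luma_refs, chroma_refs):
    """Derive model parameters (alpha, beta) by least squares over the
    adjacent reference samples of the coding block (illustrative only)."""
    n = len(luma_refs)
    sum_l = sum(luma_refs)
    sum_c = sum(chroma_refs)
    sum_ll = sum(l * l for l in luma_refs)
    sum_lc = sum(l * c for l, c in zip(luma_refs, chroma_refs))
    denom = n * sum_ll - sum_l * sum_l
    if denom == 0:  # flat reference samples: fall back to the mean value
        return 0.0, sum_c / n
    alpha = (n * sum_lc - sum_l * sum_c) / denom
    beta = (sum_c - alpha * sum_l) / n
    return alpha, beta

def predict_block(luma_recon, alpha, beta):
    """Predict the second image component for every pixel point of the
    coding block from the first-component reconstruction values."""
    return [[alpha * l + beta for l in row] for row in luma_recon]
```

For example, with reference samples `[10, 20, 30, 40]` (first component) and `[15, 25, 35, 45]` (second component), the fit yields α = 1.0 and β = 5.0, which are then applied to every reconstructed first-component value inside the block.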

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Disclosed are a video image component prediction method and apparatus, and a computer storage medium. The method comprises: acquiring a first image component reconstruction value, a first image component adjacent reference value, and a second image component adjacent reference value corresponding to a coding block, the first image component reconstruction value representing a reconstruction value, corresponding to a first image component, of at least one pixel point of the coding block, and the first image component adjacent reference value and the second image component adjacent reference value respectively representing reference values, corresponding to the first image component and a second image component, of each adjacent pixel point among the adjacent reference pixels of the coding block; determining model parameters according to the acquired first image component reconstruction value, first image component adjacent reference value, and second image component adjacent reference value; and acquiring, according to the model parameters, a second image component prediction value corresponding to each pixel point in the coding block.
PCT/CN2018/107109 2018-09-21 2018-09-21 Method and apparatus for predicting video image component, and computer storage medium WO2020056767A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880094931.9A CN112313950B (zh) 2018-09-21 2018-09-21 Method and apparatus for predicting video image component, and computer storage medium
PCT/CN2018/107109 WO2020056767A1 (fr) 2018-09-21 2018-09-21 Method and apparatus for predicting video image component, and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/107109 WO2020056767A1 (fr) 2018-09-21 2018-09-21 Method and apparatus for predicting video image component, and computer storage medium

Publications (1)

Publication Number Publication Date
WO2020056767A1 true WO2020056767A1 (fr) 2020-03-26

Family

ID=69888083

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/107109 WO2020056767A1 (fr) 2018-09-21 2018-09-21 Method and apparatus for predicting video image component, and computer storage medium

Country Status (2)

Country Link
CN (1) CN112313950B (fr)
WO (1) WO2020056767A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784830A (zh) * 2021-01-28 2021-05-11 Lenovo (Beijing) Co., Ltd. Character recognition method and apparatus

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103379321A (zh) * 2012-04-16 2013-10-30 Huawei Technologies Co., Ltd. Method and apparatus for predicting video image component
CN105306944A (zh) * 2015-11-30 2016-02-03 Harbin Institute of Technology Chroma component prediction method in hybrid video coding standards
CN107079166A (zh) * 2014-10-28 2017-08-18 MediaTek Singapore Pte. Ltd. Method of guided cross-component prediction for video coding
CN107211121A (zh) * 2015-01-22 2017-09-26 MediaTek Singapore Pte. Ltd. Method of video coding for chroma components

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2495941B * 2011-10-25 2015-07-08 Canon Kk Method and apparatus for processing components of an image
CN103533374B (zh) * 2012-07-06 2018-02-16 LG Electronics (China) R&D Center Co., Ltd. Video encoding and decoding method and apparatus
JP6005572B2 (ja) * 2013-03-28 2016-10-12 KDDI Corporation Video encoding device, video decoding device, video encoding method, video decoding method, and program
JP6352141B2 (ja) * 2014-09-30 2018-07-04 KDDI Corporation Video encoding device, video decoding device, video compression and transmission system, video encoding method, video decoding method, and program
US10652575B2 * 2016-09-15 2020-05-12 Qualcomm Incorporated Linear model chroma intra prediction for video coding
CN107580222B (zh) * 2017-08-01 2020-02-14 Beijing Jiaotong University Image or video coding method based on linear model prediction

Also Published As

Publication number Publication date
CN112313950A (zh) 2021-02-02
CN112313950B (zh) 2023-06-02

Similar Documents

Publication Publication Date Title
WO2022104498A1 (fr) Intra-frame prediction method, encoder, decoder and computer storage medium
WO2020029187A1 (fr) Video image component prediction method and device, and computer storage medium
WO2020132908A1 (fr) Prediction decoding method and apparatus, and computer storage medium
WO2022227622A1 (fr) Weight-configurable joint inter-frame and intra-frame prediction encoding and decoding methods and devices
JP2023113908A (ja) Prediction decoding method, apparatus and computer storage medium
WO2020056767A1 (fr) Method and apparatus for predicting video image component, and computer storage medium
CN116472707A (zh) Image prediction method, encoder, decoder and computer storage medium
AU2019357929A1 (en) Video image component prediction method and apparatus, and computer storage medium
CN113766233B (zh) Image prediction method, encoder, decoder and storage medium
TW202139707A (zh) Inter-frame prediction method, encoder, decoder and storage medium
JP2022521366A (ja) Intra prediction method, apparatus and computer storage medium
CN113412621A (zh) Image component prediction method, encoder, decoder and computer storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18933812

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18933812

Country of ref document: EP

Kind code of ref document: A1