WO2020192085A1 - Image prediction method, encoder, decoder, and storage medium - Google Patents
Image prediction method, encoder, decoder, and storage medium
- Publication number
- WO2020192085A1 (PCT/CN2019/110834, CN2019110834W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image component
- predicted
- current block
- prediction
- value
- Prior art date
Classifications
- All classifications fall under H—Electricity; H04—Electric communication technique; H04N—Pictorial communication, e.g. television; H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals:
- H04N19/186 — adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
- H04N19/105 — selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/117 — filters, e.g. for pre-processing or post-processing
- H04N19/124 — quantisation
- H04N19/176 — adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/42 — implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/50 — predictive coding
- H04N19/593 — predictive coding involving spatial prediction techniques
- H04N19/80 — details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
- H04N19/82 — filtering operations involving filtering within a prediction loop
Definitions
- the embodiments of the present application relate to the technical field of video coding and decoding, and in particular, to an image prediction method, an encoder, a decoder, and a storage medium.
- in the latest video coding standard, H.266/Versatile Video Coding (VVC), cross-component prediction is allowed; among these techniques, Cross-Component Linear Model prediction (CCLM) is a typical one.
- with cross-component prediction, one image component can be used to predict another image component (or its residual), such as predicting a chrominance component from the luminance component, predicting the luminance component from a chrominance component, or predicting one chrominance component from the other chrominance component.
- the embodiments of the present application provide an image prediction method, an encoder, a decoder, and a storage medium, which balance the statistical characteristics of the image components after cross-component prediction, thereby not only improving the prediction efficiency but also improving the coding and decoding efficiency of video images.
- an embodiment of the present application provides an image prediction method applied to an encoder or a decoder, and the method includes: obtaining, through a prediction model, the initial prediction value of the image component to be predicted of the current block in the image; and performing filtering processing on the initial prediction value to obtain the target prediction value of the image component to be predicted of the current block.
- an encoder which includes a first prediction unit and a first processing unit, wherein:
- the first prediction unit is configured to obtain the initial prediction value of the image component to be predicted of the current block in the image through the prediction model;
- the first processing unit is configured to perform filtering processing on the initial predicted value to obtain the target predicted value of the image component to be predicted of the current block.
- an embodiment of the present application provides an encoder.
- the encoder includes a first memory and a first processor, where:
- the first memory is configured to store a computer program that can run on the first processor
- the first processor is configured to execute the method described in the first aspect when running the computer program.
- an embodiment of the present application provides a decoder, which includes a second prediction unit and a second processing unit, wherein:
- the second prediction unit is configured to obtain the initial prediction value of the image component to be predicted of the current block in the image through the prediction model;
- the second processing unit is configured to perform filtering processing on the initial predicted value to obtain the target predicted value of the image component to be predicted of the current block.
- an embodiment of the present application provides a decoder, which includes a second memory and a second processor, wherein:
- the second memory is used to store a computer program that can run on the second processor
- the second processor is configured to execute the method described in the first aspect when running the computer program.
- an embodiment of the present application provides a computer storage medium that stores an image prediction program; when the image prediction program is executed by a first processor or a second processor, the method described in the first aspect is implemented.
- the embodiments of the present application provide an image prediction method, an encoder, a decoder, and a storage medium.
- first, the initial prediction value of the image component to be predicted of the current block in the image is obtained through a prediction model; then the initial prediction value is filtered to obtain the target prediction value of the image component to be predicted of the current block. In this way, after at least one image component of the current block is predicted, filtering processing is further performed on that image component, which balances the statistical characteristics of the image components after cross-component prediction.
- this not only improves the prediction efficiency; because the target prediction value obtained is closer to the true value, the prediction residual of the image component is also smaller, so the bit rate transmitted during encoding and decoding is lower, and the coding and decoding efficiency of video images is improved as well.
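The two-step idea (predict with a linear model, then filter the prediction) can be sketched as follows. This is a minimal illustration, not the application's implementation: the function names (`predict_initial`, `filter_prediction`), the offset value, and the 3x3 mean filter are all assumptions standing in for the filtering options listed later.

```python
import numpy as np

def predict_initial(rec_luma_ds, alpha, beta):
    """Step 1: initial prediction of the component to be predicted (here chroma)
    from the reconstructed, resolution-adjusted reference component (here luma)
    via a linear prediction model."""
    return alpha * rec_luma_ds + beta

def filter_prediction(initial_pred, offset=0.0):
    """Step 2: post-process the initial prediction to obtain the target
    prediction. Here a value correction (offset) followed by a 3x3 mean filter
    stands in for the low-pass / adaptive filtering options named in the text."""
    corrected = initial_pred + offset
    padded = np.pad(corrected, 1, mode="edge")
    target = np.empty_like(corrected, dtype=float)
    for i in range(corrected.shape[0]):
        for j in range(corrected.shape[1]):
            target[i, j] = padded[i:i + 3, j:j + 3].mean()
    return target

rec_luma_ds = np.random.randint(0, 256, (8, 8)).astype(float)
initial = predict_initial(rec_luma_ds, alpha=0.6, beta=12.0)
target = filter_prediction(initial, offset=2.0)
```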
- FIG. 1 is a schematic diagram of the composition structure of a traditional cross-component prediction architecture provided by related technical solutions
- FIG. 2 is a schematic diagram of the composition of a video encoding system provided by an embodiment of the application.
- FIG. 3 is a schematic diagram of the composition of a video decoding system provided by an embodiment of the application.
- FIG. 4 is a schematic flowchart of an image prediction method provided by an embodiment of the application.
- FIG. 5 is a schematic diagram of the composition structure of an improved cross-component prediction architecture provided by an embodiment of the application.
- FIG. 6 is a schematic diagram of the composition structure of an encoder provided by an embodiment of the application.
- FIG. 7 is a schematic diagram of a specific hardware structure of an encoder provided by an embodiment of the application.
- FIG. 8 is a schematic diagram of the composition structure of a decoder provided by an embodiment of the application.
- FIG. 9 is a schematic diagram of a specific hardware structure of a decoder provided by an embodiment of the application.
- the first image component, the second image component, and the third image component are generally used to characterize the coding block; among them, the three image components are a luminance component, a blue chrominance component, and a red chrominance component.
- the luminance component is usually represented by the symbol Y
- the blue chrominance component is usually represented by the symbol Cb or U
- the red chrominance component is usually represented by the symbol Cr or V; in this way, a video image can be represented in the YCbCr format or the YUV format.
- the first image component may be a luminance component
- the second image component may be a blue chrominance component
- the third image component may be a red chrominance component
- the CCLM cross-component prediction technology is proposed in H.266/VVC.
- the cross-component prediction technology based on CCLM can realize not only the prediction from the luminance component to a chrominance component, that is, from the first image component to the second image component or from the first image component to the third image component, but also the prediction from a chrominance component to the luminance component, that is, from the second image component to the first image component or from the third image component to the first image component, and even the prediction between the chrominance components, that is, from the second image component to the third image component or from the third image component to the second image component.
- the following will take the prediction from the first image component to the second image component as an example for description, but the technical solutions of the embodiments of the present application can also be applied to the prediction of other image components.
- FIG. 1 shows a schematic diagram of the composition structure of a traditional cross-component prediction architecture provided by related technical solutions.
- in FIG. 1, the first image component (for example, the Y component) is used to predict the second image component (for example, the U component); assuming the video image adopts the YUV 4:2:0 format, the Y component and the U component have different resolutions. The method of using the Y component to predict the third image component (for example, the V component) is similar.
- the traditional cross-component prediction architecture 10 may include a Y component coding block 110, a resolution adjustment unit 120, a Y 1 component coding block 130, a U component coding block 140, a prediction model 150, and a cross component prediction unit 160.
- the Y component of the video image is represented by a Y component coding block 110 of size 2N×2N; the larger bold box highlights the Y component coding block 110, and the surrounding gray solid circles indicate the neighboring reference values Y(n) of the Y component coding block 110. The U component of the video image is represented by a U component coding block 140 of size N×N; the larger bold box highlights the U component coding block 140, and the surrounding gray solid circles indicate the neighboring reference values C(n) of the U component coding block 140.
- since the Y component and the U component have different resolutions, the resolution adjustment unit 120 needs to adjust the resolution of the Y component to obtain a Y1 component coding block 130 of size N×N; for the Y1 component coding block 130, the larger bold box highlights the Y1 component coding block 130, and the surrounding gray solid circles indicate its neighboring reference values Y1(n). The prediction model 150 can then be constructed from the neighboring reference values Y1(n) of the Y1 component coding block 130 and the neighboring reference values C(n) of the U component coding block 140; according to the reconstructed Y component pixel values of the Y1 component coding block 130 and the prediction model 150, cross-component prediction can be performed by the cross-component prediction unit 160 to obtain the U component prediction value.
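A rough sketch of this traditional flow is given below, under assumptions of my own: 2x2 averaging stands in for the resolution adjustment unit 120, and np.polyfit stands in for the model-parameter derivation that the application details later; none of these choices come from the application itself.

```python
import numpy as np

def traditional_cclm(y_block_2n, y1_neighbors, u_neighbors):
    """FIG. 1 flow: adjust the Y resolution (2N x 2N -> N x N), build the
    prediction model 150 from the neighbouring reference values Y1(n) and C(n),
    then let the cross-component prediction unit 160 produce the U prediction."""
    h, w = y_block_2n.shape
    # resolution adjustment unit 120: 2x2 averaging as a simple down-sampler
    y1_block = y_block_2n.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    # prediction model 150: linear model fitted on the neighbouring references
    alpha, beta = np.polyfit(y1_neighbors, u_neighbors, 1)
    # cross-component prediction unit 160
    return alpha * y1_block + beta

y_block = np.random.randint(0, 256, (16, 16)).astype(float)
y1_n = np.random.randint(0, 256, 16).astype(float)
u_n = 0.5 * y1_n + 20 + np.random.randn(16)
u_pred = traditional_cclm(y_block, y1_n, u_n)
```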
- however, in this architecture the image component prediction is not considered comprehensively; for example, the difference in statistical characteristics between the image components is not taken into account, which makes the prediction efficiency low.
- the embodiment of the present application provides an image prediction method. First, the initial prediction value of the image component to be predicted of the current block in the image is obtained through the prediction model; then the initial prediction value is filtered to obtain the target prediction value of the image component to be predicted of the current block. In this way, after at least one image component of the current block is predicted, filtering processing is further performed on that image component, which balances the statistical characteristics of the image components after cross-component prediction; this not only improves the prediction efficiency but also improves the coding and decoding efficiency of video images.
- the video encoding system 20 includes a transform and quantization unit 201, an intra-frame estimation unit 202, an intra-frame prediction unit 203, a motion compensation unit 204, a motion estimation unit 205, an encoding unit 209, a decoded image buffer unit 210, and other units.
- the encoding unit 209 can implement header information encoding and context-based adaptive binary arithmetic coding (Context-based Adaptive Binary Arithmetic Coding, CABAC).
- a coding block can be obtained by dividing a coding tree unit (Coding Tree Unit, CTU); the residual pixel information obtained after intra-frame or inter-frame prediction is then processed by the transform and quantization unit 201, which transforms the coding block, including transforming the residual information from the pixel domain to the transform domain, and quantizes the resulting transform coefficients to further reduce the bit rate;
- the intra-frame estimation unit 202 and the intra-frame prediction unit 203 are used to perform intra prediction on the coding block; specifically, the intra-frame estimation unit 202 and the intra-frame prediction unit 203 are used to determine the intra prediction mode to be used to encode the coding block;
- the motion compensation unit 204 and the motion estimation unit 205 are used to perform inter-frame prediction coding of the received coding block with respect to one or more blocks in one or more reference frames, so as to provide temporal prediction information;
- the motion estimation performed by the motion estimation unit 205 is the process of generating a motion vector; the motion vector can estimate the motion of the coding block, and motion compensation is then performed by the motion compensation unit 204 based on the motion vector determined by the motion estimation unit 205;
- the context content for entropy coding can be based on adjacent coding blocks and can be used to encode information indicating the determined intra-frame prediction mode, so as to output the code stream of the video signal; the decoded image buffer unit 210 is used to store reconstructed video blocks for prediction reference. As the encoding of the video image progresses, new reconstructed video blocks are continuously generated, and these reconstructed video blocks are all stored in the decoded image buffer unit 210.
- the video decoding system 30 includes a decoding unit 301, an inverse transform and inverse quantization unit 302, an intra-frame prediction unit 303, a motion compensation unit 304, a filtering unit 305, and other units.
- after the code stream of the video signal is output by the encoder, the code stream is input into the video decoding system 30 and first passes through the decoding unit 301 to obtain the decoded transform coefficients;
- the transform coefficients are processed by the inverse transform and inverse quantization unit 302 to generate a residual block in the pixel domain;
- the intra prediction unit 303 can be used to generate the prediction data of the current video block to be decoded based on the determined intra prediction mode and data from previously decoded blocks of the current frame or picture;
- the motion compensation unit 304 determines the prediction information for the video block to be decoded by analyzing the motion vector and other associated syntax elements, and uses the prediction information to generate the predictive block of the video block being decoded; the residual block from the inverse transform and inverse quantization unit 302 and the corresponding predictive block generated by the intra prediction unit 303 or the motion compensation unit 304 are summed to form a decoded video block;
- the decoded video block is passed through the filtering unit 305 in order to remove blocking artifacts.
- the embodiment of this application is mainly applied to the intra prediction unit 203 shown in FIG. 2 and the intra prediction unit 303 shown in FIG. 3; that is, the embodiment of this application can be applied to both a video encoding system and a video decoding system, which is not specifically limited in the embodiment of the present application.
- FIG. 4 shows a schematic flowchart of an image prediction method provided by an embodiment of the present application.
- the method may include:
- S401: Obtain, through a prediction model, the initial prediction value of the image component to be predicted of the current block in the image;
- S402: Perform filtering processing on the initial prediction value to obtain the target prediction value of the image component to be predicted of the current block.
- each image block currently to be encoded can be called an encoding block.
- each coding block may include a first image component, a second image component, and a third image component; the current block is the coding block in the video image whose first image component, second image component, or third image component is currently to be predicted.
- the image prediction method in the embodiments of the present application can be applied to a video encoding system, to a video decoding system, or even to both, which is not specifically limited in the embodiments of the present application.
- the initial prediction value of the image component to be predicted of the current block in the image is first obtained through the prediction model; then the initial prediction value is filtered to obtain the target prediction value of the image component to be predicted of the current block. In this way, after at least one image component is predicted, filtering is further performed on that image component, which balances the statistical characteristics of the image components after cross-component prediction, thereby not only improving the prediction efficiency but also improving the coding and decoding efficiency of video images.
- the method may further include:
- performing characteristic statistics on at least one image component of the current block; wherein the at least one image component includes an image component to be predicted and/or an image component to be referenced, and the image component to be predicted is different from the image component to be referenced;
- according to the result of the characteristic statistics, obtaining the reference value of the image component to be predicted of the current block and/or the reference value of the image component to be referenced of the current block; wherein the image component to be predicted is the component that is predicted when the prediction model is constructed, and the image component to be referenced is the component used for prediction when the prediction model is constructed.
- At least one image component of the current block may be an image component to be predicted, an image component to be referenced, or even an image component to be predicted and an image component to be referenced.
- assuming that the prediction from the first image component to the second image component is realized by the prediction model, the image component to be predicted is the second image component and the image component to be referenced is the first image component; or, assuming that the prediction from the first image component to the third image component is realized by the prediction model, the image component to be predicted is the third image component, and the image component to be referenced is still the first image component.
- the reference value of the image component to be predicted of the current block and/or the reference value of the image component to be referenced in the current block can be obtained.
- the initial prediction value of the image component to be predicted in the current block may be filtered according to the reference value of the image component to be predicted in the current block and/or the reference value of the image component to be referenced in the current block.
- the processing of the initial prediction value corresponding to the at least one image component according to the reference value of the at least one image component may include:
- performing filtering processing on the initial prediction value using a preset processing mode, wherein the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing and dequantization processing;
- obtaining the target prediction value according to the result of the processing.
- that is, according to the reference value of the image component to be predicted of the current block and/or the reference value of the image component to be referenced of the current block, the initial prediction value is filtered using the preset processing mode.
- specifically, filtering processing may be used to process the initial prediction value, or grouping processing, value correction processing, quantization processing, or inverse quantization processing (also called dequantization processing) may be used to process the initial prediction value, which is not specifically limited in the embodiment of the present application.
- assume that the luminance component is used to predict the chrominance component.
- if the preset processing mode adopts value correction processing: since the luminance component and the chrominance component have different statistical characteristics, a deviation factor can be obtained based on the difference in the statistical characteristics of the two image components; the deviation factor is then used to correct the initial prediction value (for example, the deviation factor is added to the initial prediction value) to balance the statistical characteristics of the image components after cross-component prediction, so as to obtain the target prediction value of the chrominance component; at this time, the target prediction value of the chrominance component is closer to the true value of the chrominance component.
- if the preset processing mode adopts filtering processing: since the luminance component and the chrominance component have different statistical characteristics, the initial prediction value can be filtered according to the difference in the statistical characteristics of the two image components to balance the statistical characteristics of the image components after cross-component prediction, so as to obtain the target prediction value of the chrominance component; at this time, the target prediction value of the chrominance component is closer to the true value of the chrominance component.
- if the preset processing mode adopts grouping processing: the initial prediction values can be grouped to balance the statistical characteristics of the image components after cross-component prediction, so as to obtain the target prediction value of the chrominance component; at this time, the target prediction value of the chrominance component is closer to the true value of the chrominance component.
- in addition, the process of determining the initial prediction value involves the quantization and dequantization of the luminance component and the chrominance component, and the difference in the statistical characteristics of the two image components may lead to differences between their quantization and dequantization processing. If the preset processing mode adopts quantization processing, the initial prediction value can be quantized to balance the statistical characteristics of the image components after cross-component prediction; if the preset processing mode adopts dequantization processing, the initial prediction value can be dequantized to balance the statistical characteristics of the image components; in either case the target prediction value of the chrominance component is obtained, and it is closer to the true value of the chrominance component. This improves the accuracy of the prediction value and also improves the prediction efficiency.
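One possible form of the value correction step is sketched below. The choice of deviation factor (the mean mismatch between the true neighbouring chroma reference values and the values the linear model gives on the neighbouring luma references) is an assumption of this sketch, not a rule stated in the application.

```python
import numpy as np

def value_correction(initial_pred, neigh_luma, neigh_chroma, alpha, beta):
    """Estimate a deviation factor from the mismatch between the neighbouring
    chroma reference values and the chroma values the linear model would give
    for the neighbouring luma reference values, then add it to the initial
    prediction to balance the statistical characteristics of the components."""
    model_on_neighbors = alpha * neigh_luma + beta
    deviation = float(np.mean(neigh_chroma - model_on_neighbors))
    return initial_pred + deviation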
- the method may further include:
- calculating, by using the initial prediction value, the initial prediction residual of the image component to be predicted of the current block;
- performing filtering processing on the initial prediction residual using a preset processing mode, wherein the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing and dequantization processing;
- obtaining the target prediction residual according to the result of the processing.
- the obtaining the target prediction value of the image component to be predicted of the current block may include:
- the target prediction value of the image component to be predicted of the current block is calculated.
- the prediction residual is obtained from the difference between the predicted value of an image component and the true value of that image component; in order to improve the coding and decoding efficiency of the video image, it is necessary to ensure that the prediction residual transmitted for the current block is as small as possible.
- on the one hand, the initial prediction value can be filtered according to the preset processing mode to obtain the target prediction value of the image component to be predicted; because the target prediction value of the image component to be predicted is as close as possible to the true value of the image component to be predicted, the prediction residual between the two is as small as possible.
- on the other hand, after the prediction model produces the initial prediction value of the image component to be predicted, the initial prediction residual of the image component to be predicted can also be determined based on the difference between the initial prediction value and the true value of the image component to be predicted; the initial prediction residual is then filtered according to the preset processing mode to obtain the target prediction residual of the image component to be predicted, and the target prediction value of the image component to be predicted can be obtained according to the target prediction residual. Since the target prediction residual is as small as possible, the target prediction value of the image component to be predicted is as close as possible to the true value of the image component to be predicted.
- in other words, the embodiment of the present application can be applied not only to filtering the initial prediction value of the image component to be predicted of the current block, but also to filtering the initial prediction residual of the image component to be predicted of the current block; after filtering, the statistical characteristics of the image components after cross-component prediction can be balanced, which not only improves the prediction efficiency; because the target prediction value obtained is closer to the true value, the prediction residual of the image component to be predicted is smaller, so the bit rate transmitted is lower, and the encoding and decoding efficiency of the video image is also improved.
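The residual-side variant could look roughly like the following. The specific 3-tap smoothing kernel and the function name `residual_path` are illustrative assumptions; the application only requires that the initial prediction residual be processed by one of the preset processing modes.

```python
import numpy as np

def residual_path(initial_pred, true_value):
    """Derive the initial prediction residual, filter it to get the target
    prediction residual, then obtain the target prediction value from the
    target residual."""
    initial_residual = true_value - initial_pred
    kernel = np.array([0.25, 0.5, 0.25])
    # filter each row of the residual block (illustrative choice of filter)
    target_residual = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, initial_residual)
    target_pred = true_value - target_residual
    return target_residual, target_pred
```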
- the method may further include:
- determining the reference value of the image component to be predicted of the current block, where the reference value of the image component to be predicted of the current block is the value of the image component to be predicted of adjacent pixels of the current block;
- determining the reference value of the image component to be referenced of the current block, where the image component to be referenced is different from the image component to be predicted, and the reference value of the image component to be referenced of the current block is the value of the image component to be referenced of adjacent pixels of the current block;
- calculating the model parameters of the prediction model according to the reference value of the image component to be predicted of the current block and the reference value of the image component to be referenced of the current block;
- constructing the prediction model according to the calculated model parameters, wherein the prediction model is used to perform cross-component prediction processing on the image component to be predicted of the current block according to the image component to be referenced of the current block.
- the prediction model in the embodiment of the present application may be a linear model, such as the CCLM cross-component prediction technology; the prediction model may also be a non-linear model, such as the multi-model CCLM (Multiple Model CCLM, MMLM) cross-component prediction technology, which is composed of multiple linear models.
- the embodiment of the present application will take a linear prediction model as an example in the following description, but the image prediction method of the embodiment of the present application can also be applied to a non-linear model.
- the model parameters include a first model parameter (denoted by α) and a second model parameter (denoted by β).
- there are many ways to calculate α and β: a preset factor calculation model constructed by the least squares method, a preset factor calculation model constructed from the maximum and minimum values, or even other ways; the preset factor calculation model is not specifically limited in the embodiment of this application.
- taking the preset factor calculation model constructed by the least squares method as an example, the model parameters are calculated from the reference value of the image component to be predicted of the current block and the reference value of the image component to be referenced of the current block; the reference value of the image component to be referenced of the current block can be the value of the image component to be referenced of the neighboring pixels of the current block (such as the neighboring reference values of the first image component), and the reference value of the image component to be predicted of the current block can be the value of the image component to be predicted of the neighboring pixels of the current block (such as the neighboring reference values of the second image component). The model parameters of the prediction model are derived by minimizing the regression error between the neighboring reference values of the first image component and the neighboring reference values of the second image component, as shown in formula (1):
- α = ( N·Σ(L(n)·C(n)) − ΣL(n)·ΣC(n) ) / ( N·Σ(L(n)·L(n)) − (ΣL(n))² ),  β = ( ΣC(n) − α·ΣL(n) ) / N    (1)
- where N is the number of neighboring reference samples;
- L(n) represents the adjacent reference value of the first image component corresponding to the left side and the upper side of the current block after down-sampling
- C(n) represents the adjacent reference values of the second image component corresponding to the left side and the upper side of the current block;
- i, j represent the position coordinates of the pixel in the current block
- i represents the horizontal direction
- j represents the vertical direction
- Pred_C[i,j] represents the prediction value of the second image component for the pixel with position coordinates [i,j] in the current block;
- Rec_L[i,j] represents the (down-sampled) reconstructed value of the first image component for the pixel with position coordinates [i,j] in the same current block; the prediction model is then as shown in formula (2):
- Pred_C[i,j] = α·Rec_L[i,j] + β    (2)
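A small sketch of the least-squares derivation of formula (1) and the prediction of formula (2) is given below; the function names and the fallback for a flat neighbourhood are assumptions of this sketch.

```python
import numpy as np

def derive_params_least_squares(L, C):
    """Formula (1): least-squares fit of alpha and beta from the down-sampled
    neighbouring luma references L(n) and the chroma references C(n)."""
    L = np.asarray(L, dtype=float)
    C = np.asarray(C, dtype=float)
    N = L.size
    denom = N * np.sum(L * L) - np.sum(L) ** 2
    if denom == 0:                       # flat neighbourhood: fall back
        return 0.0, float(np.mean(C))
    alpha = (N * np.sum(L * C) - np.sum(L) * np.sum(C)) / denom
    beta = (np.sum(C) - alpha * np.sum(L)) / N
    return alpha, beta

def predict_formula_2(rec_luma_ds, alpha, beta):
    """Formula (2): Pred_C[i, j] = alpha * Rec_L[i, j] + beta."""
    return alpha * rec_luma_ds + beta
```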
- taking the preset factor calculation model constructed from the maximum and minimum values as an example, a simplified derivation of the model parameters is provided: the largest and the smallest adjacent reference values of the first image component are searched for, and the model parameters of the prediction model are derived from the corresponding adjacent reference values of the second image component according to the principle that "two points determine one line", as shown in formula (3):
- α = (C_max − C_min) / (L_max − L_min),  β = C_min − α·L_min    (3)
- L_max and L_min represent the maximum and minimum values found in the adjacent reference values of the first image component corresponding to the left side and the upper side of the current block after down-sampling;
- C_max and C_min represent the adjacent reference values of the second image component at the reference pixel positions corresponding to L_max and L_min, respectively.
- in this way, the first model parameter α and the second model parameter β can also be obtained; based on α and β, assuming that the second image component is predicted from the first image component, the constructed prediction model is still as shown in the above formula (2).
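The "two points determine one line" derivation of formula (3) can be sketched as follows; the degenerate-case fallback is an assumption added for robustness, not part of the application.

```python
import numpy as np

def derive_params_min_max(L, C):
    """Formula (3): use the positions of the largest and smallest neighbouring
    luma reference values and the co-located chroma references."""
    L = np.asarray(L, dtype=float)
    C = np.asarray(C, dtype=float)
    i_max, i_min = int(np.argmax(L)), int(np.argmin(L))
    if L[i_max] == L[i_min]:             # degenerate case: flat luma references
        return 0.0, float(np.mean(C))
    alpha = (C[i_max] - C[i_min]) / (L[i_max] - L[i_min])
    beta = C[i_min] - alpha * L[i_min]
    return alpha, beta
```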
- after the prediction model is constructed, image components can be predicted according to the prediction model. For example, according to the prediction model shown in formula (2), the first image component can be used to predict the second image component, such as using the luminance component to predict a chrominance component, thereby obtaining the initial prediction value of the chrominance component; subsequently, according to the reference value of the luminance component and/or the reference value of the chrominance component, the initial prediction value can be filtered using the preset processing mode to obtain the target prediction value of the chrominance component.
- the second image component can also be used to predict the first image component, for example, using a chrominance component to predict the luminance component, thereby obtaining the initial prediction value of the luminance component; then, according to the reference value of the luminance component and/or the reference value of the chrominance component, the initial prediction value is filtered using the preset processing mode to obtain the target prediction value of the luminance component.
- the second image component can even be used to predict the third image component, for example, using the blue chrominance component to predict the red chrominance component; then, according to the reference value of the blue chrominance component and/or the reference value of the red chrominance component, the initial prediction value is filtered using the preset processing mode to obtain the target prediction value of the red chrominance component. In this way, the purpose of improving the prediction efficiency can be achieved.
- in some embodiments, the resolution of each image component is not the same.
- in order to facilitate prediction, the resolution of an image component may also need to be adjusted (by up-sampling or down-sampling the image component) so as to reach the target resolution of the image component to be predicted.
- the method may further include:
- when the resolution of the image component to be predicted of the current block is different from the resolution of the image component to be referenced of the current block, adjusting the resolution of the image component to be referenced; wherein the resolution adjustment includes up-sampling adjustment or down-sampling adjustment;
- updating, based on the adjusted resolution of the image component to be referenced, the reference value of the image component to be referenced of the current block to obtain the first reference value of the image component to be referenced of the current block; wherein the adjusted resolution of the image component to be referenced is the same as the resolution of the image component to be predicted.
- the method may further include:
- when the resolution of the image component to be predicted of the current block is different from the resolution of the image component to be referenced of the current block, adjusting the reference value of the image component to be referenced of the current block to obtain the first reference value of the image component to be referenced of the current block, wherein the adjustment processing includes one of the following: down-sampling filtering, up-sampling filtering, cascaded filtering of down-sampling filtering and low-pass filtering, cascaded filtering of up-sampling filtering and low-pass filtering.
- that is, when the resolution of the image component to be predicted of the current block is different from the resolution of the image component to be referenced of the current block, the resolution of the image component to be referenced can be adjusted so that the adjusted resolution of the image component to be referenced is the same as the resolution of the image component to be predicted; the resolution adjustment here includes up-sampling adjustment or down-sampling adjustment. According to the adjusted resolution of the image component to be referenced, the reference value of the image component to be referenced is updated to obtain the first reference value of the image component to be referenced of the current block.
- alternatively, the reference value of the image component to be referenced of the current block can be adjusted directly to obtain the first reference value of the image component to be referenced of the current block; the adjustment processing here includes one of the following: down-sampling filtering, up-sampling filtering, cascaded filtering of down-sampling filtering and low-pass filtering, cascaded filtering of up-sampling filtering and low-pass filtering.
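A minimal sketch of the two adjustment directions follows; the 2x2 averaging and nearest-neighbour repetition are simple stand-ins for the down-sampling and up-sampling filters (or their cascades with low-pass filtering), not the filters actually specified by VVC or the application.

```python
import numpy as np

def downsample_reference(luma_2n):
    """Bring a 2N x 2N luma block down to the N x N chroma resolution by 2x2
    averaging (stand-in for down-sampling filtering)."""
    h, w = luma_2n.shape
    return luma_2n.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample_reference(chroma_n):
    """Bring an N x N chroma block up to 2N x 2N by nearest-neighbour
    repetition (stand-in for up-sampling filtering)."""
    return np.repeat(np.repeat(chroma_n, 2, axis=0), 2, axis=1)
```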
- the calculation of the model parameters of the prediction model according to the reference value of the image component to be predicted of the current block and the reference value of the image component to be referenced of the current block may include:
- the model parameter of the prediction model is calculated according to the reference value of the image component to be predicted of the current block and the first reference value of the image component to be referenced of the current block.
- in other words, after the resolution adjustment, the model parameters of the prediction model are calculated according to the reference value of the image component to be predicted of the current block and the first reference value of the image component to be referenced of the current block.
- for example, assume that the luminance component is used to predict the chrominance component, that is, the image component to be referenced is the luminance component and the image component to be predicted is the chrominance component; after the target resolution of the chrominance component is obtained, since the resolution of the luminance component does not meet the target resolution, the resolution of the luminance component needs to be adjusted, for example by down-sampling the luminance component, so that the adjusted resolution of the luminance component meets the target resolution.
- conversely, if the chrominance component is used to predict the luminance component, after the target resolution of the luminance component is obtained, since the resolution of the chrominance component does not meet the target resolution, the resolution of the chrominance component needs to be adjusted, for example by up-sampling the chrominance component, so that the adjusted resolution of the chrominance component meets the target resolution.
- similarly, if the blue chrominance component is used to predict the red chrominance component, the resolution of the blue chrominance component is handled in the same way after the target resolution of the red chrominance component is obtained.
- the initial prediction value of the image component to be predicted in the current block may be filtered according to the reference value of the image component to be predicted in the current block.
- the filtering process on the initial predicted value may include:
- performing filtering processing on the initial prediction value according to the reference value of the image component to be predicted of the current block to obtain the target prediction value; wherein the reference value of the image component to be predicted of the current block is obtained by performing characteristic statistics on the image component to be predicted of the image or on the image component to be predicted of the current block.
- the filtering the initial prediction value according to the reference value of the image component to be predicted of the current block may include:
- specifically, a preset processing mode is used to perform filtering processing on the initial prediction value; wherein the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, inverse quantization processing, low-pass filtering processing and adaptive filtering processing.
- optionally, performing filtering processing on the initial prediction residual may include:
- using a preset processing mode to perform filtering processing on the initial prediction residual; wherein the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, inverse quantization processing, low-pass filtering processing and adaptive filtering processing.
- the preset processing mode may be filtering processing, grouping processing, value correction processing, quantization processing, inverse quantization processing, low-pass filtering processing or adaptive filtering processing, etc.
- the reference value of the to-be-predicted image component of the current block may be obtained by performing characteristic statistics on the to-be-predicted image component of the image or the to-be-predicted image component of the current block.
- it should be noted that the characteristic statistics here are not limited to the image component to be predicted of the current block, but can also be extended to the image component to be predicted of the image to which the current block belongs.
- the initial predicted value can be filtered using a preset processing mode according to the reference value of the image component to be predicted of the current block.
- in addition, the initial prediction value can also be used to calculate the initial prediction residual of the image component to be predicted of the current block; then, according to the reference value of the image component to be predicted of the current block, the initial prediction residual is filtered using the preset processing mode to obtain the target prediction residual, and the target prediction value can then be obtained according to the target prediction residual.
- the initial prediction value of the image component to be predicted in the current block may also be filtered according to the reference value of the image component to be predicted in the current block and the reference value of the image component to be referenced in the current block.
- the method may further include:
- the model parameters of the prediction model are calculated according to the reference value of the image component to be predicted in the current block and the reference value of the image component to be referenced in the current block.
- the method may further include:
- performing filtering processing on the initial prediction value using a preset processing mode, wherein the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, inverse quantization processing, low-pass filtering processing and adaptive filtering processing.
- the initial prediction value can also be filtered, which can balance the statistical characteristics of each image component after cross-component prediction, thereby improving the prediction efficiency.
- FIG. 5 shows a schematic diagram of the composition structure of an improved cross-component prediction architecture provided by an embodiment of the present application.
- the improved cross-component prediction architecture 50 may further include a processing unit 510, which is mainly used to process the predicted value output by the cross-component prediction unit 160 so as to obtain a more accurate target predicted value.
- specifically, the processing unit 510 can perform related processing on the initial predicted value of the U component, such as filtering, grouping, value correction, quantization, and inverse quantization, so as to obtain the target predicted value of the U component; since the target predicted value of the U component is closer to the true value of the U component, the prediction efficiency is improved.
- it should also be noted that when the image prediction method is applied to the encoder side, after the target prediction value is obtained, the prediction residual can be determined according to the difference between the target prediction value and the true value, and the prediction residual is then written into the code stream; at the same time, the model parameters of the prediction model can be calculated according to the reference value of the image component to be predicted of the current block and the reference value of the image component to be referenced of the current block, and the calculated model parameters are also written into the code stream; the code stream is transmitted from the encoder side to the decoder side. Correspondingly, when the image prediction method is applied to the decoder side, the prediction residual can be obtained by parsing the code stream, and the model parameters of the prediction model can also be obtained by parsing the code stream so as to construct the prediction model; in this way, on the decoder side, the prediction model is still used to obtain the initial prediction value of the image component to be predicted of the current block, and the initial prediction value is then filtered to obtain the target prediction value of the image component to be predicted of the current block.
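The encoder/decoder symmetry described above can be sketched as follows. This is an illustration under assumptions: the value-correction offset and the passing of the model parameters as plain values stand in for "written into / parsed from the code stream", and the function names are hypothetical.

```python
import numpy as np

def encode_block(true_chroma, rec_luma_ds, alpha, beta, offset):
    """Encoder side: build the target prediction (linear model plus a value
    correction), then derive the prediction residual that would be written
    into the code stream together with the model parameters."""
    target_pred = alpha * rec_luma_ds + beta + offset
    residual = true_chroma - target_pred          # written into the code stream
    return residual

def decode_block(residual, rec_luma_ds, alpha, beta, offset):
    """Decoder side: rebuild the same target prediction from the parsed model
    parameters, then add the parsed residual to reconstruct the block."""
    target_pred = alpha * rec_luma_ds + beta + offset
    return target_pred + residual
```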
- This embodiment provides an image prediction method.
- first, the initial prediction value of the image component to be predicted of the current block in the image is obtained through a prediction model; then the initial prediction value is filtered to obtain the target prediction value of the image component to be predicted of the current block. In this way, after at least one image component of the current block is predicted, filtering processing is further performed on that image component, which balances the statistical characteristics of the image components after cross-component prediction; this not only improves the prediction efficiency, but also, because the target prediction value obtained is closer to the true value, the prediction residual of the image component is smaller, so the bit rate transmitted during the encoding and decoding process is lower, and the encoding and decoding efficiency of the video image can also be improved.
- FIG. 6 shows a schematic diagram of the composition structure of an encoder 60 provided by an embodiment of the present application.
- the encoder 60 may include a first prediction unit 601 and a first processing unit 602, where
- the first prediction unit 601 is configured to obtain the initial prediction value of the image component to be predicted of the current block in the image through a prediction model
- the first processing unit 602 is configured to perform filtering processing on the initial predicted value to obtain the target predicted value of the image component to be predicted of the current block.
- the encoder 60 may further include a first statistics unit 603 and a first acquisition unit 604, where:
- the first statistical unit 603 is configured to perform characteristic statistics on at least one image component of the current block; wherein the at least one image component includes an image component to be predicted and/or an image component to be referenced, and the image component to be predicted is different from the image component to be referenced;
- the first obtaining unit 604 is configured to obtain the reference value of the image component to be predicted of the current block and/or the reference value of the image component to be referenced of the current block according to the result of characteristic statistics;
- wherein the image component to be predicted is the component that is predicted when constructing the prediction model, and the image component to be referenced is the component used for prediction when constructing the prediction model.
- in some embodiments, the first processing unit 602 is configured to perform filtering processing on the initial prediction value using a preset processing mode according to the reference value of the image component to be predicted of the current block and/or the reference value of the image component to be referenced of the current block, wherein the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, and dequantization processing;
- the first obtaining unit 604 is configured to obtain the target predicted value according to the result of the processing.
- in some embodiments, the encoder 60 may further include a calculation unit 605 configured to calculate, by using the initial prediction value, the initial prediction residual of the image component to be predicted of the current block;
- the first processing unit 602 is further configured to perform filtering processing on the initial prediction residual using a preset processing mode according to the reference value of the image component to be predicted of the current block and/or the reference value of the image component to be referenced of the current block, wherein the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, and dequantization processing;
- the first obtaining unit 604 is further configured to obtain the target prediction residual according to the processing result.
- the calculation unit 605 is further configured to calculate the target prediction value of the image component to be predicted of the current block according to the target prediction residual.
- the encoder 60 may further include a first determining unit 606 and a first constructing unit 607, where
- the first determining unit 606 is configured to determine the reference value of the image component to be predicted of the current block, where the reference value of the image component to be predicted of the current block is the value of the image component to be predicted of the neighboring pixels of the current block; and to determine the reference value of the image component to be referenced of the current block, wherein the image component to be referenced of the current block is different from the image component to be predicted, and the reference value of the image component to be referenced of the current block is the value of the image component to be referenced of the neighboring pixels of the current block;
- the calculation unit 605 is further configured to calculate the model parameters of the prediction model according to the reference value of the image component to be predicted of the current block and the reference value of the image component to be referenced of the current block;
- the first construction unit 607 is configured to construct the prediction model according to the calculated model parameters, wherein the prediction model is used to perform cross-component prediction processing on the image component to be predicted of the current block according to the image component to be referenced of the current block.
- in some embodiments, the encoder 60 may further include a first adjustment unit 608 configured to: when the resolution of the image component to be predicted of the current block is different from the resolution of the image component to be referenced of the current block, adjust the resolution of the image component to be referenced, wherein the resolution adjustment includes up-sampling adjustment or down-sampling adjustment; and, based on the adjusted resolution of the image component to be referenced, update the reference value of the image component to be referenced of the current block to obtain the first reference value of the image component to be referenced of the current block, wherein the adjusted resolution of the image component to be referenced is the same as the resolution of the image component to be predicted.
- in some embodiments, the first adjustment unit 608 is further configured to: when the resolution of the image component to be predicted of the current block is different from the resolution of the image component to be referenced of the current block, adjust the reference value of the image component to be referenced of the current block to obtain the first reference value of the image component to be referenced of the current block, wherein the adjustment processing includes one of the following: down-sampling filtering, up-sampling filtering, cascaded filtering of down-sampling filtering and low-pass filtering, cascaded filtering of up-sampling filtering and low-pass filtering.
- the calculation unit 605 is further configured to calculate the model parameters of the prediction model according to the reference value of the image component to be predicted of the current block and the first reference value of the image component to be referenced of the current block.
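When the two components have different resolutions (for example luma and chroma in 4:2:0 video), the reference component is brought to the resolution of the predicted component before the model parameters are computed. As an illustration only, the sketch below uses a plain 2x2 averaging filter for the down-sampling adjustment; the embodiment equally allows up-sampling and cascaded low-pass filtering, and does not fix the filter taps assumed here.

```python
# Minimal sketch of a down-sampling adjustment (2x2 average); illustrative only,
# the actual filter taps are not specified by this embodiment.
import numpy as np

def downsample_reference(ref_plane):
    """Halve the horizontal and vertical resolution of the to-be-referenced
    component so that it matches the to-be-predicted component."""
    ref = np.asarray(ref_plane, dtype=np.float64)
    h, w = ref.shape
    ref = ref[:h - h % 2, :w - w % 2]      # drop a trailing odd row/column, if any
    return (ref[0::2, 0::2] + ref[0::2, 1::2] +
            ref[1::2, 0::2] + ref[1::2, 1::2]) / 4.0
```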
- the first processing unit 602 is further configured to perform filtering processing on the initial predicted value according to the reference value of the image component to be predicted of the current block to obtain the target predicted value; wherein the reference value of the image component to be predicted of the current block is obtained by performing characteristic statistics on the image component to be predicted of the image or on the image component to be predicted of the current block.
- the first processing unit 602 is further configured to use a preset processing mode to filter the initial predicted value according to the reference value of the image component to be predicted of the current block; wherein the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, inverse quantization processing, low-pass filtering processing, and adaptive filtering processing.
- the calculation unit 605 is further configured to use the initial prediction value to calculate the initial prediction residual of the image component to be predicted of the current block;
- the first processing unit 602 is further configured to use a preset processing mode to filter the initial prediction residual according to the reference value of the image component to be predicted of the current block; wherein the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, inverse quantization processing, low-pass filtering processing, and adaptive filtering processing.
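The preset processing modes listed above are alternatives, and the passage does not fix which one is applied or how it is parameterized. Purely as an illustration of the "value correction processing" option, the sketch below nudges the initial prediction (or, equally, an initial prediction residual) towards a statistic of the to-be-predicted component, here its neighboring mean, and clips the result to the valid sample range; the strength parameter and the mean-based rule are assumptions of this sketch.

```python
# Minimal sketch of "value correction processing"; the correction rule and the
# strength parameter are assumptions, not details of the embodiment.
import numpy as np

def correct_initial_prediction(initial_pred, neigh_pred_mean, strength=0.25, bit_depth=8):
    """Pull the initial prediction towards the mean of the neighboring
    to-be-predicted samples and clip it to the valid sample range."""
    pred = np.asarray(initial_pred, dtype=np.float64)
    pred = (1.0 - strength) * pred + strength * float(neigh_pred_mean)
    return np.clip(np.rint(pred), 0, (1 << bit_depth) - 1).astype(np.int32)
```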
- the first statistical unit 603 is further configured to perform characteristic statistics on the image component to be predicted of the image;
- the first determining unit 606 is further configured to determine the reference value of the image component to be predicted of the current block and the reference value of the image component to be referenced of the current block according to the result of the characteristic statistics; wherein the image component to be referenced is different from the image component to be predicted;
- the calculation unit 605 is further configured to calculate the model parameters of the prediction model according to the reference value of the image component to be predicted of the current block and the reference value of the image component to be referenced in the current block.
- the first processing unit 602 is further configured to use a preset processing mode to perform filtering processing on the initial prediction value according to the reference value of the image component to be predicted of the current block and the reference value of the image component to be referenced of the current block, wherein the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, dequantization processing, low-pass filtering processing, and adaptive filtering processing.
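Grouping processing, another of the listed modes, can be pictured as partitioning the samples of the block and correcting each group separately. The sketch below is a hypothetical two-group variant that splits samples by the co-located reference value and adds a per-group offset; the number of groups, the split rule, and the way the offsets are obtained are all assumptions of this sketch rather than details of the embodiment.

```python
# Minimal sketch of "grouping processing"; group count, split rule and offsets
# are assumptions made for illustration only.
import numpy as np

def group_correct(initial_pred, ref_block, offsets=(0.0, 0.0), bit_depth=8):
    """Split the block into two groups by the co-located to-be-referenced
    value and add a per-group offset (hypothetical inputs, e.g. derived from
    neighboring statistics)."""
    pred = np.asarray(initial_pred, dtype=np.float64).copy()
    ref = np.asarray(ref_block, dtype=np.float64)
    low = ref <= ref.mean()                # group 1: samples with darker reference values
    pred[low] += offsets[0]
    pred[~low] += offsets[1]               # group 2: the remaining samples
    return np.clip(np.rint(pred), 0, (1 << bit_depth) - 1).astype(np.int32)
```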
- a “unit” may be a part of a circuit, a part of a processor, a part of a program, or software, etc., of course, may also be a module, or may also be non-modular.
- the various components in this embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- the above-mentioned integrated unit can be realized in the form of hardware or software function module.
- if the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it can be stored in a computer-readable storage medium.
- the technical solution of this embodiment, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product.
- the computer software product is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the method described in this embodiment.
- the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, or other media that can store program code.
- an embodiment of the present application provides a computer storage medium that stores an image prediction program that implements the steps of the method described in the foregoing embodiment when the image prediction program is executed by at least one processor.
- FIG. 7 shows the specific hardware structure of the encoder 60 provided by the embodiment of the present application, which may include: a first communication interface 701, a first memory 702, and a first processor 703; the components are coupled together through a first bus system 704.
- the first bus system 704 is used to implement connection and communication between these components.
- the first bus system 704 also includes a power bus, a control bus, and a status signal bus.
- the various buses are marked as the first bus system 704 in FIG. 7, where:
- the first communication interface 701 is used for receiving and sending signals in the process of sending and receiving information with other external network elements;
- the first memory 702 is configured to store a computer program that can run on the first processor 703;
- the first processor 703 is configured to execute the following when the computer program is running:
- the first memory 702 in the embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
- the non-volatile memory can be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or a flash memory.
- the volatile memory may be a random access memory (Random Access Memory, RAM), which is used as an external cache.
- many forms of RAM are available, for example: static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (Synchlink DRAM, SLDRAM), and direct Rambus random access memory (Direct Rambus RAM, DRRAM).
- the first processor 703 may be an integrated circuit chip with signal processing capability. In the implementation process, the steps of the foregoing method can be completed by an integrated logic circuit of hardware in the first processor 703 or instructions in the form of software.
- the above-mentioned first processor 703 may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, or discrete hardware components.
- the methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed.
- the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
- the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
- the software module can be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or registers.
- the storage medium is located in the first memory 702, and the first processor 703 reads the information in the first memory 702, and completes the steps of the foregoing method in combination with its hardware.
- the embodiments described in this application can be implemented by hardware, software, firmware, middleware, microcode, or a combination thereof.
- the processing unit can be implemented in one or more application specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processing, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field-programmable gate arrays (Field-Programmable Gate Array, FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, and other electronic units for performing the functions described in this application, or a combination thereof.
- the technology described in this application can be implemented through modules (such as procedures, functions, etc.) that perform the functions described in this application.
- the first processor 703 is further configured to execute the method described in any one of the foregoing embodiments when the computer program is running.
- This embodiment provides an encoder, which may include a first prediction unit and a first processing unit, wherein the first prediction unit is configured to obtain, through a prediction model, the initial prediction value of the image component to be predicted of the current block in the image; and the first processing unit is configured to perform filtering processing on the initial prediction value to obtain the target prediction value of the image component to be predicted of the current block. In this way, after prediction is performed on at least one image component of the current block, filtering processing is further performed on the at least one image component, which can balance the statistical characteristics of the image components after cross-component prediction, thereby not only improving the prediction efficiency, but also, because the obtained target prediction value is closer to the true value, making the prediction residual of the image component smaller, so that fewer bits are transmitted during encoding and decoding, and the coding and decoding efficiency of the video image is also improved.
- FIG. 8 shows a schematic diagram of the composition structure of a decoder 80 provided by an embodiment of the present application.
- the decoder 80 may include a second prediction unit 801 and a second processing unit 802, where:
- the second prediction unit 801 is configured to obtain the initial prediction value of the image component to be predicted of the current block in the image through a prediction model
- the second processing unit 802 is configured to perform filtering processing on the initial predicted value to obtain the target predicted value of the image component to be predicted of the current block.
- the decoder 80 may further include a second statistics unit 803 and a second acquisition unit 804, where:
- the second statistical unit 803 is configured to perform characteristic statistics on at least one image component of the current block; wherein the at least one image component includes an image component to be predicted and/or an image component to be referenced, and the image to be predicted Component is different from the component of the image to be referenced;
- the second obtaining unit 804 is configured to obtain the reference value of the image component to be predicted of the current block and/or the reference value of the image component to be referenced of the current block according to the result of characteristic statistics;
- the to-be-predicted image component is the component that is predicted when constructing the prediction model, and the to-be-referenced image component is the component used for prediction when constructing the prediction model.
- the second processing unit 802 is configured to use a preset processing mode to perform filtering processing on the initial prediction value according to the reference value of the image component to be predicted of the current block and/or the reference value of the image component to be referenced of the current block, wherein the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, and dequantization processing;
- the second obtaining unit 804 is configured to obtain the target predicted value according to the result of the processing.
- the decoder 80 may further include a parsing unit 805, configured to parse the code stream to obtain the initial prediction residual of the image component to be predicted of the current block;
- the second processing unit 802 is further configured to use a preset processing mode to perform filtering processing on the initial prediction residual according to the reference value of the image component to be predicted of the current block and/or the reference value of the image component to be referenced of the current block, wherein the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, and dequantization processing;
- the second obtaining unit 804 is further configured to obtain the target prediction residual according to the processing result.
- the decoder 80 may further include a second construction unit 806, wherein:
- the parsing unit 805 is further configured to parse the code stream to obtain the model parameters of the prediction model;
- the second construction unit 806 is configured to construct the prediction model according to the parsed model parameters, wherein the prediction model is used to perform cross-component prediction processing on the to-be-predicted image component of the current block according to the to-be-referenced image component of the current block.
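On the decoder side the model parameters are obtained by parsing the code stream, after which the same cross-component prediction is applied to obtain the initial prediction value. A minimal sketch, assuming the linear model form used in the earlier sketches and treating the parsed syntax elements 'alpha' and 'beta' as hypothetical placeholders rather than names defined by this embodiment:

```python
# Minimal decoder-side sketch; 'alpha'/'beta' are hypothetical parsed syntax
# elements and the linear model form is an assumption of this sketch.
import numpy as np

def reconstruct_initial_prediction(parsed_params, ref_block, bit_depth=8):
    """Rebuild the prediction model from parsed parameters and apply it to the
    to-be-referenced component of the current block; the result is the initial
    prediction, which is subsequently filtered into the target prediction."""
    alpha = parsed_params['alpha']
    beta = parsed_params['beta']
    pred = alpha * np.asarray(ref_block, dtype=np.float64) + beta
    return np.clip(np.rint(pred), 0, (1 << bit_depth) - 1).astype(np.int32)
```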
- the decoder 80 may further include a second adjustment unit 807, configured to: when the resolution of the image component to be predicted of the current block is different from the resolution of the image component to be referenced of the current block, adjust the resolution of the image component to be referenced, wherein the resolution adjustment includes up-sampling adjustment or down-sampling adjustment; and, based on the adjusted resolution of the image component to be referenced, update the reference value of the to-be-referenced image component of the current block to obtain the first reference value of the to-be-referenced image component of the current block; wherein the adjusted resolution of the to-be-referenced image component is the same as the resolution of the to-be-predicted image component.
- the second adjustment unit 807 is further configured to: when the resolution of the to-be-predicted image component of the current block is different from the resolution of the to-be-referenced image component of the current block, adjust the reference value of the to-be-referenced image component of the current block to obtain the first reference value of the to-be-referenced image component of the current block, wherein the adjustment processing includes one of the following: down-sampling filtering, up-sampling filtering, cascaded filtering of down-sampling filtering and low-pass filtering, and cascaded filtering of up-sampling filtering and low-pass filtering.
- the second processing unit 802 is further configured to perform filtering processing on the initial predicted value according to the reference value of the image component to be predicted of the current block to obtain the target predicted value; wherein the reference value of the image component to be predicted of the current block is obtained by performing characteristic statistics on the image component to be predicted of the image or on the image component to be predicted of the current block.
- the second processing unit 802 is further configured to use a preset processing mode to perform filtering processing on the initial predicted value according to the reference value of the image component to be predicted of the current block; wherein the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, inverse quantization processing, low-pass filtering processing, and adaptive filtering processing.
- the parsing unit 805 is configured to parse the code stream to obtain the initial prediction residual of the image component to be predicted of the current block;
- the second processing unit 802 is further configured to use a preset processing mode to filter the initial prediction residual according to the reference value of the image component to be predicted of the current block; wherein the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, inverse quantization processing, low-pass filtering processing, and adaptive filtering processing.
- the decoder 80 may further include a second determining unit 808, where:
- the second statistical unit 803 is further configured to perform characteristic statistics on the image components to be predicted of the image;
- the second determining unit 808 is further configured to determine the reference value of the to-be-predicted image component of the current block and the reference value of the to-be-referenced image component of the current block according to the result of the characteristic statistics; wherein the image component to be referenced is different from the image component to be predicted.
- the second processing unit 802 is further configured to use a preset processing mode to perform filtering processing on the initial prediction value according to the reference value of the image component to be predicted of the current block and the reference value of the image component to be referenced of the current block, wherein the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, dequantization processing, low-pass filtering processing, and adaptive filtering processing.
- a "unit" may be a part of a circuit, a part of a processor, a part of a program, or software, etc., of course, may also be a module, or may be non-modular.
- the various components in this embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- the above-mentioned integrated unit can be realized in the form of hardware or software function module.
- if the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it can be stored in a computer-readable storage medium.
- this embodiment provides a computer storage medium that stores an image prediction program, and when the image prediction program is executed by a second processor, the method described in any one of the preceding embodiments is implemented.
- FIG. 9 shows the specific hardware structure of the decoder 80 provided by the embodiment of the present application, which may include: a second communication interface 901, a second memory 902, and a second processor 903; the components are coupled together through a second bus system 904.
- the second bus system 904 is used to implement connection and communication between these components.
- the second bus system 904 also includes a power bus, a control bus, and a status signal bus.
- the various buses are marked as the second bus system 904 in FIG. 9, where:
- the second communication interface 901 is used for receiving and sending signals in the process of sending and receiving information with other external network elements;
- the second memory 902 is configured to store a computer program that can run on the second processor 903;
- the second processor 903 is configured to execute the following when the computer program is running:
- the second processor 903 is further configured to execute the method described in any one of the foregoing embodiments when the computer program is running.
- the hardware function of the second memory 902 is similar to that of the first memory 702, and the hardware function of the second processor 903 is similar to that of the first processor 703; it will not be detailed here.
- This embodiment provides a decoder, which may include a second prediction unit and a second processing unit, wherein the second prediction unit is configured to obtain, through a prediction model, the initial prediction value of the image component to be predicted of the current block in the image; and the second processing unit is configured to perform filtering processing on the initial prediction value to obtain the target prediction value of the image component to be predicted of the current block. In this way, after prediction is performed on at least one image component of the current block, filtering processing is further performed on the at least one image component, which can balance the statistical characteristics of the image components after cross-component prediction, thereby not only improving the prediction efficiency but also improving the coding and decoding efficiency of the video image.
- the initial prediction value of the image component to be predicted of the current block in the image is first obtained through the prediction model; the initial prediction value is then filtered to obtain the target prediction value of the image component to be predicted of the current block. In this way, after prediction is performed on at least one image component of the current block, filtering processing is further performed on the at least one image component, which can balance the statistical characteristics of the image components after cross-component prediction, thereby not only improving the prediction efficiency, but also, because the obtained target prediction value is closer to the true value, making the prediction residual of the image component smaller, so that fewer bits are transmitted during encoding and decoding, and the coding and decoding efficiency of the video image is also improved.
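A tiny numeric illustration of the point made above, using made-up sample values and a simple 3-tap low-pass filter (neither the values nor the filter come from the disclosure): filtering the initial cross-component prediction brings it closer to the true samples, so the residual that has to be coded shrinks.

```python
# Made-up numbers; illustrates why filtering the initial prediction can reduce
# the prediction residual. The 3-tap filter and edge padding are assumptions.
import numpy as np

true_block   = np.array([100.0, 102.0, 104.0, 106.0])
initial_pred = np.array([ 96.0, 108.0,  98.0, 112.0])    # noisy cross-component prediction

padded = np.pad(initial_pred, 1, mode='edge')             # repeat border samples
target_pred = np.convolve(padded, [0.25, 0.5, 0.25], mode='valid')

print(np.abs(true_block - initial_pred).sum())   # 22.0 -> residual magnitude before filtering
print(np.abs(true_block - target_pred).sum())    # 4.0  -> residual magnitude after filtering
```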
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
Claims (19)
- 1. A picture prediction method, applied to an encoder or a decoder, the method comprising: obtaining, through a prediction model, an initial prediction value of an image component to be predicted of a current block in a picture; and performing filtering processing on the initial prediction value to obtain a target prediction value of the image component to be predicted of the current block.
- 2. The method according to claim 1, wherein before the performing filtering processing on the initial prediction value, the method further comprises: performing characteristic statistics on at least one image component of the current block, wherein the at least one image component comprises an image component to be predicted and/or an image component to be referenced, and the image component to be predicted is different from the image component to be referenced; and obtaining, according to a result of the characteristic statistics, a reference value of the image component to be predicted of the current block and/or a reference value of the image component to be referenced of the current block; wherein the image component to be predicted is the component that is predicted when constructing the prediction model, and the image component to be referenced is the component used for prediction when constructing the prediction model.
- 3. The method according to claim 2, wherein the performing filtering processing on the initial prediction value to obtain the target prediction value of the image component to be predicted of the current block comprises: performing, according to the reference value of the image component to be predicted of the current block and/or the reference value of the image component to be referenced of the current block, filtering processing on the initial prediction value by using a preset processing mode, wherein the preset processing mode comprises at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, and dequantization processing; and obtaining the target prediction value according to a result of the processing.
- 4. The method according to claim 2, wherein after the obtaining, through the prediction model, the initial prediction value of the image component to be predicted of the current block in the picture, the method further comprises: calculating, based on the initial prediction value, an initial prediction residual of the image component to be predicted of the current block; performing, according to the reference value of the image component to be predicted of the current block and/or the reference value of the image component to be referenced of the current block, filtering processing on the initial prediction residual by using a preset processing mode, wherein the preset processing mode comprises at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, and dequantization processing; and obtaining a target prediction residual according to a result of the processing.
- 5. The method according to claim 4, wherein the obtaining the target prediction value of the image component to be predicted of the current block comprises: calculating the target prediction value of the image component to be predicted of the current block according to the target prediction residual.
- 6. The method according to any one of claims 1 to 5, wherein before the obtaining, through the prediction model, the initial prediction value of the image component to be predicted of the current block in the picture, the method further comprises: determining a reference value of the image component to be predicted of the current block, wherein the reference value of the image component to be predicted of the current block is the value of the image component to be predicted of neighboring pixels of the current block; determining a reference value of an image component to be referenced of the current block, wherein the image component to be referenced of the current block is different from the image component to be predicted, and the reference value of the image component to be referenced of the current block is the value of the image component to be referenced of the neighboring pixels of the current block; calculating model parameters of the prediction model according to the reference value of the image component to be predicted of the current block and the reference value of the image component to be referenced of the current block; and constructing the prediction model according to the calculated model parameters, wherein the prediction model is used to perform cross-component prediction processing on the image component to be predicted of the current block according to the image component to be referenced of the current block.
- 7. The method according to claim 6, wherein before the calculating the model parameters of the prediction model, the method further comprises: when the resolution of the image component to be predicted of the current block is different from the resolution of the image component to be referenced of the current block, performing resolution adjustment on the resolution of the image component to be referenced, wherein the resolution adjustment comprises up-sampling adjustment or down-sampling adjustment; and updating, based on the adjusted resolution of the image component to be referenced, the reference value of the image component to be referenced of the current block to obtain a first reference value of the image component to be referenced of the current block, wherein the adjusted resolution of the image component to be referenced is the same as the resolution of the image component to be predicted.
- 8. The method according to claim 6, wherein before the calculating the model parameters of the prediction model, the method further comprises: when the resolution of the image component to be predicted of the current block is different from the resolution of the image component to be referenced of the current block, performing adjustment processing on the reference value of the image component to be referenced of the current block to obtain a first reference value of the image component to be referenced of the current block, wherein the adjustment processing comprises one of the following: down-sampling filtering, up-sampling filtering, cascaded filtering of down-sampling filtering and low-pass filtering, and cascaded filtering of up-sampling filtering and low-pass filtering.
- 9. The method according to claim 7 or 8, wherein the calculating the model parameters of the prediction model according to the reference value of the image component to be predicted of the current block and the reference value of the image component to be referenced of the current block comprises: calculating the model parameters of the prediction model according to the reference value of the image component to be predicted of the current block and the first reference value of the image component to be referenced of the current block.
- 10. The method according to claim 1, wherein the performing filtering processing on the initial prediction value comprises: performing filtering processing on the initial prediction value according to a reference value of the image component to be predicted of the current block to obtain the target prediction value, wherein the reference value of the image component to be predicted of the current block is obtained by performing characteristic statistics on the image component to be predicted of the picture or on the image component to be predicted of the current block.
- 11. The method according to claim 10, wherein the performing filtering processing on the initial prediction value according to the reference value of the image component to be predicted of the current block comprises: performing, according to the reference value of the image component to be predicted of the current block, filtering processing on the initial prediction value by using a preset processing mode, wherein the preset processing mode comprises at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, inverse quantization processing, low-pass filtering processing, and adaptive filtering processing.
- 12. The method according to claim 10, wherein the performing filtering processing on the initial prediction value according to the reference value of the image component to be predicted of the current block comprises: calculating, by using the initial prediction value, an initial prediction residual of the image component to be predicted of the current block; and performing, according to the reference value of the image component to be predicted of the current block, filtering processing on the initial prediction residual by using a preset processing mode, wherein the preset processing mode comprises at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, inverse quantization processing, low-pass filtering processing, and adaptive filtering processing.
- 13. The method according to claim 1, wherein before the obtaining, through the prediction model, the initial prediction value of the image component to be predicted of the current block in the picture, the method further comprises: performing characteristic statistics on the image component to be predicted of the picture; determining, according to a result of the characteristic statistics, a reference value of the image component to be predicted of the current block and a reference value of an image component to be referenced of the current block, wherein the image component to be referenced is different from the image component to be predicted; and calculating model parameters of the prediction model according to the reference value of the image component to be predicted of the current block and the reference value of the image component to be referenced of the current block.
- 14. The method according to claim 13, wherein the method further comprises: performing, according to the reference value of the image component to be predicted of the current block and the reference value of the image component to be referenced of the current block, filtering processing on the initial prediction value by using a preset processing mode, wherein the preset processing mode comprises at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, inverse quantization processing, low-pass filtering processing, and adaptive filtering processing.
- 15. An encoder, comprising a first prediction unit and a first processing unit, wherein the first prediction unit is configured to obtain, through a prediction model, an initial prediction value of an image component to be predicted of a current block in a picture; and the first processing unit is configured to perform filtering processing on the initial prediction value to obtain a target prediction value of the image component to be predicted of the current block.
- 16. An encoder, comprising a first memory and a first processor, wherein the first memory is configured to store a computer program capable of running on the first processor; and the first processor is configured to execute the method according to any one of claims 1 to 14 when running the computer program.
- 17. A decoder, comprising a second prediction unit and a second processing unit, wherein the second prediction unit is configured to obtain, through a prediction model, an initial prediction value of an image component to be predicted of a current block in a picture; and the second processing unit is configured to perform filtering processing on the initial prediction value to obtain a target prediction value of the image component to be predicted of the current block.
- 18. A decoder, comprising a second memory and a second processor, wherein the second memory is configured to store a computer program capable of running on the second processor; and the second processor is configured to execute the method according to any one of claims 1 to 14 when running the computer program.
- 19. A computer storage medium, wherein the computer storage medium stores an image prediction program, and when the image prediction program is executed by a first processor or a second processor, the method according to any one of claims 1 to 14 is implemented.
Priority Applications (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201980083504.5A CN113196765A (zh) | 2019-03-25 | 2019-10-12 | 图像预测方法、编码器、解码器以及存储介质 |
KR1020217032518A KR20210139328A (ko) | 2019-03-25 | 2019-10-12 | 화상 예측 방법, 인코더, 디코더 및 저장 매체 |
CN202111096320.8A CN113784128B (zh) | 2019-03-25 | 2019-10-12 | 图像预测方法、编码器、解码器以及存储介质 |
CN202310354072.5A CN116320472A (zh) | 2019-03-25 | 2019-10-12 | 图像预测方法、编码器、解码器以及存储介质 |
EP19922170.6A EP3944621A4 (en) | 2019-03-25 | 2019-10-12 | FRAME PREDICTION METHOD, ENCODER, DECODER, AND STORAGE MEDIA |
JP2021557118A JP7480170B2 (ja) | 2019-03-25 | 2019-10-12 | 画像予測方法、エンコーダー、デコーダー及び記憶媒体 |
US17/483,507 US20220014772A1 (en) | 2019-03-25 | 2021-09-23 | Method for picture prediction, encoder, and decoder |
JP2024069934A JP2024095842A (ja) | 2019-03-25 | 2024-04-23 | 画像予測方法、エンコーダー、デコーダー及び記憶媒体 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962823613P | 2019-03-25 | 2019-03-25 | |
US62/823,613 | 2019-03-25 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/483,507 Continuation US20220014772A1 (en) | 2019-03-25 | 2021-09-23 | Method for picture prediction, encoder, and decoder |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020192085A1 true WO2020192085A1 (zh) | 2020-10-01 |
Family
ID=72611276
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/110834 WO2020192085A1 (zh) | 2019-03-25 | 2019-10-12 | 图像预测方法、编码器、解码器以及存储介质 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20220014772A1 (zh) |
EP (1) | EP3944621A4 (zh) |
JP (2) | JP7480170B2 (zh) |
KR (1) | KR20210139328A (zh) |
CN (3) | CN116320472A (zh) |
WO (1) | WO2020192085A1 (zh) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110572673B (zh) * | 2019-09-27 | 2024-04-09 | 腾讯科技(深圳)有限公司 | 视频编解码方法和装置、存储介质及电子装置 |
WO2021136498A1 (en) * | 2019-12-31 | 2021-07-08 | Beijing Bytedance Network Technology Co., Ltd. | Multiple reference line chroma prediction |
CN116472707A (zh) * | 2020-09-30 | 2023-07-21 | Oppo广东移动通信有限公司 | 图像预测方法、编码器、解码器以及计算机存储介质 |
WO2024148016A1 (en) * | 2023-01-02 | 2024-07-11 | Bytedance Inc. | Method, apparatus, and medium for video processing |
CN117528098B (zh) * | 2024-01-08 | 2024-03-26 | 北京小鸟科技股份有限公司 | 基于深压缩码流提升画质的编解码系统、方法及设备 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106664425A (zh) * | 2014-06-20 | 2017-05-10 | 高通股份有限公司 | 视频译码中的跨分量预测 |
CN106717004A (zh) * | 2014-10-10 | 2017-05-24 | 高通股份有限公司 | 视频译码中的跨分量预测和自适应色彩变换的协调 |
CN107079157A (zh) * | 2014-09-12 | 2017-08-18 | Vid拓展公司 | 用于视频编码的分量间去相关 |
CN107211124A (zh) * | 2015-01-27 | 2017-09-26 | 高通股份有限公司 | 适应性跨分量残差预测 |
WO2018132710A1 (en) * | 2017-01-13 | 2018-07-19 | Qualcomm Incorporated | Coding video data using derived chroma mode |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008048489A2 (en) * | 2006-10-18 | 2008-04-24 | Thomson Licensing | Method and apparatus for video coding using prediction data refinement |
KR101455647B1 (ko) * | 2007-09-14 | 2014-10-28 | 삼성전자주식회사 | 컬러 기반으로 예측 값을 보정하는 방법 및 장치, 이것을이용한 이미지 압축/복원 방법과 장치 |
GB2501535A (en) * | 2012-04-26 | 2013-10-30 | Sony Corp | Chrominance Processing in High Efficiency Video Codecs |
US20140192862A1 (en) * | 2013-01-07 | 2014-07-10 | Research In Motion Limited | Methods and systems for prediction filtering in video coding |
US20150373362A1 (en) * | 2014-06-19 | 2015-12-24 | Qualcomm Incorporated | Deblocking filter design for intra block copy |
CN107079166A (zh) * | 2014-10-28 | 2017-08-18 | 联发科技(新加坡)私人有限公司 | 用于视频编码的引导交叉分量预测的方法 |
US10045023B2 (en) * | 2015-10-09 | 2018-08-07 | Telefonaktiebolaget Lm Ericsson (Publ) | Cross component prediction in video coding |
US10484712B2 (en) * | 2016-06-08 | 2019-11-19 | Qualcomm Incorporated | Implicit coding of reference line index used in intra prediction |
WO2018070914A1 (en) * | 2016-10-12 | 2018-04-19 | Telefonaktiebolaget Lm Ericsson (Publ) | Residual refinement of color components |
US11184636B2 (en) * | 2017-06-28 | 2021-11-23 | Sharp Kabushiki Kaisha | Video encoding device and video decoding device |
WO2020036130A1 (ja) * | 2018-08-15 | 2020-02-20 | 日本放送協会 | イントラ予測装置、画像符号化装置、画像復号装置、及びプログラム |
-
2019
- 2019-10-12 EP EP19922170.6A patent/EP3944621A4/en not_active Withdrawn
- 2019-10-12 WO PCT/CN2019/110834 patent/WO2020192085A1/zh unknown
- 2019-10-12 KR KR1020217032518A patent/KR20210139328A/ko unknown
- 2019-10-12 CN CN202310354072.5A patent/CN116320472A/zh active Pending
- 2019-10-12 CN CN202111096320.8A patent/CN113784128B/zh active Active
- 2019-10-12 CN CN201980083504.5A patent/CN113196765A/zh active Pending
- 2019-10-12 JP JP2021557118A patent/JP7480170B2/ja active Active
-
2021
- 2021-09-23 US US17/483,507 patent/US20220014772A1/en active Pending
-
2024
- 2024-04-23 JP JP2024069934A patent/JP2024095842A/ja active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106664425A (zh) * | 2014-06-20 | 2017-05-10 | 高通股份有限公司 | 视频译码中的跨分量预测 |
CN107079157A (zh) * | 2014-09-12 | 2017-08-18 | Vid拓展公司 | 用于视频编码的分量间去相关 |
CN106717004A (zh) * | 2014-10-10 | 2017-05-24 | 高通股份有限公司 | 视频译码中的跨分量预测和自适应色彩变换的协调 |
CN107211124A (zh) * | 2015-01-27 | 2017-09-26 | 高通股份有限公司 | 适应性跨分量残差预测 |
WO2018132710A1 (en) * | 2017-01-13 | 2018-07-19 | Qualcomm Incorporated | Coding video data using derived chroma mode |
Also Published As
Publication number | Publication date |
---|---|
EP3944621A1 (en) | 2022-01-26 |
KR20210139328A (ko) | 2021-11-22 |
US20220014772A1 (en) | 2022-01-13 |
JP2024095842A (ja) | 2024-07-10 |
CN113784128B (zh) | 2023-04-21 |
JP7480170B2 (ja) | 2024-05-09 |
CN116320472A (zh) | 2023-06-23 |
CN113196765A (zh) | 2021-07-30 |
EP3944621A4 (en) | 2022-05-18 |
CN113784128A (zh) | 2021-12-10 |
JP2022528635A (ja) | 2022-06-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020192085A1 (zh) | 图像预测方法、编码器、解码器以及存储介质 | |
CN113068028B (zh) | 视频图像分量的预测方法、装置及计算机存储介质 | |
WO2021120122A1 (zh) | 图像分量预测方法、编码器、解码器以及存储介质 | |
WO2021004155A1 (zh) | 图像分量预测方法、编码器、解码器以及存储介质 | |
WO2021134706A1 (zh) | 环路滤波的方法与装置 | |
WO2020186763A1 (zh) | 图像分量预测方法、编码器、解码器以及存储介质 | |
CN113068025A (zh) | 解码预测方法、装置及计算机存储介质 | |
WO2020192084A1 (zh) | 图像预测方法、编码器、解码器以及存储介质 | |
US12101495B2 (en) | Colour component prediction method, encoder, decoder, and computer storage medium | |
WO2020056767A1 (zh) | 视频图像分量的预测方法、装置及计算机存储介质 | |
RU2805048C2 (ru) | Способ предсказания изображения, кодер и декодер | |
CN112970257A (zh) | 解码预测方法、装置及计算机存储介质 | |
JP2024147829A (ja) | 画像予測方法、エンコーダ、デコーダ及び記憶媒体 | |
RU2827054C2 (ru) | Способ предсказания изображения, кодер, декодер | |
WO2021174396A1 (zh) | 图像预测方法、编码器、解码器以及存储介质 | |
WO2024011370A1 (zh) | 视频图像处理方法及装置、编解码器、码流、存储介质 | |
TW202404349A (zh) | 一種濾波方法、解碼器、編碼器及電腦可讀儲存媒介 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19922170 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2021557118 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 20217032518 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2019922170 Country of ref document: EP Effective date: 20211015 |