CN112203085B - Image processing method, device, terminal and storage medium - Google Patents

Image processing method, device, terminal and storage medium

Info

Publication number
CN112203085B
CN112203085B (application CN202011060097.7A)
Authority
CN
China
Prior art keywords: value, luminance, frame, block, chrominance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011060097.7A
Other languages
Chinese (zh)
Other versions
CN112203085A (en)
Inventor
王萌
张莉
王诗淇
王悦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Douyin Group HK Ltd
Original Assignee
ByteDance HK Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ByteDance HK Co Ltd filed Critical ByteDance HK Co Ltd
Priority to CN202011060097.7A priority Critical patent/CN112203085B/en
Publication of CN112203085A publication Critical patent/CN112203085A/en
Application granted granted Critical
Publication of CN112203085B publication Critical patent/CN112203085B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation
    • H04N19/513: Processing of motion vectors
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103: Selection of coding mode or of prediction mode
    • H04N19/109: Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component

Abstract

The present disclosure provides an image processing method, apparatus, terminal and storage medium. In some embodiments, the image processing method comprises the following steps: determining a reference block of a current image block, wherein the current image block is located in a current frame and the reference block is located in a reference frame of the current frame; determining a luminance reconstruction value and a chrominance reconstruction value of the reference block; performing illumination compensation on the luminance reconstruction value of the reference block to obtain a luminance compensation value of the reference block; and determining a luminance prediction value and a chrominance prediction value of the current image block, wherein the luminance prediction value of the current image block is the luminance compensation value of the reference block, and the chrominance prediction value of the current image block is the chrominance reconstruction value of the reference block. The method can reduce the amount of data processed during video encoding and decoding, reduce complexity, and improve coding compression efficiency.

Description

Image processing method, device, terminal and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing device, a terminal, and a storage medium.
Background
A video includes a large number of image frames; each image frame contains tens or hundreds of thousands of pixels, and each pixel is generally represented by 24 bits, so a video occupies a large amount of storage space and transmission bandwidth. To reduce the space and bandwidth occupied by a video, the video is usually compressed, i.e. encoded, before storage or transmission, and decompressed, i.e. decoded, before being played. In general, compression techniques for video include intra-frame compression, which removes spatial redundancy within an image frame, and inter-frame compression, which removes temporal redundancy between images.
Disclosure of Invention
In order to solve the existing problems, the present disclosure provides an image processing method, apparatus, terminal, and storage medium.
The present disclosure adopts the following technical solutions.
In some embodiments, the present disclosure provides an image processing method, comprising:
determining a reference block of a current image block, wherein the current image block is located in a current frame, and the reference block is located in a reference frame of the current frame;
determining a luminance reconstruction value of the reference block and a chrominance reconstruction value of the reference block;
performing illumination compensation on the luminance reconstruction value of the reference block to obtain a luminance compensation value of the reference block;
and determining a luminance prediction value and a chrominance prediction value of the current image block, wherein the luminance prediction value of the current image block is the luminance compensation value of the reference block, and the chrominance prediction value of the current image block is the chrominance reconstruction value of the reference block.
In some embodiments, further comprising: and determining the value of the switch flag bit carried by the current frame as a first value.
In some embodiments, the value of the switch flag bit carried by the current frame is determined according to the difference between the luminance component of the current frame and the luminance component of the reference frame, and the difference between the chrominance component of the current frame and the chrominance component of the reference frame.
In some embodiments, in the case that the difference between the first change value and the second change value is greater than the difference threshold, the value of the switch flag bit carried by the current frame is the first value, where the first change value is the difference between the luminance component of the current frame and the luminance component of the reference frame, and the second change value is the difference between the chrominance component of the current frame and the chrominance component of the reference frame.
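The frame-level decision described above can be sketched as follows. This is a hypothetical illustration only: it assumes frame-level mean component values as the change measure and uses 1/0 for the first/second flag values, details the text does not fix.

```python
def switch_flag(luma_cur_mean, luma_ref_mean,
                chroma_cur_mean, chroma_ref_mean, diff_threshold):
    """Set the frame-level switch flag to the first value (1 here) when
    luminance changes much more than chrominance between the current
    frame and its reference frame; otherwise the second value (0)."""
    first_change = abs(luma_cur_mean - luma_ref_mean)      # luminance change
    second_change = abs(chroma_cur_mean - chroma_ref_mean)  # chrominance change
    return 1 if (first_change - second_change) > diff_threshold else 0
```

With a threshold of 10, a frame whose mean luma shifted by 20 while chroma shifted by 2 would carry the first value; a frame whose luma shifted only by 5 would carry the second.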
In some embodiments, the luminance prediction value of the current image block is determined according to formula (1):

$\hat{Y}_{cur} = \alpha_Y \cdot Y'_{ref} + \beta_Y$    (1)

where $\hat{Y}_{cur}$ is the luminance prediction value of the current image block, $Y'_{ref}$ is the luminance reconstruction value of the reference block, $\alpha_Y$ is the scaling coefficient of the color channel of the luminance component, and $\beta_Y$ is the offset of the luminance component.
In some embodiments, the determining the luminance prediction value and the chrominance prediction value of the current image block includes: if the value of the switch flag bit carried by the current frame is a first value, the luminance prediction value of the current image block is the luminance compensation value of the reference block, and the chrominance prediction value of the current image block is the chrominance reconstruction value of the reference block.
If the value of the switch flag bit carried by the current frame is a second value, the luminance prediction value and the chrominance prediction value of the current image block are respectively equal to the luminance reconstruction value and the chrominance reconstruction value of the reference block; or, the luminance prediction value and the chrominance prediction value of the current image block are respectively equal to the luminance compensation value and the chrominance compensation value of the reference block, where the chrominance compensation value is a value obtained by performing illumination compensation on the chrominance reconstruction value of the reference block.
In some embodiments, the present disclosure provides an image processing method, comprising:
determining a motion vector of a luminance component and a motion vector of a chrominance component of a current image block, wherein the current image block is located in a current frame, and the motion vector of the chrominance component is different from a co-located motion vector, where the co-located motion vector is the motion vector of the luminance component, or is a motion vector obtained by scaling the motion vector of the luminance component according to the luminance sampling rate and the chrominance sampling rate;
determining a luminance reference block of the luminance component of the current image block and a chrominance reference block of the chrominance component of the current image block according to the motion vector of the luminance component and the motion vector of the chrominance component of the current image block, wherein the luminance reference block and the chrominance reference block are positioned in a reference frame of the current frame;
determining a luminance reconstruction value of the luminance reference block and a chrominance reconstruction value of the chrominance reference block;
performing illumination compensation on the luminance reconstruction value of the luminance reference block to obtain a luminance compensation value of the luminance reference block;
performing illumination compensation on the chrominance reconstruction value of the chrominance reference block to obtain a chrominance compensation value of the chrominance reference block;
and determining a luminance prediction value and a chrominance prediction value of the current image block, wherein the luminance prediction value of the current image block is the luminance compensation value of the luminance reference block, and the chrominance prediction value of the current image block is the chrominance compensation value of the chrominance reference block.
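The reference-block step above can be sketched as follows: a minimal, integer-pel illustration of locating separate luma and chroma reference blocks with their own motion vectors. Sub-pel interpolation, chroma subsampling, and frame-boundary handling are omitted, and all helper names are hypothetical.

```python
def fetch_block(frame, x, y, w, h):
    """Copy a w*h block at top-left (x, y) from a 2-D frame (list of rows)."""
    return [row[x:x + w] for row in frame[y:y + h]]

def reference_blocks(ref_luma, ref_chroma, pos, size, luma_mv, chroma_mv):
    """Locate the luma and chroma reference blocks with *different*
    motion vectors, as the method above allows."""
    (x, y), (w, h) = pos, size
    luma_ref = fetch_block(ref_luma, x + luma_mv[0], y + luma_mv[1], w, h)
    chroma_ref = fetch_block(ref_chroma, x + chroma_mv[0], y + chroma_mv[1], w, h)
    return luma_ref, chroma_ref
```

For a block at position (1, 1) of size 2x2, a luma MV of (1, 0) and a chroma MV of (0, 1) fetch two different regions of the same reference frame.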
In some embodiments, the motion vector of the chroma reference block is the same as the motion vector of the chroma component of the current image block.
In some embodiments, the motion vector of the chroma component of the current image block is (0, 0).
In some embodiments, the motion vector of the chroma component of the current image block is selected from a list of vector candidates, and the motion vector of the chroma component is recorded by encoding an index of the motion vector of the chroma component in the list of vector candidates.
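A minimal sketch of the candidate-list signalling described above, with hypothetical helper names; a real codec would entropy-code the index into the bitstream rather than store it directly.

```python
def encode_chroma_mv(candidates, chosen_mv):
    """Instead of signalling the chroma motion vector itself, record its
    index in the shared candidate list."""
    return candidates.index(chosen_mv)

def decode_chroma_mv(candidates, idx):
    """The decoder rebuilds the same candidate list and looks up the index."""
    return candidates[idx]
```

Round-tripping a vector through its index recovers it exactly, provided encoder and decoder construct identical candidate lists.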
In some embodiments, the luminance prediction value of the current image block is obtained using formula (2):

$\hat{Y}_{cur} = \alpha_Y \cdot Y'_{ref} + \beta_Y$    (2)

where $\hat{Y}_{cur}$ is the luminance prediction value of the current image block, $Y'_{ref}$ is the luminance reconstruction value of the luminance reference block, $\alpha_Y$ is the scaling coefficient of the color channel of the luminance component, and $\beta_Y$ is the offset of the luminance component;
and/or,
the chrominance prediction value of the current image block is obtained using formula (3) and formula (4):

$\hat{U}_{cur} = \alpha_U \cdot U'_{ref} + \beta_U$    (3)
$\hat{V}_{cur} = \alpha_V \cdot V'_{ref} + \beta_V$    (4)

where $\hat{U}_{cur}$ and $\hat{V}_{cur}$ are the U and V components of the chrominance prediction value of the current image block, $U'_{ref}$ and $V'_{ref}$ are the U and V components of the chrominance reconstruction value of the chrominance reference block, $\alpha_U$ and $\alpha_V$ are the scaling coefficients of the color channels of the U and V components, and $\beta_U$ and $\beta_V$ are the offsets of the U and V components.
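Applied numerically, formulas (2) through (4) are the same linear model with per-component parameters. The sample values and (alpha, beta) pairs below are illustrative only, not values the text specifies.

```python
def illumination_compensate(recon, alpha, beta):
    """pred = alpha * recon + beta, the linear model shared by the formulas."""
    return [alpha * s + beta for s in recon]

# Luma uses (alpha_Y, beta_Y); each chroma channel has its own pair.
luma_pred = illumination_compensate([100, 104], 1.5, -20)  # formula (2)
u_pred = illumination_compensate([60, 62], 1.0, 4)         # formula (3)
v_pred = illumination_compensate([70, 72], 0.5, 10)        # formula (4)
```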
In some embodiments, determining the luminance prediction value and the chrominance prediction value of the current image block comprises:
determining the value of the switch flag bit carried by the current frame;
if the value of the switch flag bit of the current frame is a first value, the luminance prediction value of the current image block is the luminance compensation value of the luminance reference block, and the chrominance prediction value of the current image block is the chrominance reconstruction value of the chrominance reference block;
and if the value of the switch flag bit of the current frame is a second value, the luminance prediction value of the current image block is the luminance compensation value of the luminance reference block, and the chrominance prediction value of the current image block is the chrominance compensation value of the chrominance reference block.
In some embodiments, the value of the switch flag bit carried by the current frame is determined according to the difference between the luminance component of the current frame and the luminance component of the reference frame, and the difference between the chrominance component of the current frame and the chrominance component of the reference frame.
In some embodiments, in the case that the difference between the first change value and the second change value is greater than the difference threshold, the value of the switch flag bit carried by the current frame is the first value, where the first change value is the difference between the luminance component of the current frame and the luminance component of the reference frame, and the second change value is the difference between the chrominance component of the current frame and the chrominance component of the reference frame.
In some embodiments, the present disclosure proposes an image processing apparatus including:
a determining unit, configured to determine a reference block of a current image block, where the current image block is located in a current frame, and the reference block is located in a reference frame of the current frame;
a determining unit, configured to determine a luminance reconstruction value of the reference block and a chrominance reconstruction value of the reference block;
a processing unit, configured to perform illumination compensation on the luminance reconstruction value of the reference block to obtain a luminance compensation value of the reference block;
and the determining unit is further configured to determine a luminance prediction value and a chrominance prediction value of the current image block, wherein the luminance prediction value of the current image block is the luminance compensation value of the reference block, and the chrominance prediction value of the current image block is the chrominance reconstruction value of the reference block.
In some embodiments, the present disclosure proposes an image processing apparatus including:
a determining module, configured to determine a motion vector of a luminance component and a motion vector of a chrominance component of a current image block, where the current image block is located in a current frame, and the motion vector of the chrominance component is different from a co-located motion vector, where the co-located motion vector is the motion vector of the luminance component, or the co-located motion vector is a motion vector obtained by scaling the motion vector of the luminance component according to a luminance sampling rate and a chrominance sampling rate;
The determining module is further configured to determine a luminance reference block of a luminance component of the current image block and a chrominance reference block of a chrominance component of the current image block according to a motion vector of the luminance component and a motion vector of the chrominance component of the current image block, where the luminance reference block and the chrominance reference block are located in a reference frame of the current frame;
the determining module is further configured to determine a luminance reconstruction value of the luminance reference block and a chrominance reconstruction value of the chrominance reference block;
a processing module, configured to perform illumination compensation on the luminance reconstruction value of the luminance reference block to obtain a luminance compensation value of the luminance reference block;
the processing module is further configured to perform illumination compensation on the chrominance reconstruction value of the chrominance reference block to obtain a chrominance compensation value of the chrominance reference block;
the determining module is further configured to determine a luminance prediction value and a chrominance prediction value of the current image block, where the luminance prediction value of the current image block is a luminance compensation value of the luminance reference block, and the chrominance prediction value of the current image block is a chrominance compensation value of the chrominance reference block.
In some embodiments, the present disclosure provides a terminal comprising: at least one memory and at least one processor;
The memory is used for storing program codes, and the processor is used for calling the program codes stored in the memory to execute the method.
In some embodiments, the present disclosure provides a storage medium for storing program code for performing the above-described method.
According to the image processing method provided by the embodiments of the present disclosure, illumination compensation is performed only on the luminance component in the inter-frame prediction process (the step of illumination compensation on the chrominance component is omitted), which reduces the amount of data processed during video encoding and decoding and lowers codec complexity. The method provided by the embodiments of the present disclosure can also improve the accuracy of chrominance component prediction, thereby reducing the residual and the coding rate, and thus improving coding compression performance.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram of a current image block and a reference block corresponding to the current image block according to an embodiment of the present disclosure.
Fig. 3 is a flowchart of another image processing method of an embodiment of the present disclosure.
Fig. 4 is a composition diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 5 is a composition diagram of another image processing apparatus of an embodiment of the present disclosure.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or in parallel. Furthermore, method embodiments may include additional steps and/or omit some of the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a" and "an" in this disclosure are intended to be illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be construed as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
With the rise of short-video applications, more and more users shoot short videos with mobile terminals such as mobile phones and share them. These user-generated content (UGC) videos differ markedly from professionally produced videos: UGC videos are generally shot by non-professional users, with non-professional equipment, in scenes without professional lighting, so the lighting within a single video can change noticeably; the shooting devices, scenes, and content of UGC videos are highly varied; and UGC videos are often rendered with special effects and filters. UGC videos are usually shot by users with handheld terminal devices such as mobile phones and tablets, and are compressed before being uploaded to a video platform. Since a UGC video is encoded, compressed, and uploaded soon after shooting, improving video compression efficiency can save traffic bandwidth and computing power.
In some techniques, when inter-frame compression is performed on a video, in order to handle local illumination variation between temporally adjacent frames, local illumination compensation is applied in the motion compensation stage of inter prediction to both the luminance component and the chrominance component of the coding unit currently being inter-coded, with the luminance coding unit (luminance block) and the chrominance coding unit (chrominance block) sharing motion vectors. However, performing illumination compensation on both the luminance and chrominance coding units may reduce the prediction performance of the chrominance component and may increase coding complexity.
In order to at least partially solve the above-mentioned problems, an image processing method is provided in an embodiment of the present disclosure, where the image in the embodiment may be an image in a video, and a scheme provided by an embodiment of the present disclosure will be described in detail with reference to the accompanying drawings.
As shown in fig. 1, fig. 1 is a flowchart of an image processing method according to an embodiment of the present disclosure, including the following steps.
S11: a reference block for the current image block is determined.
Specifically, the current image block is located in a current frame, and the reference block is located in a reference frame of the current frame. In some embodiments, the image may be an image in a video, and the video may be a UGC video; the image processing method proposed in the present disclosure may be used in the inter-frame prediction process of video compression or decompression. A video includes a plurality of image frames, and the current frame may be any image frame in the video that requires inter-frame prediction, for example a P-frame or a B-frame. The current frame includes a plurality of coding units, and the current image block may be any one of these coding units. For inter-frame prediction, the current frame corresponds to a reference frame, that is, an image used for prediction, also commonly called a reference image (Reference Frame); the reference frame may be, for example, the previous or next image frame temporally adjacent to the current frame. The reference frame of the current frame also includes a plurality of coding units, and the coding unit corresponding to the current image block serves as the reference block. The displacement from the reference block to the current image block is generally called the motion vector (MV), and the difference between the current image block and the reference block is generally called the prediction residual. In video coding, the process of determining the reference block corresponding to the current image block is generally called motion estimation. In this embodiment, the method of determining the reference block corresponding to the current image block may be the same as in the prior art; for example, a full search algorithm, a two-dimensional logarithmic search algorithm, or a three-step search method may be used, and this is not limited here.
S12: luminance reconstruction values of the reference block and chrominance reconstruction values of the reference block are determined.
S13: and carrying out illumination compensation on the brightness reconstruction value of the reference block to obtain a brightness compensation value of the reference block.
Specifically, the luminance compensation value is a luminance value obtained by performing illumination compensation on the luminance reconstruction value.
S14: a luminance prediction value and a chrominance prediction value of the current image block are determined.
Specifically, the luminance prediction value of the current image block is the luminance compensation value of the reference block, and the chrominance prediction value of the current image block is the chrominance reconstruction value of the reference block. In this embodiment, a luminance-plus-chrominance color space may be used, that is, the YUV color space, where Y represents luminance (the gray-scale value) and U and V represent chrominance. Illumination compensation addresses local illumination change between temporally adjacent frames and acts in the motion compensation stage of inter-frame prediction. In the prior art, when the luminance prediction value and the chrominance prediction value of the current frame are determined, the same illumination compensation method is applied to both: the values obtained by separately performing illumination compensation on the luminance reconstruction value and the chrominance reconstruction value of the reference block corresponding to the current image block are used as the prediction values. In this embodiment, by contrast, the value obtained by performing illumination compensation on the luminance reconstruction value of the reference block is used as the luminance prediction value of the current image block, while the chrominance reconstruction value of the reference block is used directly as the chrominance prediction value, with no illumination compensation applied to chrominance. Compared with the prior art, the image processing method in this embodiment omits the step of performing illumination compensation on the chrominance value in the inter-frame prediction process, which reduces the amount of data processed during video encoding and decoding and reduces complexity. Moreover, since the chrominance component changes little between frames of such video, skipping chrominance illumination compensation can also improve the accuracy of the chrominance prediction value.
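The asymmetric treatment described above — compensate luma, pass chroma through — can be sketched as follows, with a hypothetical function and flat per-sample lists standing in for blocks.

```python
def predict_block(luma_ref_recon, u_ref_recon, v_ref_recon, alpha_y, beta_y):
    """Steps S12-S14: illumination-compensate only the luma reconstruction
    of the reference block; chroma is passed through unchanged as the
    chroma prediction value."""
    luma_pred = [alpha_y * s + beta_y for s in luma_ref_recon]  # S13
    u_pred = list(u_ref_recon)  # no chroma compensation
    v_pred = list(v_ref_recon)
    return luma_pred, u_pred, v_pred
```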
In some embodiments of the present disclosure, the luminance prediction value of the current image block is obtained using formula (1):

$\hat{Y}_{cur} = \alpha_Y \cdot Y'_{ref} + \beta_Y$    (1)

where $\hat{Y}_{cur}$ is the luminance prediction value of the current image block, $Y'_{ref}$ is the luminance reconstruction value of the reference block corresponding to the current image block, $\alpha_Y$ is the scaling coefficient of the color channel of the luminance component, and $\beta_Y$ is the offset of the luminance component.
Specifically, in some embodiments, a linear illumination transformation model is used to perform illumination compensation on the luminance reconstruction value of the reference block corresponding to the current image block in the inter-frame prediction process, where $\alpha_Y$ and $\beta_Y$ can be derived by linear regression from the neighboring reference pixels of the current image block in the current frame and the neighboring reference pixels of the corresponding reference block. For example, referring to fig. 2, 2a in fig. 2 shows the current image block and its neighboring reference pixels, 2b in fig. 2 shows the corresponding reference block and its neighboring reference pixels, and the solid circles in fig. 2 are the neighboring reference pixels; minimizing the squared error between the neighboring reference pixels of the current image block and those of the corresponding reference block yields the parameters $\alpha_Y$ and $\beta_Y$.
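The linear-regression derivation of alpha_Y and beta_Y from the neighboring pixels can be sketched with the closed-form least-squares solution. This is a simplification: real codecs typically use integer arithmetic and subsampled neighbor sets.

```python
def fit_lic_params(cur_neighbors, ref_neighbors):
    """Least-squares fit of cur ~= alpha * ref + beta over the neighboring
    reference pixels of the current block and the reference block."""
    n = len(ref_neighbors)
    mean_r = sum(ref_neighbors) / n
    mean_c = sum(cur_neighbors) / n
    var_r = sum((r - mean_r) ** 2 for r in ref_neighbors)
    cov = sum((r - mean_r) * (c - mean_c)
              for r, c in zip(ref_neighbors, cur_neighbors))
    alpha = cov / var_r if var_r else 1.0  # degenerate case: identity scale
    beta = mean_c - alpha * mean_r
    return alpha, beta
```

When the current block's neighbors are exactly twice the reference neighbors plus one, the fit recovers alpha = 2 and beta = 1.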
In the prior art, illumination compensation is usually performed simultaneously on all three color components (Y, U, V) of a coding unit that requires it. However, a characteristic of UGC video is that illumination changes between image frames while chrominance changes little or not at all; that is, luminance varies but chrominance remains essentially unchanged. Therefore, in some embodiments of the present disclosure, only the luminance component Y of the current image block undergoes local illumination compensation; the chrominance components (U, V) are not compensated, and the chrominance components of the reference block corresponding to the current image block are used directly for prediction, that is, $\hat{U}_{cur} = U'_{ref}$ and $\hat{V}_{cur} = V'_{ref}$, where $\hat{U}_{cur}$ and $\hat{V}_{cur}$ are the prediction values of the U and V components of the chrominance of the current image block, and $U'_{ref}$ and $V'_{ref}$ are the reconstruction values of the U and V components of the chrominance of the reference block in the reference frame corresponding to the current image block. In this way, not only are data processing complexity reduced and coding/decoding efficiency improved, but the accuracy of the chrominance prediction value and the compression performance are also improved.
In some embodiments of the disclosure, determining the luminance prediction value and the chrominance prediction value of the current image block includes: determining whether the current frame is a target frame or a non-target frame; if the current frame is a target frame, the luminance prediction value of the current image block is the luminance compensation value of the reference block, and the chrominance prediction value of the current image block is the chrominance reconstruction value of the reference block.
If the current frame is a non-target frame, the luminance prediction value and the chrominance prediction value of the current image block are respectively equal to the luminance reconstruction value and the chrominance reconstruction value of the reference block; or, the luminance prediction value and the chrominance prediction value of the current image block are respectively equal to the luminance compensation value and the chrominance compensation value of the reference block, where the chrominance compensation value is a value obtained by performing illumination compensation on the chrominance reconstruction value of the reference block.
Specifically, in this embodiment, image frames are divided into target frames and non-target frames. When the current frame is a target frame, only the luminance prediction value is subjected to illumination compensation. When determining the luminance prediction value and the chrominance prediction value of an image block in a non-target frame, there are two possible ways: one is to directly use the luminance reconstruction value and the chrominance reconstruction value of the reference block as the luminance prediction value and the chrominance prediction value; the other is to use the values obtained by respectively performing illumination compensation on the luminance reconstruction value and the chrominance reconstruction value of the reference block, that is, the luminance compensation value and the chrominance compensation value. It should be noted that a non-target frame may use either of these two manners of determining the luminance prediction value and the chrominance prediction value, and different non-target frames may use different manners.
In some embodiments, before determining the luminance prediction value and the chrominance prediction value of the current image block, the method further includes: determining that the value of the switch flag bit carried by the current frame is a first value.
Specifically, in this embodiment, a switch flag bit is set in each image frame and serves as a frame-level switch for determining whether to perform illumination compensation on the chrominance prediction value. When the switch flag bit of the current frame is equal to the first value, the chrominance prediction value of the current image block is the chrominance reconstruction value of the reference block. The switch flag bit may be set arbitrarily, for example, setting the switch flag bit of one frame in every two frames to the first value. When the switch flag bit of the current frame is set to the second value, the luminance compensation value and the chrominance compensation value of the reference block of the reference frame are used as the luminance prediction value and the chrominance prediction value of the current image block respectively, where the chrominance compensation value is obtained by performing illumination compensation on the chrominance reconstruction value of the reference block. The switch flag bit may, for example, be a frame-level switch (lic_sep_flag) set in the image header: the switch flag bit of a target frame may be set to the first value and the switch flag bit of a non-target frame may be set to the second value, thereby indicating the type of each image frame for inter-frame prediction. When the image processing method is used for video encoding, each image frame is first determined to be a target frame or a non-target frame, and the corresponding switch flag bit is set according to the result; when the image processing method is used for video decoding, the value of the switch flag bit of the image frame is first identified, and how to obtain the luminance prediction value and the chrominance prediction value of the current image block is determined according to that value.
In some embodiments of the present disclosure, the switch flag bit is set only in target frames of the video; specifically, in this embodiment, no flag is set in non-target frames, so whenever the switch flag bit is identified, the chrominance prediction value of an image block of the current frame is determined to be equal to the chrominance reconstruction value of the reference block. Similarly, in some embodiments of the present disclosure, the switch flag bit is set only in non-target frames of the video; in that case, whenever the switch flag bit is not identified, the chrominance prediction value of an image block of the current frame is determined to be equal to the chrominance reconstruction value of the reference block.
In some embodiments of the present disclosure, the value of the switch flag bit carried by the current frame is determined according to the difference between the luminance component of the current frame and the luminance component of the reference frame, and the difference between the chrominance component of the current frame and the chrominance component of the reference frame. Specifically, these differences indicate the magnitudes of change of the luminance component and the chrominance component relative to the reference frame. If the magnitudes of change of the two components are similar, the luminance and chrominance components can be processed in a similar manner; otherwise, they should be processed in different manners.
In some embodiments of the present disclosure, when a difference between a first change value and a second change value is greater than a difference threshold, a value of a switch flag carried by the current frame is a first value, where the first change value is a difference between a luminance component of the current frame and a luminance component of the reference frame, and the second change value is a difference between a chrominance component of the current frame and a chrominance component of the reference frame. Specifically, the first change value is an absolute value of a difference between luminance components of the current frame and the reference frame, and likewise, the second change value is an absolute value of a difference between chrominance components of the current frame and the reference frame.
Specifically, this embodiment defines a criterion for determining whether to perform illumination compensation only on the luminance reconstruction value, that is, how to determine whether the scheme set forth in the embodiments of the present disclosure (applying illumination compensation to the luminance reconstruction value but not to the chrominance reconstruction value) applies to a given image frame.
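Under the criterion above, the flag decision might be sketched as follows. Using per-frame mean values as the "difference between components" is an assumption made for illustration; the patent leaves the exact statistic open.

```python
# Hedged sketch of the decision rule: the switch flag takes the first
# value when the luminance change and the chrominance change between
# the current frame and its reference differ by more than a threshold.
# Mean-based change values are an assumed concrete choice.

def switch_flag(cur_y, ref_y, cur_c, ref_c, threshold, first=1, second=0):
    mean = lambda v: sum(v) / len(v)
    first_change = abs(mean(cur_y) - mean(ref_y))   # luminance change value
    second_change = abs(mean(cur_c) - mean(ref_c))  # chrominance change value
    return first if abs(first_change - second_change) > threshold else second
```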
In order to better explain the image processing method according to the present embodiment, a specific embodiment is provided below by taking the image processing method according to the present disclosure as an example for a video encoding end.
In the video coding process, the image frames of the video are divided into equal-sized, non-overlapping largest coding units. Then, with each largest coding unit as a node, different kinds of recursive tree partitioning, such as quadtree, binary-tree, and ternary-tree partitioning, can be performed to form coding units. The coding unit is the basic unit of video coding, and each coding unit may contain one luminance block (Y) and two chrominance blocks (U, V). Video coding performance comes from the removal of data redundancy. Inter-frame prediction can effectively remove temporal redundancy, because the content of successive frames of a video is similar; illumination change, however, can greatly reduce inter-frame coding efficiency. When inter-frame prediction is performed, the cumulative differences between the current frame and the reference frame can be compared through frame-level histogram statistics. If the cumulative change difference of the luminance component between the current frame and the reference frame is similar to the cumulative change difference of the chrominance component, the current frame is a non-target frame; conversely, if the cumulative change difference of the luminance component differs greatly from that of the chrominance component, the current frame is a target frame. Whether the current frame is a target frame or a non-target frame is indicated by setting a flag bit (switch flag) in each image frame; for example, a frame-level switch (lic_sep_flag) may identify whether the current frame is a target frame. lic_sep_flag can be determined from the histogram statistics of the current encoded frame and all frames in the reference frame list, and is transmitted as a frame-level flag.
If the current frame is a non-target frame, lic_sep_flag is 0; if the current frame is a target frame, lic_sep_flag is 1. The following describes a method for determining the luminance prediction value and the chrominance prediction value. In this embodiment, assuming the current frame is a target frame, the luminance prediction value of the current image block is calculated by the formula Ŷ_cur = α_Y · Y′_ref + β_Y, and the chrominance prediction value of the current image block is calculated by the formulas Û_cur = U′_ref and V̂_cur = V′_ref. The meaning of the parameters in the three formulas is the same as described above; that is, the luminance reconstruction value of the reference block corresponding to the current image block, after linear illumination compensation, is used as the luminance prediction value of the current image block, and the chrominance reconstruction value of the reference block corresponding to the current image block is used directly as the chrominance prediction value of the current image block. Thus, for an image block of a target frame, illumination compensation is performed only on the luminance prediction value and not on the chrominance prediction value, whereas for a non-target frame of the video, illumination compensation is performed on both the luminance prediction value and the chrominance prediction value.
When the image processing method provided in the embodiment of the present disclosure is used at a video decoding end, in a video decoding process, a switch flag bit of a current frame is obtained first, whether the current frame is a target frame or a non-target frame is determined according to whether the value of the switch flag bit is a first value, and then a luminance predicted value and a chrominance predicted value of the current frame are determined, where the method for determining the luminance predicted value and the chrominance predicted value is the same as the embodiment described above when the method is used for video encoding, and is not repeated herein.
When video is encoded and decoded, inter-frame prediction needs to be performed, and when inter-frame prediction is performed, the motion vectors corresponding to a coding unit need to be obtained. A coding unit comprises a luminance block (i.e., a luminance coding unit) and chrominance blocks (i.e., chrominance coding units), and because the luminance sampling rate and the chrominance sampling rate differ, the numbers of luminance and chrominance samples in one coding unit are not necessarily the same. In the prior art, the motion vector of the luminance component of the coding unit is determined first, and then the motion vector of the luminance component is scaled according to the luminance sampling rate and the chrominance sampling rate to obtain a co-located motion vector, which is used as the motion vector of the chrominance component. Taking the case where the chrominance sampling rate is half the luminance sampling rate in both the horizontal and vertical directions, i.e., the 4:2:0 sampling format, the motion vector of the luminance component needs to be multiplied by 2 to obtain the motion vector of the chrominance component; that is, the motion vector of the chrominance component is obtained from the motion vector of the luminance component according to the sampling rates. However, the co-located motion vector is not necessarily the motion vector that best matches the chrominance block of the current coding block, and directly using it as the motion vector of the chrominance component may reduce the accuracy of the chrominance prediction and cause distortion.
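The prior-art co-located motion vector derivation can be sketched as follows. The factor-of-two scaling for the 4:2:0 case follows the convention stated in the text; the exact scaling factor and direction in a real codec depend on its motion-vector unit convention, so this is an illustrative assumption.

```python
# Sketch of deriving the co-located chrominance motion vector by
# scaling the luminance motion vector by the ratio of the sampling
# rates (x2 for the 4:2:0 example described in the text).

def colocated_chroma_mv(luma_mv, luma_rate=2, chroma_rate=1):
    """Scale (mvx, mvy) by luma_rate / chroma_rate in each direction."""
    factor = luma_rate // chroma_rate  # 2 for the 4:2:0 example
    return (luma_mv[0] * factor, luma_mv[1] * factor)
```

When the luminance and chrominance sampling rates are equal, the factor is 1 and the co-located motion vector equals the luminance motion vector, matching the case noted later in the text.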
In some embodiments of the present disclosure, an image processing method is also provided, as shown in fig. 3, where the method in this embodiment includes:
s21: a motion vector for a luma component and a motion vector for a chroma component of the current image block are determined.
Specifically, the current image block is located in the current frame. The motion vector of the chrominance component is different from the co-located motion vector, where the co-located motion vector is the motion vector of the luminance component, or is the motion vector obtained by scaling the motion vector of the luminance component according to the luminance sampling rate and the chrominance sampling rate; in some embodiments, when the luminance sampling rate and the chrominance sampling rate are the same, the co-located motion vector is equal to the motion vector of the luminance component. The image processing method in this embodiment may be used in the inter-frame prediction process of video decoding or video encoding; the current frame may be the image frame of the video currently being encoded or decoded, and the current image block may be any coding unit in the current frame. The method of determining the motion vector of the luminance component in this embodiment may be a method in the related art, which is not limited here.
S22: a luminance reference block of a luminance component of the current image block and a chrominance reference block of a chrominance component of the current image block are determined from a motion vector of the luminance component and a motion vector of the chrominance component of the current image block.
Specifically, the current frame corresponds to a reference frame, and the luminance reference block and the chrominance reference block are located in the reference frame corresponding to the current frame.
S23: and carrying out illumination compensation on the brightness reconstruction value of the brightness reference block to obtain a brightness compensation value of the brightness reference block.
Specifically, the luminance compensation value is a luminance value after illumination compensation is performed on the luminance reconstruction value of the luminance reference block.
S24: and carrying out illumination compensation on the chroma reconstruction value of the chroma reference block to obtain a chroma compensation value of the chroma reference block.
Specifically, the chroma compensation value is a chroma value obtained by performing illumination compensation on a chroma reconstruction value of the chroma reference block.
S25: a luminance prediction value and a chrominance prediction value of the current image block are determined.
Specifically, the luminance prediction value of the current image block is the luminance compensation value of the luminance reference block, and the chrominance prediction value of the current image block is the chrominance compensation value of the chrominance reference block. The method proposed in the embodiments of the present disclosure is used in the inter-frame prediction process. Conventionally, when inter-frame prediction is performed on the luminance component and the chrominance component of a coding unit, the motion vector of the luminance component is obtained first and is then scaled according to the chrominance sampling rate and the luminance sampling rate to obtain the co-located motion vector (when the luminance sampling rate and the chrominance sampling rate are the same, the motion vector of the luminance component is equal to the co-located motion vector), and the co-located motion vector is used as the chrominance motion vector of the coding unit.
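Steps S21 to S25 can be sketched end to end as below, with the chrominance components given their own motion vector (here defaulted to (0, 0), as one embodiment allows) and their own illumination compensation. fetch_block and all parameter values are illustrative placeholders, not the patent's implementation.

```python
# Toy sketch of S21-S25: separate luma and chroma motion vectors,
# separate illumination compensation per component.

def fetch_block(plane, mv, size=2):
    """Read a size x size block from a 2D sample plane at offset (x, y)."""
    x, y = mv
    return [plane[y + r][x + c] for r in range(size) for c in range(size)]

def inter_predict(ref_frame, luma_mv, chroma_mv=(0, 0),
                  lic_y=(1, 0), lic_u=(1, 0), lic_v=(1, 0)):
    y_ref = fetch_block(ref_frame["Y"], luma_mv)    # S22: luma reference block
    u_ref = fetch_block(ref_frame["U"], chroma_mv)  # S22: chroma reference blocks
    v_ref = fetch_block(ref_frame["V"], chroma_mv)
    comp = lambda blk, ab: [ab[0] * s + ab[1] for s in blk]  # S23/S24: LIC
    return comp(y_ref, lic_y), comp(u_ref, lic_u), comp(v_ref, lic_v)  # S25
```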
In some embodiments of the present disclosure, the motion vector of the chroma reference block is the same as the motion vector of the chroma component of the current image block. Specifically, in this embodiment, the corresponding motion vector is determined for the chroma component of the current image block alone, instead of using a co-located motion vector, where the method for determining the chroma reference block may use a Full Search (FS).
In some embodiments of the present disclosure, the motion vector of the chroma component of the current image block is (0, 0). Specifically, defaulting the motion vector of the chroma component of the current image block to (0, 0) in this embodiment can save the coding overhead of the motion vector and the computational power consumed in encoding and decoding.
In some embodiments of the present disclosure, the motion vector of the chrominance component of the current image block is selected from a vector candidate list, and the motion vector of the chrominance component is recorded by encoding an index of the motion vector of the chrominance component in the vector candidate list. Specifically, in this embodiment, a vector candidate list is separately established for the chrominance component, so as to improve the calculation speed of the motion vector of the chrominance component, where the vector candidate list may be a candidate list of Skip mode or Merge mode.
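Selecting the chroma motion vector from a candidate list and recording only its index could be sketched as below; the matching cost function (for example, SAD against the current chroma block) is an assumed detail, as is every name in the sketch.

```python
# Sketch: pick the chroma motion vector from a candidate list by
# minimal cost; only the index into the list needs to be encoded.

def select_chroma_mv(candidates, cost_fn):
    best_idx = min(range(len(candidates)),
                   key=lambda i: cost_fn(candidates[i]))
    return candidates[best_idx], best_idx  # the index is what gets encoded
```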
In some embodiments of the present disclosure, a current image block corresponds to a luminance reference block in a reference frame corresponding to a target frame; the luminance prediction value of the current image block is equal to the value after illumination compensation of the luminance reconstruction value of the luminance reference block. Specifically, the reference frame corresponding to the target frame may be an adjacent frame of the target frame, for example, a frame preceding or following the target frame in the time domain, and the luminance reference block may be a coding block in the reference frame that is closest to the current image block in the luminance component, where the luminance reconstruction value corresponding to the luminance reference block obtains the luminance prediction value of the current image block after illumination compensation.
In some embodiments of the present disclosure, a current image block corresponds to a chroma reference block in a reference frame corresponding to a target frame; the chroma prediction value of the current image block is equal to a value after illumination compensation of the chroma reconstruction value of the chroma reference block, or the chroma prediction value of the current image block is equal to the chroma reconstruction value of the chroma reference block. Specifically, the chroma prediction value of the current image block can be obtained in the same illumination compensation mode, and the chroma reconstruction value of the chroma reference block can also be directly adopted, so that the data processing capacity is reduced.
In some embodiments of the present disclosure, the luminance prediction value of the current image block is obtained using equation (2):
Ŷ_cur = α_Y · Y′_ref + β_Y (2)
where Ŷ_cur is the luminance prediction value of the current image block, Y′_ref is the luminance reconstruction value of the luminance reference block, α_Y is the scaling coefficient of the color channel of the luminance component, and β_Y is the offset of the luminance component. Specifically, in this embodiment, the linear illumination transformation model is used to perform illumination compensation on the luminance reconstruction value of the luminance reference block during inter-frame prediction, where α_Y and β_Y can be derived by linear regression from the neighboring reference pixels of the current image block and the neighboring reference pixels of the corresponding reference block.
In some embodiments of the present disclosure, the following equations (3) and (4) are used to obtain the chrominance prediction value of the current image block:
Û_cur = α_U · U′_ref + β_U (3)
V̂_cur = α_V · V′_ref + β_V (4)
where Û_cur is the U component of the chrominance prediction value of the current image block, V̂_cur is the V component of the chrominance prediction value of the current image block, U′_ref is the U component of the chrominance reconstruction value of the chrominance reference block, V′_ref is the V component of the chrominance reconstruction value of the chrominance reference block, α_U is the scaling coefficient of the color channel of the U component, α_V is the scaling coefficient of the color channel of the V component, β_U is the offset of the U component, and β_V is the offset of the V component. In some embodiments, a linear illumination transformation model is used to perform illumination compensation on the reconstruction values of the reference blocks, where α_U, α_V, β_U, and β_V can be derived by linear regression from the neighboring reference pixels of the current image block and the neighboring reference pixels of the corresponding reference block.
In some embodiments of the disclosure, the determining the luma and chroma predictors for the current image block includes: and determining that the current frame is a target frame or a non-target frame, and if the current image frame is the target frame, the brightness predicted value of the current image block is the brightness compensation value of the brightness reference block, and the chromaticity predicted value of the current image block is the chromaticity reconstruction value of the chromaticity reference block.
If the current frame is a non-target frame, the luminance prediction value and the chrominance prediction value of the current image block are respectively equal to the luminance reconstruction value of the luminance reference block and the chrominance reconstruction value of the chrominance reference block; or, the luminance prediction value and the chrominance prediction value of the current image block are respectively equal to the luminance compensation value of the luminance reference block and the chrominance compensation value of the chrominance reference block, where the chrominance compensation value is a value obtained by performing illumination compensation on the chrominance reconstruction value of the chrominance reference block.
Specifically, in this embodiment, image frames are divided into target frames and non-target frames. When determining the luminance prediction value and the chrominance prediction value of a coding block in a non-target frame, there are two possible ways: one is to directly use the luminance reconstruction value and the chrominance reconstruction value of the reference blocks corresponding to the coding block; the other is to use the values obtained by respectively performing illumination compensation on the luminance reconstruction value and the chrominance reconstruction value of the reference blocks corresponding to the coding block. It should be noted that a non-target frame may use either of these two manners of determining the luminance prediction value and the chrominance prediction value, and different non-target frames may use different manners.
In some embodiments of the present disclosure, determining a motion vector of a luminance component and a motion vector of a chrominance component of a current image block includes:
determining the value of a switch flag bit carried by the current frame;
if the value of the switch flag bit of the current frame is a first value, the brightness predicted value of the current image block is a brightness compensation value of a brightness reference block, and the chroma predicted value of the current image block is a chroma reconstruction value of the chroma reference block;
and if the value of the switch flag bit of the current frame is the second value, the luminance prediction value of the current image block is the luminance compensation value of the luminance reference block, and the chrominance prediction value of the current image block is the chrominance compensation value of the chrominance reference block.
Specifically, in some embodiments, a switch flag bit is set in each image frame and serves as a frame-level switch for determining how the chrominance prediction value and the luminance prediction value of the current image block are calculated; for example, the switch flag bit may be set in the video header, the first value may be 1, and the second value may be 0, so as to determine the calculation method of the luminance prediction value and the chrominance prediction value of the current image block in the inter-frame prediction process. When the image processing method provided by the disclosure is used for video encoding, the value of the switch flag bit of each image frame is determined first, and then the luminance prediction value and the chrominance prediction value are determined: the luminance prediction value is equal to the luminance compensation value of the reference block, and the chrominance prediction value is either the chrominance compensation value or the chrominance reconstruction value, depending on the value of the flag.
In some embodiments, the value of the switch flag bit carried by the current frame is determined according to the difference between the luminance component of the current frame and the luminance component of the reference frame, and the difference between the chrominance component of the current frame and the chrominance component of the reference frame. Specifically, when the difference between the luminance-component change and the chrominance-component change of the current frame relative to the reference frame is small, the two components follow the same trend of change and the same processing method can be adopted; otherwise, different processing methods should be adopted.
In some embodiments of the present disclosure, when a difference between a first change value and a second change value is greater than a difference threshold, a value of a switch flag carried by the current frame is a first value, where the first change value is a difference between a luminance component of the current frame and a luminance component of the reference frame, and the second change value is a difference between a chrominance component of the current frame and a chrominance component of the reference frame.
Specifically, this embodiment defines the criterion for the value of the switch flag, that is, how to determine whether the chrominance prediction value of an image block should be subjected to illumination compensation. This embodiment can be used in the video encoding process: for any image frame currently being encoded, the difference between the first change value and the second change value of the image frame is determined and compared with the difference threshold, and the image frame is set as a target frame or a non-target frame according to the comparison result. When the difference is not greater than the difference threshold, the trends of change of the luminance component and the chrominance component of the currently encoded image frame are similar; when the difference is greater than the difference threshold, the trends of change of the luminance component and the chrominance component differ greatly, a similar processing manner should not be used, and the motion vector of the chrominance component should therefore not be the same as the co-located motion vector.
In order to better explain the image processing method according to the present embodiment, a specific embodiment is provided below by taking the image processing method according to the present disclosure as an example for a video encoding end.
In video coding, the image frames of a video are divided into equal-sized, non-overlapping largest coding units. Subsequently, with each largest coding unit as a node, different kinds of recursive tree partitioning, such as quadtree, binary-tree, and ternary-tree partitioning, can be performed to form coding units. The coding unit is the basic unit of video coding, and each coding unit may contain one luminance block (Y) and two chrominance blocks (U, V). Video coding performance comes from the removal of data redundancy. Inter-frame prediction can effectively remove temporal redundancy, because the content of successive frames of a video is similar; illumination change, however, can greatly reduce inter-frame coding efficiency. When inter-frame prediction is performed, the cumulative differences between the current frame and the reference frame can be compared through frame-level histogram statistics. If the cumulative change difference of the luminance component between the current frame and the reference frame is similar to the cumulative change difference of the chrominance component, the current frame is a non-target frame; otherwise, if the cumulative change difference of the luminance component differs greatly from that of the chrominance component, the current frame is a target frame. Whether the current frame is a target frame or a non-target frame is indicated by setting a switch flag in each image frame; for example, a frame-level switch (lic_sep_flag) may be used as the switch flag. lic_sep_flag can be determined from the histogram statistics of the current encoded frame and all frames in the reference frame list, and is transmitted as a frame-level flag. If the current frame is a non-target frame, lic_sep_flag is set to 0; if the current frame is a target frame, lic_sep_flag is set to 1.
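A hedged sketch of the frame-level histogram statistic deciding lic_sep_flag follows. Using the sum of absolute bin-wise histogram differences as the "cumulative difference" is an assumed concrete choice; the patent only requires frame-level histogram statistics.

```python
# Sketch: classify a frame as target (1) / non-target (0) by comparing
# the cumulative luma histogram change with the cumulative chroma
# histogram change between the current frame and a reference frame.

def hist(samples, bins=256):
    h = [0] * bins
    for s in samples:
        h[s] += 1
    return h

def lic_sep_flag(cur_y, ref_y, cur_c, ref_c, threshold):
    sad = lambda a, b: sum(abs(x - y) for x, y in zip(hist(a), hist(b)))
    luma_diff = sad(cur_y, ref_y)      # cumulative luma change
    chroma_diff = sad(cur_c, ref_c)    # cumulative chroma change
    return 1 if abs(luma_diff - chroma_diff) > threshold else 0
```

In a real encoder this would be evaluated against all frames in the reference frame list rather than a single reference, per the text.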
The following describes the method for determining the luminance prediction value and the chrominance prediction value in detail. In this embodiment, when lic_sep_flag of the current frame is 1, for the luminance prediction value of the current image block, a linear illumination compensation model (α_Y, β_Y) is derived using the peripheral pixels of the luminance reference block pointed to by the motion vector mv_Y and the peripheral reference pixels of the current image block. The luminance prediction value of the luminance block after illumination compensation is calculated by equation (2):
Ŷ_cur = α_Y · Y′_ref + β_Y (2)
the motion vector information of the chrominance components U and V may be derived as (0, 0) or the motion vector of the chrominance component may be from a Merge/Skip candidate list, followed by a linear illumination compensation model (alpha) in accordance with the method consistent with the prediction of the luminance component U ,β U) and (αV ,β V ) And calculated according to formulas (3) and (4):
wherein ,βV And/or beta V Can be deduced by default to be 1, beta V And/or beta V May be deduced to be 0 by default.
When the image processing method provided by the embodiment of the disclosure is used at the video decoding end, in the video decoding process, for the current image frame, the value of the switch flag bit of the current image frame is first acquired, and according to the value of the switch flag bit, it is determined whether the predicted value of the chrominance component is equal to the chrominance reconstruction value or equal to the chrominance compensation value of the chrominance reference block, wherein the luminance predicted value is equal to the luminance compensation value of the luminance reference block.
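The decoder-side selection in the paragraph above can be sketched as follows; function and argument names are illustrative, and the flag semantics (1 = target frame, chroma prediction uses the plain reconstruction) follow the text.

```python
def predict_block(flag, luma_rec, chroma_rec, lic_luma, lic_chroma):
    """Decoder-side selection: the luma predictor is always the
    illumination-compensated value; the chroma predictor is the plain
    reconstruction when the frame-level flag is 1 (target frame),
    otherwise the compensated value. lic_luma / lic_chroma stand for the
    per-component linear compensation of formulas (2)-(4)."""
    luma_pred = lic_luma(luma_rec)
    chroma_pred = chroma_rec if flag == 1 else lic_chroma(chroma_rec)
    return luma_pred, chroma_pred
```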
An embodiment of the present disclosure proposes an image processing apparatus, as shown in fig. 4, including:
a determining unit 31, configured to determine a reference block of a current image block, where the current image block is located in a current frame, and the reference block is located in a reference frame of the current frame;
the determining unit 31 is further configured to determine a luminance reconstruction value of the reference block and a chrominance reconstruction value of the reference block;
A processing unit 32, configured to perform illumination compensation on the luminance reconstruction value of the reference block, so as to obtain a luminance compensation value of the reference block;
the determining unit 31 is further configured to determine a luminance prediction value and a chrominance prediction value of the current image block, where the luminance prediction value of the current image block is a luminance compensation value of the reference block, and the chrominance prediction value of the current image block is a chrominance reconstruction value of the reference block.
An image processing apparatus according to an embodiment of the present disclosure, as shown in fig. 5, includes:
a determining module 41, configured to determine a motion vector of a luminance component and a motion vector of a chrominance component of a current image block, where the current image block is located in a current frame, and the motion vector of the chrominance component is different from a co-located motion vector, and the co-located motion vector is the motion vector of the luminance component, or the co-located motion vector is a motion vector obtained by scaling the motion vector of the luminance component according to a luminance sampling rate and a chrominance sampling rate;
a determining module 41, configured to determine a luminance reference block of a luminance component of the current image block and a chrominance reference block of a chrominance component of the current image block according to a motion vector of the luminance component and a motion vector of the chrominance component of the current image block, where the luminance reference block and the chrominance reference block are located in a reference frame of the current frame;
a determining module 41, configured to determine a luminance reconstruction value of the luminance reference block and a chrominance reconstruction value of the chrominance reference block;
a processing module 42, configured to perform illumination compensation on the luminance reconstruction value of the luminance reference block to obtain a luminance compensation value of the luminance reference block;
The processing module 42 is further configured to perform illumination compensation on the chroma reconstruction value of the chroma reference block to obtain a chroma compensation value of the chroma reference block;
the determining module 41 is further configured to determine a luminance prediction value and a chrominance prediction value of the current image block, where the luminance prediction value of the current image block is the luminance compensation value of the luminance reference block, and the chrominance prediction value of the current image block is the chrominance compensation value of the chrominance reference block.
Since the apparatus embodiments essentially correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant details. The apparatus embodiments described above are merely illustrative; the modules illustrated as separate modules may or may not be physically separate. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement it without creative effort.
The method and apparatus of the present disclosure are described above based on the embodiments and applications. In addition, the present disclosure also provides a terminal and a storage medium, which are described below.
Referring now to fig. 6, a schematic diagram of an electronic device (e.g., a terminal device or server) 800 suitable for use in implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in the drawings is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 6, the electronic device 800 may include a processing means (e.g., a central processor, a graphics processor, etc.) 801, which may perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 802 or a program loaded from a storage means 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the electronic device 800 are also stored. The processing device 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
In general, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 807 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, etc.; storage 808 including, for example, magnetic tape, hard disk, etc.; communication means 809. The communication means 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. While an electronic device 800 having various means is shown, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via communication device 809, or installed from storage device 808, or installed from ROM 802. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 801.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods of the present disclosure described above.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. In some cases, the name of a unit does not constitute a limitation of the unit itself.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided an image processing method including:
determining a reference block corresponding to a current image block of a target frame in the video in a reference frame corresponding to the target frame;
and carrying out illumination compensation on the brightness reconstruction value of the reference block corresponding to the current image block to obtain a brightness prediction value of the current image block, and taking the chromaticity reconstruction value of the reference block corresponding to the current image block as the chromaticity prediction value of the current image block.
In some embodiments, the present disclosure provides an image processing method, comprising:
determining a reference block of a current image block, wherein the current image block is positioned in a current frame, and the reference block is positioned in a reference frame of the current frame;
determining a luminance reconstruction value of the reference block and a chrominance reconstruction value of the reference block;
performing illumination compensation on the brightness reconstruction value of the reference block to obtain a brightness compensation value of the reference block;
and determining a luminance predicted value and a chrominance predicted value of the current image block, wherein the luminance predicted value of the current image block is a luminance compensation value of the reference block, and the chrominance predicted value of the current image block is a chrominance reconstruction value of the reference block.
In some embodiments, further comprising: and determining the value of the switch flag bit carried by the current frame as a first value.
In some embodiments, the value of the switch flag bit carried by the current frame is determined according to the difference between the luminance component of the current frame and the luminance component of the reference frame, and the difference between the chrominance component of the current frame and the chrominance component of the reference frame.
In some embodiments, in the case that the difference between the first change value and the second change value is greater than the difference threshold, the value of the switch flag bit carried by the current frame is the first value, where the first change value is the difference between the luminance component of the current frame and the luminance component of the reference frame, and the second change value is the difference between the chrominance component of the current frame and the chrominance component of the reference frame.
In some embodiments, the luminance prediction value of the current image block is determined according to formula (1):

pred_Y = α_Y × rec_Y + β_Y (1)

wherein pred_Y is the luminance prediction value of the current image block, rec_Y is the luminance reconstruction value of the reference block, α_Y is the color-channel scaling coefficient of the luminance component, and β_Y is the offset of the luminance component.
In some embodiments, the determining the luminance prediction value and the chrominance prediction value of the current image block includes: determining whether the current frame is a target frame or a non-target frame; if the current frame is a target frame, the luminance prediction value of the current image block is the luminance compensation value of the reference block, and the chrominance prediction value of the current image block is the chrominance reconstruction value of the reference block.
If the current frame is a non-target frame, the luminance prediction value and the chrominance prediction value of the current image block are respectively equal to the luminance reconstruction value and the chrominance reconstruction value of the reference block; or, the luminance prediction value and the chrominance prediction value of the current image block are respectively equal to the luminance compensation value and the chrominance compensation value of the reference block, where the chrominance compensation value is a value obtained by performing illumination compensation on the chrominance reconstruction value of the reference block.
In some embodiments, the present disclosure provides an image processing method, comprising:
determining a motion vector of a luminance component and a motion vector of a chrominance component of a current image block, wherein the current image block is located in a current frame, the motion vector of the chrominance component is different from a co-located motion vector, and the co-located motion vector is the motion vector of the luminance component, or the co-located motion vector is a motion vector obtained by scaling the motion vector of the luminance component according to a luminance sampling rate and a chrominance sampling rate;
determining a luminance reference block of the luminance component of the current image block and a chrominance reference block of the chrominance component of the current image block according to the motion vector of the luminance component and the motion vector of the chrominance component of the current image block, wherein the luminance reference block and the chrominance reference block are positioned in a reference frame of the current frame;
determining a luminance reconstruction value of the luminance reference block and a chrominance reconstruction value of the chrominance reference block;
performing illumination compensation on the brightness reconstruction value of the brightness reference block to obtain a brightness compensation value of the brightness reference block;
performing illumination compensation on the chromaticity reconstruction value of the chromaticity reference block to obtain a chromaticity compensation value of the chromaticity reference block;
and determining a luminance predicted value and a chrominance predicted value of the current image block, wherein the luminance predicted value of the current image block is a luminance compensation value of a luminance reference block, and the chrominance predicted value of the current image block is a chrominance compensation value of a chrominance reference block.
In some embodiments, the present disclosure provides an image processing method in which a motion vector of a chroma reference block is identical to a motion vector of a chroma component of a current image block.
In some embodiments, the present disclosure provides an image processing method, wherein a motion vector of a chrominance component of a current image block is (0, 0).
In some embodiments, the present disclosure provides an image processing method in which a motion vector of a chrominance component of a current image block is determined according to a vector candidate list, and an index of the motion vector of the chrominance component in the vector candidate list is used to indicate the motion vector of the chrominance component.
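The two options above, a default (0, 0) chroma motion vector or one taken from a candidate list via a signaled index, can be sketched as follows; the argument names and list layout are illustrative only.

```python
def chroma_mv(candidates=None, index=None):
    """Chroma-component motion vector selection: default to (0, 0) when no
    candidate is signaled, otherwise take the candidate at the signaled
    index from a Merge/Skip-style list of (mv_x, mv_y) tuples."""
    if candidates is None or index is None:
        return (0, 0)
    return candidates[index]
```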
In some embodiments, the present disclosure provides an image processing method, where a luminance prediction value of a current image block is obtained using formula (2):
pred_Y = α_Y × rec_Y + β_Y (2)

wherein pred_Y is the luminance prediction value of the current image block, rec_Y is the luminance reconstruction value of the luminance reference block, α_Y is the color-channel scaling coefficient of the luminance component, and β_Y is the offset of the luminance component;
and/or the number of the groups of groups,
obtaining a chrominance prediction value of the current image block by adopting formula (3) and formula (4):

pred_U = α_U × rec_U + β_U (3)
pred_V = α_V × rec_V + β_V (4)

wherein pred_U is the U component of the chrominance prediction value of the current image block, pred_V is the V component of the chrominance prediction value of the current image block, rec_U is the U component of the chrominance reconstruction value of the chrominance reference block, rec_V is the V component of the chrominance reconstruction value of the chrominance reference block, α_U and α_V are the color-channel scaling coefficients of the U component and the V component respectively, and β_U and β_V are the offsets of the U component and the V component respectively.
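Formulas (2) through (4) apply the same linear model per component. A small sketch follows, using a hypothetical parameter layout in which any component without a derived model falls back to the defaults α = 1, β = 0.

```python
def compensate_yuv(rec_y, rec_u, rec_v, params):
    """Apply the per-component linear models of formulas (2)-(4):
    pred_C = alpha_C * rec_C + beta_C for C in {Y, U, V}. `params` is a
    hypothetical dict mapping 'Y'/'U'/'V' to (alpha, beta); a component
    with no derived model falls back to alpha = 1, beta = 0."""
    def lic(rec, key):
        alpha, beta = params.get(key, (1.0, 0.0))
        return [alpha * p + beta for p in rec]
    return lic(rec_y, 'Y'), lic(rec_u, 'U'), lic(rec_v, 'V')
```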
In some embodiments, the present disclosure provides an image processing method, wherein determining the luminance prediction value and the chrominance prediction value of the current image block includes:
determining the value of a switch flag bit carried by the current frame;
if the value of the switch flag bit of the current frame is the first value, the brightness predicted value of the current image block is the brightness compensation value of the brightness reference block, and the chromaticity predicted value of the current image block is the chromaticity reconstruction value of the chromaticity reference block;
If the value of the switch flag bit of the current frame is the second value, the brightness predicted value of the current image block is the brightness compensation value of the brightness reference block, and the chromaticity predicted value of the current image block is the chromaticity compensation value of the chromaticity reference block.
In some embodiments, the disclosure provides an image processing method, where the value of the switch flag bit carried by the current frame is determined according to the difference between the luminance component of the current frame and the luminance component of the reference frame, and the difference between the chrominance component of the current frame and the chrominance component of the reference frame.
In some embodiments, the disclosure provides an image processing method, where the value of a switch flag carried by a current frame is a first value when a difference between a first change value and a second change value is greater than a difference threshold, where the first change value is a difference between a luminance component of the current frame and a luminance component of a reference frame, and the second change value is a difference between a chrominance component of the current frame and a chrominance component of the reference frame.
In some embodiments, the present disclosure proposes an image processing apparatus including:
a determining unit, configured to determine a reference block of a current image block, where the current image block is located in a current frame, and the reference block is located in a reference frame of the current frame;
A determining unit, configured to determine a luminance reconstruction value of the reference block and a chrominance reconstruction value of the reference block;
the processing unit is used for carrying out illumination compensation on the brightness reconstruction value of the reference block to obtain a brightness compensation value of the reference block;
and the determining unit is further used for determining a luminance predicted value and a chrominance predicted value of the current image block, wherein the luminance predicted value of the current image block is a luminance compensation value of the reference block, and the chrominance predicted value of the current image block is a chrominance reconstruction value of the reference block.
In some embodiments, the present disclosure proposes an image processing apparatus including:
a determining module, configured to determine a motion vector of a luminance component and a motion vector of a chrominance component of a current image block, where the current image block is located in a current frame, the motion vector of the chrominance component is different from a co-located motion vector, and the co-located motion vector is the motion vector of the luminance component, or the co-located motion vector is a motion vector obtained by scaling the motion vector of the luminance component according to a luminance sampling rate and a chrominance sampling rate;
the determining module is further used for determining a luminance reference block of the luminance component of the current image block and a chrominance reference block of the chrominance component of the current image block according to the motion vector of the luminance component and the motion vector of the chrominance component of the current image block, wherein the luminance reference block and the chrominance reference block are positioned in a reference frame of the current frame;
The determining module is also used for determining a brightness reconstruction value of the brightness reference block and a chromaticity reconstruction value of the chromaticity reference block;
the processing module is used for carrying out illumination compensation on the brightness reconstruction value of the brightness reference block to obtain a brightness compensation value of the brightness reference block;
the processing module is also used for carrying out illumination compensation on the chromaticity reconstruction value of the chromaticity reference block to obtain a chromaticity compensation value of the chromaticity reference block;
the determining module is further configured to determine a luminance prediction value and a chrominance prediction value of the current image block, where the luminance prediction value of the current image block is a luminance compensation value of the luminance reference block, and the chrominance prediction value of the current image block is a chrominance compensation value of the chrominance reference block.
According to one or more embodiments of the present disclosure, there is provided a terminal including: at least one memory and at least one processor;
wherein the at least one memory is configured to store program code, and the at least one processor is configured to invoke the program code stored by the at least one memory to perform any of the methods described above.
According to one or more embodiments of the present disclosure, there is provided a storage medium for storing program code for performing the above-described method.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to the specific combinations of the features described above, and also covers other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, embodiments formed by interchanging the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (7)

1. An image processing method, comprising:
determining a reference block of a current image block, wherein the current image block is positioned in a current frame, and the reference block is positioned in a reference frame of the current frame;
determining a luminance reconstruction value of the reference block and a chrominance reconstruction value of the reference block;
performing illumination compensation on the brightness reconstruction value of the reference block to obtain a brightness compensation value of the reference block;
determining a luminance prediction value and a chrominance prediction value of the current image block;
wherein the determining the luminance prediction value and the chrominance prediction value of the current image block includes:
if the value of the switch flag bit carried by the current frame is a first value, the brightness predicted value of the current image block is the brightness compensation value of the reference block, and the chromaticity predicted value of the current image block is the chromaticity reconstruction value of the reference block;
if the value of the switch flag bit carried by the current frame is a second value, the luminance prediction value and the chrominance prediction value of the current image block are respectively equal to the luminance reconstruction value and the chrominance reconstruction value of the reference block; or the luminance prediction value and the chrominance prediction value of the current image block are respectively equal to the luminance compensation value and the chrominance compensation value of the reference block, and the chrominance compensation value is a value obtained by performing illumination compensation on the chrominance reconstruction value of the reference block;
the value of the switch flag bit carried by the current frame is determined according to the difference between the luminance component of the current frame and the luminance component of the reference frame and the difference between the chrominance component of the current frame and the chrominance component of the reference frame.
2. The image processing method according to claim 1, characterized by further comprising:
determining that the value of the switch flag bit carried by the current frame is the first value.
3. The image processing method according to claim 2, wherein, in a case where the difference between a first change value and a second change value is greater than a difference threshold, the value of the switch flag bit carried by the current frame is the first value, wherein the first change value is the difference between the luminance component of the current frame and the luminance component of the reference frame, and the second change value is the difference between the chrominance component of the current frame and the chrominance component of the reference frame.
4. The image processing method according to claim 1, wherein if the luminance prediction value of the current image block is the luminance compensation value of the reference block, the luminance prediction value of the current image block is determined according to formula (1):

pred_Y = α_Y · rec_Y + β_Y        (1)

wherein pred_Y is the luminance prediction value of the current image block, rec_Y is the luminance reconstruction value of the reference block, α_Y is the scaling coefficient of the color channel of the luminance component, and β_Y is the offset of the luminance component.
5. An image processing apparatus, comprising:
a determining unit, configured to determine a reference block of a current image block, where the current image block is located in a current frame, and the reference block is located in a reference frame of the current frame;
the determining unit is further configured to determine a luminance reconstruction value of the reference block and a chrominance reconstruction value of the reference block;
a processing unit, configured to perform illumination compensation on the luminance reconstruction value of the reference block to obtain a luminance compensation value of the reference block;
the determining unit is further configured to determine a luminance prediction value and a chrominance prediction value of the current image block;
wherein determining the luminance prediction value and the chrominance prediction value of the current image block includes:
if the value of the switch flag bit carried by the current frame is a first value, the luminance prediction value of the current image block is the luminance compensation value of the reference block, and the chrominance prediction value of the current image block is the chrominance reconstruction value of the reference block;
if the value of the switch flag bit carried by the current frame is a second value, the luminance prediction value and the chrominance prediction value of the current image block are respectively equal to the luminance reconstruction value and the chrominance reconstruction value of the reference block; or the luminance prediction value and the chrominance prediction value of the current image block are respectively equal to the luminance compensation value and the chrominance compensation value of the reference block, the chrominance compensation value being obtained by performing illumination compensation on the chrominance reconstruction value of the reference frame;
wherein the value of the switch flag bit carried by the current frame is determined according to the difference between the luminance component of the current frame and the luminance component of the reference frame and the difference between the chrominance component of the current frame and the chrominance component of the reference frame.
6. A terminal, comprising:
at least one memory and at least one processor;
wherein the at least one memory is configured to store program code, and the at least one processor is configured to invoke the program code stored in the at least one memory to perform the method of any one of claims 1 to 4.
7. A storage medium storing program code which, when executed by a processor, implements the method of any one of claims 1 to 4.
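The decision and prediction logic of claims 1, 3, and 4 can be sketched as follows. This is an illustrative Python sketch, not part of the claims: the concrete flag values, the use of per-frame means as the "difference" between components, and the `DIFF_THRESHOLD` constant are assumptions, since the claims deliberately leave these unspecified.

```python
from statistics import mean

# Hypothetical encodings of the flag values and the difference threshold;
# the claims only require that a "first value" and a "second value" exist.
FIRST_VALUE = 1    # compensate luminance only (claim 1, first branch)
SECOND_VALUE = 0   # treat both components alike (claim 1, second branch)
DIFF_THRESHOLD = 4.0

def switch_flag(cur_luma, ref_luma, cur_chroma, ref_chroma, thr=DIFF_THRESHOLD):
    """Per-frame switch flag (claims 1 and 3).

    first_change:  difference between the luminance components of the
                   current frame and the reference frame.
    second_change: difference between the chrominance components.
    The flag takes the first value when luminance changed markedly more
    than chrominance, i.e. an illumination change affected luma only.
    """
    first_change = abs(mean(cur_luma) - mean(ref_luma))
    second_change = abs(mean(cur_chroma) - mean(ref_chroma))
    return FIRST_VALUE if (first_change - second_change) > thr else SECOND_VALUE

def illumination_compensate(rec, alpha, beta):
    """Linear illumination compensation, formula (1): pred = alpha * rec + beta."""
    return [alpha * sample + beta for sample in rec]

def predict_block(flag, ref_luma_rec, ref_chroma_rec,
                  alpha_y, beta_y, alpha_c=1.0, beta_c=0.0):
    """Select prediction values for the current image block (claim 1)."""
    if flag == FIRST_VALUE:
        # Luma prediction = luminance compensation value of the reference block;
        # chroma prediction = chrominance reconstruction value, uncompensated.
        luma_pred = illumination_compensate(ref_luma_rec, alpha_y, beta_y)
        chroma_pred = list(ref_chroma_rec)
    else:
        # Second value: compensate both components (the other branch of
        # claim 1 simply copies both reconstruction values instead).
        luma_pred = illumination_compensate(ref_luma_rec, alpha_y, beta_y)
        chroma_pred = illumination_compensate(ref_chroma_rec, alpha_c, beta_c)
    return luma_pred, chroma_pred
```

The design point the claims capture is that the linear model of formula (1) is always available for luminance, while the frame-level flag decides whether spending it on chrominance is worthwhile, avoiding chroma compensation when only the lighting, not the color, changed between frames.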
CN202011060097.7A 2020-09-30 2020-09-30 Image processing method, device, terminal and storage medium Active CN112203085B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011060097.7A CN112203085B (en) 2020-09-30 2020-09-30 Image processing method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN112203085A CN112203085A (en) 2021-01-08
CN112203085B true CN112203085B (en) 2023-10-17

Family

ID=74012512

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011060097.7A Active CN112203085B (en) 2020-09-30 2020-09-30 Image processing method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN112203085B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022174469A1 (en) * 2021-02-22 2022-08-25 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Illumination compensation method, encoder, decoder, and storage medium
CN113422959A (en) * 2021-05-31 2021-09-21 Zhejiang Smart Video Security Innovation Center Co., Ltd. Video encoding and decoding method and device, electronic equipment and storage medium
CN116708789B (en) * 2023-08-04 2023-10-13 Hunan Malanshan Video Advanced Technology Research Institute Co., Ltd. Video analysis coding system based on artificial intelligence

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105103552A (en) * 2013-01-10 2015-11-25 三星电子株式会社 Method for encoding inter-layer video for compensating luminance difference and device therefor, and method for decoding video and device therefor
CN109076210A (en) * 2016-05-28 2018-12-21 联发科技股份有限公司 The method and apparatus of the present image reference of coding and decoding video
CN111031319A (en) * 2019-12-13 2020-04-17 浙江大华技术股份有限公司 Local illumination compensation prediction method, terminal equipment and computer storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8532175B2 (en) * 2007-01-04 2013-09-10 Thomson Licensing Methods and apparatus for reducing coding artifacts for illumination compensation and/or color compensation in multi-view coded video

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CE10-related: Local illumination compensation simplifications; Saurav Bandyopadhyay et al.; Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 13th Meeting: Marrakech, MA, 9–18 Jan. 2019, JVET-M0224-v2; full text *
Qiang Tang et al.; Efficient Chrominance Compensation for MPEG2 to H.264 Transcoding; 2007 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), 2007; full text *

Similar Documents

Publication Publication Date Title
CN112203085B (en) Image processing method, device, terminal and storage medium
TWI622288B (en) Video decoding method
US8665943B2 (en) Encoding device, encoding method, encoding program, decoding device, decoding method, and decoding program
TWI717776B (en) Method of adaptive filtering for multiple reference line of intra prediction in video coding, video encoding apparatus and video decoding apparatus therewith
RU2720975C2 (en) Method of encoding and decoding images, an encoding and decoding device and corresponding computer programs
US8848799B2 (en) Utilizing thresholds and early termination to achieve fast motion estimation in a video encoder
JP2008193691A (en) Apparatus and method of up-converting frame rate of restored frame
CN112203086B (en) Image processing method, device, terminal and storage medium
CN110248189B (en) Video quality prediction method, device, medium and electronic equipment
CN109996080B (en) Image prediction method and device and coder-decoder
WO2021196994A1 (en) Encoding method and apparatus, terminal, and storage medium
CN113170210A (en) Affine mode signaling in video encoding and decoding
EP3706421A1 (en) Method and apparatus for video encoding and decoding based on affine motion compensation
CN113473126A (en) Video stream processing method and device, electronic equipment and computer readable medium
US11109060B2 (en) Image prediction method and apparatus
KR102609215B1 (en) Video encoders, video decoders, and corresponding methods
US20050089232A1 (en) Method of video compression that accommodates scene changes
WO2014094219A1 (en) Video frame reconstruction
CN110418134B (en) Video coding method and device based on video quality and electronic equipment
CN111654696A (en) Intra-frame multi-reference-line prediction method and device, storage medium and terminal
US20150063462A1 (en) Method and system for enhancing the quality of video during video compression
CN113542737A (en) Encoding mode determining method and device, electronic equipment and storage medium
KR20050053135A (en) Apparatus for calculating absolute difference value, and motion prediction apparatus and motion picture encoding apparatus utilizing the calculated absolute difference value
CN116828180B (en) Video encoding method, apparatus, electronic device, and computer-readable medium
US20110249719A1 (en) Video compression

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant