CN112203085A - Image processing method, device, terminal and storage medium - Google Patents


Info

Publication number
CN112203085A
CN112203085A
Authority
CN
China
Prior art keywords
value
chrominance
frame
block
luminance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011060097.7A
Other languages
Chinese (zh)
Other versions
CN112203085B (en)
Inventor
王萌
张莉
王诗淇
王悦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Douyin Group HK Ltd
Original Assignee
ByteDance HK Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ByteDance HK Co Ltd
Priority to CN202011060097.7A
Publication of CN112203085A
Application granted
Publication of CN112203085B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/109Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present disclosure provides an image processing method, apparatus, terminal, and storage medium. In some embodiments, the image processing method includes: determining a reference block of a current image block, where the current image block is located in a current frame and the reference block is located in a reference frame of the current frame; determining a luminance reconstruction value and a chrominance reconstruction value of the reference block; performing illumination compensation on the luminance reconstruction value of the reference block to obtain a luminance compensation value of the reference block; and determining a luminance prediction value and a chrominance prediction value of the current image block, where the luminance prediction value of the current image block is the luminance compensation value of the reference block, and the chrominance prediction value of the current image block is the chrominance reconstruction value of the reference block. The method can reduce the data processing amount in video encoding and video decoding, reduce complexity, and at the same time improve coding compression efficiency.

Description

Image processing method, device, terminal and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a terminal, and a storage medium.
Background
The video includes a large number of image frames, each image frame includes tens of thousands or hundreds of thousands of pixels, and each pixel is generally represented by 24 bits, so that one video needs to occupy a large amount of storage space and transmission bandwidth. Generally, compression techniques for video include intra-frame compression for removing spatial redundancy in image frames and inter-frame compression for removing temporal redundancy in images.
Disclosure of Invention
To solve the existing problems, the present disclosure provides an image processing method, apparatus, terminal, and storage medium.
The present disclosure adopts the following technical solutions.
In some embodiments, the present disclosure provides an image processing method comprising:
determining a reference block of a current image block, wherein the current image block is located in a current frame, and the reference block is located in a reference frame of the current frame;
determining a luminance reconstruction value of a reference block and a chrominance reconstruction value of the reference block;
performing illumination compensation on the brightness reconstruction value of the reference block to obtain a brightness compensation value of the reference block;
and determining a brightness predicted value and a chroma predicted value of the current image block, wherein the brightness predicted value of the current image block is a brightness compensation value of the reference block, and the chroma predicted value of the current image block is a chroma reconstruction value of the reference block.
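The four steps above can be sketched as follows. This is a hedged, minimal illustration of the claimed prediction rule, not the patented implementation: luma is illumination-compensated with a linear model while chroma is copied unchanged. The function name, parameter names (`alpha_y`, `beta_y`), and block shapes are illustrative assumptions.

```python
import numpy as np

def predict_block(ref_luma, ref_chroma_u, ref_chroma_v, alpha_y, beta_y):
    """Return (luma_pred, u_pred, v_pred) for the current image block."""
    luma_pred = alpha_y * ref_luma + beta_y  # illumination compensation on Y only
    u_pred = ref_chroma_u                    # chroma reconstruction reused directly
    v_pred = ref_chroma_v                    # (no illumination compensation)
    return luma_pred, u_pred, v_pred

# Synthetic 4:2:0-style reconstruction values of the reference block.
ref_y = np.full((4, 4), 100.0)
ref_u = np.full((2, 2), 60.0)
ref_v = np.full((2, 2), 70.0)
y_pred, u_pred, v_pred = predict_block(ref_y, ref_u, ref_v, alpha_y=1.1, beta_y=-5.0)
```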
In some embodiments, further comprising: and determining the value of the switch flag bit carried by the current frame as a first value.
In some embodiments, the value of the switch flag carried by the current frame is determined according to the difference between the luminance component of the current frame and the luminance component of the reference frame, and the difference between the chrominance component of the current frame and the chrominance component of the reference frame.
In some embodiments, when a difference between the first change value and the second change value is greater than the difference threshold, a value of the switch flag carried by the current frame is a first value, where the first change value is a difference between a luminance component of the current frame and a luminance component of the reference frame, and the second change value is a difference between a chrominance component of the current frame and a chrominance component of the reference frame.
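The switch-flag rule above can be sketched as follows, under stated assumptions: the text does not specify how the per-frame "difference" is measured, so a simple mean over the component plane is used here as a stand-in, and the first/second values are taken as 1/0 for illustration.

```python
import numpy as np

def switch_flag(cur_y, ref_y, cur_c, ref_c, diff_threshold, first=1, second=0):
    """Flag takes the first value when the luma change exceeds the chroma change
    by more than the difference threshold (mean-based stand-in measure)."""
    luma_change = abs(cur_y.mean() - ref_y.mean())    # first change value
    chroma_change = abs(cur_c.mean() - ref_c.mean())  # second change value
    return first if (luma_change - chroma_change) > diff_threshold else second

cur_y = np.full((4, 4), 120.0); ref_y = np.full((4, 4), 100.0)  # luma shifted by 20
cur_c = np.full((2, 2), 64.0);  ref_c = np.full((2, 2), 63.0)   # chroma nearly unchanged
flag = switch_flag(cur_y, ref_y, cur_c, ref_c, diff_threshold=10.0)
```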
In some embodiments, the luma prediction value of the current image block is determined according to formula (1):

pred_Y = α_Y · rec_Y + β_Y        (1)

where pred_Y is the luminance prediction value of the current image block, rec_Y is the luminance reconstruction value of the reference block, α_Y is the color channel scaling factor of the luminance component, and β_Y is the offset of the luminance component.
In some embodiments, the determining the luma prediction value and the chroma prediction value of the current image block comprises: and if the value of the switch flag bit carried by the current frame is a first value, the brightness predicted value of the current image block is the brightness compensation value of the reference block, and the chroma predicted value of the current image block is the chroma reconstruction value of the reference block.
If the value of the switch flag bit carried by the current frame is a second value, the brightness predicted value and the chroma predicted value of the current image block are respectively equal to the brightness reconstructed value and the chroma reconstructed value of the reference block; or the luminance predicted value and the chrominance predicted value of the current image block are respectively equal to the luminance compensation value and the chrominance compensation value of the reference block, and the chrominance compensation value is a value obtained by performing illumination compensation on the chrominance reconstruction value of the reference frame.
In some embodiments, the present disclosure provides an image processing method comprising:
determining a motion vector of a luminance component and a motion vector of a chrominance component of a current image block, wherein the current image block is located in a current frame, the motion vector of the chrominance component is different from a co-located motion vector, and the co-located motion vector is the motion vector of the luminance component, or the co-located motion vector is obtained by scaling the motion vector of the luminance component according to a luminance sampling rate and a chrominance sampling rate;
determining a luminance reference block of a luminance component of the current image block and a chrominance reference block of a chrominance component of the current image block according to a motion vector of the luminance component and a motion vector of the chrominance component of the current image block, wherein the luminance reference block and the chrominance reference block are located in a reference frame of the current frame;
determining a luminance reconstruction value of a luminance reference block and a chrominance reconstruction value of a chrominance reference block;
performing illumination compensation on the brightness reconstruction value of the brightness reference block to obtain a brightness compensation value of the brightness reference block;
performing illumination compensation on the chrominance reconstruction value of the chrominance reference block to obtain a chrominance compensation value of the chrominance reference block;
and determining a brightness predicted value and a chroma predicted value of the current image block, wherein the brightness predicted value of the current image block is a brightness compensation value of a brightness reference block, and the chroma predicted value of the current image block is a chroma compensation value of a chroma reference block.
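The "co-located motion vector" mentioned in the steps above can be illustrated as follows. This is an assumed sketch: with 4:2:0 subsampling the chroma plane has half the luma resolution in each dimension, so the luma motion vector is scaled by the ratio of the chroma and luma sampling rates; the argument names and integer rounding are illustrative choices.

```python
def colocated_chroma_mv(luma_mv, luma_rate=2, chroma_rate=1):
    """Scale a luma motion vector onto the chroma sampling grid."""
    x, y = luma_mv
    return (x * chroma_rate // luma_rate, y * chroma_rate // luma_rate)

# 4:2:0: both components of the luma motion vector are halved.
mv = colocated_chroma_mv((8, -4))
```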
In some embodiments, the motion vector of the chroma reference block is the same as the motion vector of the chroma component of the current image block.
In some embodiments, the motion vector of the chroma component of the current image block is (0, 0).
In some embodiments, the motion vector of the chroma component of the current image block is selected from a list of vector candidates, and the motion vector of the chroma component is recorded by encoding an index of the motion vector of the chroma component in the list of vector candidates.
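The candidate-list idea above can be sketched as follows: the chroma motion vector is chosen from a small candidate list and only its index needs to be coded. The cost function (a sum-of-absolute-differences stand-in against a known target) and the candidate values are assumptions for illustration only.

```python
def choose_chroma_mv(candidates, cost):
    """Return (index, mv) of the lowest-cost candidate; the index is what gets coded."""
    idx = min(range(len(candidates)), key=lambda i: cost(candidates[i]))
    return idx, candidates[idx]

candidates = [(0, 0), (1, 0), (0, 1), (-1, -1)]
target = (1, 0)  # pretend motion search found the best chroma match here
idx, mv = choose_chroma_mv(
    candidates,
    cost=lambda c: abs(c[0] - target[0]) + abs(c[1] - target[1]),
)
```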
In some embodiments, the luminance prediction value of the current image block is obtained by using formula (2):

pred_Y = α_Y · rec_Y + β_Y        (2)

where pred_Y is the luminance prediction value of the current image block, rec_Y is the luminance reconstruction value of the luminance reference block, α_Y is the color channel scaling factor of the luminance component, and β_Y is the offset of the luminance component;
and/or,
the chroma prediction value of the current image block is obtained by using formula (3) and formula (4):

pred_U = α_U · rec_U + β_U        (3)
pred_V = α_V · rec_V + β_V        (4)

where pred_U is the U component of the chroma prediction value of the current image block, pred_V is the V component of the chroma prediction value of the current image block, rec_U is the U component of the chrominance reconstruction value of the chrominance reference block, rec_V is the V component of the chrominance reconstruction value of the chrominance reference block, α_U is the color channel scaling factor of the U component, α_V is the color channel scaling factor of the V component, β_U is the offset of the U component, and β_V is the offset of the V component.
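Formulas (2)-(4) can be sketched together as one per-component linear compensation, as below. The parameter values are illustrative, not derived from any real block.

```python
import numpy as np

def compensate(rec, alpha, beta):
    """Linear illumination compensation: pred = alpha * rec + beta."""
    return alpha * rec + beta

rec_y = np.full((4, 4), 100.0)
rec_u = np.full((2, 2), 60.0)
rec_v = np.full((2, 2), 70.0)
pred_y = compensate(rec_y, alpha=1.0, beta=8.0)   # formula (2)
pred_u = compensate(rec_u, alpha=1.0, beta=2.0)   # formula (3)
pred_v = compensate(rec_v, alpha=1.0, beta=-2.0)  # formula (4)
```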
In some embodiments, the determining the luma prediction value and the chroma prediction value of the current image block comprises:
determining the value of a switch flag bit carried by the current frame;
if the switch flag bit of the current frame takes the first value, the brightness predicted value of the current image block is the brightness compensation value of the brightness reference block, and the chroma predicted value of the current image block is the chroma reconstruction value of the chroma reference block;
and if the switch zone bit of the current frame takes the second value, the brightness predicted value of the current image block is the brightness compensation value of the brightness reference block, and the chroma predicted value of the current image block is the chroma compensation value of the chroma reference block.
In some embodiments, the value of the switch flag carried by the current frame is determined according to the difference between the luminance component of the current frame and the luminance component of the reference frame, and the difference between the chrominance component of the current frame and the chrominance component of the reference frame.
In some embodiments, when a difference between the first change value and the second change value is greater than the difference threshold, a value of the switch flag carried by the current frame is a first value, where the first change value is a difference between a luminance component of the current frame and a luminance component of the reference frame, and the second change value is a difference between a chrominance component of the current frame and a chrominance component of the reference frame.
In some embodiments, the present disclosure proposes an image processing apparatus comprising:
the device comprises a determining unit, a judging unit and a judging unit, wherein the determining unit is used for determining a reference block of a current image block, the current image block is positioned in a current frame, and the reference block is positioned in a reference frame of the current frame;
a determining unit, further configured to determine a luminance reconstruction value of the reference block and a chrominance reconstruction value of the reference block;
the processing unit is used for carrying out illumination compensation on the brightness reconstruction value of the reference block to obtain a brightness compensation value of the reference block;
and the determining unit is further used for determining a brightness predicted value and a chroma predicted value of the current image block, wherein the brightness predicted value of the current image block is a brightness compensation value of the reference block, and the chroma predicted value of the current image block is a chroma reconstruction value of the reference block.
In some embodiments, the present disclosure proposes an image processing apparatus comprising:
a determining module, configured to determine a motion vector of a luminance component and a motion vector of a chrominance component of a current image block, where the current image block is located in a current frame, the motion vector of the chrominance component is different from a co-located motion vector, and the co-located motion vector is the motion vector of the luminance component, or the co-located motion vector is a motion vector obtained by scaling the motion vector of the luminance component according to a luminance sampling rate and a chrominance sampling rate;
the determining module is further configured to determine a luminance reference block of a luminance component of the current image block and a chrominance reference block of a chrominance component of the current image block according to the motion vector of the luminance component and the motion vector of the chrominance component of the current image block, where the luminance reference block and the chrominance reference block are located in a reference frame of the current frame;
the determining module is further configured to determine a luma reconstruction value of the luma reference block and a chroma reconstruction value of the chroma reference block;
the processing module is used for carrying out illumination compensation on the brightness reconstruction value of the brightness reference block to obtain a brightness compensation value of the brightness reference block;
the processing module is further configured to perform illumination compensation on the chrominance reconstruction value of the chrominance reference block to obtain a chrominance compensation value of the chrominance reference block;
the determining module is further configured to determine a luminance prediction value and a chrominance prediction value of the current image block, where the luminance prediction value of the current image block is a luminance compensation value of the luminance reference block, and the chrominance prediction value of the current image block is a chrominance compensation value of the chrominance reference block.
In some embodiments, the present disclosure provides a terminal comprising: at least one memory and at least one processor;
the memory is used for storing program codes, and the processor is used for calling the program codes stored in the memory to execute the method.
In some embodiments, the present disclosure provides a storage medium for storing program code for performing the above-described method.
According to the image processing method provided by the embodiments of the present disclosure, illumination compensation is performed only on the luminance component in the inter-frame prediction process (omitting the step of performing illumination compensation on the chrominance component), which reduces the data processing amount in video encoding and video decoding and lowers the complexity of video coding and decoding. The method provided by the embodiments of the present disclosure can also improve the accuracy of chroma component prediction, thereby reducing the residual and the coding rate, and thus improving coding compression performance.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram of a current image block and a reference block corresponding to the current image block according to an embodiment of the disclosure.
Fig. 3 is a flowchart of another image processing method of an embodiment of the present disclosure.
Fig. 4 is a composition diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 5 is a composition diagram of another image processing apparatus of the embodiment of the present disclosure.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a" or "an" in this disclosure are intended to be illustrative rather than limiting, and those skilled in the art will understand them to mean "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
With the rise of short-video applications, more and more users shoot and share short videos through mobile terminals such as mobile phones. The UGC (User Generated Content) videos generated by these users differ markedly from conventionally shot videos: UGC videos are generally shot by non-professional users with non-professional equipment under non-professional lighting, so light changes within the same video are significant; the shooting devices, scenes, and video contents of UGC videos are varied; and UGC videos are usually given special effects and filter rendering. A UGC video is typically shot by a user with a handheld terminal device such as a mobile phone or tablet, compressed, and then uploaded to a video platform. Since UGC video is encoded and compressed after shooting and then uploaded, improving video compression efficiency helps save traffic bandwidth and computation.
In some techniques, when video is inter-frame compressed, in order to solve the problem of local illumination variation between adjacent time-domain frames, in a motion compensation stage of inter-frame prediction, local illumination compensation is performed on a luminance component and a chrominance component of a coding unit currently performing inter-frame coding, respectively, and the luminance coding unit (luminance block) and the chrominance coding unit (chrominance block) share a motion vector.
In order to at least partially solve the above problem, an embodiment of the present disclosure provides an image processing method, where an image in the embodiment may be an image in a video, and a detailed description will be given below on a solution provided by the embodiment of the present disclosure with reference to the drawings.
As shown in fig. 1, fig. 1 is a flowchart of an image processing method according to an embodiment of the present disclosure, which includes the following steps.
S11: a reference block for a current image block is determined.
Specifically, the current image block is located in a current frame, and the reference block is located in a reference frame of the current frame. In some embodiments, the image may be an image in a video, and the video may be a UGC video; the image processing method proposed by the present disclosure may be used in the inter-frame prediction process of video compression or decompression. A video includes a plurality of image frames. The current frame may be any image frame in the video that needs inter-frame prediction, for example a P picture or a B picture, and includes a plurality of coding units, where the current image block may be any coding unit. For the inter-frame prediction process, the current frame corresponds to a reference frame, i.e., an image used for prediction, also generally referred to as a reference picture (Reference Frame). The reference frame may be, for example, the previous or next image frame temporally adjacent to the current frame. In the reference frame, the coding unit corresponding to the current image block is used as the reference block. The displacement from the reference block to the current image block is generally referred to as a Motion Vector (MV), the difference between the current image block and the reference block is generally referred to as the Prediction Residual, and the process of determining the reference block corresponding to the current image block in video encoding is generally referred to as Motion Estimation.
S12: a luma reconstruction value of the reference block and a chroma reconstruction value of the reference block are determined.
S13: and carrying out illumination compensation on the brightness reconstruction value of the reference block to obtain a brightness compensation value of the reference block.
Specifically, the luminance compensation value is a luminance value obtained by performing illumination compensation on the luminance reconstruction value.
S14: and determining a brightness predicted value and a chroma predicted value of the current image block.
Specifically, the luminance predicted value of the current image block is the luminance compensation value of the reference block, and the chrominance predicted value of the current image block is the chrominance reconstructed value of the reference block. In this embodiment, a luminance-plus-chrominance color space, i.e., the YUV color space, may be used: Y represents luminance (the gray-scale value), and U and V represent chrominance. Illumination compensation is used to handle local illumination changes between temporally adjacent frames and is applied in the motion compensation stage of inter-frame prediction. In the prior art, when determining the luminance and chrominance prediction values of the current frame, the same illumination compensation method is applied to both: the values obtained by separately performing illumination compensation on the luminance and chrominance reconstruction values of the reference block corresponding to the current image block are used as the luminance and chrominance prediction values of the current image block. In this embodiment, by contrast, when determining the luminance prediction value of the current image block, the value obtained by performing illumination compensation on the luminance reconstruction value of the reference block is used as the luminance prediction value, while the chrominance reconstruction value of the reference block is used directly as the chrominance prediction value, without performing illumination compensation on it. Compared with the prior art, the image processing method in this embodiment reduces the data processing amount and the complexity of video encoding and decoding by omitting the step of performing illumination compensation on the chrominance prediction value. Moreover, the chrominance of the current image block may not change relative to that of the reference block, in which case performing illumination compensation on the chrominance prediction value, as in the prior art, may reduce the prediction accuracy of the chrominance component.
In some embodiments of the present disclosure, the luminance prediction value of the current image block is obtained by using formula (1):

pred_Y = α_Y · rec_Y + β_Y        (1)

where pred_Y is the luminance prediction value of the current image block, rec_Y is the luminance reconstruction value of the reference block corresponding to the current image block, α_Y is the color channel scaling factor of the luminance component, and β_Y is the offset of the luminance component.
Specifically, in some embodiments, a linear illumination transformation model is used to perform illumination compensation on the luminance reconstruction value of the reference block corresponding to the current image block during inter-frame prediction, where α_Y and β_Y may be derived from the neighboring reference pixels of the current image block in the current frame and the neighboring reference pixels of the reference block. Referring to fig. 2, 2a shows the current image block and its neighboring reference pixels, and 2b shows the reference block corresponding to the current image block and its neighboring reference pixels; the filled circles in fig. 2 are the neighboring reference pixels. The parameters α_Y and β_Y can be obtained by minimizing the squared error between the neighboring reference pixels of the current image block and those of the reference block corresponding to the current image block.
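A hedged sketch of this parameter derivation follows: fit cur ≈ α·ref + β by least squares over the neighboring (template) pixels of the two blocks. The use of `np.polyfit` and the synthetic pixel values are illustrative assumptions, not the patent's prescribed procedure.

```python
import numpy as np

def fit_illumination_params(cur_neighbors, ref_neighbors):
    """Least-squares fit of cur ≈ alpha * ref + beta over template pixels."""
    alpha, beta = np.polyfit(ref_neighbors, cur_neighbors, deg=1)
    return alpha, beta

ref_nb = np.array([80.0, 90.0, 100.0, 110.0])  # reference block's neighboring pixels
cur_nb = 1.2 * ref_nb + 3.0                    # current block's neighbors, known relation
alpha_y, beta_y = fit_illumination_params(cur_nb, ref_nb)
```

With an exactly linear template relation the fit recovers the generating parameters; on real pixels it returns the least-squares estimate.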
In the prior art, illumination compensation is usually performed on three color components (Y, U, V) of a coding unit that needs illumination compensation, but according to the characteristics of a UGC video, there is a change in illumination between image frames of the video, but the chromaticity is not changed or is changed little, that is, the luminance is changed but the chromaticity is not changed, so in some embodiments of the present disclosure, only local illumination compensation is performed on the luminance component Y of a current image block, illumination compensation is not performed on the chromaticity components (U, V), and prediction is performed directly using the chromaticity component of a reference block corresponding to the current image block, that is, the chromaticity component of a reference block corresponding to the current image block is used for prediction, that is, illumination compensation is performed on the chromaticity components (U
U_pred = U'_ref, V_pred = V'_ref

wherein U_pred is the prediction value of the U component in the chrominance components of the current image block, V_pred is the prediction value of the V component in the chrominance components of the current image block, U'_ref is the reconstructed value of the U component in the chrominance components of the reference block in the reference frame corresponding to the current image block, and V'_ref is the reconstructed value of the V component in the chrominance components of the reference block in the reference frame corresponding to the current image block. In this way, the complexity of data processing is reduced and the coding and decoding efficiency is improved; moreover, the accuracy of the chrominance prediction value is improved and the compression performance can also be improved.
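The luma-only compensation scheme described above can be illustrated with the following simplified Python sketch, which compensates the luminance samples with the linear model and predicts the chrominance samples by direct copy (8-bit samples and flat sample lists are assumed; the names are illustrative):

```python
def predict_block(ref_y, ref_u, ref_v, alpha_y, beta_y):
    """Luma-only local illumination compensation: compensate Y, copy U and V."""
    clip = lambda v: max(0, min(255, int(round(v))))  # assume 8-bit samples
    pred_y = [clip(alpha_y * s + beta_y) for s in ref_y]
    # chrominance components are predicted directly from the reference block
    return pred_y, list(ref_u), list(ref_v)
```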
In some embodiments of the present disclosure, the determining the luminance prediction value and the chrominance prediction value of the current image block includes: and determining that the current frame is a target frame or a non-target frame, wherein if the current frame is the target frame, the brightness predicted value of the current image block is the brightness compensation value of the reference block, and the chroma predicted value of the current image block is the chroma reconstruction value of the reference block.
If the current frame is a non-target frame, the luminance prediction value and the chrominance prediction value of the current image block are respectively equal to the luminance reconstruction value and the chrominance reconstruction value of the reference block; or the luminance prediction value and the chrominance prediction value of the current image block are respectively equal to the luminance compensation value and the chrominance compensation value of the reference block, where the chrominance compensation value is a value obtained by performing illumination compensation on the chrominance reconstruction value of the reference block of the reference frame.
Specifically, in this embodiment, image frames are divided into target frames and non-target frames; if the current frame is a target frame, illumination compensation is performed only on the luminance prediction value. When determining the luminance prediction value and the chrominance prediction value of an image block in a non-target frame, there may be two ways: one is to directly use the luminance reconstruction value and the chrominance reconstruction value of the reference block as the luminance prediction value and the chrominance prediction value; the other is to perform illumination compensation on the luminance reconstruction value and the chrominance reconstruction value of the reference block and use the resulting luminance compensation value and chrominance compensation value. It should be noted that a non-target frame may adopt either of these two manners of determining the luminance prediction value and the chrominance prediction value, and the manner adopted by each non-target frame may differ.
In some embodiments, before determining the luminance prediction value and the chrominance prediction value of the current image block, the method further comprises: determining that the value of the switch flag bit carried by the current frame is a first value.
Specifically, in this embodiment, a switch flag bit is set in each image frame. The switch flag bit may be used as a frame-level switch to determine whether to perform illumination compensation on the chrominance prediction value of the reference block: when the switch flag bit of the current frame is equal to a first value, the chrominance prediction value of the current image block is the chrominance reconstruction value of the reference block. The value of the switch flag bit may be set arbitrarily; for example, the switch flag bit of 1 frame out of every 2 frames may be set to the first value. In the case that the value of the switch flag bit of the current frame is a second value, the luminance compensation value and the chrominance compensation value of the reference block of the reference frame may be used as the luminance prediction value and the chrominance prediction value of the current image block, respectively, where the chrominance compensation value is the chrominance value obtained by performing illumination compensation on the chrominance reconstruction value of the reference block. The switch flag bit may be, for example, a frame-level switch (lic_sep_flag) set in the image header: the switch flag bit of a target frame may be set to 1 and the switch flag bit of a non-target frame may be set to 0, so as to determine the type of an image frame during inter-frame prediction. When the image processing method provided by the present disclosure is used for video encoding, each image frame is determined to be a target frame or a non-target frame, and the corresponding switch flag bit is set for each image frame according to the result.
In some embodiments of the present disclosure, the switch flag bit is set only in the target frames of the video. Specifically, in this embodiment, no identification information is set in non-target frames, so that as long as the identification information is identified in the current frame, it can be determined that the chrominance prediction value of the image blocks of the current frame is equal to the chrominance reconstruction value of the reference block.
In some embodiments of the present disclosure, the value of the switch flag bit carried by the current frame is determined according to the difference between the luminance component of the current frame and the luminance component of the reference frame, and the difference between the chrominance component of the current frame and the chrominance component of the reference frame. Specifically, in the present disclosure, these differences indicate the magnitude of change of the luminance component and the chrominance component relative to the reference frame. If the magnitudes of change of the luminance component and the chrominance component are similar, the two components may be processed in a similar manner; otherwise, they should be processed differently.
In some embodiments of the present disclosure, when a difference between a first change value and a second change value is greater than a difference threshold, a value of a switch flag carried by the current frame is a first value, where the first change value is a difference between a luminance component of the current frame and a luminance component of the reference frame, and the second change value is a difference between a chrominance component of the current frame and a chrominance component of the reference frame. Specifically, the first variation value is an absolute value of a difference between luminance components of the current frame and the reference frame, and similarly, the second variation value is an absolute value of a difference between chrominance components of the current frame and chrominance components of the reference frame.
Specifically, the present embodiment defines a determination criterion for deciding whether to perform illumination compensation only on the luminance reconstruction value, that is, how to determine whether an image frame should adopt the scheme of performing illumination compensation only on the luminance reconstruction value and not on the chrominance reconstruction value. It may be used in the video encoding process: for any currently encoded image frame, the difference between the first change value and the second change value of the image frame is determined and compared with a difference threshold, and whether the image frame adopts the scheme provided by the present disclosure is set according to the comparison result. When the difference is not greater than the difference threshold, the change trends of the luminance component and the chrominance component of the currently encoded image frame are similar; in this case the two components should be processed similarly, that is, illumination compensation should be performed on the prediction values of both the luminance component and the chrominance component, or on neither. When the difference is greater than the difference threshold, the change trends of the luminance component and the chrominance component differ considerably, so the two components should not adopt a similar processing manner; since the chrominance of such an image frame normally does not change, illumination compensation should be performed only on the prediction value of the luminance component and not on the prediction value of the chrominance component.
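As an illustration, the frame-level decision criterion described above can be sketched as follows (a simplified Python sketch; the per-component change values and the threshold are assumed to be precomputed scalars, and the names are illustrative):

```python
def lic_sep_flag(luma_change, chroma_change, diff_threshold):
    """Frame-level decision: compensate only luma (flag = 1) when the luma and
    chroma change magnitudes relative to the reference frame diverge beyond
    a threshold; otherwise treat both components alike (flag = 0)."""
    first_change = abs(luma_change)    # change of the luminance component
    second_change = abs(chroma_change) # change of the chrominance component
    return 1 if abs(first_change - second_change) > diff_threshold else 0
```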
In order to better explain the image processing method proposed by the embodiments of the present disclosure, a specific embodiment is proposed below as an example that the image processing method proposed by the present disclosure is applied to a video encoding end.
In the video encoding process, each image frame of the video is divided into non-overlapping largest coding units of equal size. Then, with a largest coding unit as a node, different kinds of recursive tree partitions can be performed, for example quadtree, binary tree, and ternary tree, to form coding units. Coding units are the basic units of video coding, and each coding unit may contain one luminance block (Y) and two chrominance blocks (U, V). Video coding performance results from the removal of data redundancy. Inter-frame prediction can effectively remove temporal redundancy because the content of consecutive frames of a video is similar; changes in illumination can greatly affect the efficiency of inter-frame coding. During inter-frame prediction, the cumulative difference between the current frame and the reference frame can be compared through frame-level histogram statistics. If the cumulative change difference of the luminance component between the current frame and the reference frame is similar to that of the chrominance component, the current frame is a non-target frame; on the contrary, if the cumulative change difference of the luminance component differs greatly from that of the chrominance component, the current frame is a target frame. Whether the current frame is a target frame or a non-target frame is indicated by setting a flag bit (the switch flag bit) in each image frame; for example, a frame-level switch (lic_sep_flag) may identify whether the current frame is a target frame. lic_sep_flag may be determined by histogram statistics of the currently encoded frame and all frames in the reference frame list, and transmitted as a frame-level flag.
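As an illustration, the frame-level histogram statistics mentioned above could be computed as in the following simplified Python sketch, which takes the sum of absolute per-bin histogram differences between two frames as the cumulative difference of one component (the exact statistic and the names are illustrative assumptions, not specified by the disclosure):

```python
def histogram(samples, bins=256):
    """Per-bin sample counts for one component of one frame (8-bit assumed)."""
    h = [0] * bins
    for s in samples:
        h[s] += 1
    return h

def cumulative_difference(cur_samples, ref_samples, bins=256):
    """Sum of absolute per-bin histogram differences between two frames."""
    hc = histogram(cur_samples, bins)
    hr = histogram(ref_samples, bins)
    return sum(abs(a - b) for a, b in zip(hc, hr))
```

Comparing this statistic for the luminance component against the one for the chrominance components yields the first and second change values used by the flag decision.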
If the current frame is a non-target frame, lic_sep_flag is 0; if the current frame is a target frame, lic_sep_flag is 1. In this embodiment, assuming that the current frame is a target frame, the luminance prediction value of the current image block is calculated as

Y_pred = α_Y · Y'_ref + β_Y

and the chrominance prediction values of the current image block are calculated as

U_pred = U'_ref, V_pred = V'_ref

The meaning of the parameters in these formulas is the same as described above. That is, the luminance reconstruction value of the reference block corresponding to the current image block is used as the luminance prediction value of the current image block after linear illumination compensation, while the chrominance reconstruction values of the reference block corresponding to the current image block are directly used as the chrominance prediction values of the current image block. In other words, for the current image block of a target frame, illumination compensation is performed only on the luminance prediction value and not on the chrominance prediction value, whereas for a non-target frame of the video, illumination compensation is performed on both the luminance prediction value and the chrominance prediction value.
When the image processing method provided by the embodiment of the present disclosure is used at a video decoding end, in a video decoding process, for a current frame, a switch flag bit of the current frame is obtained first, whether the current frame is a target frame or a non-target frame is determined according to whether a value of the switch flag bit is a first value, and then a luminance predicted value and a chrominance predicted value of the current frame are determined, where a method of determining the luminance predicted value and the chrominance predicted value is the same as that in the above-described embodiment for video encoding, and is not described herein again.
In the prior art, a motion vector of the luminance component is first determined for a coding block, and the motion vector of the luminance component is then scaled according to the luminance sampling rate and the chrominance sampling rate to obtain a co-located motion vector, which is used as the motion vector of the chrominance component. Taking as an example the case where the chrominance sampling rate is half of the luminance sampling rate in both the horizontal and vertical directions, i.e., the 4:2:0 sampling format, one coding block comprises one luminance block (Y component) and two chrominance blocks (U and V components), and the motion vector of the luminance component is scaled by the factor of 2 between the two sampling rates to obtain the motion vector of the chrominance component; that is, the motion vector of the luminance component is scaled according to the sampling rates to obtain the co-located motion vector. However, the co-located motion vector is not necessarily the motion vector that best matches the chrominance block of the current coding block, and directly using it as the motion vector of the chrominance component may reduce the prediction accuracy of the chrominance component and distort the chrominance.
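The prior-art co-located motion vector derivation can be sketched as follows. This Python sketch assumes motion vectors are expressed in the sample units of their own plane, under which convention a 4:2:0 chrominance displacement spans half as many chrominance samples as the luminance displacement spans luminance samples in each direction; the exact convention and units differ between codecs, so this is an illustrative assumption, not the disclosure's definition:

```python
def co_located_chroma_mv(luma_mv, luma_rate=2, chroma_rate=1):
    """Scale a luma motion vector by the chroma/luma sampling-rate ratio.
    With 4:2:0 sampling (chroma rate half of luma in both directions),
    the same physical displacement covers half as many chroma samples."""
    mvx, mvy = luma_mv
    scale = chroma_rate / luma_rate
    return (mvx * scale, mvy * scale)
```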
In some embodiments of the present disclosure, an image processing method is further provided, as shown in fig. 3, the method in this embodiment includes:
s21: a motion vector for a luminance component and a motion vector for a chrominance component of the current image block are determined.
Specifically, the current image block is located in the current frame. The motion vector of the chrominance component is different from the co-located motion vector, where the co-located motion vector is the motion vector of the luminance component, or is the motion vector obtained by scaling the motion vector of the luminance component according to the luminance sampling rate and the chrominance sampling rate; in some embodiments, the co-located motion vector is equal to the motion vector of the luminance component when the luminance and chrominance sampling rates are the same. The image processing method in this embodiment may be used in the inter-frame prediction process of video decoding or video encoding, where the current frame may be an image frame of a video currently being encoded or decoded, and the current image block may be any coding unit in the current frame. The method for determining the motion vector of the luminance component in this embodiment may be a method in the related art, and is not limited here.
S22: and determining a luminance reference block of the luminance component of the current image block and a chrominance reference block of the chrominance component of the current image block according to the motion vector of the luminance component and the motion vector of the chrominance component of the current image block.
Specifically, the current frame corresponds to a reference frame, and the luminance reference block and the chrominance reference block are located in the reference frame corresponding to the current frame.
S23: and carrying out illumination compensation on the brightness reconstructed value of the brightness reference block to obtain a brightness compensation value of the brightness reference block.
Specifically, the luminance compensation value is a luminance value obtained by performing illumination compensation on the luminance reconstruction value of the luminance reference block.
S24: and carrying out illumination compensation on the chrominance reconstruction value of the chrominance reference block to obtain a chrominance compensation value of the chrominance reference block.
Specifically, the chrominance compensation value is a chrominance value obtained by performing illumination compensation on a chrominance reconstruction value of the chrominance reference block.
S25: determining a luminance prediction value and a chrominance prediction value of the current image block.
Specifically, the luminance prediction value of the current image block is the luminance compensation value of the luminance reference block, and the chrominance prediction value of the current image block is the chrominance compensation value of the chrominance reference block. In some embodiments, the method proposed in the embodiments of the present disclosure is used in the inter-frame prediction process. When inter-frame prediction is performed on the luminance component and the chrominance component of a coding unit, conventionally the motion vector of the luminance component is obtained first and then scaled according to the chrominance sampling rate and the luminance sampling rate to obtain the co-located motion vector (when the luminance sampling rate is the same as the chrominance sampling rate, the motion vector of the luminance component is equal to the co-located motion vector), and the co-located motion vector is used as the chrominance motion vector of the coding unit. This defaults to assuming that the motion of the luminance block is the same as the motion of the chrominance blocks; in practice, however, the motion of the luminance block and the motion of the chrominance blocks in an image are not necessarily the same, especially when illumination changes obviously during video shooting. The conventional method may therefore yield an unsatisfactory chrominance prediction value, cause chrominance distortion, and affect the user experience. For this reason, the motion vector of the chrominance component is allowed to differ from the co-located motion vector in this embodiment, thereby avoiding the chrominance distortion caused by improper selection of the motion vector of the chrominance component.
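The use of separate luminance and chrominance motion vectors in steps S21-S25 can be sketched as follows (a simplified Python sketch with integer-pel vectors and planes represented as lists of rows; all names are illustrative and bounds checking is omitted):

```python
def fetch_block(plane, x, y, w, h, mv):
    """Copy a w*h block from `plane` at (x, y) displaced by an integer mv."""
    dx, dy = mv
    return [row[x + dx : x + dx + w] for row in plane[y + dy : y + dy + h]]

def inter_predict(ref_y, ref_uv, pos_y, pos_c, size_y, size_c, mv_y, mv_c):
    """Separate motion vectors for luma and chroma: the chroma reference
    blocks are located with their own mv_c rather than the co-located vector."""
    luma_ref = fetch_block(ref_y, *pos_y, *size_y, mv_y)
    chroma_refs = [fetch_block(p, *pos_c, *size_c, mv_c) for p in ref_uv]
    return luma_ref, chroma_refs
```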
In some embodiments of the present disclosure, the motion vector of the chroma reference block is the same as the motion vector of the chroma component of the current image block. Specifically, in this embodiment, the corresponding motion vector is determined for the chroma component of the current image block separately, and a co-located motion vector is not used, where the method for determining the chroma reference block may use a Full Search (FS).
In some embodiments of the present disclosure, the motion vector of the chrominance component of the current image block is (0, 0). Specifically, defaulting the motion vector of the chroma component of the current image block to (0,0) in this embodiment can save the encoding overhead of the motion vector and the computational power consumption in encoding and decoding.
In some embodiments of the present disclosure, the motion vector of the chroma component of the current image block is selected from a vector candidate list, and the motion vector of the chroma component is recorded by encoding an index of the motion vector of the chroma component in the vector candidate list. Specifically, in this embodiment, a vector candidate list is separately established for the chroma components, so as to improve the calculation speed of the motion vector of the chroma components, and the vector candidate list may be a candidate list in a Skip mode or a Merge mode.
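As an illustration, selecting the chrominance motion vector from a candidate list and recording only its index could look like the following simplified Python sketch. A plain SAD cost over flat sample lists is assumed for simplicity (real encoders typically use a rate-distortion cost), and all names are illustrative:

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length sample lists."""
    return sum(abs(x - y) for x, y in zip(a, b))

def pick_chroma_mv(candidates, cur_block, fetch_ref):
    """Choose the chroma motion vector from a candidate list by minimum cost;
    only the list index needs to be encoded, not the vector itself."""
    costs = [sad(cur_block, fetch_ref(mv)) for mv in candidates]
    index = min(range(len(candidates)), key=costs.__getitem__)
    return index, candidates[index]
```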
In some embodiments of the present disclosure, the current image block corresponds to a luminance reference block in a reference frame corresponding to the target frame; the brightness predicted value of the current image block is equal to the value obtained by illumination compensation of the brightness reconstructed value of the brightness reference block. Specifically, the reference frame corresponding to the target frame may be an adjacent frame of the target frame, for example, a previous frame or a subsequent frame of the target frame in a time domain, the luma reference block may be a coding block closest to the current image block in the reference frame in a luma component, and the luma prediction value of the current image block is obtained after illumination compensation is performed on a luma reconstruction value corresponding to the luma reference block.
In some embodiments of the present disclosure, the current image block corresponds to a chrominance reference block in a reference frame corresponding to the target frame; and the chroma predicted value of the current image block is equal to a value obtained after illumination compensation is carried out on the chroma reconstructed value of the chroma reference block, or the chroma predicted value of the current image block is equal to the chroma reconstructed value of the chroma reference block. Specifically, the chroma prediction value of the current image block may be obtained in the same manner by illumination compensation, or the chroma reconstruction value of the chroma reference block may be directly used, so as to reduce the data processing amount.
In some embodiments of the present disclosure, the luminance prediction value of the current image block is obtained by using formula (2):

Y_pred = α_Y · Y'_ref + β_Y    (2)

wherein Y_pred is the luminance prediction value of the current image block, Y'_ref is the luminance reconstruction value of the luminance reference block, α_Y is the color channel scaling factor of the luminance component, and β_Y is the offset of the luminance component. Specifically, in this embodiment, a linear illumination transformation model is used to perform illumination compensation on the luminance reconstruction value of the luminance reference block during the inter-frame prediction process, where α_Y and β_Y can be derived by linear regression from the neighboring reference pixels of the current image block in the target frame and the neighboring reference pixels of the corresponding reference block.
In some embodiments of the present disclosure, the chrominance prediction values of the current image block are obtained by using the following formula (3) and formula (4):

U_pred = α_U · U'_ref + β_U    (3)
V_pred = α_V · V'_ref + β_V    (4)

wherein U_pred is the U component of the chrominance prediction value of the current image block, V_pred is the V component of the chrominance prediction value of the current image block, U'_ref is the U component of the chrominance reconstruction value of the chrominance reference block, V'_ref is the V component of the chrominance reconstruction value of the chrominance reference block, α_U is the color channel scaling factor of the U component, α_V is the color channel scaling factor of the V component, β_U is the offset of the U component, and β_V is the offset of the V component. In some embodiments, the reconstruction values of the reference block are illumination-compensated using a linear illumination transformation model, where α_U, α_V, β_U, and β_V can be derived by linear regression from the neighboring reference pixels of the current image block in the target frame and the neighboring reference pixels of the corresponding reference block.
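Formulas (2) to (4) amount to applying one linear model per color component, as in the following simplified Python sketch (flat sample lists and illustrative names are assumed; clipping to the valid sample range is omitted):

```python
def compensate(samples, alpha, beta):
    """Apply the linear illumination model: pred = alpha * rec + beta."""
    return [alpha * s + beta for s in samples]

def predict_yuv(ref_y, ref_u, ref_v, params):
    """params maps each component name to its (alpha, beta) pair,
    mirroring formulas (2), (3), and (4)."""
    return (compensate(ref_y, *params["Y"]),
            compensate(ref_u, *params["U"]),
            compensate(ref_v, *params["V"]))
```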
In some embodiments of the present disclosure, the determining the luminance prediction value and the chrominance prediction value of the current image block includes: and determining that the current frame is a target frame or a non-target frame, if the current frame is the target frame, the brightness predicted value of the current image block is the brightness compensation value of the brightness reference block, and the chroma predicted value of the current image block is the chroma reconstruction value of the chroma reference block.
If the current frame is a non-target frame, the luminance prediction value and the chrominance prediction value of the current image block are respectively equal to the luminance reconstruction value of the luminance reference block and the chrominance reconstruction value of the chrominance reference block; or the luminance prediction value and the chrominance prediction value of the current image block are respectively equal to the luminance compensation value of the luminance reference block and the chrominance compensation value of the chrominance reference block, where the chrominance compensation value is a value obtained by performing illumination compensation on the chrominance reconstruction value of the chrominance reference block of the reference frame.
Specifically, in this embodiment, image frames are divided into target frames and non-target frames. When determining the luminance prediction value and the chrominance prediction value of a coding block in a non-target frame, there may be two ways: one is to directly use the luminance reconstruction value and the chrominance reconstruction value of the reference blocks corresponding to the coding block; the other is to perform illumination compensation on the luminance reconstruction value and the chrominance reconstruction value of the reference blocks corresponding to the coding block, respectively. It should be noted that a non-target frame may adopt either of these two manners of determining the luminance prediction value and the chrominance prediction value, and the manner adopted by each non-target frame may differ.
In some embodiments of the present disclosure, determining a motion vector of a luminance component and a motion vector of a chrominance component of a current image block comprises:
determining the value of a switch flag bit carried by the current frame;
if the switch flag bit of the current frame takes the first value, the brightness predicted value of the current image block is the brightness compensation value of the brightness reference block, and the chroma predicted value of the current image block is the chroma reconstruction value of the chroma reference block;
and if the switch flag bit of the current frame takes the second value, the luminance prediction value of the current image block is the luminance compensation value of the luminance reference block, and the chrominance prediction value of the current image block is the chrominance compensation value of the chrominance reference block.
Specifically, in some embodiments, a switch flag bit is set in each image frame and used as a frame-level switch for determining the chrominance prediction value and the luminance prediction value of the current image block. When the image processing method provided by the present disclosure is used for video encoding, the value of the switch flag bit of each image frame is determined first, and then the luminance prediction value and the chrominance prediction value are determined: the luminance prediction value is equal to the luminance compensation value of the luminance reference block, while the chrominance prediction value may be the chrominance compensation value or the chrominance reconstruction value, depending on the value of the switch flag bit.
In some embodiments, a value of the switch flag carried by the current frame is determined according to a difference between a luminance component of the current frame and a luminance component of the reference frame, and a difference between a chrominance component of the current frame and a chrominance component of the reference frame. Specifically, when the difference between the luminance component and the chrominance component of the current frame and the reference frame is small, it indicates that the variation trends of the luminance component and the chrominance component are the same, and the same processing method may be used, otherwise, a different processing method should be used.
In some embodiments of the present disclosure, when a difference between a first change value and a second change value is greater than a difference threshold, a value of a switch flag carried by the current frame is a first value, where the first change value is a difference between a luminance component of the current frame and a luminance component of the reference frame, and the second change value is a difference between a chrominance component of the current frame and a chrominance component of the reference frame.
Specifically, the present embodiment defines a determination standard for the value of the switch flag bit, that is, how to determine whether the chrominance prediction value of an image block should be subjected to illumination compensation. It may be used in the video encoding process: for any currently encoded image frame, the difference between the first change value and the second change value of the image frame is determined and compared with a difference threshold, and the image frame is set as a target frame or a non-target frame according to the comparison result. When the difference is not greater than the difference threshold, the change trends of the luminance component and the chrominance component of the currently encoded image frame are similar; in this case, illumination compensation should be performed on the prediction values of both the luminance component and the chrominance component, or on neither. When the difference is greater than the difference threshold, the change trends of the luminance component and the chrominance component differ considerably and the two should not adopt a similar processing manner; accordingly, the motion vector of the chrominance component and the co-located motion vector should not be identical.
In order to better explain the image processing method proposed by the embodiments of the present disclosure, a specific embodiment is proposed below as an example that the image processing method proposed by the present disclosure is applied to a video encoding end.
In the video encoding process, each image frame of the video is divided into non-overlapping largest coding units of equal size. Then, with a largest coding unit as a node, different kinds of recursive tree partitions can be performed, for example quadtree, binary tree, and ternary tree, to form coding units. Coding units are the basic units of video coding, and each coding unit may contain one luminance block (Y) and two chrominance blocks (U, V). Video coding performance results from the removal of data redundancy. Inter-frame prediction can effectively remove temporal redundancy because the content of consecutive frames of a video is similar; changes in illumination can greatly affect the efficiency of inter-frame coding. During inter-frame prediction, the cumulative difference between the current frame and the reference frame can be compared through frame-level histogram statistics. If the cumulative change difference of the luminance component between the current frame and the reference frame is similar to that of the chrominance component, the current frame is a non-target frame; on the contrary, if the cumulative change difference of the luminance component differs greatly from that of the chrominance component, the current frame is a target frame. Whether the current frame is a target frame or a non-target frame is indicated by setting a switch flag bit in each image frame; for example, a frame-level switch (lic_sep_flag) may be used as the switch flag bit. lic_sep_flag may be determined by histogram statistics of the currently encoded frame and all frames in the reference frame list, and transmitted as a frame-level flag.
If the current frame is a non-target frame, lic_sep_flag is set to 0; if the current frame is a target frame, lic_sep_flag is set to 1. In this embodiment, when lic_sep_flag of the current frame is 1, for the luminance prediction value of the current image block, a linear illumination compensation model (α_Y, β_Y) is derived by using the peripheral pixels of the luminance reference block indicated by the motion vector MV_Y and the peripheral pixels of the current image block. The luminance prediction value of the luminance block after illumination compensation is calculated by formula (2):
pred_Y = α_Y × rec_Y + β_Y    (2)

where pred_Y is the luminance prediction value after illumination compensation and rec_Y is the luminance reconstruction value of the luminance reference block.
The motion vector information for the chrominance components U and V may be derived as (0, 0), or the motion vector of the chrominance components may be taken from the Merge/Skip candidate list. Linear illumination compensation models (α_U, β_U) and (α_V, β_V) may then be derived in a manner consistent with the luminance component prediction, and are calculated according to formulas (3) and (4):
pred_U = α_U × rec_U + β_U    (3)

pred_V = α_V × rec_V + β_V    (4)
where α_U and/or α_V may be derived as 1 by default, and β_U and/or β_V may be derived as 0 by default.
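The linear model derivation described above can be sketched as follows. This is a minimal illustration, not the patent's exact procedure: it assumes the model parameters are fitted by least squares over the peripheral (neighboring) reconstructed pixels of the reference block and the current block, and the function names are hypothetical.

```python
import numpy as np

def derive_ic_model(ref_neighbors, cur_neighbors):
    """Fit pred = alpha * ref + beta by least squares from the pixels
    neighboring the reference block and the current block (one common
    way to derive a linear illumination compensation model)."""
    x = np.asarray(ref_neighbors, dtype=np.float64)
    y = np.asarray(cur_neighbors, dtype=np.float64)
    n = x.size
    sxx = np.dot(x, x) - x.sum() ** 2 / n
    sxy = np.dot(x, y) - x.sum() * y.sum() / n
    if sxx == 0:  # flat neighborhood: fall back to the defaults alpha=1, beta set to the mean offset
        return 1.0, y.mean() - x.mean()
    alpha = sxy / sxx
    beta = y.mean() - alpha * x.mean()
    return alpha, beta

def apply_ic(rec_block, alpha, beta):
    """Illumination-compensated prediction: alpha * rec + beta."""
    return alpha * np.asarray(rec_block, dtype=np.float64) + beta
```

The same two functions would serve for the Y, U, and V components, each with its own (α, β) pair, matching formulas (2) through (4).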
When the image processing method provided by the embodiments of the present disclosure is used at a video decoding end, in the video decoding process, for the current image frame, the value of its switch flag bit is obtained first. According to the value of the switch flag bit, it is determined whether the chrominance prediction value is equal to the chrominance reconstruction value or the chrominance compensation value of the chrominance reference block; the luminance prediction value is equal to the luminance compensation value of the luminance reference block.
An embodiment of the present disclosure provides an image processing apparatus, as shown in fig. 4, including:
a determining unit 31, configured to determine a reference block of a current image block, where the current image block is located in a current frame, and the reference block is located in a reference frame of the current frame;
the determining unit 31 is further configured to determine a luminance reconstruction value of the reference block and a chrominance reconstruction value of the reference block;
The processing unit 32 is configured to perform illumination compensation on the luminance reconstruction value of the reference block to obtain a luminance compensation value of the reference block;
the determining unit 31 is further configured to determine a luminance prediction value and a chrominance prediction value of the current image block, where the luminance prediction value of the current image block is a luminance compensation value of the reference block, and the chrominance prediction value of the current image block is a chrominance reconstruction value of the reference block.
An embodiment of the present disclosure further provides an image processing apparatus, as shown in fig. 5, including:
a determining module 41, configured to determine a motion vector of a luminance component and a motion vector of a chrominance component of a current image block, where the current image block is located in a current frame, the motion vector of the chrominance component is different from a co-located motion vector, and the co-located motion vector is the motion vector of the luminance component, or the co-located motion vector is a motion vector obtained by scaling the motion vector of the luminance component according to a luminance sampling rate and a chrominance sampling rate;
the determining module 41 is further configured to determine a luminance reference block of a luminance component of the current image block and a chrominance reference block of a chrominance component of the current image block according to the motion vector of the luminance component and the motion vector of the chrominance component of the current image block, where the luminance reference block and the chrominance reference block are located in a reference frame of the current frame;
a determining module 41, further configured to determine a luminance reconstruction value of the luminance reference block and a chrominance reconstruction value of the chrominance reference block;
the processing module 42 is configured to perform illumination compensation on the luminance reconstruction value of the luminance reference block to obtain a luminance compensation value of the luminance reference block;
the processing module 42 is further configured to perform illumination compensation on the chrominance reconstruction value of the chrominance reference block to obtain a chrominance compensation value of the chrominance reference block;
the determining module 41 is further configured to determine a luminance prediction value and a chrominance prediction value of the current image block, where the luminance prediction value of the current image block is the luminance compensation value of the luminance reference block, and the chrominance prediction value of the current image block is the chrominance compensation value of the chrominance reference block.
For the embodiments of the apparatus, since they correspond substantially to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described apparatus embodiments are merely illustrative, wherein the modules described as separate modules may or may not be separate. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The method and apparatus of the present disclosure have been described above based on the embodiments and application examples. In addition, the present disclosure also provides a terminal and a storage medium, which are described below.
Referring now to fig. 6, a schematic diagram of an electronic device (e.g., a terminal device or server) 800 suitable for use in implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in the drawings is only an example and should not bring any limitation to the functions and use range of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device 800 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 801 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 802 or a program loaded from a storage means 808 into a Random Access Memory (RAM) 803. The RAM 803 also stores various programs and data necessary for the operation of the electronic device 800. The processing apparatus 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Generally, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 807 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage 808 including, for example, magnetic tape, hard disk, etc.; and a communication device 809. The communication means 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. While the figure illustrates an electronic device 800 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 809, or installed from the storage means 808, or installed from the ROM 802. The computer program, when executed by the processing apparatus 801, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods of the present disclosure as described above.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of an element does not in some cases constitute a limitation on the element itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided an image processing method including:
determining a reference block corresponding to a current image block of a target frame in a reference frame corresponding to the target frame in a video;
and performing illumination compensation on the brightness reconstructed value of the reference block corresponding to the current image block to obtain a brightness predicted value of the current image block, and using the chroma reconstructed value of the reference block corresponding to the current image block as the chroma predicted value of the current image block.
In accordance with one or more embodiments of the present disclosure, in some embodiments, the present disclosure provides an image processing method including:
determining a reference block of a current image block, wherein the current image block is located in a current frame, and the reference block is located in a reference frame of the current frame;
determining a luminance reconstruction value of a reference block and a chrominance reconstruction value of the reference block;
performing illumination compensation on the brightness reconstruction value of the reference block to obtain a brightness compensation value of the reference block;
and determining a brightness predicted value and a chroma predicted value of the current image block, wherein the brightness predicted value of the current image block is a brightness compensation value of the reference block, and the chroma predicted value of the current image block is a chroma reconstruction value of the reference block.
In some embodiments, further comprising: and determining the value of the switch flag bit carried by the current frame as a first value.
In some embodiments, the value of the switch flag carried by the current frame is determined according to the difference between the luminance component of the current frame and the luminance component of the reference frame, and the difference between the chrominance component of the current frame and the chrominance component of the reference frame.
In some embodiments, when a difference between the first change value and the second change value is greater than the difference threshold, a value of the switch flag carried by the current frame is a first value, where the first change value is a difference between a luminance component of the current frame and a luminance component of the reference frame, and the second change value is a difference between a chrominance component of the current frame and a chrominance component of the reference frame.
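The flag decision described above (comparing the luminance change value and the chrominance change value against a difference threshold) can be sketched as follows. This is an illustrative interpretation, not the patent's normative procedure: it assumes the change values are accumulated histogram differences over 8-bit samples, and the function name and parameters are hypothetical.

```python
import numpy as np

def decide_lic_sep_flag(cur_y, ref_y, cur_c, ref_c, diff_threshold):
    """Return the frame-level switch flag: 1 (the first value) when the
    luminance change value and the chrominance change value differ by
    more than the threshold, otherwise 0."""
    def hist_diff(a, b):
        # accumulated histogram difference between two 8-bit sample planes
        ha, _ = np.histogram(a, bins=256, range=(0, 256))
        hb, _ = np.histogram(b, bins=256, range=(0, 256))
        return int(np.abs(ha - hb).sum())
    first_change = hist_diff(cur_y, ref_y)    # luminance change vs. reference
    second_change = hist_diff(cur_c, ref_c)   # chrominance change vs. reference
    return 1 if abs(first_change - second_change) > diff_threshold else 0
```

A frame whose brightness shifts while its chroma stays stable would thus be marked as a target frame, matching the lighting-change scenario the method targets.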
In some embodiments, the luminance prediction value of the current image block is determined according to formula (1):

pred_Y = α_Y × rec_Y + β_Y    (1)

where pred_Y is the luminance prediction value of the current image block, rec_Y is the luminance reconstruction value of the reference block, α_Y is the color channel scaling factor of the luminance component, and β_Y is the offset of the luminance component.
In some embodiments, determining the luminance prediction value and the chrominance prediction value of the current image block comprises: determining that the current frame is a target frame or a non-target frame, wherein if the current frame is the target frame, the luminance prediction value of the current image block is the luminance compensation value of the reference block, and the chrominance prediction value of the current image block is the chrominance reconstruction value of the reference block.
If the current frame is a non-target frame, the brightness predicted value and the chroma predicted value of the current image block are respectively equal to the brightness reconstructed value and the chroma reconstructed value of the reference block; or the luminance predicted value and the chrominance predicted value of the current image block are respectively equal to the luminance compensation value and the chrominance compensation value of the reference block, and the chrominance compensation value is a value obtained by performing illumination compensation on the chrominance reconstruction value of the reference frame.
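The target/non-target selection above can be summarized in a small dispatch sketch. It is illustrative only: the function name is hypothetical, and of the two non-target options the patent allows (plain reconstruction for both components, or compensation for both), this sketch shows the variant that compensates both.

```python
def predict_block(is_target, luma_rec, chroma_rec, ic_luma, ic_chroma):
    """Select the prediction values for one block.

    For a target frame, luma is illumination-compensated while chroma
    keeps the plain reconstruction value; for a non-target frame both
    components are treated alike (here: both compensated).
    ic_luma / ic_chroma are callables applying the (alpha, beta) models.
    """
    if is_target:
        return ic_luma(luma_rec), chroma_rec
    return ic_luma(luma_rec), ic_chroma(chroma_rec)
```

The asymmetry in the target-frame branch is the core of the method: when luminance changes much more than chrominance, compensating chroma would only add noise.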
In some embodiments, the present disclosure provides an image processing method comprising:
determining a motion vector of a luminance component and a motion vector of a chrominance component of a current image block, wherein the current image block is located in a current frame, the motion vector of the chrominance component is different from a co-located motion vector, and the co-located motion vector is the motion vector of the luminance component, or the co-located motion vector is obtained by scaling the motion vector of the luminance component according to a luminance sampling rate and a chrominance sampling rate;
determining a luminance reference block of a luminance component of the current image block and a chrominance reference block of a chrominance component of the current image block according to a motion vector of the luminance component and a motion vector of the chrominance component of the current image block, wherein the luminance reference block and the chrominance reference block are located in a reference frame of the current frame;
determining a luminance reconstruction value of a luminance reference block and a chrominance reconstruction value of a chrominance reference block;
performing illumination compensation on the brightness reconstruction value of the brightness reference block to obtain a brightness compensation value of the brightness reference block;
performing illumination compensation on the chrominance reconstruction value of the chrominance reference block to obtain a chrominance compensation value of the chrominance reference block;
and determining a brightness predicted value and a chroma predicted value of the current image block, wherein the brightness predicted value of the current image block is a brightness compensation value of a brightness reference block, and the chroma predicted value of the current image block is a chroma compensation value of a chroma reference block.
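The co-located motion vector mentioned in the steps above (the luminance motion vector scaled by the luminance and chrominance sampling rates) can be sketched as follows. This assumes a 4:2:0-style arrangement where the chroma plane is subsampled by 2 in each dimension; the function name and default sampling-rate parameters are illustrative.

```python
def colocated_chroma_mv(luma_mv, luma_sampling_rate=2, chroma_sampling_rate=1):
    """Scale the luma motion vector by the chroma/luma sampling-rate
    ratio to obtain the co-located chroma motion vector."""
    scale = chroma_sampling_rate / luma_sampling_rate
    return (luma_mv[0] * scale, luma_mv[1] * scale)
```

The method requires the chroma motion vector to differ from this co-located vector, for example by deriving it as (0, 0) or taking it from a Merge/Skip candidate list.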
In some embodiments, the present disclosure provides an image processing method, a motion vector of a chrominance reference block being the same as a motion vector of a chrominance component of a current image block.
In some embodiments, the present disclosure provides an image processing method, wherein a motion vector of a chrominance component of a current image block is (0, 0).
In some embodiments, the present disclosure provides an image processing method, wherein a motion vector of a chroma component of a current image block is determined according to a vector candidate list, and an index of the motion vector of the chroma component in the vector candidate list is used for indicating the motion vector of the chroma component.
In some embodiments, the present disclosure provides an image processing method in which the luminance prediction value of the current image block is obtained by formula (2):

pred_Y = α_Y × rec_Y + β_Y    (2)

where pred_Y is the luminance prediction value of the current image block, rec_Y is the luminance reconstruction value of the luminance reference block, α_Y is the color channel scaling factor of the luminance component, and β_Y is the offset of the luminance component;

and/or, the chrominance prediction value of the current image block is obtained by formula (3) and formula (4):

pred_U = α_U × rec_U + β_U    (3)

pred_V = α_V × rec_V + β_V    (4)

where pred_U is the U component of the chrominance prediction value of the current image block, pred_V is the V component of the chrominance prediction value of the current image block, rec_U is the U component of the chrominance reconstruction value of the chrominance reference block, rec_V is the V component of the chrominance reconstruction value of the chrominance reference block, α_U is the color channel scaling factor of the U component, α_V is the color channel scaling factor of the V component, β_U is the offset of the U component, and β_V is the offset of the V component.
In some embodiments, the present disclosure provides an image processing method, wherein determining the luminance prediction value and the chrominance prediction value of the current image block comprises:
determining the value of a switch flag bit carried by the current frame;
if the switch flag bit of the current frame takes the first value, the luminance prediction value of the current image block is the luminance compensation value of the luminance reference block, and the chrominance prediction value of the current image block is the chrominance reconstruction value of the chrominance reference block;
and if the switch flag bit of the current frame takes the second value, the luminance prediction value of the current image block is the luminance compensation value of the luminance reference block, and the chrominance prediction value of the current image block is the chrominance compensation value of the chrominance reference block.
In some embodiments, the present disclosure provides an image processing method, wherein a value of a switch flag carried by a current frame is determined according to a difference between a luminance component of the current frame and a luminance component of a reference frame, and a difference between a chrominance component of the current frame and a chrominance component of the reference frame.
In some embodiments, the present disclosure provides an image processing method, where a value of a switch flag carried in a current frame is a first value when a difference between a first change value and a second change value is greater than a difference threshold, where the first change value is a difference between a luminance component of the current frame and a luminance component of a reference frame, and the second change value is a difference between a chrominance component of the current frame and a chrominance component of the reference frame.
In some embodiments, the present disclosure proposes an image processing apparatus comprising:
the device comprises a determining unit, a judging unit and a judging unit, wherein the determining unit is used for determining a reference block of a current image block, the current image block is positioned in a current frame, and the reference block is positioned in a reference frame of the current frame;
a determining unit, further configured to determine a luminance reconstruction value of the reference block and a chrominance reconstruction value of the reference block;
the processing unit is used for carrying out illumination compensation on the brightness reconstruction value of the reference block to obtain a brightness compensation value of the reference block;
and the determining unit is further used for determining a brightness predicted value and a chroma predicted value of the current image block, wherein the brightness predicted value of the current image block is a brightness compensation value of the reference block, and the chroma predicted value of the current image block is a chroma reconstruction value of the reference block.
In some embodiments, the present disclosure proposes an image processing apparatus comprising:
the determining module is used for determining a motion vector of a luminance component and a motion vector of a chrominance component of a current image block, wherein the current image block is located in a current frame, the motion vector of the chrominance component is different from a co-located motion vector, and the co-located motion vector is the motion vector of the luminance component, or the co-located motion vector is obtained by scaling the motion vector of the luminance component according to a luminance sampling rate and a chrominance sampling rate;
the determining module is further used for determining a luminance reference block of the luminance component of the current image block and a chrominance reference block of the chrominance component of the current image block according to the motion vector of the luminance component and the motion vector of the chrominance component of the current image block, wherein the luminance reference block and the chrominance reference block are located in a reference frame of a current frame;
the determining module is further used for determining a brightness reconstruction value of the brightness reference block and a chroma reconstruction value of the chroma reference block;
the processing module is used for carrying out illumination compensation on the brightness reconstruction value of the brightness reference block to obtain a brightness compensation value of the brightness reference block;
the processing module is further used for carrying out illumination compensation on the chrominance reconstruction value of the chrominance reference block to obtain a chrominance compensation value of the chrominance reference block;
the determining module is further configured to determine a luminance predicted value and a chrominance predicted value of the current image block, where the luminance predicted value of the current image block is a luminance compensation value of a luminance reference block, and the chrominance predicted value of the current image block is a chrominance compensation value of a chrominance reference block.
According to one or more embodiments of the present disclosure, there is provided a terminal including: at least one memory and at least one processor;
wherein the at least one memory is configured to store program code, and the at least one processor is configured to call the program code stored in the at least one memory to perform the method of any one of the above.
According to one or more embodiments of the present disclosure, there is provided a storage medium for storing program code for performing the above-described method.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, a technical solution formed by replacing the above features with (but not limited to) features having similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (8)

1. An image processing method, comprising:
determining a reference block of a current image block, wherein the current image block is located in a current frame and the reference block is located in a reference frame of the current frame;
determining a luminance reconstruction value of the reference block and a chrominance reconstruction value of the reference block;
performing illumination compensation on the luminance reconstruction value of the reference block to obtain a luminance compensation value of the reference block; and
determining a luminance prediction value and a chrominance prediction value of the current image block, wherein the luminance prediction value of the current image block is the luminance compensation value of the reference block, and the chrominance prediction value of the current image block is the chrominance reconstruction value of the reference block.
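The prediction of claim 1 can be sketched in a few lines: only the luminance plane of the reference block is illumination-compensated, while the chrominance plane is reused as-is. The sketch below is illustrative only (the patent does not prescribe an implementation, and all names are hypothetical); the scaling factor and offset correspond to the α_Y and β_Y of claim 5.

```python
import numpy as np

def predict_block(ref_luma, ref_chroma, alpha_y, beta_y):
    """Sketch of the claimed prediction (all names are illustrative).

    The luminance prediction is the illumination-compensated luminance
    reconstruction of the reference block; the chrominance prediction is
    the chrominance reconstruction of the reference block, unchanged.
    """
    luma_pred = alpha_y * ref_luma.astype(np.float64) + beta_y
    chroma_pred = ref_chroma.copy()  # no compensation applied to chroma
    return luma_pred, chroma_pred

# Toy 2x2 reference block
ref_luma = np.array([[100.0, 120.0], [140.0, 160.0]])
ref_chroma = np.array([[128.0, 130.0], [126.0, 129.0]])
luma_pred, chroma_pred = predict_block(ref_luma, ref_chroma, alpha_y=1.5, beta_y=-5.0)
```

Note that the chrominance prediction involves no arithmetic at all, which is the point of the claim: the compensation cost is paid only on the luminance component.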
2. The image processing method according to claim 1, further comprising:
determining that a value of a switch flag carried by the current frame is a first value.
3. The image processing method according to claim 2, wherein the value of the switch flag carried by the current frame is determined according to a difference between the luminance component of the current frame and the luminance component of the reference frame, and a difference between the chrominance component of the current frame and the chrominance component of the reference frame.
4. The image processing method according to claim 3, wherein a value of the switch flag carried by the current frame is a first value when a difference between a first change value and a second change value is greater than a difference threshold, wherein the first change value is a difference between a luminance component of the current frame and a luminance component of the reference frame, and the second change value is a difference between a chrominance component of the current frame and a chrominance component of the reference frame.
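The decision rule of claims 2–4 amounts to comparing how much the luminance changed against how much the chrominance changed between the two frames. A minimal sketch, assuming the "first value" is encoded as 1 and the component differences are precomputed scalars (neither encoding is specified by the claims):

```python
def switch_flag_value(luma_change, chroma_change, diff_threshold):
    """Return the switch flag carried by the current frame.

    luma_change:   difference between the luminance components of the
                   current frame and the reference frame (first change value)
    chroma_change: difference between the chrominance components of the
                   current frame and the reference frame (second change value)
    """
    FIRST_VALUE, SECOND_VALUE = 1, 0  # assumed encoding of the flag values
    if luma_change - chroma_change > diff_threshold:
        # Illumination changed far more than colour: enable luma-only compensation.
        return FIRST_VALUE
    return SECOND_VALUE
```

Intuitively, the flag is raised when a frame gets brighter or darker while its colours stay stable, which is exactly the situation where compensating luminance alone (claim 1) is beneficial.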
5. The image processing method according to claim 1, wherein the luminance prediction value of the current image block is determined according to formula (1):

pred_Y = α_Y · rec_Y + β_Y    (1)

wherein pred_Y is the luminance prediction value of the current image block, rec_Y is the luminance reconstruction value of the reference block, α_Y is the color channel scaling factor of the luminance component, and β_Y is the offset of the luminance component.
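A worked instance of formula (1) on a single sample, with a clip to the 8-bit sample range appended as is common practice in video codecs (the clip and rounding are assumptions for illustration, not part of the claim):

```python
def compensate_luma(rec_y, alpha_y, beta_y, bit_depth=8):
    """Apply formula (1) to one luminance sample and clip to the sample range."""
    max_val = (1 << bit_depth) - 1            # 255 for 8-bit video
    pred = alpha_y * rec_y + beta_y           # formula (1)
    return min(max(round(pred), 0), max_val)  # clip to sample range (assumed)

# e.g. rec_y = 200, alpha_y = 1.5, beta_y = -20: 1.5*200 - 20 = 280, clipped to 255
```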
6. An image processing apparatus, characterized by comprising a determining unit and a processing unit, wherein:
the determining unit is configured to determine a reference block of a current image block, wherein the current image block is located in a current frame and the reference block is located in a reference frame of the current frame;
the determining unit is further configured to determine a luminance reconstruction value of the reference block and a chrominance reconstruction value of the reference block;
the processing unit is configured to perform illumination compensation on the luminance reconstruction value of the reference block to obtain a luminance compensation value of the reference block; and
the determining unit is further configured to determine a luminance prediction value and a chrominance prediction value of the current image block, wherein the luminance prediction value of the current image block is the luminance compensation value of the reference block, and the chrominance prediction value of the current image block is the chrominance reconstruction value of the reference block.
7. A terminal, comprising:
at least one memory and at least one processor;
wherein the at least one memory is configured to store program code and the at least one processor is configured to invoke the program code stored in the at least one memory to perform the method of any of claims 1 to 5.
8. A storage medium for storing program code for performing the method of any one of claims 1 to 5.
CN202011060097.7A 2020-09-30 2020-09-30 Image processing method, device, terminal and storage medium Active CN112203085B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011060097.7A CN112203085B (en) 2020-09-30 2020-09-30 Image processing method, device, terminal and storage medium


Publications (2)

Publication Number Publication Date
CN112203085A true CN112203085A (en) 2021-01-08
CN112203085B CN112203085B (en) 2023-10-17

Family

ID=74012512

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011060097.7A Active CN112203085B (en) 2020-09-30 2020-09-30 Image processing method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN112203085B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100183068A1 (en) * 2007-01-04 2010-07-22 Thomson Licensing Methods and apparatus for reducing coding artifacts for illumination compensation and/or color compensation in multi-view coded video
CN105103552A (en) * 2013-01-10 2015-11-25 三星电子株式会社 Method for encoding inter-layer video for compensating luminance difference and device therefor, and method for decoding video and device therefor
CN109076210A (en) * 2016-05-28 2018-12-21 联发科技股份有限公司 The method and apparatus of the present image reference of coding and decoding video
CN111031319A (en) * 2019-12-13 2020-04-17 浙江大华技术股份有限公司 Local illumination compensation prediction method, terminal equipment and computer storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
QIANG TANG et al.: "Efficient Chrominance Compensation for MPEG2 to H.264 Transcoding", 2007 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07) *
SAURAV BANDYOPADHYAY et al.: "CE10-related: Local illumination compensation simplifications", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 13th Meeting: Marrakech, MA, 9–18 Jan. 2019, JVET-M0224-v2 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022174469A1 (en) * 2021-02-22 2022-08-25 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Illumination compensation method, encoder, decoder, and storage medium
CN113422959A (en) * 2021-05-31 2021-09-21 Zhejiang Smart Video Security Innovation Center Co., Ltd. Video encoding and decoding method and device, electronic equipment and storage medium
CN116708789A (en) * 2023-08-04 2023-09-05 Hunan Malanshan Video Advanced Technology Research Institute Co., Ltd. Video analysis coding system based on artificial intelligence
CN116708789B (en) * 2023-08-04 2023-10-13 Hunan Malanshan Video Advanced Technology Research Institute Co., Ltd. Video analysis coding system based on artificial intelligence


Similar Documents

Publication Publication Date Title
JP7461974B2 (en) Chroma prediction method and device
CN112203085A (en) Image processing method, device, terminal and storage medium
TWI622288B (en) Video decoding method
CN110248189B (en) Video quality prediction method, device, medium and electronic equipment
CN112995663B (en) Video coding method, video decoding method and corresponding devices
CN113784175B (en) HDR video conversion method, device, equipment and computer storage medium
CN109996080B (en) Image prediction method and device and coder-decoder
CN112203086A (en) Image processing method, device, terminal and storage medium
KR102609215B1 (en) Video encoders, video decoders, and corresponding methods
CN113473126A (en) Video stream processing method and device, electronic equipment and computer readable medium
CN110418134B (en) Video coding method and device based on video quality and electronic equipment
CN111738951A (en) Image processing method and device
CN111526363A (en) Encoding method and apparatus, terminal and storage medium
CN111836046A (en) Video encoding method and apparatus, electronic device, and computer-readable storage medium
CN115118964A (en) Video encoding method, video encoding device, electronic equipment and computer-readable storage medium
CN111263166B (en) Video image prediction method and device
CN112118447B (en) Construction method, device and coder-decoder for fusion candidate motion information list
CN116886918A (en) Video coding method, device, equipment and storage medium
US11902506B2 (en) Video encoder, video decoder, and corresponding methods
US20150063462A1 (en) Method and system for enhancing the quality of video during video compression
CN112135149B (en) Entropy encoding/decoding method and device of syntax element and codec
CN115706810A (en) Video frame adjusting method and device, electronic equipment and storage medium
CN113542737A (en) Encoding mode determining method and device, electronic equipment and storage medium
CN113411611B (en) Video image processing method and device and electronic device
CN108769695B (en) Frame type conversion method, system and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant