CN114565543A - Video color enhancement method and system based on UV histogram features - Google Patents

Video color enhancement method and system based on UV histogram features

Info

Publication number
CN114565543A
CN114565543A (application CN202111655771.0A)
Authority
CN
China
Prior art keywords
data
component
model
image
histogram
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111655771.0A
Other languages
Chinese (zh)
Inventor
唐杰
张聪聪
朱运平
李庆瑜
戴立言
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI WONDERTEK SOFTWARE CO Ltd
Original Assignee
SHANGHAI WONDERTEK SOFTWARE CO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANGHAI WONDERTEK SOFTWARE CO Ltd filed Critical SHANGHAI WONDERTEK SOFTWARE CO Ltd
Priority to CN202111655771.0A priority Critical patent/CN114565543A/en
Publication of CN114565543A publication Critical patent/CN114565543A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/40: Image enhancement or restoration using histogram techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/90: Determination of colour characteristics
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20084: Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Of Color Television Signals (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a video color enhancement method and system based on UV histogram features. Because the algorithm works directly in the YUV color space, it avoids the large amount of time-consuming operations caused by color space conversion; using an open-source data set, it can adaptively adjust the algorithm parameters for different scenes without human intervention, so the algorithm is particularly easy to use.

Description

Video color enhancement method and system based on UV histogram features
Technical Field
The invention relates to a video processing technology, in particular to a video color enhancement method and system based on UV histogram features.
Background
The existing algorithms for video color adjustment mainly achieve image color enhancement by manually adjusting parameters in the RGB or HSV color space of the image. On the one hand, whether based on RGB or HSV, a video must first be converted from the YUV color space to the RGB or HSV color space, and this color space conversion consumes a large amount of time, which hinders real-time processing. On the other hand, algorithms based on the RGB or HSV color space depend on parameters set by hand, so they do not adapt well to every scene and are unfriendly to use.
Color enhancement algorithm for the RGB color space: convert the video from the YUV color space to the RGB color space, then adjust the saturation in the RGB color space:
(saturation formula; equation image not reproduced)
Set the saturation s within the value range [-100, 100]; from the chosen s, derive the adjusted RGB component values r′, g′ and b′, then convert back to the YUV color space.
k = s × 128 / 100.0
(r′, g′, b′ update formulas; equation images not reproduced)
Color enhancement algorithm for the HSV color space: convert the video from the YUV color space to the HSV color space, then adjust the saturation in the HSV color space:
S′ = S × s
Given the set saturation coefficient s, compute the adjusted value S′ of the component S, then convert back from the HSV color space to the YUV color space.
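The S′ = S × s adjustment above can be sketched in a few lines (a minimal NumPy sketch; the 0–1 value range for S and the clipping are assumptions, since the text does not fix them):

```python
import numpy as np

def scale_saturation(hsv, s):
    """Scale the S channel of an HSV image by the coefficient s (S' = S * s)."""
    out = hsv.astype(np.float64).copy()
    out[..., 1] = np.clip(out[..., 1] * s, 0.0, 1.0)  # keep S within the assumed [0, 1] range
    return out

# one pixel with H=0.5, S=0.4, V=0.8, saturation boosted by s=1.5
hsv = np.array([[[0.5, 0.4, 0.8]]])
print(scale_saturation(hsv, 1.5)[0, 0, 1])  # 0.6
```

The drawback noted in the text remains visible in the sketch: s is a hand-set constant, applied uniformly regardless of scene content.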
All of the above algorithms require human intervention to set the coefficients, which is very unfriendly and introduces uncertainty into the algorithm.
Disclosure of Invention
The invention provides a video color enhancement method based on UV histogram features, aiming to solve the technical problems that the prior art cannot adapt well to various scenes and is unfriendly in algorithm use.
A video color enhancement method based on UV histogram features comprises the following steps:
S1, obtaining model weights[2][256×256] through deep neural network learning;
S2, learning the parameters of the UV components: converting the RGB image to the YUV color space, computing the histogram histUV[256×256] over the UV components, and multiplying and summing the histogram histUV[256×256] with the model weights[2][256×256] to obtain the final parameters p_u and p_v:
p_u = Σ_{i,j} histUV[i][j] × weights[0][i][j]
p_v = Σ_{i,j} histUV[i][j] × weights[1][i][j]
S3, calculating the color-enhanced data of the UV components of the image:
U′ = U × p_u + b,  V′ = V × p_v + b
where U and V denote the U and V component data of the input video, U′ and V′ denote the color-enhanced U and V component data, and b is the bias learned together with the weights.
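Steps S1–S3 can be sketched as follows. This is a hedged NumPy sketch: the weights are a uniform stand-in for the trained model, and the histogram normalization, the bias b = 0, and the clipping to [0, 255] are assumptions not fixed by the text.

```python
import numpy as np

def uv_histogram(u, v):
    """Joint 256x256 histogram of the U and V planes (histUV[256x256])."""
    hist, _, _ = np.histogram2d(u.ravel(), v.ravel(),
                                bins=256, range=[[0, 256], [0, 256]])
    return hist / hist.sum()  # normalized so p_u, p_v do not depend on frame size (assumption)

def enhance_uv(u, v, weights, b=0.0):
    """S2/S3: p_u, p_v = sum(histUV * weights[k]); then U' = U*p_u + b, V' = V*p_v + b."""
    hist = uv_histogram(u, v)
    p_u = float((hist * weights[0]).sum())
    p_v = float((hist * weights[1]).sum())
    u2 = np.clip(u * p_u + b, 0, 255).astype(np.uint8)
    v2 = np.clip(v * p_v + b, 0, 255).astype(np.uint8)
    return u2, v2, p_u, p_v

# stand-in "trained" weights: a uniform gain of 1.1 everywhere (illustrative only)
weights = np.full((2, 256, 256), 1.1)
u = np.full((4, 4), 100, dtype=np.uint8)
v = np.full((4, 4), 120, dtype=np.uint8)
u2, v2, p_u, p_v = enhance_uv(u, v, weights)
print(p_u, u2[0, 0], v2[0, 0])  # gain 1.1 -> U 100 -> 110, V 120 -> 132
```

With a real trained weight matrix, each (U, V) bin contributes its own learned gain, so frames with different color statistics receive different p_u and p_v.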
The method of the invention also comprises the following steps:
S4, performing supervised learning with L1 loss:
L1 = (1 / 2N) × Σ_{i=1}^{N} ( |U′_i − U_gt,i| + |V′_i − V_gt,i| )
where N is the number of pixels in a single channel of the image, and U_gt and V_gt are the U and V component data of the ground truth (GT) in the open-source data set.
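The L1 supervision of S4 can be written out directly (a sketch; averaging over both channels with a 1/(2N) factor is an assumption, since the equation image is not reproduced):

```python
import numpy as np

def l1_uv_loss(u_pred, v_pred, u_gt, v_gt):
    """Mean absolute error over the U and V channels; N = pixels per single channel."""
    n = u_gt.size
    return (np.abs(u_pred - u_gt).sum() + np.abs(v_pred - v_gt).sum()) / (2.0 * n)

# tiny 1x2 example: per-pixel absolute errors are (1, 1) on U and (2, 0) on V
u_pred = np.array([[100.0, 102.0]]); u_gt = np.array([[101.0, 101.0]])
v_pred = np.array([[130.0, 128.0]]); v_gt = np.array([[128.0, 128.0]])
print(l1_uv_loss(u_pred, v_pred, u_gt, v_gt))  # (1+1+2+0) / (2*2) = 1.0
```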
Inputting video further comprises converting the RGB image into YCbCr data.
A video color enhancement system based on UV histogram features, comprising:
a model weight training unit, for obtaining model weights[2][256×256] through deep neural network learning;
a UV component parameter learning unit, for converting the RGB image to the YUV color space, computing the histogram histUV[256×256] over the UV components, and multiplying and summing the histogram histUV[256×256] with the model weights[2][256×256] to obtain the final parameters p_u and p_v:
p_u = Σ_{i,j} histUV[i][j] × weights[0][i][j]
p_v = Σ_{i,j} histUV[i][j] × weights[1][i][j]
a calculation unit, for calculating the color-enhanced data of the UV components of the image:
U′ = U × p_u + b,  V′ = V × p_v + b
where U and V denote the U and V component data of the input video, and U′ and V′ denote the color-enhanced U and V component data.
The system may further comprise:
a supervised learning processing unit, for performing supervised learning with L1 loss:
L1 = (1 / 2N) × Σ_{i=1}^{N} ( |U′_i − U_gt,i| + |V′_i − V_gt,i| )
where N is the number of pixels in a single channel of the image, and U_gt and V_gt are the U and V component data of the ground truth (GT) in the open-source data set.
The invention relates to a video color enhancement algorithm based on UV histogram features: by adjusting the parameters of the UV components of an image with a deep learning algorithm, real-time video color enhancement can be realized on a CPU, making the video colors more vivid. Because the algorithm works directly in the YUV color space, it avoids the large amount of time-consuming operations caused by color space conversion; using an open-source data set, it can adaptively adjust the algorithm parameters for different scenes without human intervention, so the algorithm is particularly easy to use.
Drawings
FIG. 1 is a schematic flow chart of a method for video color enhancement algorithm based on UV histogram features.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings.
The applicant has found that histogram equalization aims to expand the dynamic range of an image whose gray values occupy only a narrow band (for example, an image is too bright when its gray values are concentrated in the right part of the histogram), but the number of gray levels in the transformed image may be reduced. The conditions assumed in Table 1 are: 8 gray levels in total, with per-level distribution probabilities p_s(s_k) for the original image. After histogram equalization, gray level 0 maps to 1, level 1 to 3, level 2 to 5, levels 3 and 4 to 6, and levels 5, 6 and 7 to 7. The histogram equalization algorithm therefore reduces the image from the 8 gray levels 0, 1, 2, 3, 4, 5, 6, 7 to the 5 levels 1, 3, 5, 6, 7; it is not difficult to see that the contrast of some parts of the image is necessarily enhanced.
TABLE 1 histogram equalization calculation procedure
(Table 1 image not reproduced)
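Since the image of Table 1 is not reproduced, the mapping it describes can be recomputed from a sample 8-level distribution; the probabilities below are assumed values chosen to be consistent with the mapping stated in the text (0→1, 1→3, 2→5, 3 and 4→6, 5–7→7):

```python
import numpy as np

# assumed per-level probabilities p_s(s_k) for 8 gray levels (Table 1's image is missing)
ps = np.array([0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02])
L = len(ps)  # 8 gray levels

# standard histogram equalization: map level k to round((L - 1) * CDF(k))
cdf = np.cumsum(ps)
mapping = np.floor((L - 1) * cdf + 0.5).astype(int)
print(mapping.tolist())            # [1, 3, 5, 6, 6, 7, 7, 7]
print(len(set(mapping.tolist())))  # 5 distinct levels survive out of 8
```

This reproduces the reduction described in the text: 8 input levels collapse onto the 5 output levels 1, 3, 5, 6, 7.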
A video color enhancement method based on UV histogram features comprises the following steps:
S110, obtaining model weights[2][256×256] through deep neural network learning;
S120, learning the parameters of the UV components: converting the RGB image to the YUV color space, computing the histogram histUV[256×256] over the UV components, and multiplying and summing the histogram histUV[256×256] with the model weights[2][256×256] to obtain the final parameters p_u and p_v:
p_u = Σ_{i,j} histUV[i][j] × weights[0][i][j]
p_v = Σ_{i,j} histUV[i][j] × weights[1][i][j]
S130, calculating the color-enhanced data of the UV components of the image:
U′ = U × p_u + b,  V′ = V × p_v + b
where U and V denote the U and V component data of the input video, and U′ and V′ denote the color-enhanced U and V component data.
The invention may also perform supervised learning with L1 loss:
L1 = (1 / 2N) × Σ_{i=1}^{N} ( |U′_i − U_gt,i| + |V′_i − V_gt,i| )
where N is the number of pixels in a single channel of the image, and U_gt and V_gt are the U and V component data of the ground truth (GT) in the open-source data set.
Inputting video further comprises converting the RGB images into YCbCr data.
Application example
Step 0: the first step is machine learning. The effectiveness of the machine learning is evaluated with L1 loss, whose index is calculated as follows:
L1 = (1 / 2N) × Σ_{i=1}^{N} ( |U′_i − U_gt,i| + |V′_i − V_gt,i| )
where N is the number of pixels in a single channel of the image; U_gt and V_gt are the U and V component data of the ground truth (GT) in the open-source data set; U′ and V′ are the U and V components of the prediction set output by the model training process.
This part is the outermost framework of model training, used to judge whether the training result has reached its best effect: the L1 loss is calculated between the model's prediction data and the source data, and training is considered to have converged once the change in the L1 loss stays stable at a very small level. The prediction data are the data obtained by enhancing the U and V components with the enhancement formulas described in Step 1 below.
Step 1: the machine learning process is called model training, which mainly comprises the model algorithm logic, the input training data, and an output prediction set.
The input data for model training is the open-source REDS deblur data set. The trained model is a linear model, trained using PyTorch's linear layer (torch.nn.Linear). The input feature is the histogram histUV[256×256]; through the linear transformation formula below we obtain a 256×256-dimensional weight for each of the U and V components: weights[2][256×256].
Linear transformation formula: y = xA + b
In actual operation, the obtained weight data can be transposed and then matrix-multiplied with x.
The histogram principle is applied here because a histogram reflects the overall characteristics of an image fairly comprehensively: 256×256 means that U has 256 possible values and V has 256 possible values, so their combination gives 256×256 feature values, reflecting all the color characteristics of the RGB image. After these feature values are input, torch.nn.Linear performs a linear regression on the features by learning from the data, yielding the slope of a linear function, which is the weight data.
U component enhancement formula: Y_u = X_u × P_u + b
V component enhancement formula: Y_v = X_v × P_v + b
The U and V component enhancement formulas here are clearly very similar to the linear transformation formula above, and they are the same in nature.
The linear transformation performed during model training is precisely what finds P_u and P_v. The essence of UV enhancement is a linear function; the key is where the coefficients of this linear function come from. Some are set by hand: for example, with the eq filter inside FFmpeg, setting an enhancement color intensity coefficient such as a gain factor of 1.3 uniformly enhances the whole image by a factor of 1.3. Here, by contrast, the coefficients are obtained by linear model training, yielding an enhancement coefficient that is a weighted average over each image's features; see the actual color enhancement process below.
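The linear fit y = xA + b (trained with PyTorch's linear layer per the text) can be sketched offline with NumPy least squares. This is an illustrative stand-in, not the actual training code: the 16-dimensional features, the synthetic data, and the least-squares solver are all assumptions (the real input is the flattened histUV[256×256], fit by gradient descent).

```python
import numpy as np

rng = np.random.default_rng(0)

# toy stand-in: 200 "histogram feature" vectors of dimension 16
X = rng.random((200, 16))
true_A = rng.normal(size=(16, 2))   # two outputs per feature vector: p_u and p_v
true_b = np.array([0.1, -0.2])
Y = X @ true_A + true_b             # the linear transformation y = xA + b

# least-squares fit of A and b; torch.nn.Linear learns the same map by SGD
X1 = np.hstack([X, np.ones((len(X), 1))])   # absorb the bias into the design matrix
coef, *_ = np.linalg.lstsq(X1, Y, rcond=None)
A_hat, b_hat = coef[:-1], coef[-1]
print(np.allclose(A_hat, true_A, atol=1e-6), np.allclose(b_hat, true_b, atol=1e-6))
```

On noiseless synthetic data the recovered A and b match the generating parameters, which is the sense in which training "finds" P_u and P_v.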
Step 2: Steps 0 and 1 above constitute model training, whose purpose is to obtain the enhancement coefficients for the UV linear enhancement. Through these steps we finally obtain a 256×256-dimensional weight for each of the U and V components: weights[2][256×256].
Next, the weight data above are applied to actual image enhancement, namely the actual image enhancement processing procedure.
In the actual enhancement process, similarly to the calculation steps of the model algorithm, the histogram of the UV components of the current image data is computed to obtain histUV[256×256]. Because image data are processed frame by frame, the UV histogram of each frame is recomputed from that frame's data at run time;
then it is multiplied and summed with the model weights[2][256×256] to obtain the final parameters p_u and p_v:
p_u = Σ_{i,j} histUV[i][j] × weights[0][i][j]
p_v = Σ_{i,j} histUV[i][j] × weights[1][i][j]
These enhancement coefficients p_u and p_v are specific to the current image frame and take all the characteristics of the current frame's data into account: histUV represents the UV characteristics of each image frame, the corresponding weights are applied, and the computed result is the enhancement coefficient of the current frame.
Finally, the enhancement coefficients above are applied to the UV component color enhancement of the image data, using the following formula:
U′ = U × p_u + b,  V′ = V × p_v + b
where U and V denote the U and V component data of the input video, and U′ and V′ denote the color-enhanced U and V component data.
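The per-frame loop described above, with the UV histogram recomputed for every frame, can be sketched as follows (the frames, the uniform weights, and the histogram normalization are stand-in assumptions):

```python
import numpy as np

def frame_gain(u, v, weights):
    """Recompute histUV for this frame and contract it with the trained weights."""
    hist, _, _ = np.histogram2d(u.ravel(), v.ravel(),
                                bins=256, range=[[0, 256], [0, 256]])
    hist /= hist.sum()  # normalization is an assumption, not fixed by the text
    return (hist * weights[0]).sum(), (hist * weights[1]).sum()

weights = np.full((2, 256, 256), 1.05)            # stand-in for the trained weights
frames = [(np.full((2, 2), 90, np.uint8),          # (U, V) planes of each frame
           np.full((2, 2), 140, np.uint8)) for _ in range(3)]

gains = [frame_gain(u, v, weights) for u, v in frames]  # one (p_u, p_v) per frame
enhanced = [(np.clip(u * pu, 0, 255), np.clip(v * pv, 0, 255))
            for (u, v), (pu, pv) in zip(frames, gains)]
print(len(gains), round(gains[0][0], 2))  # 3 frames, p_u = 1.05 for the first
```

Because the histogram is cheap to recompute per frame and the contraction is a single weighted sum, this is the part that makes real-time CPU processing plausible.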
A video color enhancement system based on UV histogram features, comprising:
a model weight training unit, for obtaining model weights[2][256×256] through deep neural network learning;
a UV component parameter learning unit, for converting the RGB image to the YUV color space, computing the histogram histUV[256×256] over the UV components, and multiplying and summing the histogram histUV[256×256] with the model weights[2][256×256] to obtain the final parameters p_u and p_v:
p_u = Σ_{i,j} histUV[i][j] × weights[0][i][j]
p_v = Σ_{i,j} histUV[i][j] × weights[1][i][j]
a calculation unit, for calculating the color-enhanced data of the UV components of the image:
U′ = U × p_u + b,  V′ = V × p_v + b
where U and V denote the U and V component data of the input video, and U′ and V′ denote the color-enhanced U and V component data.
The system further comprises:
a supervised learning processing unit, for performing supervised learning with L1 loss:
L1 = (1 / 2N) × Σ_{i=1}^{N} ( |U′_i − U_gt,i| + |V′_i − V_gt,i| )
where N is the number of pixels in a single channel of the image, and U_gt and V_gt are the U and V component data of the ground truth (GT) in the open-source data set.
The embodiments of the present invention have been described in detail with reference to the drawings, but the present invention is not limited to these embodiments. Various changes to the present invention still fall within its scope, provided they fall within the scope of the claims of the present invention and their equivalents.

Claims (7)

1. A video color enhancement method based on UV histogram features is characterized in that: the method comprises the following steps:
S1, obtaining model weights[2][256×256] through deep neural network learning;
S2, learning the parameters of the UV components: converting the RGB image to the YUV color space, computing the histogram histUV[256×256] over the UV components, and multiplying and summing the histogram histUV[256×256] with the model weights[2][256×256] to obtain the final parameters p_u and p_v:
p_u = Σ_{i,j} histUV[i][j] × weights[0][i][j]
p_v = Σ_{i,j} histUV[i][j] × weights[1][i][j]
S3, calculating the color-enhanced data of the UV components of the image:
U′ = U × p_u + b,  V′ = V × p_v + b
where U and V denote the U and V component data of the input video, and U′ and V′ denote the color-enhanced U and V component data.
2. The method of claim 1, further comprising:
S4, performing supervised learning with L1 loss:
L1 = (1 / 2N) × Σ_{i=1}^{N} ( |U′_i − U_gt,i| + |V′_i − V_gt,i| )
where N is the number of pixels in a single channel of the image, and U_gt and V_gt are the U and V component data of the ground truth (GT) in the open-source data set.
3. The method of claim 1 wherein inputting video further comprises converting RGB images to YCbCr data.
4. The method of claim 1, wherein step S1 is preceded by:
machine learning, wherein the effectiveness of the machine learning is evaluated with L1 loss; the L1 loss index is calculated as follows:
L1 = (1 / 2N) × Σ_{i=1}^{N} ( |U′_i − U_gt,i| + |V′_i − V_gt,i| )
wherein N represents the number of pixels in a single channel of the image; U_gt and V_gt represent the U and V component data of the ground truth (GT) in the open-source data set; U′ and V′ represent the U and V components of the prediction set output by the model training process;
this part is the outermost framework of model training, used to judge whether the training result achieves the best effect: the L1 loss is calculated from the model's prediction data and the source data, and the training is considered to have achieved its effect when the change in the L1 loss remains stable within a preset range; the prediction data are the data obtained by calculating the U and V components with the U and V component enhancement formulas in step S1.
5. The method of claim 4, wherein the machine learning process is model training, which mainly comprises model algorithm logic, input training data, and an output prediction set;
the input data for model training is the open-source REDS deblur data set; the trained model is a linear model, trained using PyTorch's linear layer (torch.nn.Linear); the input feature is the histogram histUV[256×256], and the 256×256-dimensional weights of the U and V components, weights[2][256×256], are obtained through the following linear transformation formula:
linear transformation formula: y = xA + b
the obtained weight data can be transposed in actual operation and matrix-multiplied with x; because the histogram reflects the overall characteristics of the image fairly comprehensively, 256×256 means that U has 256 values and V has 256 values, and their combination gives 256×256 feature values reflecting all the color characteristics of the RGB image; after the feature values are input, torch.nn.Linear performs linear regression on the features by learning from the data to obtain the slope of a linear function, referred to as the weight data, with which the U and V components of the image in step S3 can be enhanced.
6. A video color enhancement system based on UV histogram features, characterized by: the method comprises the following steps:
a model weight training unit, for obtaining model weights[2][256×256] through deep neural network learning;
a UV component parameter learning unit, for converting the RGB image to the YUV color space, computing the histogram histUV[256×256] over the UV components, and multiplying and summing the histogram histUV[256×256] with the model weights[2][256×256] to obtain the final parameters p_u and p_v:
p_u = Σ_{i,j} histUV[i][j] × weights[0][i][j]
p_v = Σ_{i,j} histUV[i][j] × weights[1][i][j]
a calculation unit, for calculating the color-enhanced data of the UV components of the image:
U′ = U × p_u + b,  V′ = V × p_v + b
where U and V denote the U and V component data of the input video, and U′ and V′ denote the color-enhanced U and V component data.
7. The system of claim 6, further comprising:
a supervised learning processing unit, for performing supervised learning with L1 loss:
L1 = (1 / 2N) × Σ_{i=1}^{N} ( |U′_i − U_gt,i| + |V′_i − V_gt,i| )
where N is the number of pixels in a single channel of the image, and U_gt and V_gt are the U and V component data of the ground truth (GT) in the open-source data set.
CN202111655771.0A 2021-12-30 2021-12-30 Video color enhancement method and system based on UV histogram features Pending CN114565543A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111655771.0A CN114565543A (en) 2021-12-30 2021-12-30 Video color enhancement method and system based on UV histogram features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111655771.0A CN114565543A (en) 2021-12-30 2021-12-30 Video color enhancement method and system based on UV histogram features

Publications (1)

Publication Number Publication Date
CN114565543A true CN114565543A (en) 2022-05-31

Family

ID=81712202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111655771.0A Pending CN114565543A (en) 2021-12-30 2021-12-30 Video color enhancement method and system based on UV histogram features

Country Status (1)

Country Link
CN (1) CN114565543A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116740650A (en) * 2023-08-10 2023-09-12 青岛农业大学 Crop breeding monitoring method and system based on deep learning
CN116740650B (en) * 2023-08-10 2023-10-20 青岛农业大学 Crop breeding monitoring method and system based on deep learning

Similar Documents

Publication Publication Date Title
CN105046663B (en) A kind of adaptive enhancement method of low-illumination image for simulating human visual perception
US8295596B1 (en) Adaptive histogram-based video contrast enhancement
KR100771158B1 (en) Method AND System For Enhancement Color Image Quality
US20090317017A1 (en) Image characteristic oriented tone mapping for high dynamic range images
CN110428379B (en) Image gray level enhancement method and system
Kapoor et al. Colour image enhancement based on histogram equalization
CN110298792B (en) Low-illumination image enhancement and denoising method, system and computer equipment
CN110009574B (en) Method for reversely generating high dynamic range image from low dynamic range image
CN111105371A (en) Low-contrast infrared image enhancement method
CN114565535B (en) Image enhancement method and device based on adaptive gradient gamma correction
US20130287299A1 (en) Image processing apparatus
CN115965544A (en) Image enhancement method and system for self-adaptive brightness adjustment
CN114565543A (en) Video color enhancement method and system based on UV histogram features
CN111563854A (en) Particle swarm optimization method for underwater image enhancement processing
CN110766622A (en) Underwater image enhancement method based on brightness discrimination and Gamma smoothing
WO2020107308A1 (en) Low-light-level image rapid enhancement method and apparatus based on retinex
Muniraj et al. Underwater image enhancement by modified color correction and adaptive Look-Up-Table with edge-preserving filter
CN112488968B (en) Image enhancement method for hierarchical histogram equalization fusion
CN114187222A (en) Low-illumination image enhancement method and system and storage medium
CN102456222A (en) Method and device for organized equalization in image
CN110992287B (en) Method for clarifying non-uniform illumination video
CN112308793A (en) Novel method for enhancing contrast and detail of non-uniform illumination image
CN109801246B (en) Global histogram equalization method for adaptive threshold
CN101478690B (en) Image irradiation correcting method based on color domain mapping
CN111080563A (en) Histogram equalization method based on traversal optimization

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination