CN115514947A - AI automatic white balance algorithm and electronic equipment - Google Patents

AI automatic white balance algorithm and electronic equipment Download PDF

Info

Publication number
CN115514947A
CN115514947A CN202110631167.8A CN202110631167A CN115514947A CN 115514947 A CN115514947 A CN 115514947A CN 202110631167 A CN202110631167 A CN 202110631167A CN 115514947 A CN115514947 A CN 115514947A
Authority
CN
China
Prior art keywords
value
ffcc
model
image
covariance matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110631167.8A
Other languages
Chinese (zh)
Other versions
CN115514947B (en
Inventor
钱彦霖
郗东苗
金萌
罗钢
朱聪超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202310814091.1A priority Critical patent/CN116761081A/en
Priority to CN202110631167.8A priority patent/CN115514947B/en
Priority to PCT/CN2022/093491 priority patent/WO2022257713A1/en
Publication of CN115514947A publication Critical patent/CN115514947A/en
Application granted granted Critical
Publication of CN115514947B publication Critical patent/CN115514947B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/88Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/73Colour balance circuits, e.g. white balance circuits or colour temperature control

Abstract

The application provides an algorithm and an electronic device for automatic white balance of AI. Wherein the algorithm comprises: and respectively calculating the image through a first FFCC model, a second FFCC model, a third FFCC model and a fourth FFCC model to respectively obtain a first regulating value, a second regulating value, a third regulating value and a fourth regulating value of the image. And then, inputting the four regulating values into a Kalman filter for calculation to obtain the regulating value of the image. The image adjustment value obtained by the AI algorithm is an adjustment value with extremely high accuracy, and when an Image Signal Processor (ISP) adjusts the color of the image by using the image adjustment value, the color difference of the image can be well corrected, so that the white balance effect of the image is better.

Description

AI automatic white balance algorithm and electronic equipment
Technical Field
The present application relates to the field of image processing, and in particular, to an algorithm and an electronic device for AI auto white balance.
Background
Due to the unique adaptability of the human eye, the change of the color temperature cannot be found sometimes. For example, if the tungsten lamp is used for a long time, the white paper under the tungsten lamp cannot be perceived to be reddish, if the fluorescent lamp is suddenly changed into the tungsten lamp for illumination, the color of the white paper can be perceived to be reddish, but the feeling can be continued for a while. There is no way for a Charge-coupled Device (CCD) or CMOS circuit of a camera to correct for variations in the color of the light source as the human eye does, so color cast occurs if the white balance setting of the camera does not coincide with the color temperature of the scene illumination. Therefore, how to improve the accuracy of white balance is a matter of increasing attention of technicians.
Disclosure of Invention
The embodiment of the application provides an AI automatic white balance algorithm and electronic equipment, and solves the problems that the RGB _ GAIN (image adjustment value) accuracy of an image light source obtained by a traditional automatic white balance algorithm is not high, and further the electronic equipment is poor in white balance adjustment effect on the image based on the RGB _ GAIN.
In a first aspect, an embodiment of the present application provides an algorithm for AI automatic white balance, including: respectively inputting the first image into a first FFCC model, a second FFCC model, a third FFCC model and a fourth FFCC model for calculation to obtain a first adjusting value, a second adjusting value, a third adjusting value and a fourth adjusting value, wherein the first adjusting value is obtained through the first FFCC model, the second adjusting value is obtained through the second FFCC model, the third adjusting value is obtained through the third FFCC model, and the fourth adjusting value is obtained through the fourth FFCC model; inputting the first adjustment value, the second adjustment value, the third adjustment value and the fourth adjustment value into a Kalman filter for calculation to obtain an adjustment value of a first image, wherein the adjustment value of the first image is used for adjusting the color of the first image; the first FFCC model is used for identifying a model corresponding to the image brightness value larger than a first threshold value, the second FFCC model is used for identifying a model corresponding to the image brightness value smaller than or equal to the first threshold value, and the third FFCC model is used for identifying the D of the image light source uv The model corresponding to the value greater than the second threshold value, and the fourth FFCC model is used for identifying the D of the image light source uv And the model is smaller than or equal to the second threshold value.
The method of the first aspect is implemented by passing the image through an LV (luminance value) -type FFCC model (first FFCC model and second FFCC model) and D, respectively uv The FFCC models (the third FFCC model and the fourth FFCC model) are calculated to obtain adjusting values (a first adjusting value and a second adjusting value) related to the LV of the image and the image light source D uv The associated adjustment values (third adjustment value and fourth adjustment value), the adjustment value to be associated with the LV of the image and the image light source D uv And performing fusion calculation on the related adjustment value through a Kalman filter to obtain an adjustment value (RGB _ GAIN) of the image. Since the RGB _ GAIN is based on the image LV and the image light source D uv As a result, the accuracy of the RGB _ GAIN is extremely high. When the electronic device uses the RGB _ GAIN to adjust RGB of each pixel of the image, the color difference of the image caused by the color temperature of the light source can be accurately corrected, namely, the white balance of the image can be accurately adjusted.
With reference to the first aspect, in an embodiment, the inputting the first image into the first FFCC model, the second FFCC model, the third FFCC model, and the fourth FFCC model respectively to calculate, so as to obtain the first adjustment value, the second adjustment value, the third adjustment value, and the fourth adjustment value specifically includes: respectively inputting the first image into a first FFCC model, a second FFCC model, a third FFCC model and a fourth FFCC model for calculation to obtain a first chromaticity coordinate and a first covariance matrix, a second chromaticity coordinate and a second covariance matrix, a third chromaticity coordinate and a third covariance matrix, and a fourth chromaticity coordinate and a fourth covariance matrix; calculating the weight value f of the first FFCC model according to the brightness value of the first image 1 And a weight value f of a second FFCC model 2 (ii) a Based on f 1 Calculating to obtain a first adjusting value according to the first covariance matrix and the first chromaticity coordinate; based on f 2 Calculating a second adjustment value according to the second covariance matrix and the second chromaticity coordinate; calculating D of the first image light source based on the first adjusting value and the second adjusting value uv (ii) a According to D of the first image light source uv Calculating the weight value f of the third FFCC model 3 And a weight value f of the fourth FFCC model 4 (ii) a Based on f 3 The third covariance matrix and the third chromaticity coordinateCalculating to obtain a third regulating value; based on f 4 And calculating a fourth adjustment value by using the fourth covariance matrix and the fourth chromaticity coordinate.
In the above-described embodiment, the calculated first adjustment value and second adjustment value include luminance value information of the image, and the calculated third adjustment value and fourth adjustment value include image light source D uv And (4) information. Therefore, the image adjustment value calculated by the kalman filter refers to the luminance value of the image and the image D uv These two factors make the accuracy of the image adjustment value higher.
With reference to the first aspect, in one embodiment, the weight value f of the first FFCC model is calculated according to a brightness value of the first image 1 And a weight value f of a second FFCC model 2 The method specifically comprises the following steps: according to the formula
Figure BDA0003103792040000021
Obtaining a weight value f 1 (ii) a According to the formula f 2 =1-f 1 To obtain a weight value f 2 (ii) a Wherein, lv thres Is a first threshold value, x is a luminance value of the first image, lv mult For characterizing x at f 1 The rate of change in the time.
In the above embodiment, f 1 And f 2 The influence degree of the first adjusting value and the second adjusting value obtained by the first FFCC model and the second FFCC model on the adjusting value of the image under the brightness factor can be comprehensively and accurately reflected, so that when the electronic equipment shoots the image under different brightness environments, the Kalman filter can calculate the adjusting value with extremely high image accuracy.
With reference to the first aspect, in one embodiment, based on f 1 The first covariance matrix and the first chromaticity coordinate are calculated to obtain a first adjusting value, and the method specifically comprises the following steps: the first covariance matrix is expressed according to a formula
Figure BDA0003103792040000022
Calculating to obtain an updated first covariance matrix; sigma 1 Is a first covariance matrix, sigma' 1 For the updated first co-partyA difference matrix; mixing first chromaticity coordinate and Sigma' 1 According to a formula Mu' 1 =mu 1 *(Sigma′ 1 ) -1 Calculating to obtain a first adjusting value; wherein, mu 1 Is a first chromaticity coordinate, mu' 1 Is the first adjustment value.
In the above embodiment, the first adjustment value includes f 1 The information of (1). Therefore, the first adjusting value can accurately reflect the influence degree of the adjusting value on the image under the brightness factor, so that when the electronic equipment shoots the image under different brightness environments, the Kalman filter can calculate the adjusting value with extremely high image accuracy.
With reference to the first aspect, in one embodiment, based on f 2 And calculating to obtain a second adjusting value by using the second covariance matrix and the second chromaticity coordinate, wherein the method specifically comprises the following steps: the second covariance matrix is based on the formula
Figure BDA0003103792040000023
Calculating to obtain an updated second covariance matrix; sigma 2 Is a second covariance matrix, sigma' 2 Is the updated second covariance matrix; second chromaticity coordinate and Sigma' 2 According to formula Mu' 2 =Mu 2 *(Sigma′ 2 ) -1 Calculating to obtain a second regulating value; wherein, mu 2 Is a second chromaticity coordinate, mu' 2 Is the second adjustment value.
In the above embodiment, the second adjustment value includes f 2 The information of (1). Therefore, the second adjusting value can accurately reflect the influence degree of the adjusting value on the image under the brightness factor, so that when the electronic equipment shoots the image under different brightness environments, the Kalman filter can calculate the adjusting value with extremely high image accuracy.
With reference to the first aspect, in one embodiment, D of the first image light source is calculated based on the first adjustment value and the second adjustment value uv The method specifically comprises the following steps: according to the formula Mu '= Mu' 1 *(Sigma′ 1 ) -1 +Mu′ 2 *(Sigma′ 2 ) -1 Calculating to obtain the fused chromaticity coordinateMarking; mu 'is the fused chromaticity coordinate, mu' 1 Is a first regulation value, mu' 2 Is second adjustment value, sigma' 1 Is an updated first covariance matrix, sigma' 2 Is the updated second covariance matrix; calculating to obtain D of first image light source based on Mu uv
In the above embodiments, the electronic device calculates D of the image light source uv The electronic device can use the D of the image light source uv And the brightness value of the image as a reference factor for calculating the image adjustment value. Therefore, when the images are shot in different shooting environments, the electronic equipment can calculate the high-accuracy adjusting value of the images through the Kalman filter, and the application scene of the electronic equipment for adjusting the white balance of the images is expanded.
With reference to the first aspect, in one embodiment, the light source is based on D of the first image light source uv Calculating the weight value f of the third FFCC model 3 And a weight value f of a fourth FFCC model 4 The method specifically comprises the following steps: according to the formula
Figure BDA0003103792040000031
Obtaining a weight value f 3 (ii) a According to the formula f 4 =1-f 3 To obtain a weight value f 4 (ii) a Wherein, duv thres Is a second threshold value, and y is D of the first image light source uv ,Duv mult For characterizing y at f 3 Or the rate of change in the temperature.
In the above embodiment, f 3 And f 4 The influence degree of the third adjusting value and the fourth adjusting value obtained by the third FFCC model and the fourth FFCC model on the adjusting value of the image under the light source factors can be comprehensively and accurately reflected, so that when the electronic equipment shoots the image under different light source environments, the Kalman filter can calculate the adjusting value with extremely high image accuracy.
With reference to the first aspect, in one embodiment, based on f 3 The third covariance matrix and the third chromaticity coordinate are calculated to obtain a third adjustment value, which specifically includes: the third covariance matrix is based on the formula
Figure BDA0003103792040000032
Calculating to obtain an updated third covariance matrix; sigma 3 Is a third covariance matrix, sigma' 3 Is the updated third covariance matrix; mixing third chromaticity coordinate with Sigma' 3 According to a formula Mu' 3 =Mu 3 *(Sigma′ 3 ) -1 Calculating to obtain a third regulating value; wherein, mu 3 Is a third chromaticity coordinate, mu' 3 Is the third adjustment value.
In the above embodiment, the third adjustment value includes f 3 The information of (1). Therefore, the third adjusting value can accurately reflect the influence degree of the adjusting value on the image under the light source factors, so that when the electronic equipment shoots the image under different light source environments, the Kalman filter can calculate the adjusting value with extremely high image accuracy.
With reference to the first aspect, in one embodiment, based on f 4 And calculating to obtain a fourth adjusting value by using the fourth covariance matrix and the fourth chromaticity coordinate, wherein the fourth adjusting value specifically comprises the following steps: the fourth covariance matrix is expressed according to the formula
Figure BDA0003103792040000033
Calculating to obtain an updated fourth covariance matrix; sigma 4 Is a fourth covariance matrix, sigma' 4 Is the updated fourth covariance matrix; fourth chroma coordinate and Sigma' 4 According to formula Mu' 4 =Mu 4 *(Sigma′ 4 ) -1 Calculating to obtain a fourth regulating value; wherein, mu 4 Is a fourth chromaticity coordinate, mu' 4 Is the fourth adjustment value.
In the above embodiment, the fourth adjustment value includes f 4 The information of (1). Therefore, the fourth adjusting value can accurately reflect the degree of influence of the adjusting value on the image under the light source factors, so that when the electronic equipment shoots the image under different light source environments, the Kalman filter can calculate the adjusting value with extremely high accuracy of the image.
With reference to the first aspect, in one embodiment, the first adjustment value, the second adjustment value, and the third adjustment value are adjustedInputting the pitch value and the fourth adjustment value into a Kalman filter for calculation to obtain an adjustment value of the first image, wherein the method specifically comprises the following steps: according to the formula Mu '= Mu' 1 *(Sigma′ 1 ) -1 +Mu′ 2 *(Sigma′ 2 ) -1 +Mu′ 3 *(Sigma′ 3 ) -1 +Mu′ 4 *(Sigma′ 4 ) -1 Calculating to obtain an adjusting value of the first image; where Mu "is the adjusted value of the first image, mu' 1 Is the first adjusted value of Mu' 2 Is a second regulation value, mu' 3 Is a third regulation value, mu' 4 Is fourth adjustment value, sigma' 1 Is an updated first covariance matrix, sigma' 2 Is an updated second covariance matrix, sigma' 3 Is an updated third covariance matrix, sigma' 4 Is the updated fourth covariance matrix.
In the above embodiment, since the first FFCC model and the second FFCC model can accurately calculate the adjustment values of the images in different luminance environments, the third FFCC model and the fourth FFCC model can calculate the adjustment values of the images in different light source environments. Therefore, the accuracy of the adjustment values obtained by calculating and fusing the first to fourth adjustment values through the above formula is extremely high, so that the electronic device can accurately adjust the white balance of the image through the adjustment values.
In a second aspect, an embodiment of the present application provides an electronic device, including: one or more processors and memory; the memory coupled with the one or more processors, the memory to store computer program code, the computer program code including computer instructions, the one or more processors to invoke the computer instructions to cause the electronic device to perform: respectively inputting the first image into a first FFCC model, a second FFCC model, a third FFCC model and a fourth FFCC model for calculation to obtain a first adjusting value, a second adjusting value, a third adjusting value and a fourth adjusting value, wherein the first adjusting value is obtained through the first FFCC model, the second adjusting value is obtained through the second FFCC model, the third adjusting value is obtained through the third FFCC model, and the fourth adjusting value is obtained through the fourth FFCC modelOf (1); inputting the first adjustment value, the second adjustment value, the third adjustment value and the fourth adjustment value into a Kalman filter for calculation to obtain an adjustment value of a first image, wherein the adjustment value of the first image is used for adjusting the color of the first image; the first FFCC model is used for identifying a model corresponding to the image brightness value larger than a first threshold value, the second FFCC model is used for identifying a model corresponding to the image brightness value smaller than or equal to the first threshold value, and the third FFCC model is used for identifying the D of the image light source uv The model corresponding to the value greater than the second threshold value, and the fourth FFCC model is used for identifying the D of the image light source uv And the model is smaller than or equal to the second threshold value.
In the above embodiment, the electronic device passes the image through the FFCC models (the first FFCC model and the second FFCC model) of the LV (luminance value) type and the D (luminance value) type, respectively uv The FFCC model is calculated to obtain adjustment values (first adjustment value and second adjustment value) related to the LV of the image and the image light source D uv Associated adjustment values (third adjustment value and fourth adjustment value), adjustment values to be associated with the LV of the image and with the image light source D uv And performing fusion calculation on the related adjustment value through a Kalman filter to obtain an adjustment value (RGB _ GAIN) of the image. Since the RGB _ GAIN is based on the image LV and the image light source D uv As a result, the accuracy of the RGB _ GAIN is extremely high. When the electronic device uses the RGB _ GAIN to adjust RGB of each pixel of the image, the color difference of the image caused by the color temperature of the light source can be accurately corrected, namely, the white balance of the image can be accurately adjusted.
With reference to the second aspect, in one embodiment, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: respectively inputting the first image into a first FFCC model, a second FFCC model, a third FFCC model and a fourth FFCC model for calculation to obtain a first chromaticity coordinate and a first covariance matrix, a second chromaticity coordinate and a second covariance matrix, a third chromaticity coordinate and a third covariance matrix, and a fourth chromaticity coordinate and a fourth covariance matrix; calculating the weight value f of the first FFCC model according to the brightness value of the first image 1 And a weight value f of a second FFCC model 2 (ii) a Based on f 1 Calculating to obtain a first adjusting value according to the first covariance matrix and the first chromaticity coordinate; based on f 2 Calculating a second adjustment value according to the second covariance matrix and the second chromaticity coordinate; calculating D of the first image light source based on the first adjusting value and the second adjusting value uv (ii) a According to D of the first image light source uv Calculating the weight value f of the third FFCC model 3 And a weight value f of the fourth FFCC model 4 (ii) a Based on f 3 Calculating to obtain a third adjustment value according to the third covariance matrix and the third chroma coordinate; based on f 4 And calculating to obtain a fourth adjustment value according to the fourth covariance matrix and the fourth chromaticity coordinate.
In the above-described embodiment, the first adjustment value and the second adjustment value calculated by the electronic device include luminance value information of the image, and the third adjustment value and the fourth adjustment value calculated include the image light source D uv And (4) information. Therefore, the image adjustment value calculated by the electronic device through the kalman filter refers to the brightness value of the image and the image D uv These two factors make the accuracy of the image adjustment value higher.
With reference to the second aspect, in one embodiment, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: according to the formula
Figure BDA0003103792040000041
Obtaining a weight value f 1 (ii) a According to the formula f 2 =1-f 1 To obtain a weight value f 2 (ii) a Wherein, lv thres Is a first threshold value, x is a luminance value of the first image, lv mult For characterizing x at f 1 The rate of change in the time.
In the above embodiment, f 1 And f 2 The influence degree of the first adjusting value and the second adjusting value obtained through the first FFCC model and the second FFCC model on the adjusting value of the image under the brightness factor can be comprehensively and accurately reflected, so that when the electronic equipment shoots the image under different brightness environments, the Kalman filter can calculate the adjusting value with extremely high image accuracy.
With reference to the second aspect, in one embodiment, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: the first covariance matrix is expressed according to a formula
Figure BDA0003103792040000051
Calculating to obtain an updated first covariance matrix; sigma 1 Is the first covariance matrix, sigma' 1 Is the updated first covariance matrix; mixing first chromaticity coordinate and Sigma' 1 According to a formula Mu' 1 =Mu 1 *(Sigma′ 1 ) -1 Calculating to obtain a first adjusting value; wherein, mu 1 Is a first chromaticity coordinate, mu' 1 Is the first adjustment value.
In the above embodiment, the first adjustment value includes f 1 The information of (1). Therefore, the first adjusting value can accurately reflect the influence degree of the adjusting value on the image under the brightness factor, so that when the electronic equipment shoots the image under different brightness environments, the Kalman filter can calculate the adjusting value with extremely high image accuracy.
With reference to the second aspect, in one embodiment, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: the second covariance matrix is based on the formula
Figure BDA0003103792040000052
Calculating to obtain an updated second covariance matrix; sigma 2 Is a second covariance matrix, sigma' 2 Is the updated second covariance matrix; second chroma coordinates and Sigma' 2 According to formula Mu' 2 =Mu 2 *(Sigma′ 2 ) -1 Calculating to obtain a second regulating value; wherein, mu 2 Is a second chromaticity coordinate, mu' 2 Is the second adjustment value.
In the above embodiment, the second adjustment value includes f 2 The information of (a). Therefore, the second adjusting value can accurately reflect the influence degree of the adjusting value on the image under the brightness factor, so that when the electronic equipment is in different brightness environmentsWhen the image is shot, the Kalman filter can calculate the adjustment value with extremely high accuracy of the image.
With reference to the second aspect, in one embodiment, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: according to the formula Mu '= Mu' 1 *(Sigma′ 1 ) -1 +Mu′ 2 *(Sigma′ 2 ) -1 Calculating to obtain a fused chromaticity coordinate; mu 'is the chromaticity coordinate after fusion, mu' 1 Is a first regulation value, mu' 2 Is second adjustment value, sigma' 1 Is an updated first covariance matrix, sigma' 2 Is the updated second covariance matrix; calculating to obtain D of first image light source based on Mu uv
In the above embodiments, the electronic device calculates D of the image light source uv The electronic device can use the D of the image light source uv And the brightness value of the image as a reference factor for calculating the image adjustment value. Therefore, when the images are shot in different shooting environments, the electronic equipment can calculate the high-accuracy adjusting value of the images through the Kalman filter, and the application scene of the electronic equipment for adjusting the white balance of the images is expanded.
With reference to the second aspect, in one embodiment, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: according to the formula
Figure BDA0003103792040000053
Obtaining a weight value f 3 (ii) a According to the formula f 4 =1-f 3 To obtain a weight value f 4 (ii) a Wherein, duv thres Is a second threshold value, y is D of the first image light source uv ,Duv mult For characterizing y at f 3 The rate of change in the time.
In the above embodiment, f 3 And f 4 The influence degree of the third adjusting value and the fourth adjusting value obtained by the third FFCC model and the fourth FFCC model on the adjusting value of the image under the light source factors can be comprehensively and accurately reflected, so that the image is shot when the electronic equipment is in different light source environmentsWhen the image is taken, the Kalman filter can calculate the adjustment value with extremely high accuracy of the image.
With reference to the second aspect, in one embodiment, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: the third covariance matrix is based on the formula
Figure BDA0003103792040000054
Calculating to obtain an updated third covariance matrix; sigma 3 Is a third covariance matrix, sigma' 3 Is the updated third covariance matrix; third chroma coordinates and Sigma' 3 According to a formula Mu' 3 =Mu 3 *(Sigma′ 3 ) -1 Calculating to obtain a third regulating value; wherein, mu 3 Is a third chromaticity coordinate, mu' 3 Is the third adjustment value.
In the above embodiment, the third adjustment value includes f 3 The information of (1). Therefore, the third adjusting value can accurately reflect the influence degree of the adjusting value of the image under the light source factors, so that when the electronic equipment shoots the image under different light source environments, the Kalman filter can calculate the adjusting value with extremely high image accuracy.
With reference to the second aspect, in one embodiment, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: the fourth covariance matrix is expressed according to a formula
Figure BDA0003103792040000061
Calculating to obtain an updated fourth covariance matrix; sigma 4 Is a fourth covariance matrix, sigma' 4 Is the updated fourth covariance matrix; fourth chroma coordinate and Sigma' 4 According to a formula Mu' 4 =Mu 4 *(Sigma′ 4 ) -1 Calculating to obtain a fourth regulating value; wherein, mu 4 Is a fourth chromaticity coordinate, mu' 4 Is the fourth adjustment value.
In the above embodiment, the fourth adjustment value includes f 4 The information of (1). Therefore, the fourth adjustment value can be accurateThe influence degree of the adjustment value of the image under the light source factors is reflected, so that when the electronic equipment shoots the image under different light source environments, the Kalman filter can calculate the adjustment value with extremely high image accuracy.
With reference to the second aspect, in one embodiment, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: according to the formula Mu '= Mu' 1 *(Sigma′ 1 ) -1 +Mu′ 2 *(Sigma′ 2 ) -1 +Mu′ 3 *(Sigma′ 3 ) -1 +Mu′ 4 *(Sigma′ 4 ) -1 Calculating to obtain an adjusting value of the first image; wherein Mu 'is a value of adjustment of the first picture, mu' 1 Is a first regulation value of Mu' 2 Is a second regulation value, mu' 3 Is a third regulation value, mu' 4 Is fourth adjustment value, sigma' 1 Is an updated first covariance matrix, sigma' 2 Is an updated second covariance matrix, sigma' 3 Is an updated third covariance matrix, sigma' 4 Is the updated fourth covariance matrix.
In the above embodiment, since the first FFCC model and the second FFCC model can accurately calculate the adjustment values of the images in different luminance environments, the third FFCC model and the fourth FFCC model can calculate the adjustment values of the images in different light source environments. Therefore, the accuracy of the adjustment values obtained by calculating and fusing the first to fourth adjustment values through the above formula is extremely high, so that the electronic device can accurately adjust the white balance of the image through the adjustment values.
In a third aspect, the present application provides a chip system, which is applied to an electronic device, and the chip system includes one or more processors, and the processor is configured to invoke computer instructions to cause the electronic device to perform the method according to the first aspect or any one implementation manner of the first aspect.
With reference to the third aspect, in one embodiment, the image is passed through an LV (luminance value) type FFCC model (first FFCC model and second FFCC model) and D, respectively uv The FFCC model is calculated to obtain adjustment values (first adjustment value and second adjustment value) related to the LV of the image and the image light source D uv The associated adjustment values (third adjustment value and fourth adjustment value), the adjustment value to be associated with the LV of the image and the image light source D uv And performing fusion calculation on the related adjustment value through a Kalman filter to obtain an adjustment value (RGB _ GAIN) of the image. Since the RGB _ GAIN is based on the image LV and the image light source D uv As a result, the accuracy of the RGB _ GAIN is extremely high. When the electronic device uses the RGB _ GAIN to adjust RGB of each pixel of the image, the color difference of the image caused by the color temperature of the light source can be accurately corrected, namely, the white balance of the image can be accurately adjusted.
In a fourth aspect, the present application provides a computer program product containing instructions, which when run on an electronic device, causes the electronic device to perform the method according to the first aspect or any one of the implementation manners of the first aspect.
With reference to the fourth aspect, in one embodiment, the image is passed through an LV (luminance value) type FFCC model (first FFCC model and second FFCC model) and D, respectively uv The FFCC model is calculated to obtain adjustment values (first adjustment value and second adjustment value) related to the LV of the image and the image light source D uv The associated adjustment values (third adjustment value and fourth adjustment value), the adjustment value to be associated with the LV of the image and the image light source D uv And performing fusion calculation on the related adjustment value through a Kalman filter to obtain an adjustment value (RGB _ GAIN) of the image. Since the RGB _ GAIN is based on the image LV and the image light source D uv As a result, the accuracy of the RGB _ GAIN is extremely high. When the electronic device uses the RGB _ GAIN to adjust RGB of each pixel of the image, the color difference of the image caused by the color temperature of the light source can be accurately corrected, namely, the white balance of the image can be accurately adjusted.
In a fifth aspect, the present application provides a computer-readable storage medium, which includes instructions that, when executed on an electronic device, cause the electronic device to perform the method according to the first aspect or any one of the implementation manners of the first aspect.
With reference to the fifth aspect, in one embodiment, the image is passed through an LV (luminance value) type FFCC model (first FFCC model and second FFCC model) and D, respectively uv The FFCC model is calculated to obtain the adjustment values (first and second adjustment values) related to the LV of the image and the image light source D uv The associated adjustment values (third adjustment value and fourth adjustment value), the adjustment value to be associated with the LV of the image and the image light source D uv And performing fusion calculation on the related adjustment value through a Kalman filter to obtain an adjustment value (RGB _ GAIN) of the image. Since the RGB _ GAIN is based on the image LV and the image light source D uv As a result, the accuracy of the RGB _ GAIN is extremely high. When the electronic device uses the RGB _ GAIN to adjust RGB of each pixel of the image, the color difference of the image caused by the color temperature of the light source can be accurately corrected, namely, the white balance of the image can be accurately adjusted.
In a sixth aspect, an embodiment of the present application provides an electronic device, including: the system comprises a touch screen, a camera, one or more processors and one or more memories; the one or more processors are coupled to the touch screen, the camera, the one or more memories, and the one or more memories are configured to store computer program code, which includes computer instructions that, when executed by the one or more processors, cause the electronic device to perform the method of the first aspect or any one of the implementation manners of the first aspect.
With reference to the sixth aspect, in one embodiment, the image is passed through an LV (luminance value) type FFCC model (first FFCC model and second FFCC model) and D, respectively uv The FFCC model is calculated to obtain adjustment values (first adjustment value and second adjustment value) related to the LV of the image and the image light source D uv Associated adjustment values (third adjustment value and fourth adjustment value), adjustment values to be associated with the LV of the image and with the image light source D uv And performing fusion calculation on the related adjustment value through a Kalman filter to obtain an adjustment value (RGB _ GAIN) of the image. Due to the RGB _ GAIN is based on image LV and image light source D uv As a result, the accuracy of the RGB _ GAIN is extremely high. When the electronic device uses the RGB _ GAIN to adjust RGB of each pixel of the image, the color difference of the image caused by the color temperature of the light source can be accurately corrected, namely, the white balance of the image can be accurately adjusted.
Drawings
Fig. 1A to fig. 1D are application scene diagrams of an AI automatic white balance algorithm provided in an embodiment of the present application;
fig. 2 is a system architecture diagram of an AI auto white balance algorithm provided in an embodiment of the present application;
fig. 3 is a schematic hardware structure diagram of the electronic device 100 provided in the embodiment of the present application;
fig. 4 is a schematic diagram of an architecture of FFCC model training provided in an embodiment of the present application;
FIG. 5 is a schematic flow chart of RGB _ GAIN for outputting fused image light source by AI AWB algorithm according to the present application;
fig. 6 is a uv chromaticity coordinate diagram provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," and the like in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not necessarily for describing a particular order. Furthermore, the terms "comprising" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process may comprise a sequence of steps or elements, or may alternatively comprise steps or elements not listed, or may alternatively comprise other steps or elements inherent to such process, method, article, or apparatus.
Only some, but not all, of the material relevant to the present application is shown in the drawings. Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but could have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
As used in this specification, the terms "component," "module," "system," "unit," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a unit may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a distribution between two or more computers. In addition, these units can execute from various computer readable media having various data structures stored thereon. The units may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., from a second unit of data interacting with another unit in a local system, distributed system, and/or across a network.
Some terms referred to in the embodiments of the present application are explained below.
(1) Planck locus: an object that is neither reflective nor fully transmissive to radiation, but which is capable of absorbing all radiation falling on it, is called a black body or a full radiator. When the black body is continuously heated, the maximum value of the relative spectral power distribution of the black body moves to the short wave direction, the corresponding light colors are changed according to the sequence of red, yellow, white and blue, and the light color change corresponding to the black body forms an arc-shaped track on a chromaticity coordinate graph at different temperatures, namely a black body track or a Planck track.
(2) Correlated Color Temperature (CCT): refers to the temperature of the black body radiator that is closest to the color of the stimulus with the same brightness, expressed in K's temperature, and is used to describe a measure of the color of light located near the planckian locus. Other light sources than the thermal radiation light source have linear spectra whose radiation characteristics are greatly different from those of a black body, so that the light colors of these light sources do not necessarily fall exactly on the black body locus on a chromaticity diagram, and for such light sources, the color characteristics of the light sources are usually described by CCT.
(3)D uv :D uv Refers to the distance, D, from the uv chromaticity coordinates of the test light source to the closest point on the Planckian Locus uv Information characterizing the color shift (green or pink) and direction of the chromaticity coordinates of the test light source from the planckian locus.
(4) RGB: RGB is a three-dimensional vector (R, G, B). Therein, R, G, B represents the amplitude on three color channels, red (Red), green (Green), and Blue (Blue), respectively.
(5) RGB _ GAIN: RGB _ GAIN is a three-dimensional vector (GAIN _ R, GAIN _ G, GAIN _ B), also called RGB GAIN values. Wherein, GAIN _ R, GAIN _ G, GAIN _ B respectively represent the ratio of Red (Red), green (Green) and Blue (Blue) color channels, when RGB _ GAIN of the image light source is multiplied by RGB of the image light source, a three-dimensional vector (R _ GAIN _ R, G _ GAIN _ G, B _ GAIN _ B) is obtained. Wherein R × GAIN _ R = G × GAIN _ G = B × GAIN _ B.
(6) Luminance Value (Lighting Value, LV): for estimating the ambient brightness.
(7) XYZ space: in the embodiment of the present application, RGB is DeviceRGB, and the DeviceRGB color space is a device-dependent color space, that is: the understanding of RGB is different for different devices. Therefore, deviceRGB is not suitable for calculating parameters such as luminance values. Calculating LV isoparameters requires converting DeviceRGB color space to device independent XYZ space, i.e.: RGB is converted to XYZ.
A common method for converting the RGB color space into the XYZ space is: under the environment of different light sources (typical light sources comprise A, H, U, TL84, D50, D65, D75 and the like), a Color Correction Matrix (CCM) with the size of 3*3 is calibrated, and CCMs of different light sources are stored in a memory of the electronic device, and the formula is as follows:
Figure BDA0003103792040000091
and obtaining a three-dimensional vector corresponding to the image in an XYZ space, thereby realizing the conversion from the RGB space to the XYZ space. In the shooting process, the corresponding light source is matched according to the white balance reference point in the image, and the CCM corresponding to the light source is selected. If the RGB with the white balance reference point is between two light sources (e.g., the RGB of the image falls between D50 and D65), CCM can be obtained by bilinear interpolation of D50 and D65. For example, the color correction matrix for D50 is CCM 1 Correlated color temperature of CCT 1 The color correction matrix for D60 is CCM 2 Correlated color temperature of CCT 2 Correlated color temperature of image light source is CCT a . The electronic device may be configured according to the formula:
Figure BDA0003103792040000092
calculating a proportional value g, based on g, according to the formula:
CCM=g*CCM 1 +(1-g)*CCM 2
the CCM of the image can be calculated.
(8) Fast Fourier Color Constancy (FFCC) model: and performing convolution calculation on a uv chromaticity diagram of the image by using a fast Fourier algorithm, wherein the position of the maximum response corresponds to the uv chromaticity coordinate of the light source, so as to acquire the RGB or RGB _ GAIN of the light source of the image or the uv chromaticity coordinate of the light source.
Because the color of an object changes due to the color of the projected light, photos shot under different scenes show different color temperatures, and a CCD circuit or a CMOS circuit in a digital camera or a mobile phone camera cannot correct the change of the color of a light source. Therefore, in order to prevent the color shift of the captured image, it is often necessary to process the image by a white balance algorithm built in the digital camera or the mobile phone to correct the color shift of the image.
The white balance is to adjust the signal gain corresponding to the color temperature in the camera to offset the color cast of the shot image under the condition of different color temperatures, so that the white balance is closer to the visual habit of human eyes. Because the camera is not as intelligent as the human eye (the human eye automatically corrects colors when seeing objects), the camera sets a range, if the average color value of the shot picture falls within the set range, correction is not needed, and if the average color value deviates from the set range, parameters are adjusted to fall within the range. This is the white balance correction process. For example, the three color values of red, green, and blue (RGB of the light source) of the light source of the image to be adjusted are (25, 50, 150), the ratio of red, green, and blue of the light source is 1. Therefore, the color of the image is bluish compared to the true observation by the human eye. In order to solve the problem of color cast of the image, the image is processed by an AWB algorithm in the mobile phone, and RGB _ GAIN of the light source of the image is output, so that the ISP can adjust the RGB value of the image according to the RGB _ GAIN, thereby correcting the color cast generated by the image. Where RGB _ GAIN is 3 GAIN values. For example, the light source RGB _ GAIN is (6,3,1), the primary light source RGB is (25, 50, 150), and the primary light source RGB _ GAIN is multiplied by the light source RGB _ GAIN to obtain the primary light source adjusted RGB (150, 150, 150), that is, the RGB of the white point in the image is equal, thereby eliminating color shift.
In order to solve the problem that the white balance application scene is limited, the application provides an AI AWB algorithm, and the color adjusting values output by a plurality of FFCC models are respectively obtained by processing the shot images in the FFCC models. And the color adjustment values are fused through a Kalman filter to obtain a high-accuracy light source RGB _ GAIN, so that an ISP (Internet service provider) in the electronic equipment can compensate the color cast of the image according to the high-accuracy light source RGB _ GAIN, and the color of the image is consistent with that observed by human eyes.
Next, an application scenario of the AI AWB algorithm is described with reference to fig. 1A to 1D.
Fig. 1A is a diagram of a photographing interface of the electronic device 100, in which a photographing control 1011 and a preview control 1012 are included. When the electronic device 100 detects an input operation (e.g., single click) to the photographing control 1011, the electronic device 100 starts photographing and displays a photographing processing interface as shown in fig. 1B. As shown in fig. 1B, the photo processing interface displays a prompt word "please hold a steady phone while taking a photo", and after the photo is completed, the electronic device 100 detects an input operation (e.g., a single click) with respect to the preview control 1012, and then displays a photo preview interface as shown in fig. 1C.
When the user clicks the photographing control 1011, the electronic device starts to photograph, and in the process of displaying the photographing processing interface of fig. 1B, the electronic device 100 adjusts the white balance of the photographed image. The specific process is as follows: the electronic equipment calculates and processes the image through an AI AWB algorithm to obtain an RGB GAIN value (RGB _ GAIN) of the light source of the image. And then multiplying the RGB of each pixel of the image by the RGB gain value to realize white balance adjustment of the image. As shown in fig. 1D, color cast occurs in the image 1 (the color of the image 1 is grey cast) due to different color temperatures of the light sources of the shooting environment, and after RGB of 70 pixels of the image 1 are multiplied by the RGB gain value of the image 1 calculated by AI AWB, the color compensation for the image 1 is realized, and the overall color of the image 1 after the color compensation is no longer grey cast and is consistent with the color actually observed by human eyes.
Fig. 1A to 1D describe an application scenario of the AI AWB algorithm, and a system architecture diagram of the AI AWB algorithm is described below with reference to fig. 2. Referring to fig. 2, fig. 2 is a system architecture diagram of RGB _ GAIN after image-light source fusion output by AI AWB algorithm according to an embodiment of the present application.
As shown in FIG. 2, the electronic device passes the images through the FFCC in the system of FIG. 2, respectively 1 Model, FFCC 2 Model, FFCC 3 Model and FFCC 4 Processing in model, FFCC 1 model-FFCC 4 The model outputs a first adjustment value, a second adjustment value, a third adjustment value and a fourth adjustment value of the image 1, respectively. The electronic device then sends the FFCC 1 model-FFCC 4 And fusing the adjustment value output by the model through a Kalman filter to obtain a fused light source RGB _ GAIN, so that the ISP adjusts RGB of each pixel in the image 1 according to the RGB _ GAIN to compensate color cast of the image and realize white balance adjustment of the image.
Next, the structure of the electronic apparatus 100 will be described. Referring to fig. 3, fig. 3 is a schematic hardware structure diagram of an electronic device 100 according to an embodiment of the present disclosure. The electronic device 100 may be a mobile phone, a tablet computer, a remote controller, or a wearable electronic device with a wireless communication function (e.g., a smart watch, AR glasses), and so on.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processor (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), among others. The different processing units may be separate devices or may be integrated into one or more processors.
In the embodiment of the present application, the ISP is mainly used for the AI-based AWB algorithm (including FFCC) 1 model-FFCC 4 Model) output to adjust the RGB values of each pixel of the image, thereby achieving white balance of the image.
The controller can be a neural center and a command center of the electronic device. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
The electronic device implements display functionality via the GPU, the display screen 194, and the application processor, among other things. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device selects a frequency point, the digital signal processor is used for performing fourier transform and the like on the frequency point energy.
Video codecs are used to compress or decompress digital video. The electronic device may support one or more video codecs. In this way, the electronic device can play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The data storage area can store data (such as audio data, phone book and the like) created in the using process of the electronic device. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The pressure sensor 180A is used for sensing a pressure signal, and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A can be of a variety of types, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The electronics determine the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic device detects the intensity of the touch operation according to the pressure sensor 180A. The electronic device may also calculate the position of the touch from the detection signal of the pressure sensor 180A. In some embodiments, the touch operations that are applied to the same touch position but different touch operation intensities may correspond to different operation instructions. For example: and when the touch operation with the touch operation intensity smaller than the first pressure threshold value acts on the short message application icon, executing an instruction for viewing the short message. And when the touch operation with the touch operation intensity larger than or equal to the first pressure threshold value acts on the short message application icon, executing an instruction of newly building the short message.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device at a different position than the display screen 194.
The camera 194 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB value, YUV format and the like. In some embodiments, the first terminal 10 may include 1 or N cameras 194, N being a positive integer greater than 1. In the embodiment of the present application, RGB of the image light source may be acquired by a CCD circuit or a CMOS circuit in the camera 194.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (FLED), a miniature, a Micro-oeld, a quantum dot light-emitting diode (QLED), and the like. In some embodiments, the electronic device may include 1 or N display screens 194, N being a positive integer greater than 1.
In the embodiment of the present application, the display screen 194 is used for displaying the image subjected to the white balance processing on the electronic device mentioned in the embodiment of the present application.
The embodiment of fig. 2 describes a system architecture diagram of RGB _ GAIN for outputting an image light source through an AI AWB algorithm, and the FFCC model in fig. 2 may be a model trained in advance on a computer or a model trained in advance on other electronic devices, which is not limited in the embodiment of the present application. The training process of the FFCC model will be described in detail below with reference to fig. 4. Referring to fig. 4, fig. 4 is a schematic diagram of an architecture of FFCC model training according to an embodiment of the present disclosure. In FIG. 4, according to LV and D uv FFCC models are divided into two broad categories. The LV is a light source brightness Value (Lighting Value) of the image, which is used to estimate the brightness of the image capturing environment, and the image capturing environment can be divided into different scenes based on the LV Value. For example, with LV equal to 10 as a critical point, a shooting scene is generally divided into an indoor scene (LV less than 10) and an outdoor scene (LV greater than 10). If the LV is equal to 100 as the critical point, the shooting scene can be divided into a night scene (LV less than 100) and a day scene (LV greater than 100). In the same way, D uv The light source used to characterize the shooting environment is either an artificial light source (i.e. light that does not exist in nature) or a natural light source, generally denoted as D uv Equal to 0 as a critical point, the light source of the shooting environment can be divided into green light (D) uv Greater than 0) and pink (D) uv Less than 0). The embodiment of the application classifies the FFCC model into LV typeAnd D uv Type, for LV type FFCC model and D, respectively uv The FFCC model is trained. LV and D of the image uv As a reference factor for adjusting the RGB of the image, only a single variable is used as a reference factor for adjusting the white balance of the image (only the LV of the reference image or only the D of the reference image) compared with the conventional method uv ) In the embodiment of the application, the accuracy of RGB _ GAIN of the output light source under different shooting scenes through the AI AWB algorithm is higher, and the applied shooting scenes are wider.
Based on LV and D uv After the FFCC model is divided, first threshold values LV' and D of LV are set uv Of a second threshold value D' uv . Wherein, LV 'and D' uv The method can be obtained based on experimental data or historical data, and the embodiment of the application is not limited. In the embodiment of the application, LV 'is equal to 100,D' uv Equal to 0 is exemplified for the description. Computer terminal is based on LV 'and D' uv Determining FFCCs to be trained 1 model-FFCC 4 And (4) modeling. Wherein, FFCC 1 Adjustment value of image with model output LV greater than 100, FFCC 2 Adjustment values of the image for which the model output LV is less than or equal to 100, FFCC 3 Model output D uv Adjustment value of image greater than 0, FFCC 4 Model output D uv An adjustment value of the image less than or equal to 0. Then, the computer uses a certain number of pictures as training samples to train the FFCC respectively 1 model-FFCC 4 And (4) modeling. The adjustment value output by the FFCC model may be RGB of the image light source, may also be RGB _ GAIN of the image light source, and may also be uv chromaticity coordinates of the image light source, which is not limited in this embodiment of the present application.
This embodiment trains FFCC with the computer side 1 Model, FFCC 1 The adjustment value of the model output is RGB of the image light source as an example, and the description is given. The computer end takes a certain number (for example, more than 2500) of pictures with LV more than 100 as FFCC 1 Training samples of the model, and obtaining an RGB Label value of each training sample. Wherein, the RGB Label value is collected by placing a 24-color card at the position irradiated by a main light source in a picture and using an automatic matting tool to matte 20 th to 23 rd gray of the 24-color cardThe RGB Label value, which is obtained from the average of the pixels in the card area, is usually used as the real RGB of the corresponding light source of the picture to guide the training of the FFCC model. Taking the picture and the RGB Label _1 value thereof as FFCC at the computer end 1 Input to the model, FFCC 1 The light source RGB _1 of the picture is calculated by the model, and the light source RGB _1 of the picture is compared with the RGB Label _1 value of the picture, so that a deviation value Fn is obtained 1 . Wherein, the Fn 1 For characterizing FFCC 1 And (3) calculating the deviation degree of the RGB _1 value of the light source and the RGB Label _1 value of the picture by using the model. Then, FFCC 1 The model is based on Fn 1 Adjusting its internal convolution kernel parameters to let FFCC 1 The RGB of the model output light source is continuously close to the RGB Label _1 value of the picture, thereby completing the FFCC 1 Training of the model such that the current orientation is FFCC 1 FFCC when model inputs image with LV value greater than 100 1 The model can output RGB _1 of the light source with extremely high image accuracy, so that the ISP can adjust the RGB of the image according to the RGB adjustment value of the image light source, and the adjustment of the white balance of the image is realized.
Similarly, a certain number of LVs less than or equal to 100 and D are used respectively uv Greater than 0, D uv Training FFCC by taking pictures smaller than or equal to 0 as training samples 2 model-FFCC 4 Model and will train the FFCC 1 model-FFCC 4 The model is installed in the electronic device.
The conventional AI AWB algorithm adjusts the white balance of an image mainly by taking the image as an input of a single FFCC model, for example, when the LV of an image light source is greater than 100, the image is taken as the FFCC 1 Input of model and based on FFCC 1 The adjustment value output by the model is used for the ISP to adjust the RGB of the image. Due to the fact that the single FFCC model is limited in application shooting scenes, when the electronic equipment shoots in scenes with insufficient light or in scenes such as cloudy weather, the accuracy of the adjustment value output by the single FFCC model is often low, and the color cast problem of the image subjected to white balance adjustment is not obviously improved. For example, in cloudy weather, the white-balanced image is still bluish.
The method aims to solve the problem that the accuracy of an adjustment value output by a traditional AI AWB algorithm is low, so that the effect of white balance adjustment of an ISP on an image is poor. The embodiment of the application provides an AI AWB algorithm. And obtaining the adjustment values output by the plurality of FFCC models by setting the plurality of trained FFCC models as models for processing images and respectively taking the images as the input of the plurality of FFCC models. And fusing the adjusting values through a Kalman filter to obtain RGB _ GAIN of the fused image light source. Wherein the fused RGB _ GAIN is a high accuracy RGB GAIN value. The ISP adjusts RGB of each pixel in the image through the RGB _ GAIN (for example, multiplies RGB of each pixel of the image by the RGB _ GAIN), so as to adjust the white balance of the image, so as to correct color cast of the image.
Referring to fig. 5, fig. 5 is a schematic flow chart of RGB _ GAIN for outputting the fused image light source by the AI AWB algorithm according to the embodiment of the present application. This flow is explained below with reference to fig. 5:
step S501: electronic equipment calculates weight value f of first FFCC model based on LV of image 1 And a weight value f of a second FFCC model 2
Specifically, the image is a first image, and the first FFCC model is the FFCC trained in the embodiment of fig. 4 1 A second FFCC model, wherein the second FFCC model is the FFCC trained in the embodiment of FIG. 4 2 And (4) modeling. In the embodiment of fig. 4, the first threshold LV 'is set as a threshold for distinguishing between high LV and low LV, and the setting of LV' is different and the setting of the imaging scene is also different. For example, when LV' is equal to 100, the shooting scene may be divided into a daytime scene (LV greater than 100) and a nighttime scene (LV less than 100). When LV' is equal to 10, the photographing scene may be divided into an outdoor scene (LV greater than 10) and an indoor scene (LV less than 10).
However, it is not accurate and comprehensive enough to judge the scene where the image is located only by the LV of the image. For example, when LV' is equal to 100, FFCC 1 The model outputs the adjustment value of the image shot in the daytime scene, FFCC 2 The model outputs adjustment values for images taken in the night scene. Due to the complexity of the shooting environment (e.g. light)Line intensity, etc.), there may be a mismatch between the LV of the image light source and the captured scene. For example, when a user takes a picture in a room with a clear lighting at night, the calculated value of the image LV may be greater than 100 due to strong lighting in the room, the electronic device may determine that the current shooting scene is daytime based on the calculated value of the image LV, and pass the image through the FFCC 1 The model is calculated to obtain the FFCC 1 The adjusted value of the model output. Actually, the current shooting scene is at night, which may cause the actual shooting scene to be different from the shooting scene determined by the electronic device, and if the ISP only uses the FFCC 1 The adjustment value output by the model is used for adjusting the RGB of the image, so that the color error of the image caused by external light cannot be completely corrected, and the improvement of the color cast of the image is not obvious.
In order to solve the above problem, the electronic device passes the images through the FFCCs respectively 1 Model and FFCC 2 The model is calculated to obtain the FFCC 1 Model and FFCC 2 The adjustment value output by the model is calculated according to the LV of the image 1 And f 2 So as to follow according to f 1 And f 2 For FFCC 1 Model and FFCC 2 And fusing the adjustment values output by the model.
When an image is taken, the electronic device calculates the LV of the image, and the electronic device may calculate the LV of the image according to formula (1), where formula (1) is as follows:
Figure BDA0003103792040000151
wherein Exposure is the Exposure time. Aperture is the Aperture size and is used for representing the light incoming amount of a lens. Iso is the sensitivity, which is a measure of the sensitivity of the film to light. Luma is the average of the images in XYZ color space, Y.
After computing the image LV, the electronic device can compute f based on equation (2) 1 Equation (2) is as follows:
Figure BDA0003103792040000152
wherein, said Lv thres The threshold for distinguishing between high and low LVs can be understood as LV' and Lv in the embodiment of FIG. 4 thres The method can be obtained from experimental data or historical empirical values, and the embodiment of the application is not limited. Said x is LV of the image, said Lv mult For characterizing f 1 Slope in the x-direction. In calculating f 1 Then, the electronic device is based on the formula f 2 =1-f 1 Calculating to obtain a weight value f 2
Step S502: and the electronic equipment calculates and processes the image through the first FFCC model and the second FFCC model respectively to obtain a first chromaticity coordinate, a second chromaticity coordinate, a first covariance matrix and a second covariance matrix.
Specifically, FFCC 1 The model can directly calculate the uv chromaticity coordinate of the image light source and output the first chromaticity coordinate Mu of the image light source 1 (u′ 1 ,v′ 1 )。
In some embodiments, the FFCC 1 The model calculates a first RGB of the image light source and outputs the first RGB, and the electronic device converts the first RGB into first chromaticity coordinates according to formula (3) and formula (4), where formula (3) and formula (4) are as follows:
Figure BDA0003103792040000153
in other embodiments, the FFCC 1 The model calculates a first RGB _ GAIN of the image light source and outputs the first RGB _ GAIN. Then, the electronic device takes the reciprocal of each element in the first RGB _ GAIN to obtain a first RGB ', and then converts the first RGB' into a first chromaticity coordinate according to the above formula (3) and formula (4).
In the same way, FFCC 2 The model may directly calculate the second chromaticity coordinate of the image and output the second chromaticity coordinate. FFCC 2 The model may also calculate a second RGB for the image light source and output the second RGB for the image. FFCC 2 The model may also calculate a second RGB _ GAIN for the image light source and output the second RGB _ GAIN. When FFCC 2 When the model outputs the second RGB or the second RGB _ GAIN, the second RGB or the second RGB _ GAIN needs to be converted into the second chromaticity coordinate Mu 2 (u′ 2 ,v′ 2 ). Second RGB or second RGB _ GAIN is converted to Mu 2 Please refer to the first RGB or the first RGB _ GAIN to be converted to Mu 1 The related description is not repeated herein.
The first covariance matrix is FFCC 1 Covariance matrix Sigma with model output size of 2x2 1 ,Sigma 1 For characterizing the FFCC 1 The model outputs the degree of reliability of the adjustment value. The second covariance matrix is FFCC 2 The size of the model output is 2x2 covariance matrix Sigma 2 ,Sigma 2 For characterizing the FFCC 2 The model outputs the degree of reliability of the adjustment value.
Step S503: the electronic device is based on the f 1 Updating the first covariance matrix to obtain an updated first covariance matrix, based on f 2 And updating the second covariance matrix to obtain the updated second covariance matrix.
Specifically, the manner in which the electronic device updates the first covariance matrix and updates the second covariance matrix may be updated via a kalman filter. The embodiment of the present application takes an example in which the electronic device updates the first covariance matrix and the second covariance matrix through a kalman filter. Electronic equipment Sigma 1 As input to the Kalman filter, the Kalman filter updates Sigma according to equation (5) 1 And outputting the updated first covariance matrix Sigma' 1 Equation (5) is as follows:
Figure BDA0003103792040000154
wherein, sigma' 1 For the updated first covariance matrix, f 1 Is FFCC 1 Weight value of the model, sigma 1 Is the first before updateA covariance matrix.
Similarly, the Kalman filter updates Sigma according to formula (6) 2 Output updated Secondary covariance matrix Sigma' 2 Equation (6) is as follows:
Figure BDA0003103792040000161
sigma 'of the above' 2 For the updated second covariance matrix, f 2 Is FFCC 2 Weight value of model, sigma 2 Is the second covariance matrix before updating.
Step S504: the electronic device calculates a first adjustment value and a second adjustment value based on the updated first covariance matrix, the updated second covariance matrix, the first chromaticity coordinate, and the second chromaticity coordinate.
Specifically, the electronic device may calculate a first adjustment value and a second adjustment value through a kalman filter, where the first adjustment value is chromaticity coordinate Mu' 1 The second regulation value is chromaticity coordinate Mu' 2 . The electronic device may calculate a first adjustment value Mu 'according to equation (7)' 1 Equation (7) is as follows:
Mu′ 1 =Mu 1 *(Sigma′ 1 ) -1 (7)
similarly, the electronic device may calculate a second adjustment value Mu 'according to equation (8)' 2 Equation (8) is as follows:
Mu′ 2 =Mu 2 *(Sigma′ 2 ) -1 (8)
step S505: the electronic equipment calculates D of the image light source based on the first adjusting value and the second adjusting value uv
Specifically, the electronic device will Sigma' 1 、Sigma′ 2 、Mu′ 1 And Mu' 2 As an input to the Kalman filter, the Kalman filter is fed with a first adjusted value Mu 'according to equation (9)' 1 And a second adjusted value Mu' 2 Performing fusion, outputting fused chromaticity coordinates Mu', formula (9) is shown asShown below:
Mu′=Mu′ 1 *(Sigma′ 1 ) -1 +Mu′ 2 *(Sigma′ 2 ) -1 (9)
then, the electronic device calculates D of the image light source according to Mu uv Calculating D of the image light source uv There are mainly two methods:
the first method comprises the following steps: as shown in fig. 6, the coordinate (u) of the point of shortest distance to Mu ' (u ', v ') on the planckian locus is acquired on the uv chromaticity coordinate graph 0 ,v 0 ) Then, D is calculated according to the formula (10) uv Equation (10) is as follows:
Figure BDA0003103792040000162
wherein, when v' -v 0 When the value is more than or equal to 0, sgn (v' -v) 0 ) =1; when v' -v 0 <At 0, sgn (v' -v) 0 )=-1。
The second method comprises the following steps: the electronic device calculates L according to equation (11) FP Equation (11) is as follows:
Figure BDA0003103792040000163
then, the electronic device is based on u' and the L FP The first parameter a is calculated according to equation (12), where equation (12) is as follows:
Figure BDA0003103792040000164
then, the electronic device calculates L according to formula (13) based on the a BB Equation (13) is as follows:
L BB =k 6 *a 6 +k 5 *a 5 +k 4 *a 4 +k 3 *a 3 +k 2 *a 2 +k 1 *a 1 +k 0 (13)
wherein k is 6 =-0.00616793,k 5 =0.0893944,k 3 =1.5317403,k 2 =-0.5179722,k 1 =1.925865,k 0 = 0.475106; after calculating L BB The electronic equipment calculates D of the image light source according to the formula (14) uv The value, equation (14) is as follows:
D uv =L FP -L BB (14)
step S506: the electronic equipment is based on the D uv Calculating the weight value f of the third FFCC model 3 And a weight value f of a fourth FFCC model 4
Specifically, the third FFCC model is the FFCC trained in the embodiment of fig. 4 3 A fourth FFCC model is the FFCC trained in the embodiment of FIG. 4 4 And (4) modeling. D uv For characterizing the color of the light source of the shooting environment when D uv When the color is larger than 0, the color of the light source of the shooting environment is green. When D is present uv When the color is less than 0, the light source color of the shooting environment is pink. Due to D calculated by formula (10) or formula (11) to formula (14) uv There is a certain error. Thus, there may be D of the image light source uv And the condition of non-conformity with the actual results in selecting a wrong FFCC model, and further obtaining an adjustment value with low accuracy. For example, D uv Calculated value was 0.6 (D) uv Greater than 0), but the color of the actual photographic light source is pink (D) uv Less than 0). Electronic device based on D uv Calculating value selected FFCC 3 FFCC that model and reality should select 4 The models are not consistent, and therefore, the FFCC obtained 3 The accuracy of the adjustment value output by the model is not high.
In order to solve the above problem, the electronic device passes the images through the FFCCs respectively 3 Model and FFCC 4 The model is calculated to obtain the FFCC 3 Model and FFCC 4 The adjustment value of model output is based on the D uv Calculating f 3 And f 4 So as to follow according to f 3 And f 4 For FFCC 3 Model and FFCC 4 And fusing the adjustment values output by the model.
The electronic device can calculate the weight value f according to formula (15) 3 Equation (15) is as follows:
Figure BDA0003103792040000171
wherein, duv thres To distinguish high and low D uv The threshold value of (1) may be understood as the second threshold value Duv' in the embodiment of fig. 4, which may be obtained from experimental data or historical empirical values, and the embodiment of the present application is not limited. Wherein y is D of the image light source uv ,Duv mult For characterizing f 3 Slope in the y-direction. The electronic device is based on the formula f 4 =1-f 3 Calculating to obtain a weight value f 4
Step S507: and the electronic equipment calculates and processes the image through the third FFCC model and the fourth FFCC model respectively to obtain a third chromaticity coordinate, a fourth chromaticity coordinate, a third covariance matrix and a fourth covariance matrix.
Specifically, FFCC 3 The model can directly calculate the uv chromaticity coordinate of the image light source and output the third chromaticity coordinate Mu of the image light source 3 (u′ 3 ,v′ 3 )。
In some embodiments, the FFCC 3 The model calculates a third RGB of the image light source and outputs the third RGB, which the electronic device converts to third chromaticity coordinates according to the above equations (3) and (4).
In other embodiments, the FFCC 3 The model calculates a third RGB _ GAIN of the image light source and outputs the third RGB _ GAIN. Then, the electronic device may take the reciprocal of each element in the third RGB _ GAIN to obtain a third RGB ', and then convert the third RGB' into a third chromaticity coordinate according to the above formulas (3) and (4).
In a similar way, FFCC 4 The model may directly calculate a fourth chromaticity coordinate of the image and output the fourth chromaticity coordinate. FFCC 4 The model may also calculate a fourth RGB of the image light source and output a fourth RGB of the imageRGB。FFCC 4 The model may also calculate a fourth RGB _ GAIN for the image light source and output the fourth RGB _ GAIN. When FFCC 4 When the model outputs the fourth RGB or the fourth RGB _ GAIN, the fourth RGB or the fourth RGB _ GAIN needs to be converted into the fourth chromaticity coordinate Mu 4 (u′ 4 ,v′ 4 ). Fourth RGB or fourth RGB _ GAIN is converted to Mu 4 Please refer to the first RGB or the first RGB _ GAIN to be converted to Mu 1 The related description of the above will not be repeated herein.
The third covariance matrix is FFCC 3 Covariance matrix Sigma with model output size of 2x2 3 ,Sigma 3 For characterizing the FFCC 3 The model outputs the degree of reliability of the adjustment value. The fourth covariance matrix is FFCC 4 The magnitude of the model output is 2x2 covariance matrix Sigma 4 ,Sigma 4 For characterizing the FFCC 4 The model outputs the degree of reliability of the adjustment value.
Step S508: the electronic equipment is based on the f 3 Updating the third covariance matrix to obtain an updated third covariance matrix, based on the f 4 And updating the fourth covariance matrix to obtain the updated fourth covariance matrix.
Specifically, the manner in which the electronic device updates the third covariance matrix and updates the fourth covariance matrix may be updated by a kalman filter. The embodiment of the present application takes the example that the electronic device updates the third covariance matrix and the fourth covariance matrix through the kalman filter. Electronic equipment Sigma 3 As input to the Kalman filter, the Kalman filter updates Sigma according to equation (16) 3 And outputting the updated third covariance matrix Sigma' 3 Equation (16) is as follows:
Figure BDA0003103792040000181
wherein, sigma' 3 For the updated third covariance matrix, f 3 Is FFCC 3 Weight value of the model, sigma 3 Before being updatedThe third covariance matrix of (2).
Similarly, the Kalman filter updates Sigma according to equation (17) 4 And outputting the updated fourth covariance matrix Sigma' 4 Equation (17) is as follows:
Figure BDA0003103792040000182
sigma 'of the above' 4 For the updated fourth covariance matrix, f 4 Is FFCC 4 Weight value of the model, sigma 4 Is the fourth covariance matrix before update.
Step S509: and the electronic equipment calculates to obtain a third adjustment value and a fourth adjustment value based on the updated third covariance matrix, the updated fourth covariance matrix, the third chromaticity coordinate and the fourth chromaticity coordinate.
Specifically, the electronic device may calculate a third adjustment value and a fourth adjustment value through a kalman filter, where the third adjustment value is chromaticity coordinate Mu' 3 The second regulation value is chromaticity coordinate Mu' 4 . The electronic device may calculate a third adjustment value Mu 'according to equation (18)' 3 Equation (18) is as follows:
Mu′ 3 =Mu 3 *(Sigma′ 3 ) -1 (18)
similarly, the electronic device may calculate the fourth adjustment value Mu 'according to formula (19)' 4 Equation (19) is as follows:
Mu′ 4 =Mu 4 *(Sigma′ 4 ) -1 (19)
step S510: and the electronic equipment inputs the first regulating value, the second regulating value, the third regulating value and the fourth regulating value into a Kalman filter for calculation to obtain the regulating values of the images.
Specifically, the electronic device will be Mu' 1 ~Mu′ 4 ,Sigma′ 1 ~Sigma′ 4 As an input to the Kalman filter, the Kalman filter is according to the formula Mu "= Mu' 1 *(Sigma′ 1 ) -1 +Mu′ 2 *(Sigma′ 2 ) -1 +Mu′ 3 *(Sigma′ 3 ) -1 +Mu′ 4 *(Sigma′ 4 ) -1 The fused chromaticity coordinates Mu "(u", v ") are obtained. The electronic device then converts Mu' to RGB _ GAIN and uses the RGB _ GAIN as an adjustment value for the image for adjusting the RGB of the image. The process for the electronic device to convert Mu "to RGB _ GAIN is:
the electronic device calculates the second parameter Z by equation (20), equation (20) being as follows:
Figure BDA0003103792040000183
then, the electronic device calculates the fused light sources RGB (R ', G ', B ') through equations (21) to (23), respectively, and equations (21) to (23) are as follows:
Figure BDA0003103792040000184
and finally, the electronic equipment takes the reciprocal of each element in the fused light source RGB to obtain the fused RGB _ GAIN.
In the embodiment of the application, the electronic equipment connects LV and D uv As two reference factors for adjusting white balance of images, the images are respectively used as FFCCs related to the LV 1 model-FFCC 2 Model, and D uv Related FFCC 3 model-FFCC 4 And inputting the model, obtaining four adjusting values through the output of the four FFCC models, and calculating the four adjusting values through a Kalman filter by the electronic equipment to obtain RGB _ GAIN of the image light source. Compared with the problem that the accuracy of the RGB _ GAIN of the light source obtained by the traditional AI AWB algorithm through calculation by using a single FFCC model is not high, the RGB _ GAIN of the light source output by the AI AWB algorithm in the embodiment of the application under different shooting scenes is higher in accuracy, and further the color cast of the image can be better eliminated.
The above embodiments illustrate the method of the embodiments of the present application in detail, and the related devices of the embodiments are described below.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present disclosure. The electronic device 100 comprises a processor 101 and a memory 102, wherein the detailed description of the units is as follows:
the memory 102 is used for storing program codes;
the processor 101 is configured to call the program code stored in the memory to perform the following steps:
respectively inputting the first image into a first FFCC (fast Fourier color constancy) model, a second FFCC model, a third FFCC model and a fourth FFCC model for calculation to obtain a first adjusting value, a second adjusting value, a third adjusting value and a fourth adjusting value;
and inputting the first adjusting value, the second adjusting value, the third adjusting value and the fourth adjusting value into a Kalman filter for calculation to obtain the adjusting value of the first image.
In a possible implementation manner, the processor 101 inputs the first image into a first FFCC model (fast fourier color constancy model), a second FFCC model, a third FFCC model, and a fourth FFCC model respectively to calculate, so as to obtain a first adjustment value, a second adjustment value, a third adjustment value, and a fourth adjustment value, and specifically includes:
respectively inputting the first image into the first FFCC model, the second FFCC model, the third FFCC model and the fourth FFCC model for calculation to obtain a first chromaticity coordinate and a first covariance matrix, a second chromaticity coordinate and a second covariance matrix, a third chromaticity coordinate and a third covariance matrix, and a fourth chromaticity coordinate and a fourth covariance matrix;
calculating the weight value f of the first FFCC model according to the brightness value of the first image 1 And a weight value f of the second FFCC model 2
Based on the f 1 Calculating the first covariance matrix and the first chromaticity coordinate to obtain the first adjusting value;
based on the f 2 The second covariance momentCalculating the second adjustment value according to the array and the second chromaticity coordinate;
calculating D of the first image light source based on the first adjustment value and the second adjustment value uv
According to D of the first image light source uv Calculating the weight value f of the third FFCC model 3 And a weight value f of the fourth FFCC model 4
Based on the f 3 Calculating the third covariance matrix and the third chromaticity coordinate to obtain a third adjustment value;
based on the f 4 And calculating the fourth adjustment value according to the fourth covariance matrix and the fourth chromaticity coordinate.
In one possible implementation manner, the processor 101 calculates the weight value f of the first FFCC model according to the brightness value of the first image 1 And a weight value f of the second FFCC model 2 The method specifically comprises the following steps:
according to the formula
Figure BDA0003103792040000191
Obtaining the weight value f 1
According to the formula f 2 =1-f 1 Obtaining the weight value f 2
In one possible implementation, the processor 101 is based on the f 1 Calculating the first adjustment value according to the first covariance matrix and the first chromaticity coordinate, and specifically including:
the first covariance matrix is expressed according to a formula
Figure BDA0003103792040000192
Calculating to obtain an updated first covariance matrix;
mixing the first chromaticity coordinates and the Sigma' 1 According to a formula Mu' 1 =Mu 1 *(Sigma′ 1 ) -1 And calculating to obtain the first adjusting value.
In one possibilityIn an implementation manner, the processor 101 is based on the f 2 Calculating the second adjustment value according to the second covariance matrix and the second chromaticity coordinate, and specifically including:
the second covariance matrix is expressed according to a formula
Figure BDA0003103792040000201
Calculating to obtain an updated second covariance matrix;
mixing the second chromaticity coordinates with the Sigma' 2 According to a formula Mu' 2 =Mu 2 *(Sigma′ 2 ) -1 And calculating to obtain the second adjusting value.
In one possible implementation manner, the processor 101 calculates D of the first image light source based on the first adjustment value and the second adjustment value uv The method specifically comprises the following steps:
according to the formula Mu '= Mu' 1 *(Sigma′ 1 ) -1 +Mu′ 2 *(Sigma′ 2 ) -1 Calculating to obtain a fused chromaticity coordinate;
obtaining D of the first image light source based on the Mu uv
In one possible implementation, the processor 101 is configured to determine the first image light source according to D uv Calculating the weight value f of the third FFCC model 3 And a weight value f of the fourth FFCC model 4 The method specifically comprises the following steps:
according to the formula
Figure BDA0003103792040000202
Obtaining the weight value f 3
According to the formula f 4 =1-f 3 Obtaining the weight value f 4
In one possible implementation, the processor 101 is based on the f 3 And calculating the third adjustment value by using the third covariance matrix and the third chromaticity coordinate, wherein the third adjustment value specifically comprises:
combining the third covarianceMatrix according to formula
Figure BDA0003103792040000203
Calculating to obtain an updated third covariance matrix;
mixing the third chroma coordinates and the Sigma' 3 According to a formula Mu' 3 =Mu 3 *(Sigma′ 3 ) -1 And calculating to obtain a third regulating value.
In one possible implementation, the processor 101 is based on the f 4 The fourth covariance matrix and the fourth chromaticity coordinate are calculated to obtain the fourth adjustment value, which specifically includes:
the fourth covariance matrix is expressed according to a formula
Figure BDA0003103792040000204
Calculating to obtain an updated fourth covariance matrix;
mixing the fourth chroma coordinate with the Sigma' 4 According to a formula Mu' 4 =Mu 4 *(Sigma′ 4 ) -1 And calculating to obtain a fourth adjusting value.
In a possible implementation manner, the inputting, by the processor 101, the first adjustment value, the second adjustment value, the third adjustment value, and the fourth adjustment value into a kalman filter to perform calculation to obtain the adjustment value of the first image specifically includes:
according to the formula Mu '= Mu' 1 *(Sigma′ 1 ) -1 +Mu′ 2 *(Sigma′ 2 ) -1 +Mu′ 3 *(Sigma′ 3 ) -1 +Mu′ 4 *(Sigma′ 4 ) -1 And calculating to obtain an adjusting value of the first image.
The present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps in the algorithm for AI automatic white balance in the foregoing embodiments and various possible implementations thereof.
The present application provides a chip system, which includes a processor, and is configured to support an electronic device to implement the functions related to the method in the foregoing embodiments and various possible manners thereof.
In one possible design, the chip system further includes a memory for program instructions and data necessary for the AI auto white balance algorithm. The chip system may be formed by a chip, or may include a chip and other discrete devices.
Embodiments of the present application provide a computer program product containing instructions, which when run on an electronic device, cause the electronic device to perform the steps in the algorithm for AI automatic white balancing in the above embodiments and various possible implementations thereof.
In the above embodiments, all or part of the implementation may be realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the present application are generated in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, digital subscriber line) or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid State Disk), among others.
One of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a computer-readable storage medium, and when executed, may include the processes of the above method embodiments. And the aforementioned storage medium includes: various media capable of storing program codes, such as ROM or RAM, magnetic or optical disks, etc.
In short, the above description is only an example of the technical solution of the present invention, and is not intended to limit the scope of the present invention. Any modifications, equivalents, improvements and the like made in accordance with the disclosure of the present invention should be considered as being included in the scope of the present invention.

Claims (14)

1. An algorithm for AI auto white balance, comprising:
respectively inputting a first image into a first FFCC model, a second FFCC model, a third FFCC model and a fourth FFCC model for calculation to obtain a first adjusting value, a second adjusting value, a third adjusting value and a fourth adjusting value, wherein the first adjusting value is obtained through the first FFCC model, the second adjusting value is obtained through the second FFCC model, the third adjusting value is obtained through the third FFCC model, and the fourth adjusting value is obtained through the fourth FFCC model;
inputting the first adjustment value, the second adjustment value, the third adjustment value and the fourth adjustment value into a Kalman filter for calculation to obtain an adjustment value of the first image, wherein the adjustment value of the first image is used for adjusting the color of the first image;
the first FFCC model is used for identifying a model corresponding to the image brightness value being larger than a first threshold value, the second FFCC model is used for identifying a model corresponding to the image brightness value being smaller than or equal to the first threshold value, and the third FFCC model is used for identifying the D of the image light source uv The model corresponding to the value greater than the second threshold value, and the fourth FFCC model is used for identifying the D of the image light source uv Is less thanOr equal to the model to which the second threshold corresponds.
2. The method of claim 1, wherein the inputting the first image into the first FFCC model, the second FFCC model, the third FFCC model, and the fourth FFCC model respectively for calculation to obtain the first adjustment value, the second adjustment value, the third adjustment value, and the fourth adjustment value specifically comprises:
respectively inputting the first image into the first FFCC model, the second FFCC model, the third FFCC model and the fourth FFCC model for calculation to obtain a first chromaticity coordinate and a first covariance matrix, a second chromaticity coordinate and a second covariance matrix, a third chromaticity coordinate and a third covariance matrix, and a fourth chromaticity coordinate and a fourth covariance matrix;
calculating the weight value f of the first FFCC model according to the brightness value of the first image 1 And a weight value f of the second FFCC model 2
Based on the f 1 Calculating the first covariance matrix and the first chromaticity coordinate to obtain the first adjusting value;
based on the f 2 Calculating the second adjustment value according to the second covariance matrix and the second chromaticity coordinate;
calculating D of the first image light source based on the first adjustment value and the second adjustment value uv
According to D of the first image light source uv Calculating the weight value f of the third FFCC model 3 And a weight value f of the fourth FFCC model 4
Based on the f 3 Calculating the third adjustment value according to the third covariance matrix and the third chroma coordinate;
based on the f 4 And calculating the fourth adjustment value according to the fourth covariance matrix and the fourth chromaticity coordinate.
3. The method of claim 2, wherein the generating is based on the first graphCalculating the weight value f of the first FFCC model according to the brightness value of the image 1 And a weight value f of the second FFCC model 2 The method specifically comprises the following steps:
according to the formula
Figure FDA0003103792030000011
Obtaining the weight value f 1
According to the formula f 2 =1-f 1 Obtaining the weight value f 2
Wherein, said Lv thres Is the first threshold value, x is a luminance value of the first image, and Lv is the luminance value of the first image mult For characterizing said x at said f 1 The rate of change in the time.
4. The method of any of claims 2-3, wherein the f is based on 1 Calculating the first adjustment value according to the first covariance matrix and the first chromaticity coordinate, and specifically including:
the first covariance matrix is expressed according to a formula
Figure FDA0003103792030000021
Calculating to obtain an updated first covariance matrix, wherein the Sigma 1 Is the first covariance matrix, the Sigma' 1 The updated first covariance matrix;
mixing the first chromaticity coordinates and the Sigma' 1 According to formula Mu' 1 =Mu 1 *(Sigma′ 1 ) -1 Calculating to obtain the first regulating value; wherein, the Mu 1 Is the first chromaticity coordinate, mu' 1 Is the first adjustment value.
5. The method of any of claims 2-3, wherein the f is based on 2 Calculating the second adjustment value according to the second covariance matrix and the second chromaticity coordinate, and specifically including:
will be described inThe second covariance matrix is based on the formula
Figure FDA0003103792030000022
Calculating to obtain an updated second covariance matrix, the Sigma 2 Is the second covariance matrix, the Sigma' 2 The updated second covariance matrix;
mixing the second chromaticity coordinates with the Sigma' 2 According to a formula Mu' 2 =Mu 2 *(Sigma′ 2 ) -1 Calculating to obtain the second adjusting value; wherein, the Mu 2 Is the second chromaticity coordinate, the Mu' 2 Is the second adjustment value.
6. The method of any of claims 4-5, wherein calculating D for the first image light source based on the first adjustment value and the second adjustment value uv The method specifically comprises the following steps:
according to the formula Mu '= Mu' 1 *(Sigma′ 1 ) -1 +Mu′ 2 *(Sigma′ 2 ) -1 Calculating to obtain a fused chromaticity coordinate, wherein Mu 'is the fused chromaticity coordinate, mu' 1 Is the first regulation value, mu' 2 Is the second adjustment value, sigma' 1 Is the updated first covariance matrix, sigma' 2 Is the updated second covariance matrix;
calculating D of the first image light source based on the Mu uv
7. The method of any of claims 2-6, wherein the source is based on D of the first image light uv Calculating the weight value f of the third FFCC model 3 And a weight value f of the fourth FFCC model 4 The method specifically comprises the following steps:
according to the formula
Figure FDA0003103792030000023
Obtaining the weight value f 3
According to the formula f 4 =1-f 3 Obtaining the weight value f 4
Wherein, the Duv thres Is the second threshold value, and y is D of the first image light source uv The Duv of mult For characterizing said y at said f 3 The rate of change in the time.
8. The method of any of claims 2-7, wherein the basing on the f 3 And the third covariance matrix and the third chromaticity coordinate are calculated to obtain the third adjustment value, which specifically includes:
the third covariance matrix is expressed according to a formula
Figure FDA0003103792030000024
Calculating to obtain an updated third covariance matrix, wherein the Sigma 3 Is the third covariance matrix, the Sigma' 3 The updated third covariance matrix;
mixing the third chroma coordinates and the Sigma' 3 According to a formula Mu' 3 =Mu 3 *(Sigma′ 3 ) -1 Calculating to obtain the third regulating value; wherein, the Mu 3 Is the third chromaticity coordinate, the Mu' 3 Is the third adjustment value.
9. The method of any of claims 2-8, wherein the basing on the f 4 The fourth covariance matrix and the fourth chromaticity coordinate are calculated to obtain the fourth adjustment value, which specifically includes:
the fourth covariance matrix is expressed according to a formula
Figure FDA0003103792030000025
Calculating to obtain an updated fourth covariance matrix, sigma 4 Is the fourth covariance matrix, the Sigma' 4 The updated fourth covariance matrix;
mixing the fourth chroma coordinate with the Sigma' 4 According to a formula Mu' 4 =Mu 4 *(Sigma′ 4 ) -1 Calculating to obtain the fourth adjusting value; wherein, the Mu 4 Is the fourth chromaticity coordinate, the Mu' 4 Is the fourth adjustment value.
10. The method according to any one of claims 2 to 9, wherein the inputting the first adjustment value, the second adjustment value, the third adjustment value, and the fourth adjustment value into a kalman filter to perform calculation to obtain the adjustment value of the first image specifically includes:
according to the formula Mu '= Mu' 1 *(Sigma′ 1 ) -1 +Mu′ 2 *(Sigma′ 2 ) -1 +Mu′ 3 *(Sigma′ 3 ) -1 +Mu′ 4 *(Sigma′ 4 ) -1 Calculating to obtain an adjusting value of the first image;
wherein the Mu 'is an adjusted value of the first image, the Mu' 1 Is the first regulation value, mu' 2 Is the second regulation value, mu' 3 Is the third regulation value, mu' 4 Is the fourth adjustment value, sigma' 1 Is the updated first covariance matrix, sigma' 2 Is an updated second covariance matrix, sigma' 3 Is the updated third covariance matrix, sigma' 4 Is the updated fourth covariance matrix.
11. An electronic device, comprising: the system comprises a touch screen, a camera, one or more processors and one or more memories; the one or more processors are coupled with the touch screen, the camera, the one or more memories for storing computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform the method of any of claims 1-10.
12. A computer-readable storage medium comprising instructions that, when executed on an electronic device, cause the electronic device to perform the method of any of claims 1-10.
13. A chip system for application to an electronic device, the chip system comprising one or more processors for invoking computer instructions to cause the electronic device to perform the method of any of claims 1-10.
14. A computer program product comprising instructions for causing an electronic device to perform the method according to any one of claims 1-10 when the computer program product is run on the electronic device.
CN202110631167.8A 2021-06-07 2021-06-07 Algorithm for automatic white balance of AI (automatic input/output) and electronic equipment Active CN115514947B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202310814091.1A CN116761081A (en) 2021-06-07 2021-06-07 Algorithm for automatic white balance of AI (automatic input/output) and electronic equipment
CN202110631167.8A CN115514947B (en) 2021-06-07 2021-06-07 Algorithm for automatic white balance of AI (automatic input/output) and electronic equipment
PCT/CN2022/093491 WO2022257713A1 (en) 2021-06-07 2022-05-18 Ai automatic white balance algorithm and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110631167.8A CN115514947B (en) 2021-06-07 2021-06-07 Algorithm for automatic white balance of AI (automatic input/output) and electronic equipment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202310814091.1A Division CN116761081A (en) 2021-06-07 2021-06-07 Algorithm for automatic white balance of AI (automatic input/output) and electronic equipment

Publications (2)

Publication Number Publication Date
CN115514947A true CN115514947A (en) 2022-12-23
CN115514947B CN115514947B (en) 2023-07-21

Family

ID=84425753

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202310814091.1A Pending CN116761081A (en) 2021-06-07 2021-06-07 Algorithm for automatic white balance of AI (automatic input/output) and electronic equipment
CN202110631167.8A Active CN115514947B (en) 2021-06-07 2021-06-07 Algorithm for automatic white balance of AI (automatic input/output) and electronic equipment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202310814091.1A Pending CN116761081A (en) 2021-06-07 2021-06-07 Algorithm for automatic white balance of AI (automatic input/output) and electronic equipment

Country Status (2)

Country Link
CN (2) CN116761081A (en)
WO (1) WO2022257713A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110279703A1 (en) * 2010-05-12 2011-11-17 Samsung Electronics Co., Ltd. Apparatus and method for processing image by using characteristic of light source
CN108024055A (en) * 2017-11-03 2018-05-11 广东欧珀移动通信有限公司 Method, apparatus, mobile terminal and the storage medium of white balance processing
CN108376404A (en) * 2018-02-11 2018-08-07 广东欧珀移动通信有限公司 Image processing method and device, electronic equipment, storage medium
CN109348206A (en) * 2018-11-19 2019-02-15 Oppo广东移动通信有限公司 Image white balancing treatment method, device, storage medium and mobile terminal
CN109618145A (en) * 2018-12-13 2019-04-12 深圳美图创新科技有限公司 Color constancy bearing calibration, device and image processing equipment
US20200051225A1 (en) * 2016-11-15 2020-02-13 Google Llc Fast Fourier Color Constancy
US20210006760A1 (en) * 2018-11-16 2021-01-07 Huawei Technologies Co., Ltd. Meta-learning for camera adaptive color constancy

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20030009624A (en) * 2001-07-23 2003-02-05 주식회사 하이닉스반도체 White balance correction method by using Kalman filter
JP5308792B2 (en) * 2008-11-28 2013-10-09 オリンパス株式会社 White balance adjustment device, white balance adjustment method, white balance adjustment program, and imaging device
JP6337434B2 (en) * 2013-09-18 2018-06-06 株式会社ニコン Image processing apparatus and imaging apparatus
US20170171523A1 (en) * 2015-12-10 2017-06-15 Motorola Mobility Llc Assisted Auto White Balance
CN106204662B (en) * 2016-06-24 2018-11-20 电子科技大学 A kind of color of image constancy method under multiple light courcess environment
CN107635123B (en) * 2017-10-30 2019-07-19 Oppo广东移动通信有限公司 White balancing treatment method and device, electronic device and computer readable storage medium
CN108234971B (en) * 2018-02-09 2019-11-05 上海小蚁科技有限公司 White balance parameter determines method, white balance adjustment method and device, storage medium, terminal
WO2019227355A1 (en) * 2018-05-30 2019-12-05 华为技术有限公司 Image processing method and apparatus
GB201908521D0 (en) * 2019-06-13 2019-07-31 Spectral Edge Ltd Image white balance processing system and method
CN112204957A (en) * 2019-09-20 2021-01-08 深圳市大疆创新科技有限公司 White balance processing method and device, movable platform and camera

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110279703A1 (en) * 2010-05-12 2011-11-17 Samsung Electronics Co., Ltd. Apparatus and method for processing image by using characteristic of light source
US20200051225A1 (en) * 2016-11-15 2020-02-13 Google Llc Fast Fourier Color Constancy
CN108024055A (en) * 2017-11-03 2018-05-11 广东欧珀移动通信有限公司 Method, apparatus, mobile terminal and the storage medium of white balance processing
CN108376404A (en) * 2018-02-11 2018-08-07 广东欧珀移动通信有限公司 Image processing method and device, electronic equipment, storage medium
US20210006760A1 (en) * 2018-11-16 2021-01-07 Huawei Technologies Co., Ltd. Meta-learning for camera adaptive color constancy
CN109348206A (en) * 2018-11-19 2019-02-15 Oppo广东移动通信有限公司 Image white balancing treatment method, device, storage medium and mobile terminal
CN109618145A (en) * 2018-12-13 2019-04-12 深圳美图创新科技有限公司 Color constancy bearing calibration, device and image processing equipment

Also Published As

Publication number Publication date
CN115514947B (en) 2023-07-21
CN116761081A (en) 2023-09-15
WO2022257713A1 (en) 2022-12-15

Similar Documents

Publication Publication Date Title
US9866748B2 (en) System and method for controlling a camera based on processing an image captured by other camera
US9307213B2 (en) Robust selection and weighting for gray patch automatic white balancing
KR20190041586A (en) Electronic device composing a plurality of images and method
KR20210078656A (en) Method for providing white balance and electronic device for supporting the same
CN113727085B (en) White balance processing method, electronic equipment, chip system and storage medium
US9030575B2 (en) Transformations and white point constraint solutions for a novel chromaticity space
CN113066020A (en) Image processing method and device, computer readable medium and electronic device
TWI604413B (en) Image processing method and image processing device
WO2023015993A9 (en) Chromaticity information determination method and related electronic device
CN115514948B (en) Image adjusting method and electronic device
CN117135471A (en) Image processing method and electronic equipment
CN115514947B (en) Algorithm for automatic white balance of AI (automatic input/output) and electronic equipment
TW201717190A (en) Display adjustment method electronic device
WO2022067761A1 (en) Image processing method and apparatus, capturing device, movable platform, and computer readable storage medium
US20200374420A1 (en) Image processing apparatus, image processing method, and storage medium
CN111602390A (en) Terminal white balance processing method, terminal and computer readable storage medium
CN116051434B (en) Image processing method and related electronic equipment
CN116668838B (en) Image processing method and electronic equipment
KR20210101571A (en) method for generating image and electronic device thereof
CN110808002A (en) Screen display compensation method and device and electronic equipment
WO2023124165A1 (en) Image processing method and related electronic device
JP2015119436A (en) Imaging apparatus
CN116437060B (en) Image processing method and related electronic equipment
CN114697629B (en) White balance processing method and device, storage medium and terminal equipment
EP4258676A1 (en) Automatic exposure method and electronic device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant