CN106204662A - Image color constancy method under a multi-light-source environment - Google Patents
Image color constancy method under a multi-light-source environment
- Publication number
- CN106204662A (application CN201610478268.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- color
- light source
- dark areas
- bright area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention discloses an image color constancy method for multi-light-source environments. A multi-illuminant color constancy method based on retinal physiological mechanisms is proposed: it simulates the light/dark adaptation ability of vision and the receptive-field characteristics of retinal neurons, and establishes a model in which the image is divided into bright and dark regions processed by ON-type and OFF-type receptive fields. The model estimates the position and color of the light sources well, and its parameters can be adjusted to adapt to different scene images. The method has few parameters, is computationally simple, fast, and effective, supports real-time processing, and is well suited to being built into the front-end preprocessing of a physical device to estimate the scene illuminant color in an image.
Description
Technical field
The invention belongs to the fields of computer vision and image processing, and specifically relates to scene multi-illuminant color estimation and image color correction for color images.
Background technology
In natural environments, the human visual system can resist changes of illuminant color in a scene. For example, for the same scene viewed under slightly yellow noon sunlight or under the redder sunlight of dusk, the color of the scene perceived by our visual system remains constant; this ability is known as the color constancy of the visual system. Limited by technical conditions, pictures captured by current imaging devices such as cameras often exhibit varying degrees of color cast caused by the scene illuminant. This illuminant-induced color cast prevents subsequent computer vision applications, such as object recognition and shape matching, from identifying and matching accurately based on color. Therefore, effectively removing the scene illuminant color contained in an input color-cast image is particularly important. Computational color constancy is devoted to solving exactly this problem: its main purpose is to compute the color of the unknown illuminant contained in an arbitrary image, and then use the computed illuminant color to correct the original color-cast image, obtaining the image as it would appear under standard white light. The most classical illuminant estimation method to date is the gray-world method proposed by Buchsbaum in 1980 (reference: G. Buchsbaum, "A spatial processor model for object colour perception," Journal of the Franklin Institute, vol. 310, no. 1, pp. 1-26, July 1980). Although that method is computationally simple, in most cases it can hardly estimate the illuminant color with full accuracy, and the algorithm only targets scenes under uniform illumination.
To date, research on estimating non-uniformly distributed multiple illuminants remains relatively scarce. A representative method was proposed by Hamid Reza Vaezi Joze et al. in 2014 (reference: H. R. V. Joze and M. S. Drew, "Exemplar-based color constancy and multiple illumination," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 5, pp. 860-873, 2014). However, that method is computationally complex, requires training on a large number of samples, is inflexible, and is unsuitable for real-time processing.
Summary of the invention
The purpose of the invention is to overcome the defects of existing multi-illuminant color estimation methods for image scenes by proposing an image color constancy method for multi-light-source environments.
The technical scheme of the invention is an image color constancy method under a multi-light-source environment, comprising the following steps:
S1. Compute the luminance map of the image: extract the color components of the image, decomposing the input color image into its red, green, and blue components, and add the three components to obtain the luminance map.
S2. Divide the image into bright and dark regions: partition the luminance map obtained in S1 into two classes, namely a bright region and a dark region.
S3. Compute the illuminants of the bright and dark regions separately: filter the original image with templates of different scales for the bright region and the dark region, obtaining preliminary illuminant estimates for each region.
S4. Synthesize the illuminant of the whole image: add the illuminants of the bright and dark regions computed in step S3 to obtain the illuminant color of the whole image.
S5. Remove the illuminant color to achieve color constancy: divide each pixel of the original color-cast image by the corresponding pixel of the computed illuminant image, yielding a corrected image free of color cast.
The purpose of dividing bright and dark regions in step S2 is to simulate the light/dark adaptation mechanism of the retina: cones and rods in the retina have a clear division of labor, and different cells participate in the computation at different luminance levels.
The actual division uses the classical K-means method to cluster the luminance map into two classes, a dark region and a bright region; bright-region pixels of the original color image are labeled 1 and dark-region pixels are labeled 0.
The filtering template in step S3 is a difference-of-Gaussians (DoG) template, with the following formula:
where R1 is the model of the ON-type receptive field, whose center is positive and surround negative, used here as the filtering template for the bright region, and R2 is the model of the OFF-type receptive field, whose center is negative and surround positive, used here as the filtering template for the dark region. The scale of the template surround is 3 times the center scale, i.e. λ = 3. In addition, the template is an unbalanced difference of Gaussians, i.e. the weight k is not equal to 1; here the range of k is k ∈ (0.1, 1). The bright-region filter scale is smaller than the dark-region filter scale, i.e. σ1 < σ2.
As a preferred scheme, σ1 ∈ [2, 10] and σ2 ∈ [3, 20].
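The DoG formula itself appeared as an image in the original text and is not reproduced here. Based on the surrounding description (a center Gaussian of scale σ, a surround Gaussian of scale λσ, and a surround weight k), the two templates presumably take the standard normalized difference-of-Gaussians form:

```latex
R_{1}(x,y) = \frac{1}{2\pi\sigma_{1}^{2}}\,e^{-\frac{x^{2}+y^{2}}{2\sigma_{1}^{2}}}
           - k\,\frac{1}{2\pi(\lambda\sigma_{1})^{2}}\,e^{-\frac{x^{2}+y^{2}}{2(\lambda\sigma_{1})^{2}}},
\qquad
R_{2}(x,y) = -\frac{1}{2\pi\sigma_{2}^{2}}\,e^{-\frac{x^{2}+y^{2}}{2\sigma_{2}^{2}}}
           + k\,\frac{1}{2\pi(\lambda\sigma_{2})^{2}}\,e^{-\frac{x^{2}+y^{2}}{2(\lambda\sigma_{2})^{2}}}
```

With λ = 3 and k < 1, R1 is center-positive/surround-negative (ON type) and R2 is its sign-reversed OFF counterpart, consistent with the constraints stated above; this form is a reconstruction, not the patent's literal formula.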
Computing the illuminants of the bright and dark regions in step S3 specifically comprises the following sub-steps:
S31. Convolve the original image with R1 to obtain L1.
S32. Convolve the original image with R2 to obtain L2.
S33. Set the dark-region part of L1 (the pixels labeled 0) to 0, and set the bright-region part of L2 (the pixels labeled 1) to 0, thereby obtaining the illuminants of the bright and dark regions.
Through the above steps, the illuminant at every pixel of the image is computed, giving a complete illuminant image. This illuminant map can be used to correct the original color-cast image so that it recovers the color image as it would appear under a standard (white) light source.
Beneficial effects of the invention: the color constancy method of the invention first divides the input color image into a bright region and a dark region according to luminance differences. It then filters the bright region with a center-positive, surround-negative difference-of-Gaussians (DoG) filter and the dark region with a center-negative, surround-positive DoG filter, the two filters differing in scale and unbalanced in amplitude. The invention thus draws on the light/dark adaptation of the retina in the human visual system and on the ON-type and OFF-type receptive fields of its neurons. Suitably adjusting the receptive-field extents of the center and surround (the scales of the DoG center and surround) changes the spatial-frequency tuning (for example, switching between band-pass and low-pass behavior), while adjusting the sensitivity of the surround receptive field (the amplitude of the DoG surround) allows the color regions and color-boundary information in the scene to be extracted efficiently, so that the scene illuminant color is estimated well. The ON-type receptive-field scale, the OFF-type receptive-field scale, and the surround amplitude shared by the two receptive fields are therefore the three main parameters. The method has few parameters (only three adjustable ones, namely two scales and one amplitude), is computationally simple, fast, and effective, supports real-time processing, and is well suited to being built into the front-end preprocessing of a physical device (such as a camera) to estimate the scene illuminant color in an image.
Brief description of the drawings
Fig. 1 is a flow diagram of the scene illuminant color estimation method for color images of the invention.
Fig. 2 shows, for the embodiment, the synthetic illuminant distribution (2a), the illuminant distribution estimated for the bright region (2b), the illuminant distribution estimated for the dark region (2c), and the final complete estimated illuminant distribution (2d).
Fig. 3 shows an original input image of the embodiment (Fig. 3a), the image after adding the synthetic illuminant (Fig. 3b), and the result after color correction (Fig. 3c).
Detailed description of the invention
Human vision inherently possesses color constancy and can automatically correct the color cast in a scene; the retina has the ability of light/dark adaptation, and retinal neurons have characteristic receptive fields. Based on this, the invention proposes a multi-illuminant color constancy method built on retinal physiological mechanisms. The method simulates the light/dark adaptation of vision and the receptive-field characteristics of retinal neurons, and establishes a model in which the image is divided into bright and dark regions processed by ON-type and OFF-type receptive fields. The model estimates the position and color of the illuminants well, and its parameters can be adjusted to different scene images.
The method is described below through an embodiment.
An image (8D5U5535.png) and its corresponding ground-truth illuminant color were downloaded from an internationally recognized image-library website used for scene illuminant color estimation. The image size is 340 × 511, and the 8D5U5535 image has not undergone any in-camera preprocessing (such as color correction or gamma correction). The flow of the detailed steps of the invention is shown in Fig. 1; the detailed process is as follows:
S1. Compute the luminance map of the image: first decompose the input color image into its red, green, and blue components, then add the three components to obtain the luminance map.
Take two pixels of the original input image, (136, 45) and (397, 172), with pixel values (0.026, 0.035, 0.026) and (0.868, 1, 0.842) respectively. Decomposed into red, green, and blue components, the values are 0.026, 0.035, 0.026 and 0.868, 1, 0.842. Adding the RGB components gives each pixel's luminance: 0.026 + 0.035 + 0.026 = 0.087 and 0.868 + 1 + 0.842 = 2.710.
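Step S1 can be sketched in a few lines of NumPy (a sketch, not the patented implementation; the function name `luminance` is illustrative):

```python
import numpy as np

def luminance(image):
    """Step S1: sum the R, G, B components of an H x W x 3 image
    to obtain the H x W luminance map."""
    return image[..., 0] + image[..., 1] + image[..., 2]

# The two example pixels from the text, arranged as a 1 x 2 image:
pixels = np.array([[[0.026, 0.035, 0.026],
                    [0.868, 1.000, 0.842]]])
lum = luminance(pixels)
print(lum[0, 0], lum[0, 1])  # ~0.087 and ~2.710, as in the text
```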
S2. Divide the image into bright and dark regions: using the luminance values obtained in S1, cluster the image into a dark-region class and a bright-region class with the classical K-means method. Bright-region pixels of the original color image are labeled 1 and dark-region pixels 0.
For the pixels of S1: pixel (136, 45) has luminance 0.087, a low value, and after K-means clustering it falls into the dark-region class and is labeled 0; pixel (397, 172) has luminance 2.710, a high value, and after K-means clustering it falls into the bright-region class and is labeled 1.
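The two-class K-means split of step S2 can be sketched as a minimal 1-D K-means on luminance values (`split_bright_dark` is an illustrative name; a real implementation would typically call a library routine such as scikit-learn's `KMeans`):

```python
import numpy as np

def split_bright_dark(lum, iters=20):
    """Step S2: cluster luminance values into two classes with K-means.
    Returns an integer mask, 1 for the bright region and 0 for the dark."""
    c_dark, c_bright = lum.min(), lum.max()  # initialize centers at extremes
    for _ in range(iters):
        # assignment step: each value joins its nearest center
        bright = np.abs(lum - c_bright) < np.abs(lum - c_dark)
        # update step: recompute each center as its cluster mean
        if bright.any():
            c_bright = lum[bright].mean()
        if (~bright).any():
            c_dark = lum[~bright].mean()
    return bright.astype(int)

# Luminances of the two example pixels plus two nearby values:
mask = split_bright_dark(np.array([0.087, 2.710, 0.1, 2.5]))
print(mask)  # [0 1 0 1]
```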
S3. Compute the illuminants of the bright and dark regions separately: convolve the pixels of the bright-region class and of the dark-region class, obtained in step S2, with their respective filters.
Specifically, in this embodiment the bright region uses a center-positive, surround-negative DoG filter, and the dark region uses a center-negative, surround-positive DoG filter, the two differing in scale and unbalanced in amplitude. For different image scenes, the scales (σ1 and σ2) and the amplitude (k), three parameters in all, can be adjusted flexibly to estimate the scene illuminant color.
Take as an example the input pixel values (0.026, 0.035, 0.026) and (0.868, 1, 0.842), which by steps S1 and S2 were assigned to the dark region (labeled 0) and the bright region (labeled 1) respectively.
Here the following computation can be used:
S31. Convolve the original image with filter R1 to obtain L1. After convolution, the dark-region pixel (0.026, 0.035, 0.026) has value (0.026, 0.030, 0.022) in L1, and the bright-region pixel (0.868, 1, 0.842) has value (0.880, 0.998, 0.880) in L1.
S32. Convolve the original image with filter R2 to obtain L2. After convolution, the dark-region pixel (0.026, 0.035, 0.026) has value (0.050, 0.054, 0.043) in L2, and the bright-region pixel (0.868, 1, 0.842) has value (0.840, 0.952, 0.839) in L2.
S33. Set the dark-region part of L1 (the pixels labeled 0) to 0: the dark-region pixel (0.026, 0.035, 0.026) is labeled 0, so it is set to (0, 0, 0) in L1. Set the bright-region part of L2 (the pixels labeled 1) to 0: the bright-region pixel (0.868, 1, 0.842) is labeled 1, so it is set to (0, 0, 0) in L2. Adding L1 and L2 gives the first estimate of the illuminant of the whole image.
In this example the convolution scale σ1 of R1 is 3, the convolution scale σ2 of R2 is 20, and the surround receptive-field weight k of both convolution templates is 0.5.
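The unbalanced DoG templates used here (σ1 = 3, σ2 = 20, λ = 3, k = 0.5 in this embodiment) can be sketched as below. This is an assumed construction from normalized center/surround Gaussians, not the patent's literal code; filtering would then convolve each color channel of the image with the template (e.g. via `scipy.ndimage.convolve`):

```python
import numpy as np

def gaussian_kernel(sigma):
    """Isotropic 2-D Gaussian truncated at 3*sigma, normalized to sum to 1."""
    r = int(np.ceil(3 * sigma))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    g = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def dog_template(sigma, k=0.5, lam=3.0, on_type=True):
    """Unbalanced DoG: center Gaussian minus a k-weighted surround Gaussian
    of scale lam*sigma. on_type=True gives R1 (center +, surround -);
    on_type=False gives R2, the sign-reversed OFF template."""
    center = gaussian_kernel(sigma)
    surround = gaussian_kernel(lam * sigma)
    # zero-pad the smaller center kernel to the surround's size
    pad = (surround.shape[0] - center.shape[0]) // 2
    dog = np.pad(center, pad) - k * surround
    return dog if on_type else -dog

R1 = dog_template(3.0, on_type=True)    # bright-region template, sigma1 = 3
R2 = dog_template(20.0, on_type=False)  # dark-region template, sigma2 = 20
# Each normalized Gaussian sums to 1, so the templates sum to +/-(1 - k):
print(round(R1.sum(), 6), round(R2.sum(), 6))  # 0.5 -0.5
```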
S4. Synthesize the illuminant of the whole image: add the bright-region illuminant L1 and the dark-region illuminant L2 to obtain the illuminant color of the whole image.
For the two example pixels of S1, the illuminant color at the (0.026, 0.035, 0.026) pixel is (0.050 + 0, 0.054 + 0, 0.043 + 0) = (0.050, 0.054, 0.043), and the illuminant color at the (0.868, 1, 0.842) pixel is (0.880 + 0, 0.998 + 0, 0.880 + 0) = (0.880, 0.998, 0.880).
Fig. 2 shows the synthetic illuminant distribution (2a), the distribution of the bright-region illuminant L1 (2b), the distribution of the dark-region illuminant L2 (2c), and the final complete estimated illuminant distribution (2d).
S5. Remove the illuminant color to achieve color constancy: divide each pixel of the original color-cast image by the corresponding pixel of the computed illuminant image, yielding a corrected image free of color cast.
Using the illuminant color values computed above for each color component, correct the pixel values of each color component of the original input image. For the two pixels of the original input image in step S1, (0.026, 0.035, 0.026) and (0.868, 1, 0.842), the corrected results are (0.026/0.050, 0.035/0.054, 0.026/0.043) = (0.520, 0.648, 0.605) and (0.868/0.880, 1/0.998, 0.842/0.880) = (0.986, 1.002, 0.957). The corrected values are then multiplied by the standard-white factor 1/√3 ≈ 0.577, giving (0.300, 0.374, 0.349) and (0.569, 0.579, 0.553) as the pixel values of the final output corrected image. The other pixel values of the original input image are computed similarly, finally yielding the corrected color image.
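The per-pixel correction of step S5, including the standard-white rescaling, can be sketched as follows (assuming the standard-white factor is 1/√3 ≈ 0.577, which reproduces the worked numbers above; `correct` is an illustrative name):

```python
import numpy as np

STANDARD_WHITE = 1.0 / np.sqrt(3.0)  # assumed standard-white factor, ~0.577

def correct(pixel, illum):
    """Step S5: divide out the estimated illuminant at this pixel, then
    rescale so the result is referred to standard white light."""
    return np.asarray(pixel) / np.asarray(illum) * STANDARD_WHITE

dark = correct([0.026, 0.035, 0.026], [0.050, 0.054, 0.043])
bright = correct([0.868, 1.000, 0.842], [0.880, 0.998, 0.880])
print(np.round(dark, 3))  # the (0.300, 0.374, 0.349) values from the text
```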
The simplified example above illustrates the method using single pixel values; in an actual computation, the procedure is carried out over all pixel values of the whole image.
The example describes in detail the whole process by which the method of the invention computes the scene illuminant color and removes the influence of the illuminant, thereby achieving color constancy under multiple light sources.
Fig. 3c is the result of color-correcting, with the illuminant color values computed by the method, the multi-illuminant image (Fig. 3b) obtained by adding an artificial illuminant to the original image (Fig. 3a).
Those of ordinary skill in the art will appreciate that the embodiment described here is intended to help readers understand the principle of the invention, and it should be understood that the protection scope of the invention is not limited to such specific statements and embodiments. Without departing from the essence of the invention, those of ordinary skill in the art can make various other specific variations and combinations according to the technical teachings disclosed by the invention, and these variations and combinations remain within the protection scope of the invention.
Claims (6)
1. An image color constancy method under a multi-light-source environment, comprising the following steps:
S1. Compute the luminance map of the image: extract the color components of the image, decomposing the input color image into red, green, and blue components, and add the three components to obtain the luminance map.
S2. Divide the image into bright and dark regions: partition the luminance map obtained in step S1 into two classes, namely a bright region and a dark region.
S3. Compute the illuminants of the bright and dark regions separately: filter the original image with templates of different scales for the bright region and the dark region, obtaining preliminary illuminant estimates for each region.
S4. Synthesize the illuminant of the whole image: add the illuminants of the bright and dark regions computed in step S3 to obtain the illuminant color of the whole image.
S5. Remove the illuminant color to achieve color constancy: divide each pixel of the original color-cast image by the corresponding pixel of the computed illuminant image, yielding a corrected image free of color cast.
2. The image color constancy method under a multi-light-source environment according to claim 1, characterized in that the division into bright and dark regions uses the classical K-means method to cluster the image into two classes, a dark region and a bright region, with bright-region pixels of the original color image labeled 1 and dark-region pixels labeled 0.
3. The image color constancy method under a multi-light-source environment according to claim 1, characterized in that the filtering template in step S3 is a difference-of-Gaussians (DoG) template with the following formula:
where R1 is the model of the ON-type receptive field, whose center is positive and surround negative, used as the filtering template for the bright region, and R2 is the model of the OFF-type receptive field, whose center is negative and surround positive, used as the filtering template for the dark region; the scale of the template surround is 3 times the center scale, i.e. λ = 3; the template is an unbalanced difference of Gaussians, i.e. the weight k is not equal to 1; and the bright-region filter scale σ1 is smaller than the dark-region filter scale σ2.
4. The image color constancy method under a multi-light-source environment according to claim 3, characterized in that computing the illuminants of the bright and dark regions in step S3 comprises the following sub-steps:
S31. Convolve the original image with R1 to obtain L1.
S32. Convolve the original image with R2 to obtain L2.
S33. Set the dark-region part of L1 (the pixels labeled 0) to 0, and set the bright-region part of L2 (the pixels labeled 1) to 0, thereby obtaining the illuminants of the bright and dark regions.
5. The image color constancy method under a multi-light-source environment according to claim 3, characterized in that the range of the weight k is k ∈ (0.1, 1).
6. The image color constancy method under a multi-light-source environment according to claim 3, characterized in that σ1 ∈ [2, 10] and σ2 ∈ [3, 20].
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610478268.5A CN106204662B (en) | 2016-06-24 | 2016-06-24 | A kind of color of image constancy method under multiple light courcess environment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106204662A true CN106204662A (en) | 2016-12-07 |
CN106204662B CN106204662B (en) | 2018-11-20 |
Family
ID=57461999
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610478268.5A Active CN106204662B (en) | 2016-06-24 | 2016-06-24 | A kind of color of image constancy method under multiple light courcess environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106204662B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101674490A (en) * | 2009-09-23 | 2010-03-17 | 电子科技大学 | Color image color constant method based on retina vision mechanism |
US20120155753A1 (en) * | 2010-12-20 | 2012-06-21 | Samsung Techwin Co., Ltd. | Method and apparatus for estimating light source |
CN103518223A (en) * | 2011-04-18 | 2014-01-15 | 高通股份有限公司 | White balance optimization with high dynamic range images |
Non-Patent Citations (1)
Title |
---|
YE Qin et al., "Shadow elimination based on color constancy in urban aerial images," Journal of Optoelectronics · Laser *
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108416788A (en) * | 2018-03-29 | 2018-08-17 | 河南科技大学 | A kind of edge detection method based on receptive field and its light |
CN108537852A (en) * | 2018-04-17 | 2018-09-14 | 四川大学 | A kind of adaptive color shape constancy method based on Image Warping |
CN108537852B (en) * | 2018-04-17 | 2020-07-07 | 四川大学 | Self-adaptive color constancy method based on image local contrast |
CN108596986A (en) * | 2018-04-20 | 2018-09-28 | 四川大学 | A kind of multiple light courcess color constancy method based on retina physiological mechanism |
CN109978848A (en) * | 2019-03-19 | 2019-07-05 | 电子科技大学 | Method based on hard exudate in multiple light courcess color constancy model inspection eye fundus image |
CN109978848B (en) * | 2019-03-19 | 2022-11-04 | 电子科技大学 | Method for detecting hard exudation in fundus image based on multi-light-source color constancy model |
CN110378289A (en) * | 2019-07-19 | 2019-10-25 | 王立之 | A kind of the reading identifying system and method for Vehicle Identification Number |
CN110378289B (en) * | 2019-07-19 | 2023-05-16 | 王立之 | Reading and identifying system and method for vehicle identification code |
CN111161253A (en) * | 2019-12-31 | 2020-05-15 | 柳州快速制造工程技术有限公司 | Mold inspection method based on depth information |
CN112802137A (en) * | 2021-01-28 | 2021-05-14 | 四川大学 | Color constancy method based on convolution self-encoder |
CN112802137B (en) * | 2021-01-28 | 2022-06-21 | 四川大学 | Color constancy method based on convolution self-encoder |
CN116761081A (en) * | 2021-06-07 | 2023-09-15 | 荣耀终端有限公司 | Algorithm for automatic white balance of AI (automatic input/output) and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN106204662B (en) | 2018-11-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106204662A | Image color constancy method under a multi-light-source environment | |
Moore et al. | A real-time neural system for color constancy | |
CN103914699B (en) | A kind of method of the image enhaucament of the automatic lip gloss based on color space | |
CN103973991A | An automatic exposure method for judging light scenes based on a BP neural network | |
CN103258334B (en) | The scene light source colour method of estimation of coloured image | |
CN109685742A (en) | A kind of image enchancing method under half-light environment | |
CN103065334A (en) | Color cast detection and correction method and device based on HSV (Hue, Saturation, Value) color space | |
CN109658341A (en) | Enhance the method and device thereof of picture contrast | |
Rizzi et al. | On the behavior of spatial models of color | |
CN107798661A (en) | A kind of adaptive image enchancing method | |
US10892166B2 (en) | System and method for light field correction of colored surfaces in an image | |
CN101674490B (en) | Color image color constant method based on retina vision mechanism | |
CN103955900B (en) | Image defogging method based on biological vision mechanism | |
CN105812762A (en) | Automatic white balance method for processing image color cast | |
CN104504722A (en) | Method for correcting image colors through gray points | |
CN107169942B (en) | Underwater image enhancement method based on fish retina mechanism | |
CN103295205B (en) | A kind of low-light-level image quick enhancement method based on Retinex and device | |
CN106709888B (en) | A kind of high dynamic range images production method based on human vision model | |
CN105160636A (en) | Adaptive image pre-treatment method for on-board optical imaging sensor | |
CN108537852B (en) | Self-adaptive color constancy method based on image local contrast | |
CN106815602A (en) | A kind of runway FOD image detection method and devices based on multi-level features description | |
CN106023238A (en) | Color data calibration method for camera module | |
CN109451292A (en) | Color temp bearing calibration and device | |
Ulucan et al. | Color constancy beyond standard illuminants | |
CN105245864A (en) | Camera white balance processing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||