CN106295679B - Color image light source color estimation method based on category correction - Google Patents

Color image light source color estimation method based on category correction Download PDF

Info

Publication number
CN106295679B
CN106295679B
Authority
CN
China
Prior art keywords
image
light source
feature
edge feature
correction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610606092.7A
Other languages
Chinese (zh)
Other versions
CN106295679A (en)
Inventor
李永杰
张明
高绍兵
任燕泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201610606092.7A priority Critical patent/CN106295679B/en
Publication of CN106295679A publication Critical patent/CN106295679A/en
Application granted granted Critical
Publication of CN106295679B publication Critical patent/CN106295679B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a color image light source color estimation method based on category correction. Edge features are first extracted from a group of images with known light source colors, and a correction matrix between the edge features and the light sources is then learned by the least squares method; edge features are extracted from the test image to be processed and multiplied by the correction matrix to obtain a rough light source estimate. An accurate light source estimate is subsequently obtained by finding, through a K-nearest-neighbor search in feature space, the class of training images similar in features to the test image to be processed, and re-learning on them. The method involves few parameters, and since the extracted features are simple and few in number, it is also computationally simple and fast; moreover, being a learning-based method, it achieves a high-quality result with high accuracy and is well suited to applications that place relatively high demands on the accuracy of light source color estimation.

Description

Color image light source color estimation method based on category correction
Technical field
The invention belongs to the technical fields of computer vision and image processing, and specifically relates to the design of a color image light source color estimation method based on category correction.
Background technique
In a natural environment, the same object presents different colors under light of different colors; for example, green leaves appear yellowish under morning light and bluish under candlelight. The human visual system can counteract such changes in light source color so that object colors are perceived as constant; that is, the visual system exhibits color constancy. Limited by technical conditions, however, machines do not have this ability, and pictures taken by physical devices such as cameras suffer severe color casts as the light source color changes. How to accurately estimate the light source color in a scene from the available image information and remove it, so as to recover the colors of objects as they would appear under standard white light, is therefore of particular importance.
Computational color constancy is aimed at exactly this problem. Its main purpose is to compute the color of the unknown light source contained in an arbitrary image, and then use the computed light source color to correct the light source color of the input image so that it appears as if shown under standard white light, producing a so-called standard image. Because the standard image is free of light source color effects, subsequent computing tasks such as color-based scene classification and image retrieval no longer suffer the misclassification or false retrieval caused by color casts.
Computational color constancy methods can be divided into two classes: learning-based methods and traditional static methods. Traditional static methods extract simple features from the image to estimate the light source; because their estimation error is large, they cannot meet engineering requirements well. Learning-based methods were developed on this basis to address that demand. A typical learning-based method is the one proposed by G. D. Finlayson in 2013 (G. D. Finlayson, "Corrected-moment illuminant estimation," in Proc. IEEE Int. Conf. Comput. Vis., 2013, pp. 1904-1911), which extracts features and uses regression to find the relationship between features and light source. Its estimation accuracy is relatively high, but because the regression applies the same correction matrix to all images, the estimated light source error for some images is very large, so it cannot satisfy applications with very high accuracy requirements for light source color estimation, such as intelligent robots or the front end of image-receiving equipment for automatic driving. A method that learns a different correction matrix for different images is therefore of particular importance.
Summary of the invention
The purpose of the present invention is to solve the problem that light source color estimation methods in the prior art cannot satisfy applications with very high accuracy requirements, by proposing a color image light source color estimation method based on category correction.
The technical solution of the present invention is a color image light source color estimation method based on category correction, comprising the following steps:
S1, extracting edge features of the training images: N color images with known light sources are taken as the original training set T; each is convolved with the template G obtained by differentiating a Gaussian distribution to obtain the edge value corresponding to each pixel of the image, and edge features are extracted, giving the edge feature matrix M of the N training images;
S2, learning the correction matrix: the correction matrix C between the feature matrix M computed in step S1 and the standard light sources L of the N training images is learned by the least squares method;
S3, rough light source estimation: the edge features of the test image are extracted by the method of step S1 and multiplied by the correction matrix C learned in step S2, giving a rough light source estimate L1;
S4, finding training images corresponding to the test image: the light source is removed from the test image and from the original training set T, and edge features are then extracted from each by the method of S1 to form a feature space; the K training images most similar to the test image in features are found in the feature space as a new training set TN;
S5, accurate light source estimation: steps S1-S4 are repeated, each time replacing the training set T in step S1 with the new training set TN obtained in step S4 (the number of training images accordingly changing from N to K), until the TN obtained in step S4 is identical to the TN obtained in the previous iteration; the light source estimate L1 obtained in step S3 of the last iteration is taken as the final light source estimate.
Further, the template G obtained by differentiating a Gaussian distribution in step S1 is the Gaussian gradient operator.
Further, the edge features in step S1 are calculated as:
M_xyz = (1/N1) Σ_i R_i^x · G_i^y · B_i^z
where R_i, G_i and B_i denote the edge values of each pixel in the R, G and B channels respectively, N1 denotes the number of pixels in the image, and M_xyz is the value of the edge feature for given x, y, z; x, y and z range over all combinations satisfying x ≥ 0, y ≥ 0, z ≥ 0 and 1 ≤ x + y + z ≤ 3.
Further, the value range of K in step S4 is
Further, step S4 specifically comprises the following sub-steps:
S41, removing the standard light sources L from the original N training images and extracting edge features by the method of step S1;
S42, removing the light source L1 roughly estimated in step S3 from the test image and extracting edge features by the method of step S1; these, together with the edge features extracted from the N training images in step S41, form the feature space;
S43, finding in the feature space the K images closest in feature distance to the test image, as the new training image set TN of the test image.
Further, the feature distance in step S43 is the Euclidean distance.
The beneficial effects of the present invention are as follows. Edge features are first extracted from a group of images with known light source colors, and a correction matrix between the edge features and the light sources is learned by the least squares method; edge features are then extracted from the test image to be processed and multiplied by the correction matrix to obtain a rough light source estimate. An accurate light source estimate is subsequently obtained by finding, through a K-nearest-neighbor search in feature space, the class of training images similar in features to the test image to be processed, and re-learning on them. Since the distance between the test image to be processed and the training images in feature space varies, appropriately adjusting the number K of corresponding training images yields results better suited to different types of training images; K is the only parameter here. The invention thus involves few parameters (only the parameter K), and since the extracted features are simple and few in number, it is also computationally simple and fast. Moreover, being a learning-based method, it achieves a high-quality result with high accuracy and is well suited to applications with relatively high accuracy requirements for light source color estimation, such as the front end of vision-receiving equipment built into intelligent robots or automatic driving systems.
Detailed description of the invention
Fig. 1 is a flowchart of the color image light source color estimation method based on category correction provided by the invention.
Fig. 2 is the test image tools_ph-ulm.GIF to be processed in embodiment one of the invention.
Fig. 3 is a schematic diagram of the error values between the light source estimated at each step and the real light source in embodiment one of the invention.
Fig. 4 is a schematic diagram comparing the final light source estimate with the real light source in embodiment one of the invention.
Fig. 5 is a schematic diagram of the result of tone-correcting the original test image with the light source color value calculated by step S5 in embodiment two of the invention.
Specific embodiment
Embodiments of the present invention are further described below with reference to the accompanying drawings.
The present invention provides a color image light source color estimation method based on category correction which, as shown in Fig. 1, comprises the following steps.
S1, extracting edge features of the training images: N color images with known light sources are taken as the original training set T; each is convolved with the template G obtained by differentiating a Gaussian distribution to obtain the edge value corresponding to each pixel of the image, and edge features are extracted, giving the edge feature matrix M of the N training images.
Here, the template G obtained by differentiating a Gaussian distribution is the Gaussian gradient operator.
The edge features are calculated as:
M_xyz = (1/N1) Σ_i R_i^x · G_i^y · B_i^z
where R_i, G_i and B_i denote the edge values of each pixel in the R, G and B channels respectively, N1 denotes the number of pixels in the image, and M_xyz is the value of the edge feature for given x, y, z. Here x, y and z range over all combinations satisfying x ≥ 0, y ≥ 0, z ≥ 0 and 1 ≤ x + y + z ≤ 3; the total number of combinations is 19, so 19 edge features are obtained.
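The S1 feature extraction can be sketched as follows. The patent's formula image is not reproduced in this text, so the exact moment form is an assumption: this sketch takes per-channel edge magnitudes from separable Gaussian-derivative filtering (the template G) and averages the monomials R^x·G^y·B^z over the 19 exponent triples with 1 ≤ x+y+z ≤ 3. The kernel radius and σ are illustrative choices, not values from the patent.

```python
import numpy as np

def gauss_deriv_kernels(sigma=1.0, radius=3):
    """1-D Gaussian and Gaussian first-derivative kernels (the template G)."""
    t = np.arange(-radius, radius + 1, dtype=np.float64)
    g = np.exp(-t**2 / (2 * sigma**2))
    g /= g.sum()
    dg = -t / sigma**2 * g          # derivative of the Gaussian
    return g, dg

def channel_edges(chan, sigma=1.0):
    """Per-pixel edge magnitude of one channel via separable filtering."""
    g, dg = gauss_deriv_kernels(sigma)
    conv = lambda a, k, axis: np.apply_along_axis(
        lambda v: np.convolve(v, k, mode="same"), axis, a)
    dx = conv(conv(chan, dg, 1), g, 0)   # derivative along columns, smooth rows
    dy = conv(conv(chan, dg, 0), g, 1)   # derivative along rows, smooth columns
    return np.hypot(dx, dy)

def edge_moment_features(img, sigma=1.0):
    """19 assumed moment features: mean(R^x G^y B^z), 1 <= x+y+z <= 3."""
    img = np.asarray(img, dtype=np.float64)
    R, G, B = (channel_edges(img[..., c], sigma).ravel() for c in range(3))
    feats = [np.mean(R**x * G**y * B**z)
             for x in range(4) for y in range(4) for z in range(4)
             if 1 <= x + y + z <= 3]
    return np.array(feats)
```

Stacking one such 19-vector per training image row by row gives the N×19 matrix M of step S1.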
S2, learning the correction matrix: the correction matrix C between the feature matrix M computed in step S1 and the standard light sources L of the N training images is learned by the least squares method.
S3, rough light source estimation: the edge features of the test image are extracted by the method of step S1 and multiplied by the correction matrix C learned in step S2, giving a rough light source estimate L1.
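Steps S2 and S3 reduce to one least-squares fit and one matrix product. A minimal sketch with random stand-in data (the real inputs would be the S1 edge features and the known standard light sources):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 214                       # training images, as in embodiment one
M = rng.random((N, 19))       # stand-in edge feature matrix, one row per image
L = rng.random((N, 3))        # stand-in known light sources (R, G, B)

# S2: least-squares correction matrix C (19x3) minimizing ||M @ C - L||.
C, *_ = np.linalg.lstsq(M, L, rcond=None)

# S3: rough light source estimate for a test image's feature row M1 (1x19).
M1 = rng.random((1, 19))
L1 = M1 @ C                   # rough estimate, shape (1, 3)
```
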
S4, finding training images corresponding to the test image: the light source is removed from the test image and from the original training set T, and edge features are then extracted from each by the method of S1 to form a feature space; the K training images most similar to the test image in features are found in the feature space as a new training set TN.
This step specifically comprises the following sub-steps:
S41, the standard light sources L are removed from the original N training images, and edge features are extracted by the method of step S1.
S42, the light source L1 roughly estimated in step S3 is removed from the test image, and edge features are extracted by the method of step S1; these, together with the edge features extracted from the N training images in step S41, form the feature space.
S43, the K images closest in feature distance to the test image are found in the feature space, as the new training image set TN of the test image.
Here, the feature distance in step S43 is the Euclidean distance.
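Sub-step S43's nearest-neighbor search, with the Euclidean distance stated above, can be sketched directly:

```python
import numpy as np

def k_nearest_training_images(test_feat, train_feats, k):
    """Indices of the k training images whose (light-source-removed) edge
    features are closest to the test image's in Euclidean distance."""
    d = np.linalg.norm(train_feats - test_feat, axis=1)
    return np.argsort(d)[:k]
```

The returned indices select the rows of the training set that form the new training set TN.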
S5, accurate light source estimation: steps S1-S4 are repeated, each time replacing the training set T in step S1 with the new training set TN obtained in step S4 (the number of training images accordingly changing from N to K), until the TN obtained in step S4 is identical to the TN obtained in the previous iteration; the light source estimate L1 obtained in step S3 of the last iteration is taken as the final light source estimate.
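The S5 fixed-point iteration can be sketched as follows; `estimate_fn` and `knn_fn` are hypothetical helpers standing in for steps S1-S3 (learn C on a training set and produce a rough estimate) and step S4 (select the neighbor set TN) respectively:

```python
def refine_light_source(test_img, train_set, estimate_fn, knn_fn, max_iter=20):
    """Re-learn the correction matrix on the K nearest training images until
    that neighbour set stops changing, then return the last rough estimate."""
    prev = None
    L1 = estimate_fn(test_img, train_set)    # S1-S3 on the full training set
    for _ in range(max_iter):
        tn = knn_fn(test_img, train_set, L1) # S4: new training set TN
        if prev is not None and tn == prev:  # converged: TN unchanged
            break
        L1 = estimate_fn(test_img, tn)       # S1-S3 re-learned on TN only
        prev = tn
    return L1
```

The `max_iter` cap mirrors the embodiment's choice to stop after a fixed number of repetitions to save time.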
The final light source estimate L1 of the image, computed through step S5, can be used directly in subsequent computer vision applications; for example, dividing the components of each color channel of the original input color image by the corresponding components of L1 removes the light source color from the image. In addition, white balancing and color correction of images also require the final light source estimate L1 calculated in S5.
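The per-channel division described here is the standard diagonal (von Kries style) correction and is one line of NumPy:

```python
import numpy as np

def remove_light_source(img, L1):
    """Divide every pixel's R, G, B components by the corresponding
    components of the final light source estimate L1."""
    return np.asarray(img, dtype=np.float64) / np.asarray(L1, dtype=np.float64)
```

A quick sanity check: a scene that is pure white under light L1 becomes a uniform image after correction.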
The color image light source color estimation method based on category correction provided by the invention is further described below with a specific embodiment:
Embodiment one:
Download all 321 pictures of the internationally recognized SFU object image library for scene light source color estimation, together with their corresponding real light source colors (standard light sources) L; the image size is 468 × 637. The first 214 images of the library are selected as the training set, and one image from the remainder, tools_ph-ulm.GIF (shown in Fig. 2), is selected as the test image to be processed. None of the images undergo any in-camera preprocessing (such as tint correction or gamma correction). The detailed steps of the invention are then as follows:
S1, extracting edge features of the training images: the 214 color images with known light sources are taken as the original training set T; each is convolved with the template G (the Gaussian gradient operator) obtained by differentiating a Gaussian distribution to obtain the edge value corresponding to each pixel, and the 19-dimensional edge features of each image are then extracted, finally giving the 214×19 edge feature matrix M of the training set.
S2, learning the correction matrix: the correction matrix between the feature matrix M computed in step S1 and the standard light sources L of the 214 training images is learned by the least squares method, giving the 19×3 correction matrix C:
C = [-150.0689, -30.1462, -21.5186; -96.5582, -196.1642, -348.5298; 52.6551, 76.4461, 115.5982; -200.5289, -240.3650, -179.6495; -79.6311, 72.4539, 125.1126; -56.1276, -130.2963, -226.1518; 683.9180, 552.9035, 366.8781; 214.1444, -15.5379, -52.8198; -149.3407, 138.0260, 397.1888; 154.6218, 240.3336, 128.2161; 156.5752, -50.6503, 69.4182; 22.7103, 90.3730, 274.5781; -65.9786, -384.7642, -66.2556; -112.7044, -104.0913, -12.8868; -349.7427, 81.5115, -215.8972; -79.0109, -48.0727, -32.2072; -98.2723, -22.7039, -51.2091; 108.6481, -52.0896, -265.9989; 172.6056, 171.2726, 95.1991].
S3, rough light source estimation: the 19-dimensional edge features of the test image are extracted by the method of step S1, giving the 1×19 edge feature matrix M1 of the test image:
M1=[0.0002,0.0004,0.0002,0.0014,0.0017,0.0012,0.0015,0.0013,0.0014, 0.0036,0.0040,0.0031,0.0037,0.0034,0.0039,0.0037,0.0032,0.0034,0.0035].
M1 is then multiplied by the correction matrix C learned in step S2, giving the rough light source estimate L1 = [0.1985, 0.2151, 0.2360].
S4, finding training images corresponding to the test image: the light source is removed from the test image and from the original 214-image training set T, and 19-dimensional edge features are then extracted from each by the method of S1 to form the feature space. The K training images most similar to the test image in features are found in the feature space, yielding the class of images similar to it in features; these K images form the new training set TN. In this embodiment of the invention, K = 100 is chosen.
This step specifically comprises the following sub-steps:
S41, the standard light sources L are removed from the original 214 training images, and 19-dimensional edge features are extracted by the method of step S1, giving the 214×19 feature matrix M0.
S42, the light source L1 roughly estimated in step S3 is removed from the test image, and edge features are extracted by the method of step S1, giving the 1×19 feature matrix M2:
M2=[0.0060,0.0089,0.0038,0.0366,0.0371,0.0224,0.0365,0.0282,0.0285, 0.0937,0.0900,0.0581,0.0921,0.0790,0.0909,0.0768,0.0673,0.0664,0.0777].
M2 and the edge features M0 extracted from the 214 training images in step S41 together form the feature space.
S43, the 100 images closest in feature distance to the test image are found in the feature space, as the new training image set TN of the test image.
S5, accurate light source estimation: steps S1-S4 are repeated, each time replacing the training set T in step S1 with the new training set TN obtained in step S4 (the number of training images accordingly changing from N to K), until the TN obtained in step S4 is identical to the TN obtained in the previous iteration; the light source estimate L1 obtained in step S3 of the last iteration is taken as the final light source estimate.
In this embodiment of the invention, to save time, the operation is repeated twice. The light source estimate obtained after one repetition is L1 = [0.3412, 0.3591, 0.3168], and after two repetitions L1 = [0.3312, 0.3365, 0.3430]. The estimate obtained after the second repetition, L1 = [0.3312, 0.3365, 0.3430], is taken as the final light source estimate.
As shown in Fig. 3, the first bar indicates the angular error between the light source roughly estimated in step S3 and the real light source; the second bar is the angular error between the light source estimated after one repetition in step S5 and the real light source; and the third bar is the angular error between the light source estimated after two repetitions in step S5 and the real light source. The line connecting the three bars reflects the downward trend of the estimation error, showing that the estimated light source becomes increasingly accurate.
Fig. 4 compares the direction of the red and green component responses in the tristimulus space finally computed by step S5 with the direction of the red and green component responses of the real light source, showing that the estimate computed by step S5 is very close to the true scene light source color.
A simple demonstration of a practical application of the final light source estimate obtained by the invention, taking image tone correction as an example, is given below in a specific embodiment:
Embodiment two:
The light source color value of each color component calculated by step S5 is used to correct the pixel values of the corresponding color component of the original input image. Taking the pixel (0.335, 0.538, 0.601) of the test image input in step S3 as an example, the result after correction is (0.335/0.3312, 0.538/0.3365, 0.601/0.3430) = (1.0115, 1.5988, 1.7522), which becomes (0.2319, 0.3665, 0.4016) after normalization; the corrected value is then multiplied by the standard white light coefficient 1/√3 ≈ 0.5774, giving (0.1339, 0.2116, 0.2319) as the pixel value of the final output corrected image. The other pixels of the original input image are processed in the same way, finally giving the corrected color image shown in Fig. 5.
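The worked pixel example above can be checked numerically; taking the standard white light coefficient as 1/√3 ≈ 0.5774 reproduces the quoted outputs:

```python
import numpy as np

# Embodiment two's worked example: correct one pixel of the test image with
# the final estimate L1, normalize, and scale by the white light coefficient.
pixel = np.array([0.335, 0.538, 0.601])
L1 = np.array([0.3312, 0.3365, 0.3430])

corrected = pixel / L1                       # (1.0115, 1.5988, 1.7522)
normalized = corrected / corrected.sum()     # (0.2319, 0.3665, 0.4016)
output = normalized / np.sqrt(3.0)           # (0.1339, 0.2116, 0.2319)
```
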
Those of ordinary skill in the art will understand that the embodiments described herein are intended to help the reader understand the principle of the invention, and that the protection scope of the invention is not limited to these specific embodiments. Those of ordinary skill in the art can, according to the technical teachings disclosed by the invention, make various specific variations and combinations that do not depart from the essence of the invention; such variations and combinations remain within the protection scope of the invention.

Claims (6)

1. A color image light source color estimation method based on category correction, characterized by comprising the following steps:
S1, extracting edge features of the training images: taking N color images with known light sources as the original training set T, convolving each with the template G obtained by differentiating a Gaussian distribution to obtain the edge value corresponding to each pixel of the image, and extracting edge features to obtain the edge feature matrix M of the N training images;
S2, learning a correction matrix: learning, by the least squares method, the correction matrix C between the feature matrix M computed in step S1 and the standard light sources L of the N training images;
S3, rough light source estimation: extracting the edge features of the test image by the method of step S1 and multiplying them by the correction matrix C learned in step S2 to obtain a rough light source estimate L1;
S4, finding training images corresponding to the test image: removing the light source from the test image and from the original training set T, then extracting edge features from each by the method of S1 to form a feature space, and finding in the feature space the K training images most similar to the test image in features as a new training set TN;
S5, accurate light source estimation: repeating steps S1-S4, each time replacing the training set T in step S1 with the new training set TN obtained in step S4, the number of training images accordingly changing from N to K, until the TN obtained in step S4 is identical to the TN obtained in the previous iteration, and taking the light source estimate L1 obtained in step S3 of the last iteration as the final light source estimate.
2. The color image light source color estimation method based on category correction according to claim 1, characterized in that the template G obtained by differentiating a Gaussian distribution in step S1 is the Gaussian gradient operator.
3. The color image light source color estimation method based on category correction according to claim 1, characterized in that the edge features in step S1 are calculated as:
M_xyz = (1/N1) Σ_i R_i^x · G_i^y · B_i^z
where R_i, G_i and B_i denote the edge values of each pixel in the R, G and B channels respectively, N1 denotes the number of pixels in the image, and M_xyz is the value of the edge feature for given x, y, z, where x, y and z range over all combinations satisfying x ≥ 0, y ≥ 0, z ≥ 0 and 1 ≤ x + y + z ≤ 3.
4. The color image light source color estimation method based on category correction according to claim 1, characterized in that the value range of K in step S4 is
5. The color image light source color estimation method based on category correction according to claim 1, characterized in that step S4 specifically comprises the following sub-steps:
S41, removing the standard light sources L from the original N training images and extracting edge features by the method of step S1;
S42, removing the light source L1 roughly estimated in step S3 from the test image and extracting edge features by the method of step S1, these edge features together with the edge features extracted from the N training images in step S41 forming the feature space;
S43, finding in the feature space the K images closest in feature distance to the test image, as the new training image set TN of the test image.
6. The color image light source color estimation method based on category correction according to claim 5, characterized in that the feature distance in step S43 is the Euclidean distance.
CN201610606092.7A 2016-07-28 2016-07-28 Color image light source color estimation method based on category correction Active CN106295679B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610606092.7A CN106295679B (en) 2016-07-28 2016-07-28 Color image light source color estimation method based on category correction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610606092.7A CN106295679B (en) 2016-07-28 2016-07-28 Color image light source color estimation method based on category correction

Publications (2)

Publication Number Publication Date
CN106295679A CN106295679A (en) 2017-01-04
CN106295679B true CN106295679B (en) 2019-06-25

Family

ID=57663052

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610606092.7A Active CN106295679B (en) 2016-07-28 2016-07-28 Color image light source color estimation method based on category correction

Country Status (1)

Country Link
CN (1) CN106295679B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110060308B (en) * 2019-03-28 2021-02-02 杭州电子科技大学 Color constancy method based on light source color distribution limitation
CN112995634B (en) * 2021-04-21 2021-07-20 贝壳找房(北京)科技有限公司 Image white balance processing method and device, electronic equipment and storage medium
CN116188797B (en) * 2022-12-09 2024-03-26 齐鲁工业大学 Scene light source color estimation method capable of being effectively embedded into image signal processor

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103258334A (en) * 2013-05-08 2013-08-21 电子科技大学 Method of estimating scene light source colors of color image
US20130243085A1 (en) * 2012-03-15 2013-09-19 Samsung Electronics Co., Ltd. Method of multi-view video coding and decoding based on local illumination and contrast compensation of reference frames without extra bitrate overhead

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130243085A1 (en) * 2012-03-15 2013-09-19 Samsung Electronics Co., Ltd. Method of multi-view video coding and decoding based on local illumination and contrast compensation of reference frames without extra bitrate overhead
CN103258334A (en) * 2013-05-08 2013-08-21 电子科技大学 Method of estimating scene light source colors of color image

Also Published As

Publication number Publication date
CN106295679A (en) 2017-01-04

Similar Documents

Publication Publication Date Title
Hemrit et al. Rehabilitating the colorchecker dataset for illuminant estimation
CN109520706B (en) Screw hole coordinate extraction method of automobile fuse box
CN106295679B (en) Color image light source color estimation method based on category correction
CN104504722B (en) Method for correcting image colors through gray points
JP5301715B2 (en) Image illumination detection
Isa Pixel distribution shifting color correction for digital color images
US20190052855A1 (en) System and method for detecting light sources in a multi-illuminated environment using a composite rgb-ir sensor
Wang et al. Fast automatic white balancing method by color histogram stretching
WO2019011342A1 (en) Cloth identification method and device, electronic device and storage medium
CN102867295A (en) Color correction method for color image
CN111385438B (en) Compensating method and device for lens shading correction and computer readable storage medium
CN106296658B (en) A kind of scene light source estimation accuracy method for improving based on camera response function
CN108154496B (en) Electric equipment appearance change identification method suitable for electric power robot
US20160225159A1 (en) Image processing method and image processing apparatus
Fredembach et al. The bright-chromagenic algorithm for illuminant estimation
CN109587466A (en) The method and apparatus of colored shadow correction
CN106204500B (en) A method of realizing that different cameral shooting Same Scene color of image remains unchanged
Fursov et al. Correction of distortions in color images based on parametric identification
Yoshinari et al. Color image enhancement in HSI color space without gamut problem
RU2524869C1 (en) Device for colouring black and white image
Fang et al. Colour correction toolbox
Faghih et al. Neural gray edge: Improving gray edge algorithm using neural network
Woo et al. Deep dichromatic guided learning for illuminant estimation
CN113034449A (en) Target detection model training method and device and communication equipment
CN112801002A (en) Facial expression recognition method and device based on complex scene and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant