CN109618145A - Color constancy correction method, device and image processing equipment - Google Patents

Color constancy correction method, device and image processing equipment

Info

Publication number
CN109618145A
CN109618145A (application CN201811528397.6A)
Authority
CN
China
Prior art keywords
channel
value
sample
channel gain
gain value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811528397.6A
Other languages
Chinese (zh)
Other versions
CN109618145B (en)
Inventor
刘键涛
周凡
张长定
李骈臻
张伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Creative Technology Ltd Shenzhen
Original Assignee
Creative Technology Ltd Shenzhen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Creative Technology Ltd Shenzhen filed Critical Creative Technology Ltd Shenzhen
Priority to CN201811528397.6A priority Critical patent/CN109618145B/en
Publication of CN109618145A publication Critical patent/CN109618145A/en
Application granted granted Critical
Publication of CN109618145B publication Critical patent/CN109618145B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/88Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control

Abstract

The embodiments of the present application provide a color constancy correction method, a color constancy correction device, and an image processing equipment. An R channel gain value of a target image is calculated by a preset algorithm as an initial R channel gain value, and a B channel gain value of the target image is calculated as an initial B channel gain value; a residual R channel gain value and a residual B channel gain value of the target image are calculated by a trained deep residual learning network; a predicted R channel gain value is obtained by summing the initial R channel gain value and the residual R channel gain value, and a predicted B channel gain value is obtained according to the initial B channel gain value and the residual B channel gain value; the target image is adjusted according to the predicted R channel gain value and the predicted B channel gain value, so as to correct the color constancy of the target image. In this way, accurate correction of the color constancy of an image is achieved with a deep learning network of relatively shallow structure.

Description

Color constancy correction method, device and image processing equipment
Technical field
The present application relates to the technical field of image processing, and in particular to a color constancy correction method, a color constancy correction device, and an image processing equipment.
Background technique
In the image signal processing flow, color constancy computation serves to keep the colors presented by objects consistent under different illumination conditions. Under complex, extreme, or special illumination conditions, the color constancy algorithm is particularly important.
Under most illumination conditions, traditional algorithms such as the gray world algorithm, the Retinex theory, the standard-deviation-weighted gray world, and the standard-deviation-and-luminance-weighted gray world can obtain correction results acceptable to users. However, once complex, extreme, or special illumination conditions appear in the environment, these algorithms cannot obtain good color constancy correction results.
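For illustration only (not code from the application), a minimal Python sketch of the kind of gray world gain computation referred to above, assuming an H x W x 3 RGB array:

```python
import numpy as np

def gray_world_gains(image_rgb: np.ndarray) -> tuple[float, float]:
    """Estimate R and B channel gains under the gray world assumption:
    the scene average should be achromatic, so R and B are scaled to
    match the mean of the G channel."""
    r_mean, g_mean, b_mean = image_rgb.reshape(-1, 3).mean(axis=0)
    r_gain = g_mean / max(r_mean, 1e-6)  # guard against a zero channel mean
    b_gain = g_mean / max(b_mean, 1e-6)
    return float(r_gain), float(b_gain)
```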
Currently, a convolutional neural network (Convolutional Neural Network, CNN) model is generally used to extract features from the color and semantics of an image, so as to correct the color constancy of the image. To obtain better correction results, the network structure usually needs to be deepened to improve the feature extraction performance of the CNN model; however, deepening the network brings problems such as vanishing gradients, which make optimization infeasible.
Summary of the invention
In view of this, the purpose of the present application is to provide a color constancy correction method, a color constancy correction device, and an image processing equipment, so as to at least partly improve on the above problems.
In a first aspect, an embodiment of the present application provides a color constancy correction method, the method comprising:
calculating an R channel gain value of a target image by a preset algorithm as an initial R channel gain value, and calculating a B channel gain value of the target image as an initial B channel gain value;
calculating a residual R channel gain value and a residual B channel gain value of the target image by a trained deep residual learning network;
summing the initial R channel gain value and the residual R channel gain value to obtain a predicted R channel gain value, and obtaining a predicted B channel gain value according to the initial B channel gain value and the residual B channel gain value;
adjusting the target image according to the predicted R channel gain value and the predicted B channel gain value, so as to correct the color constancy of the target image.
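The steps above can be summarized, purely as an illustrative sketch, in the following Python pseudocode; `residual_net` and `thumbnail_fn` are placeholder names for the trained deep residual learning network and its thumbnail-information input, not interfaces defined by the application, and the gray world sketch from the background section stands in for the preset algorithm.

```python
def predict_channel_gains(image_rgb, residual_net, thumbnail_fn):
    """Compute predicted R/B channel gains as initial gains plus residual gains."""
    init_r, init_b = gray_world_gains(image_rgb)          # step 1: preset algorithm
    res_r, res_b = residual_net(thumbnail_fn(image_rgb))  # step 2: deep residual learning network
    pred_r = init_r + res_r                               # step 3: sum to get the predicted R gain
    pred_b = init_b + res_b                               #         and the predicted B gain
    return pred_r, pred_b                                 # step 4 applies these to the image
```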
Optionally, the deep residual learning network is trained as follows:
obtaining a training sample data set, wherein the training sample data set comprises original images acquired from a target scene;
determining, for each sample in the training sample data set, a corresponding labeled R channel gain value and a labeled B channel gain value;
for each sample, calculating an initial R channel gain value and an initial B channel gain value of the sample using the preset algorithm, calculating a residual R channel gain value and a residual B channel gain value of the sample by a deep residual learning network to be trained, obtaining a predicted R channel gain value of the sample according to the initial R channel gain value and the residual R channel gain value, and obtaining a predicted B channel gain value of the sample according to the initial B channel gain value and the residual B channel gain value;
performing loss calculation on the predicted R channel gain value and the labeled R channel gain value of each sample using a preset loss function, and performing loss calculation on the predicted B channel gain value and the labeled B channel gain value;
optimizing the parameters of the deep residual learning network according to the result of the loss calculation, to obtain the trained deep residual learning network.
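A high-level Python sketch of this training procedure follows; every callable here (`preset_algorithm`, `residual_net`, `thumbnail_fn`, `loss_fn`, `optimizer_step`) is a placeholder name for illustration, not an API from the application.

```python
def train_deep_residual_network(samples, labeled_gains, preset_algorithm,
                                residual_net, thumbnail_fn, loss_fn, optimizer_step):
    """One pass over the training sample data set following the steps above."""
    for image, (labeled_r, labeled_b) in zip(samples, labeled_gains):
        init_r, init_b = preset_algorithm(image)              # initial R/B channel gains
        res_r, res_b = residual_net(thumbnail_fn(image))      # residual R/B channel gains
        pred_r, pred_b = init_r + res_r, init_b + res_b       # predicted R/B channel gains
        loss = loss_fn(pred_r, labeled_r) + loss_fn(pred_b, labeled_b)  # loss calculation
        optimizer_step(residual_net, loss)                    # optimize the network parameters
```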
Optionally, determining, for each sample in the training sample data set, the corresponding labeled R channel gain value and labeled B channel gain value comprises:
for each sample, adjusting the R channel gain value and the B channel gain value of the sample so that the sample reaches a specified color effect;
determining the current R channel gain value of the sample as the labeled R channel gain value of the sample, and determining the current B channel gain value of the sample as the labeled B channel gain value of the sample.
Optionally, the preset algorithm is the classical gray world algorithm; calculating the residual R channel gain value and the residual B channel gain value of the sample by the deep residual learning network to be trained comprises:
obtaining thumbnail information of the sample, and inputting the thumbnail information into the deep residual learning network to be trained, so that the deep residual learning network to be trained outputs the residual R channel gain value and the residual B channel gain value of the sample;
wherein the thumbnail information comprises a detection table (DetectionTable) or a histogram (Histogram) of the sample.
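As one concrete possibility for the histogram variant of the thumbnail information (the exact format and bin count are not specified in the text, so both are assumptions), a per-channel normalized histogram could be computed like this:

```python
import numpy as np

def histogram_thumbnail(image_rgb: np.ndarray, bins: int = 32) -> np.ndarray:
    """Per-channel intensity histogram, normalized so each channel sums to 1."""
    hists = np.stack([
        np.histogram(image_rgb[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(np.float32)
    return hists / np.maximum(hists.sum(axis=1, keepdims=True), 1.0)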
Optionally, the preset loss function is:
$L = \frac{1}{2N}\sum_{i=1}^{N}\left\| \hat{y}_i - y_i \right\|_2^2$
wherein N indicates the number of samples, $\hat{y}_i$ indicates the predicted value of the i-th sample, and $y_i$ indicates the labeled value of the i-th sample.
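A direct Python rendering of this loss is shown below; the 1/(2N) scaling follows the standard Euclidean-loss convention assumed in the reconstructed formula above.

```python
import numpy as np

def euclidean_loss(predicted: np.ndarray, labeled: np.ndarray) -> float:
    """Euclidean (L2) loss averaged over N samples with the conventional 1/2 factor."""
    n = predicted.shape[0]
    return float(np.sum((predicted - labeled) ** 2) / (2 * n))
```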
Optionally, optimizing the parameters of the deep residual learning network according to the result of the loss calculation comprises:
inputting the result of the loss calculation into an Adam optimizer, and optimizing the parameters of the deep residual learning network according to the output of the Adam optimizer.
In a second aspect, the present disclosure provides a color constancy correction device, the device comprising:
a first computing module, configured to calculate an R channel gain value of a target image by a preset algorithm as an initial R channel gain value, and calculate a B channel gain value of the target image as an initial B channel gain value;
a second computing module, configured to calculate a residual R channel gain value and a residual B channel gain value of the target image by a trained deep residual learning network;
a prediction module, configured to sum the initial R channel gain value and the residual R channel gain value to obtain a predicted R channel gain value, and to obtain a predicted B channel gain value according to the initial B channel gain value and the residual B channel gain value;
a correction module, configured to adjust the target image according to the predicted R channel gain value and the predicted B channel gain value, so as to correct the color constancy of the target image.
Optionally, the device further comprises:
a sample acquisition module, configured to obtain a training sample data set, wherein the training sample data set comprises original images acquired from a target scene;
a label value determining module, configured to determine, for each sample in the training sample data set, a corresponding labeled R channel gain value and a labeled B channel gain value;
a training module, configured to:
for each sample, calculate a gray world R channel gain value and a gray world B channel gain value of the sample, calculate a residual R channel gain value and a residual B channel gain value of the sample by a deep residual learning network to be trained, obtain a predicted R channel gain value of the sample according to the gray world R channel gain value and the residual R channel gain value, and obtain a predicted B channel gain value of the sample according to the gray world B channel gain value and the residual B channel gain value;
perform loss calculation on the predicted R channel gain value and the labeled R channel gain value of each sample using a preset loss function, and perform loss calculation on the predicted B channel gain value and the labeled B channel gain value;
optimize the parameters of the deep residual learning network according to the result of the loss calculation, to obtain the trained deep residual learning network.
Optionally, the label value determining module comprises:
an adjusting submodule, configured to, for each sample, adjust the R channel gain value and the B channel gain value of the sample so that the sample reaches a specified color effect;
a determining submodule, configured to determine the current R channel gain value of the sample as the labeled R channel gain value of the sample, and determine the current B channel gain value of the sample as the labeled B channel gain value of the sample.
In a third aspect, the present disclosure provides an image processing equipment, comprising:
a processor and a machine-readable storage medium, the machine-readable storage medium storing machine-executable instructions which, when executed, cause the processor to implement the method provided by the first aspect of the embodiments of the present application.
Compared with the prior art, the embodiments of the present application have the following beneficial effects:
The embodiments of the present application provide a color constancy correction method, a color constancy correction device, and an image processing equipment. An R channel gain value of a target image is calculated by a preset algorithm as an initial R channel gain value, and a B channel gain value of the target image is calculated as an initial B channel gain value; a residual R channel gain value and a residual B channel gain value of the target image are calculated by a trained deep residual learning network; a predicted R channel gain value is obtained by summing the initial R channel gain value and the residual R channel gain value, and a predicted B channel gain value is obtained according to the initial B channel gain value and the residual B channel gain value; the target image is adjusted according to the predicted R channel gain value and the predicted B channel gain value, so as to correct the color constancy of the target image. In this way, accurate correction of the color constancy of an image is achieved with a deep learning network of relatively shallow structure.
Brief description of the drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be regarded as limiting the scope; for those of ordinary skill in the art, other relevant drawings can also be obtained from these drawings without creative effort.
Fig. 1 is a block diagram of an image processing equipment provided by an embodiment of the present application;
Fig. 2 is a schematic flowchart of a color constancy correction method provided by an embodiment of the present application;
Fig. 3 is a schematic structural diagram of a deep residual learning network provided by an embodiment of the present application;
Fig. 4 is another schematic flowchart of the color constancy correction method provided by an embodiment of the present application;
Fig. 5 is an operational flow of the color constancy correction method in a specific scenario provided by an embodiment of the present application;
Fig. 6 is a functional block diagram of a color constancy correction device provided by an embodiment of the present application.
Reference numerals: 100 - image processing equipment; 110 - processor; 120 - machine-readable storage medium; 200 - color constancy correction device; 210 - first computing module; 220 - second computing module; 230 - prediction module; 240 - correction module; 250 - sample acquisition module; 260 - label value determining module; 261 - adjusting submodule; 262 - determining submodule; 270 - training module.
Detailed description of the embodiments
To make the purposes, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described below clearly and completely in conjunction with the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, rather than all of them. The components of the embodiments of the present application, as generally described and illustrated in the drawings herein, may be arranged and designed in a variety of different configurations.
Therefore, the following detailed description of the embodiments of the present application provided in the drawings is not intended to limit the claimed scope of the present application, but merely represents selected embodiments of the present application. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present application.
It should also be noted that similar reference numerals and letters indicate similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined and explained in subsequent drawings.
Please refer to Fig. 1, which is a block diagram of an image processing equipment 100 provided by an embodiment of the present application. The image processing equipment 100 may be any electronic equipment with an image processing function, such as a mobile terminal or a personal computer (Personal Computer, PC). The image processing equipment 100 comprises a color constancy correction device 200, a processor 110, and a machine-readable storage medium 120.
The processor 110 and the machine-readable storage medium 120 are electrically connected to each other, directly or indirectly, so as to realize data transmission or interaction. For example, these elements may be electrically connected to each other through one or more communication buses or signal lines. The color constancy correction device 200 comprises at least one software function module that may be stored in the machine-readable storage medium 120 in the form of software or firmware, or solidified in the operating system (Operating System, OS) of the image processing equipment 100. The processor 110 is configured to, upon receiving an execution instruction, call and execute the executable modules and computer programs stored on the machine-readable storage medium 120.
The machine-readable storage medium 120 may be, but is not limited to, any electronic, magnetic, optical or other physical storage device, and may contain stored information such as executable instructions and data. For example, the machine-readable storage medium 120 may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (such as a hard disk drive), a solid state disk, any type of storage disk (such as a CD or DVD), a similar storage medium, or a combination thereof.
It should be appreciated that, in this embodiment, the structure shown in Fig. 1 is only illustrative, and the image processing equipment 100 may also comprise more or fewer components than shown in Fig. 1, or a configuration entirely different from that shown in Fig. 1. For example, it may also comprise a display unit. The components shown in Fig. 1 may be implemented in software, hardware, or a combination thereof.
Please refer to Fig. 2, which is a schematic flowchart of a color constancy correction method provided by an embodiment of the present application; the method is applied to the image processing equipment 100 shown in Fig. 1. The steps of the method are described in detail below.
Step S21: calculating an R channel gain value of a target image by a preset algorithm as an initial R channel gain value, and calculating a B channel gain value of the target image as an initial B channel gain value.
The preset algorithm may be any traditional algorithm for correcting color constancy, such as the classical gray world algorithm, the Retinex theory algorithm, the standard-deviation-weighted gray world algorithm, or the standard-deviation-and-luminance-weighted gray world algorithm.
Step S22: calculating a residual R channel gain value and a residual B channel gain value of the target image by a trained deep residual learning network.
Referring to Fig. 3, Fig. 3 shows a schematic structural diagram of the deep residual learning network in an embodiment. The deep residual learning network comprises four sequentially connected convolutional layers (that is, Convolution shown in Fig. 3) and two fully connected layers. Each convolutional layer is followed in turn by an activation function and a pooling layer (that is, MaxPooling shown in Fig. 3).
In this embodiment, each layer in the deep residual network may have ten channels, and the convolution kernel size may be 3.
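A PyTorch sketch of a network matching this description is given below; the input size (a 64x64 RGB thumbnail), the choice of ReLU activations, the padding, and the width of the first fully connected layer are assumptions not stated in the text.

```python
import torch
import torch.nn as nn

class ResidualGainNet(nn.Module):
    """Four conv blocks (10 channels, 3x3 kernels, each followed by an activation
    and max pooling) and two fully connected layers that output the residual R
    and B channel gains."""
    def __init__(self, in_channels: int = 3, thumb_size: int = 64):
        super().__init__()
        def block(c_in: int, c_out: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(kernel_size=2),
            )
        self.features = nn.Sequential(
            block(in_channels, 10), block(10, 10), block(10, 10), block(10, 10),
        )
        feat_dim = 10 * (thumb_size // 16) ** 2  # four 2x poolings shrink 64 -> 4
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(feat_dim, 32),
            nn.ReLU(inplace=True),
            nn.Linear(32, 2),  # [residual R gain, residual B gain]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.features(x))
```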
Optionally, in this embodiment, the deep residual learning network may be trained by the steps shown in Fig. 4.
Step S41: obtaining a training sample data set, wherein the training sample data set comprises original images acquired from a target scene.
The original images refer to RAW images directly collected by the camera sensor, and the target scene may be one or more specific application scenarios, determined according to user demand.
Step S42: determining, for each sample in the training sample data set, a corresponding labeled R channel gain value and a labeled B channel gain value.
In this embodiment, step S42 may be realized by the following procedure:
First, for each sample, the R channel gain value and the B channel gain value of the sample are adjusted so that the sample reaches a specified color effect.
The specified color effect may be determined by the user according to personal preference.
Second, the current R channel gain value of the sample is determined as the labeled R channel gain value of the sample, and the current B channel gain value of the sample is determined as the labeled B channel gain value of the sample.
Step S43: for each sample, calculating an initial R channel gain value and an initial B channel gain value of the sample using the preset algorithm, calculating a residual R channel gain value and a residual B channel gain value of the sample by the deep residual learning network to be trained, obtaining a predicted R channel gain value of the sample according to the initial R channel gain value and the residual R channel gain value, and obtaining a predicted B channel gain value of the sample according to the initial B channel gain value and the residual B channel gain value.
Optionally, in this embodiment, step S43 may include the following sub-step, so as to calculate the residual R channel gain value and the residual B channel gain value of the sample by the deep residual learning network to be trained:
the thumbnail information of the sample is obtained, and the thumbnail information is input into the deep residual learning network to be trained, so that the deep residual learning network to be trained outputs the residual R channel gain value and the residual B channel gain value of the sample.
The thumbnail information may include a detection table (DetectionTable) or a histogram (Histogram) of the sample.
Step S44: performing loss calculation on the predicted R channel gain value and the labeled R channel gain value of each sample using a preset loss function, and performing loss calculation on the predicted B channel gain value and the labeled B channel gain value.
Optionally, the loss function may be the Euclidean loss, that is, the loss may be calculated by the following formula:
$L = \frac{1}{2N}\sum_{i=1}^{N}\left\| \hat{y}_i - y_i \right\|_2^2$
wherein N indicates the number of samples; $\hat{y}_i$ indicates the predicted value of the i-th sample, such as the predicted R channel gain value and the predicted B channel gain value in this embodiment; and $y_i$ indicates the labeled value of the i-th sample, such as the labeled R channel gain value and the labeled B channel gain value in this embodiment.
Step S45: optimizing the parameters of the deep residual learning network according to the result of the loss calculation, to obtain the trained deep residual learning network.
Below, taking the case where the preset algorithm is the classical gray world algorithm as an example, a specific example is provided in conjunction with Fig. 5 to describe steps S43 to S45 in detail.
In implementation, a sample (the RGB image shown in Fig. 5) is input into the deep residual learning network to output the residual R channel gain value (Residual R Gain) and the residual B channel gain value (Residual B Gain) of the sample. At the same time, the gray world R channel gain value (R Gain by GW) and the gray world B channel gain value (B Gain by GW) of the sample are obtained by applying the classical gray world algorithm to the detection table (DetectionTable) or histogram (Histogram) of the sample; the gray world R channel gain value and the gray world B channel gain value can serve as the initial R channel gain value and the initial B channel gain value in the embodiments of the present application.
The gray world R channel gain value and the residual R channel gain value are summed to obtain the predicted R channel gain value (Predicted R Gain) of the sample, and likewise the predicted B channel gain value (Predicted B Gain). Then, the Euclidean loss is used to perform loss calculation on the obtained Predicted R Gain and the labeled R channel gain value (Labeled R Gain) of the sample, giving loss result 1, and loss calculation is performed on the obtained Predicted B Gain and the labeled B channel gain value (Labeled B Gain) of the sample, giving loss result 2.
Loss result 1 and loss result 2 are then input into the Adam optimizer, and the output of the Adam optimizer is fed back into the deep residual learning network to optimize the parameters of the deep residual learning network.
The above training process is repeated until the Euclidean loss converges.
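Purely as an illustrative sketch of one iteration of this loop (assuming the ResidualGainNet sketch above, a batch of thumbnails, and that the two loss results are combined by summation, which the text does not state explicitly):

```python
import torch

net = ResidualGainNet()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)  # learning rate is an assumption
euclidean = torch.nn.MSELoss()                           # stands in for the Euclidean loss

def training_step(thumbnails, gw_gains, labeled_gains):
    """thumbnails: (N, 3, H, W); gw_gains, labeled_gains: (N, 2) tensors of (R, B) gains."""
    residual = net(thumbnails)                 # Residual R Gain, Residual B Gain
    predicted = gw_gains + residual            # Predicted R Gain, Predicted B Gain
    loss_1 = euclidean(predicted[:, 0], labeled_gains[:, 0])  # loss result 1 (R)
    loss_2 = euclidean(predicted[:, 1], labeled_gains[:, 1])  # loss result 2 (B)
    loss = loss_1 + loss_2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                           # Adam updates the network parameters
    return loss.item()
```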
Step S23: summing the initial R channel gain value and the residual R channel gain value to obtain the predicted R channel gain value, and obtaining the predicted B channel gain value according to the initial B channel gain value and the residual B channel gain value.
The realization of step S23 is similar to the processing of samples during training, and details are not repeated here.
Step S24: adjusting the target image according to the predicted R channel gain value and the predicted B channel gain value, so as to correct the color constancy of the target image.
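A minimal sketch of this adjustment step, assuming an 8-bit RGB target image (on a RAW pipeline the same gains would instead be applied to the Bayer R and B samples):

```python
import numpy as np

def apply_predicted_gains(image_rgb: np.ndarray, pred_r: float, pred_b: float) -> np.ndarray:
    """Scale the R and B channels by the predicted gains to correct color constancy."""
    out = image_rgb.astype(np.float32)
    out[..., 0] *= pred_r  # adjust R channel
    out[..., 2] *= pred_b  # adjust B channel
    return np.clip(out, 0, 255).astype(np.uint8)
```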
Through the above design, the deep residual learning network is used to compute the residual value between the object color under the target scene and the calculation result of the traditional algorithm, rather than to learn the residual value between the object color and the calibration value, so that a shallower network structure can obtain as much, or even more, color feature information.
Please refer to Fig. 6, which shows a color constancy correction device 200 provided by an embodiment of the present application, applied to the image processing equipment 100 shown in Fig. 1. The color constancy correction device 200 comprises a first computing module 210, a second computing module 220, a prediction module 230, and a correction module 240.
The first computing module 210 is configured to calculate an R channel gain value of a target image by a preset algorithm as an initial R channel gain value, and calculate a B channel gain value of the target image as an initial B channel gain value.
In this embodiment, the first computing module 210 may execute step S21 shown in Fig. 2; for the description of the first computing module 210, reference may be made to the detailed description of step S21.
The second computing module 220 is configured to calculate a residual R channel gain value and a residual B channel gain value of the target image by a trained deep residual learning network.
In this embodiment, the second computing module 220 may execute step S22 shown in Fig. 2; for the description of the second computing module 220, reference may be made to the detailed description of step S22.
The prediction module 230 is configured to sum the initial R channel gain value and the residual R channel gain value to obtain a predicted R channel gain value, and to obtain a predicted B channel gain value according to the initial B channel gain value and the residual B channel gain value.
In this embodiment, the prediction module 230 may execute step S23 shown in Fig. 2; for the description of the prediction module 230, reference may be made to the detailed description of step S23.
The correction module 240 is configured to adjust the target image according to the predicted R channel gain value and the predicted B channel gain value, so as to correct the color constancy of the target image.
In this embodiment, the correction module 240 may execute step S24 shown in Fig. 2; for the description of the correction module 240, reference may be made to the detailed description of step S24.
Optionally, the color constancy correction device 200 may further comprise a sample acquisition module 250, a label value determining module 260, and a training module 270.
The sample acquisition module 250 is configured to obtain a training sample data set, wherein the training sample data set comprises original images acquired from a target scene.
In this embodiment, the sample acquisition module 250 may execute step S41 shown in Fig. 4; for the description of the sample acquisition module 250, reference may be made to the detailed description of step S41.
The label value determining module 260 is configured to determine, for each sample in the training sample data set, a corresponding labeled R channel gain value and a labeled B channel gain value.
In this embodiment, the label value determining module 260 may execute step S42 shown in Fig. 4; for the description of the label value determining module 260, reference may be made to the detailed description of step S42.
The training module 270 is configured to:
for each sample, calculate a gray world R channel gain value and a gray world B channel gain value of the sample, calculate a residual R channel gain value and a residual B channel gain value of the sample by a deep residual learning network to be trained, obtain a predicted R channel gain value of the sample according to the gray world R channel gain value and the residual R channel gain value, and obtain a predicted B channel gain value of the sample according to the gray world B channel gain value and the residual B channel gain value; perform loss calculation on the predicted R channel gain value and the labeled R channel gain value of each sample using a preset loss function, and perform loss calculation on the predicted B channel gain value and the labeled B channel gain value; optimize the parameters of the deep residual learning network according to the result of the loss calculation, to obtain the trained deep residual learning network.
In this embodiment, the training module 270 may execute steps S43 to S45 shown in Fig. 4; for the description of the training module 270, reference may be made to the detailed description of steps S43 to S45.
Optionally, in this embodiment, the label value determining module 260 may comprise an adjusting submodule 261 and a determining submodule 262.
The adjusting submodule 261 is configured to, for each sample, adjust the R channel gain value and the B channel gain value of the sample so that the sample reaches a specified color effect.
The determining submodule 262 is configured to determine the current R channel gain value of the sample as the labeled R channel gain value of the sample, and determine the current B channel gain value of the sample as the labeled B channel gain value of the sample.
In conclusion the embodiment of the present application provides a kind of color constancy bearing calibration, device and image processing equipment.It is logical The R channel gain value of preset algorithm calculating target image is crossed as Initial R channel gain value, the channel B for calculating target image increases Benefit value is used as initial channel B yield value;The remaining R of target image is calculated by the depth remnants learning network that training is completed Channel gain value and remaining channel B yield value;It asks to obtain according to Initial R channel gain value and remnants R channel gain value and predicts that R is logical Road yield value, and prediction channel B yield value is obtained according to initial channel B yield value and remaining channel B yield value;According to prediction R channel gain value and prediction channel B yield value are adjusted target image, carry out school with the color constancy to target image Just.In this way, using the shallower deep learning network implementations of network structure to the precise calibration of the color constancy of image.
In the embodiments provided in the present application, it should be understood that the disclosed device and method may also be realized in other ways. The device embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the drawings show the possible architectures, functions, and operations of devices, methods, and computer program products according to multiple embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two consecutive blocks may actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of the present application may be integrated together to form an independent part, each module may exist separately, or two or more modules may be integrated to form an independent part.
If the functions are realized in the form of software function modules and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disk.
It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The above are only specific embodiments of the present application, but the scope of protection of the present application is not limited thereto. Any change or replacement that can easily be conceived by those familiar with the technical field within the technical scope disclosed in the present application shall be covered by the scope of protection of the present application. Therefore, the scope of protection of the present application shall be subject to the scope of protection of the claims.

Claims (10)

1. A color constancy correction method, characterized in that the method comprises:
calculating an R channel gain value of a target image by a preset algorithm as an initial R channel gain value, and calculating a B channel gain value of the target image as an initial B channel gain value;
calculating a residual R channel gain value and a residual B channel gain value of the target image by a trained deep residual learning network;
summing the initial R channel gain value and the residual R channel gain value to obtain a predicted R channel gain value, and obtaining a predicted B channel gain value according to the initial B channel gain value and the residual B channel gain value;
adjusting the target image according to the predicted R channel gain value and the predicted B channel gain value, so as to correct the color constancy of the target image.
2. The method according to claim 1, characterized in that the deep residual learning network is trained as follows:
obtaining a training sample data set, wherein the training sample data set comprises original images acquired from a target scene;
determining, for each sample in the training sample data set, a corresponding labeled R channel gain value and a labeled B channel gain value;
for each sample, calculating an initial R channel gain value and an initial B channel gain value of the sample using the preset algorithm, calculating a residual R channel gain value and a residual B channel gain value of the sample by a deep residual learning network to be trained, obtaining a predicted R channel gain value of the sample according to the initial R channel gain value and the residual R channel gain value, and obtaining a predicted B channel gain value of the sample according to the initial B channel gain value and the residual B channel gain value;
performing loss calculation on the predicted R channel gain value and the labeled R channel gain value of each sample using a preset loss function, and performing loss calculation on the predicted B channel gain value and the labeled B channel gain value;
optimizing the parameters of the deep residual learning network according to the result of the loss calculation, to obtain the trained deep residual learning network.
3. The method according to claim 2, characterized in that determining, for each sample in the training sample data set, the corresponding labeled R channel gain value and labeled B channel gain value comprises:
for each sample, adjusting the R channel gain value and the B channel gain value of the sample so that the sample reaches a specified color effect;
determining the current R channel gain value of the sample as the labeled R channel gain value of the sample, and determining the current B channel gain value of the sample as the labeled B channel gain value of the sample.
4. The method according to claim 2 or 3, characterized in that the preset algorithm is the classical gray world algorithm, and calculating the residual R channel gain value and the residual B channel gain value of the sample by the deep residual learning network to be trained comprises:
obtaining thumbnail information of the sample, and inputting the thumbnail information into the deep residual learning network to be trained, so that the deep residual learning network to be trained outputs the residual R channel gain value and the residual B channel gain value of the sample;
wherein the thumbnail information comprises a detection table (DetectionTable) or a histogram (Histogram) of the sample.
5. The method according to claim 2 or 3, characterized in that the preset loss function is:
$L = \frac{1}{2N}\sum_{i=1}^{N}\left\| \hat{y}_i - y_i \right\|_2^2$
wherein N indicates the number of samples, $\hat{y}_i$ indicates the predicted value of the i-th sample, and $y_i$ indicates the labeled value of the i-th sample.
6. The method according to claim 2 or 3, characterized in that optimizing the parameters of the deep residual learning network according to the result of the loss calculation comprises:
inputting the result of the loss calculation into an Adam optimizer, and optimizing the parameters of the deep residual learning network according to the output of the Adam optimizer.
7. A color constancy correction device, characterized in that the device comprises:
a first computing module, configured to calculate an R channel gain value of a target image by a preset algorithm as an initial R channel gain value, and calculate a B channel gain value of the target image as an initial B channel gain value;
a second computing module, configured to calculate a residual R channel gain value and a residual B channel gain value of the target image by a trained deep residual learning network;
a prediction module, configured to sum the initial R channel gain value and the residual R channel gain value to obtain a predicted R channel gain value, and to obtain a predicted B channel gain value according to the initial B channel gain value and the residual B channel gain value;
a correction module, configured to adjust the target image according to the predicted R channel gain value and the predicted B channel gain value, so as to correct the color constancy of the target image.
8. The device according to claim 7, characterized in that the device further comprises:
a sample acquisition module, configured to obtain a training sample data set, wherein the training sample data set comprises original images acquired from a target scene;
a label value determining module, configured to determine, for each sample in the training sample data set, a corresponding labeled R channel gain value and a labeled B channel gain value;
a training module, configured to:
for each sample, calculate a gray world R channel gain value and a gray world B channel gain value of the sample, calculate a residual R channel gain value and a residual B channel gain value of the sample by a deep residual learning network to be trained, obtain a predicted R channel gain value of the sample according to the gray world R channel gain value and the residual R channel gain value, and obtain a predicted B channel gain value of the sample according to the gray world B channel gain value and the residual B channel gain value;
perform loss calculation on the predicted R channel gain value and the labeled R channel gain value of each sample using a preset loss function, and perform loss calculation on the predicted B channel gain value and the labeled B channel gain value;
optimize the parameters of the deep residual learning network according to the result of the loss calculation, to obtain the trained deep residual learning network.
9. The device according to claim 8, characterized in that the label value determining module comprises:
an adjusting submodule, configured to, for each sample, adjust the R channel gain value and the B channel gain value of the sample so that the sample reaches a specified color effect;
a determining submodule, configured to determine the current R channel gain value of the sample as the labeled R channel gain value of the sample, and determine the current B channel gain value of the sample as the labeled B channel gain value of the sample.
10. An image processing equipment, characterized by comprising:
a processor and a machine-readable storage medium, the machine-readable storage medium storing machine-executable instructions which, when executed, cause the processor to implement the method according to any one of claims 1 to 5.
CN201811528397.6A 2018-12-13 2018-12-13 Color constancy correction method and device and image processing equipment Active CN109618145B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811528397.6A CN109618145B (en) 2018-12-13 2018-12-13 Color constancy correction method and device and image processing equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811528397.6A CN109618145B (en) 2018-12-13 2018-12-13 Color constancy correction method and device and image processing equipment

Publications (2)

Publication Number Publication Date
CN109618145A true CN109618145A (en) 2019-04-12
CN109618145B CN109618145B (en) 2020-11-10

Family

ID=66007533

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811528397.6A Active CN109618145B (en) 2018-12-13 2018-12-13 Color constancy correction method and device and image processing equipment

Country Status (1)

Country Link
CN (1) CN109618145B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100104498A (en) * 2009-03-18 2010-09-29 고려대학교 산학협력단 Auto exposure and auto white-balance method for detecting high dynamic range conditions
CN102883168A (en) * 2012-07-05 2013-01-16 上海大学 White balance processing method directed towards atypical-feature image
CN103313068A (en) * 2013-05-29 2013-09-18 山西绿色光电产业科学技术研究院(有限公司) White balance corrected image processing method and device based on gray edge constraint gray world
CN106412547A (en) * 2016-08-29 2017-02-15 厦门美图之家科技有限公司 Image white balance method and device based on convolutional neural network, and computing device
CN107578390A (en) * 2017-09-14 2018-01-12 长沙全度影像科技有限公司 Method and device for image white balance correction using a neural network

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112183551A (en) * 2019-07-02 2021-01-05 佳能株式会社 Illumination color prediction method, image processing apparatus, and storage medium
CN110534071A (en) * 2019-07-19 2019-12-03 南京巨鲨显示科技有限公司 A kind of display color calibration system neural network based and method
CN110534071B (en) * 2019-07-19 2020-09-18 南京巨鲨显示科技有限公司 Display color calibration system and method based on neural network
CN112488962A (en) * 2020-12-17 2021-03-12 成都极米科技股份有限公司 Method, device, equipment and medium for adjusting picture color based on deep learning
CN115514947A (en) * 2021-06-07 2022-12-23 荣耀终端有限公司 AI automatic white balance algorithm and electronic equipment
CN115514947B (en) * 2021-06-07 2023-07-21 荣耀终端有限公司 Algorithm for automatic white balance of AI (automatic input/output) and electronic equipment

Also Published As

Publication number Publication date
CN109618145B (en) 2020-11-10

Similar Documents

Publication Publication Date Title
CN109618145A (en) Color constancy bearing calibration, device and image processing equipment
CN109919869B (en) Image enhancement method and device and storage medium
US11573991B2 (en) Deep reinforcement learning-based multi-step question answering systems
CN108876745B (en) Image processing method and device
CN108140133B (en) Program generation device, program generation method, and recording medium
KR20090028267A (en) Method and apparatus for auto focusing
US11508038B2 (en) Image processing method, storage medium, image processing apparatus, learned model manufacturing method, and image processing system
CN111105375B (en) Image generation method, model training method and device thereof, and electronic equipment
EP3098806A1 (en) Backlight brightness regulation method and electronic device
WO2020132371A1 (en) Profile-based standard dynamic range and high dynamic range content generation
US20180276503A1 (en) Information processing apparatus, information processing method, and storage medium
JP2015088040A (en) Authentication device, authentication method, and program
US9053552B2 (en) Image processing apparatus, image processing method and non-transitory computer readable medium
CN112241940B (en) Fusion method and device for multiple multi-focus images
KR102208688B1 (en) Apparatus and method for developing object analysis model based on data augmentation
JP7403995B2 (en) Information processing device, control method and program
US10235032B2 (en) Method for optimizing a captured photo or a recorded multi-media and system and electric device therefor
US9836827B2 (en) Method, apparatus and computer program product for reducing chromatic aberrations in deconvolved images
WO2021161453A1 (en) Image processing system, image processing method, and nontemporary computer-readable medium
CN111382772B (en) Image processing method and device and terminal equipment
JP2022038285A (en) Machine learning device and far-infrared image pickup device
CN111739008A (en) Image processing method, device, equipment and readable storage medium
CN110913195A (en) White balance automatic adjusting method, device and computer readable storage medium
US20200349744A1 (en) System and a method for providing color vision deficiency assistance
JP6798607B2 (en) Information processing equipment, information processing methods and information processing programs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant