CN109618145B - Color constancy correction method and device and image processing equipment - Google Patents

Color constancy correction method and device and image processing equipment

Info

Publication number
CN109618145B
Authority
CN
China
Prior art keywords
gain value
channel gain
sample
residual
predicted
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811528397.6A
Other languages
Chinese (zh)
Other versions
CN109618145A (en)
Inventor
刘键涛
周凡
张长定
李骈臻
张伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Meitu Innovation Technology Co ltd
Original Assignee
Shenzhen Meitu Innovation Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Meitu Innovation Technology Co ltd filed Critical Shenzhen Meitu Innovation Technology Co ltd
Priority to CN201811528397.6A
Publication of CN109618145A
Application granted
Publication of CN109618145B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00: Details of colour television systems
    • H04N9/64: Circuits for processing colour signals
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N23/84: Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/88: Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control

Abstract

The embodiment of the application provides a color constancy correction method and device and image processing equipment. An R channel gain value of a target image is calculated as an initial R channel gain value, and a B channel gain value of the target image is calculated as an initial B channel gain value, through a preset algorithm. A residual R channel gain value and a residual B channel gain value of the target image are then obtained through a trained deep residual learning network. A predicted R channel gain value is obtained from the initial R channel gain value and the residual R channel gain value, and a predicted B channel gain value is obtained from the initial B channel gain value and the residual B channel gain value. Finally, the target image is adjusted according to the predicted R channel gain value and the predicted B channel gain value, so as to correct the color constancy of the target image. In this way, accurate correction of image color constancy is achieved with a deep learning network that has a shallow network structure.

Description

Color constancy correction method and device and image processing equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a color constancy correction method and apparatus, and an image processing device.
Background
In the image signal processing flow, color constancy processing is performed to keep the color of an object consistent under different illumination conditions. Under complex, extreme, or special illumination conditions, the color constancy algorithm is particularly important.
Under most lighting conditions, traditional algorithms such as the gray world algorithm, Retinex theory, the standard deviation weighted gray world algorithm, and the standard deviation and brightness weighted gray world algorithm can yield user-acceptable correction results. However, once complex illumination occurs in the environment, or under extreme or special illumination conditions, these algorithms cannot obtain a good color constancy correction result.
At present, a Convolutional Neural Network (CNN) model is usually adopted to extract color and semantic features from an image so as to realize color constancy correction of the image. To obtain a better correction result, the network structure usually needs to be deepened to improve the feature extraction performance of the CNN model; however, deepening the network structure may cause problems such as vanishing gradients and the inability to optimize the solution.
Disclosure of Invention
In view of the above, an object of the present application is to provide a color constancy correction method and apparatus, and an image processing device, so as to at least partially alleviate the above problems.
In a first aspect, an embodiment of the present application provides a color constancy correction method, where the method includes:
calculating an R channel gain value of a target image as an initial R channel gain value through a preset algorithm, and calculating a B channel gain value of the target image as an initial B channel gain value;
calculating to obtain a residual R channel gain value and a residual B channel gain value of the target image through a trained deep residual learning network;
obtaining a predicted R channel gain value according to the initial R channel gain value and the residual R channel gain value, and obtaining a predicted B channel gain value according to the initial B channel gain value and the residual B channel gain value;
and adjusting the target image according to the predicted R channel gain value and the predicted B channel gain value so as to correct the color constancy of the target image.
Optionally, the deep residual learning network is trained by:
acquiring a training sample data set, wherein the training sample data set comprises original images acquired from a target scene;
determining a corresponding marker R channel gain value and a marker B channel gain value for each sample in the training sample data set;
aiming at each sample, calculating an initial R channel gain value and an initial B channel gain value of the sample by adopting the preset algorithm, calculating a residual R channel gain value and a residual B channel gain value of the sample through a deep residual learning network to be trained, obtaining a predicted R channel gain value of the sample according to the initial R channel gain value and the residual R channel gain value, and obtaining a predicted B channel gain value of the sample according to the initial B channel gain value and the residual B channel gain value;
performing loss calculation on the predicted R channel gain value and the marked R channel gain value of each sample by adopting a preset loss function, and performing loss calculation on the predicted B channel gain value and the marked B channel gain value;
and optimizing the parameters of the deep residual learning network according to the loss calculation result to obtain the trained deep residual learning network.
Optionally, determining a corresponding marker R channel gain value and a marker B channel gain value for each sample in the training sample data set includes:
for each sample, adjusting the R channel gain value and the B channel gain value of the sample to enable the sample to achieve the specified color effect;
determining the current R channel gain value of the sample as the marker R channel gain value of the sample, and determining the current B channel gain value of the sample as the marker B channel gain value of the sample.
Optionally, the preset algorithm is the classic gray world algorithm; calculating a residual R channel gain value and a residual B channel gain value of the sample through the deep residual learning network to be trained includes:
acquiring thumbnail information of the sample, inputting the thumbnail information into the deep residual learning network to be trained, and enabling the deep residual learning network to be trained to output a residual R channel gain value and a residual B channel gain value of the sample;
wherein the thumbnail information includes a detection table DetectionTable or a Histogram of the sample.
Optionally, the preset loss function is:

$$L = \frac{1}{2N}\sum_{i=1}^{N}\left\|\hat{y}_i - y_i\right\|_2^2$$

wherein $N$ represents the number of samples, $\hat{y}_i$ represents the predicted value of the ith sample, and $y_i$ represents the marker value of the ith sample.
Optionally, optimizing parameters of the deep residual learning network according to a result of the loss calculation includes:
and inputting the result of the loss calculation into an Adam optimizer, and optimizing the parameters of the deep residual learning network according to the output of the Adam optimizer.
In a second aspect, the present disclosure provides a color constancy correction apparatus, said apparatus comprising:
the first calculation module is used for calculating an R channel gain value of a target image as an initial R channel gain value through a preset algorithm and calculating a B channel gain value of the target image as an initial B channel gain value;
the second calculation module is used for calculating a residual R channel gain value and a residual B channel gain value of the target image through the trained deep residual learning network;
the prediction module is used for obtaining a prediction R channel gain value according to the initial R channel gain value and the residual R channel gain value and obtaining a prediction B channel gain value according to the initial B channel gain value and the residual B channel gain value;
and the correcting module is used for adjusting the target image according to the predicted R channel gain value and the predicted B channel gain value so as to correct the color constancy of the target image.
Optionally, the apparatus further comprises:
the system comprises a sample acquisition module, a data acquisition module and a data processing module, wherein the sample acquisition module is used for acquiring a training sample data set, and the training sample data set comprises original images acquired from a target scene;
a marker value determining module, configured to determine a corresponding marker R channel gain value and a marker B channel gain value for each sample in the training sample data set;
a training module to:
for each sample, calculating a gray world R channel gain value and a gray world B channel gain value of the sample, calculating a residual R channel gain value and a residual B channel gain value of the sample through a deep residual learning network to be trained, obtaining a predicted R channel gain value of the sample according to the gray world R channel gain value and the residual R channel gain value, and obtaining a predicted B channel gain value of the sample according to the gray world B channel gain value and the residual B channel gain value;
performing loss calculation on the predicted R channel gain value and the marked R channel gain value of each sample by adopting a preset loss function, and performing loss calculation on the predicted B channel gain value and the marked B channel gain value;
and optimizing the parameters of the deep residual learning network according to the loss calculation result to obtain the trained deep residual learning network.
Optionally, the marker value determination module includes:
the adjusting submodule is used for adjusting the R channel gain value and the B channel gain value of each sample so as to enable the sample to achieve the specified color effect;
a determining submodule for determining the current R channel gain value of the sample as the marker R channel gain value of the sample, and for determining the current B channel gain value of the sample as the marker B channel gain value of the sample.
In a third aspect, the present disclosure provides an image processing apparatus comprising:
a processor and a machine-readable storage medium having stored thereon machine-executable instructions that, when executed, cause the processor to implement a method as provided by a first aspect of embodiments of the present application.
Compared with the prior art, the embodiment of the application has the following beneficial effects:
the embodiment of the application provides a color constancy correction method and device and image processing equipment. Calculating an R channel gain value of a target image as an initial R channel gain value and calculating a B channel gain value of the target image as an initial B channel gain value through a preset algorithm; calculating to obtain a residual R channel gain value and a residual B channel gain value of the target image through the trained deep residual learning network; obtaining a predicted R channel gain value according to the initial R channel gain value and the residual R channel gain value, and obtaining a predicted B channel gain value according to the initial B channel gain value and the residual B channel gain value; and adjusting the target image according to the predicted R channel gain value and the predicted B channel gain value so as to correct the color constancy of the target image. Therefore, accurate correction of the color constancy of the image is realized by adopting the deep learning network with a shallow network structure.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic block diagram of an image processing apparatus according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a color constancy correction method according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a deep residual learning network according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of a training method for a deep residual learning network according to an embodiment of the present disclosure;
fig. 5 is an operation process of a color constancy correction method in a specific scene according to an embodiment of the present application;
fig. 6 is a functional block diagram of a color constancy correction apparatus according to an embodiment of the present disclosure.
Reference numerals: 100-an image processing device; 110-a processor; 120-a machine-readable storage medium; 200-color constancy correction means; 210-a first calculation module; 220-a second calculation module; 230-a prediction module; 240-a correction module; 250-a sample acquisition module; 260-a marker value determination module; 261-a tuning submodule; 262-a determination submodule; 270-training module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
Referring to fig. 1, fig. 1 is a block diagram of an image processing apparatus 100 according to an embodiment of the present disclosure. The image processing apparatus 100 may be any electronic apparatus having an image processing function, such as a mobile terminal, a Personal Computer (PC), or the like. The image processing apparatus 100 includes a color constancy correction device 200, a processor 110, and a machine-readable storage medium 120.
The processor 110 and the machine-readable storage medium 120 are electrically connected to each other, directly or indirectly, to enable data transmission or interaction. For example, these components may be electrically connected to each other via one or more communication buses or signal lines. The color constancy correction apparatus 200 includes at least one software functional module that can be stored on the machine-readable storage medium 120 in the form of software or firmware, or solidified in the Operating System (OS) of the image processing device 100. Upon receiving an execution instruction, the processor 110 calls and executes the executable modules and computer programs stored on the machine-readable storage medium 120.
The machine-readable storage medium 120 may be, but is not limited to, any electronic, magnetic, optical, or other physical storage device that can contain stored information, such as executable instructions, data, and the like. For example, the machine-readable storage medium 120 may be: RAM (random access memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., a compact disk, a DVD, etc.), or similar storage medium, or a combination thereof.
It should be understood that the configuration shown in fig. 1 is merely illustrative, and the image processing apparatus 100 may include more or fewer components than those shown in fig. 1, or a completely different configuration. For example, a display unit may also be included. The components shown in fig. 1 may be implemented in software, hardware, or a combination thereof.
Fig. 2 is a flowchart illustrating a color constancy correction method according to an embodiment of the present application, where the method is applied to the image processing apparatus 100 shown in fig. 1. The individual steps involved in the method are described in detail below.
Step S21, calculating an R channel gain value of the target image as an initial R channel gain value by a preset algorithm, and calculating a B channel gain value of the target image as an initial B channel gain value.
The preset algorithm may be any traditional algorithm for correcting color constancy, such as the classic gray world algorithm, the Retinex theory algorithm, the standard deviation weighted gray world algorithm, or the standard deviation and brightness weighted gray world algorithm.
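As an illustrative, non-limiting sketch, the classic gray world computation of the initial R and B channel gain values may look as follows; the function name and the floating-point RGB input convention are assumptions for illustration and are not specified in this application.

```python
import numpy as np

def gray_world_gains(image):
    """Classic gray world R and B channel gains (illustrative sketch).

    `image` is an HxWx3 RGB array. The gray world assumption is that the
    average scene color is achromatic, so R and B are scaled to match the
    mean of the G channel, which serves as the reference.
    """
    mean_r, mean_g, mean_b = image.reshape(-1, 3).mean(axis=0)
    r_gain = mean_g / mean_r   # initial R channel gain value
    b_gain = mean_g / mean_b   # initial B channel gain value
    return float(r_gain), float(b_gain)
```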
Step S22, calculating a residual R channel gain value and a residual B channel gain value of the target image through the trained deep residual learning network.
Referring to fig. 3, fig. 3 is a schematic structural diagram of the deep residual learning network according to an embodiment. The deep residual learning network includes four convolutional layers (i.e., Convolution in fig. 3) and two fully-connected layers connected in sequence. Each convolutional layer is followed, in sequence, by an activation function and a pooling layer (i.e., MaxPooling in fig. 3).
In this embodiment, each convolutional layer in the deep residual learning network may have ten channels, and the convolution kernel size may be 3.
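For illustration only, a network matching this description could be sketched in PyTorch as follows; the ReLU activation, 2x2 max pooling, same-padding, 64x64 input resolution, and the hidden width of the first fully-connected layer are assumptions not fixed by this application.

```python
import torch.nn as nn

class DeepResidualGainNet(nn.Module):
    """Sketch of the fig. 3 structure: four convolutional layers, each
    followed by an activation and MaxPooling, then two fully-connected
    layers outputting the residual R and B channel gain values."""

    def __init__(self, in_ch=3, width=10):   # ten channels per conv layer
        super().__init__()
        blocks = []
        for _ in range(4):                   # four convolutional layers
            blocks += [nn.Conv2d(in_ch, width, kernel_size=3, padding=1),
                       nn.ReLU(),            # activation after each conv
                       nn.MaxPool2d(2)]      # MaxPooling after each conv
            in_ch = width
        self.features = nn.Sequential(*blocks)
        # A 64x64 input halves four times to a 4x4 feature map.
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(width * 4 * 4, 16),    # hidden width 16 is an assumption
            nn.ReLU(),
            nn.Linear(16, 2),                # residual R gain, residual B gain
        )

    def forward(self, x):
        return self.fc(self.features(x))
```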
Optionally, in this embodiment, the deep residual learning network may be trained through the steps shown in fig. 4.
Step S41, a training sample data set is obtained, wherein the training sample data set includes original images acquired from a target scene.
The original image is a RAW image directly acquired by a camera sensor, and the target scene may be any of various specific application scenes, determined according to user requirements.
Step S42, determining a corresponding marker R channel gain value and a marker B channel gain value for each sample in the training sample data set.
In the present embodiment, step S42 may be implemented by the following procedure:
First, for each sample, the R channel gain value and the B channel gain value of the sample are adjusted so that the sample achieves the specified color effect.
Wherein the designated color effect can be determined by the user according to personal preference.
Second, the current R channel gain value of the sample is determined as the marker R channel gain value of the sample, and the current B channel gain value of the sample is determined as the marker B channel gain value of the sample.
Step S43, aiming at each sample, calculating an initial R channel gain value and an initial B channel gain value of the sample by adopting the preset algorithm, calculating a residual R channel gain value and a residual B channel gain value of the sample through a deep residual learning network to be trained, obtaining a predicted R channel gain value of the sample according to the initial R channel gain value and the residual R channel gain value, and obtaining a predicted B channel gain value of the sample according to the initial B channel gain value and the residual B channel gain value.
Optionally, in this embodiment, step S43 may include the following sub-steps to calculate the residual R channel gain value and the residual B channel gain value of the sample through the deep residual learning network to be trained.
Thumbnail information of the sample is acquired and input into the deep residual learning network to be trained, so that the deep residual learning network to be trained outputs a residual R channel gain value and a residual B channel gain value of the sample.
The thumbnail information may include a detection table DetectionTable or a Histogram of the sample.
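The exact format of the Histogram input is not fixed here; purely as an assumed illustration, a 2D chromaticity histogram over (R/G, B/G) ratios is one form that a convolutional network can consume. The bin count and ratio range below are illustrative assumptions.

```python
import numpy as np
import torch

def chroma_histogram(image, bins=64, max_ratio=4.0):
    """2D chromaticity histogram over (R/G, B/G) as a 1 x bins x bins tensor.

    An assumed, illustrative form of the Histogram thumbnail input; the
    bin count and ratio range are not specified by this application.
    """
    eps = 1e-6
    rg = image[..., 0] / (image[..., 1] + eps)   # R/G chromaticity
    bg = image[..., 2] / (image[..., 1] + eps)   # B/G chromaticity
    hist, _, _ = np.histogram2d(rg.ravel(), bg.ravel(), bins=bins,
                                range=[[0.0, max_ratio], [0.0, max_ratio]])
    hist = hist.astype(np.float32)
    hist /= hist.sum() + eps                     # normalize to a distribution
    return torch.from_numpy(hist)[None]          # add the channel dimension
```

With such an input, the network sketched above would be instantiated with a single input channel, e.g. DeepResidualGainNet(in_ch=1).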
Step S44, performing loss calculation on the predicted R channel gain value and the marked R channel gain value of each sample by adopting a preset loss function, and performing loss calculation on the predicted B channel gain value and the marked B channel gain value.
Alternatively, the loss function may be the Euclidean loss, i.e. the loss calculation may be performed as:

$$L = \frac{1}{2N}\sum_{i=1}^{N}\left\|\hat{y}_i - y_i\right\|_2^2$$

where $N$ represents the number of samples; $\hat{y}_i$ represents the predicted values of the ith sample, such as the predicted R channel gain value and the predicted B channel gain value in the present embodiment; and $y_i$ represents the marker values of the ith sample, such as the marker R channel gain value and the marker B channel gain value in the present embodiment.
Step S45, optimizing the parameters of the deep residual learning network according to the loss calculation result to obtain the trained deep residual learning network.
Taking the classic gray world algorithm as the preset algorithm, a specific example is given below in conjunction with fig. 5 to describe steps S43-S45 in detail.
In implementation, a sample (such as the RGB image shown in fig. 5) is input into the deep residual learning network, which outputs a residual R channel gain value (Residual R Gain) and a residual B channel gain value (Residual B Gain) for the sample. Meanwhile, a detection table or a histogram of the sample is processed by the classic gray world algorithm to obtain a gray world R channel gain value (R Gain by GW) and a gray world B channel gain value (B Gain by GW) of the sample, which serve as the initial R channel gain value and the initial B channel gain value in the embodiment of the present application.
The gray world R channel gain value and the residual R channel gain value are summed to obtain the predicted R channel gain value (Predicted R Gain) of the sample; likewise, the gray world B channel gain value and the residual B channel gain value are summed to obtain the predicted B channel gain value (Predicted B Gain). Then, the Euclidean loss is used to perform a loss calculation between Predicted R Gain and the marker R channel gain value (Labeled R Gain) of the sample, yielding loss result 1, and between Predicted B Gain and the marker B channel gain value (Labeled B Gain) of the sample, yielding loss result 2.
Loss result 1 and loss result 2 are input into an Adam optimizer, the output of the Adam optimizer is fed back to the deep residual learning network, and the parameters of the deep residual learning network are optimized accordingly.
The training process is repeated until the Euclidean loss converges.
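Purely for illustration, one training iteration of the flow in fig. 5 might be sketched as follows; the batch shapes and learning rate are assumptions, and DeepResidualGainNet refers to the sketch given earlier.

```python
import torch
import torch.nn.functional as F

model = DeepResidualGainNet(in_ch=1)   # sketched above, fed histogram input
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(thumbnails, images, labeled_gains):
    """One iteration of the fig. 5 flow (illustrative shapes assumed).

    thumbnails:    (N, 1, 64, 64) network inputs
    images:        (N, 3, H, W) sample images for the gray world step
    labeled_gains: (N, 2) marker R and B channel gain values
    """
    # Gray world initial gains: scale the R and B means toward the G mean.
    means = images.mean(dim=(2, 3))                        # (N, 3) channel means
    init_gains = torch.stack([means[:, 1] / means[:, 0],   # R Gain by GW
                              means[:, 1] / means[:, 2]],  # B Gain by GW
                             dim=1)
    residual = model(thumbnails)                 # (N, 2) residual R/B gains
    predicted = init_gains + residual            # predicted = initial + residual
    loss = F.mse_loss(predicted, labeled_gains)  # Euclidean-style loss on gains
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                             # Adam optimizes the network
    return loss.item()
```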
Step S23, obtaining a predicted R channel gain value according to the initial R channel gain value and the residual R channel gain value, and obtaining a predicted B channel gain value according to the initial B channel gain value and the residual B channel gain value.
The implementation of step S23 is similar to the processing of a sample in the training process described above, and is not repeated here.
Step S24, adjusting the target image according to the predicted R channel gain value and the predicted B channel gain value, so as to correct the color constancy of the target image.
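Putting steps S21-S24 together, an illustrative inference sketch follows; the helper names come from the sketches above, and the float RGB image in [0, 1] is an assumed convention.

```python
import numpy as np
import torch

def correct_color_constancy(image, thumbnail, model):
    """Steps S21-S24 end to end (illustrative sketch).

    image:     HxWx3 float RGB array in [0, 1] (assumed convention)
    thumbnail: network input tensor, e.g. from chroma_histogram(image)
    model:     a trained DeepResidualGainNet
    """
    init_r, init_b = gray_world_gains(image)        # step S21: initial gains
    with torch.no_grad():
        res_r, res_b = model(thumbnail.unsqueeze(0))[0].tolist()  # step S22
    pred_r = init_r + res_r                         # step S23: predicted gains
    pred_b = init_b + res_b
    corrected = image.copy()                        # step S24: apply the gains
    corrected[..., 0] *= pred_r                     # R channel
    corrected[..., 2] *= pred_b                     # B channel
    return np.clip(corrected, 0.0, 1.0)
```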
Through this design, the deep residual learning network learns the residual between the target color in the target scene and the calculation result of the traditional algorithm, rather than learning the calibration value directly, so that the same or more color feature information can be obtained with a lighter network structure.
Referring to fig. 6, a color constancy correction apparatus 200 according to an embodiment of the present application is applied to the image processing apparatus 100 shown in fig. 1. The color constancy correction apparatus 200 includes a first calculation module 210, a second calculation module 220, a prediction module 230, and a correction module 240.
The first calculating module 210 is configured to calculate an R channel gain value of a target image as an initial R channel gain value through a preset algorithm, and calculate a B channel gain value of the target image as an initial B channel gain value.
In this embodiment, the first calculating module 210 may execute step S21 shown in fig. 2, and the detailed description of step S21 may be referred to for the description of the first calculating module 210.
The second calculating module 220 is configured to calculate a residual R channel gain value and a residual B channel gain value of the target image through the trained deep residual learning network.
In the present embodiment, the second calculating module 220 may perform step S22 shown in fig. 2, and the detailed description of step S22 may be referred to for the description of the second calculating module 220.
The prediction module 230 is configured to obtain a predicted R channel gain value according to the initial R channel gain value and the residual R channel gain value, and obtain a predicted B channel gain value according to the initial B channel gain value and the residual B channel gain value.
In the present embodiment, the prediction module 230 may perform step S23 shown in fig. 2, and the detailed description of step S23 may be referred to for the description of the prediction module 230.
The correcting module 240 is configured to adjust the target image according to the predicted R channel gain value and the predicted B channel gain value, so as to correct the color constancy of the target image.
In the present embodiment, the correction module 240 may perform step S24 shown in fig. 2, and the description about the correction module 240 may specifically refer to the detailed description about step S24.
Optionally, the color constancy correction apparatus 200 may further include a sample acquisition module 250, a marker value determination module 260, and a training module 270.
The sample obtaining module 250 is configured to obtain a training sample data set, where the training sample data set includes original images acquired from a target scene.
In this embodiment, the sample acquiring module 250 may execute step S41 shown in fig. 4, and the detailed description of step S41 may be referred to for the description of the sample acquiring module 250.
The marker value determination module 260 is configured to determine a corresponding marker R channel gain value and a marker B channel gain value for each sample in the training sample data set.
In this embodiment, the marker value determination module 260 may perform step S42 shown in fig. 4, and the detailed description of step S42 may be referred to for the description of the marker value determination module 260.
The training module 270 is configured to:
for each sample, calculating a gray world R channel gain value and a gray world B channel gain value of the sample, calculating a residual R channel gain value and a residual B channel gain value of the sample through a deep residual learning network to be trained, obtaining a predicted R channel gain value of the sample according to the gray world R channel gain value and the residual R channel gain value, and obtaining a predicted B channel gain value of the sample according to the gray world B channel gain value and the residual B channel gain value; performing loss calculation on the predicted R channel gain value and the marked R channel gain value of each sample by adopting a preset loss function, and performing loss calculation on the predicted B channel gain value and the marked B channel gain value; and optimizing the parameters of the deep residual learning network according to the loss calculation result to obtain the trained deep residual learning network.
In the present embodiment, the training module 270 may perform steps S43 to S45 shown in fig. 4, and the detailed description of the training module 270 may specifically refer to the detailed description of steps S43 to S45.
Optionally, in this embodiment, the marker value determination module 260 may include an adjustment submodule 261 and a determination submodule 262.
The adjusting submodule 261 is configured to adjust, for each sample, an R channel gain value and a B channel gain value of the sample, so that the sample achieves a specified color effect.
The determination submodule 262 is configured to determine the current R channel gain value of the sample as the marker R channel gain value of the sample, and determine the current B channel gain value of the sample as the marker B channel gain value of the sample.
In summary, the present application provides a color constancy correction method, a color constancy correction device, and an image processing apparatus. An R channel gain value of a target image is calculated as an initial R channel gain value, and a B channel gain value of the target image is calculated as an initial B channel gain value, through a preset algorithm. A residual R channel gain value and a residual B channel gain value of the target image are then obtained through a trained deep residual learning network. A predicted R channel gain value is obtained from the initial R channel gain value and the residual R channel gain value, and a predicted B channel gain value is obtained from the initial B channel gain value and the residual B channel gain value. Finally, the target image is adjusted according to the predicted R channel gain value and the predicted B channel gain value, so as to correct the color constancy of the target image. In this way, accurate correction of image color constancy is achieved with a deep learning network that has a shallow network structure.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. A method of color constancy correction, said method comprising:
calculating an R channel gain value of a target image as an initial R channel gain value through a preset algorithm, and calculating a B channel gain value of the target image as an initial B channel gain value;
calculating to obtain a residual R channel gain value and a residual B channel gain value of the target image through a trained deep residual learning network;
obtaining a predicted R channel gain value according to the initial R channel gain value and the residual R channel gain value, and obtaining a predicted B channel gain value according to the initial B channel gain value and the residual B channel gain value;
adjusting the target image according to the predicted R channel gain value and the predicted B channel gain value so as to correct the color constancy of the target image;
wherein the deep residual learning network is trained by the steps of:
acquiring a training sample data set, wherein the training sample data set comprises original images acquired from a target scene;
determining a corresponding marker R channel gain value and a marker B channel gain value for each sample in the training sample data set;
aiming at each sample, calculating an initial R channel gain value and an initial B channel gain value of the sample by adopting the preset algorithm, calculating a residual R channel gain value and a residual B channel gain value of the sample through a deep residual learning network to be trained, obtaining a predicted R channel gain value of the sample according to the initial R channel gain value and the residual R channel gain value, and obtaining a predicted B channel gain value of the sample according to the initial B channel gain value and the residual B channel gain value;
performing loss calculation on the predicted R channel gain value and the marked R channel gain value of each sample by adopting a preset loss function, and performing loss calculation on the predicted B channel gain value and the marked B channel gain value;
and optimizing the parameters of the deep residual learning network according to the loss calculation result to obtain the trained deep residual learning network.
2. The method of claim 1, wherein determining a corresponding marker R channel gain value and marker B channel gain value for each sample in the set of training sample data comprises:
for each sample, adjusting the R channel gain value and the B channel gain value of the sample to enable the sample to achieve the specified color effect;
determining the current R channel gain value of the sample as the marker R channel gain value of the sample, and determining the current B channel gain value of the sample as the marker B channel gain value of the sample.
3. The method according to claim 1 or 2, wherein the preset algorithm is a classic gray world algorithm; calculating a residual R channel gain value and a residual B channel gain value of the sample through a deep residual learning network to be trained, including:
acquiring thumbnail information of the sample, inputting the thumbnail information into the deep residual learning network to be trained, and enabling the deep residual learning network to be trained to output a residual R channel gain value and a residual B channel gain value of the sample;
wherein the thumbnail information includes a detection table DetectionTable or a Histogram of the sample.
4. The method according to claim 1 or 2, wherein the preset loss function is:

$$L = \frac{1}{2N}\sum_{i=1}^{N}\left\|\hat{y}_i - y_i\right\|_2^2$$

wherein $N$ represents the number of samples, $\hat{y}_i$ represents the predicted value of the ith sample, and $y_i$ represents the marker value of the ith sample.
5. The method according to claim 1 or 2, wherein optimizing the parameters of the deep residual learning network according to the result of the loss calculation comprises:
and inputting the result of the loss calculation into an Adam optimizer, and optimizing the parameters of the deep residual learning network according to the output of the Adam optimizer.
6. A color constancy correction apparatus, said apparatus comprising:
the first calculation module is used for calculating an R channel gain value of a target image as an initial R channel gain value through a preset algorithm and calculating a B channel gain value of the target image as an initial B channel gain value;
the second calculation module is used for calculating a residual R channel gain value and a residual B channel gain value of the target image through the trained deep residual learning network;
the prediction module is used for obtaining a prediction R channel gain value according to the initial R channel gain value and the residual R channel gain value and obtaining a prediction B channel gain value according to the initial B channel gain value and the residual B channel gain value;
the correction module is used for adjusting the target image according to the predicted R channel gain value and the predicted B channel gain value so as to correct the color constancy of the target image;
the system comprises a sample acquisition module, a data acquisition module and a data processing module, wherein the sample acquisition module is used for acquiring a training sample data set, and the training sample data set comprises original images acquired from a target scene;
a marker value determining module, configured to determine a corresponding marker R channel gain value and a marker B channel gain value for each sample in the training sample data set;
a training module to:
for each sample, calculating a gray world R channel gain value and a gray world B channel gain value of the sample, calculating a residual R channel gain value and a residual B channel gain value of the sample through a deep residual learning network to be trained, obtaining a predicted R channel gain value of the sample according to the gray world R channel gain value and the residual R channel gain value, and obtaining a predicted B channel gain value of the sample according to the gray world B channel gain value and the residual B channel gain value;
performing loss calculation on the predicted R channel gain value and the marked R channel gain value of each sample by adopting a preset loss function, and performing loss calculation on the predicted B channel gain value and the marked B channel gain value;
and optimizing the parameters of the deep residual learning network according to the loss calculation result to obtain the trained deep residual learning network.
7. The apparatus of claim 6, wherein the marker value determining module comprises:
the adjusting submodule is used for adjusting the R channel gain value and the B channel gain value of each sample so as to enable the samples to achieve the specified color effect;
a determining submodule for determining the current R channel gain value of the sample as the marker R channel gain value of the sample, and for determining the current B channel gain value of the sample as the marker B channel gain value of the sample.
8. An image processing apparatus characterized by comprising:
a processor and a machine-readable storage medium having machine-executable instructions stored thereon that, when executed, cause the processor to implement the method of any of claims 1-5.
CN201811528397.6A 2018-12-13 2018-12-13 Color constancy correction method and device and image processing equipment Active CN109618145B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811528397.6A CN109618145B (en) 2018-12-13 2018-12-13 Color constancy correction method and device and image processing equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811528397.6A CN109618145B (en) 2018-12-13 2018-12-13 Color constancy correction method and device and image processing equipment

Publications (2)

Publication Number Publication Date
CN109618145A CN109618145A (en) 2019-04-12
CN109618145B true CN109618145B (en) 2020-11-10

Family

ID=66007533

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811528397.6A Active CN109618145B (en) 2018-12-13 2018-12-13 Color constancy correction method and device and image processing equipment

Country Status (1)

Country Link
CN (1) CN109618145B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112183551A (en) * 2019-07-02 2021-01-05 佳能株式会社 Illumination color prediction method, image processing apparatus, and storage medium
CN110534071B (en) * 2019-07-19 2020-09-18 南京巨鲨显示科技有限公司 Display color calibration system and method based on neural network
CN112488962A (en) * 2020-12-17 2021-03-12 成都极米科技股份有限公司 Method, device, equipment and medium for adjusting picture color based on deep learning
CN115514947B (en) * 2021-06-07 2023-07-21 荣耀终端有限公司 Algorithm for automatic white balance of AI (automatic input/output) and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100104498A (en) * 2009-03-18 2010-09-29 고려대학교 산학협력단 Auto exposure and auto white-balance method for detecting high dynamic range conditions
CN102883168A (en) * 2012-07-05 2013-01-16 上海大学 White balance processing method directed towards atypical-feature image
CN103313068A (en) * 2013-05-29 2013-09-18 山西绿色光电产业科学技术研究院(有限公司) White balance corrected image processing method and device based on gray edge constraint gray world
CN106412547A (en) * 2016-08-29 2017-02-15 厦门美图之家科技有限公司 Image white balance method and device based on convolutional neural network, and computing device
CN107578390A (en) * 2017-09-14 2018-01-12 长沙全度影像科技有限公司 A kind of method and device that image white balance correction is carried out using neutral net

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100104498A (en) * 2009-03-18 2010-09-29 고려대학교 산학협력단 Auto exposure and auto white-balance method for detecting high dynamic range conditions
CN102883168A (en) * 2012-07-05 2013-01-16 上海大学 White balance processing method directed towards atypical-feature image
CN103313068A (en) * 2013-05-29 2013-09-18 山西绿色光电产业科学技术研究院(有限公司) White balance corrected image processing method and device based on gray edge constraint gray world
CN106412547A (en) * 2016-08-29 2017-02-15 厦门美图之家科技有限公司 Image white balance method and device based on convolutional neural network, and computing device
CN107578390A (en) * 2017-09-14 2018-01-12 长沙全度影像科技有限公司 A kind of method and device that image white balance correction is carried out using neutral net

Also Published As

Publication number Publication date
CN109618145A (en) 2019-04-12

Similar Documents

Publication Publication Date Title
CN109618145B (en) Color constancy correction method and device and image processing equipment
US10949958B2 (en) Fast fourier color constancy
US9554109B2 (en) Identifying gray regions for auto white balancing
US10586314B2 (en) Image fusion method, apparatus, and infrared thermal imaging device
US20130208994A1 (en) Image processing apparatus, image processing method, and recording medium
CN111292246B (en) Image color correction method, storage medium, and endoscope
KR101725884B1 (en) Automatic processing of images
CN109271552B (en) Method and device for retrieving video through picture, electronic equipment and storage medium
JP2008113222A (en) Image processing apparatus, imaging apparatus, image processing method in these apparatuses, and program allowing computer to execute the method
US20230059499A1 (en) Image processing system, image processing method, and non-transitory computer readable medium
US9786076B2 (en) Image combining apparatus, image combining method and non-transitory computer readable medium for storing image combining program
JP2006301803A (en) Image recognition device and image recognition method
WO2023011280A1 (en) Image noise degree estimation method and apparatus, and electronic device and storage medium
Buzzelli et al. Consensus-driven illuminant estimation with GANs
CN114463367A (en) Image processing method and device
WO2016113407A1 (en) Methods and apparatus for groupwise contrast enhancement
CN110913195B (en) White balance automatic adjustment method, device and computer readable storage medium
JP2009169620A (en) Image processing device, image processing method, program, and recording medium
US10235032B2 (en) Method for optimizing a captured photo or a recorded multi-media and system and electric device therefor
US9706112B2 (en) Image tuning in photographic system
JP2007174015A (en) Image management program and image management apparatus
JP4515964B2 (en) Image correction apparatus and method, and image correction program
JP2016162421A (en) Information processor, information processing method and program
CN111476731B (en) Image correction method, device, storage medium and electronic equipment
CN112995634B (en) Image white balance processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant