CN111277809A - Image color correction method, system, device and medium

Info

Publication number: CN111277809A
Application number: CN202010130318.7A
Authority: CN (China)
Prior art keywords: image, model, color, quality color, corrected
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 郑晨, 何昭水, 吕俊, 黄德添, 白玉磊, 谭北海
Current Assignee: Guangdong University of Technology
Original Assignee: Guangdong University of Technology
Filing date / priority date: 2020-02-28
Publication date: 2020-06-12
Application filed by Guangdong University of Technology

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/64 Circuits for processing colour signals
    • H04N 9/646 Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods


Abstract

The application discloses an image color correction method, system, device and medium. The method comprises: dividing collected image data into a set of color-distorted images and a corresponding set of high-quality color images; inputting a color-distorted image into a generative model to obtain a corrected image; fixing the parameters of the generative model, inputting the corrected images and the high-quality color images into a discriminant model for judgment, and optimizing the discriminant model's parameters until it can distinguish the two groups of images; then substituting the corrected images and the high-quality color images into a loss function, calculating the loss, optimizing the parameters of the generative model, and inputting the corrected and high-quality images into the trained discriminant model until it can no longer distinguish the two groups. The method corrects color images with a generative adversarial network, and the resulting model corrects quickly, with high precision and strong noise resistance.

Description

Image color correction method, system, device and medium
Technical Field
The present application relates to the field of color correction technologies, and in particular, to a method, a system, a device, and a medium for image color correction.
Background
Color is one of the important parameters in the field of computer vision, and with the development of computer technology the requirements that image applications place on color grow ever higher. Because the spectral sensitivity of an image sensor cannot perfectly reproduce the human visual system's perception of color, and because the spectral distribution of the light source is uneven, the colors of captured images deviate to some degree from the true colors of the object; the colors therefore need to be corrected so that the true colors are restored as far as possible.
Existing approaches to color correction include methods based on spectral response and methods based on target colors.
Color correction based on spectral response measures the spectral response of the image sensor with a dedicated instrument and finds the relation between the spectral response of the sensor to be corrected and the CIE color matching functions.
Methods based on target colors build a correction model from the standard and measured values of a certain number of color samples, obtaining a functional relation between the two; such methods are simple and practical. Typical algorithms include the three-dimensional look-up-table method, the polynomial method, the least-squares method with a constraint term, and the pattern search method. The look-up-table method is generally more accurate, but its disadvantages are that it requires a large number of standard color tristimulus values, table construction is slow, and interpolation precision is limited. The polynomial method obtains the correction coefficients by least squares from the functional relation between the two, but it applies only to a very small color-gamut space and easily amplifies noise. Even if a constraint term is added to the least-squares method, improving the accuracy of white, the errors of the other colors after correction are further amplified. For the polynomial and least-squares methods, correction coefficients solved with the 1-norm have been shown to be more accurate than those solved with the 2-norm when building the correction model. But since the 1-norm is not differentiable, a pattern search method is used to seek the globally optimal solution and suppress noise amplification. The pattern search method, however, can fall into local optima, is overly sensitive to the initial value, and may even leave too large a brightness difference between the images before and after correction.
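To make the target-color approach concrete, the following is a minimal sketch of fitting a linear 3 × 3 correction matrix by least squares; the color-chart arrays are hypothetical placeholders, and the polynomial method would simply extend the measured values with higher-order terms.

```python
import numpy as np

# Hypothetical data: RGB values of a 24-patch color chart as measured by
# the sensor, and the corresponding standard (reference) values.
rng = np.random.default_rng(0)
measured = rng.random((24, 3))
standard = rng.random((24, 3))

# Fit a 3x3 matrix M minimizing ||measured @ M - standard||_2 in the
# least-squares sense (the 2-norm solution discussed above).
M, *_ = np.linalg.lstsq(measured, standard, rcond=None)

corrected = np.clip(measured @ M, 0.0, 1.0)   # apply the correction
print(np.abs(corrected - standard).mean())    # mean absolute residual
```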
Disclosure of Invention
The embodiments of the application provide an image color correction method, system, device and medium, so that image colors can be corrected quickly, with high precision and with strong noise resistance.
In view of the above, a first aspect of the present application provides an image color correction method, including:
101. dividing the acquired image data into two data sets: a color-distorted image set and a corresponding high-quality color image set;
102. inputting the color-distorted images into the constructed generative model to obtain corrected images;
103. training the discriminant model: fixing the parameters of the generative model, inputting the corrected images and the high-quality color images into the discriminant model so that it judges them, optimizing the discriminant model's parameters according to the judgment results, and repeating step 103 until the discriminant model distinguishes the corrected images from the high-quality color images, at which point training of the discriminant model is complete;
104. training the generative model: substituting the corrected images and the high-quality color images into a loss function, calculating the loss between them, optimizing the generative model's parameters according to the loss, inputting the corrected images and the high-quality color images into the trained discriminant model so that it tries to distinguish them, and repeating step 104 until the discriminant model can no longer distinguish the corrected images from the high-quality color images, at which point training of the generative model is complete.
Preferably, after dividing the acquired image data into two data sets (a color-distorted image set and a corresponding high-quality color image set), the method further comprises:
preprocessing the color-distorted image set and the high-quality color image set;
the preprocessing comprises: dividing the color-distorted image set and the high-quality color image set into a training set and a test set; randomly cropping the training images in the training set into a number of image blocks, and flipping the image blocks vertically and horizontally to obtain additional training images.
Preferably, the loss function is specifically:

$$L(P_r, P_g) = \sup_{\|D\|_L \le 1} \mathbb{E}_{x \sim P_r}[D(x)] - \mathbb{E}_{z \sim P_g}[D(z)]$$

where $\mathbb{E}$ denotes the expectation of the distance; $z$ denotes a sample; $P_g$ denotes the sample distribution produced by the generator; $D$ denotes the probability, obtained by inputting the fused real input and label into the discriminator, that the input is real data; and $\|D\|_L \le 1$ denotes that $D$ satisfies the 1-Lipschitz constraint.
Preferably, the activation function of the generative model is the same as the activation function of the discriminant model, and the normalization function is specifically:

$$b_{x,y} = \frac{a_{x,y}}{\sqrt{\frac{1}{N}\sum_{j=0}^{N-1}\left(a_{x,y}^{j}\right)^{2} + \epsilon}}$$

where $b_{x,y}$ is the normalized feature vector, $a_{x,y}$ is the original feature vector, $N$ is the number of feature maps, and $\epsilon$ is a small positive number used to avoid division by zero.
Preferably, before inputting the corrected image and its corresponding high-quality color image into the loss function, the method further includes performing region sampling on the corrected image and the high-quality color image, the region sampling specifically being random interpolation sampling along the line connecting the corrected-image and high-quality-color-image samples:

$$\hat{x} = \alpha x_r + (1 - \alpha)\, x_g$$

where $\alpha$ is a random number between 0 and 1, $x_r$ denotes the generated sample region, and $x_g$ denotes the real sample region.
A second aspect of the present application provides an image color correction system, the system comprising:
the data acquisition module is used for dividing the acquired image data into two data sets, including a color distortion image set and a high-quality color image set corresponding to the color distortion image set;
the image correction module is used for inputting the color distortion image into the constructed generation model to obtain a corrected image;
the discriminant model training module is used for fixing the parameters of the generative model, inputting the corrected images and the high-quality color images into the discriminant model so that it judges them, optimizing the discriminant model's parameters according to the judgment results, and repeating the operations in the discriminant model training module until the discriminant model distinguishes the corrected images from the high-quality color images;
the generative model training module is used for substituting the corrected images and the high-quality color images into a loss function, calculating the loss between them, optimizing the generative model's parameters according to the loss, inputting the corrected images and the high-quality color images into the trained discriminant model so that it tries to distinguish them, and repeating the operations in the generative model training module until the discriminant model can no longer distinguish the corrected images from the high-quality color images.
Preferably, the method further comprises the following steps:
a pre-processing module to pre-process the color-distorted image set and the high-quality color image set, the pre-processing including dividing the color-distorted image set and the high-quality color image set into a training set and a test set; and randomly cutting the training images in the training set to obtain a plurality of image blocks, and turning the image blocks up and down and left and right to obtain a plurality of training images.
Preferably, the method further comprises the following steps:
a sampling module, configured to perform area sampling on the corrected image and the high-quality color image, where the area sampling specifically is:
random interpolation sampling along the line connecting the corrected-image and high-quality-color-image samples:

$$\hat{x} = \alpha x_r + (1 - \alpha)\, x_g$$

where $\alpha$ is a random number between 0 and 1, $x_r$ denotes the generated sample region, and $x_g$ denotes the real sample region.
A third aspect of the present application provides an image color correction apparatus, the apparatus comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the steps of the image color correction method according to the first aspect as described above, according to instructions in the program code.
A fourth aspect of the present application provides a computer-readable storage medium for storing program code for performing the method of the first aspect.
According to the technical scheme, the method has the following advantages:
the application provides an image color correction method, which comprises the steps of dividing collected image data into a color distortion image and a high-quality color image set corresponding to the color distortion image; inputting the color distortion image into a generation model to obtain a corrected image; fixing parameters of a generated model, inputting the corrected image and the high-quality color image into a discrimination model, judging the image, and optimizing the parameters of the discrimination model until the discrimination model can distinguish two groups of images; and substituting the corrected image and the high-quality color image into a loss function, calculating loss, optimizing parameters of a generated model, inputting the corrected image and the high-quality color image into a trained discrimination model, and distinguishing the images until the discrimination model cannot distinguish two groups of images.
The method for generating the countermeasure network integrates the optimization of the discriminant model and the generated model, obtains the optimized weight parameter through mutual countermeasure between the generated model and the discriminant model and through a back propagation algorithm, obtains a stable and reliable model through training, and can correct any color distortion image.
Drawings
FIG. 1 is a flowchart of a method according to an embodiment of a method for image color correction according to the present application;
FIG. 2 is a schematic diagram of an embodiment of an image color correction system according to the present application;
fig. 3 is a schematic diagram of a model training process according to an embodiment of the image color correction method of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method of an embodiment of an image color correction method according to the present application, where fig. 1 includes:
101. the acquired image data is divided into two data sets, including a color-distorted image set and a high-quality color image set corresponding to the color-distorted image set.
It should be noted that, in the color distorted image set, each color distorted image can find a corresponding high-quality image in the high-quality color image set.
In a specific embodiment, the color-distorted image set and the high-quality color image set also need to be preprocessed, which may include: dividing the color-distorted image set and the high-quality color image set into a training set and a test set; randomly cropping the training images in the training set into a number of image blocks, and flipping the image blocks vertically and horizontally to obtain additional training images.
It should be noted that, in this embodiment, the resolution of the images needs to be higher than 500 × 400, and the image content may include landscapes, portraits, food and the like. Part of the images in the set are selected as the training set and part as the test set for training and testing the generative adversarial network. In addition, since the training images are not all the same size and cannot be fed directly into the generative adversarial network, before training 150 image blocks of size 256 × 256 are randomly cropped from each training image, and some of the blocks are randomly flipped vertically and horizontally, increasing the diversity of the training images; the test images are resized to 900 × 600.
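As a minimal sketch of this preprocessing, assuming PIL and torchvision are available (the function names are illustrative; only the 150 crops, the 256 × 256 block size and the 900 × 600 test size come from the text):

```python
import random
from PIL import Image
import torchvision.transforms.functional as TF

def make_training_patches(image_path, n_crops=150, size=256):
    """Randomly crop n_crops size x size blocks from one training image
    and randomly flip some of them left-right and up-down."""
    img = Image.open(image_path).convert("RGB")
    w, h = img.size
    patches = []
    for _ in range(n_crops):
        left = random.randint(0, w - size)
        top = random.randint(0, h - size)
        patch = img.crop((left, top, left + size, top + size))
        if random.random() < 0.5:
            patch = TF.hflip(patch)      # left-right flip
        if random.random() < 0.5:
            patch = TF.vflip(patch)      # up-down flip
        patches.append(patch)
    return patches

def make_test_image(image_path):
    """Resize a test image to 900 x 600 as described above."""
    return Image.open(image_path).convert("RGB").resize((900, 600))
```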
102. The color-distorted images are input into the constructed generative model to obtain corrected images.
103. Training the discriminant model: fix the parameters of the generative model, input the corrected images and the high-quality color images into the discriminant model so that it judges them, optimize the discriminant model's parameters according to the judgment results, and repeat step 103 until the discriminant model distinguishes the corrected images from the high-quality color images, at which point its training is complete.
Before the generative model is trained, the discriminant model needs to be trained. Because the discriminant model is trained so that it can recognize corrected images and high-quality color images, the parameters of the generative model are fixed, the corrected images output by the generative model are input to the discriminant model together with the high-quality color images, the two are judged, the discriminant model's parameters are optimized according to the judgment results, and different images are repeatedly input with the parameters continuously optimized until the discriminant model can distinguish the corrected images from the high-quality color images.
104. Training the generative model: substitute the corrected images and the high-quality color images into the loss function, calculate the loss between them, optimize the generative model's parameters according to the loss, input the corrected images and the high-quality color images into the trained discriminant model so that it tries to distinguish them, and repeat step 104 until the discriminant model can no longer distinguish the corrected images from the high-quality color images, at which point training of the generative model is complete.
The generative model is a convolutional neural network. When a color-distorted image is input, the generative model corrects it so that the corrected image approaches the original high-quality color image. The corrected image and the original high-quality color image are then input into the loss function, the difference between them is calculated, and the generative model's parameters are optimized according to that difference so that its corrected output moves still closer to the high-quality color image; in other words, the generative model learns the characteristics of high-quality color images. In addition, the corrected images and the high-quality color images are input into the trained discriminant model so that it tries to distinguish them; when it finally cannot, training of the generative model is complete.
The method, based on a generative adversarial network, unifies the optimization of the discriminant model and the generative model: through the mutual adversarial play of the two models and the back-propagation algorithm, optimized weight parameters are obtained, training yields a stable and reliable model, and any color-distorted image can be corrected.
The present application further provides a specific implementation manner, as shown in fig. 3, specifically:
201. randomly sampling from the color-distorted image set, and taking a total of m image samples as one batch, denoted X.
202. Establish the generative model. The generative model is a convolutional neural network that accepts a color-distorted image and outputs it after correction. When a color-distorted image is input into the constructed generative model, the model corrects it so that the corrected image approaches the original high-quality color image; the corrected image and the original high-quality color image are then input into the loss function, the difference between them is calculated, and the generative model's parameters are optimized according to that difference.
A generative model with the Wasserstein distance is adopted, and the loss function is specifically:

$$L(P_r, P_g) = \sup_{\|D\|_L \le 1} \mathbb{E}_{x \sim P_r}[D(x)] - \mathbb{E}_{z \sim P_g}[D(z)]$$

where $\mathbb{E}$ denotes the expectation of the distance; $z$ denotes a sample; $P_g$ denotes the sample distribution produced by the generator; $D$ denotes the probability, obtained by inputting the fused real input and label into the discriminator, that the input is real data; and $\|D\|_L \le 1$ denotes that $D$ satisfies the 1-Lipschitz constraint.
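In code, the two sides of this objective become the losses of the discriminant and generative models. A minimal PyTorch sketch, assuming the discriminant model D outputs an unbounded real score and expectations are estimated by batch means:

```python
def d_loss(D, real, fake):
    """Discriminant-model loss: the negative of E[D(real)] - E[D(fake)],
    so that minimizing it maximizes the Wasserstein estimate."""
    return -(D(real).mean() - D(fake).mean())

def g_loss(D, fake):
    """Generative-model loss: -E[D(fake)]; the generator tries to raise
    the score the discriminant model assigns to its corrected images."""
    return -D(fake).mean()
```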
The input of the generative model is a color-distorted image. The activation function of the generative model is the same as that of the discriminant model; SELU is adopted as the activation function, and pixelwise feature-vector normalization is used for model normalization. Applied after a convolutional layer, the normalization gives each feature vector unit length, which constrains problems such as signal ranges escaping their bounds caused by unhealthy competition between the generative and discriminant models. The normalization is specifically expressed as:

$$b_{x,y} = \frac{a_{x,y}}{\sqrt{\frac{1}{N}\sum_{j=0}^{N-1}\left(a_{x,y}^{j}\right)^{2} + \epsilon}}$$

where $b_{x,y}$ (the left side of the equation) is the normalized feature vector, $a_{x,y}$ is the original feature vector, $N$ is the number of feature maps, and $\epsilon$ is a small positive number used to avoid division by zero.
The model's optimizer may use RMSProp (root mean square propagation) to update the parameters of the generative model. The optimizer requires a global learning rate, initial parameters, a numerical-stability constant and a decay rate, and can adjust the learning rate automatically.
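A minimal PyTorch sketch of this pixelwise feature-vector normalization and the RMSProp setup; the epsilon value and the optimizer hyper-parameters are assumptions, not values given in the text:

```python
import torch
import torch.nn as nn

class PixelNorm(nn.Module):
    """b_{x,y} = a_{x,y} / sqrt(mean_j (a_{x,y}^j)^2 + eps), averaging the
    squared feature maps over the channel dimension at every pixel."""
    def __init__(self, eps: float = 1e-8):
        super().__init__()
        self.eps = eps

    def forward(self, a: torch.Tensor) -> torch.Tensor:
        # a: (batch, N feature maps, height, width)
        return a / torch.sqrt(a.pow(2).mean(dim=1, keepdim=True) + self.eps)

# RMSProp with a global learning rate, decay rate and numerical-stability
# constant, as described above (the concrete values are assumptions):
# optimizer = torch.optim.RMSprop(generator.parameters(),
#                                 lr=1e-4, alpha=0.99, eps=1e-8)
```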
203. The generative model receives the batch X, generates m corrected samples from the data distribution of the image samples in the batch, and inputs the corrected samples to the discriminant model for its training.
204. When the Wasserstein distance is used as the loss function of the generative adversarial network, a gradient penalty term is added to the loss function. When computing the loss, samples must be drawn not only from the corrected-image region and the high-quality-color-image region separately, but also from between the two: a random number $\alpha$ between 0 and 1 is introduced, and random interpolation sampling is performed between the corrected-image region and the high-quality-color-image region, expressed as:

$$\hat{x} = \alpha x_r + (1 - \alpha)\, x_g$$

where $\alpha$ is a random number between 0 and 1, $x_r$ denotes the generated sample region, and $x_g$ denotes the real sample region.
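A minimal PyTorch sketch of this interpolation sampling together with the gradient penalty it supports; the penalty weight of 10 is the common WGAN-GP default, an assumption here:

```python
import torch

def gradient_penalty(D, x_r, x_g, weight=10.0):
    """Sample x_hat = alpha * x_r + (1 - alpha) * x_g on the line between
    the generated region x_r and the real region x_g, then penalize the
    deviation of ||grad D(x_hat)|| from 1."""
    alpha = torch.rand(x_r.size(0), 1, 1, 1, device=x_r.device)
    x_hat = (alpha * x_r + (1.0 - alpha) * x_g).requires_grad_(True)
    scores = D(x_hat)
    grads, = torch.autograd.grad(outputs=scores, inputs=x_hat,
                                 grad_outputs=torch.ones_like(scores),
                                 create_graph=True)
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return weight * ((grad_norm - 1.0) ** 2).mean()
```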
205. In the specific training process, the parameters of the generative model are first held fixed, the corrected images and the high-quality color images are input into the discriminant model together, and the discriminant model computes the difference between them as the discrimination loss. The discrimination loss is back-propagated from the output layer through the hidden layers to the input layer, and the RMSProp optimizer updates the discriminant model's parameters during this process, so that the optimized discriminant model learns the characteristics of the corrected images and the high-quality color images and can distinguish between them.
An iterative method is adopted: the discriminant model judges the corrected images and the high-quality color images until it can correctly distinguish them.
206. The discriminant model's parameters are then fixed and the generative model is trained. The generative model receives color-distorted image samples, generates corrected images, and passes the corrected images and the high-quality color images into the trained discriminant model to compute the loss; the aim is to generate images close to the high-quality color images and fool the discriminant model. The loss is back-propagated and the RMSProp optimizer updates the parameters of the generative model. After the parameters are adjusted, the generative model generates corrected images again and they are input into the discriminant model together with the high-quality color images; iteration continues, based on whether the discriminant model can still correctly distinguish the corrected images from the high-quality color images, until it no longer can.
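Steps 205 and 206 amount to the usual alternating WGAN training loop. A minimal sketch, reusing the gradient_penalty function above; the data loader, the model definitions and the five discriminant-model updates per generator update are assumptions:

```python
import torch

def train_epoch(G, D, loader, opt_g, opt_d, d_steps=5):
    """G: generative model, D: discriminant model (PyTorch modules);
    loader yields (distorted, high_quality) image batches;
    opt_g / opt_d are RMSProp optimizers over G and D."""
    for distorted, high_quality in loader:
        # Step 205: train the discriminant model with G's parameters fixed.
        for _ in range(d_steps):
            corrected = G(distorted).detach()      # no gradient into G
            loss_d = (-(D(high_quality).mean() - D(corrected).mean())
                      + gradient_penalty(D, corrected, high_quality))
            opt_d.zero_grad()
            loss_d.backward()
            opt_d.step()

        # Step 206: train the generative model with D's parameters fixed.
        corrected = G(distorted)
        loss_g = -D(corrected).mean()              # try to fool D
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()
```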
207. The generative adversarial network can be grown progressively: training starts on the initially corrected images and gradually transitions to generating higher-resolution images. The work of the transition phase is to bring the corrected images produced by the network and the high-quality color images closer together. After the training of one stage is completed, TensorFlow saves the weights of the generative adversarial network to a folder, and the generative adversarial network of the next stage is then constructed. The new network uses the weight parameters of the previous stage, the network layers of the generative model and the discriminant model are deepened, and the transition phase begins. During this process the generative model performs upsampling and convolution and combines the two results by a weighted sum to obtain the final output, while the discriminant model performs the corresponding downsampling.
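The weighted sum used during the transition phase can be sketched as a fade-in between the upsampled output of the previous stage and the output of the newly added, deeper branch; the fade-in schedule for alpha is an assumption:

```python
import torch.nn.functional as F

def fade_in(old_rgb, new_rgb, alpha):
    """Blend the previous stage's output (upsampled to the new resolution)
    with the new branch's output; alpha grows from 0 to 1 over the
    transition phase, after which only the new branch remains."""
    old_up = F.interpolate(old_rgb, scale_factor=2, mode="nearest")
    return alpha * new_rgb + (1.0 - alpha) * old_up
```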
208. After the transition phase is completed, the model enters a stabilization phase, in which the weight parameters of the generative adversarial network are continuously updated so that its output comes closer to the high-quality color images. Steps 206 and 207 are repeated until the model can stably produce images with realistic colors. The model's hyper-parameters are then adjusted and the process repeated, selecting the optimal parameters, until training is finished.
The above are embodiments of the method of the present application, and the present application further includes a system schematic diagram in an embodiment of an image color correction system, as shown in fig. 2, including:
the data acquisition module 301 is configured to divide the acquired image data into two data sets, including a color-distorted image set and a high-quality color image set corresponding to the color-distorted image set.
And an image correction module 302, configured to input the color-distorted image into the constructed generative model, so as to obtain a corrected image.
The discriminant model training module 303 is configured to fix the parameters of the generative model, input the corrected images and the high-quality color images into the discriminant model so that it judges them, optimize the discriminant model's parameters according to the judgment results, and repeat the operations in the discriminant model training module until the discriminant model distinguishes the corrected images from the high-quality color images.
The generative model training module 304 is configured to substitute the corrected images and the high-quality color images into a loss function, calculate the loss between them, optimize the generative model's parameters according to the loss, input the corrected images and the high-quality color images into the trained discriminant model so that it tries to distinguish them, and repeat the operations in the generative model training module until the discriminant model can no longer distinguish the corrected images from the high-quality color images.
In a specific embodiment, the method further comprises the following steps:
the preprocessing module is used for preprocessing the color distortion image set and the high-quality color image set, and the preprocessing comprises the steps of dividing the color distortion image set and the high-quality color image set into a training set and a testing set; and randomly cutting the training images in the training set to obtain a plurality of image blocks, and turning the image blocks up and down and left and right to obtain a plurality of training images.
The sampling module is used for carrying out region sampling on the corrected image and the high-quality color image, and the region sampling specifically comprises the following steps:
random interpolation sampling along the line connecting the corrected-image and high-quality-color-image samples:

$$\hat{x} = \alpha x_r + (1 - \alpha)\, x_g$$

where $\alpha$ is a random number between 0 and 1, $x_r$ denotes the generated sample region, and $x_g$ denotes the real sample region.
The present application also provides an image color correction apparatus, comprising a processor and a memory: the memory is used for storing the program codes and transmitting the program codes to the processor; the processor is configured to execute an embodiment of an image color correction method according to the present application according to instructions in the program code.
The present application also provides a computer-readable storage medium storing program code for performing embodiments of an image color correction method of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In this application, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" for describing an association relationship of associated objects, indicating that there may be three relationships, e.g., "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of single item(s) or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided in the present application, it should be understood that the disclosed system and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A method for color correction of an image, comprising:
101. dividing the acquired image data into two data sets: a color-distorted image set and a corresponding high-quality color image set;
102. inputting the color-distorted images into the constructed generative model to obtain corrected images;
103. training the discriminant model: fixing the parameters of the generative model, inputting the corrected images and the high-quality color images into the discriminant model so that it judges them, optimizing the discriminant model's parameters according to the judgment results, and repeating step 103 until the discriminant model distinguishes the corrected images from the high-quality color images, at which point training of the discriminant model is complete;
104. training the generative model: substituting the corrected images and the high-quality color images into a loss function, calculating the loss between them, optimizing the generative model's parameters according to the loss, inputting the corrected images and the high-quality color images into the trained discriminant model so that it tries to distinguish them, and repeating step 104 until the discriminant model can no longer distinguish the corrected images from the high-quality color images, at which point training of the generative model is complete.
2. The image color correction method according to claim 1, wherein said dividing the captured image data into two data sets, including a color-distorted image set and a high-quality color image set corresponding to the color-distorted image set, further comprises:
pre-processing the color-distorted image set and the high-quality color image set;
the pretreatment comprises the following steps: dividing the color-distorted image set and the high-quality color image set into a training set and a test set; and randomly cutting the training images in the training set to obtain a plurality of image blocks, and turning the image blocks up and down and left and right to obtain a plurality of training images.
3. The image color correction method of claim 1, wherein the loss function is specifically:

$$L(P_r, P_g) = \sup_{\|D\|_L \le 1} \mathbb{E}_{x \sim P_r}[D(x)] - \mathbb{E}_{z \sim P_g}[D(z)]$$

wherein $\mathbb{E}$ denotes the expectation of the distance; $z$ denotes a sample; $P_g$ denotes the sample distribution produced by the generator; $D$ denotes the probability, obtained by inputting the fused real input and label into the discriminator, that the input is real data; and $\|D\|_L \le 1$ denotes that $D$ satisfies the 1-Lipschitz constraint.
4. The image color correction method of claim 1, wherein the normalization function is specifically:

$$b_{x,y} = \frac{a_{x,y}}{\sqrt{\frac{1}{N}\sum_{j=0}^{N-1}\left(a_{x,y}^{j}\right)^{2} + \epsilon}}$$

wherein $b_{x,y}$ is the normalized feature vector, $a_{x,y}$ is the original feature vector, $N$ is the number of feature maps, and $\epsilon$ is a small positive number used to avoid division by zero.
5. The image color correction method according to claim 1, further comprising: before inputting the corrected image and its corresponding high-quality color image into the loss function, performing region sampling on the corrected image and the high-quality color image, the region sampling specifically being random interpolation sampling along the line connecting the corrected-image and high-quality-color-image samples:

$$\hat{x} = \alpha x_r + (1 - \alpha)\, x_g$$

wherein $\alpha$ is a random number between 0 and 1, $x_r$ denotes the generated sample region, and $x_g$ denotes the real sample region.
6. An image color correction system, comprising:
the data acquisition module is used for dividing the acquired image data into two data sets, including a color distortion image set and a high-quality color image set corresponding to the color distortion image set;
the image correction module is used for inputting the color distortion image into the constructed generation model to obtain a corrected image;
the discriminant model training module is used for fixing the parameters of the generative model, inputting the corrected images and the high-quality color images into the discriminant model so that it judges them, optimizing the discriminant model's parameters according to the judgment results, and repeating the operations in the discriminant model training module until the discriminant model distinguishes the corrected images from the high-quality color images;
the generative model training module is used for substituting the corrected images and the high-quality color images into a loss function, calculating the loss between them, optimizing the generative model's parameters according to the loss, inputting the corrected images and the high-quality color images into the trained discriminant model so that it tries to distinguish them, and repeating the operations in the generative model training module until the discriminant model can no longer distinguish the corrected images from the high-quality color images.
7. The image color correction system of claim 6, further comprising:
a pre-processing module to pre-process the color-distorted image set and the high-quality color image set, the pre-processing including dividing the color-distorted image set and the high-quality color image set into a training set and a test set; and randomly cutting the training images in the training set to obtain a plurality of image blocks, and turning the image blocks up and down and left and right to obtain a plurality of training images.
8. The image color correction system of claim 6, further comprising:
a sampling module, configured to perform area sampling on the corrected image and the high-quality color image, where the area sampling specifically is:
random interpolation sampling along the line connecting the corrected-image and high-quality-color-image samples:

$$\hat{x} = \alpha x_r + (1 - \alpha)\, x_g$$

wherein $\alpha$ is a random number between 0 and 1, $x_r$ denotes the generated sample region, and $x_g$ denotes the real sample region.
9. An image color correction apparatus, characterized in that the apparatus comprises a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute the image color correction method according to any one of claims 1 to 5 according to instructions in the program code.
10. A computer-readable storage medium for storing a program code for executing the image color correction method according to any one of claims 1 to 5.
CN202010130318.7A 2020-02-28 2020-02-28 Image color correction method, system, device and medium Pending CN111277809A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010130318.7A CN111277809A (en) 2020-02-28 2020-02-28 Image color correction method, system, device and medium


Publications (1)

Publication Number Publication Date
CN111277809A true CN111277809A (en) 2020-06-12

Family

ID=71004166

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010130318.7A Pending CN111277809A (en) 2020-02-28 2020-02-28 Image color correction method, system, device and medium

Country Status (1)

Country Link
CN (1) CN111277809A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112132172A (en) * 2020-08-04 2020-12-25 绍兴埃瓦科技有限公司 Model training method, device, equipment and medium based on image processing
CN112435169A (en) * 2020-07-01 2021-03-02 新加坡依图有限责任公司(私有) Image generation method and device based on neural network
CN113706415A (en) * 2021-08-27 2021-11-26 北京瑞莱智慧科技有限公司 Training data generation method, countermeasure sample generation method, image color correction method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018184192A1 (en) * 2017-04-07 2018-10-11 Intel Corporation Methods and systems using camera devices for deep channel and convolutional neural network images and formats
CN108711138A (en) * 2018-06-06 2018-10-26 北京印刷学院 A kind of gray scale picture colorization method based on generation confrontation network



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200612)