CN111277809A - An image color correction method, system, device and medium
- Publication number
- CN111277809A CN111277809A CN202010130318.7A CN202010130318A CN111277809A CN 111277809 A CN111277809 A CN 111277809A CN 202010130318 A CN202010130318 A CN 202010130318A CN 111277809 A CN111277809 A CN 111277809A
- Authority
- CN
- China
- Prior art keywords
- image
- color
- model
- quality color
- corrected
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/646—Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Description
Technical Field
The present application relates to the technical field of color correction, and in particular to an image color correction method, system, device, and medium.
Background
Color is one of the most important parameters in computer vision, and with the development of computer technology, image applications place increasingly high demands on color fidelity. Because the spectral sensitivity of an image sensor cannot perfectly reproduce the human visual system's perception of color, and because the spectral distribution of the light source is uneven, the color of a captured image deviates from the true color of the object. Color correction is therefore needed to restore the true colors as faithfully as possible.
Existing color correction schemes include methods based on spectral response and methods based on target colors.
Color correction based on spectral response measures the spectral response of the image sensor with a dedicated instrument and establishes the relationship between the sensor's spectral response and the CIE color matching functions.
Target-color-based methods build a correction model from the standard values and measured values of a set of color samples and derive the functional relationship between the two; they are simple and practical. Typical algorithms include the three-dimensional look-up-table method, the polynomial method, constrained least squares, and the pattern search method. The look-up-table method is generally more accurate, but it requires standard tristimulus values for a large number of colors, and building the table is slow and of limited precision. The polynomial method fits the functional relationship between the two and solves for the correction coefficients by least squares, but it only works within a very small color gamut and tends to amplify noise. Adding a constraint term to the least squares problem improves the accuracy of white, but further enlarges the post-correction error of other colors. For the polynomial and least squares methods, it has been shown that correction coefficients solved under the 1-norm are more accurate than under the 2-norm. However, since the 1-norm is not differentiable, the pattern search method can be used to find the global optimum and suppress noise amplification. Pattern search, in turn, may get stuck in local optima, is highly sensitive to the initial value, and may even produce a large brightness difference between the image before and after correction.
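As a point of reference for the background methods above, the polynomial/least-squares approach can be sketched in a few lines: fit a linear correction matrix that maps measured RGB values to their standard values. The color samples below are made-up illustrative values, not data from the patent:

```python
import numpy as np

# Toy data: measured RGB values (rows) and their standard (target) RGB values.
measured = np.array([[0.9, 0.1, 0.1],
                     [0.1, 0.8, 0.2],
                     [0.2, 0.1, 0.9],
                     [0.8, 0.8, 0.1],
                     [0.5, 0.5, 0.5]])
true_color = np.array([[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0],
                       [0.0, 0.0, 1.0],
                       [1.0, 1.0, 0.0],
                       [0.5, 0.5, 0.5]])

# Solve M = argmin || measured @ M - true_color ||_2 by ordinary least squares.
M, *_ = np.linalg.lstsq(measured, true_color, rcond=None)
corrected = measured @ M
```

As the background notes, this fit minimizes the 2-norm error over the sample set only; outside a small gamut it can amplify noise, which is the limitation the patent's learned model is meant to address.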
Summary of the Invention
Embodiments of the present application provide an image color correction method, system, device, and medium, so that image color can be corrected quickly, with high precision and strong noise resistance.
In view of this, a first aspect of the present application provides an image color correction method, the method comprising:
101. Dividing the collected image data into two data sets: a color-distorted image set and a high-quality color image set corresponding to the color-distorted image set;
102. Inputting a color-distorted image into the constructed generative model to obtain a corrected image;
103. Training the discriminant model: fixing the parameters of the generative model, inputting the corrected image and the high-quality color image into the discriminant model so that the discriminant model judges the corrected image and the high-quality color image, optimizing the parameters of the discriminant model according to the judgment result, and repeating step 103 until the discriminant model can distinguish the corrected image from the high-quality color image, at which point training of the discriminant model is complete;
104. Training the generative model: substituting the corrected image and the high-quality color image into a loss function, computing the loss between the corrected image and the high-quality color image, optimizing the parameters of the generative model according to the loss, inputting the corrected image and the high-quality color image into the trained discriminant model so that the discriminant model tries to distinguish them, and repeating step 104 until the discriminant model can no longer distinguish the corrected image from the high-quality color image, at which point training of the generative model is complete.
Preferably, after dividing the collected image data into the two data sets, the method further comprises:
preprocessing the color-distorted image set and the high-quality color image set;
the preprocessing including: dividing the color-distorted image set and the high-quality color image set into a training set and a test set; randomly cropping the training images in the training set to obtain a plurality of image blocks; and flipping the image blocks vertically and horizontally to obtain a variety of training images.
Preferably, the loss function is specifically:
W(P_r, P_g) = max_{||D||_L <= 1} ( E_{x~P_r}[D(x)] - E_{x~P_g}[D(x)] )
where E denotes the expectation; z denotes the number of samples used to estimate the expectations; P_g denotes the distribution of samples produced by the generator; D denotes the discriminator output, i.e. the probability, for the real input fused with its label, that the input is real data; and ||D||_L <= 1 restricts D to functions satisfying the 1-Lipschitz constraint.
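A minimal numerical illustration of this objective (not the patent's network): estimate E[D(real)] - E[D(fake)] with a toy linear critic over synthetic samples. The critic weights and the two sample distributions are assumptions made for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

def critic(x, w=np.array([1.0, -0.5])):
    # Toy linear critic D(x) = w . x; a bounded-weight linear map is Lipschitz.
    return x @ w

real = rng.normal(loc=1.0, size=(64, 2))   # samples standing in for P_r
fake = rng.normal(loc=0.0, size=(64, 2))   # samples standing in for P_g

# Empirical Wasserstein critic objective: E[D(real)] - E[D(fake)].
w_loss = critic(real).mean() - critic(fake).mean()
```

Training the discriminant model maximizes this quantity over D (subject to the 1-Lipschitz constraint), while the generative model is trained to drive it down.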
Preferably, the activation function of the generative model is the same as the activation function of the discriminant model, and the feature normalization applied in both models is specifically:
b_{x,y} = a_{x,y} / sqrt( (1/N) * sum_{j=0}^{N-1} (a^j_{x,y})^2 + ε )
where b_{x,y} is the normalized feature vector at pixel (x, y), a_{x,y} is the original feature vector, N is the number of feature maps, and ε is a small positive constant used to avoid division by zero.
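This normalization can be sketched as follows; the feature-map shapes and input values are illustrative, not taken from the patent:

```python
import numpy as np

def pixel_norm(a, eps=1e-8):
    """Pixelwise feature-vector normalization.

    a: feature maps with shape (N, H, W), where N is the number of feature maps.
    Each pixel's N-dimensional feature vector is divided by its root-mean-square
    magnitude, so every normalized vector has unit average magnitude.
    """
    return a / np.sqrt((a ** 2).mean(axis=0, keepdims=True) + eps)

features = np.random.default_rng(0).normal(size=(8, 4, 4))
normed = pixel_norm(features)
```

Placed after a convolutional layer, this keeps feature magnitudes bounded regardless of how the generative and discriminant models compete.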
Preferably, before the corrected image and the corresponding high-quality color image are input into the loss function, the method further comprises performing regional sampling on the corrected image and the high-quality color image, the regional sampling being specifically:
randomly interpolated sampling on the line connecting a sample from the corrected image and a sample from the high-quality color image:
x_hat = α·x_r + (1 - α)·x_g
where α is a random number between 0 and 1, x_r denotes the generated sample region, and x_g denotes the real sample region.
A second aspect of the present application provides an image color correction system, the system comprising:
a data acquisition module configured to divide the collected image data into two data sets: a color-distorted image set and a high-quality color image set corresponding to the color-distorted image set;
an image correction module configured to input a color-distorted image into the constructed generative model to obtain a corrected image;
a discriminant model training module configured to fix the parameters of the generative model, input the corrected image and the high-quality color image into the discriminant model so that the discriminant model judges them, optimize the parameters of the discriminant model according to the judgment result, and repeat the operations in the discriminant model training module until the discriminant model can distinguish the corrected image from the high-quality color image;
a generative model training module configured to substitute the corrected image and the high-quality color image into a loss function, compute the loss between them, optimize the parameters of the generative model according to the loss, input the corrected image and the high-quality color image into the trained discriminant model so that the discriminant model tries to distinguish them, and repeat the operations in the generative model training module until the discriminant model can no longer distinguish the corrected image from the high-quality color image.
Preferably, the system further comprises:
a preprocessing module configured to preprocess the color-distorted image set and the high-quality color image set, the preprocessing including dividing the color-distorted image set and the high-quality color image set into a training set and a test set, randomly cropping the training images in the training set to obtain a plurality of image blocks, and flipping the image blocks vertically and horizontally to obtain a variety of training images.
Preferably, the system further comprises:
a sampling module configured to perform regional sampling on the corrected image and the high-quality color image, the regional sampling being specifically:
randomly interpolated sampling on the line connecting a sample from the corrected image and a sample from the high-quality color image:
x_hat = α·x_r + (1 - α)·x_g
where α is a random number between 0 and 1, x_r denotes the generated sample region, and x_g denotes the real sample region.
A third aspect of the present application provides an image color correction device, the device comprising a processor and a memory:
the memory being configured to store program code and transmit the program code to the processor;
the processor being configured to execute, according to instructions in the program code, the steps of the image color correction method described in the first aspect above.
A fourth aspect of the present application provides a computer-readable storage medium, the computer-readable storage medium being configured to store program code, the program code being used to execute the method described in the first aspect above.
As can be seen from the above technical solutions, the present application has the following advantages:
The present application provides an image color correction method, comprising: dividing the collected image data into a color-distorted image set and a corresponding high-quality color image set; inputting a color-distorted image into the generative model to obtain a corrected image; fixing the parameters of the generative model, inputting the corrected image and the high-quality color image into the discriminant model, judging the images, and optimizing the parameters of the discriminant model until the discriminant model can distinguish the two groups of images; substituting the corrected image and the high-quality color image into a loss function, computing the loss, optimizing the parameters of the generative model, and inputting the corrected image and the high-quality color image into the trained discriminant model to distinguish them, until the discriminant model can no longer distinguish the two groups of images.
The generative adversarial approach of the present application integrates the optimization of the discriminant model and the generative model: through the mutual contest between the two models, the optimal weight parameters are finally obtained via the back-propagation algorithm, yielding a stable and reliable model that can correct any color-distorted image. The model trained by the present invention has strong resistance to noise interference; it needs to be trained only once and can then be used repeatedly, which is efficient and convenient, and its correction of image color is robust.
Brief Description of the Drawings
FIG. 1 is a flowchart of an embodiment of an image color correction method of the present application;
FIG. 2 is a schematic diagram of an embodiment of an image color correction system of the present application;
FIG. 3 is a schematic diagram of the model training process in a specific embodiment of an image color correction method of the present application.
Detailed Description
To enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
Referring to FIG. 1, FIG. 1 is a flowchart of an embodiment of an image color correction method of the present application, comprising:
101. Divide the collected image data into two data sets: a color-distorted image set and a high-quality color image set corresponding to the color-distorted image set.
It should be noted that every image in the color-distorted image set has a corresponding high-quality image in the high-quality color image set.
In a specific implementation, the color-distorted image set and the high-quality color image set also need to be preprocessed. The preprocessing may include: dividing the color-distorted image set and the high-quality color image set into a training set and a test set; randomly cropping the training images in the training set to obtain a plurality of image blocks; and flipping the image blocks vertically and horizontally to obtain a variety of training images.
It should be noted that, in a specific implementation, the image resolution needs to be higher than 500×400, and the image content may include landscapes, portraits, food, and the like. Some images may be selected from the image sets as the training set and others as the test set, for training and testing the generative adversarial network. In addition, because the sample images in the training set differ in size, they cannot be fed directly into the generative adversarial network for training. Therefore, before training, 150 image blocks of size 256×256 may be randomly cropped from each training image, and some of the blocks randomly flipped vertically and horizontally, so as to increase the diversity of the training images; at the same time, the test images are all resized to 900×600.
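The cropping-and-flipping augmentation described above can be sketched as follows; a random array stands in for a training photo, and the number of crops per call is reduced from 150 for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_crops_and_flips(img, n_crops=4, size=256):
    """Randomly crop `size`x`size` blocks from an HxWx3 image and randomly
    flip each block vertically and/or horizontally (data augmentation)."""
    h, w, _ = img.shape
    blocks = []
    for _ in range(n_crops):
        y = rng.integers(0, h - size + 1)
        x = rng.integers(0, w - size + 1)
        block = img[y:y + size, x:x + size]
        if rng.random() < 0.5:
            block = block[::-1]        # vertical flip
        if rng.random() < 0.5:
            block = block[:, ::-1]     # horizontal flip
        blocks.append(block)
    return blocks

image = rng.random((600, 500, 3))      # stand-in for a >=500x400 training image
patches = random_crops_and_flips(image)
```

The fixed 256×256 patch size matches the network input described above, while the random crop positions and flips supply the diversity the training set otherwise lacks.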
102. Input the color-distorted image into the constructed generative model to obtain a corrected image.
103. Train the discriminant model: fix the parameters of the generative model, input the corrected image and the high-quality color image into the discriminant model so that the discriminant model judges them, optimize the parameters of the discriminant model according to the judgment result, and repeat step 103 until the discriminant model can distinguish the corrected image from the high-quality color image, at which point training of the discriminant model is complete.
It should be noted that the discriminant model needs to be trained before the generative model. Since the purpose of training the discriminant model is to enable it to recognize the corrected image and the high-quality color image, the parameters of the generative model can be fixed; the corrected images output by the generative model and the high-quality color images are then input into the discriminant model, which judges them, and its parameters are optimized according to the judgment results. Different images are input repeatedly, and the parameters of the discriminant model are continually optimized, until the discriminant model can distinguish the corrected images from the high-quality color images.
104. Train the generative model: substitute the corrected image and the high-quality color image into the loss function, compute the loss between them, and optimize the parameters of the generative model according to the loss; input the corrected image and the high-quality color image into the trained discriminant model so that it tries to distinguish them; repeat step 104 until the discriminant model can no longer distinguish the corrected image from the high-quality color image, at which point training of the generative model is complete.
It should be noted that the generative model is a convolutional neural network. When a color-distorted image is input into the generative model, the model corrects it so that the corrected image approaches the original high-quality color image. The corrected image and the original high-quality color image are then fed into the loss function, which measures the gap between them, and the parameters of the generative model are optimized according to this gap, so that the images corrected by the generative model come ever closer to the high-quality color images; that is, the generative model learns the characteristics of the high-quality color images. In addition, the corrected image and the high-quality color image also need to be input into the trained discriminant model so that it tries to distinguish them; when the discriminant model can no longer tell the corrected image from the high-quality color image, the generative model has been trained.
The generative adversarial approach of the present application integrates the optimization of the discriminant model and the generative model: through the mutual contest between the two models, the optimal weight parameters are finally obtained via the back-propagation algorithm, yielding a stable and reliable model that can correct any color-distorted image. The model trained by the present invention has strong resistance to noise interference; it needs to be trained only once and can then be used repeatedly, which is efficient and convenient, and its correction of image color is robust.
The present application also provides a specific implementation, as shown in FIG. 3:
201. Randomly sample m image samples from the color-distorted image set to form one batch, denoted X.
202. Build the generative model. The generative model is a convolutional neural network that accepts a color-distorted image and outputs a corrected version of it. When a color-distorted image is input into the constructed generative model, the model corrects it so that the corrected image approaches the original high-quality color image; the corrected image and the original high-quality color image are then fed into the loss function, the gap between them is computed, and the parameters of the generative model are optimized according to the gap.
A generative model incorporating the Wasserstein distance is adopted; its loss function is specifically:
W(P_r, P_g) = max_{||D||_L <= 1} ( E_{x~P_r}[D(x)] - E_{x~P_g}[D(x)] )
where E denotes the expectation; z denotes the number of samples used to estimate the expectations; P_g denotes the distribution of samples produced by the generator; D denotes the discriminator output, i.e. the probability, for the real input fused with its label, that the input is real data; and ||D||_L <= 1 restricts D to functions satisfying the 1-Lipschitz constraint.
The input of the generative model is the color-distorted image; its activation function is the same as that of the discriminant model, with SELU used as the activation function. Model normalization uses pixelwise feature-vector normalization. It is placed after the convolutional layer and gives each normalized feature vector unit magnitude, which constrains the unhealthy competition between the generative and discriminant models that would otherwise drive signal magnitudes out of range. The normalization is specifically:
b_{x,y} = a_{x,y} / sqrt( (1/N) * sum_{j=0}^{N-1} (a^j_{x,y})^2 + ε )
where b_{x,y} is the normalized feature vector at pixel (x, y), a_{x,y} is the original feature vector, N is the number of feature maps, and ε is a small positive constant used to avoid division by zero.
The model optimizer may use RMSProp (root mean square propagation) to update the parameters of the generative model. RMSProp requires a global learning rate, initial parameters, a numerical stability constant, and a decay rate, and it adjusts the effective learning rate automatically.
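The RMSProp update rule referred to here can be sketched directly; the quadratic toy objective and the hyperparameter values below are illustrative assumptions:

```python
import numpy as np

def rmsprop_update(param, grad, cache, lr=1e-3, decay=0.9, eps=1e-8):
    """One RMSProp step: keep a running average of squared gradients (cache)
    and scale each step by the root mean square, so the effective learning
    rate adapts per parameter. `eps` is the numerical stability constant."""
    cache = decay * cache + (1.0 - decay) * grad ** 2
    param = param - lr * grad / (np.sqrt(cache) + eps)
    return param, cache

# Minimize f(p) = p^2 (gradient 2p) starting from p = 5.0.
p, cache = 5.0, 0.0
for _ in range(500):
    g = 2.0 * p
    p, cache = rmsprop_update(p, g, cache, lr=0.05)
```

The same update, applied per weight, is what the optimizer performs on the generative and discriminant model parameters during training.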
203. The generative model accepts the batch X, generates m corrected samples according to the data distribution of the image samples in the batch, and inputs the corrected samples into the discriminant model for the discriminant model to learn from.
204. Build the discriminant model. When the Wasserstein distance is used as the loss function of the generative adversarial network, a gradient penalty factor is added to the loss function. When computing the loss, samples must be drawn from the corrected image region and from the high-quality color image region, and also from between the two regions. The sampling method is specifically: introduce a random number α between 0 and 1 and sample by random interpolation between the corrected image region and the high-quality color image region. The formula is:
x_hat = α·x_r + (1 - α)·x_g
where α is a random number between 0 and 1, x_r denotes the generated sample region, and x_g denotes the real sample region.
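The interpolation sampling, together with the gradient penalty it feeds (a WGAN-GP-style term λ·(||grad D(x_hat)|| - 1)^2), can be illustrated with a toy differentiable critic. The critic, the penalty weight λ = 10, and the sample values are assumptions for illustration, not the patent's network:

```python
import numpy as np

rng = np.random.default_rng(2)

def critic(x):
    # Toy differentiable critic; in the patent this role is played by the
    # discriminant model D.
    return np.tanh(x).sum(axis=-1)

x_r = rng.normal(loc=1.0, size=(16, 3))   # generated (corrected) samples
x_g = rng.normal(loc=0.0, size=(16, 3))   # real high-quality samples

# Random interpolation between sample pairs: x_hat = a*x_r + (1 - a)*x_g.
alpha = rng.random((16, 1))
x_hat = alpha * x_r + (1.0 - alpha) * x_g

# Gradient of the toy critic at x_hat (d tanh(x)/dx = 1 - tanh(x)^2), and the
# gradient penalty lambda * (||grad D(x_hat)|| - 1)^2 evaluated at those points.
grad = 1.0 - np.tanh(x_hat) ** 2
penalty = 10.0 * (np.linalg.norm(grad, axis=-1) - 1.0) ** 2
```

Penalizing the critic's gradient norm at these interpolated points is what enforces the 1-Lipschitz constraint softly, in place of hard weight clipping.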
205. In the training procedure, the parameters of the generative model are first held fixed. The corrected image and the high-quality color image are input together into the discriminative model, which measures the gap between them and computes the discriminative loss. This loss is back-propagated from the output layer through the hidden layers to the input layer, and the RMSProp optimizer updates the discriminative model's parameters during this process. The updated discriminative model thereby learns the features of the corrected image and of the high-quality color image, enabling it to tell the two apart.
The present application adopts an iterative approach: the discriminative model keeps judging corrected images against high-quality color images until it can correctly distinguish the two.
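The discriminative (critic) loss of steps 204 and 205 combines the Wasserstein estimate with the gradient penalty evaluated at the interpolated samples. A minimal sketch, with the gradient norms assumed to be supplied by the training framework's automatic differentiation:

```python
import numpy as np

def critic_loss(d_real, d_fake, grad_norms, lam=10.0):
    """Wasserstein critic loss with gradient penalty:
    E[D(fake)] - E[D(real)] + lam * E[(||grad D(x_hat)|| - 1)^2].
    lam = 10 is the commonly used penalty weight (an assumption here,
    since the patent does not fix a value)."""
    gp = lam * np.mean((grad_norms - 1.0) ** 2)
    return np.mean(d_fake) - np.mean(d_real) + gp
```

Minimizing this loss drives the critic's scores on real images up and on corrected images down, while the penalty keeps the critic's gradient norm near 1 along the interpolation lines.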
206. The parameters of the discriminative model are fixed, and the generative model is trained. The generative model receives color-distorted image samples and produces corrected images; the corrected images and the high-quality color images are passed to the trained discriminative model to compute the loss. The generative model aims to produce images close enough to the high-quality color images to fool the discriminative model. The loss is back-propagated to update the generative model's parameters, using the RMSProp optimizer. After the update, the generative model produces corrected images again, which are input into the discriminative model to check whether it can still correctly separate them from the high-quality color images. This iteration continues until the discriminative model can no longer distinguish the corrected images from the high-quality color images.
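On the generator side, the corresponding Wasserstein objective is simply the negated critic score on the corrected images; minimizing it pushes the generator toward outputs the critic scores as real. A sketch:

```python
import numpy as np

def generator_loss(d_fake):
    """Generator loss under the Wasserstein objective: -E[D(G(x))]."""
    return -np.mean(d_fake)

# Alternating schedule of steps 205-206: update the critic with the
# generator frozen, then the generator with the critic frozen, and
# repeat until the critic can no longer separate corrected images
# from high-quality color images.
```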
207. The generative adversarial network may be grown progressively. The principle is to first train on the initially corrected images and then gradually transition to generating higher-resolution images. The goal of the transition phase is to bring the corrected images produced by the network closer to the high-quality color images. When a stage of training finishes, TensorFlow saves the network's weights to a folder, and the next stage's network is then constructed. The new network reuses the previous stage's weight parameters while the generative and discriminative models gain additional layers, after which the transition phase is carried out. During this phase the generative model performs upsampling and convolution and forms its final output as a weighted sum of the two paths, while the discriminative model performs the corresponding downsampling.
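The weighted sum used in the transition phase can be sketched as a fade-in blend: the upsampled output of the old, shallower path is mixed with the output of the newly added higher-resolution layers, with the blend weight ramping from 0 to 1 over the transition. Nearest-neighbour upsampling is an assumption here; the patent does not specify the upsampling kernel.

```python
import numpy as np

def upsample_nn(x, factor=2):
    """Nearest-neighbour upsampling of an (H, W, C) image."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

def fade_in(old_path, new_path, t):
    """Blend the upsampled old output with the new layers' output;
    t ramps from 0 (old path only) to 1 (new path only) over the
    transition phase."""
    return (1.0 - t) * old_path + t * new_path
```

At t = 0 the network behaves exactly like the previous, lower-resolution stage, so the added layers are introduced without disrupting what has already been learned.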
208. After the transition phase, the model enters a stabilization phase, in which it continues to update the network's weight parameters so that the generated images move closer to the high-quality color images. Steps 206 and 207 are repeated until the model can stably generate images with realistic colors. The model's hyperparameters are then adjusted and the above process repeated to select the optimal values, at which point training is complete.
The above are the method embodiments of the present application. The present application also provides an image color correction system; a schematic diagram of one embodiment of the system is shown in FIG. 2, and it includes:
a data acquisition module 301, configured to divide the collected image data into two data sets: a color-distorted image set and the high-quality color image set corresponding to it;
an image correction module 302, configured to input a color-distorted image into the constructed generative model to obtain a corrected image;
a discriminative model training module 303, configured to fix the parameters of the generative model and input the corrected image and the high-quality color image into the discriminative model, so that the discriminative model judges the corrected image against the high-quality color image; the discriminative model's parameters are optimized according to the judgment result, and the operations of this module are repeated until the discriminative model can distinguish the corrected image from the high-quality color image;
a generative model training module 304, configured to substitute the corrected image and the high-quality color image into the loss function, compute the loss between them, and optimize the parameters of the generative model according to the loss; the corrected image and the high-quality color image are then input into the trained discriminative model so that it attempts to distinguish them, and the operations of this module are repeated until the discriminative model can no longer distinguish the corrected image from the high-quality color image.
In a specific embodiment, the system further includes:
a preprocessing module, configured to preprocess the color-distorted image set and the high-quality color image set, the preprocessing including: dividing the color-distorted image set and the high-quality color image set into a training set and a test set; randomly cropping the training images in the training set to obtain multiple image patches; and flipping the patches vertically and horizontally to obtain a variety of training images;
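The preprocessing module's augmentation, random cropping into patches followed by random vertical and horizontal flips, can be sketched as follows. The patch size of 64 is an assumed value, since the patent does not fix one.

```python
import numpy as np

def augment(img, patch=64, rng=None):
    """Random crop to a patch x patch block, then random vertical and
    horizontal flips, multiplying the variety of training images."""
    rng = np.random.default_rng(rng)
    h, w = img.shape[:2]
    y = rng.integers(0, h - patch + 1)
    x = rng.integers(0, w - patch + 1)
    crop = img[y:y + patch, x:x + patch]
    if rng.random() < 0.5:
        crop = crop[::-1]        # vertical (up-down) flip
    if rng.random() < 0.5:
        crop = crop[:, ::-1]     # horizontal (left-right) flip
    return crop
```

Each source image thus yields many distinct patches, which is useful when the paired color-distorted and high-quality image sets are small.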
a sampling module, configured to perform region sampling on the corrected image and the high-quality color image, the region sampling being specifically:
random interpolation sampling along the line connecting a corrected-image sample and a high-quality color-image sample:

x̂ = α·x_r + (1 − α)·x_g
where α is a random number between 0 and 1, x_r denotes the generated-sample region, and x_g denotes the real-sample region.
The present application also provides an image color correction device, comprising a processor and a memory. The memory is configured to store program code and transmit the program code to the processor; the processor is configured to execute an embodiment of the image color correction method of the present application according to instructions in the program code.
The present application also provides a computer-readable storage medium configured to store program code, the program code being used to execute an embodiment of the image color correction method of the present application.
Those skilled in the art will clearly appreciate that, for convenience and brevity of description, the specific working processes of the system, device, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In this application, the terms "comprising" and "having", and any variations thereof, are intended to cover a non-exclusive inclusion; for example, a process, method, system, product, or device that comprises a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such a process, method, product, or device.
It should be understood that in this application, "at least one (item)" means one or more, and "a plurality" means two or more. "And/or" describes an association between related objects and indicates that three relationships may exist; for example, "A and/or B" may mean: only A exists, only B exists, or both A and B exist, where A and B may each be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following" and similar expressions refer to any combination of the listed items, including any combination of single or plural items. For example, "at least one of a, b, or c" may denote: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where each of a, b, and c may be single or multiple.
In the several embodiments provided in this application, it should be understood that the disclosed system and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules is only a division by logical function, and other divisions are possible in actual implementation; for instance, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or of other forms.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, may each exist physically as a separate unit, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented as a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application, or in essence the part that contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above embodiments are intended only to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application.
Claims (10)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010130318.7A | 2020-02-28 | 2020-02-28 | An image color correction method, system, device and medium |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN111277809A | 2020-06-12 |
Family

ID=71004166

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010130318.7A | CN111277809A (en), Pending | 2020-02-28 | 2020-02-28 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN111277809A (en) |
Cited By (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112132172A | 2020-08-04 | 2020-12-25 | 绍兴埃瓦科技有限公司 | Model training method, device, equipment and medium based on image processing |
| CN112435169A | 2020-07-01 | 2021-03-02 | 新加坡依图有限责任公司(私有) | Image generation method and device based on neural network |
| CN113706415A | 2021-08-27 | 2021-11-26 | 北京瑞莱智慧科技有限公司 | Training data generation method, countermeasure sample generation method, image color correction method and device |
| CN114117908A | 2021-11-19 | 2022-03-01 | 河南工业大学 | High-precision ASI sea ice density inversion algorithm based on CGAN for data correction |
Citations (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2018184192A1 | 2017-04-07 | 2018-10-11 | Intel Corporation | Methods and systems using camera devices for deep channel and convolutional neural network images and formats |
| CN108711138A | 2018-06-06 | 2018-10-26 | 北京印刷学院 | A kind of gray scale picture colorization method based on generation confrontation network |
2020-02-28: Application CN202010130318.7A filed in China; published as CN111277809A, status Pending.
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | Application publication date: 20200612 |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | |