CN108364270B - Color reduction method and device for color cast image - Google Patents
Color restoration method and device for color cast image
- Publication number
- CN108364270B (application CN201810496229.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- color
- network
- channel
- new
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
Abstract
An embodiment of the invention provides a color restoration method and device for color cast images. The method comprises: converting the color cast image into a grayscale image and performing channel superposition on the grayscale image to obtain a three-channel image; and inputting the three-channel image into a generator network to obtain an estimated color image. The invention effectively restores the colors of the color cast image, so that the real surgical scene is reproduced more faithfully, which benefits the accuracy and safety of the surgeon's operation.
Description
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to an image color restoration method and device.
Background
In laser surgery, a light source with high output power can saturate the photosensitive element of the endoscope, leaving the display screen washed out in bright white. The doctor is then effectively operating blind: the position of the treatment optical fiber inside the cavity and the reaction of the lesion cannot be judged, which increases the surgical risk. To avoid this, an optical filter is typically added to the endoscope lens to block the glare. This prevents the display from whiting out, but the endoscope image obtained through the filter suffers severe color cast, the real surgical scene cannot be restored, and the accuracy and safety of the doctor's operation are compromised.
Disclosure of Invention
Embodiments of the invention provide a color restoration method and device for color cast images, to solve the problem in the prior art that endoscope images obtained through an optical filter suffer severe color cast, fail to restore the real surgical scene, and compromise the accuracy and safety of the doctor's operation.
An embodiment of the invention provides a color restoration method for color cast images, comprising: converting the color cast image into a grayscale image, and performing channel superposition on the grayscale image to obtain a three-channel image; and inputting the three-channel image into a generator network to obtain an estimated color image.
An embodiment of the invention provides a color restoration device for color cast images, comprising a three-channel image acquisition module and an estimated color image acquisition module. The three-channel image acquisition module converts the color cast image into a grayscale image and performs channel superposition on it to obtain a three-channel image; the estimated color image acquisition module inputs the three-channel image into a generator network to obtain an estimated color image.
An embodiment of the invention provides color restoration equipment for color cast images, comprising: at least one processor; and at least one memory communicatively coupled to the processor, wherein the memory stores program instructions executable by the processor, and the processor, by calling these instructions, can perform the method described above.
Embodiments of the present invention provide a non-transitory computer readable storage medium storing computer instructions that cause a computer to perform the method described above.
The color restoration method and device for color cast images provided by embodiments of the invention convert the color cast image into a grayscale image, perform channel superposition on the grayscale image to obtain a three-channel image, and input the three-channel image into a generator network to obtain an estimated color image. This effectively restores the colors of the color cast image, so that the real surgical scene is reproduced more faithfully, improving the accuracy and safety of the surgeon's operation.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a flowchart of a color restoration method for color cast images according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a discriminator network according to the present invention;
FIG. 3 is a schematic diagram of a generator network according to the present invention;
FIG. 4 is a schematic structural diagram of a color restoration device for color cast images according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides a color restoration method for a color cast image, comprising: 101. converting the color cast image into a grayscale image, and performing channel superposition on the grayscale image to obtain a three-channel image; 102. inputting the three-channel image into a generator network to obtain an estimated color image.
In this embodiment, for convenience of processing, the color cast image may be cropped to a predetermined size, for example 224 × 224. The color cast image may be an endoscope image that is obtained through an optical filter and therefore exhibits color cast. Because the generator network expects a three-channel input carrying the brightness information, the color cast image is converted into a grayscale image and the single gray channel is superposed (replicated) to produce a three-channel image. The generator network extracts high-level feature information from the image and, based on that information, establishes the correspondence between the brightness information and the color information of the image, so that color restoration can be performed more accurately.
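The preprocessing step above can be sketched as follows (a minimal NumPy illustration; the function name, the center crop, and the BT.601 luma weights are our own choices — the patent only specifies grayscale conversion, a 224 × 224 crop, and channel superposition):

```python
import numpy as np

def to_three_channel_gray(bgr_image, size=224):
    """Crop a color-cast image, convert it to grayscale, and stack the
    single channel three times to form a three-channel network input."""
    # Center-crop to size x size (the embodiment crops to 224 x 224).
    h, w = bgr_image.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    crop = bgr_image[top:top + size, left:left + size]
    # Grayscale via BT.601 luma weights (channel order assumed B, G, R).
    gray = (0.114 * crop[..., 0] + 0.587 * crop[..., 1]
            + 0.299 * crop[..., 2]).astype(np.float32)
    # Channel superposition: replicate the gray channel three times.
    return np.stack([gray, gray, gray], axis=-1)

img = np.random.randint(0, 256, (300, 320, 3), dtype=np.uint8)
three = to_three_channel_gray(img)
print(three.shape)  # (224, 224, 3)
```

All three channels of the result are identical by construction, which is what lets a network designed for color input consume a brightness-only image.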
The color restoration method for color cast images provided by this embodiment converts the color cast image into a grayscale image, performs channel superposition to obtain a three-channel image, and inputs the three-channel image into a generator network to obtain an estimated color image, effectively restoring the colors of the color cast image, so that the real surgical scene is reproduced more faithfully, improving the accuracy and safety of the surgeon's operation.
As an optional embodiment, before inputting the three-channel image into the generator network to obtain the estimated color image, the method further includes: converting a plurality of real color images into YUV space and training an up-sampling convolutional network; and obtaining the generator network from the residual neural network and the trained up-sampling convolutional network.
In this embodiment, the real color images are cropped to the same size as the color cast image, 224 × 224. The real color images are normal pictures without color cast. In YUV space, the brightness information and the color information of an image are represented separately, so by converting the real color images into YUV space, the brightness information can serve as sample data and the color information as label data for training the up-sampling convolutional network. The residual neural network extracts the high-level feature information of the image, and the up-sampling convolutional network establishes, based on this feature information, the correspondence between the brightness information and the color information of the image.
As an optional embodiment, converting a plurality of real color images into YUV space and training the up-sampling convolutional network specifically comprises: converting the real color images into YUV space to obtain the Y, U, and V images corresponding to each real color image; performing channel superposition on the Y image of each real color image to obtain its corresponding three-channel image; and training the up-sampling convolutional network with the three-channel images of all real color images as sample data and the U and V images of all real color images as label data.
In this embodiment, the up-sampling convolutional network inside the generator network is trained so that the generator network can better represent the correspondence between the color information and the brightness information of an image. Real color images are used for training because, in true color images without color cast, this correspondence is the most accurate. In YUV space, the Y image carries the brightness information of the image, i.e., it is the grayscale image.
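The construction of sample and label data described above can be sketched as follows (a minimal NumPy illustration; the BT.601 RGB-to-YUV matrix is one common convention — the patent does not fix a specific YUV variant — and the function name is ours):

```python
import numpy as np

# BT.601 RGB -> YUV conversion matrix (one common convention).
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])

def make_training_pair(rgb):
    """From one real color image, build (sample, label) for the
    up-sampling network: sample = the Y plane stacked into three
    channels, label = the U and V planes."""
    yuv = rgb.astype(np.float32) @ RGB2YUV.T
    y, u, v = yuv[..., 0], yuv[..., 1], yuv[..., 2]
    sample = np.stack([y, y, y], axis=-1)   # three-channel Y image
    label = np.stack([u, v], axis=-1)       # U and V as label data
    return sample, label

rgb = np.random.rand(224, 224, 3).astype(np.float32)
x, t = make_training_pair(rgb)
```

The sample carries only brightness, exactly like the three-channel image built from a color cast input, so the trained network can later be applied to color cast images unchanged.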
As an optional embodiment, after obtaining the generator network from the residual neural network and the trained up-sampling convolutional network, the method further includes: inputting all real color images, together with the estimated color image that the generator network produces for each real color image, into a discriminator network for adversarial learning, so as to optimize the parameters of the generator network.
In this embodiment, the estimated color image corresponding to each real color image is obtained by feeding the three-channel image of that real color image into the generator network. To improve the color restoration quality of the generated images, the parameters of the generator network are optimized: all real color images and their corresponding estimated color images are input into the discriminator network for adversarial learning, and the color difference between the color-cast-free real images and the generated estimated images drives the optimization of the generator's parameters. The structure of the discriminator network may be as shown in fig. 2: eight convolutional layers followed in sequence by a fully connected layer, an activation function, and another fully connected layer. The convolution kernel size is preferably 3 × 3, and the LeakyReLU activation function is preferred. As the depth of the network increases, the number of feature maps of the convolutional layers grows from 64 to 512. The discriminator network may also take other structures capable of optimizing the generator's parameters; no limitation is imposed here.
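The layer plan just described can be written out as data, as a rough sketch (an SRGAN-style reading of the structure; the pairing of convolutions per channel width and the fully connected width of 1024 are our assumptions, not stated in the patent):

```python
def discriminator_spec(base=64, blocks=4):
    """Layer plan for an SRGAN-style discriminator: eight 3x3
    convolutional layers whose feature-map count doubles from `base`
    to `base * 8`, followed by a fully connected layer, an activation,
    and a final fully connected layer producing one probability."""
    convs = []
    for i in range(blocks):
        c = base * (2 ** i)  # 64, 128, 256, 512
        convs += [("conv3x3", c, "LeakyReLU"), ("conv3x3", c, "LeakyReLU")]
    return convs + [("fc", 1024, "LeakyReLU"), ("fc", 1, "sigmoid")]

spec = discriminator_spec()
print(len([layer for layer in spec if layer[0] == "conv3x3"]))  # 8
```

Reading the plan back confirms the growth of feature maps from 64 to 512 described in the text.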
As an optional embodiment, inputting all real color images and the estimated color image generated for each of them into the discriminator network and performing adversarial learning to optimize the generator's parameters specifically comprises: fixing the parameters of the generator network, inputting all real color images and the estimated color images first generated by the generator network for each real color image into the discriminator network for adversarial learning, and obtaining first optimization parameters of the discriminator network; taking the first optimization parameters as the parameters of the discriminator network, fixing them, and minimizing the loss function of the generator network to obtain second optimization parameters of the generator network; taking the second optimization parameters as the parameters of the generator network, fixing them, inputting all real color images and the estimated color images generated by the generator network under the second optimization parameters into the discriminator network for adversarial learning, and obtaining new first optimization parameters of the discriminator network; taking the new first optimization parameters as the parameters of the discriminator network, fixing them, and minimizing the loss function of the generator network to obtain new second optimization parameters of the generator network; and repeating this acquisition of new second optimization parameters until the error between each real color image and its corresponding estimated color image is smaller than a preset error.
In this embodiment, the discriminator network preferably follows the SRGAN discriminator. The adversarial learning that yields the first optimization parameters of the discriminator network is realized by minimizing the loss function of the discriminator network; this loss can be chosen according to actual needs, a cross-entropy loss being preferred. The loss function $l$ of the generator network comprises a content loss $l_X$ and an adversarial loss $l_{Gen}$:

$$l = l_X + 10^{-3}\, l_{Gen}$$

The content loss $l_X$ consists of the MSE loss $l_{MSE}$ between the estimated color image and the real color image together with the visual-similarity loss $l_{VGG}$; the adversarial loss $l_{Gen}$ is the counterpart of the discriminator network's loss. The MSE loss is

$$l_{MSE} = \frac{1}{WH}\sum_{x=1}^{W}\sum_{y=1}^{H}\bigl(I^{RC}_{x,y} - I^{GC}_{x,y}\bigr)^{2}$$

where $W$ and $H$ are the width and height of the image, $I^{RC}_{x,y}$ is the pixel value at $(x,y)$ of the real color image $I^{RC}$, and $I^{GC}_{x,y}$ is the pixel value at $(x,y)$ of the estimated color image $I^{GC}$ produced by the generator. The visual-similarity loss is

$$l_{VGG} = \frac{1}{W_{i,j}H_{i,j}}\sum_{x=1}^{W_{i,j}}\sum_{y=1}^{H_{i,j}}\bigl(\phi_{i,j}(I^{RC})_{x,y} - \phi_{i,j}(I^{GC})_{x,y}\bigr)^{2}$$

where $\phi_{i,j}(\cdot)$ denotes the feature map extracted by a pre-trained 19-layer VGG network at the $j$-th convolutional layer before the $i$-th max-pooling layer, $\phi_{i,j}(I^{RC})_{x,y}$ and $\phi_{i,j}(I^{GC})_{x,y}$ are the values of those feature maps at $(x,y)$, and $W_{i,j}$ and $H_{i,j}$ are the width and height of the feature map. The adversarial loss is

$$l_{Gen} = \sum_{n=1}^{N} -\log D\bigl(I^{GC}_n\bigr)$$

where $D(I^{GC}_n)$ is the probability that the discriminator judges the generated image $I^{GC}_n$ to be a real color image, and $N$ is the number of training images.
Minimizing the loss function of the generator network yields the new second optimization parameters of the generator network. It should be noted that the generator network comprises a residual neural network and an up-sampling network; during the optimization of the generator network, the parameters of the residual neural network are held fixed, and only the parameters of the up-sampling network are optimized.
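A minimal numerical sketch of this combined generator loss (the helper names are ours; the VGG term is passed in as a precomputed value, since evaluating it requires a pre-trained VGG-19 network):

```python
import numpy as np

def mse_loss(real, fake):
    """l_MSE: mean squared pixel error over a W x H image."""
    return np.mean((real - fake) ** 2)

def adversarial_loss(d_probs):
    """l_Gen: sum over the batch of -log D(generated image)."""
    return np.sum(-np.log(d_probs))

def generator_loss(real, fake, d_probs, vgg_loss=0.0):
    """l = l_X + 1e-3 * l_Gen, with the content loss
    l_X = l_MSE + l_VGG (the VGG term supplied by the caller)."""
    return mse_loss(real, fake) + vgg_loss + 1e-3 * adversarial_loss(d_probs)
```

If the discriminator is fully fooled (probabilities of 1.0) and the generated image matches the real one, the loss collapses to the VGG term alone, which matches the formulas above.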
As an optional embodiment, inputting the three-channel image into the generator network to obtain the estimated color image specifically comprises: inputting the three-channel image into the residual neural network to obtain the output feature maps of each layer; taking the last layer as the current layer, performing a convolution operation on the output feature maps of the current layer, up-sampling the result to the size of the output feature maps of the layer preceding the current layer, and adding it to those feature maps to obtain an intermediate map; taking the layer preceding the current layer as the new current layer, performing a convolution operation on the intermediate map, up-sampling it to the size of the output feature maps of the layer preceding the new current layer, and adding it to those feature maps to obtain a new intermediate map; repeating this acquisition of new intermediate maps until the new intermediate map has the same size as the three-channel image; and obtaining the estimated color image from the new intermediate map of that size and the grayscale image.
In this embodiment, the process of inputting the three-channel image into the generator network and obtaining the estimated color image can be as shown in fig. 3. The three-channel image is obtained by converting the color cast image into a grayscale image, cropping it to 224 × 224, and superposing the channel to produce a three-channel image of size 224 × 224 × 3. After this image is input into the residual neural network, the output feature maps of the successive layers have sizes 112 × 112, 56 × 56, 28 × 28, 14 × 14, and 7 × 7, with 64, 256, 512, 1024, and 2048 maps respectively. Taking the last layer as the current layer, a convolution with 3 × 3 kernels is applied to its output feature maps, i.e., the 2048 maps of size 7 × 7 produced by the residual neural network, yielding 1024 maps of size 7 × 7; these are up-sampled to 1024 maps of size 14 × 14 and added to the output feature maps of the layer preceding the current layer, the 1024 maps of size 14 × 14 produced by the residual neural network, to obtain 1024 intermediate maps of size 14 × 14.
Taking the layer preceding the current layer as the new current layer, i.e., the layer whose output feature maps are the 1024 maps of size 14 × 14, a convolution with 3 × 3 kernels is applied to the 1024 intermediate maps of size 14 × 14 to obtain 512 maps of size 14 × 14; these are up-sampled to 512 maps of size 28 × 28 and added to the output feature maps of the layer preceding the new current layer, the 512 maps of size 28 × 28, to obtain 512 new intermediate maps of size 28 × 28. This acquisition of new intermediate maps is repeated until the new intermediate maps have the same size as the three-channel image, i.e., 3 maps of size 224 × 224. A convolution with 3 × 3 kernels is then applied to these 3 maps of size 224 × 224 to obtain 2 maps of size 224 × 224, which are channel-superposed with the grayscale image to obtain the estimated color image.
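One decoder step of this up-sample-and-add fusion can be sketched as follows (a NumPy illustration with nearest-neighbor up-sampling; the 1 × 1 channel projection stands in for the 3 × 3 convolution of the text, and all names are ours):

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbor up-sampling of an (H, W, C) map to (2H, 2W, C)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def fuse(deep, skip, w):
    """One decoder step: project the deeper feature map to the skip
    layer's channel count (here a 1x1 projection, standing in for the
    3x3 convolution), up-sample 2x, and add the residual network's
    feature map from the preceding layer."""
    projected = deep @ w          # (H, W, C_deep) @ (C_deep, C_skip)
    return upsample2x(projected) + skip

rng = np.random.default_rng(0)
deep = rng.standard_normal((7, 7, 2048))     # last-layer feature maps
skip = rng.standard_normal((14, 14, 1024))   # preceding-layer feature maps
w = rng.standard_normal((2048, 1024)) * 0.01
mid = fuse(deep, skip, w)
print(mid.shape)  # (14, 14, 1024)
```

Repeating this step with channel counts 1024 → 512 → 256 → 64 → 3 walks the intermediate maps back up to the 224 × 224 input size, mirroring the sequence in the text.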
In this process, the size of each layer's feature maps produced by the residual neural network and the number of its layers are fixed, as is the convolution kernel size. During up-sampling, the activation function of the network is preferably the ReLU function; for the convolution that turns the 3 new intermediate maps of size 224 × 224 into 2 maps of size 224 × 224, the activation function is preferably the sigmoid function. A batch normalization (BatchNorm) layer is added after each activation layer.
As an optional embodiment, obtaining the estimated color image from the new intermediate map of the same size as the three-channel image and the grayscale image specifically comprises: performing a convolution operation on the new intermediate map and then channel-superposing the result with the grayscale image to obtain a YUV image; and converting the YUV image into RGB space to obtain the estimated color image.
In this embodiment, the estimated color image may be obtained by applying a convolution with 3 × 3 kernels to the 3 new intermediate maps of size 224 × 224 to obtain 2 maps of size 224 × 224, channel-superposing these 2 maps with the grayscale image to obtain a YUV image, and converting the YUV image into RGB space.
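The final YUV-to-RGB step can be sketched as follows (a NumPy illustration; the BT.601 matrix is one common convention — the patent does not pin down the exact YUV variant — and with zero chroma planes the output reduces to the grayscale values):

```python
import numpy as np

# BT.601 RGB -> YUV matrix (one common convention) and its inverse.
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def yuv_to_rgb(y, u, v):
    """Stack the grayscale (Y) plane with the two predicted chroma
    planes and convert back to RGB to get the estimated color image."""
    yuv = np.stack([y, u, v], axis=-1)
    return yuv @ YUV2RGB.T

y = np.full((224, 224), 0.5, dtype=np.float32)        # grayscale plane
uv_pred = np.zeros((224, 224, 2), dtype=np.float32)    # network's 2 maps
rgb = yuv_to_rgb(y, uv_pred[..., 0], uv_pred[..., 1])
```

Because the Y plane is the original grayscale image, the brightness of the restored image is guaranteed to match the input; only the chroma is predicted.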
As shown in fig. 4, an embodiment of the present invention provides a color-cast image color restoration apparatus, including: a three-channel image acquisition module 401 and an estimated color image acquisition module 402; the three-channel image obtaining module 401 is configured to convert the color cast image into a gray image, and perform channel superposition processing on the gray image to obtain a three-channel image; the estimated color image obtaining module 402 is configured to input the three-channel image to a generator network to obtain an estimated color image.
The color restoration device for color cast images provided by this embodiment converts the color cast image into a grayscale image, performs channel superposition to obtain a three-channel image, and inputs the three-channel image into a generator network to obtain an estimated color image, effectively restoring the colors of the color cast image, so that the real surgical scene is reproduced more faithfully, improving the accuracy and safety of the surgeon's operation.
The embodiment of the invention provides color cast image color restoration equipment, which comprises: at least one processor; and at least one memory communicatively coupled to the processor, wherein: the memory stores program instructions executable by the processor, and the processor calls the program instructions to execute the method provided by the method embodiments, for example, the method includes: 101. converting the color cast image into a gray image, and performing channel superposition processing on the gray image to obtain a three-channel image; 102. and inputting the three-channel image into a generator network to obtain an estimated color image.
The present embodiments provide a non-transitory computer-readable storage medium storing computer instructions that cause the computer to perform the methods provided by the above method embodiments, for example, including: 101. converting the color cast image into a gray image, and performing channel superposition processing on the gray image to obtain a three-channel image; 102. and inputting the three-channel image into a generator network to obtain an estimated color image.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (9)
1. A method for color restoration of a color cast image, comprising:
converting the color cast image into a gray image, and performing channel superposition processing on the gray image to obtain a three-channel image;
inputting the three-channel image into a generator network to obtain an estimated color image;
the color cast image is an endoscope image which is obtained through an optical filter and generates color cast;
the inputting the three-channel image into a generator network to obtain an estimated color image specifically comprises:
inputting the three-channel image into a residual error neural network to obtain an output characteristic diagram of each layer;
taking the last layer as the current layer, performing a convolution operation on the output feature map of the current layer, up-sampling the result to the size of the output feature map of the layer preceding the current layer, and adding it to that output feature map to obtain an intermediate map;
taking the layer preceding the current layer as a new current layer, performing a convolution operation on the intermediate map, up-sampling it to the size of the output feature map of the layer preceding the new current layer, and adding it to that output feature map to obtain a new intermediate map;
repeating the acquisition process of the new intermediate image until the size of the new intermediate image is the same as that of the three-channel image;
and obtaining the estimated color image according to the new intermediate map having the same size as the three-channel image and the grayscale image.
2. The method of claim 1, wherein before inputting the three-channel image into the generator network to obtain the estimated color image, the method further comprises:
converting a plurality of real color images into YUV space, and training an up-sampling convolution network;
and obtaining a generator network according to the residual error neural network and the trained up-sampling convolution network.
3. The method of claim 2, wherein converting the plurality of real color images to YUV space, training the upsampled convolutional network specifically comprises:
converting a plurality of real color images into YUV space to obtain Y, U and V images corresponding to each real color image;
carrying out channel superposition processing on the Y image corresponding to each real color image to obtain a three-channel image corresponding to each real color image;
and taking three-channel images corresponding to all the real color images as sample data, and taking U and V images corresponding to all the real color images as label data, and training the up-sampling convolution network.
4. The method of claim 2, wherein obtaining the generator network from the residual neural network and the trained upsampled convolutional network further comprises:
inputting all real color images and the estimated color images generated by the generator network corresponding to each real color image into a discriminator network, and performing adversarial learning to optimize the parameters of the generator network.
5. The method of claim 4, wherein inputting all real color images and the estimated color images generated by the generator network corresponding to each real color image into a discriminator network, and wherein performing the antagonistic learning to optimize the parameters of the generator network specifically comprises:
fixing the parameters of the generator network, inputting all the real color images and the estimated color images first generated by the generator network for each real color image into the discriminator network for adversarial learning, and obtaining first optimization parameters of the discriminator network; taking the first optimization parameters as the parameters of the discriminator network, fixing the parameters of the discriminator network, and minimizing the loss function of the generator network to obtain second optimization parameters of the generator network;
taking the second optimization parameters as the parameters of the generator network, fixing the parameters of the generator network, inputting all the real color images and the estimated color images generated by the generator network under the second optimization parameters for each real color image into the discriminator network for adversarial learning, and obtaining new first optimization parameters of the discriminator network; taking the new first optimization parameters as the parameters of the discriminator network, fixing the parameters of the discriminator network, and minimizing the loss function of the generator network to obtain new second optimization parameters of the generator network;
and repeating the acquisition process for the new second optimization parameters until the error between each real color image and its corresponding estimated color image is smaller than a preset error.
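Claim 5's alternating scheme is the standard GAN training loop: optimize the discriminator with the generator's parameters frozen, then the generator with the discriminator frozen, and repeat until every reconstruction falls within tolerance. The structural sketch below uses toy stand-ins; the `train_D`/`train_G` bodies are hypothetical placeholders, not the patent's actual loss functions.

```python
def adversarial_training(G, reals, train_D, train_G, error, tol, max_rounds=100):
    """Alternate the two fixed-parameter phases of claim 5 until every
    real image and its generated counterpart agree within `tol`."""
    for _ in range(max_rounds):
        fakes = [G(x) for x in reals]   # generator output, G's parameters fixed
        train_D(reals, fakes)           # phase 1: optimise the discriminator
        train_G(reals)                  # phase 2: D fixed, minimise G's loss
        if max(error(r, G(r)) for r in reals) < tol:
            break                       # all image pairs within tolerance

# toy stand-ins: G scales its input by w, and train_G nudges w toward 1
state = {"w": 0.5}
def G(x): return state["w"] * x
def train_D(reals, fakes): pass                           # placeholder update
def train_G(reals): state["w"] += 0.5 * (1 - state["w"])  # placeholder loss step
def error(r, f): return abs(r - f)

reals = [1.0, 2.0, 3.0]
adversarial_training(G, reals, train_D, train_G, error, tol=1e-3)
print(abs(state["w"] - 1.0) < 1e-3)  # True
```

In a real implementation, `train_D` would maximize the discriminator's ability to separate real from generated images, and `train_G` would minimize the generator loss against the frozen discriminator, exactly as the two fixed-parameter phases above alternate.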
6. The method according to claim 1, wherein obtaining the estimated color image from the new intermediate image having the same size as the three-channel image and from the grayscale image specifically comprises:
performing a convolution operation on the new intermediate image having the same size as the three-channel image, and then performing channel superposition on the result and the grayscale image to obtain a YUV image;
and converting the YUV image into an RGB space to obtain an estimated color image.
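The final conversion of claim 6 can be sketched as below, again assuming the BT.601 convention (the scale factors 0.492 and 0.877 and their inverses are one common choice; the patent does not specify the matrix).

```python
import numpy as np

def yuv_to_rgb(yuv):
    # invert Y = 0.299R + 0.587G + 0.114B, U = 0.492(B - Y), V = 0.877(R - Y)
    y, u, v = yuv[..., 0], yuv[..., 1], yuv[..., 2]
    r = y + v / 0.877
    b = y + u / 0.492
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.stack([r, g, b], axis=-1)

# channel-superpose the grayscale image (as Y) with the predicted U, V maps
gray = np.full((4, 4), 0.5)            # stand-in luminance channel
uv = np.zeros((4, 4, 2))               # stand-in chrominance prediction
yuv = np.concatenate([gray[..., None], uv], axis=-1)
rgb = yuv_to_rgb(yuv)
print(rgb.shape)  # (4, 4, 3)
```

With zero chrominance the output collapses to a neutral gray, as expected: the network's U and V predictions are what re-introduce color into the color-cast image.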
7. A color reduction apparatus for a color cast image, comprising: a three-channel image acquisition module and an estimated color image acquisition module;
the three-channel image acquisition module is used for converting the color cast image into a grayscale image and performing channel superposition processing on the grayscale image to obtain a three-channel image;
the estimated color image acquisition module is used for inputting the three-channel image into a generator network to obtain an estimated color image;
the color cast image is an endoscope image that is acquired through an optical filter and exhibits color cast;
the inputting the three-channel image into a generator network to obtain an estimated color image specifically comprises:
inputting the three-channel image into the residual neural network to obtain the output feature map of each layer;
taking the last layer as the current layer, performing a convolution operation on the output feature map of the current layer, up-sampling the result to the size of the output feature map of the layer preceding the current layer, and adding it to that feature map to obtain an intermediate image;
taking the layer preceding the current layer as the new current layer, performing a convolution operation on the intermediate image, up-sampling it to the size of the output feature map of the layer preceding the new current layer, and adding it to that feature map to obtain a new intermediate image;
repeating the acquisition process of the new intermediate image until the size of the new intermediate image is the same as that of the three-channel image;
and obtaining the estimated color image from the new intermediate image having the same size as the three-channel image and from the grayscale image.
8. A color reduction apparatus for a color cast image, comprising:
at least one processor; and
at least one memory communicatively coupled to the processor, wherein:
the memory stores program instructions executable by the processor, the processor invoking the program instructions to perform the method of any of claims 1 to 6.
9. A non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810496229.7A CN108364270B (en) | 2018-05-22 | 2018-05-22 | Color reduction method and device for color cast image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810496229.7A CN108364270B (en) | 2018-05-22 | 2018-05-22 | Color reduction method and device for color cast image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108364270A CN108364270A (en) | 2018-08-03 |
CN108364270B true CN108364270B (en) | 2020-11-06 |
Family
ID=63012232
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810496229.7A Active CN108364270B (en) | 2018-05-22 | 2018-05-22 | Color reduction method and device for color cast image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108364270B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109859288B (en) * | 2018-12-25 | 2023-01-10 | 苏州飞搜科技有限公司 | Image coloring method and device based on generation countermeasure network |
CN110276731B (en) * | 2019-06-17 | 2022-08-09 | 艾瑞迈迪科技石家庄有限公司 | Endoscopic image color reduction method and device |
CN110930333A (en) * | 2019-11-22 | 2020-03-27 | 北京金山云网络技术有限公司 | Image restoration method and device, electronic equipment and computer-readable storage medium |
CN111353585A (en) * | 2020-02-25 | 2020-06-30 | 北京百度网讯科技有限公司 | Structure searching method and device of neural network model |
CN111508038A (en) * | 2020-04-17 | 2020-08-07 | 北京百度网讯科技有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN111898448B (en) * | 2020-06-30 | 2023-10-24 | 北京大学 | Pedestrian attribute identification method and system based on deep learning |
CN111898449B (en) * | 2020-06-30 | 2023-04-18 | 北京大学 | Pedestrian attribute identification method and system based on monitoring video |
CN113870371B (en) * | 2021-12-03 | 2022-02-15 | 浙江霖研精密科技有限公司 | Picture color transformation device and method based on generation countermeasure network and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW200641723A (en) * | 2005-02-11 | 2006-12-01 | Hewlett Packard Development Co | Decreasing aliasing in electronic images |
CN101930596A (en) * | 2010-07-19 | 2010-12-29 | 赵全友 | Color constancy method in two steps under a kind of complex illumination |
CN107451963A (en) * | 2017-07-05 | 2017-12-08 | 广东欧谱曼迪科技有限公司 | Multispectral nasal cavity endoscope Real-time image enhancement method and endoscopic imaging system |
CN107527332A (en) * | 2017-10-12 | 2017-12-29 | 长春理工大学 | Enhancement Method is kept based on the low-light (level) image color for improving Retinex |
- 2018-05-22 CN CN201810496229.7A patent/CN108364270B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW200641723A (en) * | 2005-02-11 | 2006-12-01 | Hewlett Packard Development Co | Decreasing aliasing in electronic images |
US7525583B2 (en) * | 2005-02-11 | 2009-04-28 | Hewlett-Packard Development Company, L.P. | Decreasing aliasing in electronic images |
CN101930596A (en) * | 2010-07-19 | 2010-12-29 | 赵全友 | Color constancy method in two steps under a kind of complex illumination |
CN107451963A (en) * | 2017-07-05 | 2017-12-08 | 广东欧谱曼迪科技有限公司 | Multispectral nasal cavity endoscope Real-time image enhancement method and endoscopic imaging system |
CN107527332A (en) * | 2017-10-12 | 2017-12-29 | 长春理工大学 | Enhancement Method is kept based on the low-light (level) image color for improving Retinex |
Non-Patent Citations (1)
Title |
---|
"Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network";Christian Ledig et al;《IEEE》;20170726;期刊第2节 * |
Also Published As
Publication number | Publication date |
---|---|
CN108364270A (en) | 2018-08-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108364270B (en) | Color reduction method and device for color cast image | |
CN110008817B (en) | Model training method, image processing method, device, electronic equipment and computer readable storage medium | |
CN110276731B (en) | Endoscopic image color reduction method and device | |
JP6905602B2 (en) | Image lighting methods, devices, electronics and storage media | |
EP4105877A1 (en) | Image enhancement method and image enhancement apparatus | |
CN113454981A (en) | Techniques for multi-exposure fusion of multiple image frames based on convolutional neural network and for deblurring multiple image frames | |
US10817984B2 (en) | Image preprocessing method and device for JPEG compressed file | |
CN110675336A (en) | Low-illumination image enhancement method and device | |
CN109817170B (en) | Pixel compensation method and device and terminal equipment | |
CN113052775B (en) | Image shadow removing method and device | |
US20220398698A1 (en) | Image processing model generation method, processing method, storage medium, and terminal | |
CN113095470A (en) | Neural network training method, image processing method and device, and storage medium | |
AU2013258866A1 (en) | Reducing the dynamic range of image data | |
CN113096023B (en) | Training method, image processing method and device for neural network and storage medium | |
CN113284061A (en) | Underwater image enhancement method based on gradient network | |
Rasheed et al. | LSR: Lightening super-resolution deep network for low-light image enhancement | |
CN115526803A (en) | Non-uniform illumination image enhancement method, system, storage medium and device | |
Wang et al. | Single Underwater Image Enhancement Based on $L_{P}$-Norm Decomposition
US20220057620A1 (en) | Image processing method for microscopic image, computer readable medium, image processing apparatus, image processing system, and microscope system | |
CN114049264A (en) | Dim light image enhancement method and device, electronic equipment and storage medium | |
CN112200719A (en) | Image processing method, electronic device and readable storage medium | |
WO2014002811A1 (en) | Image-processing device, image-processing method, and image-processing program | |
CN116468636A (en) | Low-illumination enhancement method, device, electronic equipment and readable storage medium | |
CN116362998A (en) | Image enhancement device, image enhancement method, electronic device, and storage medium | |
CN111754412A (en) | Method and device for constructing data pairs and terminal equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||