CN114463196B - Image correction method based on deep learning - Google Patents


Info

Publication number
CN114463196B
CN114463196B (application CN202111623814.7A)
Authority
CN
China
Prior art keywords
image
images
chromatic aberration
value
network
Prior art date
Legal status
Active
Application number
CN202111623814.7A
Other languages
Chinese (zh)
Other versions
CN114463196A (en)
Inventor
王玥
雷嘉锐
钱常德
孙焕宇
刘�东
Current Assignee
Jiaxing Research Institute of Zhejiang University
Original Assignee
Jiaxing Research Institute of Zhejiang University
Priority date
Filing date
Publication date
Application filed by Jiaxing Research Institute of Zhejiang University filed Critical Jiaxing Research Institute of Zhejiang University
Priority to CN202111623814.7A
Publication of CN114463196A
Application granted
Publication of CN114463196B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4023 Scaling of whole images or parts thereof, e.g. expanding or contracting based on decimating pixels or lines of pixels; based on inserting pixels or lines of pixels
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image correction method based on deep learning, comprising the following steps: (1) using an image acquisition device with chromatic aberration and a better-quality device to shoot images covering, as far as possible, the same field of view, taken respectively as the chromatic-aberration images and the reference images; (2) computing the offset between the two shots with a template matching algorithm, cropping both images according to the offset, and dividing the result into a training set and a test set; (3) constructing an image correction model comprising a weight prediction network and n learnable 3D lookup tables; (4) feeding the chromatic-aberration images into the network, comparing the corrected images with the reference images, computing a loss function, training with minimization of the loss function as the objective, and updating the network parameters; (5) after model training is finished, applying the model for image correction. The method has simple operation steps, requires no manual setting of large numbers of parameters or hand-designed algorithms, and, while ensuring a good result, effectively reduces the time spent on manual image processing.

Description

Image correction method based on deep learning
Technical Field
The invention belongs to the field of image processing, and particularly relates to an image correction method based on deep learning.
Background
In general, when an image is captured with an imaging system, image quality suffers because of chromatic aberration in the lens used, or because the imaging hardware is mismatched with the image-capture hardware, producing chromatic aberration in the photograph. Degradation of image quality caused by chromatic aberration is a serious problem. Chromatic aberration arises mainly because the refractive index of a lens differs for light of different wavelengths, so light of different wavelengths comes to focus at different points, producing positional offsets between colors on the image plane. When the lens assembly is simplified for reasons of space, or a lens of higher magnification and NA is used, chromatic aberration becomes more pronounced and the overall brightness of the image also changes. Problems such as color crosstalk in the CCD's response to light can likewise create color artifacts in the image.
To reduce such chromatic aberration, better-quality image acquisition devices currently use lenses made of special glass materials or lenses produced by special machining processes. However, these approaches raise the manufacturing cost of the lens, making them difficult to apply widely to ordinary image acquisition devices.
For the chromatic aberration produced by ordinary image acquisition devices, algorithmic correction is the more effective choice. In this regard, Chinese patent application No. 200810212608.5 discloses a method of correcting chromatic aberration by image processing, comprising: analyzing the luminance signal of the input image to extract chromatic-aberration regions; computing the color gradients and the luminance component; taking the gradient differences between color components and the gradient of the input image's luminance respectively as a first and a second weight of the image's degree of chromatic aberration; and correcting the chromaticity of the input image's pixels according to the two weights, thereby correcting the image's chromatic aberration.
Chinese patent application No. 201610029519.1 discloses a method of correcting refractive chromatic aberration by image processing, comprising: computing the gradients of the image's green channel in the vertical and horizontal directions; thresholding to keep pixels with higher gradient values and obtain the regions containing refractive chromatic aberration; distinguishing the different strong-light regions with a binarization partitioning method; extracting each region in turn, expanding its boundary appropriately; and correcting it with an existing chromatic-aberration correction method.
Both of these schemes use traditional image processing and require extensive manual parameter tuning. Because their principle rests mainly on processing image edges, the algorithms take longer on complex images with many edges and textures. Moreover, on heavily textured images where the chromatic aberration is not obvious, or on images with alternating coverage, the number of parameters to tune only grows. A method that can correct image chromatic aberration quickly and adaptively is therefore needed.
Disclosure of Invention
The image correction method based on deep learning provided by the invention has simple operation steps, requires no manual setting of large numbers of parameters or hand-designed algorithms, and, while ensuring a good result, effectively reduces the time spent on manual image processing. The model runs fast, has few parameters, saves computing resources, and achieves real-time chromatic-aberration correction of aberrated images.
An image correction method based on deep learning comprises the following steps:
(1) Shooting a group of images exhibiting chromatic aberration with the image acquisition device to be corrected, and shooting, as reference images, another group of images with an image acquisition device of the same magnification but smaller chromatic aberration; the chromatic-aberration images correspond one-to-one with the reference images, forming a number of image pairs;
(2) Aligning the chromatic-aberration image with the reference image in each image pair, augmenting the aligned image pairs, and dividing them into a training set and a test set;
(3) Constructing an image correction model comprising a weight prediction network and n learnable 3D lookup tables; the 3D lookup tables establish the mapping from the chromatic-aberration image to the predicted reference image, and the number of output channels of the weight prediction network equals the number of 3D lookup tables;
when an image is input into the image correction model, it passes through the weight prediction network and the n 3D lookup tables; the feature maps output by the weight prediction network are upsampled by interpolation to the original image size, then weighted against the predicted reference images output by the 3D lookup tables and summed, giving the model's output;
(4) Training the image correction model with the training set: inputting the chromatic-aberration images of the training set into the model, comparing the model's output images with the reference images to obtain the value of the loss function, and updating the network parameters with minimization of the loss function as the objective;
(5) After the image correction model is trained, the image to be corrected is input into the image correction model to obtain a corrected image.
Further, in step (1), the two groups of photographed images cover, as far as possible, most of the colors of the usage scene, and the two groups cover the same field of view as far as possible.
In the step (2), the specific process of image alignment is as follows:
(2-1) taking a portion of the image common to the chromatic-aberration image and the reference image in each image pair as the template image, and recording its position coordinates (x1, y1);
(2-2) taking the template image as a sliding window, and respectively calculating the square sum of pixel differences of corresponding points when the template is at different positions on the reference image as a template matching value;
(2-3) recording a matching value of the template in the process of sliding on the reference image, wherein the closer the value is to 0, the higher the matching degree is;
(2-4) recording the coordinates (x2, y2) of the position where the matching value is closest to 0, and calculating the field-of-view offset between the reference image and the chromatic-aberration image as Δx and Δy;
(2-5) cropping the image according to the offset, leaving a portion where the fields of view of the corresponding two images overlap.
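The matching procedure of steps (2-1) to (2-5) can be sketched as follows. This is a minimal grayscale NumPy illustration (function and variable names are my own, not from the patent): the template cut from the chromatic-aberration image slides over the reference image, the sum of squared pixel differences is minimized, and both images are cropped to their overlapping region.

```python
import numpy as np

def align_by_template(aberr_img, ref_img, x1, y1, th, tw):
    """Align a chromatic-aberration image with its reference image by sliding
    a th x tw template taken at (x1, y1) over the reference image and
    minimizing the sum of squared differences (steps 2-1 .. 2-5)."""
    template = aberr_img[y1:y1 + th, x1:x1 + tw].astype(np.float64)
    H, W = ref_img.shape[:2]
    best, x2, y2 = np.inf, 0, 0
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            ssd = np.sum((ref_img[y:y + th, x:x + tw].astype(np.float64)
                          - template) ** 2)
            if ssd < best:                    # the closer to 0, the better the match
                best, x2, y2 = ssd, x, y
    dx, dy = x2 - x1, y2 - y1                 # field-of-view offset (Δx, Δy)
    # crop both images, leaving the portion where their fields of view overlap
    ax0, ay0 = max(0, -dx), max(0, -dy)
    rx0, ry0 = max(0, dx), max(0, dy)
    h = min(aberr_img.shape[0] - ay0, H - ry0)
    w = min(aberr_img.shape[1] - ax0, W - rx0)
    return (aberr_img[ay0:ay0 + h, ax0:ax0 + w],
            ref_img[ry0:ry0 + h, rx0:rx0 + w], dx, dy)
```

In practice the brute-force loop would be replaced by a vectorized routine such as OpenCV's `cv2.matchTemplate` with the squared-difference method, which implements the same matching value.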
In step (3), the input of each 3D lookup table is a color image with 224×224 resolution and three RGB color channels; the image passes in turn through trilinear interpolation and a 3D convolution layer with stride 1 and 3-channel output.
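A learnable, differentiable 3D lookup table can be sketched as below. This is an assumption-laden illustration, not the patent's verified implementation: the table size `dim`, the identity initialization, and the use of PyTorch's `grid_sample` (whose "bilinear" mode performs trilinear interpolation on 5-D inputs) in place of the 3D convolution named in the text are all my choices.

```python
import torch
import torch.nn.functional as F

class Learnable3DLUT(torch.nn.Module):
    """Each RGB pixel indexes a D x D x D color cube; the output color is
    trilinearly interpolated, so the table remains learnable by gradients."""
    def __init__(self, dim=33):
        super().__init__()
        # identity initialization: the table initially maps every color to itself
        r = torch.linspace(0, 1, dim)
        b, g, rr = torch.meshgrid(r, r, r, indexing="ij")
        self.table = torch.nn.Parameter(torch.stack([rr, g, b]))  # (3, D, D, D)

    def forward(self, img):                     # img: (N, 3, H, W), values in [0, 1]
        n = img.shape[0]
        grid = img.permute(0, 2, 3, 1)          # (N, H, W, 3): RGB as lookup coords
        grid = grid.unsqueeze(1) * 2 - 1        # (N, 1, H, W, 3), scaled to [-1, 1]
        lut = self.table.unsqueeze(0).expand(n, -1, -1, -1, -1)
        # 5-D grid_sample with mode="bilinear" performs trilinear interpolation
        out = F.grid_sample(lut, grid, mode="bilinear", align_corners=True)
        return out.squeeze(2)                   # (N, 3, H, W)
```

With the identity initialization, the table's output equals its input, which is a convenient starting point before training deforms the cube.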
The weight prediction network consists of the following layers connected in sequence: a bilinear resampling layer (UpsamplingBilinear2d); a 1×1 convolution layer with stride 1 and 32-channel output; a ReLU activation layer; an InstanceNorm2d layer; a bilinear resampling layer; a 1×1 convolution layer with stride 1 and 64-channel output; a ReLU activation layer; an InstanceNorm2d layer; a bilinear resampling layer; a 1×1 convolution layer with stride 1 and 128-channel output; a ReLU activation layer; a Dropout layer; and a 1×1 convolution layer with stride 1 and 3-channel output. Passing the picture through the weight prediction network yields n feature maps at 1/8 the resolution of the original image.
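The layer list above can be sketched in PyTorch. One point needs a hedged interpretation: the text names bilinear resampling layers yet states that the output is the original image downsampled 8×, so each resampling step here is assumed to halve the resolution (scale_factor=0.5); the dropout probability is likewise an assumption.

```python
import torch
import torch.nn as nn

def make_weight_net(n=3):
    """Sketch of the weight prediction network: three (resample, 1x1 conv,
    ReLU, norm) stages followed by a 1x1 conv with n-channel output, giving
    n weight maps at 1/8 the input resolution."""
    return nn.Sequential(
        nn.Upsample(scale_factor=0.5, mode="bilinear", align_corners=False),
        nn.Conv2d(3, 32, kernel_size=1, stride=1), nn.ReLU(), nn.InstanceNorm2d(32),
        nn.Upsample(scale_factor=0.5, mode="bilinear", align_corners=False),
        nn.Conv2d(32, 64, kernel_size=1, stride=1), nn.ReLU(), nn.InstanceNorm2d(64),
        nn.Upsample(scale_factor=0.5, mode="bilinear", align_corners=False),
        nn.Conv2d(64, 128, kernel_size=1, stride=1), nn.ReLU(), nn.Dropout(0.5),
        nn.Conv2d(128, n, kernel_size=1, stride=1),
    )
```

For a 224×224 input, the three halvings yield 28×28 feature maps, matching the stated 8× downsampling.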
In step (4), the loss function is
Loss = L_mse + λ_s·R_s + λ_m·R_m
where L_mse is the mean square error between the predicted image and the reference image; λ_s and λ_m are coefficients controlling the influence of the R_s and R_m terms on training, with λ_s = 0.0001 and λ_m = 10; g(·) is the standard ReLU function applied to the R/G/B values output after the image passes through the 3D lookup tables; and ω_n is the mean square value of the image feature maps learned by the weight prediction network.
During the training process, the Adam algorithm is used for parameter optimization; the learning rate and the exponential decay rates β1 and β2 of the first- and second-order moment estimates are set accordingly. During training, the network weights are updated in each parameter pass with minimization of the loss function as the objective.
During training, at the end of each epoch the chromatic-aberration images of the test set are fed forward through the model, and the output images are compared with the corresponding reference images to obtain the PSNR value, used to judge the network's performance in real time. The PSNR value is determined as
PSNR = 10·log10(MAX_I^2 / MSE)
where MSE is the mean square error between the predicted image and the reference image, and MAX_I is the maximum value of the image gray scale.
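The PSNR evaluation can be written directly from its definition; this small helper (names are my own) computes it for a predicted/reference image pair:

```python
import numpy as np

def psnr(pred, ref, max_i=255.0):
    """PSNR = 10 * log10(MAX_I^2 / MSE), with MSE the mean square error
    between the predicted image and the reference image."""
    mse = np.mean((pred.astype(np.float64) - ref.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_i ** 2 / mse)
```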
Compared with the prior art, the invention has the following beneficial effects:
1. The invention can generate high-quality images and requires simple data preparation, tolerating field-of-view offsets during shooting. After network training is finished, corrected images of good quality can be obtained through the network from an image acquisition device with chromatic aberration, which can reduce the cost of the lens imaging device.
2. The invention improves image quality directly from the image itself, without needing to consider system design and matching factors, reducing manual design work and offering greater practical value.
3. The network constructed by the invention is simple and generates images quickly: with NVIDIA GeForce RTX 2070 SUPER GPU acceleration, the average processing time for a single image reaches 0.0109 s, which facilitates real-time image generation.
Drawings
FIG. 1 is a flow chart of an image correction method based on deep learning according to the present invention;
FIG. 2 is a network structure diagram of an image correction model according to the present invention.
Detailed Description
The invention will be described in further detail with reference to the drawings and embodiments; it should be noted that the embodiments described below are intended to facilitate understanding of the invention and are not intended to limit it in any way.
This embodiment of the invention corrects the chromatic aberration of pictures taken through a microscope objective. Chromatic aberration occurs when shooting through a microscope objective because the lens has a different refractive index for light of different wavelengths, producing colored fringes on the image acquisition component; it comprises longitudinal and lateral chromatic aberration. In the actual image, longitudinal chromatic aberration appears as different colors on the outer and inner rings of image edges, while lateral chromatic aberration appears as different colors at different lateral or longitudinal positions. In addition, depending on the objective's material and NA, the image background shows some deviation in color or brightness.
The correction method of this embodiment runs on a Windows 10 system with Python 3.6.10, PyTorch 1.4.0, CUDA 10.2, and cuDNN 7.6.5.32. The overall flow is shown in FIG. 1, and the specific implementation steps are as follows:
First, a picture database is established: a group of poor-quality images is shot with the image acquisition device to be corrected; then another group of better-quality images is shot, as reference images, with an image acquisition device of the same magnification but smaller chromatic aberration. The images cover most of the colors of the usage scene as far as possible, and the two groups cover the same field-of-view range as far as possible. The paired data set is divided into a training set and a test set.
Second, the shot chromatic-aberration images are aligned with the reference images: the two images are matched with a template matching algorithm, the horizontal and vertical offsets Δx and Δy between the image fields of view are computed, and the images are cropped according to the offsets, leaving the overlapping portion of the two fields of view as the network's data set.
The specific process is as follows:
(2-1) A portion of the image common to the poor-quality image (the image to be corrected) and the corresponding reference image is taken as the template image, and its position coordinates (x1, y1) are recorded.
(2-2) The template is taken as a sliding window, and the sum of squared pixel differences of corresponding points at each position of the template on the reference image is calculated as the template matching value.
(2-3) recording the matching value of the template in the process of sliding on the reference image, wherein the closer the value is to 0, the higher the matching degree is.
(2-4) The coordinates (x2, y2) of the position where the matching value is closest to 0 are recorded, and the field-of-view offset between the reference image and the image to be corrected is calculated as Δx and Δy.
Third, the data is augmented: the two sets of data are randomly scaled and then randomly cropped to 224×224 images. The images are horizontally flipped and randomly rotated by 90 degrees in the horizontal and vertical directions, each with probability 0.5. Finally the data are normalized to values between 0 and 1.
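The augmentation step can be sketched as follows. The scale range and crop logic are my assumptions (the text does not state them), and both images of a pair receive identical transforms so they stay aligned:

```python
import random
import torch
import torch.nn.functional as F

def augment_pair(aberr, ref, size=224):
    """Random scale, random 224x224 crop, flip and 90-degree rotation each
    with probability 0.5, then normalization to [0, 1]; (3, H, W) tensors."""
    s = random.uniform(0.8, 1.2)                          # assumed scale range
    h, w = aberr.shape[-2:]
    nh, nw = max(size, int(h * s)), max(size, int(w * s))
    aberr = F.interpolate(aberr[None], size=(nh, nw),
                          mode="bilinear", align_corners=False)[0]
    ref = F.interpolate(ref[None], size=(nh, nw),
                        mode="bilinear", align_corners=False)[0]
    y = random.randint(0, nh - size)
    x = random.randint(0, nw - size)
    aberr, ref = aberr[:, y:y + size, x:x + size], ref[:, y:y + size, x:x + size]
    if random.random() < 0.5:                             # horizontal flip
        aberr, ref = aberr.flip(-1), ref.flip(-1)
    if random.random() < 0.5:                             # 90-degree rotation
        aberr, ref = aberr.rot90(1, (-2, -1)), ref.rot90(1, (-2, -1))
    return aberr / 255.0, ref / 255.0                     # normalize to 0..1
```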
Fourth, the image correction model is constructed; its network structure is shown in FIG. 2. Its input is the image processed in the second step, and the number of 3D lookup tables and of weight-prediction-network output channels, n, is set to 3. The input image is copied and passed into the convolutional network, yielding 3 downsampled feature maps and the images looked up through the 3 3D lookup tables. The feature maps are upsampled by interpolation to the original image size, and the upsampled maps are combined with the looked-up images by Hadamard products and summation to obtain the network's output image.
In the invention, an interpolation upsampling operation is added after the downsampling: the feature maps are upsampled into weight maps of the same size as the input image, used for the elementwise product with the output images of the 3D lookup tables. An identity mapping is added between the network's input and output.
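The forward pass described in the fourth step can be sketched as below; this is a hedged reconstruction in which `weight_net` and `luts` stand in for the modules described in the text:

```python
import torch
import torch.nn.functional as F

def correct_image(img, weight_net, luts):
    """Upsample the weight maps to the input size, take the Hadamard
    (elementwise) product with each 3D-lookup-table output, sum the results,
    and add an identity mapping from input to output."""
    w = weight_net(img)                               # (N, n, H/8, W/8) weight maps
    w = F.interpolate(w, size=img.shape[-2:], mode="bilinear", align_corners=False)
    out = img.clone()                                 # identity map between input and output
    for i, lut in enumerate(luts):
        out = out + w[:, i:i + 1] * lut(img)          # Hadamard product, then summation
    return out
```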
Fifth, the loss function is constructed and set as
Loss = L_mse + λ_s·R_s + λ_m·R_m
where L_mse is the mean square error between the predicted image and the reference image; λ_s and λ_m are coefficients controlling the influence of the R_s and R_m terms on training, with λ_s = 0.0001 and λ_m = 10; g(·) is the standard ReLU function applied to the R/G/B values output after the image passes through the 3D lookup tables; and ω_n is the mean square value of the image feature maps output by the weight prediction network.
Sixth, the model optimization algorithm is set: the Adam algorithm is used for parameter optimization, and the learning rate and the exponential decay rates β1 and β2 of the first- and second-order moment estimates are set accordingly. During training, the network weights are updated in each parameter pass with minimization of the loss function as the objective.
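The optimizer setup of the sixth step can be sketched as follows. The patent sets a learning rate and decay rates β1, β2 but does not state their values, so common Adam defaults are used here as placeholders:

```python
import torch

def train(model, loader, loss_fn, epochs=1, lr=1e-4, betas=(0.9, 0.999)):
    """Adam optimization loop: in each parameter pass the network weights are
    updated with minimization of the loss function as the objective."""
    opt = torch.optim.Adam(model.parameters(), lr=lr, betas=betas)
    for _ in range(epochs):
        for aberr, ref in loader:
            opt.zero_grad()
            loss = loss_fn(model(aberr), ref)
            loss.backward()
            opt.step()
    return model
```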
Seventh, at the end of each epoch, the poor-quality images of the test set are fed forward through the network, the output images are compared with the good-quality images, and the PSNR value
PSNR = 10·log10(MAX_I^2 / MSE)
is obtained for judging the network's performance in real time, where MSE is the mean square error between the predicted image and the reference image and MAX_I is the maximum value of the image gray scale.
Eighth, the parameters are adjusted iteratively: the operations of the sixth and seventh steps are repeated until the network is stable, giving the trained network.
Ninth, the picture to be tested is input to the trained network and propagated forward, obtaining the chromatic-aberration-corrected image.
To verify the effect of the invention, in this embodiment multiple groups of H&E-stained tumor slices were captured with a 20× objective with NA = 0.8 and another 20× objective with NA = 0.75. The average PSNR values before and after the image chromatic-aberration correction algorithm are shown in the table below; the image quality is found to be greatly improved.
Table 1 Comparison of experimental results

        Original image    Corrected image
PSNR    16.70             25.96
Through testing, the average processing time of a single image is 0.0109s.
The experimental results show that the corrected images generated by the method are of higher quality and greatly improved over the originals, and the computed PSNR values demonstrate the effectiveness of the method.
The foregoing embodiments describe the technical solution and advantages of the invention in detail. It should be understood that the foregoing are merely specific embodiments of the invention and do not limit it; any modifications, additions, and equivalents made within the scope of the principles of the invention shall be included within the scope of protection of the invention.

Claims (6)

1. The image correction method based on the deep learning is characterized by being used for correcting chromatic aberration of pictures shot by the microscope objective lens and comprising the following steps of:
(1) Shooting a group of images with chromatic aberration by using an image acquisition device to be corrected, and shooting another group of images with the same magnification with smaller chromatic aberration by using the image acquisition device with the same magnification as a reference image; the images with color difference are in one-to-one correspondence with the reference images to form a plurality of image pairs;
(2) Carrying out image alignment on the chromatic aberration images and the reference images in each group of image pairs, and dividing the aligned groups of image pairs into a training set and a testing set after amplifying;
(3) Constructing an image correction model, wherein the image correction model comprises a weight prediction network and n learnable 3D lookup tables; the 3D lookup table is used for establishing mapping between the color difference image and the prediction reference image, and the number of channels of the weight prediction network is the same as that of the 3D lookup table;
the input of the 3D lookup table is a color image with 224 multiplied by 224 resolution and three RGB color channels; the images sequentially pass through 3X 3D convolution layers which are output by three linear interpolation and stride for 1,3 channels;
the structure of the weight prediction network comprises an Uppsample Biline 2d two-dimensional double-line up-sampling layer, a stride of 1, a 1X 1 convolution layer output by 32 channels, a ReLU activation layer, an InstanceNorm2d layer, an Uppsample Biline 2d two-dimensional double-line up-sampling layer, a stride of 1, a 1X 1 convolution layer output by 64 channels, a ReLU activation layer, an InstanceNorm2d layer, an Uppsample Biline 2d two-dimensional double-line up-sampling layer, a stride of 1, a 1X 1 convolution layer output by 128 channels, a ReLU activation layer, a Dropout layer, a stride of 1, and a 1X 1 convolution layer output by 3 channels which are connected in sequence; the method comprises the steps that after the picture passes through a weight prediction network, n feature images which are obtained by 8 times down-sampling of an original image are obtained;
when an image is input into the image correction model, the feature maps output by the weight prediction network are interpolated and upsampled to the original image size; Hadamard products are taken between them and the predicted reference images output by the n 3D lookup tables, and the products are summed, giving the output of the image correction model;
(4) Training the image correction model by using a training set; inputting the chromatic aberration images in the training set into a model, comparing the output images of the model with a reference image to obtain the value of a loss function, and updating network parameters with the minimum loss function as a target;
(5) After the image correction model is trained, the image to be corrected is input into the image correction model to obtain a corrected image.
2. The image correction method based on deep learning according to claim 1, wherein in the step (2), the specific process of performing image alignment is:
(2-1) taking a portion of the image common to the chromatic-aberration image and the reference image in each image pair as the template image, and recording its position coordinates (x1, y1);
(2-2) taking the template image as a sliding window, and respectively calculating the square sum of pixel differences of corresponding points when the template is at different positions on the reference image as a template matching value;
(2-3) recording a matching value of the template in the process of sliding on the reference image, wherein the closer the value is to 0, the higher the matching degree is;
(2-4) recording the coordinates (x2, y2) of the position where the matching value is closest to 0, and calculating the field-of-view offset between the reference image and the chromatic-aberration image as Δx and Δy;
(2-5) cropping the image according to the offset, leaving a portion where the fields of view of the corresponding two images overlap.
3. The image correction method based on deep learning according to claim 1, wherein in the step (4), the loss function is
Loss = L_mse + λ_s·R_s + λ_m·R_m
where L_mse is the mean square error between the predicted image and the reference image; λ_s and λ_m are coefficients controlling the influence of the R_s and R_m terms on training, with λ_s = 0.0001 and λ_m = 10; g(·) is the standard ReLU function applied to the R/G/B values output after the image passes through the 3D lookup tables; and ω_n is the mean square value of the image feature maps learned by the weight prediction network.
4. The image correction method based on deep learning according to claim 3, wherein in the training process, the Adam algorithm is used for parameter optimization, and the learning rate and the exponential decay rates β1 and β2 of the first- and second-order moment estimates are set accordingly; during training, the network weights are updated in each parameter pass with minimization of the loss function as the objective.
5. The image correction method based on deep learning according to claim 3, wherein in the step (4), in the training process, the chromatic aberration image in the test set is input into the model to be propagated forward at the end of each epoch, and the output image is compared with the corresponding reference image to obtain the PSNR value for determining the network effect in real time.
6. The image correction method based on deep learning according to claim 5, wherein the PSNR value is obtained by the formula
PSNR = 10·log10(MAX_I^2 / MSE)
where MSE is the mean square error between the predicted image and the reference image, and MAX_I is the maximum value of the image gray scale.
CN202111623814.7A 2021-12-28 2021-12-28 Image correction method based on deep learning Active CN114463196B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111623814.7A CN114463196B (en) 2021-12-28 2021-12-28 Image correction method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111623814.7A CN114463196B (en) 2021-12-28 2021-12-28 Image correction method based on deep learning

Publications (2)

Publication Number Publication Date
CN114463196A CN114463196A (en) 2022-05-10
CN114463196B true CN114463196B (en) 2023-07-25

Family

ID=81408554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111623814.7A Active CN114463196B (en) 2021-12-28 2021-12-28 Image correction method based on deep learning

Country Status (1)

Country Link
CN (1) CN114463196B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115802173B (en) * 2023-02-06 2023-04-25 北京小米移动软件有限公司 Image processing method and device, electronic equipment and storage medium
CN117649661B (en) * 2024-01-30 2024-04-12 青岛超瑞纳米新材料科技有限公司 Carbon nanotube preparation state image processing method

Citations (3)

Publication number Priority date Publication date Assignee Title
CN108492271A (en) * 2018-03-26 2018-09-04 中国电子科技集团公司第三十八研究所 An automated image enhancement system and method fusing multi-scale information
CN111988593A (en) * 2020-08-31 2020-11-24 福州大学 Three-dimensional image color correction method and system based on depth residual optimization
CN113066017A (en) * 2021-02-27 2021-07-02 华为技术有限公司 Image enhancement method, model training method and equipment

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN110728637B (en) * 2019-09-21 2023-04-18 天津大学 Dynamic dimming backlight diffusion method for image processing based on deep learning
CN111915484B (en) * 2020-07-06 2023-07-07 天津大学 Reference image guiding super-resolution method based on dense matching and self-adaptive fusion
CN112581373B (en) * 2020-12-14 2022-06-10 北京理工大学 Image color correction method based on deep learning
CN112562019A (en) * 2020-12-24 2021-03-26 Oppo广东移动通信有限公司 Image color adjusting method and device, computer readable medium and electronic equipment
CN113297937B (en) * 2021-05-17 2023-12-15 杭州网易智企科技有限公司 Image processing method, device, equipment and medium


Also Published As

Publication number Publication date
CN114463196A (en) 2022-05-10

Similar Documents

Publication Publication Date Title
CN114463196B (en) Image correction method based on deep learning
CN111539879B (en) Video blind denoising method and device based on deep learning
CN112288658A (en) Underwater image enhancement method based on multi-residual joint learning
WO2018000752A1 (en) Monocular image depth estimation method based on multi-scale cnn and continuous crf
Kang Automatic removal of chromatic aberration from a single image
CN109644230A (en) Image processing method, image processing apparatus, image pick-up device, image processing program and storage medium
KR20190089922A (en) Digital calibration of optical system aberrations
CN108288256B (en) Multispectral mosaic image restoration method
CN111510691B (en) Color interpolation method and device, equipment and storage medium
CN110176023B (en) Optical flow estimation method based on pyramid structure
JP6910780B2 (en) Image processing method, image processing device, imaging device, image processing program, and storage medium
CA2821965A1 (en) Systems and methods for synthesizing high resolution images using super-resolution processes
CN108234884B (en) camera automatic focusing method based on visual saliency
CN108830812A (en) A high-frame-rate video reconstruction method based on deep network learning
CN115797225B (en) Unmanned ship acquired image enhancement method for underwater topography measurement
CN112085717B (en) Video prediction method and system for laparoscopic surgery
CN112116539A (en) Optical aberration fuzzy removal method based on deep learning
CN112270691B (en) Monocular video structure and motion prediction method based on dynamic filter network
CN111598775B (en) Light field video time domain super-resolution reconstruction method based on LSTM network
CN111462002B (en) Underwater image enhancement and restoration method based on convolutional neural network
CN111652815B (en) Mask plate camera image restoration method based on deep learning
CN116309844A (en) Three-dimensional measurement method based on single aviation picture of unmanned aerial vehicle
CN113935917A (en) Optical remote sensing image thin cloud removal method based on cloud map computation and a multi-scale generative adversarial network
CN106683044B (en) Image splicing method and device of multi-channel optical detection system
CN107993196B (en) Image interpolation method and system based on prediction verification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant