CN109493296A - Image enhancement method, device, electronic equipment and computer-readable medium - Google Patents
Image enhancement method, device, electronic equipment and computer-readable medium
- Publication number
- CN109493296A (application CN201811291358.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- neural network
- convolutional neural
- deep convolutional
- image enhancement
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Abstract
This disclosure relates to an image enhancement method, an image enhancement apparatus, an electronic device, and a computer-readable medium in the field of image processing. The method comprises: preprocessing an original image to generate a standard image; performing color correction on the standard image to generate a corrected image; and inputting the corrected image into an image enhancement model to obtain an enhanced image, wherein the image enhancement model is a deep convolutional neural network model containing a non-fixed number of layers. The disclosed method, apparatus, electronic device, and computer-readable medium can improve the image enhancement effect, efficiently solve the problem of structured processing of image documents, improve authentication efficiency during face recognition, and save substantial labor cost.
Description
Technical Field
The present disclosure relates to the field of computer information processing, and in particular, to an image enhancement method, an image enhancement apparatus, an electronic device, and a computer-readable medium.
Background
With the popularization of various digital instruments and digital products, images and videos become the most common information carriers in human activities, and the images and videos contain a large amount of information of objects, so that the images and videos become the main ways for people to obtain external original information. However, due to the influence of the shooting operation or shooting environment factors, the obtained image often has the phenomena of blurring, distortion and the like. In addition, in the processes of generating, transmitting, recording and storing images, because of the incompleteness of an imaging system, a transmission medium and a recording device, the images are inevitably influenced by various adverse factors, so that the image information is lost and the quality is reduced. Image enhancement is the process of restoring a sharp image by processing a blurred image. Image enhancement techniques have been a focus of image processing and computer vision research.
However, images usually contain noise, and during image enhancement both the blur kernel and the sharp image are unknown, while the only known quantity is the observed blurred image; there are therefore more unknowns than knowns when solving the problem, and image enhancement involves considerable uncertainty.
Therefore, a new image enhancement method, apparatus, electronic device and computer readable medium are needed.
The above information disclosed in this background section is only for enhancement of understanding of the background of the disclosure and therefore it may contain information that does not constitute prior art that is already known to a person of ordinary skill in the art.
Disclosure of Invention
In view of this, the present disclosure provides an image enhancement method, an image enhancement device, an electronic device, and a computer-readable medium, which can improve an image enhancement effect, efficiently solve the problem of image data structuring processing, improve authentication efficiency during face recognition, and save a large amount of labor cost.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to an aspect of the present disclosure, an image enhancement method is provided, including: preprocessing an original image to generate a standard image; performing color correction on the standard image to generate a corrected image; and inputting the corrected image into an image enhancement model to obtain an enhanced image; wherein the image enhancement model is a deep convolutional neural network model containing a non-fixed number of layers.
In an exemplary embodiment of the present disclosure, the method further comprises: training a deep convolutional neural network with image data to obtain the image enhancement model.
In an exemplary embodiment of the present disclosure, training a deep convolutional neural network with image data to obtain the image enhancement model comprises: determining the number of layers of the deep convolutional neural network; determining an activation function of the deep convolutional neural network; determining a loss function of the deep convolutional neural network; and inputting image training data and image comparison data into the deep convolutional neural network, and acquiring the image enhancement model through training.
In an exemplary embodiment of the present disclosure, determining the number of layers of the deep convolutional neural network includes: determining the number of layers of the deep convolutional neural network through the output enhanced image of the deep convolutional neural network;
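The referenced formula can plausibly be reconstructed from the receptive-field relation for d stacked n × n convolutions without pooling, s = d(n − 1) + 1, which is an assumption here rather than text reproduced from the patent:

$$d = \frac{s - 1}{n - 1}$$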
wherein d is the number of layers of the deep convolutional neural network, s is a mapping area of pixel points in the enhanced image of the deep convolutional neural network on the original image, and n is the size of a filter of the deep convolutional neural network.
In an exemplary embodiment of the present disclosure, the activation function of the deep convolutional neural network is a ReLU activation function.
In an exemplary embodiment of the present disclosure, determining a loss function of a deep convolutional neural network comprises:
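A plausible reconstruction of the referenced formula, assuming a standard mean-squared-error loss over the training pairs, is:

$$\mathcal{L}(\theta) = \frac{1}{n} \sum_{i=1}^{n} \left\| f\!\left(y_i; \theta\right) - x_i \right\|^2$$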
where L(θ) is the loss function, f(y<sub>i</sub>; θ) denotes the model output for the noisy images used in training, y<sub>i</sub> is the input image training data, x<sub>i</sub> is the image comparison data, and n is the number of training images.
In an exemplary embodiment of the present disclosure, preprocessing the original image, and generating the standard image includes at least one of: normalizing the original image to generate a standard image; carrying out standardization processing on the original image to generate a standard image; and adjusting the resolution of the original image to generate a standard image.
In an exemplary embodiment of the present disclosure, color correcting the standard image and generating a corrected image includes: converting the standard image from the three-primary-color space to the YCbCr space; and performing color correction on the YCbCr-space standard image to generate the corrected image.
In an exemplary embodiment of the present disclosure, inputting the corrected image into an image enhancement model to obtain an enhanced image includes: inputting the corrected image into a convolution layer with an activation function of an image enhancement model to obtain first data; inputting the first data into a plurality of convolution normalization layers with activation functions to obtain second data; inputting second data into the convolutional layer to obtain the enhanced image.
In an exemplary embodiment of the present disclosure, inputting the corrected image into an image enhancement model to obtain an enhanced image further includes: after each convolution calculation, the calculation result is subjected to boundary processing.
In an exemplary embodiment of the disclosure, before inputting the corrected image into an image enhancement model to obtain an enhanced image, the method further includes: deblurring the corrected image through Wiener filtering.
According to an aspect of the present disclosure, an image enhancement apparatus is provided, the apparatus including: a preprocessing module for preprocessing the original image to generate a standard image; a color correction module for performing color correction on the standard image to generate a corrected image; and an image enhancement module for inputting the corrected image into an image enhancement model to obtain an enhanced image; wherein the image enhancement model is a deep convolutional neural network model containing a non-fixed number of layers.
In an exemplary embodiment of the present disclosure, further comprising: and the training module is used for training the deep convolutional neural network through the image data to obtain the image enhancement model.
According to an aspect of the present disclosure, an electronic device is provided, the electronic device including: one or more processors; and storage means for storing one or more programs; wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method as above.
According to an aspect of the disclosure, a computer-readable medium is proposed, on which a computer program is stored, which program, when being executed by a processor, carries out the method as above.
According to the image enhancement method, the image enhancement device, the electronic equipment and the computer readable medium, the image enhancement effect can be improved, the problem of image data structuring processing can be solved efficiently, the authentication efficiency during face recognition can be improved, and meanwhile, a large amount of labor cost can be saved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings. The drawings described below are merely some embodiments of the present disclosure, and other drawings may be derived from those drawings by those of ordinary skill in the art without inventive effort.
FIG. 1 is a flow chart illustrating a method of image enhancement according to an exemplary embodiment.
Fig. 2 is a diagram illustrating an application scenario of an image enhancement method according to an exemplary embodiment.
Fig. 3 is a diagram illustrating an application scenario of an image enhancement method according to an exemplary embodiment.
Fig. 4 is a flowchart illustrating an image enhancement method according to another exemplary embodiment.
Fig. 5 is a flowchart illustrating an image enhancement method according to another exemplary embodiment.
Fig. 6 is a schematic diagram illustrating an image enhancement method according to another exemplary embodiment.
Fig. 7 is a flowchart illustrating an image enhancement method according to another exemplary embodiment.
Fig. 8 is a block diagram illustrating an image enhancement apparatus according to an exemplary embodiment.
Fig. 9 is a block diagram illustrating an image enhancement apparatus according to another exemplary embodiment.
FIG. 10 is a block diagram illustrating an electronic device in accordance with an example embodiment.
FIG. 11 is a schematic diagram illustrating a computer-readable storage medium according to an example embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals denote the same or similar parts in the drawings, and thus, a repetitive description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams depicted in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow diagrams depicted in the figures are merely exemplary and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order depicted. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various components, these components should not be limited by these terms. These terms are used to distinguish one element from another. Thus, a first component discussed below may be termed a second component without departing from the teachings of the disclosed concept. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It is to be understood by those skilled in the art that the drawings are merely schematic representations of exemplary embodiments, and that the blocks or processes shown in the drawings are not necessarily required to practice the present disclosure and are, therefore, not intended to limit the scope of the present disclosure.
The inventors of the present application have observed that digital image enhancement combines several disciplines, including pattern recognition, machine learning, artificial intelligence, and visual neurophysiology. Most existing image enhancement methods are based on a probabilistic prior model of the image: for a given blurred image and blur kernel, they seek the maximum a posteriori estimate of the latent sharp image. With the rise of big data and machine learning, many researchers have tried to apply machine learning to image enhancement; a widely studied shallow learning approach can be regarded as an artificial neural network model that learns statistical rules from a large number of training samples so that unknown events can be predicted. The inventors believe that this statistics-based machine learning approach shows advantages over artificial rule-based systems in many respects.
Image enhancement is an inverse problem: the original sharp image is estimated from an observed blurred image. Because the blur kernel and the sharp image are both unknown during enhancement while the only known quantity is the observed blurred image, there are more unknowns than knowns when solving the problem, and the enhancement is inherently uncertain. At present, spatial-domain and frequency-domain parameter estimation methods are mainly used, in which blur parameters are computed on the image spectrum via high-frequency energy analysis, the Hough transform, and the Radon transform; a sharp image is then obtained via constrained least-squares filtering, Wiener filtering, inverse filtering, and similar methods. However, since images are usually noisy and blur parameter estimates are generally not accurate enough, these deconvolution algorithms often yield sharp images with various artifacts. Based on the above problems, a new image enhancement method is proposed herein, which to some extent resolves technical difficulties encountered in prior solutions.
The following is a detailed description of the image enhancement method in the present application:
FIG. 1 is a flow chart illustrating a method of image enhancement according to an exemplary embodiment. The image enhancement method 10 includes at least steps S102 to S106.
As shown in fig. 1, in S102, the original image is preprocessed to generate a standard image. The image resolution can vary greatly depending on the source of the input image. For consistency in subsequent processing, the original image needs to be preprocessed.
In one embodiment, the pre-processing of the original image, the generating of the standard image comprises at least one of: normalizing the original image to generate a standard image; carrying out standardization processing on the original image to generate a standard image; and adjusting the resolution of the original image to generate a standard image.
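A minimal sketch of this preprocessing step, assuming OpenCV is available; the function name and target resolution are illustrative and not specified by the patent:

```python
import cv2
import numpy as np

def preprocess(original: np.ndarray, size=(256, 256)) -> np.ndarray:
    """Resize to a common resolution, then normalize and standardize."""
    img = cv2.resize(original, size, interpolation=cv2.INTER_AREA)  # unify resolution
    img = img.astype(np.float32) / 255.0                            # normalize to [0, 1]
    mean = img.mean(axis=(0, 1))
    std = img.std(axis=(0, 1)) + 1e-8
    return (img - mean) / std                                       # per-channel standardization
```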
In S104, the standard image is color-corrected to generate a corrected image. For example, the image may be color-corrected by a white balance algorithm; this step can effectively handle unclear image quality caused by color cast, and specifically includes: converting the standard image from the three-primary-color space to the YCbCr space; and performing color correction on the YCbCr-space standard image to generate the corrected image.
The three-primary-color space RGB (red, green, blue) is an industry color standard: various colors are obtained by varying the red (R), green (G), and blue (B) channels and superimposing them on one another. The RGB standard covers almost all colors perceivable by human vision and is one of the most widely used color systems today.
The YCbCr color space takes luminance as its main component and is commonly used in continuous video processing in film and digital photography systems. In YCbCr, Y is the luminance component, Cb is the blue-difference chrominance component, and Cr is the red-difference chrominance component.
Since the naked eye is more sensitive to the Y component, the chrominance components can be reduced by sub-sampling without a perceptible change in image quality. Therefore, after converting an image from the original three-primary-color space to the YCbCr space, storing the image in the YCbCr color space saves storage space while preserving image sharpness.
In S106, the corrected image is input into an image enhancement model to obtain an enhanced image, where the image enhancement model is a deep convolutional neural network model containing a non-fixed number of layers.
In one embodiment, a mapping model between blurred images and their corresponding sharp images is learned in advance from a large amount of training data: given a set of blurred images {P_i} and their corresponding sharp images {P_i^F}, the data are trained through a deep convolutional neural network model to obtain the image enhancement model.
In one embodiment, the image enhancement model has a non-fixed number of convolutional layers, which can be determined from the enhanced image output by the deep convolutional neural network;
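Using the same receptive-field reconstruction assumed above:

$$d = \frac{s - 1}{n - 1}$$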
wherein d is the number of layers of the deep convolutional neural network, s is a mapping area of pixel points in the enhanced image of the deep convolutional neural network on the original image, and n is the size of a filter of the deep convolutional neural network. Namely, the number of the calculated layers of the image enhancement model can be determined by comparing the size of the mapping area of the pixel points in the enhanced image on the original image.
For example, suppose the convolution filter size in the image enhancement model is n × n. In some use scenarios the sharpness requirement is not strict, so a smaller mapping area of pixels in the enhanced image on the original image can be chosen; with the filter unchanged, the resulting number of layers of the deep convolutional neural network in the image enhancement model is smaller.
In one embodiment, the activation function of the deep convolutional neural network in the image enhancement model is the rectified linear unit (ReLU). The ReLU function is a piecewise linear function that sets all negative values to 0 while leaving positive values unchanged, an operation referred to as single-sided suppression. Through single-sided suppression, neurons in the image enhancement model's network exhibit sparse activation.
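In formula form:

$$\mathrm{ReLU}(x) = \max(0, x)$$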
The sparse image enhancement model realized by ReLU can better mine relevant features and fit the training data. ReLU is piecewise linear yet expressive; its gradient over the non-negative interval is constant, so the vanishing-gradient problem does not arise and the convergence rate of the image enhancement model remains stable.
In one embodiment, the corrected image is input into a convolution layer with an activation function of an image enhancement model to obtain first data; inputting the first data into a plurality of convolution normalization layers with activation functions to obtain second data; inputting second data into the convolutional layer to obtain the enhanced image. The specific processing procedure of the corrected image in the image enhancement model will be described later with reference to the embodiment of fig. 6.
According to the image enhancement method disclosed by the invention, the original image is input into the image enhancement model established by the deep convolutional neural network after being subjected to preprocessing and white balance processing, so that the enhanced image can be obtained, and the image enhancement effect can be improved.
Fig. 2 is a diagram illustrating an application scenario of an image enhancement method according to an exemplary embodiment.
As shown in fig. 2, in the field of insurance claim settlement, a client needs to upload image-based materials such as an identification card while handling a claim. These materials often pick up noise during shooting and transmission, and when structured recognition is performed on the digital image, poor image quality can prevent the content from being recognized in time. As the amount of information grows, manual item-by-item checking becomes ever harder to keep up with.
According to the image enhancement method disclosed by the invention, the image enhancement processing is firstly carried out on the certificate image to be recognized, and the subsequent structured recognition is carried out according to the enhanced image, so that the efficiency and the real-time performance of the character recognition processing can be improved, the problem of data structured processing can be solved very efficiently, and a large amount of labor cost is saved.
Fig. 3 is a diagram illustrating an application scenario of an image enhancement method according to an exemplary embodiment.
As shown in fig. 3, as the degree of informatization increases, customers increasingly wish to purchase insurance through mobile terminals. After a customer purchases insurance online, face recognition is often required to verify customer information, and because of various factors during shooting or uploading of the customer's face picture, the face image often contains noise. In such cases, verification is usually performed manually, or the customer has to complete it offline.
In order to improve the customer's one-pass authentication rate and reduce manual intervention during insurance purchase, according to the image enhancement method of the present disclosure, image enhancement is first applied to the face photo to be recognized, and face recognition then proceeds with the enhanced image; this can greatly improve the authentication pass rate, maximize customer satisfaction, and substantially save labor cost.
It should be clearly understood that this disclosure describes how to make and use particular examples, but the principles of this disclosure are not limited to any details of these examples. Rather, these principles can be applied to many other embodiments based on the teachings of the present disclosure.
Fig. 4 is a flowchart illustrating an image enhancement method according to another exemplary embodiment. FIG. 4 illustrates a specific process of "training a deep convolutional neural network with image data to obtain the image enhancement model".
As shown in fig. 4, in S402, the number of layers of the deep convolutional neural network is determined. Given a convolution filter of size n × n, and since the model in this embodiment predicts the sharp image corresponding to the input image, the pooling layers are removed from the original deep convolutional neural network model; the number of layers d of the deep convolutional neural network corresponding to the image enhancement model in this embodiment can then be defined according to the following formula;
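With the receptive-field reconstruction assumed earlier:

$$d = \frac{s - 1}{n - 1}$$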
wherein d is the number of layers of the deep convolutional neural network, s is a mapping area of pixel points in the enhanced image of the deep convolutional neural network on the original image, and n is the size of a filter of the deep convolutional neural network.
In S404, an activation function of the deep convolutional neural network is determined. Wherein, the activation function of the deep convolution neural network is a ReLU activation function.
In S406, a loss function of the deep convolutional neural network is determined. The loss function reflects how well the model fits the data: the worse the fit, the larger the loss value, and a larger loss yields a larger gradient, so during the data training process in this embodiment the model parameters are updated faster. In this embodiment, the loss function of the deep convolutional neural network is defined as follows:
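Under the mean-squared-error assumption stated earlier:

$$\mathcal{L}(\theta) = \frac{1}{n} \sum_{i=1}^{n} \left\| f\!\left(y_i; \theta\right) - x_i \right\|^2$$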
where L(θ) is the loss function, f(y<sub>i</sub>; θ) denotes the model output for the noisy images used in training, y<sub>i</sub> is the input image training data, x<sub>i</sub> is the image comparison data, and n is the number of training images.
In S408, the image training data and the image comparison data are input into the deep convolutional neural network, and the image enhancement model is obtained through training. Pairs of blurred images and their corresponding sharp images can be obtained in advance from a large amount of training data: given a set of blurred images {P_i} and their corresponding sharp images {P_i^F}, the blurred images are used as image training data and the sharp images as image comparison data, these are input into the deep convolutional neural network with the parameters set above, and the image enhancement model is obtained through training.
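A minimal PyTorch training sketch under these assumptions; the loss is the assumed MSE objective, and the data loader and hyperparameters are illustrative:

```python
import torch
from torch import nn, optim

def train_enhancer(model: nn.Module, loader, epochs: int = 50, lr: float = 1e-3,
                   device: str = "cpu") -> nn.Module:
    """loader yields (blurred, sharp) pairs: image training data and comparison data."""
    model = model.to(device)
    criterion = nn.MSELoss()                      # assumed loss; see the formula above
    optimizer = optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for blurred, sharp in loader:
            blurred, sharp = blurred.to(device), sharp.to(device)
            optimizer.zero_grad()
            loss = criterion(model(blurred), sharp)  # compare output to sharp reference
            loss.backward()
            optimizer.step()
    return model
```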
According to the image enhancement method of the present disclosure, a deep convolutional neural network with a specific structure and specific parameters learns, in line with practical conditions, from blurred images and their corresponding sharp images, so that an accurate and effective image enhancement model can be obtained.
Fig. 5 is a flowchart illustrating an image enhancement method according to another exemplary embodiment. The flowchart 50 of the image enhancement method shown in fig. 5 is a detailed description of "color correction of the standard image and generation of a corrected image" in the flowchart 10 of the image enhancement method shown in fig. 1.
As shown in fig. 5, in S502, the standard image is converted from the three-primary-color space into the YCbCr space. The image is converted from the RGB color space to the YCbCr space; the conversion formula can be as follows:
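The conversion matrix itself is not reproduced here; a standard full-range BT.601 conversion, assumed for illustration, is:

$$\begin{aligned} Y &= 0.299R + 0.587G + 0.114B \\ C_b &= 0.564\,(B - Y) + 128 \\ C_r &= 0.713\,(R - Y) + 128 \end{aligned}$$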
where R, G, and B respectively represent the values of the corresponding color channels in the RGB color space.
In S504, sub-image variances are calculated. Based on the YCbCr space map, the image is divided into sub-images according to a certain rule, and the mean and variance of each sub-image are calculated. Specifically, the image is divided into sub-images sub_P_i(x, y) of size h × w, and the averages M_r, M_b of the C_r, C_b components are calculated separately; from M_r, M_b, the variances D_r, D_b of C_r, C_b are then calculated.
In S506, the white reference points are determined. Based on the variance values of the different channels, the "white reference points" in the image are calculated.
Let R_L be the luminance matrix of the white reference points, of size w × h. If point (i, j) serves as a white reference point, the value of its luminance (Y component) is assigned to R_L(i, j); otherwise, R_L(i, j) is set to 0. Among the luminances (Y component) of the reference white points, the largest 10% of values are selected, and the minimum value Lu_min among them is taken.
In S508, color correction is performed on the standard image based on the reference points, and the corrected image is generated. Based on the luminance matrix R_L of the reference points, R_L is adjusted as follows:
if R_L(i, j) < Lu_min, then R_L(i, j) = 0;
otherwise, R_L(i, j) = 1.
R, G and B are each multiplied by R_L to obtain R2, G2 and B2, and their respective averages R_av, G_av and B_av are calculated. The gain adjustment is then computed from these averages, where Y_max is the maximum value in the luminance image, and the image is color-corrected channel by channel using the resulting gains.
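A compact sketch of this dynamic-threshold white balance, simplified by omitting the per-sub-image variance screening and assuming gray-world-style gains of the form Y_max / R_av (the patent's exact gain formula is not reproduced here, so this is an illustrative reconstruction):

```python
import cv2
import numpy as np

def white_balance(img_bgr: np.ndarray, top_ratio: float = 0.10) -> np.ndarray:
    """Dynamic-threshold white balance: select the brightest pixels as white
    reference points, then rescale each channel by a per-channel gain."""
    ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    y = ycrcb[..., 0]
    lu_min = np.percentile(y, 100 * (1 - top_ratio))   # threshold on the Y component
    mask = (y >= lu_min).astype(np.float32)            # R_L: 1 for reference points, else 0
    y_max = float(y.max())
    channels = cv2.split(img_bgr.astype(np.float32))   # B, G, R
    corrected = []
    for ch in channels:
        ch_av = (ch * mask).sum() / max(mask.sum(), 1.0)  # channel mean over reference points
        gain = y_max / max(ch_av, 1e-6)                   # assumed gain form
        corrected.append(np.clip(ch * gain, 0, 255))
    return cv2.merge(corrected).astype(np.uint8)
```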
Fig. 6 is a schematic diagram illustrating an image enhancement method according to another exemplary embodiment. The flowchart of the image enhancement method shown in fig. 6 is a detailed description of "inputting the corrected image into the image enhancement model to obtain the enhanced image" in the flowchart 10 of the image enhancement method shown in fig. 1.
The image enhancement model in the embodiment of the present application includes three types of layer structures:
for the first layer, W3 × 3 filters may be set and a ReLU activation function is used, which may be defined as:
similarly, for second to penultimate layers, a normalization method may be added on a per first layer basis. For the last layer, the convolution can be performed using W3 x 3 filters, resulting in the final enhanced image.
In one embodiment, inputting the corrected image into an image enhancement model to obtain an enhanced image further comprises: after each convolution calculation, performing boundary processing on the calculation result. Specifically, after each layer's convolution, the convolution result is boundary-processed so that the resolution of the output image matches that of the input image. The boundaries may be zero-filled throughout, so that no additional noise information is introduced while the original image size is maintained.
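A PyTorch sketch of the three layer types — first conv + ReLU, middle conv + batch-norm + ReLU blocks, final conv — with zero padding so input and output resolutions match; the depth and channel width are illustrative, not fixed by the patent:

```python
import torch
from torch import nn

class EnhanceNet(nn.Module):
    """Enhancement model with a non-fixed number of layers d: a first
    conv + ReLU layer, (d - 2) conv + batch-norm + ReLU layers, and a final conv."""
    def __init__(self, d: int = 17, width: int = 64, in_ch: int = 3):
        super().__init__()
        layers = [nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(d - 2):                        # middle conv + BN + ReLU layers
            layers += [nn.Conv2d(width, width, 3, padding=1),
                       nn.BatchNorm2d(width),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(width, in_ch, 3, padding=1))  # last layer: plain conv
        self.net = nn.Sequential(*layers)             # padding=1 is the zero boundary fill

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)
```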
Fig. 7 is a flowchart illustrating an image enhancement method according to another exemplary embodiment. The flowchart 70 shown in fig. 7 adds a process of "deblurring the corrected image by Wiener filtering" to the flowchart 10 of the image enhancement method shown in fig. 1.
In S702, an original image is input.
In S704, the image is preprocessed.
In S706, color correction based on a white balance algorithm.
In S708, the image based on the wiener filtering is deblurred.
In S710, image enhancement based on a deep convolutional neural network.
In S712, the enhanced image is output.
In one embodiment, the blurred image P_o(x, y) that has been color-corrected by the white balance algorithm is first subjected to a two-dimensional Fourier transform to obtain the frequency domain G(u, v); the point spread function (PSF) h(x, y) of the defocus blur is calculated and two-dimensionally Fourier-transformed to obtain H(u, v) and its frequency-domain complex conjugate H*(u, v).
A Wiener filter function is then constructed according to the following formula, and the spectrum of the frequency domain G(u, v) is combined with the Wiener filter spectrum to obtain the deblurred image spectrum F(u, v), shown below.
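The classical Wiener filter formula implied by the surrounding text is:

$$F(u, v) = \frac{H^{*}(u, v)}{\left|H(u, v)\right|^{2} + K}\; G(u, v)$$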
where |H(u, v)|² = H(u, v)H*(u, v); in practical applications, an appropriate value of K can be selected according to the effect of the processing.
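A minimal numpy sketch of this deblurring step, assuming a single-channel image, an already-estimated PSF, and an empirically chosen K:

```python
import numpy as np

def wiener_deblur(blurred: np.ndarray, psf: np.ndarray, K: float = 0.01) -> np.ndarray:
    """Wiener (minimum mean-square-error) deconvolution with a known PSF."""
    G = np.fft.fft2(blurred)                     # spectrum of the blurred image
    H = np.fft.fft2(psf, s=blurred.shape)        # PSF spectrum, zero-padded to image size
    F = (np.conj(H) / (np.abs(H) ** 2 + K)) * G  # F = H* G / (|H|^2 + K)
    restored = np.real(np.fft.ifft2(F))
    return np.clip(restored, 0, 255)
```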
According to the image enhancement method disclosed by the invention, the noise pollution phenomenon in the original image can be reduced by carrying out the deblurring operation on the image by using the wiener filtering (minimum mean square error filtering).
According to the image enhancement method of the present disclosure, the multi-step pipeline enhances the blurred image along several complementary directions, which strengthens the effectiveness of the algorithm.
According to the image enhancement method of the present disclosure, a deep-learning-based image enhancement method is provided, and the image enhancement effect is greatly optimized without affecting the performance of the algorithm.
Those skilled in the art will appreciate that all or part of the steps implementing the above embodiments are implemented as computer programs executed by a CPU. When executed by the CPU, the programs perform the functions defined by the methods provided by the present disclosure. The programs may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disk, or the like.
Furthermore, it should be noted that the above-mentioned figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
Fig. 8 is a block diagram illustrating an image enhancement apparatus according to an exemplary embodiment. The image enhancement apparatus 80 shown in fig. 8 includes: a pre-processing module 802, a color correction module 804, and an image enhancement module 806.
The preprocessing module 802 is configured to preprocess the original image to generate a standard image; for example, the original image is normalized to generate a standard image; carrying out standardization processing on the original image to generate a standard image; and adjusting the resolution of the original image to generate a standard image.
The color correction module 804 is configured to perform color correction on the standard image to generate a corrected image. For example, the image may be color-corrected by a white balance algorithm; this step can effectively handle unclear image quality caused by color cast, and specifically includes: converting the standard image from the three-primary-color space to the YCbCr space; and performing color correction on the YCbCr-space standard image to generate the corrected image.
The image enhancement module 806 is configured to input the corrected image into an image enhancement model to obtain an enhanced image; the image enhancement model is a deep convolutional neural network model containing a non-fixed number of layers. The corrected image is input into a convolution layer with an activation function of the image enhancement model to obtain first data; the first data is input into a plurality of convolution normalization layers with activation functions to obtain second data; and the second data is input into a convolution layer to obtain the enhanced image. The specific processing of the corrected image in the image enhancement model is described with reference to the embodiment of fig. 6.
According to the image enhancement device disclosed by the disclosure, the original image is input into the image enhancement model established by the deep convolutional neural network after being subjected to preprocessing and white balance processing, so that the enhanced image is obtained, and the image enhancement effect can be improved.
Fig. 9 is a block diagram illustrating an image enhancement apparatus according to another exemplary embodiment. The image enhancement apparatus 90 shown in fig. 9 further includes, in addition to the image enhancement apparatus 80 shown in fig. 8: a training module 902 and a filtering module 904.
The training module 902 is configured to train a deep convolutional neural network with the image data to obtain the image enhancement model. For example, a mapping model between blurred images and their corresponding sharp images is learned in advance from a large amount of training data: given a set of blurred images {P_i} and their corresponding sharp images {P_i^F}, the data are trained through a deep convolutional neural network model to obtain the image enhancement model.
The filtering module 904 is configured to deblur the corrected image by Wiener filtering. For example, the blurred image P_o(x, y) that has been color-corrected by the white balance algorithm is two-dimensionally Fourier-transformed to obtain the frequency domain G(u, v); a Wiener filter function is then constructed; and the spectrum of the frequency domain G(u, v) is combined with the Wiener filter spectrum according to the formula above to obtain the deblurred image spectrum F(u, v).
FIG. 10 is a block diagram illustrating an electronic device in accordance with an example embodiment.
An electronic device 200 according to this embodiment of the present disclosure is described below with reference to fig. 10. The electronic device 200 shown in fig. 10 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 10, the electronic device 200 is embodied in the form of a general purpose computing device. The components of the electronic device 200 may include, but are not limited to: at least one processing unit 210, at least one memory unit 220, a bus 230 connecting different system components (including the memory unit 220 and the processing unit 210), a display unit 240, and the like.
The storage unit stores program code executable by the processing unit 210 to cause the processing unit 210 to perform the steps according to various exemplary embodiments of the present disclosure described in the method sections above in this specification. For example, the processing unit 210 may perform the steps as shown in figs. 1, 4, 5, and 7.
The storage unit 220 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM)2201 and/or a cache memory unit 2202, and may further include a read only memory unit (ROM) 2203.
The storage unit 220 may also include a program/utility 2204 having a set (at least one) of program modules 2205, such program modules 2205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 230 may represent one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 200 may also communicate with one or more external devices 300 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 200, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 200 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 250. Also, the electronic device 200 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet) via the network adapter 260. The network adapter 260 may communicate with other modules of the electronic device 200 via the bus 230. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 200, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by a combination of software and necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, or a network device, etc.) to execute the above method according to the embodiments of the present disclosure.
FIG. 11 schematically illustrates an exemplary diagram of a computer-readable storage medium in an exemplary embodiment of the disclosure.
Referring to fig. 11, a program product 400 for implementing the above method according to an embodiment of the present disclosure is described, which may employ a portable compact disc read only memory (CD-ROM) and include program codes, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer-readable signal medium may include a data signal propagating in baseband or as part of a carrier wave, which carries readable program code. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium other than a readable storage medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
The computer readable medium carries one or more programs which, when executed by a device, cause the device to perform the following functions: preprocessing an original image to generate a standard image; performing color correction on the standard image to generate a corrected image; and inputting the corrected image into an image enhancement model to obtain an enhanced image; wherein the image enhancement model is a deep convolutional neural network model containing a non-fixed number of layers.
Those skilled in the art will appreciate that the modules described above may be distributed in the apparatus according to the description of the embodiments, or may be modified accordingly in one or more apparatuses unique from the embodiments. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by a combination of software and necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a mobile terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Exemplary embodiments of the present disclosure are specifically illustrated and described above. It is to be understood that the present disclosure is not limited to the precise arrangements, instrumentalities, or instrumentalities described herein; on the contrary, the disclosure is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
In addition, the structures, proportions, sizes, and the like shown in the drawings of this specification are only used to match the content disclosed in the specification, for understanding and reading by those skilled in the art, and are not intended to limit the conditions under which the present disclosure can be implemented; they therefore have no essential technical significance, and any structural modification, change of proportional relationship, or adjustment of size that does not affect the effects and purposes achievable by the present disclosure should still fall within the scope of the technical content disclosed herein. Meanwhile, terms such as "upper", "first", and "second" cited in this specification are used for clarity of description only and are not intended to limit the implementable scope of the present disclosure; changes or adjustments of their relative relationships, without substantial changes to the technical content, shall also be regarded as within the implementable scope of the present disclosure.
Claims (15)
1. An image enhancement method, comprising:
preprocessing an original image to generate a standard image;
performing color correction on the standard image to generate a corrected image; and
inputting the corrected image into an image enhancement model to obtain an enhanced image;
the image enhancement model is a depth convolution neural network model containing a non-fixed number of layers.
2. The method of claim 1, further comprising:
training a deep convolutional neural network through image data to obtain the image enhancement model.
3. The method of claim 2, wherein training a deep convolutional neural network with image data to obtain the image enhancement model comprises:
determining the number of layers of the deep convolutional neural network;
determining an activation function of a deep convolutional neural network;
determining a loss function of the deep convolutional neural network; and
and inputting image training data and image comparison data into the deep convolutional neural network, and acquiring the image enhancement model through training.
4. The method of claim 3, wherein determining the number of layers of the deep convolutional neural network comprises:
determining the number of layers of the deep convolutional neural network through the output enhanced image of the deep convolutional neural network;
wherein d is the number of layers of the deep convolutional neural network, s is a mapping area of pixel points in the enhanced image of the deep convolutional neural network on the original image, and n is the size of a filter of the deep convolutional neural network.
5. The method of claim 3, wherein the activation function of the deep convolutional neural network is a rectified linear unit.
6. The method of claim 3, wherein determining a loss function for the deep convolutional neural network comprises:
where L(θ) is the loss function, f(y<sub>i</sub>; θ) denotes the model output for the noisy images used in training, y<sub>i</sub> is the input image training data, x<sub>i</sub> is the image comparison data, and n is the number of training images.
7. The method of claim 1, wherein pre-processing the original image to generate the standard image comprises at least one of:
normalizing the original image to generate a standard image;
carrying out standardization processing on the original image to generate a standard image; and
adjusting the resolution of the original image to generate a standard image.
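Minimal sketches of the three preprocessing options, assuming NumPy float images; the target resolution and the use of OpenCV for resizing are illustrative choices, not from the patent.

```python
import numpy as np
import cv2  # used only for the resolution adjustment

def normalize(img: np.ndarray) -> np.ndarray:
    """Min-max normalization of pixel values to [0, 1]."""
    img = img.astype(np.float32)
    span = float(img.max() - img.min())
    return (img - img.min()) / span if span > 0 else np.zeros_like(img)

def standardize(img: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-variance standardization."""
    img = img.astype(np.float32)
    return (img - img.mean()) / max(float(img.std()), 1e-8)

def adjust_resolution(img: np.ndarray, width: int, height: int) -> np.ndarray:
    """Rescale the image to a fixed target resolution."""
    return cv2.resize(img, (width, height), interpolation=cv2.INTER_LINEAR)
```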
8. The method of claim 1, wherein performing color correction on the standard image to generate a corrected image comprises:
converting the standard image from the three-primary-color (RGB) space to the YCbCr space; and
performing color correction on the standard image in the YCbCr space to generate the corrected image.
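A sketch of the color-space step, assuming full-range BT.601 coefficients for the RGB-to-YCbCr transform; the patent does not specify the coefficients, and the correction applied inside the YCbCr space is left abstract here.

```python
import numpy as np

def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
    """Full-range BT.601 RGB -> YCbCr for float images scaled to [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 0.5
    return np.stack([y, cb, cr], axis=-1)
```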
9. The method of claim 1, wherein inputting the corrected image into an image enhancement model to obtain an enhanced image comprises:
inputting the corrected image into a convolutional layer, with an activation function, of the image enhancement model to obtain first data;
inputting the first data into a plurality of convolutional layers with activation functions to obtain second data; and
inputting the second data into a final convolutional layer to obtain the enhanced image.
10. The method of claim 9, wherein inputting the corrected image into an image enhancement model to obtain an enhanced image further comprises:
after each convolution calculation, the calculation result is subjected to boundary processing.
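A minimal PyTorch sketch of the structure in claims 9 and 10: one convolution with an activation, a stack of intermediate convolutions with activations, and a final convolution without one. Boundary processing is realized here as zero padding that preserves the image size; the depth, channel count and kernel size are illustrative assumptions.

```python
import torch.nn as nn

def build_enhancement_model(depth: int = 17, channels: int = 64,
                            kernel: int = 3, image_channels: int = 3) -> nn.Sequential:
    pad = kernel // 2  # boundary processing: keep spatial size after each convolution
    layers = [nn.Conv2d(image_channels, channels, kernel, padding=pad),
              nn.ReLU(inplace=True)]                      # produces the "first data"
    for _ in range(depth - 2):                            # intermediate convolutions
        layers += [nn.Conv2d(channels, channels, kernel, padding=pad),
                   nn.ReLU(inplace=True)]                 # produces the "second data"
    layers.append(nn.Conv2d(channels, image_channels, kernel, padding=pad))
    return nn.Sequential(*layers)                         # final layer: enhanced image
```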
11. The method of claim 1, wherein inputting the corrected image into an image enhancement model to obtain an enhanced image further comprises:
deblurring the corrected image through Wiener filtering.
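A sketch of this step using SciPy's Wiener filter, applied channel-wise; note that scipy.signal.wiener is an adaptive local noise filter, whereas full Wiener deconvolution would also require the blur kernel, which the claim does not specify. The window size is an illustrative choice.

```python
import numpy as np
from scipy.signal import wiener

def deblur(corrected: np.ndarray, window: int = 5) -> np.ndarray:
    """Apply a Wiener filter to each channel of a float image."""
    channels = [wiener(corrected[..., c], mysize=window)
                for c in range(corrected.shape[-1])]
    return np.stack(channels, axis=-1)
```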
12. An image enhancement apparatus, comprising:
a preprocessing module, configured to preprocess an original image to generate a standard image;
a color correction module, configured to perform color correction on the standard image to generate a corrected image; and
an image enhancement module, configured to input the corrected image into an image enhancement model to obtain an enhanced image, wherein the image enhancement model is a deep convolutional neural network model with a non-fixed number of layers.
13. The apparatus of claim 12, further comprising:
a training module, configured to train a deep convolutional neural network with image data to obtain the image enhancement model.
14. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-11.
15. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811291358.9A CN109493296A (en) | 2018-10-31 | 2018-10-31 | Image enchancing method, device, electronic equipment and computer-readable medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109493296A true CN109493296A (en) | 2019-03-19 |
Family
ID=65693619
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811291358.9A Pending CN109493296A (en) | 2018-10-31 | 2018-10-31 | Image enchancing method, device, electronic equipment and computer-readable medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109493296A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105528638A (en) * | 2016-01-22 | 2016-04-27 | Shenyang University of Technology | Method using grey correlation analysis to determine the number of hidden-layer feature maps of a convolutional neural network |
CN106485230A (en) * | 2016-10-18 | 2017-03-08 | Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences | Neural-network-based face detection model training, face detection method and system |
CN108304793A (en) * | 2018-01-26 | 2018-07-20 | Beijing Yizhen Xuesi Education Technology Co., Ltd. | Online learning analysis system and method |
CN108550125A (en) * | 2018-04-17 | 2018-09-18 | Nanjing University | Optical distortion correction method based on deep learning |
CN108711141A (en) * | 2018-05-17 | 2018-10-26 | Chongqing University | Blind restoration method for motion-blurred images using an improved generative adversarial network |
Non-Patent Citations (5)
Title |
---|
CHING-CHIH WENG et al.: "A novel automatic white balance method for digital still cameras", 2005 IEEE International Symposium on Circuits and Systems (ISCAS) * |
KAI ZHANG et al.: "Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising", IEEE Transactions on Image Processing * |
XU Yan et al.: "Underwater image enhancement method based on convolutional neural networks", Journal of Jilin University (Engineering and Technology Edition) * |
MU Deyuan: "Cinematography in the Digital Age", 31 July 2011 * |
XIAO Jianhua: "Intelligent Pattern Recognition Methods", 31 January 2006 * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110120024A (en) * | 2019-05-20 | 2019-08-13 | Baidu Online Network Technology (Beijing) Co., Ltd. | Image processing method, apparatus, device and storage medium |
CN112887758A (en) * | 2019-11-29 | 2021-06-01 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Video processing method and device |
CN112887758B (en) * | 2019-11-29 | 2023-04-14 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Video processing method and device |
CN113436081A (en) * | 2020-03-23 | 2021-09-24 | Alibaba Group Holding Ltd. | Data processing method, image enhancement method and model training method thereof |
CN112019827A (en) * | 2020-09-02 | 2020-12-01 | Shanghai Wondertek Software Co., Ltd. | Method, device, equipment and storage medium for enhancing video image color |
CN115222606A (en) * | 2021-04-16 | 2022-10-21 | Tencent Technology (Shenzhen) Co., Ltd. | Image processing method, image processing device, computer readable medium and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109493296A (en) | Image enchancing method, device, electronic equipment and computer-readable medium | |
Li et al. | Edge-preserving decomposition-based single image haze removal | |
CN109087269B (en) | Weak light image enhancement method and device | |
US10325346B2 (en) | Image processing system for downscaling images using perceptual downscaling method | |
Xiong et al. | Unsupervised low-light image enhancement with decoupled networks | |
US10817984B2 (en) | Image preprocessing method and device for JPEG compressed file | |
CN107358586A (en) | 2017-11-17 | Image enhancement method, device and equipment |
Liu et al. | Graph-based joint dequantization and contrast enhancement of poorly lit JPEG images | |
KR102095443B1 (en) | Method and Apparatus for Enhancing Image using Structural Tensor Based on Deep Learning | |
US11750935B2 (en) | Systems and methods of image enhancement | |
Kousha et al. | Modeling sRGB camera noise with normalizing flows |
Muniraj et al. | Underwater image enhancement by color correction and color constancy via Retinex for detail preserving | |
CN111353955A (en) | Image processing method, device, equipment and storage medium | |
Ma et al. | Underwater image restoration through a combination of improved dark channel prior and gray world algorithms | |
CN110717864B (en) | Image enhancement method, device, terminal equipment and computer readable medium | |
WO2023215371A1 (en) | System and method for perceptually optimized image denoising and restoration | |
WO2016051716A1 (en) | Image processing method, image processing device, and recording medium for storing image processing program | |
CN111292251B (en) | Image color cast correction method, device and computer storage medium | |
CN109410143B (en) | Image enhancement method and device, electronic equipment and computer readable medium | |
Kumar et al. | Dynamic stochastic resonance and image fusion based model for quality enhancement of dark and hazy images | |
Song et al. | Hue-preserving and saturation-improved color histogram equalization algorithm | |
CN111861940A (en) | Image toning enhancement method based on condition continuous adjustment | |
WO2020062899A1 (en) | Method for obtaining transparency masks by means of foreground and background pixel pairs and grayscale information | |
Ahn et al. | CODEN: combined optimization-based decomposition and learning-based enhancement network for Retinex-based brightness and contrast enhancement | |
Wang et al. | Balanced color contrast enhancement for digital images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190319 |