CN111192224B - Image enhancement method, image enhancement device, electronic equipment and computer readable storage medium - Google Patents

Image enhancement method, image enhancement device, electronic equipment and computer readable storage medium

Info

Publication number
CN111192224B
CN111192224B
Authority
CN
China
Prior art keywords
image
reflectivity
loss
module
enhancement
Prior art date
Legal status
Active
Application number
CN202010029973.3A
Other languages
Chinese (zh)
Other versions
CN111192224A (en)
Inventor
何宁 (He Ning)
刘佳敏 (Liu Jiamin)
张聪聪 (Zhang Congcong)
Current Assignee
Beijing Union University
Original Assignee
Beijing Union University
Priority date
Filing date
Publication date
Application filed by Beijing Union University filed Critical Beijing Union University
Priority to CN202010029973.3A priority Critical patent/CN111192224B/en
Publication of CN111192224A publication Critical patent/CN111192224A/en
Application granted granted Critical
Publication of CN111192224B publication Critical patent/CN111192224B/en
Legal status: Active (granted)

Classifications

    • G06T 5/73
    • G06N 3/02 Neural networks
    • G06N 3/045 Combinations of networks
    • G06T 2207/10024 Color image
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]

Abstract

The application discloses an image enhancement method, an image enhancement device, an electronic device, and a computer-readable storage medium. The image enhancement method comprises: decomposing an image through a convolutional neural network; and performing enhancement processing on the decomposed image. Decomposing the image through the convolutional neural network comprises: inputting an image pair into the convolutional neural network, the image pair comprising a low-light image and a normal-light image; extracting shallow features from the input image through a convolution layer; mapping the RGB image to reflectivity and illumination by rectified linear units; projecting the reflectivity and illumination from the feature space using a convolution layer; and performing end-to-end fine-tuning with a reflectivity loss function to obtain a decomposed image. The image enhancement method provided by the embodiments of the application meets human visual requirements, generalizes well, offers high accuracy and efficiency, and solves the technical problem of over-enhancement of images.

Description

Image enhancement method, image enhancement device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image enhancement method, an image enhancement device, an electronic device, and a computer readable storage medium.
Background
In conventional approaches, images are enhanced mainly in the frequency domain based on Retinex theory, which assumes that an image can be decomposed into a reflection map and an illumination map. Retinex theory further assumes that the illumination varies smoothly in space, but this assumption deviates from reality, and MSRCR assumes by default that the proportions of the three color channels of an image are equal, which leads to darkened, noisy results. On this basis, researchers have proposed using the image reflection map for enhancement, where the key is how well the reflection map is estimated; however, reflection-map-based methods easily cause over-enhancement, which is the most frequent problem of existing illumination-map estimation, and the over-exposure problem has still not been solved effectively.
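For reference, the Retinex decomposition that these methods build on is commonly written as follows; this is the standard formulation paraphrased above, not a formula specific to this application:

```latex
% Retinex image model: the observed image S is the element-wise
% (Hadamard) product of a reflectance map R and an illumination map I
% at every pixel (x, y).
\[
  S(x, y) = R(x, y) \circ I(x, y)
\]
```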
Disclosure of Invention
The object of the application is to provide an image enhancement method, an image enhancement device, an electronic device and a computer readable storage medium. The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview and is intended to neither identify key/critical elements nor delineate the scope of such embodiments. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
According to an aspect of the embodiments of the present application, there is provided an image enhancement method, including:
decomposing the image by a convolutional neural network;
and performing enhancement processing on the decomposed image.
Further, the decomposing of the image by the convolutional neural network includes:
inputting an image pair into the convolutional neural network; the image pair comprises a low-light image and a normal-light image;
extracting shallow features from the input image through a convolution layer;
mapping the RGB image to reflectivity and illumination by rectified linear units;
projecting the reflectivity and illumination from the feature space using a convolution layer;
and performing end-to-end fine-tuning with the reflectivity loss function to obtain a decomposed image.
Further, the enhancing of the decomposed image includes:
connecting the reflectivity and the illumination and using the result as input;
convolving and pooling the input;
in a U-Net fashion, upsampling the feature maps with nearest-neighbor interpolation so that their sizes match the feature maps to be merged, and adding them element-wise;
performing feature fusion to obtain a feature map that preserves detail more completely;
and fine-tuning the network end-to-end with stochastic gradient descent to obtain an enhanced image.
Further, in the method, the loss function is derived from a weighted sum of reconstruction loss, invariant reflectivity loss and smoothness loss.
Further, in the method, the loss function L consists of three terms: the reconstruction loss $L_{recon}$, the invariant reflectivity loss $L_{ir}$, and the smoothness loss $L_{is}$:

$$L = L_{recon} + \lambda_{ir} L_{ir} + \lambda_{is} L_{is}$$

where $\lambda_{ir}$ and $\lambda_{is}$ are coefficients weighting reflectivity consistency and illumination smoothness, respectively.
According to another aspect of an embodiment of the present application, there is provided an image enhancement apparatus including:
the decomposition module is used for decomposing the image through a convolutional neural network;
and the enhancement module is used for enhancing the decomposed image.
Further, the decomposition module includes:
an input module for inputting an image pair to a convolutional neural network; the image pair comprises a low-light image and a normal-light image;
the extraction module is used for extracting shallow layer features from the input image through the convolution layer;
a mapping module for mapping the RGB image to reflectivity and illumination by rectified linear units;
a projection module for projecting reflectivity and illuminance from the feature space using the convolution layer;
and the first fine tuning module is used for carrying out end-to-end fine tuning by utilizing the reflectivity loss function to obtain a decomposed image.
Further, the enhancement module includes:
a connection module for connecting the reflectivity and the illumination and using the result as input;
a convolution-pooling module for convolving and pooling the input;
an addition module for, in a U-Net fashion, upsampling the feature maps with nearest-neighbor interpolation so that their sizes match the feature maps to be merged, and adding them element-wise;
a fusion module for performing feature fusion to obtain a feature map that preserves detail more completely;
and a second fine-tuning module for fine-tuning the network end-to-end with stochastic gradient descent to obtain an enhanced image.
According to another aspect of the embodiments of the present application, there is provided an electronic device including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor executing the program to implement the image enhancement method described above.
According to another aspect of the embodiments of the present application, there is provided a computer-readable storage medium having stored thereon a computer program that is executed by a processor to implement the above-described image enhancement method.
One of the technical solutions provided in one aspect of the embodiments of the present application may include the following beneficial effects:
the image enhancement method provided by the embodiment of the application can meet the visual demands of people, is high in universality of image enhancement, high in accuracy and high in efficiency, and solves the technical problem of excessive enhancement of images.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments described in the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 illustrates a flow chart of an image enhancement method of one embodiment of the present application;
FIG. 2 shows a flowchart of the steps of decomposing an image in one embodiment of the present application;
FIG. 3 shows a block diagram of an image enhancement device of one embodiment of the present application;
FIG. 4 illustrates an RU-Net network architecture diagram in one embodiment of the present application;
FIG. 5 illustrates dataset image pairs according to an embodiment of the present application;
fig. 6 shows decomposition results for a low-light image and a normal-light image together with brightness enhancement results, wherein fig. 6 (a), 6 (b), and 6 (c) are the first group of corresponding images: fig. 6 (a) shows the decomposed normal-light image, fig. 6 (b) shows the decomposed low-light image, and fig. 6 (c) shows the enhanced low-light image; fig. 6 (d), 6 (e), and 6 (f) are the second group: fig. 6 (d) shows the decomposed normal-light image, fig. 6 (e) shows the decomposed low-light image, and fig. 6 (f) shows the enhanced low-light image.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As shown in fig. 1, one embodiment of the present application provides an image enhancement method, including the steps of:
s10, decomposing an image through a convolutional neural network;
specifically, on a Retinex-Net basis, images are learned and decomposed by Convolutional Neural Networks (CNNs);
s20, performing enhancement processing on the decomposed image obtained in the step S10;
specifically, the decomposed image is subjected to enhancement processing through a U-Net-based enhancement network architecture.
The image enhancement method of this embodiment is implemented by RU-Net, as shown in fig. 4.
As shown in fig. 2, in some embodiments, step S10, decomposing the image, includes:
s101, inputting low-light image S low And normal light image S nomal An image pair;
as shown in fig. 5, a dataset image pair;
s102, extracting shallow features from an input image by using a 3 multiplied by 3 convolution layer;
s103, mapping the RGB image to reflectivity R and illuminance I through 5 3X 3 convolution layers with a rectifying linear unit (ReLU) as an activation function;
s104, projecting the reflectivity R and the illuminance I from the feature space by using a 3X 3 convolution layer;
s105, performing end-to-end fine adjustment by using the reflectivity loss function to obtain a decomposed image.
In some embodiments, step S20, performing enhancement processing on the decomposed image obtained in step S10, includes:
S201, connecting the reflectivity R and the illumination I and using the result as input, wherein the convolution layers use 3×3 kernels and the pooling layers use 2×2 kernels;
S202, convolving and pooling the input;
S203, in a U-Net fashion, upsampling the feature maps with nearest-neighbor interpolation so that their sizes match the feature maps to be merged, and adding them element-wise;
S204, performing feature fusion to obtain a feature map that preserves detail more completely;
S205, fine-tuning the network end-to-end with stochastic gradient descent to obtain an enhanced image; wherein the loss function is a weighted sum of the reconstruction loss, the invariant reflectivity loss, and the smoothness loss. The loss function L consists of the reconstruction loss $L_{recon}$, the invariant reflectivity loss $L_{ir}$, and the smoothness loss $L_{is}$, where $\lambda_{ir}$ and $\lambda_{is}$ are coefficients weighting reflectivity consistency and illumination smoothness, respectively:
$$L = L_{recon} + \lambda_{ir} L_{ir} + \lambda_{is} L_{is} \qquad (1)$$
based on R low And R is high The assumption that the corresponding illumination map can be used to reconstruct the image, reconstruction loss L recon Can be expressed as:
introduction of shared reflectance loss L ir To limit uniformity of reflectivity:
L ir =||R low -R normal || 1 (3)
In areas where the pixels of the image are non-uniform or the brightness changes sharply, directly using total variation (TV) minimization as a loss function fails. To make the loss aware of the image structure, this embodiment weights the original TV term with the gradient of the reflectivity map, obtaining $L_{is}$:

$$L_{is} = \sum_{i \in \{low,\, normal\}} \left\| \nabla I_i \circ \exp(-\lambda_g \nabla R_i) \right\|_1 \qquad (4)$$

where $\nabla$ denotes the gradient, including the horizontal component $\nabla_h$ and the vertical component $\nabla_v$, and $\lambda_g$ is a coefficient balancing the perceived strength of the structure. With the weight $\exp(-\lambda_g \nabla R_i)$, $L_{is}$ relaxes the smoothness constraint where the reflectivity gradient is steep, i.e., at locations where the image structure lies and where the illumination should be allowed to be discontinuous.
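To make the loss terms concrete, the following is a minimal PyTorch sketch of equations (1)–(4), assuming L1 norms throughout, simple finite-difference gradients, and channel-averaged reflectivity gradients; the helper names and the exact gradient handling are illustrative choices, not prescribed by the text:

```python
import torch
import torch.nn.functional as F

def gradients(x: torch.Tensor):
    """Horizontal/vertical finite-difference gradient magnitudes,
    zero-padded back to the input size."""
    dh = torch.abs(x[..., :, 1:] - x[..., :, :-1])
    dv = torch.abs(x[..., 1:, :] - x[..., :-1, :])
    return F.pad(dh, (0, 1, 0, 0)), F.pad(dv, (0, 0, 0, 1))

def recon_term(r, i, s):
    # One (i, j) term of equation (2): || R_i o I_j - S_j ||_1,
    # with the 1-channel I broadcast over the 3 channels of R.
    return torch.mean(torch.abs(r * i - s))

def ir_loss(r_low, r_normal):
    # Equation (3): invariant reflectivity loss || R_low - R_normal ||_1.
    return torch.mean(torch.abs(r_low - r_normal))

def is_loss(i, r, lam_g: float = 10.0):
    # Equation (4): TV of the illumination map, relaxed where the
    # (channel-averaged) reflectivity gradient is steep.
    r_gray = r.mean(dim=1, keepdim=True)
    ih, iv = gradients(i)
    rh, rv = gradients(r_gray)
    return torch.mean(ih * torch.exp(-lam_g * rh)
                      + iv * torch.exp(-lam_g * rv))

def total_loss(r_low, i_low, r_normal, i_normal, s_low, s_normal,
               lam_ir=0.001, lam_is=0.1, lam_cross=0.001):
    # Equation (1), with lam_ij = 1 for i = j and 0.001 for i != j
    # as stated in the training settings below.
    recon = (recon_term(r_low, i_low, s_low)
             + recon_term(r_normal, i_normal, s_normal)
             + lam_cross * recon_term(r_low, i_normal, s_normal)
             + lam_cross * recon_term(r_normal, i_low, s_low))
    smooth = is_loss(i_low, r_low) + is_loss(i_normal, r_normal)
    return recon + lam_ir * ir_loss(r_low, r_normal) + lam_is * smooth
```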
With the method of this embodiment, 1759 image pairs were used for training and 100 images for testing. Ground-truth reflectivity does not need to be provided during training; reflectivity consistency and illumination-map smoothness only need to be embedded into the network as loss functions. The decomposition of this embodiment is thus learned automatically from paired low-light/normal-light images and is naturally suited to describing the light variation between images under different lighting conditions.
During training, the batch size is set to 16 and the image patch size to 48×48. $\lambda_{ir}$, $\lambda_{is}$, and $\lambda_g$ are set to 0.001, 0.1, and 10, respectively. $\lambda_{ij}$ is set to 0.001 when $i \neq j$ and to 1 when $i = j$, and the network as a whole is iterated 100 times.
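Putting steps S201–S204 together, a minimal PyTorch sketch of a U-Net-style enhancement network consistent with the description might look as follows; the network depth, the channel width, and the final recombination of R with an adjusted illumination map are assumptions in the spirit of Retinex-Net, not details fixed by the text:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_relu(cin: int, cout: int) -> nn.Sequential:
    # S201: all convolutions use 3x3 kernels.
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.ReLU(inplace=True))

class EnhanceNet(nn.Module):
    """Sketch of the U-Net-style enhancement stage (S201-S204)."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.enc1 = conv_relu(4, channels)     # input: concat(R, I)
        self.enc2 = conv_relu(channels, channels)
        self.enc3 = conv_relu(channels, channels)
        self.dec2 = conv_relu(channels, channels)
        self.dec1 = conv_relu(channels, channels)
        self.fuse = nn.Conv2d(channels, 1, 1)  # S204: feature fusion

    def forward(self, r: torch.Tensor, i: torch.Tensor) -> torch.Tensor:
        x1 = self.enc1(torch.cat([r, i], dim=1))  # S201: connect R and I
        x2 = self.enc2(F.max_pool2d(x1, 2))       # S202: conv + 2x2 pooling
        x3 = self.enc3(F.max_pool2d(x2, 2))
        # S203: nearest-neighbor upsampling to the size of the matching
        # encoder feature map, then element-wise addition.
        u2 = self.dec2(F.interpolate(x3, size=x2.shape[-2:],
                                     mode="nearest") + x2)
        u1 = self.dec1(F.interpolate(u2, size=x1.shape[-2:],
                                     mode="nearest") + x1)
        i_hat = torch.sigmoid(self.fuse(u1))      # adjusted illumination
        return r * i_hat                          # enhanced image

# Example: enhance a decomposed low-light image.
r_low, i_low = torch.rand(1, 3, 48, 48), torch.rand(1, 1, 48, 48)
enhanced = EnhanceNet()(r_low, i_low)   # shape: (1, 3, 48, 48)
```

Training then follows S205: both networks are fine-tuned end-to-end with stochastic gradient descent under the loss of equation (1), using the batch size and patch size stated above.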
Fig. 6 shows decomposition results for a low-light image and a normal-light image together with brightness enhancement results, wherein fig. 6 (a), 6 (b), and 6 (c) are the first group of corresponding images: fig. 6 (a) shows the decomposed normal-light image, fig. 6 (b) shows the decomposed low-light image, and fig. 6 (c) shows the enhanced low-light image; fig. 6 (d), 6 (e), and 6 (f) are the second group: fig. 6 (d) shows the decomposed normal-light image, fig. 6 (e) shows the decomposed low-light image, and fig. 6 (f) shows the enhanced low-light image. The two groups of images in fig. 6 clearly illustrate the image enhancement achieved by the method of this embodiment.
Experimental results show that the method of this embodiment meets human visual requirements, generalizes well across images, and achieves high accuracy and efficiency in image enhancement.
As shown in fig. 3, this embodiment further provides an image enhancement apparatus, including:
a decomposition module 100 for decomposing the image through a convolutional neural network;
and the enhancement module 200 is used for performing enhancement processing on the decomposed image.
In some embodiments, the decomposition module comprises:
an input module for inputting an image pair to a convolutional neural network; the image pair comprises a low-light image and a normal-light image;
the extraction module is used for extracting shallow layer features from the input image through the convolution layer;
a mapping module for mapping the RGB image to reflectivity and illumination by rectified linear units;
a projection module for projecting reflectivity and illuminance from the feature space using the convolution layer;
and the first fine tuning module is used for carrying out end-to-end fine tuning by utilizing the reflectivity loss function to obtain a decomposed image.
In certain embodiments, the enhancement module comprises:
a connection module for connecting the reflectivity and the illumination and using the result as input;
a convolution-pooling module for convolving and pooling the input;
an addition module for, in a U-Net fashion, upsampling the feature maps with nearest-neighbor interpolation so that their sizes match the feature maps to be merged, and adding them element-wise;
a fusion module for performing feature fusion to obtain a feature map that preserves detail more completely;
and a second fine-tuning module for fine-tuning the network end-to-end with stochastic gradient descent to obtain an enhanced image.
The embodiment also provides an electronic device, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor executes the program to realize the image enhancement method.
The present embodiment also provides a computer-readable storage medium having stored thereon a computer program that is executed by a processor to implement the above-described image enhancement method.
It should be noted that:
the term "module" is not intended to be limited to a particular physical form. Depending on the particular application, modules may be implemented as hardware, firmware, software, and/or combinations thereof. Furthermore, different modules may share common components or even be implemented by the same components. There may or may not be clear boundaries between different modules.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may also be used with the teachings herein. The required structure for the construction of such devices is apparent from the description above. In addition, the present application is not directed to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present application as described herein, and the above description of specific languages is provided for disclosure of preferred embodiments of the present application.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the present application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the application, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed as reflecting the intention that: i.e., the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the present application and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
Various component embodiments of the present application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components in the creation means of a virtual machine according to embodiments of the present application may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present application may also be embodied as an apparatus or device program (e.g., computer program and computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present application may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names.
It should be understood that, although the steps in the flowcharts of the figures are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited in order and may be performed in other orders, unless explicitly stated herein. Moreover, at least some of the steps in the flowcharts of the figures may include a plurality of sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times, the order of their execution not necessarily being sequential, but may be performed in turn or alternately with other steps or at least a portion of the other steps or stages.
The foregoing examples merely represent embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the present application. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (4)

1. An image enhancement method, comprising:
decomposing the image through a convolutional neural network specifically comprises:
inputting an image pair into a convolutional neural network; the image pair comprises a low-light image and a normal-light image;
extracting shallow features from an input image through a convolution layer;
mapping the RGB image to reflectivity and illumination by a rectified linear unit;
using a convolution layer to project reflectivity and illumination from the feature space;
performing end-to-end fine tuning by using a reflectivity loss function to obtain a decomposed image;
performing enhancement processing on the decomposed image, which specifically comprises:
connecting the reflectivity and the illumination and using the result as input;
convolving and pooling the input;
in a U-Net fashion, upsampling the feature maps with nearest-neighbor interpolation so that their sizes match the feature maps to be merged, and adding them element-wise;
performing feature fusion to obtain a feature map that preserves detail more completely;
fine-tuning the network end-to-end with stochastic gradient descent to obtain an enhanced image;
the loss function is obtained as a weighted sum of a reconstruction loss, an invariant reflectivity loss, and a smoothness loss; specifically, the loss function L consists of the reconstruction loss $L_{recon}$, the invariant reflectivity loss $L_{ir}$, and the smoothness loss $L_{is}$:

$$L = L_{recon} + \lambda_{ir} L_{ir} + \lambda_{is} L_{is}$$

wherein $\lambda_{ir}$ and $\lambda_{is}$ are coefficients representing reflectivity consistency and illumination smoothness, respectively;

the smoothness loss $L_{is}$ is given by:

$$L_{is} = \sum_{i} \left\| \nabla I_i \circ \exp(-\lambda_g \nabla R_i) \right\|_1$$

wherein $\nabla$ denotes the gradient, including the horizontal component $\nabla_h$ and the vertical component $\nabla_v$; $R_i$ denotes the reflectivity of the $i$-th image; $I_i$ denotes the illumination of the $i$-th image; the range of $i$ covers the low-light image and the normal-light image; $\lambda_g$ is a coefficient balancing the perceived strength of the structure; and $\exp(-\lambda_g \nabla R_i)$ denotes the weight.
2. An image enhancement apparatus, comprising:
the decomposition module is used for decomposing the image through a convolutional neural network, and specifically comprises the following steps:
an input module for inputting an image pair to a convolutional neural network; the image pair comprises a low-light image and a normal-light image;
the extraction module is used for extracting shallow layer features from the input image through the convolution layer;
a mapping module for mapping the RGB image to reflectivity and illumination by a rectified linear unit;
a projection module for projecting reflectivity and illuminance from the feature space using the convolution layer;
the first fine tuning module is used for carrying out end-to-end fine tuning by utilizing the reflectivity loss function to obtain a decomposed image;
the enhancement module is used for enhancing the decomposed image, and specifically comprises the following steps:
a connection module for connecting the reflectivity and the illumination and using the result as input;
a convolution-pooling module for convolving and pooling the input;
an addition module for, in a U-Net fashion, upsampling the feature maps with nearest-neighbor interpolation so that their sizes match the feature maps to be merged, and adding them element-wise;
a fusion module for performing feature fusion to obtain a feature map that preserves detail more completely;
a second fine-tuning module for fine-tuning the network end-to-end with stochastic gradient descent to obtain an enhanced image;
the loss function is obtained as a weighted sum of a reconstruction loss, an invariant reflectivity loss, and a smoothness loss; specifically, the loss function L consists of the reconstruction loss $L_{recon}$, the invariant reflectivity loss $L_{ir}$, and the smoothness loss $L_{is}$:

$$L = L_{recon} + \lambda_{ir} L_{ir} + \lambda_{is} L_{is}$$

wherein $\lambda_{ir}$ and $\lambda_{is}$ are coefficients representing reflectivity consistency and illumination smoothness, respectively;

the smoothness loss $L_{is}$ is given by:

$$L_{is} = \sum_{i} \left\| \nabla I_i \circ \exp(-\lambda_g \nabla R_i) \right\|_1$$

wherein $\nabla$ denotes the gradient, including the horizontal component $\nabla_h$ and the vertical component $\nabla_v$; $R_i$ denotes the reflectivity of the $i$-th image; $I_i$ denotes the illumination of the $i$-th image; the range of $i$ covers the low-light image and the normal-light image; $\lambda_g$ is a coefficient balancing the perceived strength of the structure; and $\exp(-\lambda_g \nabla R_i)$ denotes the weight.
3. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor executing the program to implement the image enhancement method of claim 1.
4. A computer-readable storage medium having stored thereon a computer program, the program being executable by a processor to implement the image enhancement method of claim 1.
CN202010029973.3A 2020-01-13 2020-01-13 Image enhancement method, image enhancement device, electronic equipment and computer readable storage medium Active CN111192224B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010029973.3A CN111192224B (en) 2020-01-13 2020-01-13 Image enhancement method, image enhancement device, electronic equipment and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN111192224A CN111192224A (en) 2020-05-22
CN111192224B (en) 2024-03-19

Family

ID=70708067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010029973.3A Active CN111192224B (en) 2020-01-13 2020-01-13 Image enhancement method, image enhancement device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111192224B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108230277A (en) * 2018-02-09 2018-06-29 中国人民解放军战略支援部队信息工程大学 A kind of dual intensity CT picture breakdown methods based on convolutional neural networks
CN108961237A (en) * 2018-06-28 2018-12-07 安徽工程大学 A kind of low-dose CT picture breakdown method based on convolutional neural networks
CN109658354A (en) * 2018-12-20 2019-04-19 上海联影医疗科技有限公司 A kind of image enchancing method and system
CN110232661A (en) * 2019-05-03 2019-09-13 天津大学 Low illumination colour-image reinforcing method based on Retinex and convolutional neural networks
CN110378845A (en) * 2019-06-17 2019-10-25 杭州电子科技大学 A kind of image repair method under extreme condition based on convolutional neural networks
RU2706891C1 (en) * 2019-06-06 2019-11-21 Самсунг Электроникс Ко., Лтд. Method of generating a common loss function for training a convolutional neural network for converting an image into an image with drawn parts and a system for converting an image into an image with drawn parts
CN110675336A (en) * 2019-08-29 2020-01-10 苏州千视通视觉科技股份有限公司 Low-illumination image enhancement method and device


Also Published As

Publication number Publication date
CN111192224A (en) 2020-05-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant