CN111192224A - Image enhancement method and device, electronic equipment and computer readable storage medium - Google Patents
- Publication number
- CN111192224A (application CN202010029973.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- reflectivity
- module
- illumination
- enhancement
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G06T5/73—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The application discloses an image enhancement method, an image enhancement device, an electronic device, and a computer-readable storage medium. The image enhancement method comprises: decomposing an image through a convolutional neural network; and performing enhancement processing on the decomposed image. Decomposing the image through the convolutional neural network comprises: inputting an image pair, consisting of a low-light image and a normal-light image, into the convolutional neural network; extracting shallow features from the input image through a convolutional layer; mapping the RGB image to reflectivity and illumination through rectified linear units; projecting the reflectivity and the illumination back from the feature space using a convolutional layer; and performing end-to-end fine-tuning using a reflectivity loss function to obtain the decomposed image. The image enhancement method provided by the embodiments of the application can satisfy human visual requirements, offers strong generality, high accuracy, and high efficiency for image enhancement, and solves the technical problem of over-enhancement.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image enhancement method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Conventional methods enhance the image mainly in the frequency domain based on Retinex theory, which assumes that an image can be decomposed into a reflection map and an illumination map. Retinex theory further assumes that illumination varies gradually in space, but this assumption deviates from reality; moreover, MSRCR assumes by default that the proportions of the three color channels are equal, which results in dark and noisy processed images. On this basis, researchers have proposed image enhancement using the image reflection map, where the key is how well the reflection map is estimated; however, reflection-map-based methods easily cause over-enhancement, which is the most frequent problem in existing illumination map estimation, and the over-exposure problem has still not been effectively solved.
Disclosure of Invention
The application aims to provide an image enhancement method, an image enhancement device, an electronic device and a computer-readable storage medium. The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview and is intended to neither identify key/critical elements nor delineate the scope of such embodiments. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
According to an aspect of an embodiment of the present application, there is provided an image enhancement method, including:
decomposing the image through a convolutional neural network;
and performing enhancement processing on the decomposed image.
Further, the decomposing the image by the convolutional neural network includes:
inputting an image pair to a convolutional neural network; the image pair comprises a low light image and a normal light image;
extracting shallow features from the input image through the convolutional layer;
mapping the RGB image to reflectivity and illumination through rectified linear units;
projecting the reflectivity and the illumination from the feature space using the convolutional layer;
and performing end-to-end fine-tuning using the reflectivity loss function to obtain the decomposed image.
Further, the enhancement processing of the decomposed image includes:
concatenating the reflectivity and the illumination as input;
convolving and pooling the input;
combining with U-Net, upsampling the image via nearest-neighbor interpolation so that its size matches the merged feature map, and adding them element-wise;
carrying out feature fusion to obtain a feature map that preserves more complete detail;
and performing end-to-end fine-tuning of the network using stochastic gradient descent to obtain the enhanced image.
Further, in the method, the loss function is obtained as a weighted sum of a reconstruction loss, an invariant reflectivity loss, and an illumination smoothness loss.
Further, in the method, the loss function L is composed of the reconstruction loss L_recon, the invariant reflectivity loss L_ir, and the illumination smoothness loss L_is, as follows:
L = L_recon + λ_ir·L_ir + λ_is·L_is,
where λ_ir and λ_is are coefficients weighting reflectivity consistency and illumination smoothness, respectively.
According to another aspect of embodiments of the present application, there is provided an image enhancement apparatus including:
the decomposition module is used for decomposing the image through a convolutional neural network;
and the enhancement module is used for enhancing the decomposed image.
Further, the decomposition module includes:
an input module for inputting an image pair to a convolutional neural network; the image pair comprises a low light image and a normal light image;
the extraction module is used for extracting shallow features from the input image through the convolution layer;
the mapping module is used for mapping the RGB image to reflectivity and illumination through rectified linear units;
a projection module for projecting the reflectivity and illumination from the feature space using the convolutional layer;
and the first fine-tuning module is used for performing end-to-end fine-tuning with the reflectivity loss function to obtain the decomposed image.
Further, the enhancement module includes:
the connecting module is used for concatenating the reflectivity and the illumination as input;
a convolution pooling module for convolving and pooling the input;
the adding module is used for, in combination with U-Net, upsampling the image via nearest-neighbor interpolation so that its size matches the merged feature map, and adding them element-wise;
the fusion module is used for carrying out feature fusion to obtain a feature map with more complete details stored;
and the second fine-tuning module is used for performing end-to-end fine-tuning of the network with stochastic gradient descent to obtain the enhanced image.
According to another aspect of the embodiments of the present application, there is provided an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor executing the program to implement the image enhancement method described above.
According to another aspect of embodiments of the present application, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image enhancement method described above.
The technical scheme provided by one aspect of the embodiment of the application can have the following beneficial effects:
the image enhancement method provided by the embodiment of the application can meet the visual demands of people, has strong universality, high accuracy and high efficiency for image enhancement, and solves the technical problem of excessive image enhancement.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the embodiments of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description, the claims, and the appended drawings.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description illustrate only some embodiments of the present application, and that other drawings can be derived from them by those skilled in the art without creative effort.
FIG. 1 shows a flow diagram of an image enhancement method of an embodiment of the present application;
FIG. 2 shows a flow chart of the steps of decomposing an image in one embodiment of the present application;
FIG. 3 shows a block diagram of an image enhancement apparatus of an embodiment of the present application;
FIG. 4 shows a RUNet network architecture diagram in one embodiment of the present application;
FIG. 5 illustrates a data set image pair of an embodiment of the present application;
fig. 6 shows a decomposed diagram of a low light image and a normal light image, and a luminance enhancement effect diagram, where fig. 6(a), 6(b), and 6(c) are corresponding images of a first group, fig. 6(a) is a diagram of a decomposed normal light image, fig. 6(b) is a low light image decomposition effect diagram, and fig. 6(c) is a low light image enhancement effect diagram; fig. 6(d), 6(e) and 6(f) are corresponding images of the second group, fig. 6(d) is a diagram after normal light image decomposition, fig. 6(e) is a low light image decomposition effect diagram, and fig. 6(f) is a low light image enhancement effect diagram.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is further described with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As shown in fig. 1, an embodiment of the present application provides an image enhancement method, including the steps of:
s10, decomposing the image through a convolutional neural network;
specifically, on the basis of Retinex-Net, the decomposition of the image is learned by a convolutional neural network (CNN);
s20, performing enhancement processing on the decomposed image obtained in step S10;
specifically, enhancement processing is carried out on the decomposed image through an enhanced network architecture based on U-Net.
The image enhancement method of the present embodiment is implemented by RU-Net, as shown in fig. 4.
As shown in fig. 2, in some embodiments, the step S10 of decomposing the image includes:
s101, inputting a low-light image SlowAnd the normal light image SnomalAn image pair;
FIG. 5 shows a data set image pair;
s102, extracting shallow features from an input image by using a 3 x 3 convolutional layer;
s103, mapping the RGB image to a reflectivity R and an illumination I by taking a rectifying linear unit (ReLU) as 5 3 × 3 convolution layers of an activation function;
s104, projecting the reflectivity R and the illumination I from the characteristic space by using a 3 multiplied by 3 convolutional layer;
and S105, performing end-to-end fine-tuning using the reflectivity loss function to obtain the decomposed image.
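For illustration, the decomposition steps S101 to S105 can be sketched as a small PyTorch network. The channel width of 64, the absence of an activation on the shallow layer, the sigmoid output, and the 3-channel/1-channel split into R and I are assumptions of this sketch, not details stated by the embodiment:

```python
import torch
import torch.nn as nn

class DecomNet(nn.Module):
    """Sketch of the decomposition sub-network of steps S101-S105.
    Channel widths and output activations are illustrative assumptions."""
    def __init__(self, channels=64):
        super().__init__()
        # S102: one 3x3 conv extracts shallow features (no activation assumed).
        self.shallow = nn.Conv2d(3, channels, 3, padding=1)
        # S103: five 3x3 convs with ReLU activations.
        body = []
        for _ in range(5):
            body += [nn.Conv2d(channels, channels, 3, padding=1),
                     nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*body)
        # S104: one 3x3 conv projects back from feature space;
        # 4 output channels: 3 for reflectivity R, 1 for illumination I.
        self.proj = nn.Conv2d(channels, 4, 3, padding=1)

    def forward(self, s):
        feat = self.body(self.shallow(s))
        out = torch.sigmoid(self.proj(feat))  # keep R and I in [0, 1]
        r, i = out[:, :3], out[:, 3:]
        return r, i
```

In S101 both S_low and S_normal would be passed through the same network, yielding (R_low, I_low) and (R_normal, I_normal) for the losses described below.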
In some embodiments, the step S20 of performing enhancement processing on the decomposed image obtained in the step S10 includes:
s201, connecting the reflectivity R and the illumination I and then using the connection as input, wherein the convolution kernel size is 3, and the pooling layer convolution kernel size is 2;
s202, performing convolution and pooling on input;
s203, combining U-Net, increasing the image by using a nearest neighbor interpolation method, ensuring that the size of the image is consistent with that of the combined feature image, and correspondingly adding the image and the combined feature image;
s204, carrying out feature fusion to obtain a feature diagram with more complete details stored;
s205, performing end-to-end fine adjustment on the network by using random gradient descent to obtain an enhanced image;
wherein the loss function is obtained as a weighted sum of the reconstruction loss, the invariant reflectivity loss, and the illumination smoothness loss. The loss function L is composed of the reconstruction loss L_recon, the invariant reflectivity loss L_ir, and the illumination smoothness loss L_is, where λ_ir and λ_is are coefficients weighting reflectivity consistency and illumination smoothness, respectively:
L = L_recon + λ_ir·L_ir + λ_is·L_is    (1)
based on RlowAnd RhighCan reconstruct the image using the assumption of corresponding illumination maps, the loss L of reconstructionreconCan be expressed as:
An invariant (shared) reflectivity loss L_ir is introduced to constrain the consistency of the reflectivity:
L_ir = ||R_low − R_normal||_1    (3)
In regions where the image pixels are uniform or the brightness changes sharply, directly using total variation minimization (TV) as the loss function fails. To make the loss aware of the image structure, this embodiment weights the original TV function with the gradient of the reflectivity map to obtain L_is:
L_is = Σ_{i=low,normal} ||∇I_i ∘ exp(−λ_g·∇R_i)||    (4)
where ∇ denotes the gradient, comprising the horizontal ∇_h and vertical ∇_v directions, and λ_g is a coefficient balancing the strength of the structure awareness. With the weight exp(−λ_g·∇R_i), L_is relaxes the smoothness constraint where the gradient of the reflectivity is steep, i.e., where the image structure is located and where the illumination should be allowed to be discontinuous.
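The invariant reflectivity loss, the structure-aware smoothness loss, and their weighted combination can be written out concretely. The NumPy sketch below uses simple forward finite differences for the gradient; the exact discretization and the mean reduction are assumptions of this sketch:

```python
import numpy as np

def grad_h(x):
    # Horizontal forward difference, zero at the right edge.
    g = np.zeros_like(x)
    g[..., :, :-1] = x[..., :, 1:] - x[..., :, :-1]
    return g

def grad_v(x):
    # Vertical forward difference, zero at the bottom edge.
    g = np.zeros_like(x)
    g[..., :-1, :] = x[..., 1:, :] - x[..., :-1, :]
    return g

def reflectance_loss(r_low, r_normal):
    """Invariant reflectivity loss L_ir = ||R_low - R_normal||_1."""
    return np.abs(r_low - r_normal).mean()

def smoothness_loss(i_map, r_map, lam_g=10.0):
    """Structure-aware smoothness loss L_is: the TV of the illumination map,
    down-weighted where the reflectivity gradient is steep."""
    # Averaging over colour channels gives a single gradient map for R
    # (this reduction is an assumption of the sketch).
    r_gray = r_map.mean(axis=0, keepdims=True)
    loss = 0.0
    for grad in (grad_h, grad_v):
        loss += np.mean(np.abs(grad(i_map)) * np.exp(-lam_g * np.abs(grad(r_gray))))
    return loss

def total_loss(l_recon, l_ir, l_is, lam_ir=0.001, lam_is=0.1):
    """Weighted sum of eq. (1); the default 0.001 / 0.1 weights follow the
    training settings given in the text."""
    return l_recon + lam_ir * l_ir + lam_is * l_is
```

With identical reflectivity maps the invariant loss vanishes, and with a constant illumination map the smoothness loss vanishes, matching the intent of eqs. (1) and (3).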
Using the method of this example, the network was trained on 1759 image pairs and tested on 100 images. The reflectivity does not need to be provided during training; reflectivity consistency and illumination-map smoothness are embedded into the network as loss functions. Thus the decomposition in this embodiment is learned automatically from paired low-light and normal-light images, and is naturally suited to describing the light variation between images under different lighting conditions.
During training, the batch size is set to 16 and the image patch size to 48 × 48. λ_ir, λ_is, and λ_g are set to 0.001, 0.1, and 10, respectively. λ_ij is set to 0.001 when i ≠ j and to 1 when i = j, and the network is trained for 100 iterations in total.
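Under the settings above (batch size 16, 48 × 48 patches, 100 iterations, stochastic gradient descent), a training loop can be sketched as follows. The placeholder model, the learning rate, and the simple L1 stand-in loss are assumptions of this sketch, not values given by the embodiment:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder model standing in for the RU-Net of this embodiment; any
# nn.Module mapping an image to an image works for this sketch.
model = nn.Conv2d(3, 3, 3, padding=1)
# Stochastic gradient descent as stated; the learning rate is an assumption.
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)

batch_size, patch = 16, 48        # settings stated in the text
for step in range(100):           # "the network is trained for 100 iterations"
    # Stand-ins for random 48x48 crops of paired low/normal-light images.
    low = torch.rand(batch_size, 3, patch, patch)
    normal = torch.rand(batch_size, 3, patch, patch)
    out = model(low)
    loss = (out - normal).abs().mean()  # stand-in for the full loss of eq. (1)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```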
As shown in fig. 6, decomposition results for a low-light image and a normal-light image and the corresponding brightness enhancement results are presented. Fig. 6(a), 6(b), and 6(c) are the images of the first group: fig. 6(a) shows the decomposed normal-light image, fig. 6(b) the low-light image decomposition result, and fig. 6(c) the low-light image enhancement result. Fig. 6(d), 6(e), and 6(f) are the images of the second group: fig. 6(d) shows the decomposed normal-light image, fig. 6(e) the low-light image decomposition result, and fig. 6(f) the low-light image enhancement result. The two groups of images in fig. 6 clearly show the enhancement effect achieved by the method of this embodiment.
Experimental results show that the method can satisfy human visual requirements and achieves strong generality, high accuracy, and high efficiency in image enhancement.
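The enhancement sub-network of steps S201 to S205 can likewise be sketched in PyTorch. The depth (two pooling stages), the channel width, and the single-channel illumination output are illustrative assumptions; only the 3×3 convolutions, 2×2 pooling, nearest-neighbor upsampling with element-wise skip addition, and the fusion stage follow the text:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EnhanceNet(nn.Module):
    """Sketch of the U-Net-style enhancement sub-network of steps S201-S205.
    Depth and channel counts are illustrative assumptions."""
    def __init__(self, channels=64):
        super().__init__()
        # S201: R (3 ch) and I (1 ch) are concatenated -> 4 input channels.
        self.conv0 = nn.Conv2d(4, channels, 3, padding=1)
        # S202: convolution followed by 2x2 pooling at each scale.
        self.down1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.down2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.up1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.up2 = nn.Conv2d(channels, channels, 3, padding=1)
        # S204: a final conv fuses the merged features into one map.
        self.fuse = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, r, i):
        x0 = self.conv0(torch.cat([r, i], dim=1))          # S201
        x1 = F.max_pool2d(torch.relu(self.down1(x0)), 2)   # S202
        x2 = F.max_pool2d(torch.relu(self.down2(x1)), 2)
        # S203: nearest-neighbour upsampling so the size matches the merged
        # feature map, then element-wise addition (U-Net-style skip).
        u1 = F.interpolate(x2, size=x1.shape[2:], mode="nearest")
        u1 = torch.relu(self.up1(u1)) + x1
        u2 = F.interpolate(u1, size=x0.shape[2:], mode="nearest")
        u2 = torch.relu(self.up2(u2)) + x0
        return torch.sigmoid(self.fuse(u2))  # enhanced illumination map
```

The output would then be recombined with the reflectivity to produce the enhanced image; that recombination step is not detailed in the text.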
As shown in fig. 3, the present embodiment also provides an image enhancement apparatus including:
a decomposition module 100 for decomposing the image by a convolutional neural network;
and an enhancement module 200, configured to perform enhancement processing on the decomposed image.
In some embodiments, the decomposition module comprises:
an input module for inputting an image pair to a convolutional neural network; the image pair comprises a low light image and a normal light image;
the extraction module is used for extracting shallow features from the input image through the convolution layer;
the mapping module is used for mapping the RGB image to reflectivity and illumination through the rectification linear unit;
a projection module for projecting the reflectivity and illumination from the feature space using the convolutional layer;
and the first fine tuning module is used for carrying out end-to-end fine tuning by utilizing the reflectivity loss function to obtain a decomposed image.
In some embodiments, the enhancement module comprises:
the connecting module is used for connecting the reflectivity and the illumination and then taking the connected reflectivity and illumination as input;
a convolution pooling module for convolving and pooling the input;
the adding module is used for combining the U-Net, increasing the image by utilizing a nearest neighbor interpolation method, ensuring that the size of the combined feature graph is consistent with that of the combined feature graph, and correspondingly adding the combined feature graph;
the fusion module is used for carrying out feature fusion to obtain a feature map with more complete details stored;
and the second fine tuning module is used for performing end-to-end fine tuning on the network by using random gradient descent to obtain an enhanced image.
The embodiment also provides an electronic device, which includes a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor executes the program to implement the image enhancement method.
The present embodiment also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image enhancement method described above.
It should be noted that:
the term "module" is not intended to be limited to a particular physical form. Depending on the particular application, a module may be implemented as hardware, firmware, software, and/or combinations thereof. Furthermore, different modules may share common components or even be implemented by the same component. There may or may not be clear boundaries between the various modules.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may be used with the teachings herein. The required structure for constructing such a device will be apparent from the description above. In addition, this application is not directed to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present application as described herein, and any descriptions of specific languages are provided above to disclose the best modes of the present application.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the application, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the application and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the present application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in the creation apparatus of a virtual machine according to embodiments of the present application. The present application may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present application may be stored on a computer readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least a portion of the steps in the flowcharts may comprise multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and which are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
The above-mentioned embodiments only express the embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.
Claims (10)
1. An image enhancement method, comprising:
decomposing the image through a convolutional neural network;
and performing enhancement processing on the decomposed image.
2. The method of claim 1, wherein decomposing the image through a convolutional neural network comprises:
inputting an image pair to a convolutional neural network; the image pair comprises a low light image and a normal light image;
extracting shallow features from the input image through the convolutional layer;
mapping the RGB image to reflectivity and illumination through rectified linear units;
projecting the reflectivity and the illumination from the feature space using the convolutional layer;
and performing end-to-end fine-tuning using the reflectivity loss function to obtain the decomposed image.
3. The method according to claim 2, wherein the enhancement processing of the decomposed image comprises:
concatenating the reflectivity and the illumination as input;
convolving and pooling the input;
combining with U-Net, upsampling the image via nearest-neighbor interpolation so that its size matches the merged feature map, and adding them element-wise;
carrying out feature fusion to obtain a feature map that preserves more complete detail;
and performing end-to-end fine-tuning of the network using stochastic gradient descent to obtain the enhanced image.
4. A method according to claim 3, characterized in that, in the method, the loss function is obtained as a weighted sum of a reconstruction loss, an invariant reflectivity loss, and an illumination smoothness loss.
5. A method according to claim 4, characterized in that, in the method, the loss function L is composed of the reconstruction loss L_recon, the invariant reflectivity loss L_ir, and the illumination smoothness loss L_is, as follows:
L = L_recon + λ_ir·L_ir + λ_is·L_is,
where λ_ir and λ_is are coefficients weighting reflectivity consistency and illumination smoothness, respectively.
6. An image enhancement apparatus, comprising:
the decomposition module is used for decomposing the image through a convolutional neural network;
and the enhancement module is used for enhancing the decomposed image.
7. The apparatus of claim 6, wherein the decomposition module comprises:
an input module for inputting an image pair to a convolutional neural network; the image pair comprises a low light image and a normal light image;
the extraction module is used for extracting shallow features from the input image through the convolution layer;
the mapping module is used for mapping the RGB image to reflectivity and illumination through rectified linear units;
a projection module for projecting the reflectivity and illumination from the feature space using the convolutional layer;
and the first fine-tuning module is used for performing end-to-end fine-tuning with the reflectivity loss function to obtain the decomposed image.
8. The apparatus of claim 6, wherein the enhancement module comprises:
the connecting module is used for concatenating the reflectivity and the illumination as input;
a convolution pooling module for convolving and pooling the input;
the adding module is used for, in combination with U-Net, upsampling the image by nearest-neighbor interpolation, ensuring that its size is consistent with that of the combined feature map, and adding them correspondingly;
the fusion module is used for performing feature fusion to obtain a feature map that preserves more complete details;
and the second fine-tuning module is used for performing end-to-end fine-tuning of the network by stochastic gradient descent to obtain an enhanced image.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the image enhancement method of any one of claims 1-5.
10. A computer-readable storage medium, on which a computer program is stored, which program is executable by a processor for implementing the image enhancement method as claimed in any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010029973.3A CN111192224B (en) | 2020-01-13 | 2020-01-13 | Image enhancement method, image enhancement device, electronic equipment and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111192224A true CN111192224A (en) | 2020-05-22 |
CN111192224B CN111192224B (en) | 2024-03-19 |
Family
ID=70708067
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010029973.3A Active CN111192224B (en) | 2020-01-13 | 2020-01-13 | Image enhancement method, image enhancement device, electronic equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111192224B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108230277A (en) * | 2018-02-09 | 2018-06-29 | 中国人民解放军战略支援部队信息工程大学 | A dual-energy CT image decomposition method based on convolutional neural networks
CN108961237A (en) * | 2018-06-28 | 2018-12-07 | 安徽工程大学 | A low-dose CT image decomposition method based on convolutional neural networks
CN109658354A (en) * | 2018-12-20 | 2019-04-19 | 上海联影医疗科技有限公司 | An image enhancement method and system
CN110232661A (en) * | 2019-05-03 | 2019-09-13 | 天津大学 | A low-illumination color image enhancement method based on Retinex and convolutional neural networks
CN110378845A (en) * | 2019-06-17 | 2019-10-25 | 杭州电子科技大学 | An image inpainting method under extreme conditions based on convolutional neural networks
RU2706891C1 (en) * | 2019-06-06 | 2019-11-21 | Samsung Electronics Co., Ltd. | Method of generating a common loss function for training a convolutional neural network for converting an image into an image with drawn parts and a system for converting an image into an image with drawn parts
CN110675336A (en) * | 2019-08-29 | 2020-01-10 | 苏州千视通视觉科技股份有限公司 | A low-illumination image enhancement method and device
- 2020-01-13: Application CN202010029973.3A filed (CN); granted as patent CN111192224B, legal status Active
Also Published As
Publication number | Publication date |
---|---|
CN111192224B (en) | 2024-03-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Yang et al. | Image correction via deep reciprocating HDR transformation | |
Ancuti et al. | Single-scale fusion: An effective approach to merging images | |
CN110675336A (en) | Low-illumination image enhancement method and device | |
Liu et al. | Graph-based joint dequantization and contrast enhancement of poorly lit JPEG images | |
US8811764B1 (en) | System and method for scene dependent multi-band blending | |
Liu et al. | Survey of natural image enhancement techniques: Classification, evaluation, challenges, and perspectives | |
Zhou et al. | Udc 2020 challenge on image restoration of under-display camera: Methods and results | |
CN114936979B (en) | Model training method, image denoising method, device, equipment and storage medium | |
CN111681180A (en) | Priori-driven deep learning image defogging method | |
US20030138161A1 (en) | Method and apparatus for enhancing an image using a wavelet-based retinex algorithm | |
Shi et al. | A joint deep neural networks-based method for single nighttime rainy image enhancement | |
CN111353955A (en) | Image processing method, device, equipment and storage medium | |
CN114219722A (en) | Low-illumination image enhancement method by utilizing time-frequency domain hierarchical processing | |
CN115457249A (en) | Method and system for fusing and matching infrared image and visible light image | |
Liu et al. | Deep image inpainting with enhanced normalization and contextual attention | |
CN116012243A (en) | Real scene-oriented dim light image enhancement denoising method, system and storage medium | |
Wang et al. | Single Underwater Image Enhancement Based on L_P-Norm Decomposition | |
CN117152182B (en) | Ultralow-illumination network camera image processing method and device and electronic equipment | |
Meylan | Tone mapping for high dynamic range images | |
Peng et al. | Real-time video dehazing via incremental transmission learning and spatial-temporally coherent regularization | |
CN111192224A (en) | Image enhancement method and device, electronic equipment and computer readable storage medium | |
CN115713585B (en) | Texture image reconstruction method, apparatus, computer device and storage medium | |
CN116645305A (en) | Low-light image enhancement method based on multi-attention mechanism and Retinex | |
CN110415188A (en) | A kind of HDR image tone mapping method based on Multiscale Morphological | |
Xu et al. | Multi-Exposure Image Fusion Algorithm Based on Improved Weight Function |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||