CN110322404A - Image enhancement method and system - Google Patents
Image enhancement method and system
- Publication number
- CN110322404A CN110322404A CN201910599238.3A CN201910599238A CN110322404A CN 110322404 A CN110322404 A CN 110322404A CN 201910599238 A CN201910599238 A CN 201910599238A CN 110322404 A CN110322404 A CN 110322404A
- Authority
- CN
- China
- Prior art keywords
- resolution
- super
- module
- wavelet transform
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/40—Scaling the whole image or part thereof
- G06T3/4053—Super resolution, i.e. output image resolution higher than sensor resolution
-
- G06T5/77—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20048—Transform domain processing
- G06T2207/20064—Wavelet transform [DWT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses an image enhancement method and system. The method comprises: performing super-resolution on an input image I to obtain an image I', then applying a stationary wavelet transform to I' to obtain the low-pass subband L; applying a stationary wavelet transform to the input image I to obtain the detail subbands Hh, Hv and Hd, enhancing each detail subband separately, and then performing super-resolution on them; and applying the inverse stationary wavelet transform to the low-pass subband L and the super-resolved detail subbands Hh, Hv and Hd to obtain the enhanced image Î. The system comprises: a first super-resolution module, a first stationary wavelet transform module, a second stationary wavelet transform module, a detail-subband enhancement module, a second super-resolution module and an inverse stationary wavelet transform module. With the image enhancement method and system based on the improved FSRCNN of the invention, the enhanced image has clear details, higher contrast, good visual effect, better objective evaluation indices and good real-time performance.
Description
Technical field
The present invention relates to the field of image enhancement technology, and in particular to an image enhancement method and system.
Background art
The purpose of image enhancement is to improve the visual effect of an image, so in a broad sense image super-resolution can also be counted as an image enhancement technique. Owing to the characteristics of infrared imaging, infrared images generally suffer from low contrast, blurred edges and inconspicuous details, and image enhancement is needed to improve their quality.
At present, two main classes of infrared image enhancement methods are in use: image enhancement methods based on the spatial domain and image enhancement methods based on the transform domain.
Spatial-domain enhancement algorithms operate directly on the pixels of the image, essentially as gray-level mapping transformations whose form depends on the purpose of the enhancement. They include gray-level transformations, histogram equalization (HE), smoothing and sharpening. Among these, histogram equalization is widely used because of its low computational complexity and effective contrast enhancement. However, it easily causes over-enhancement, loss of detail information and amplification of image noise, especially for infrared images. To overcome these drawbacks, researchers have proposed a variety of improved algorithms based on histogram equalization. Brightness-preserving Bi-Histogram Equalization (BBHE) divides the input image into two sub-images according to its mean gray level and equalizes the two sub-histograms separately, preserving brightness before and after enhancement to some extent. Dualistic Sub-Image Histogram Equalization (DSIHE) uses the same strategy as BBHE but selects the median intensity as the separation threshold; it improves global contrast and, compared with BBHE, better preserves the average information of the input image. Plateau Histogram Equalization (PE) reduces the noise amplification inherent in the histogram equalization process, which is especially pronounced for infrared images. Contrast-Limited Adaptive Histogram Equalization (CLAHE) divides the input image into multiple non-overlapping sub-blocks, clips and redistributes the histogram of each sub-block to limit the contrast, and introduces bilinear interpolation to prevent blocking artifacts, but the algorithm is computationally complex. To reduce this complexity, an optimized algorithm based on sub-block-overlapped local histogram equalization has also been proposed.
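As a concrete illustration of the baseline discussed above, the following is a minimal global histogram equalization in pure Python; the function name and toy image are illustrative and not taken from the patent.

```python
# Global histogram equalization (HE): map each gray level through the
# normalized cumulative histogram so the output levels spread over the
# full 8-bit range.

def equalize_histogram(img, levels=256):
    flat = [p for row in img for p in row]
    n = len(flat)
    # Histogram of gray levels.
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # Classic HE mapping: spread the CDF over the full gray range.
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for c in cdf]
    return [[lut[p] for p in row] for row in img]

# A low-contrast 2x4 toy image confined to levels 100..103.
img = [[100, 100, 101, 101],
       [102, 102, 103, 103]]
out = equalize_histogram(img)
```

After equalization the four occupied levels are spread to 0, 85, 170 and 255, which is exactly the over-stretching behavior that motivates the clipped (CLAHE) and plateau variants described above.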
Transform-domain enhancement methods first transform the image from image space into another space, carry out the enhancement there by exploiting the special properties of that space, and finally transform back into the original image space to obtain the enhanced image. Common transforms include the wavelet transform, the Fourier transform and the discrete cosine transform. The wavelet transform is a relatively recently developed mathematical tool and a multiresolution analysis method; it has good frequency-domain and time-domain characteristics and makes up for the inability of the Fourier transform to describe frequency content that varies over time. Wavelets therefore handle the smooth regions of an image well while also representing the edges of the image signal effectively, which has made applications of the wavelet transform increasingly common. In recent years researchers have proposed a series of wavelet-based image enhancement algorithms that achieve good results in the field of image processing.
One approach uses the wavelet transform to enhance the contrast and peak signal-to-noise ratio of infrared images: the wavelet transform first splits the image into high-frequency and low-frequency bands; in the high-frequency band a threshold filter removes noise while emphasizing details, and in the low-frequency band a singular value decomposition procedure enhances contrast and image quality; the final enhanced image is obtained by the inverse wavelet transform and wavelet reconstruction. Another proposal is an infrared-image noise suppression method based on the stationary multiwavelet transform, which combines the advantages of multiwavelets and the stationary wavelet transform for denoising and achieves good noise suppression.
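The stationary (undecimated) wavelet decomposition that recurs throughout this patent can be sketched for the simplest case, a single Haar level. The subband naming Hh/Hv/Hd follows the patent; the specific filters and the horizontal/vertical assignment below are an illustrative convention, not the patent's choice.

```python
import numpy as np

def haar_swt2_level1(img):
    """One level of an undecimated (stationary) 2-D Haar transform:
    returns the low-pass subband L and detail subbands Hh, Hv, Hd,
    each the same size as the input (no downsampling)."""
    x = np.asarray(img, dtype=float)
    # Periodic one-pixel shifts implement the length-2 Haar filters.
    lo = (x + np.roll(x, -1, axis=1)) / 2.0   # low-pass along rows
    hi = (x - np.roll(x, -1, axis=1)) / 2.0   # high-pass along rows
    L  = (lo + np.roll(lo, -1, axis=0)) / 2.0
    Hh = (lo - np.roll(lo, -1, axis=0)) / 2.0  # one detail orientation
    Hv = (hi + np.roll(hi, -1, axis=0)) / 2.0  # another orientation
    Hd = (hi - np.roll(hi, -1, axis=0)) / 2.0  # diagonal details
    # For this variant, L + Hh + Hv + Hd reconstructs x exactly.
    return L, Hh, Hv, Hd

x = np.arange(16.0).reshape(4, 4)
L, Hh, Hv, Hd = haar_swt2_level1(x)
```

Because no downsampling occurs, every subband has the same size as the input, which is what makes the per-subband enhancement and super-resolution steps of the patent straightforward to compose.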
Beyond these, several emerging image enhancement methods have been proposed in succession and have achieved good results, such as multiscale analysis methods, enhancement methods based on fuzzy theory, and enhancement methods based on the human visual system. Their processing results are generally better than those of spatial-domain and frequency-domain methods, but these algorithms also have certain drawbacks: their space and time complexity is high, and they cannot meet the real-time requirements of a system.
Summary of the invention
In view of the above problems of the prior art, the present invention proposes an image enhancement method and system whose enhanced images have clear details, higher contrast, good visual effect, better objective evaluation indices and good real-time performance, constituting an effective and feasible image enhancement technique.
In order to solve the above technical problems, the present invention is achieved through the following technical solutions:
The present invention provides an image enhancement method, comprising:
S11: performing super-resolution on an input image I to obtain an image I', then applying a stationary wavelet transform to I' to obtain the low-pass subband L;
S12: applying a stationary wavelet transform to the input image I to obtain the detail subbands Hh, Hv and Hd, enhancing each detail subband separately, and then performing super-resolution on the enhanced detail subbands Hh, Hv and Hd;
S13: applying the inverse stationary wavelet transform to the low-pass subband L and the super-resolved detail subbands Hh, Hv and Hd to obtain the enhanced image Î.
S11 and S12 are not ordered: S11 may be performed first, S12 may be performed first, or both may be performed simultaneously.
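The three-step flow S11-S13 can be sketched end to end. In the sketch below, `super_resolve` and `enhance` are deliberately trivial stand-ins (nearest-neighbour 2x upscaling and a constant gain) for the FSRCNN-based super-resolution and the CLAHE/nonlinear detail enhancement; the Haar variant used has the property that the subband sum is its own inverse transform.

```python
import numpy as np

def super_resolve(x, scale=2):
    # Placeholder for FSRCNN-based super-resolution.
    return np.kron(x, np.ones((scale, scale)))

def enhance(sub, gain=1.5):
    # Placeholder for the contrast + nonlinear detail enhancement.
    return gain * sub

def swt1(x):
    """One stationary Haar level -> (L, Hh, Hv, Hd); the subband sum
    reconstructs x exactly, serving as the inverse transform here."""
    lo = (x + np.roll(x, -1, 1)) / 2
    hi = (x - np.roll(x, -1, 1)) / 2
    return ((lo + np.roll(lo, -1, 0)) / 2, (lo - np.roll(lo, -1, 0)) / 2,
            (hi + np.roll(hi, -1, 0)) / 2, (hi - np.roll(hi, -1, 0)) / 2)

def enhance_image(I):
    # S11: super-resolve I -> I', then keep only the low-pass subband L.
    L, _, _, _ = swt1(super_resolve(I))
    # S12: decompose I, enhance each detail subband, then super-resolve.
    _, Hh, Hv, Hd = swt1(I)
    Hh, Hv, Hd = (super_resolve(enhance(s)) for s in (Hh, Hv, Hd))
    # S13: inverse SWT (for this Haar variant, the plain subband sum).
    return L + Hh + Hv + Hd

I = np.arange(16.0).reshape(4, 4)
out = enhance_image(I)
```

Note how S11 and S12 operate on independent inputs, which is exactly why the text allows them to run in either order or simultaneously.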
Preferably, the super-resolution in S11 and/or the super-resolution in S12 is super-resolution based on an FSRCNN deep convolutional network.
Preferably, S11 specifically includes:
S111: expanding the input image I to the target resolution, denoted Y;
S112: using the convolution kernels of the first convolutional layer to extract patches from Y and convert them into a set of high-dimensional vectors, each dimension representing a feature map;
S113: mapping the high-dimensional vectors obtained in S112 through the second convolutional layer to another set of high-dimensional vectors with high-resolution image features, obtaining the feature vectors corresponding to high-resolution patches;
S114: synthesizing the high-dimensional vectors obtained in S113 with the third convolutional layer to generate a high-resolution image I', and computing the loss function.
Preferably, the input image I is expanded to the target resolution in S111 by bicubic interpolation.
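Bicubic expansion is separable cubic convolution interpolation applied along rows and then columns. The sketch below shows the 1-D building block using Keys' kernel with a = -0.5, the common default; the patent does not specify its kernel, so this choice is an assumption.

```python
def cubic_kernel(x, a=-0.5):
    """Keys' cubic convolution kernel: weight for a sample at distance x."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def interp1d_cubic(samples, t):
    """Interpolate a 1-D signal at fractional position t: a weighted sum
    of the 4 nearest samples (bicubic applies this along each axis)."""
    i = int(t)
    acc = 0.0
    for k in range(-1, 3):
        j = min(max(i + k, 0), len(samples) - 1)  # clamp at the borders
        acc += samples[j] * cubic_kernel(t - (i + k))
    return acc
```

The kernel is 1 at distance 0 and 0 at integer distances, so existing samples are reproduced exactly, and on linear data the interpolant is exact, which is why bicubic expansion keeps edges smoother than nearest-neighbour upscaling.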
Preferably, the number of convolution kernels in the first convolutional layer is 64, in the second convolutional layer 32, and in the third convolutional layer 1; alternatively, the first convolutional layer has 128 kernels, the second convolutional layer 64, and the third convolutional layer 1.
Preferably, the loss function in S114 is the mean squared error.
Preferably, the first convolutional layer corresponds to the formula:
F1(Y) = max(0, W1*Y + B1);
the second convolutional layer corresponds to the formula:
F2(Y) = max(0, W2*F1(Y) + B2);
and the third convolutional layer corresponds to the formula:
F3(Y) = W3*F2(Y) + B3;
where W1 is the convolution kernel of the first convolutional layer and B1 its bias, W2 is the convolution kernel of the second convolutional layer and B2 its bias, and W3 is the convolution kernel of the third convolutional layer and B3 its bias.
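The three layer formulas can be exercised numerically with a toy forward pass. Tiny 3 x 3 kernels and 4/3 channels stand in for the 9 x 9 kernels and 64/32 channels of the text; all names and sizes here are illustrative, not the patent's.

```python
import numpy as np

def conv2d_same(x, k):
    """Zero-padded 2-D correlation of a single channel."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def forward(Y, W1, B1, W2, B2, W3, B3):
    # F1(Y) = max(0, W1*Y + B1): n1 feature maps from the input.
    F1 = np.maximum(0.0, np.stack([conv2d_same(Y, w) for w in W1])
                    + B1[:, None, None])
    # F2 = max(0, W2*F1 + B2): nonlinear mapping to n2 HR feature maps.
    F2 = np.maximum(0.0, np.stack(
        [sum(conv2d_same(F1[c], w[c]) for c in range(len(F1))) for w in W2]
    ) + B2[:, None, None])
    # F3 = W3*F2 + B3: linear reconstruction to a single output image.
    return sum(conv2d_same(F2[c], W3[c]) for c in range(len(F2))) + B3

rng = np.random.default_rng(0)
n1, n2 = 4, 3                       # stand-ins for 64 / 32 kernels
Y = rng.random((8, 8))
W1 = rng.standard_normal((n1, 3, 3)) * 0.1
W2 = rng.standard_normal((n2, n1, 3, 3)) * 0.1
W3 = rng.standard_normal((n2, 3, 3)) * 0.1
B1, B2 = np.zeros(n1), np.zeros(n2)
out = forward(Y, W1, B1, W2, B2, W3, 0.0)
```

The output has the same spatial size as Y, matching the design in which Y is already expanded to the target resolution before the network is applied.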
Preferably, enhancing the detail subbands Hh, Hv and Hd separately in S12 includes: first applying contrast enhancement to each of Hh, Hv and Hd, and then applying nonlinear enhancement.
Preferably, the contrast enhancement in S12 is CLAHE contrast enhancement, with formulas as follows:
where v = 2.5s/(tM), s is a coefficient amplitude in the transform domain, M is the largest coefficient amplitude, tM is the threshold above which coefficients are linearly amplified, and the parameters b, c and t are enhancement parameters that adjust, respectively, the height of the maximum-gain point, its position, and the global slope of the curve.
Preferably, the specific formula for the nonlinear enhancement in S12 is as follows:
where v = 2.5s/(tM), s is a coefficient amplitude in the transform domain, M is the largest coefficient amplitude, tM is the threshold above which coefficients are linearly amplified, and the parameters b, c and t are enhancement parameters that adjust, respectively, the height of the maximum-gain point, its position, and the global slope of the curve.
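The nonlinear-enhancement formula itself appears only as an image in this rendering and is not reproduced above. The sketch below is a classical sigmoid-difference wavelet gain of the family the description suggests, using only the quantities the text defines (v = 2.5s/(tM)); the exact roles assigned to b and c are our reading of the description, not the patent's formula.

```python
import math

def sigm(u):
    return 1.0 / (1.0 + math.exp(-u))

def nonlinear_gain(s, M, b=0.35, c=20.0, t=0.1):
    """Hypothetical nonlinear enhancement of a detail coefficient s:
    b positions the maximum-gain point, c sets the slope, and t fixes
    the threshold tM referenced in the description."""
    v = 2.5 * s / (t * M)
    v = max(-1.0, min(1.0, v))     # work on a normalized coefficient
    # Normalize so the largest coefficients map to +/-1.
    a = 1.0 / (sigm(c * (1 - b)) - sigm(-c * (1 + b)))
    return a * (sigm(c * (v - b)) - sigm(-c * (v + b)))
```

The function is odd (edges keep their sign), maps zero to zero, and boosts small and mid-range coefficients relative to their size, which is the qualitative behavior the description attributes to the enhancement curve.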
The present invention also provides an image enhancement system, comprising: a first super-resolution module, a first stationary wavelet transform module, a second stationary wavelet transform module, a detail-subband enhancement module, a second super-resolution module and an inverse stationary wavelet transform module; wherein
the first super-resolution module is connected to the first stationary wavelet transform module; the first super-resolution module performs super-resolution on the input image I to obtain an image I', and the first stationary wavelet transform module applies a stationary wavelet transform to the image I' obtained by the first super-resolution module to obtain the low-pass subband L;
the second stationary wavelet transform module is connected to the detail-subband enhancement module, and the second super-resolution module is connected to the detail-subband enhancement module; the second stationary wavelet transform module applies a stationary wavelet transform to the input image I to obtain the detail subbands Hh, Hv and Hd, the detail-subband enhancement module enhances each of the detail subbands obtained by the second stationary wavelet transform module, and the second super-resolution module performs super-resolution on the enhanced detail subbands Hh, Hv and Hd;
the inverse stationary wavelet transform module is connected to the first stationary wavelet transform module and the second super-resolution module respectively; it applies the inverse stationary wavelet transform to the low-pass subband L obtained by the first stationary wavelet transform module and the super-resolved detail subbands Hh, Hv and Hd obtained by the second super-resolution module, obtaining the enhanced image Î.
Compared with the prior art, the present invention has the following advantages:
(1) by performing super-resolution on the detail subbands obtained from the stationary wavelet decomposition, the image enhancement method and system of the invention effectively enhance the detail information of the image;
(2) the stationary wavelet transform performs a multiscale, multidirectional decomposition of the image and represents its detail information well, which favors the detail enhancement of infrared images;
(3) for the enhancement of noisy infrared images, the enhanced images have clear details, higher contrast, good visual effect, better objective evaluation indices and good real-time performance, making this an effective and feasible infrared image enhancement technology.
Of course, a product implementing the present invention does not necessarily need to achieve all of the above advantages at the same time.
Brief description of the drawings
Embodiments of the present invention are described further below with reference to the accompanying drawings:
Fig. 1 is a flowchart of the image enhancement method of the embodiment of the present invention;
Fig. 2 is the source image used as the input image in the embodiment of the present invention;
Fig. 3a is the image obtained after enhancing the source image with the existing Bicubic algorithm;
Fig. 3b is the image obtained after enhancing the source image with the existing SRCNN algorithm;
Fig. 3c is the image obtained after enhancing the source image with the existing FSRCNN algorithm;
Fig. 3d is the image obtained after enhancing the source image with the existing VDSR algorithm;
Fig. 3e is the image obtained after enhancing the source image with the algorithm of the embodiment of the present invention;
Fig. 4 is a structural diagram of the image enhancement system of the embodiment of the present invention.
Reference numerals: 1 - first super-resolution module; 2 - first stationary wavelet transform module; 3 - second stationary wavelet transform module; 4 - detail-subband enhancement module; 5 - second super-resolution module; 6 - inverse stationary wavelet transform module.
Detailed description of the embodiments
The embodiments of the present invention are described in detail below. Each embodiment is implemented on the premise of the technical solution of the invention, and detailed implementation methods and concrete operating processes are given, but the scope of protection of the invention is not limited to the following embodiments.
With reference to Fig. 1, this embodiment describes in detail the image enhancement algorithm of the invention based on the improved FSRCNN. As shown in Fig. 1, it comprises:
S11: performing super-resolution on the input image I to obtain an image I', then applying a stationary wavelet transform to I' to obtain the low-pass subband L;
S12: applying a stationary wavelet transform to the input image I to obtain the detail subbands Hh, Hv and Hd, enhancing each of them separately, and then performing super-resolution on the enhanced detail subbands to obtain high-resolution detail subbands;
S13: applying the inverse stationary wavelet transform to the low-pass subband L obtained in S11 and the super-resolved detail subbands Hh, Hv and Hd obtained in S12 to obtain the enhanced image Î.
S11 and S12 are not ordered: either may be performed first, or both may be performed simultaneously.
In the image enhancement method of the above embodiment, performing super-resolution on the detail subbands produced by the stationary wavelet decomposition effectively enhances the detail information of the image; the stationary wavelet transform performs a multiscale, multidirectional decomposition of the image and represents its detail information well, which favors the detail enhancement of infrared images.
In a preferred embodiment, the super-resolution in S11 and in S12 is super-resolution based on an improved FSRCNN deep convolutional network: the low-resolution image is first expanded to high resolution and then fed into the FSRCNN deep convolutional network.
Specifically, the super-resolution in S11 includes:
S111: expanding the input image I to the target resolution, denoted Y;
S112: using the convolution kernels of the first convolutional layer to extract patches from Y and convert them into a set of high-dimensional vectors, each dimension representing a feature map;
S113: mapping the high-dimensional vectors obtained in S112 through the second convolutional layer to another set of high-dimensional vectors with high-resolution image features, obtaining the feature vectors corresponding to high-resolution patches;
S114: synthesizing the high-dimensional vectors obtained in S113 with the third convolutional layer to generate a high-resolution image I', and computing the loss function.
Specifically, the super-resolution in S12 includes: applying multiple convolutions and a deconvolution to the low-resolution image to obtain a high-resolution image.
In a preferred embodiment, the input image I is expanded to the target resolution in S111 by bicubic interpolation.
In a preferred embodiment, the first convolutional layer has 64 (or 128) convolution kernels of size 9 × 9; the second convolutional layer has 32 (or 64) kernels of size 1 × 1 (or 5 × 5, 9 × 9); the third convolutional layer is the reconstruction layer, with 1 kernel. During training, the high-resolution image is first downsampled to obtain a low-resolution image, which is then restored to full size by bicubic interpolation and used as the network input, while the original image is used to compute the loss function.
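The training-pair construction just described can be sketched as follows; block averaging and nearest-neighbour expansion stand in for the actual downsampling and bicubic interpolation, and all names are illustrative.

```python
import numpy as np

def make_training_pair(hr, scale=2):
    """Downsample the HR image by block averaging, then expand back to
    full size to form the network input Y; the original image `hr`
    serves as the regression target X."""
    h, w = hr.shape
    lr = hr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    Y = np.kron(lr, np.ones((scale, scale)))   # restore original size
    return Y, hr

hr = np.arange(16.0).reshape(4, 4)
Y, X = make_training_pair(hr)
mse = np.mean((Y - X) ** 2)   # the training loss the network minimizes
```

The nonzero MSE between Y and X is exactly the detail lost by downsampling, which is what the network is trained to restore.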
In a preferred embodiment, the first convolutional layer corresponds to the formula:
F1(Y) = max(0, W1*Y + B1);
the second convolutional layer corresponds to the formula:
F2(Y) = max(0, W2*F1(Y) + B2);
and the third convolutional layer corresponds to the formula:
F3(Y) = W3*F2(Y) + B3;
where W1 is the convolution kernel of the first convolutional layer and B1 its bias, W2 is the convolution kernel of the second convolutional layer and B2 its bias, and W3 is the convolution kernel of the third convolutional layer and B3 its bias.
In a preferred embodiment, the loss function in S114 is the mean squared error over the training set, L = (1/n) Σi ||F3(Yi) − Xi||², where X denotes the standard reference high-resolution image.
In a preferred embodiment, enhancing the detail subbands Hh, Hv and Hd separately in S12 includes: first applying contrast enhancement to each of Hh, Hv and Hd, and then applying nonlinear enhancement.
In a preferred embodiment, the contrast enhancement in S12 is CLAHE contrast enhancement, with formulas as follows:
where v = 2.5s/(tM), s is a coefficient amplitude in the transform domain, M is the largest coefficient amplitude, tM is the threshold above which coefficients are linearly amplified, and the parameters b, c and t adjust, respectively, the height of the maximum-gain point, its position, and the global slope of the curve.
In a preferred embodiment, the specific formula for the nonlinear enhancement in S12 is as follows:
where v = 2.5s/(tM), s is a coefficient amplitude in the transform domain, M is the largest coefficient amplitude, tM is the threshold above which coefficients are linearly amplified, and the parameters b, c and t adjust, respectively, the height of the maximum-gain point, its position, and the global slope of the curve.
The embodiment of the present invention also provides an image enhancement system, as shown in Fig. 4, comprising: an interconnected first super-resolution module 1 and first stationary wavelet transform module 2; a sequentially connected second stationary wavelet transform module 3, detail-subband enhancement module 4 and second super-resolution module 5; and an inverse stationary wavelet transform module 6 connected to the first stationary wavelet transform module 2 and the second super-resolution module 5 respectively. The first super-resolution module 1 performs super-resolution on the input image I to obtain an image I'; the first stationary wavelet transform module 2 applies a stationary wavelet transform to the image I' obtained by the first super-resolution module to obtain the low-pass subband L; the second stationary wavelet transform module 3 applies a stationary wavelet transform to the input image I to obtain the detail subbands Hh, Hv and Hd; the detail-subband enhancement module 4 enhances each of the detail subbands obtained by the second stationary wavelet transform module 3; the second super-resolution module 5 performs super-resolution on the enhanced detail subbands Hh, Hv and Hd; and the inverse stationary wavelet transform module 6 applies the inverse stationary wavelet transform to the low-pass subband L obtained by the first stationary wavelet transform module 2 and the super-resolved detail subbands Hh, Hv and Hd obtained by the second super-resolution module 5, obtaining the enhanced image Î.
Each module in this system embodiment can be implemented with the techniques of the corresponding steps of the image enhancement method above, and the details are not repeated here.
Further, other embodiments of the present invention may also provide a computer comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, the processor executing the image enhancement method of the above embodiments when running the program.
The image enhancement effect of the embodiment of the present invention can be illustrated by simulation experiments:
The source image of Fig. 2 was processed with the existing methods and with the method of the embodiment of the present invention, producing the enhanced images shown in Figs. 3a-3e. Fig. 3a is the enhancement result of the Bicubic method, Fig. 3b of the SRCNN method, Fig. 3c of the FSRCNN method, Fig. 3d of the VDSR method, and Fig. 3e of the method of the embodiment of the present invention. As the results in Fig. 3 show, the enhanced image of the embodiment of the present invention not only has improved overall contrast, but the edges and surface texture of the target object are also very clear.
In addition, to better illustrate the superiority and advancement of the present invention, the objective quality of the enhancement results obtained with the embodiment of the present invention and with the other methods was evaluated on 6 commonly used typical images. The two objective evaluation indices are PSNR and SSIM; larger PSNR and SSIM values indicate better enhanced image quality. The objective indices of the experimental images are shown in Table 1.
Table 1
As can be seen from Table 1, both objective indices of the enhancement results of the embodiment of the present invention are superior to those of the other methods, so the present invention can effectively improve the clarity and detail information of the image.
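Of the two reported indices, PSNR is directly computable from the definition; a minimal sketch follows (SSIM, the other index, requires local luminance/contrast/structure statistics and is omitted here).

```python
import numpy as np

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB; larger values mean the
    enhanced image is closer to the reference."""
    mse = np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4))
noisy = ref + 1.0          # a uniform error of one gray level
val = psnr(ref, noisy)     # mse = 1 -> 10*log10(255^2) ≈ 48.13 dB
```

This is the sense in which "larger is better" in Table 1: every halving of the MSE adds about 3 dB of PSNR.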
In summary, the enhanced images obtained with the image enhancement method and system proposed by the embodiment of the present invention have good visual effect, rich detail information, high contrast and high efficiency.
It should be noted that the steps of the method provided by the invention can be implemented with the corresponding modules, devices and units of the system, and those skilled in the art may refer to the technical solution of the system to realize the step flow of the method; that is, the embodiments of the system can be regarded as preferred examples of realizing the method, which will not be described further here.
Those skilled in the art will appreciate that, in addition to realizing the system provided by the invention and its devices as pure computer-readable program code, the same functions can be achieved by logically programming the method steps so that the system and its devices take the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers (PLC), embedded microcontrollers and the like. The system provided by the invention and its devices can therefore be considered a kind of hardware component, and the devices included in it for realizing various functions can be regarded as structures within the hardware component; the devices for realizing various functions can also be regarded both as software modules implementing the method and as structures within the hardware component.
The foregoing describes only preferred embodiments of the present invention. These embodiments are chosen and specifically described in this specification in order to better explain the principles and practical application of the present invention, and are not limitations of the invention. Any modifications and variations made by those skilled in the art within the scope of the specification shall fall within the protection scope of the present invention.
Claims (10)
1. An image enhancement method, characterized by comprising:
S11: performing super-resolution on an input image I to obtain an image I', and then performing a stationary wavelet transform on the image I' to obtain a low-pass subband L;
S12: performing a stationary wavelet transform on the input image I to obtain detail subbands Hh, Hv and Hd, enhancing the detail subbands Hh, Hv and Hd respectively, and then performing super-resolution on the enhanced detail subbands Hh, Hv and Hd;
S13: performing an inverse stationary wavelet transform on the low-pass subband L and the super-resolved detail subbands Hh, Hv and Hd to obtain the enhanced image;
wherein S11 and S12 may be performed in either order.
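The flow of steps S11-S13 can be sketched with PyWavelets' stationary wavelet transform. This is a minimal sketch, with hypothetical placeholders for the two learned stages: nearest-neighbour upscaling stands in for the super-resolution of the later claims, and a constant gain stands in for the detail-subband enhancement.

```python
import numpy as np
import pywt

def super_resolve(img, scale=2):
    # Hypothetical placeholder for the FSRCNN super-resolution of
    # claims 2-6: simple nearest-neighbour upscaling.
    return np.kron(img, np.ones((scale, scale)))

def enhance_details(c, gain=1.5):
    # Hypothetical placeholder for the contrast / nonlinear
    # enhancement of claims 7-9: a constant gain.
    return gain * c

def enhance_image(I):
    # S11: super-resolve, then SWT -> low-pass subband L
    I_sr = super_resolve(I)
    (L, (_, _, _)), = pywt.swt2(I_sr, 'haar', level=1)
    # S12: SWT on the input -> detail subbands, enhance, super-resolve
    (_, (Hh, Hv, Hd)), = pywt.swt2(I, 'haar', level=1)
    Hh, Hv, Hd = (super_resolve(enhance_details(c)) for c in (Hh, Hv, Hd))
    # S13: inverse SWT of L with the enhanced, super-resolved subbands
    return pywt.iswt2([(L, (Hh, Hv, Hd))], 'haar')
```

Note that after S11 the low-pass subband L and the super-resolved detail subbands have matching (target) resolution, which is what allows them to be recombined by the inverse transform in S13.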
2. The image enhancement method according to claim 1, characterized in that the super-resolution in S11 and/or the super-resolution in S12 is super-resolution based on the FSRCNN deep convolutional network.
3. The image enhancement method according to claim 2, characterized in that the super-resolution in S11 specifically comprises:
S111: expanding the input image I to the target resolution, denoted Y;
S112: extracting blocks in Y using the convolution kernels of the first convolutional layer and converting them into a set of high-dimensional vectors, each dimension representing one feature map;
S113: mapping the high-dimensional vectors obtained in S112, through the second convolutional layer, to high-dimensional vectors carrying high-resolution image features, obtaining the feature vectors corresponding to the high-resolution blocks;
S114: synthesizing the high-dimensional vectors obtained in S113 using the third convolutional layer to generate a high-resolution image I', and computing the loss function.
4. The image enhancement method according to claim 3, characterized in that the number of convolution kernels of the first convolutional layer is 64, the number of convolution kernels of the second convolutional layer is 32, and the number of convolution kernels of the third convolutional layer is 1; or
the number of convolution kernels of the first convolutional layer is 128, the number of convolution kernels of the second convolutional layer is 64, and the number of convolution kernels of the third convolutional layer is 1.
5. The image enhancement method according to claim 3, characterized in that the loss function in S114 is the mean-squared error.
6. The image enhancement method according to claim 3, characterized in that the formula corresponding to the first convolutional layer is:
F1(Y) = max(0, W1*Y + B1);
the formula corresponding to the second convolutional layer is:
F2(Y) = max(0, W2*F1(Y) + B2);
the formula corresponding to the third convolutional layer is:
F3(Y) = W3*F2(Y) + B3;
wherein W1 denotes the convolution kernels of the first convolutional layer and B1 its bias, W2 denotes the convolution kernels of the second convolutional layer and B2 its bias, and W3 denotes the convolution kernels of the third convolutional layer and B3 its bias.
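For illustration, the three formulas of claims 3-6 can be rendered as a small convolutional network. This is a hedged sketch, not the patented model: the kernel sizes (9, 5, 5) and the use of PyTorch are assumptions; only the kernel counts 64/32/1 (claim 4), the ReLU form of F1 and F2, and the linear F3 come from the text.

```python
import torch
import torch.nn as nn

class ThreeLayerSR(nn.Module):
    # Feature extraction -> nonlinear mapping -> reconstruction,
    # with kernel counts 64 / 32 / 1 as in claim 4.
    def __init__(self):
        super().__init__()
        self.f1 = nn.Conv2d(1, 64, 9, padding=4)   # F1(Y) = max(0, W1*Y + B1)
        self.f2 = nn.Conv2d(64, 32, 5, padding=2)  # F2(Y) = max(0, W2*F1(Y) + B2)
        self.f3 = nn.Conv2d(32, 1, 5, padding=2)   # F3(Y) = W3*F2(Y) + B3 (no ReLU)

    def forward(self, y):
        h = torch.relu(self.f1(y))
        h = torch.relu(self.f2(h))
        # Training would minimise the mean-squared error (claim 5),
        # e.g. nn.MSELoss() against the high-resolution target.
        return self.f3(h)
```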
7. The image enhancement method according to claim 1, characterized in that enhancing the detail subbands Hh, Hv and Hd respectively in S12 comprises: performing contrast enhancement on the detail subbands Hh, Hv and Hd respectively, and then performing nonlinear enhancement.
8. The image enhancement method according to claim 7, characterized in that the contrast enhancement in S12 is CLAHE contrast enhancement, whose formulas are as follows:
wherein v = 2.5s/(tM), s is the coefficient amplitude in the transform domain, M is the magnitude of the largest coefficient amplitude, tM denotes the threshold above which coefficients are linearly amplified, and the parameters b, c and t are enhancement parameters that adjust, respectively, the height of the maximum-gain point, its position of appearance and the global slope of the curve.
9. The image enhancement method according to claim 7, characterized in that the specific formula of the nonlinear enhancement in S12 is:
wherein v = 2.5s/(tM), s is the coefficient amplitude in the transform domain, M is the magnitude of the largest coefficient amplitude, tM denotes the threshold above which coefficients are linearly amplified, and the parameters b, c and t are enhancement parameters that adjust, respectively, the height of the maximum-gain point, its position of appearance and the global slope of the curve.
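The enhancement formulas of claims 8 and 9 did not survive in the text above; what the wording does state is that coefficients whose amplitude exceeds the threshold tM are linearly amplified. A minimal sketch of just that thresholded amplification, under the assumption of a single hypothetical gain factor (the roles of b, c and t are not reconstructed here):

```python
import numpy as np

def amplify_details(s, t=0.1, gain=2.0):
    # s: wavelet coefficients of a detail subband.
    # M is the largest coefficient amplitude; coefficients whose
    # amplitude exceeds t*M are linearly amplified by `gain`
    # (hypothetical factor), others are left unchanged.
    M = np.max(np.abs(s))
    out = s.copy()
    mask = np.abs(s) > t * M
    out[mask] *= gain
    return out
```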
10. An image enhancement system, characterized by comprising: a first super-resolution module, a first stationary wavelet transform module, a second stationary wavelet transform module, a detail-subband enhancement module, a second super-resolution module and an inverse stationary wavelet transform module; wherein
the first super-resolution module is connected with the first stationary wavelet transform module; the first super-resolution module is configured to perform super-resolution on an input image I to obtain an image I', and the first stationary wavelet transform module is configured to perform a stationary wavelet transform on the image I' obtained by the first super-resolution module to obtain a low-pass subband L;
the second stationary wavelet transform module is connected with the detail-subband enhancement module, and the second super-resolution module is connected with the detail-subband enhancement module; the second stationary wavelet transform module is configured to perform a stationary wavelet transform on the input image I to obtain detail subbands Hh, Hv and Hd; the detail-subband enhancement module is configured to enhance the detail subbands Hh, Hv and Hd obtained by the second stationary wavelet transform module respectively; the second super-resolution module is configured to perform super-resolution on the enhanced detail subbands Hh, Hv and Hd obtained by the detail-subband enhancement module;
the inverse stationary wavelet transform module is connected with the first stationary wavelet transform module and the second super-resolution module respectively; the inverse stationary wavelet transform module is configured to perform an inverse stationary wavelet transform on the low-pass subband L obtained by the first stationary wavelet transform module and the super-resolved detail subbands Hh, Hv and Hd obtained by the second super-resolution module to obtain the enhanced image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910599238.3A CN110322404B (en) | 2019-07-04 | 2019-07-04 | Image enhancement method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110322404A true CN110322404A (en) | 2019-10-11 |
CN110322404B CN110322404B (en) | 2023-08-04 |
Family
ID=68122726
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910599238.3A Active CN110322404B (en) | 2019-07-04 | 2019-07-04 | Image enhancement method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110322404B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112785532A (en) * | 2021-01-12 | 2021-05-11 | 安徽大学 | Singular value equalization image enhancement algorithm based on weighted histogram distribution gamma correction |
CN113066035A (en) * | 2021-03-19 | 2021-07-02 | 桂林理工大学 | Image quality enhancement method based on bilinear interpolation and wavelet transformation |
CN113837975A (en) * | 2021-09-05 | 2021-12-24 | 桂林理工大学 | Image enhancement method based on bicubic interpolation and singular value decomposition |
CN116580290A (en) * | 2023-07-11 | 2023-08-11 | 成都庆龙航空科技有限公司 | Unmanned aerial vehicle identification method, unmanned aerial vehicle identification device and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030228061A1 (en) * | 2002-03-15 | 2003-12-11 | Hiroyuki Sakuyama | Image data generation with reduced amount of processing |
CN103500436A (en) * | 2013-09-17 | 2014-01-08 | 广东威创视讯科技股份有限公司 | Image super-resolution processing method and system |
CN109636716A (en) * | 2018-10-29 | 2019-04-16 | 昆明理工大学 | A kind of image super-resolution rebuilding method based on wavelet coefficient study |
Also Published As
Publication number | Publication date |
---|---|
CN110322404B (en) | 2023-08-04 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||