CN104318525A - Space guiding filtering based image detail enhancement method - Google Patents
- Publication number
- CN104318525A CN104318525A CN201410552788.7A CN201410552788A CN104318525A CN 104318525 A CN104318525 A CN 104318525A CN 201410552788 A CN201410552788 A CN 201410552788A CN 104318525 A CN104318525 A CN 104318525A
- Authority
- CN
- China
- Prior art keywords
- image
- filtering
- space
- source images
- formula
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an image detail enhancement method based on spatially guided filtering. The method comprises the following steps: first, extracting an edge response map from a source image with an image edge detection operator and normalizing it; second, building a binary spatial index map for each gray-level sub-interval, applying Gaussian convolution to each spatial index map to obtain a spatial filtering map, and computing a weight for each spatial filtering map; third, computing a cumulative map and applying guided image filtering to the cumulative map to obtain a spatial guidance map; finally, computing a base image and a residual image and building an image detail enhancement model based on spatially guided filtering to enhance the details of the source image. The method can effectively improve the enhancement of image details.
Description
Technical field
The present invention relates to image detail enhancement methods, and more particularly to an image detail enhancement method that improves the visual effect of an image and enhances the interpretability and recognizability of detailed regions in the image.
Background technology
The 21st century is the information age. The Internet has developed rapidly, and portable smart mobile devices such as mobile phones and the iPad have permeated daily life. Nearly all portable smart mobile devices are equipped with an image capture function. Users around the world take a large number of pictures with mobile phones or iPads every day and share them on the network. However, limited by the natural environment at the time of shooting or by the capture equipment, the visual effect of many of these pictures is unsatisfactory. To improve the visual effect of pictures, researchers have proposed a large number of digital image processing methods. Among them, image detail enhancement methods have received wide attention from both academia and industry in recent years.
Detail features in an image often carry important information, especially in medical images. However, due to factors such as noise and contrast, the visibility of these detail features is greatly reduced, which hinders their effective use. Image detail enhancement, as an image processing technique, can not only highlight the detail features in an image but also attenuate or eliminate unwanted signals.
At present, several edge-aware image filtering methods are used for image detail enhancement, such as the local Laplacian filter and the guided image filter. The main goal of this class of filtering methods is to enhance the detail content of an image without introducing "artifacts". However, these filtering methods apply a uniform filtering strength to the pixels of all image regions and do not consider the relationship between filtering strength and the content of different regions, which leads to the following defects:
(1) when enhancing image details, "artifacts" are still additionally introduced, which reduces the applicability of the method;
(2) these filtering methods cannot apply different filtering strengths to different image regions, which reduces the filtering effect.
In the process of image detail enhancement, users generally want to enhance only the regions that contain important visual information and leave the other ordinary regions unchanged. For example, for a natural landscape photo containing "blue sky", "trees" and "mountain peaks", most users may wish to enhance the details of the "mountain peak" or "tree" regions while leaving the familiar "blue sky" region unchanged. If image regions with different semantics could be identified accurately and different filtering strengths then applied to different regions, the effect of image detail enhancement would be greatly improved. However, because of the "semantic gap" between low-level visual features and high-level semantic content, accurately identifying image regions at the semantic level is very difficult. As a result, the enhancement effect of current image detail enhancement methods is limited and cannot meet users' requirements.
Summary of the invention
To avoid the shortcomings of the above prior art, the present invention provides an image detail enhancement method based on spatially guided filtering, so as to effectively improve the enhancement of image details.
The present invention solves the problem by adopting the following technical scheme:
An image detail enhancement method based on spatially guided filtering according to the present invention is characterized by being carried out as follows:
Step 1: for a source image I with resolution m × n, I ∈ R^(m×n), use an image edge detection operator to extract the edge response map of the source image I, and normalize the edge response map by dividing by 255 to obtain the normalized edge response matrix |∇I|, |∇I| ∈ R^(m×n); m is the length of the source image I; n is the width of the source image I;
Step 2: divide the gray-level range [0, 1] of the normalized edge response matrix |∇I| evenly into k sub-intervals Ω_i, and use formula (1) to build a corresponding spatial index map M(i), M(i) ∈ R^(m×n), 1 ≤ i ≤ k, for each of the k sub-intervals Ω_i;
In formula (1): (x, y) denotes the coordinates of an element of the normalized edge response matrix |∇I|, corresponding to the position of a pixel in the source image I; 1 ≤ x ≤ m, 1 ≤ y ≤ n; |∇I_(x,y)| ∈ Ω_i means that the value of the element in row x and column y of the normalized edge response matrix |∇I| falls in the i-th sub-interval Ω_i; M_(x,y)(i) denotes the value of the element in row x and column y of the i-th spatial index map M(i);
Step 3: apply Gaussian convolution to each spatial index map in turn to obtain k spatial filtering maps I_gauss(i), I_gauss(i) ∈ R^(m×n);
Step 4: count, for each sub-interval, the number of elements of the normalized edge response matrix |∇I| that fall into it, and use formula (2) to compute the weight of each spatial filtering map;
In formula (2): h_i denotes the number of elements of the normalized edge response matrix |∇I| that fall into the i-th sub-interval Ω_i; W(i) denotes the weight of the i-th spatial filtering map;
Step 5: use formula (3) to compute the cumulative map S_a, S_a ∈ R^(m×n);
Step 6: using the source image I as the guidance image, apply the guided image filtering method to the cumulative map S_a to obtain the spatial guidance map S, S ∈ R^(m×n);
Step 7: using the source image I as the guidance image, apply the guided image filtering method to the source image I itself to obtain the base image I_b, I_b ∈ R^(m×n);
Step 8: use the image detail enhancement model shown in formula (4) to perform image detail enhancement on the source image I and obtain the detail-enhanced image I_o, I_o ∈ R^(m×n);
I_o = I_b + S_0 · S ⊙ I_r (4)
In formula (4): S_0 is the filtering strength; S_0 is a scalar; ⊙ is the Hadamard product sign, denoting element-wise multiplication of two matrices; I_r denotes the residual image, with I_r = I − I_b.
Compared with the prior art, the beneficial effects of the present invention are embodied as follows:
1. The present invention constructs a spatial guidance map that can approximately estimate the different semantic regions of an image, overcoming the region-identification problem caused by the "semantic gap". It plays a spatial guidance role during image detail enhancement.
2. The present invention proposes an image detail enhancement method based on spatially guided filtering, which combines the spatial guidance map with guided image filtering to form a spatially guided image filtering method. In spatially guided image filtering, different image content regions are treated differently and given different filtering strengths, overcoming the defect of the original guided image filtering method caused by its uniform filtering strength. When enhancing image details, the enhanced image looks more natural and its visual effect is more pronounced.
3. The image detail enhancement method of the present invention uses only simple low-level visual features, so its computational complexity is low, the processing speed is fast, and a good user experience can be obtained.
Brief description of the drawings
Fig. 1 is the source image on which the present invention performs image detail enhancement;
Fig. 2 is the spatial guidance map established by the present invention from the source image;
Fig. 3 is the image obtained after detail enhancement of the source image with a uniform filtering strength as in the prior art;
Fig. 4 is the image obtained after detail enhancement of the source image with the image detail enhancement method based on spatially guided filtering of the present invention.
Embodiment
In this embodiment, the image detail enhancement method based on spatially guided filtering is mainly used to enhance the details of pictures with poor visual effect and to improve their visual saliency. The method can be packaged as a software app and installed on mobile terminals such as mobile phones or on a PC. Its characteristic is that it proposes a spatial guidance map and combines it with the original guided image filtering method to form a spatially guided image filtering method for enhancing the detail content of an image.
The detailed procedure of the method of the present invention for image detail enhancement is as follows:
Step 1: for a source image I with resolution m × n, I ∈ R^(m×n), use an image edge detection operator to extract the edge response map of the source image I, and normalize the edge response map by dividing by 255 to obtain the normalized edge response matrix |∇I|, |∇I| ∈ R^(m×n); m is the length of the source image I; n is the width of the source image I.
For ease of presentation, the source image in step 1 is described as a gray-level image. If the image to be enhanced is a color image, detail enhancement is applied separately to the image matrices of the red, green and blue color channels, and the three detail-enhanced channel matrices are finally merged into a complete color image, which is the detail-enhanced color image. Fig. 1 is the source image used in the experiments of the present invention; it mainly shows a mountain peak, but the overall visual effect is not good and the detail information of the peak is not rich enough.
The image edge detection operator in step 1 can be the Sobel edge detection operator or the Laplacian edge detection operator; both are classical edge detection operators and both have ready-made functions that can be called directly on the MATLAB software platform. The edge regions of an image generally contain its detail information.
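For illustration only, a minimal Python sketch of step 1 (using OpenCV and NumPy) is given below; the choice of the Sobel operator, the clipping of the gradient magnitude to [0, 255], and the function name edge_response are assumptions of this example, not details fixed by the patent.

```python
import cv2
import numpy as np

def edge_response(I):
    """Step 1 sketch: extract an edge response map of a gray-level image I
    (uint8, m x n) with the Sobel operator and normalize it to [0, 1]."""
    gx = cv2.Sobel(I, cv2.CV_64F, 1, 0, ksize=3)   # horizontal gradient
    gy = cv2.Sobel(I, cv2.CV_64F, 0, 1, ksize=3)   # vertical gradient
    mag = np.sqrt(gx ** 2 + gy ** 2)               # gradient magnitude as edge response
    mag = np.clip(mag, 0, 255)                     # keep the response in the 8-bit range (assumption)
    return mag / 255.0                             # normalized edge response matrix |∇I|
```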
Step 2: divide the gray-level range [0, 1] of the normalized edge response matrix |∇I| evenly into k sub-intervals Ω_i, and use formula (1) to build a corresponding spatial index map M(i), M(i) ∈ R^(m×n), 1 ≤ i ≤ k, for each of the k sub-intervals Ω_i.
In formula (1): (x, y) denotes the coordinates of an element of the normalized edge response matrix |∇I|, corresponding to the position of a pixel in the source image I; 1 ≤ x ≤ m, 1 ≤ y ≤ n; |∇I_(x,y)| ∈ Ω_i means that the value of the element in row x and column y of the normalized edge response matrix |∇I| falls in the i-th sub-interval Ω_i; M_(x,y)(i) denotes the value of the element in row x and column y of the i-th spatial index map M(i).
In step 2, each spatial index map is a binary image; step 2 in fact binarizes the normalized edge response matrix |∇I| at each gray level separately. In practice, regions with the same semantics in an image generally lie in the same gray level, so the spatial index maps reflect the spatial distribution of the image detail content.
In the experiments of the method of the present invention, k is set to 16.
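A minimal sketch of step 2, assuming that formula (1) is the binarization described above (M_(x,y)(i) = 1 when |∇I_(x,y)| falls in Ω_i, and 0 otherwise); the function name spatial_index_maps is an assumption of this example.

```python
import numpy as np

def spatial_index_maps(edge, k=16):
    """Step 2 sketch: split the gray range [0, 1] of the normalized edge
    response matrix into k even sub-intervals and build one binary spatial
    index map M(i) per sub-interval."""
    maps = []
    for i in range(k):
        lo, hi = i / k, (i + 1) / k
        if i < k - 1:
            m = (edge >= lo) & (edge < hi)    # element falls in the i-th sub-interval
        else:
            m = (edge >= lo) & (edge <= hi)   # last sub-interval is closed at 1
        maps.append(m.astype(np.float64))     # binary map, stored as float for later filtering
    return maps
```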
Step 3: apply Gaussian convolution to each spatial index map in turn to obtain k spatial filtering maps I_gauss(i), I_gauss(i) ∈ R^(m×n).
In fact, in step 3 the Gaussian convolution is approximated by three consecutive box-filter (Boxfilter) operations, with the aim of reducing computational complexity while basically preserving accuracy.
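A sketch of step 3 under the approximation just mentioned (three consecutive box filters in place of a true Gaussian); the box radius r is an assumed example parameter, not a value given in the text.

```python
import cv2
import numpy as np

def gaussian_by_box(index_map, r=8):
    """Step 3 sketch: approximate the Gaussian convolution of a binary spatial
    index map by three consecutive normalized box filters."""
    out = index_map.astype(np.float64)
    for _ in range(3):
        out = cv2.boxFilter(out, cv2.CV_64F, (2 * r + 1, 2 * r + 1))  # one box-filter pass
    return out   # spatial filtering map I_gauss(i)
```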
Step 4: count, for each sub-interval, the number of elements of the normalized edge response matrix |∇I| that fall into it, and use formula (2) to compute the weight of each spatial filtering map.
In formula (2): h_i denotes the number of elements of the normalized edge response matrix |∇I| that fall into the i-th sub-interval Ω_i; W(i) denotes the weight of the i-th spatial filtering map.
The important detail information of an image is generally located in its strong edge regions, but strong edges account for only a small proportion of the whole edge response. Weak edges usually account for a larger proportion of the edge response, yet they are often produced by noise. Taking this into account, formula (2) assigns a high weight to the strong edges with a small proportion and a low weight to the weak edges with a large proportion. Step 4 in fact computes a histogram of the normalized edge response matrix |∇I| and converts the statistics into the weights of the spatial filtering maps of step 3.
Many previous image detail enhancement methods treat all gray levels "equally" with a uniform filtering strength, which greatly weakens the detail enhancement effect. The method of the present invention treats them differently in step 4.
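Formula (2) is not reproduced in this text, so its exact form is unknown here. The sketch below assumes a simple inverse-frequency weighting normalized to sum to one, which matches the stated intent (sub-intervals with few elements, i.e. strong edges, receive high weights); the actual weighting of the patent may differ.

```python
import numpy as np

def spatial_weights(edge, k=16, eps=1e-12):
    """Step 4 sketch: histogram the normalized edge response and turn the
    counts h_i into weights W(i). The inverse-frequency form below is an
    assumption standing in for formula (2) of the patent."""
    h, _ = np.histogram(edge, bins=k, range=(0.0, 1.0))   # h_i: elements per sub-interval
    inv = 1.0 / (h.astype(np.float64) + eps)              # rare (strong-edge) intervals get large values
    return inv / inv.sum()                                # normalized weights W(i)
```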
Step 5: use formula (3) to compute the cumulative map S_a, S_a ∈ R^(m×n).
Formula (3) multiplies each spatial filtering map obtained by the Gaussian convolution of step 3 by its corresponding weight and accumulates the results to obtain the cumulative map.
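Based on that description, step 5 can be sketched as the weighted accumulation S_a = Σ_i W(i) · I_gauss(i); the function name cumulative_map is an assumption of this example.

```python
import numpy as np

def cumulative_map(filtered_maps, weights):
    """Step 5 sketch: accumulate the weighted spatial filtering maps into S_a."""
    S_a = np.zeros_like(filtered_maps[0])
    for I_gauss_i, w_i in zip(filtered_maps, weights):
        S_a += w_i * I_gauss_i        # add W(i) * I_gauss(i)
    return S_a
```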
Step 6: using the source image I as the guidance image, apply the guided image filtering method to the cumulative map S_a to obtain the spatial guidance map S, S ∈ R^(m×n).
The guided image filtering method used in step 6 was proposed by Dr. He Kaiming of the Visual Computing Group of Microsoft Research Asia at the European Conference on Computer Vision in 2010. When the source image is used as the guidance image, guided image filtering is an edge-preserving filtering operation. In this step, guided image filtering is applied to the cumulative map of step 5 to obtain the final spatial guidance map. The spatial guidance map directly reflects the spatial distribution of the different semantic contents of the image and plays a "guiding" role in the subsequent image detail enhancement. Fig. 2 is the spatial guidance map established from the source image of Fig. 1.
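A compact sketch of the gray-scale guided image filter of He et al. (ECCV 2010) in its box-filter form, as it could be used to filter the cumulative map S_a with the source image as guide; the radius r and regularization eps are assumed example parameters.

```python
import cv2
import numpy as np

def guided_filter(guide, src, r=16, eps=1e-3):
    """Gray-scale guided image filter (He et al., ECCV 2010), box-filter form."""
    ksize = (2 * r + 1, 2 * r + 1)
    mean = lambda x: cv2.boxFilter(x, cv2.CV_64F, ksize)
    mean_I, mean_p = mean(guide), mean(src)
    var_I = mean(guide * guide) - mean_I * mean_I   # local variance of the guide
    cov_Ip = mean(guide * src) - mean_I * mean_p    # local covariance of guide and source
    a = cov_Ip / (var_I + eps)                      # coefficients of the local linear model
    b = mean_p - a * mean_I
    return mean(a) * guide + mean(b)                # filtered output

# Step 6 sketch: spatial guidance map S, with the source image as guide.
# I_norm and S_a are assumed to be float64 arrays in [0, 1].
# S = guided_filter(I_norm, S_a)
```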
Step 7: using the source image I as the guidance image, apply the guided image filtering method to the source image I itself to obtain the base image I_b, I_b ∈ R^(m×n).
The base image I_b obtained in this step reflects the low-frequency base content of the image.
Step 8: use the image detail enhancement model shown in formula (4) to perform image detail enhancement on the source image I and obtain the detail-enhanced image I_o, I_o ∈ R^(m×n);
I_o = I_b + S_0 · S ⊙ I_r (4)
In formula (4): S_0 is the filtering strength; S_0 is a scalar; ⊙ is the Hadamard product sign, denoting element-wise multiplication of two matrices; I_r denotes the residual image, with I_r = I − I_b. In the experiments of the method of the present invention, the filtering strength S_0 is set to 3. The residual image I_r reflects the high-frequency detail content of the image.
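A sketch of the enhancement model of formula (4), assuming I_b, S and I_r from the sketches above and the filtering strength S_0 = 3 stated in the text; the final clipping and conversion back to 8 bits are assumptions of this example.

```python
import numpy as np

# Step 8 sketch: detail enhancement model of formula (4), I_o = I_b + S_0 * (S ⊙ I_r).
S_0 = 3.0                                # filtering strength used in the experiments of the text
I_o = I_b + S_0 * S * I_r                # '*' is the element-wise (Hadamard) product for NumPy arrays
I_o = np.clip(I_o, 0.0, 1.0)             # keep the result in the displayable range (assumption)
out = (I_o * 255).astype(np.uint8)       # back to an 8-bit image
```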
Formula (4) is the image detail enhancement model based on spatially guided filtering proposed by the present invention. This model is in fact an improvement of the image detail enhancement model shown in formula (5):
I_o = I_b + S_0 · I_r (5)
It can be seen that the image detail enhancement model of formula (5) clearly adopts a uniform filtering strength, while the model of formula (4) has one more term than formula (5), namely the spatial guidance map S. The spatial guidance map S proposed by the present invention takes the relationship between image content and filtering strength into account, so that different filtering strengths are applied to different image content regions.
Fig. 3 is the image obtained by enhancing the source image of Fig. 1 with the image detail enhancement model of formula (5). It can be seen that in Fig. 3 not only the details of the "mountain peak" and "tree" regions are enhanced, but the "sky" region is enhanced as well; this region should not be enhanced, and after enhancement the whole image looks abrupt. This is the drawback brought by a uniform filtering strength.
Fig. 4 is the image obtained by enhancing the source image of Fig. 1 with the image detail enhancement model based on spatially guided filtering of formula (4). Compared with Fig. 3, Fig. 4 enhances the mountain peak and tree regions while leaving the "sky" region unchanged, which is exactly the desired image enhancement result.
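Tying the sketches together, a hypothetical end-to-end driver for a gray-level image; all function names and parameters come from the sketches above and are assumptions of this example, not the patented implementation.

```python
import cv2
import numpy as np

def enhance_details(path, k=16, S_0=3.0):
    """End-to-end sketch of steps 1-8 for a gray-level image."""
    I = cv2.imread(path, cv2.IMREAD_GRAYSCALE)           # source image I
    I_norm = I.astype(np.float64) / 255.0
    edge = edge_response(I)                               # step 1: normalized edge response |∇I|
    maps = spatial_index_maps(edge, k)                    # step 2: binary spatial index maps M(i)
    filtered = [gaussian_by_box(m) for m in maps]         # step 3: spatial filtering maps I_gauss(i)
    weights = spatial_weights(edge, k)                    # step 4: weights W(i)
    S_a = cumulative_map(filtered, weights)               # step 5: cumulative map S_a
    S = guided_filter(I_norm, S_a)                        # step 6: spatial guidance map S
    I_b = guided_filter(I_norm, I_norm)                   # step 7: base image I_b
    I_r = I_norm - I_b                                    # residual image I_r = I - I_b
    I_o = np.clip(I_b + S_0 * S * I_r, 0.0, 1.0)          # step 8: formula (4)
    return (I_o * 255).astype(np.uint8)
```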
The above is only a preferred embodiment of the present invention. Any equivalent replacement or parameter change made by a person skilled in the art, within the technical scope disclosed by the present invention, according to the technical scheme and inventive concept of the present invention, shall be covered by the protection scope of the present invention.
Claims (1)
1. An image detail enhancement method based on spatially guided filtering, characterized in that it is carried out as follows:
Step 1: for a source image I with resolution m × n, I ∈ R^(m×n), use an image edge detection operator to extract the edge response map of the source image I, and normalize the edge response map by dividing by 255 to obtain the normalized edge response matrix |∇I|, |∇I| ∈ R^(m×n); m is the length of the source image I; n is the width of the source image I;
Step 2: divide the gray-level range [0, 1] of the normalized edge response matrix |∇I| evenly into k sub-intervals Ω_i, and use formula (1) to build a corresponding spatial index map M(i), M(i) ∈ R^(m×n), 1 ≤ i ≤ k, for each of the k sub-intervals Ω_i;
In formula (1): (x, y) denotes the coordinates of an element of the normalized edge response matrix |∇I|, corresponding to the position of a pixel in the source image I; 1 ≤ x ≤ m, 1 ≤ y ≤ n; |∇I_(x,y)| ∈ Ω_i means that the value of the element in row x and column y of the normalized edge response matrix |∇I| falls in the i-th sub-interval Ω_i; M_(x,y)(i) denotes the value of the element in row x and column y of the i-th spatial index map M(i);
Step 3: apply Gaussian convolution to each spatial index map in turn to obtain k spatial filtering maps I_gauss(i), I_gauss(i) ∈ R^(m×n);
Step 4: count, for each sub-interval, the number of elements of the normalized edge response matrix |∇I| that fall into it, and use formula (2) to compute the weight of each spatial filtering map;
In formula (2): h_i denotes the number of elements of the normalized edge response matrix |∇I| that fall into the i-th sub-interval Ω_i; W(i) denotes the weight of the i-th spatial filtering map;
Step 5: use formula (3) to compute the cumulative map S_a, S_a ∈ R^(m×n);
Step 6: using the source image I as the guidance image, apply the guided image filtering method to the cumulative map S_a to obtain the spatial guidance map S, S ∈ R^(m×n);
Step 7: using the source image I as the guidance image, apply the guided image filtering method to the source image I itself to obtain the base image I_b, I_b ∈ R^(m×n);
Step 8: use the image detail enhancement model shown in formula (4) to perform image detail enhancement on the source image I and obtain the detail-enhanced image I_o, I_o ∈ R^(m×n);
I_o = I_b + S_0 · S ⊙ I_r (4)
In formula (4): S_0 is the filtering strength; S_0 is a scalar; ⊙ is the Hadamard product sign, denoting element-wise multiplication of two matrices; I_r denotes the residual image, with I_r = I − I_b.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410552788.7A CN104318525B (en) | 2014-10-17 | 2014-10-17 | Space guiding filtering based image detail enhancement method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104318525A true CN104318525A (en) | 2015-01-28 |
CN104318525B CN104318525B (en) | 2017-02-15 |
Family
ID=52373751
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410552788.7A Active CN104318525B (en) | 2014-10-17 | 2014-10-17 | Space guiding filtering based image detail enhancement method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104318525B (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130156332A1 (en) * | 2011-12-19 | 2013-06-20 | Cisco Technology, Inc. | System and method for depth-guided image filtering in a video conference environment |
EP2608549A2 (en) * | 2011-12-23 | 2013-06-26 | MediaTek Inc. | Method and apparatus for adjusting depth-related information map according to quality measurement result of the depth-related information map |
KR20130084643A (en) * | 2012-01-17 | 2013-07-25 | 삼성전자주식회사 | Image processing apparatus and method |
CN103020917A (en) * | 2012-12-29 | 2013-04-03 | 中南大学 | Method for restoring ancient Chinese calligraphy and painting images on basis of conspicuousness detection |
CN103337061A (en) * | 2013-07-18 | 2013-10-02 | 厦门大学 | Rain and snow removing method for image based on multiple guided filtering |
CN103440630A (en) * | 2013-09-02 | 2013-12-11 | 南京理工大学 | Large-dynamic-range infrared image display and detail enhancement method based on guiding filter |
CN103955899A (en) * | 2014-05-02 | 2014-07-30 | 南方医科大学 | Dynamic PET image denoising method based on combined image guiding |
CN104050637A (en) * | 2014-06-05 | 2014-09-17 | 华侨大学 | Quick image defogging method based on two times of guide filtration |
Non-Patent Citations (1)
Title |
---|
SHIJIE HAO ET AL: ""Spatially guided local Laplacian filter for nature image detail enhancement"", 《MULTIMEDIA TOOLS AND APPLICATIONS》 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106611407A (en) * | 2015-10-21 | 2017-05-03 | 中华映管股份有限公司 | Image enhancement method and image processing apparatus thereof |
CN112163994A (en) * | 2020-09-01 | 2021-01-01 | 重庆邮电大学 | Multi-scale medical image fusion method based on convolutional neural network |
CN112163994B (en) * | 2020-09-01 | 2022-07-01 | 重庆邮电大学 | Multi-scale medical image fusion method based on convolutional neural network |
Also Published As
Publication number | Publication date |
---|---|
CN104318525B (en) | 2017-02-15 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |