CN105023253A - Visual underlying feature-based image enhancement method - Google Patents

Visual underlying feature-based image enhancement method

Info

Publication number
CN105023253A
CN105023253A
Authority
CN
China
Prior art keywords
image
saliency
visual
feature
brightness
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510418774.0A
Other languages
Chinese (zh)
Inventor
郭少东
王晓红
章婷
麻祥才
刘玄玄
洪建华
况盛坤
李杰
魏代海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology
Priority to CN201510418774.0A
Publication of CN105023253A
Legal status: Pending

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to an image enhancement method based on visual low-level features. The method comprises the following steps: 1, the visual low-level features of an image are extracted; 2, the features are fused with weights to form a computed saliency map; 3, the computed saliency map is compared, via the correlation coefficient, with an eye-movement saliency map measured by an eye tracker, so that the optimal weight of each visual low-level feature can be determined; and 4, a suitable image enhancement method is selected for each type of image according to the weights. In the method, the eye-movement saliency map based on human fixation points is combined with traditional image enhancement for the first time, and an optimized Itti visual attention model based on visual low-level features is adopted to extract the computed feature saliency map of the image. The visual quality of color images reproduced on digital display devices is thereby improved: for each type of image, the important information is highlighted and secondary information is weakened, so that the final enhancement result better accords with human visual perception. The method has practical value in image enhancement.

Description

Image enhancement method based on visual low-level features
Technical field
The present invention relates to a digital image processing method, and in particular to an image enhancement method based on visual low-level features.
Background art
In practice, digital images often appear distorted on display devices for a variety of reasons. To avoid this drawback, image enhancement has become one of the research focuses of digital image processing. Research shows that when processing an image, the complex human visual system gives priority to the regions that most attract visual attention (the human-eye visual regions of interest), rather than analyzing the global information of the image.
As is well known, traditional image enhancement methods process and analyze images on the basis of global information. This does not match the characteristics of the human visual system: much computation is wasted on analyzing secondary information, the efficiency of image processing is reduced, and the final enhancement result does not match human visual perception. Therefore, to overcome the drawbacks of traditional enhancement methods, novel color-image enhancement based on human visual perception has become a research focus. The method proposed here takes into account both the characteristics of human vision and the conclusions of a large number of psychophysical experiments.
Summary of the invention
The object of the invention is to propose an image enhancement method based on visual low-level features whose final enhancement result better matches human visual perception.
To achieve this object, the technical solution adopted by the present invention is as follows:
An image enhancement method based on visual low-level features, comprising the following steps:
(1) extract the visual low-level features of the image, including the color, brightness, orientation, texture and edge features;
(2) fuse the features with weights to form a computed saliency map;
(3) compare the computed saliency map, via the correlation coefficient, with the eye-movement saliency map recorded by an eye tracker, to determine the optimal weight of each visual low-level feature;
(4) select a suitable image enhancement method for each type of image according to the magnitudes of the weights.
Step (1) further comprises the following concrete steps:
(1.1) extract the brightness feature in HSV color space: convert the source image into a gray-scale map containing only luminance information, and finally renormalize the pixel values to the range [0, 255];
(1.2) extract the color feature in the HSV color space, which matches human visual perception; after the color feature has been extracted, renormalize the pixel values to the range [0, 255];
(1.3) extract the orientation feature of the image by the Gabor transform;
(1.4) extract the texture feature with a Gabor filter bank: first use the filter bank to extract texture features at 5 scales and 4 orientations of the image, then normalize the resulting 20 texture feature maps, and finally superimpose them with equal weights to form the final texture saliency map;
(1.5) extract the edge feature with the most typical extractor, the second-derivative Laplacian operator, applied to the gray-scale image in HSV space.
Step (2) further comprises the following concrete steps:
(2.1) filter the color, brightness, orientation, texture and edge features of the input image at 6 scales with Gaussian functions;
(2.2) extract the visual features of the image at the different scales using difference operations and the Laplacian operator;
(2.3) perform center-surround difference computations for each feature, normalize and sum the difference images, and obtain the saliency map of that feature;
(2.4) assign weights to the 5 feature saliency maps obtained, ensuring that the weights of the 5 visual low-level features sum to 1; superimpose the 5 feature saliency maps to form one combined multi-feature saliency map.
The formula adopted in step (3) is as follows:

$$\rho = \frac{\sum_x \big(H(x)-\mu_H\big)\,\big(S(x)-\mu_S\big)}{\sqrt{\sum_x \big(H(x)-\mu_H\big)^2 \cdot \sum_x \big(S(x)-\mu_S\big)^2}}$$

where x denotes a pixel in the image; H(x) and S(x) are the saliency values of the current pixel in the eye-movement saliency map and the computed saliency map, respectively; and μ_H and μ_S are the mean saliency values of all pixels in the two maps. ρ ranges over [-1, 1]: ρ = 1 means the two saliency maps are fully correlated, ρ = 0 means they are completely uncorrelated, and ρ = -1 means they are inversely correlated, i.e. the salient regions of one map are entirely non-salient in the other.
Step (4) further comprises the following concrete steps:
(4.1) for images in which brightness carries the largest weight, adopt the optimized histogram equalization method based on the human-eye saliency map;
(4.2) for images in which the color feature carries the largest weight, adopt an enhancement method that raises color saturation and contrast inside the visual region of interest;
(4.3) for images in which the texture feature carries the largest weight, adopt the wavelet-transform method to perform denoising enhancement on the visual region of interest and highlight the details of the textured parts.
The formula adopted in step (1.2) is as follows:

$$f_c(x,y) = \frac{1}{\left(s_c + \exp\left(-\frac{\mathrm{saturation}(x,y)}{\mathrm{saturation}_{ave}}\right)\right)\cdot\left(b_c + \exp\left(-\frac{\mathrm{brightness}(x,y)}{\mathrm{brightness}_{ave}}\right)\right)}$$

where saturation(x, y) and brightness(x, y) are the saturation and brightness of pixel (x, y), saturation_ave and brightness_ave are the mean saturation and brightness of the whole image, and s_c and b_c are constants, both set to 0.5.
The formula adopted in step (1.3) is as follows:

$$O = \exp\left(-\frac{x^2+y^2}{2\sigma^2}\right)\cdot\cos\left(w\,x\cos\theta + w\,y\sin\theta + \frac{\pi}{2}\right)$$

where w is a constant, θ = nπ/k (n = 0, 1, ..., k-1) is the orientation of the filter, and k is the number of Gabor filter orientations. The orientation feature of the image is extracted with this Gabor function; the filters are set to 4 orientations (θ = 0°, 45°, 90°, 135°, i.e. k = 4), and the 4 resulting feature maps are superimposed with equal weights to form the final orientation saliency map O.
The formula adopted in step (1.4) is as follows:

$$g(x,y) = \frac{k}{\sigma^2}\,\exp\left(-\frac{k^2(x^2+y^2)}{2\sigma^2}\right)\exp\left(i\,k\,(x\cos\theta + y\sin\theta)\right)$$

where k sets the scale (indexed by n in the filter bank) and θ denotes the orientation.
The formula adopted in step (2.4) is as follows:

$$S = w_1 C + w_2 I + w_3 O + w_4 T + w_5 E$$

where w_1, w_2, w_3, w_4 and w_5 are the weights of the corresponding visual low-level feature indices and sum to 1, and C, I, O, T and E denote the saliency maps of the color, brightness, orientation, texture and edge features respectively. Finally S, the computed saliency map of the image, is normalized to [0, 255] and processed into a binary image.
The beneficial effects of the invention are as follows:
Compared with conventional image enhancement methods, the present invention has the following advantages. The eye-movement saliency map based on human fixation points is, for the first time, integrated into traditional image enhancement, and an optimized Itti visual attention model based on visual low-level features is proposed to extract the computed feature saliency map of the image. This improves the visual quality of color images reproduced on digital display devices: for each image type, the important information in the image is highlighted and secondary information is weakened, so that the final enhancement result better matches human visual perception. The method has practical value in image enhancement.
Brief description of the drawings
Fig. 1 is the overall flow chart of the saliency-map computation model based on visual low-level features;
Fig. 2 shows the final results of the two kinds of saliency maps;
Fig. 3 is the table of optimal-weight analysis results;
Fig. 4 is the schematic diagram of histogram equalization based on the visual region of interest;
Fig. 5 compares brightness enhancement methods;
Fig. 6 compares enhancement methods that apply color and brightness enhancement step by step;
Fig. 7 compares enhancement methods that apply texture and brightness enhancement step by step.
Detailed description of the embodiments
The image enhancement method based on visual low-level features according to the present invention is elaborated below with reference to the accompanying drawings and a preferred embodiment, but the invention is not limited to this embodiment. Concrete details are given in the following preferred embodiment so that the public may thoroughly understand the invention.
1. Saliency-map computation model based on visual low-level features
As shown in Fig. 1, the overall flow of the saliency-map computation model based on visual low-level features is as follows:
(1) Brightness feature: the brightness feature is extracted in HSV color space; the source image is converted into a gray-scale map containing only luminance information, and the pixel values are finally renormalized to the range [0, 255].
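As a minimal sketch of this step (Python with OpenCV; the function name and the min-max renormalization rule are illustrative assumptions, and the imports here are reused by the later sketches in this description):

```python
import cv2
import numpy as np

def brightness_map(bgr):
    """Take the V (brightness) channel in HSV space and renormalize
    the resulting gray-scale map to [0, 255]."""
    v = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 2].astype(np.float64)
    v = (v - v.min()) / (v.max() - v.min() + 1e-12)  # guard against flat images
    return (v * 255).astype(np.uint8)
```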
(2) Color feature: the color feature is extracted in the HSV color space, which matches human visual perception; after the color feature has been extracted, the pixel values are renormalized to the range [0, 255]. The formula is:

$$f_c(x,y) = \frac{1}{\left(s_c + \exp\left(-\frac{\mathrm{saturation}(x,y)}{\mathrm{saturation}_{ave}}\right)\right)\cdot\left(b_c + \exp\left(-\frac{\mathrm{brightness}(x,y)}{\mathrm{brightness}_{ave}}\right)\right)}$$

where saturation(x, y) and brightness(x, y) are the saturation and brightness of pixel (x, y), saturation_ave and brightness_ave are the mean saturation and brightness of the whole image, and s_c and b_c are constants, both set to 0.5.
(3) Orientation feature: the orientation feature is a local feature of the image. It reflects the fact that some pixels in the image are arranged along certain orientations, which produces a visual sense of direction. The orientation feature is generally extracted with the Gabor transform, whose formula is:

$$O = \exp\left(-\frac{x^2+y^2}{2\sigma^2}\right)\cdot\cos\left(w\,x\cos\theta + w\,y\sin\theta + \frac{\pi}{2}\right)$$

where w is a constant, θ = nπ/k (n = 0, 1, ..., k-1) is the orientation of the filter, and k is the number of Gabor filter orientations. Here the Gabor function is used to extract the orientation feature of the image; the filters are set to 4 orientations (θ = 0°, 45°, 90°, 135°, i.e. k = 4), and the 4 resulting feature maps are superimposed with equal weights to form the final orientation saliency map O.
(4) Texture feature: a Gabor filter bank is adopted to extract the texture feature. First the filter bank extracts texture features at 5 scales and 4 orientations of the image; the resulting 20 texture feature maps are then normalized and finally superimposed with equal weights to form the final texture saliency map. The formula adopted is:

$$g(x,y) = \frac{k}{\sigma^2}\,\exp\left(-\frac{k^2(x^2+y^2)}{2\sigma^2}\right)\exp\left(i\,k\,(x\cos\theta + y\sin\theta)\right)$$

where k sets the scale (indexed by n in the filter bank) and θ denotes the orientation.
(5) Edge feature: the most typical extractor of the edge feature is the second-derivative Laplacian operator, which extracts the gray-scale edge feature of the image in HSV space.
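For example (using OpenCV's Laplacian on the V channel; taking the absolute response and renormalizing it are assumptions):

```python
def edge_map(bgr):
    """Second-derivative (Laplacian) edge feature on the HSV gray-scale map."""
    v = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 2]
    lap = np.abs(cv2.Laplacian(v.astype(np.float64), cv2.CV_64F))
    lap = (lap - lap.min()) / (lap.max() - lap.min() + 1e-12)
    return (lap * 255).astype(np.uint8)
```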
After the feature maps of the 5 visual low-level features have been obtained, the Itti model is used to compute the feature saliency maps, with the following concrete steps.
(1) Filter the color, brightness, orientation, texture and edge features of the input image at 6 scales with Gaussian functions.
(2) Extract the visual features of the image at the different scales using difference operations and the Laplacian operator.
(3) Perform center-surround difference computations for each feature, normalize and sum the difference images, and obtain the saliency map of that feature.
(4) Assign weights to the 5 feature saliency maps obtained (ensuring that the weights of the 5 visual low-level features sum to 1), and superimpose the 5 feature saliency maps to form one combined multi-feature saliency map. The formula is:

$$S = w_1 C + w_2 I + w_3 O + w_4 T + w_5 E$$

where w_1, w_2, w_3, w_4 and w_5 are the weights of the corresponding visual low-level feature indices and sum to 1, and C, I, O, T and E denote the saliency maps of the color, brightness, orientation, texture and edge features respectively. Finally S, the computed saliency map of the image, is normalized to [0, 255] and processed into a binary image.
2. Gaussian saliency-map computation model of human fixation points
A fixation point generally refers to a position inside the human-eye region of interest. The fixation points are captured with a high-precision eye tracker, and a two-dimensional Gaussian distribution is fitted around each fixation point: the value is maximal at the fixation position and decays towards zero as the neighborhood expands. The two-dimensional Gaussian distribution function is:

$$H(x,y) = \frac{1}{2\pi\sigma^2}\exp\left(-\frac{(x-x_0)^2+(y-y_0)^2}{2\sigma^2}\right)$$

where H(x, y) is the degree of influence of the fixation point on its neighborhood, (x_0, y_0) is the coordinate of the fixation point, and σ is the variance parameter of the Gaussian distribution. According to the actual viewing distance and angle in the experiment, σ = 37, and all fixation points are taken into account. The final results of the two kinds of saliency maps are shown in Fig. 2: Fig. 2a is the original image, Fig. 2b is the computed saliency map S, and Fig. 2c is the eye-movement saliency map H.
3. Determination of the weight coefficients of the visual low-level features for different image types
Correlation analysis is adopted to compare the similarity between the computed saliency map and the eye-movement saliency map, and thereby to determine the weight of each visual low-level feature in the computed saliency map.
The correlation coefficient formula is:

$$\rho = \frac{\sum_x \big(H(x)-\mu_H\big)\,\big(S(x)-\mu_S\big)}{\sqrt{\sum_x \big(H(x)-\mu_H\big)^2 \cdot \sum_x \big(S(x)-\mu_S\big)^2}}$$

where x denotes a pixel in the image; H(x) and S(x) are the saliency values of the current pixel in the eye-movement saliency map and the computed saliency map, respectively; and μ_H and μ_S are the mean saliency values of all pixels in the two maps. ρ ranges over [-1, 1]: ρ = 1 means the two saliency maps are fully correlated, ρ = 0 means they are completely uncorrelated, and ρ = -1 means they are inversely correlated, i.e. the salient regions of one map are entirely non-salient in the other.
4. Computation and analysis of the optimal weights
The eye-movement saliency map in the experiment is objective and fixed, whereas the computed saliency map can be adjusted continuously through the weight allocation of the visual features, until one set of feature weights maximizes the correlation coefficient between the computed and eye-movement saliency maps; that set of weights is the optimal weight. To reduce the amount of computation, the feature indices that influence each type of picture most strongly are first determined by inspecting the hot-spot maps obtained in the eye-tracking experiment, and are correspondingly given larger weights. The weight granularity is then set to 0.05, i.e. the weight of each visual feature ranges over [0.05, 0.75] and all the weights sum to 1. All weight assignments satisfying these conditions are enumerated, the correlation coefficient is computed for each, and the set of weights giving the largest correlation coefficient is selected. The experimental results are shown in Fig. 3.
As can be seen from Fig. 3, with the optimal weights the correlation coefficient between the two saliency maps always exceeds 80%, indicating very strong correlation. From the weight allocation: for portrait images, the texture feature weight reaches 0.46 and has the greatest influence; in natural-scenery images, the brightness weight reaches 0.42 and influences the visual region of interest most; in animal images, the texture feature weight reaches 0.34 and has the greatest influence; and in color-themed images, the color feature is given the largest weight, 0.34, and has the greatest influence.
The comparison shows that in different types of images, different visual features influence the image regions of interest to different degrees, and that the same visual feature contributes differently to different image types.
5. Image enhancement methods based on the visual low-level features of different image types
Fig. 3 shows that the three features of color, brightness and texture carry the major weights in all image types. For images in which brightness carries the largest weight, an optimized histogram equalization method based on the human-eye saliency map is proposed; its principle is shown in Fig. 4. First a gray-level histogram based on the human-eye saliency map is constructed; the dominant gray levels of the histogram are then reasonably amplitude-limited, and the space thus saved is redistributed to the non-dominant gray levels according to the information distribution of the gray-level histogram. In this way the goal of proportion optimization is reached, and experiments show that the enhancement result of this method better matches human visual perception.
For images in which the color feature carries the largest weight, an enhancement method that raises color saturation and contrast inside the visual region of interest is adopted, as sketched below. For images in which the texture feature carries the largest weight, the wavelet-transform method is adopted to perform denoising enhancement on the visual region of interest and highlight the details of the textured parts.
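A sketch of the saturation/contrast variant (the gain values and the saliency-blended gains are illustrative; the patent fixes only the idea of strengthening the region of interest):

```python
def boost_color_in_roi(bgr, saliency, sat_gain=1.3, con_gain=1.2):
    """Raise saturation and contrast mainly inside the visual region of
    interest: per-pixel gains are blended by the normalized saliency map,
    so non-salient areas are left nearly untouched."""
    m = saliency.astype(np.float64) / 255.0
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float64)
    hsv[:, :, 1] = np.clip(hsv[:, :, 1] * (1 + (sat_gain - 1) * m), 0, 255)
    v = hsv[:, :, 2]
    hsv[:, :, 2] = np.clip((v - v.mean()) * (1 + (con_gain - 1) * m) + v.mean(),
                           0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```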
The special case in which the weights of the two principal visual low-level features are close is also considered; for example, the weights of orientation and brightness in ocean images are very close. It is therefore stipulated that when the difference between the two weights is within 0.05 (inclusive), enhancement is carried out step by step with the methods corresponding to the first and second weights; when the difference exceeds 0.05, only the enhancement method corresponding to the first weight is applied.
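This dispatch rule is simple to state in code (the feature names are illustrative):

```python
def choose_pipeline(weights,
                    names=("color", "brightness", "orientation",
                           "texture", "edge")):
    """If the top two feature weights differ by at most 0.05, enhance for
    both features in sequence; otherwise enhance for the top feature only."""
    (w1, n1), (w2, n2) = sorted(zip(weights, names), reverse=True)[:2]
    return [n1, n2] if w1 - w2 <= 0.05 else [n1]
```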
Figs. 5-7 compare the results against image enhancement methods that do not consider the visual region of interest.
As seen from Fig. 5, when a single visual low-level feature is enhanced, the proposed histogram enhancement method based on human visual perception performs better and avoids the over-enhancement problem of traditional histogram equalization. Fig. 5a is the gray-scale map of the luminance (I) component of the original image, Fig. 5b is the result of traditional histogram equalization, and Fig. 5c is the result of the optimized histogram equalization. As can be seen from Figs. 6 and 7, compared with traditional enhancement methods that process only a single feature, the proposed step-by-step enhancement of multiple key features better highlights the most important visual information in the final result (e.g. color saturation and contrast inside the human-eye region of interest are raised accordingly and the texture is clearer), matching human visual perception. Figs. 6a and 7a are the original images, Figs. 6b and 7b show the traditional image enhancement methods, and Figs. 6c and 7c show the proposed enhancement method.

Claims (9)

1. An image enhancement method based on visual low-level features, characterized in that the method comprises the following steps:
(1) extract the visual low-level features of the image, including the color, brightness, orientation, texture and edge features;
(2) fuse the features with weights to form a computed saliency map;
(3) compare the computed saliency map, via the correlation coefficient, with the eye-movement saliency map recorded by an eye tracker, to determine the optimal weight of each visual low-level feature;
(4) select a suitable image enhancement method for each type of image according to the magnitudes of the weights.
2. The image enhancement method based on visual low-level features according to claim 1, characterized in that step (1) further comprises the following concrete steps:
(1.1) extract the brightness feature in HSV color space: convert the source image into a gray-scale map containing only luminance information, and finally renormalize the pixel values to the range [0, 255];
(1.2) extract the color feature in the HSV color space, which matches human visual perception; after the color feature has been extracted, renormalize the pixel values to the range [0, 255];
(1.3) extract the orientation feature of the image by the Gabor transform;
(1.4) extract the texture feature with a Gabor filter bank: first use the filter bank to extract texture features at 5 scales and 4 orientations of the image, then normalize the resulting 20 texture feature maps, and finally superimpose them with equal weights to form the final texture saliency map;
(1.5) extract the edge feature with the most typical extractor, the second-derivative Laplacian operator, applied to the gray-scale image in HSV space.
3. The image enhancement method based on visual low-level features according to claim 1, characterized in that step (2) further comprises the following concrete steps:
(2.1) filter the color, brightness, orientation, texture and edge features of the input image at 6 scales with Gaussian functions;
(2.2) extract the visual features of the image at the different scales using difference operations and the Laplacian operator;
(2.3) perform center-surround difference computations for each feature, normalize and sum the difference images, and obtain the saliency map of that feature;
(2.4) assign weights to the 5 feature saliency maps obtained, ensuring that the weights of the 5 visual low-level features sum to 1; superimpose the 5 feature saliency maps to form one combined multi-feature saliency map.
4. The image enhancement method based on visual low-level features according to claim 1, characterized in that the formula adopted in step (3) is as follows:

$$\rho = \frac{\sum_x \big(H(x)-\mu_H\big)\,\big(S(x)-\mu_S\big)}{\sqrt{\sum_x \big(H(x)-\mu_H\big)^2 \cdot \sum_x \big(S(x)-\mu_S\big)^2}}$$

where x denotes a pixel in the image; H(x) and S(x) are the saliency values of the current pixel in the eye-movement saliency map and the computed saliency map, respectively; and μ_H and μ_S are the mean saliency values of all pixels in the two maps. ρ ranges over [-1, 1]: ρ = 1 means the two saliency maps are fully correlated, ρ = 0 means they are completely uncorrelated, and ρ = -1 means they are inversely correlated, i.e. the salient regions of one map are entirely non-salient in the other.
5. The image enhancement method based on visual low-level features according to claim 1, characterized in that step (4) further comprises the following concrete steps:
(4.1) for images in which brightness carries the largest weight, adopt the optimized histogram equalization method based on the human-eye saliency map;
(4.2) for images in which the color feature carries the largest weight, adopt an enhancement method that raises color saturation and contrast inside the visual region of interest;
(4.3) for images in which the texture feature carries the largest weight, adopt the wavelet-transform method to perform denoising enhancement on the visual region of interest and highlight the details of the textured parts.
6. The image enhancement method based on visual low-level features according to claim 2, characterized in that the formula adopted in step (1.2) is as follows:

$$f_c(x,y) = \frac{1}{\left(s_c + \exp\left(-\frac{\mathrm{saturation}(x,y)}{\mathrm{saturation}_{ave}}\right)\right)\cdot\left(b_c + \exp\left(-\frac{\mathrm{brightness}(x,y)}{\mathrm{brightness}_{ave}}\right)\right)}$$

where saturation(x, y) and brightness(x, y) are the saturation and brightness of pixel (x, y), saturation_ave and brightness_ave are the mean saturation and brightness of the whole image, and s_c and b_c are constants, both set to 0.5.
7. The image enhancement method based on visual low-level features according to claim 2, characterized in that the formula adopted in step (1.3) is as follows:

$$O = \exp\left(-\frac{x^2+y^2}{2\sigma^2}\right)\cdot\cos\left(w\,x\cos\theta + w\,y\sin\theta + \frac{\pi}{2}\right)$$

where w is a constant, θ = nπ/k (n = 0, 1, ..., k-1) is the orientation of the filter, and k is the number of Gabor filter orientations; the orientation feature of the image is extracted with the Gabor function, the filters are set to 4 orientations (θ = 0°, 45°, 90°, 135°, i.e. k = 4), and the 4 resulting feature maps are superimposed with equal weights to form the final orientation saliency map O.
8. The image enhancement method based on visual low-level features according to claim 2, characterized in that the formula adopted in step (1.4) is as follows:

$$g(x,y) = \frac{k}{\sigma^2}\,\exp\left(-\frac{k^2(x^2+y^2)}{2\sigma^2}\right)\exp\left(i\,k\,(x\cos\theta + y\sin\theta)\right)$$

where k sets the scale (indexed by n in the filter bank) and θ denotes the orientation.
9. The image enhancement method based on visual low-level features according to claim 3, characterized in that the formula adopted in step (2.4) is as follows:

$$S = w_1 C + w_2 I + w_3 O + w_4 T + w_5 E$$

where w_1, w_2, w_3, w_4 and w_5 are the weights of the corresponding visual low-level feature indices and sum to 1, and C, I, O, T and E denote the saliency maps of the color, brightness, orientation, texture and edge features respectively; finally S, the computed saliency map of the image, is normalized to [0, 255] and processed into a binary image.
CN201510418774.0A 2015-07-16 2015-07-16 Visual underlying feature-based image enhancement method Pending CN105023253A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510418774.0A CN105023253A (en) 2015-07-16 2015-07-16 Visual underlying feature-based image enhancement method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510418774.0A CN105023253A (en) 2015-07-16 2015-07-16 Visual underlying feature-based image enhancement method

Publications (1)

Publication Number Publication Date
CN105023253A 2015-11-04

Family

ID=54413197

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510418774.0A Pending CN105023253A (en) 2015-07-16 2015-07-16 Visual underlying feature-based image enhancement method

Country Status (1)

Country Link
CN (1) CN105023253A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102510438A (en) * 2011-11-21 2012-06-20 四川虹微技术有限公司 Acquisition method of sparse coefficient vector for recovering and enhancing video image
CN103177420A (en) * 2013-03-13 2013-06-26 北京大学 Image amplification method and image application device based on local-area feature correlations
CN103345732A (en) * 2013-07-26 2013-10-09 电子科技大学 Pulse coupled neural network (PCNN) image enhancement algorithm and device based on Contourlet transformation
CN103500442A (en) * 2013-09-29 2014-01-08 华南理工大学 X-ray image multi-scale detail enhancement method in integrated circuit packaging

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张尚伟 et al.: "Multi-scale Retinex tone-mapping algorithm with detail compensation and color restoration", Journal of Xi'an Jiaotong University (《西安交通大学学报》) *
章婷 et al.: "Image enhancement method based on visual low-level features", Packaging Engineering (《包装工程》) *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105898288A (en) * 2016-05-30 2016-08-24 上海交通大学 Synergistic visual search system and method capable of sharing attentions
CN107085828A (en) * 2017-04-29 2017-08-22 天津大学 Image mosaic fusion method based on human-eye visual characteristic
CN107085828B (en) * 2017-04-29 2020-06-26 天津大学 Image splicing and fusing method based on human visual characteristics
CN108230233A (en) * 2017-05-16 2018-06-29 北京市商汤科技开发有限公司 Data enhancing, treating method and apparatus, electronic equipment and computer storage media
CN107147874A (en) * 2017-05-31 2017-09-08 深圳众厉电力科技有限公司 A kind of good remote monitoring system of monitoring performance
US11737734B2 (en) 2017-10-16 2023-08-29 Beijing Shen Mindray Med Elec Tech Res Inst Co Ltd Ultrasound imaging device and system, and image enhancement method for contrast enhanced ultrasound imaging
CN111278363B (en) * 2017-10-16 2022-07-22 北京深迈瑞医疗电子技术研究院有限公司 Ultrasonic imaging equipment, system and image enhancement method for ultrasonic contrast imaging
CN111278363A (en) * 2017-10-16 2020-06-12 北京深迈瑞医疗电子技术研究院有限公司 Ultrasonic imaging equipment, system and image enhancement method for ultrasonic contrast imaging
CN108305236A (en) * 2018-01-16 2018-07-20 腾讯科技(深圳)有限公司 Image enhancement processing method and device
CN109035203A (en) * 2018-06-25 2018-12-18 青岛海信医疗设备股份有限公司 Medical image processing method, device, equipment and storage medium
WO2020029708A1 (en) * 2018-08-07 2020-02-13 深圳市商汤科技有限公司 Image processing method and apparatus, electronic device, storage medium, and program product
JP2021507439A (en) * 2018-08-07 2021-02-22 Shenzhen Sensetime Technology Co., Ltd. Image processing methods and devices, electronic devices, storage media and program products
CN109344840B (en) * 2018-08-07 2022-04-01 深圳市商汤科技有限公司 Image processing method and apparatus, electronic device, storage medium, and program product
JP7065199B2 (en) 2022-05-11 Shenzhen Sensetime Technology Co., Ltd. Image processing methods and equipment, electronic devices, storage media and program products
CN109344840A (en) * 2018-08-07 2019-02-15 深圳市商汤科技有限公司 Image processing method and device, electronic equipment, storage medium, program product
CN109300099A (en) * 2018-08-29 2019-02-01 努比亚技术有限公司 A kind of image processing method, mobile terminal and computer readable storage medium
CN109461417A (en) * 2018-12-11 2019-03-12 惠科股份有限公司 A kind of driving method of display panel, drive system and display device
CN109978881A (en) * 2019-04-09 2019-07-05 苏州浪潮智能科技有限公司 A kind of method and apparatus of saliency processing
CN109978881B (en) * 2019-04-09 2021-11-26 苏州浪潮智能科技有限公司 Image saliency processing method and device
CN115661447A (en) * 2022-11-23 2023-01-31 成都信息工程大学 Product image adjusting method based on big data


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20151104