CN104966285A - Method for detecting saliency regions

Method for detecting saliency regions

Info

Publication number: CN104966285A
Application number: CN201510297326.XA
Authority: CN (China)
Language: Chinese (zh)
Other versions: CN104966285B (en)
Legal status: Granted; Active
Inventors: 王立春, 李志明, 孔德慧, 尹宝才
Current and original assignee: Beijing University of Technology
Application filed by Beijing University of Technology; priority to CN201510297326.XA, priority date 2015-06-03
Publication of application CN104966285A; application granted and published as CN104966285B

Classifications

    • G06T 2207/20021: Dividing image into blocks, subimages or windows (indexing scheme for image analysis or enhancement; special algorithmic details)
    • G06T 2207/20032: Median filtering (indexing scheme for image analysis or enhancement; special algorithmic details; filtering details)

Abstract

The invention discloses a method for detecting salient regions in which the basic elements participating in the difference comparison are defined as regions, so that the basic elements are of the same order of magnitude as the final detection result, thereby improving the efficiency of salient region detection. The method comprises the steps of (1) starting; (2) pre-processing the original image with a Gaussian filter; (3) converting the image pre-processed in step (2) from the RGB color space to the LAB color space; (4) performing graph-based image segmentation; (5) calculating the feature vector and centroid coordinates of each segmented region; and (6) calculating the saliency value of each segmented region in the LAB color space.

Description

Method for detecting salient regions
Technical field
The invention belongs to the technical fields of multimedia technology and computer graphics, and in particular relates to a method for detecting salient regions.
Background technology
Salient region detection means having a machine simulate the human ability to identify, quickly and accurately, the salient regions of the visual field. It is a very important technique in the field of computer vision, with great application value in image segmentation, object recognition, image retrieval, content preservation, image editing, and other fields. Over the past few decades researchers have proposed many salient region detection methods, which fall mainly into two classes: salient region detection based on global contrast and saliency detection based on regional contrast. The classic salient region detection methods include MZ, FT, SR, AC, GB, and IT; among these six typical saliency detection algorithms, FT currently has the highest efficiency and accuracy.
What these algorithms have in common is that the pixel is the basic unit of computation: the saliency value of a pixel is obtained by comparing the difference, in a given feature vector, between that pixel and the other pixels of the whole image or of a specified area. However, both salient region detection results and the human visual system indicate that the salient region of an image is usually the foreground object of that image, that is, one or more contiguous regions. All of these algorithms ignore the fact that a salient region is inherently a region, and their salient region detection efficiency is therefore low.
Summary of the invention
The technical problem solved by the present invention is: overcoming the deficiencies of the prior art by providing a method for detecting salient regions in which the basic elements participating in the difference comparison are defined as regions, so that they are of the same order of magnitude as the final detection result, thereby improving the efficiency of salient region detection.
The technical solution of the present invention is a method for detecting salient regions, comprising the following steps:
(1) start;
(2) pre-process the original image with Gaussian filtering, computing each element of the Gaussian convolution kernel according to formula (2):

h(i, j) = \frac{1}{2\pi\sigma^2} e^{-\frac{(i-k-1)^2 + (j-k-1)^2}{2\sigma^2}}    (2)

where i and j run from 1 to 2k+1, the parameter σ is the variance, and the parameter k determines the dimension of the kernel matrix: the Gaussian convolution kernel is of size (2k+1) × (2k+1);
(3) convert the image pre-processed in step (2) from the RGB color space to the LAB color space;
(4) perform graph-based image segmentation;
(5) calculate the feature vector and centroid coordinates of each segmented region;
(6) calculate the saliency value of each segmented region in the LAB color space.
The present invention first performs graph-based segmentation of the original image and uses the segmented regions as the basic units of the difference comparison, so that these units are of the same order of magnitude as the final detection result, which improves the efficiency of salient region detection.
Accompanying drawing explanation
Fig. 1 is the flow chart of the salient region detection method of the present invention.
Fig. 2 shows the precision-recall curves of the present invention and six other methods on a public dataset of 1000 images; the vertical axis of the plot is precision and the horizontal axis is recall.
Embodiment
As shown in Fig. 1, the method for detecting salient regions comprises the following steps:
(1) start;
(2) pre-process the original image with Gaussian filtering, computing each element of the Gaussian convolution kernel according to formula (2):

h(i, j) = \frac{1}{2\pi\sigma^2} e^{-\frac{(i-k-1)^2 + (j-k-1)^2}{2\sigma^2}}    (2)

where i and j run from 1 to 2k+1, the parameter σ is the variance, and the parameter k determines the dimension of the kernel matrix: the Gaussian convolution kernel is of size (2k+1) × (2k+1);
(3) convert the image pre-processed in step (2) from the RGB color space to the LAB color space;
(4) perform graph-based image segmentation in the LAB color space;
(5) calculate the feature vector and centroid coordinates of each segmented region;
(6) calculate the saliency value of each segmented region in the LAB color space.
The present invention first performs graph-based segmentation of the original image and uses the segmented regions as the basic units of the difference comparison, so that these units are of the same order of magnitude as the final detection result, which improves the efficiency of salient region detection.
Preferably, in step (2), σ is 0.8 and k is 1, so the filter window is 3 × 3.
Preferably, step (4) comprises the following sub-steps:
(4.1) map the image onto a weighted undirected graph G = <V, E>; each vertex v_i ∈ V corresponds to a pixel of the image, and each edge edge(v_i, v_j) ∈ E connects a pair of neighboring pixels v_i and v_j; the weight w(edge(v_i, v_j)) is a non-negative dissimilarity between the neighboring pixels in gray level, color, or texture, computed according to formula (8):

w(edge(v_i, v_j)) = ||l_i - l_j|| + ||a_i - a_j|| + ||b_i - b_j||    (8)

where l_i, a_i, b_i are the values of the Lab channels of pixel v_i after conversion to the Lab color space; the edge set is then sorted by non-decreasing weight;
(4.2) initially, take S^0 as the initial segmentation, in which the number of components equals the number of vertices n: each vertex is an independent component, and each component C_k is initialized with the threshold T_k = τ(C_k); initially |C_k| = 1;
(4.3) repeat step (4.4) m times, for q = 1, 2, ..., m;
(4.4) let Q_q = (v_i, v_j) denote the q-th edge of the traversal; first find the components C_1 and C_2 containing v_i and v_j; if C_1 and C_2 are equal, the two vertices are already inside the same component, so S^q = S^{q-1}; if C_1 and C_2 are not equal, test whether the weight w(edge(v_i, v_j)) is smaller than Mint(C_1, C_2); if it is, the two vertices should lie inside one component, so merge component C_2 into C_1: if C_1' is the component C_1 before the merge, then C_1 = C_1' ∪ C_2, and after merging update the threshold of C_1 as τ(C_1) = τ(C_1') + τ(C_2);
(4.5) filter the components C_1 and C_2 produced by step (4.4) once more, merging regions whose vertex count is less than the threshold MIN_NUM, taken as 0.4% of the total number of pixels of the original image; repeat step (4.4) m times, for q = 1, 2, ..., m, deciding whether two components belong together according to formula (9):

D(C_1, C_2) = \begin{cases} \text{true} & \text{if } \min(size(C_1), size(C_2)) < MIN\_NUM \\ \text{false} & \text{otherwise} \end{cases}    (9).
Preferably, in step (5):
Calculate the feature vector of each region and the centroid coordinates of each region. The feature vector of region r_k is I_k = [L_m, a_m, b_m]^T, where L_m, a_m, b_m are calculated according to formula (10):

L_m = \frac{1}{n}\sum_{i=1}^{n} L_i, \quad a_m = \frac{1}{n}\sum_{i=1}^{n} a_i, \quad b_m = \frac{1}{n}\sum_{i=1}^{n} b_i    (10)

where n is the number of pixels of region r_k and L_i, a_i, b_i are the Lab values of its i-th pixel; the feature vector of each region is the mean Lab color of the pixels it contains.
The centroid coordinates of each region r_k are c_k = (x_k, y_k), where x_k, y_k are calculated according to formula (11):

x_k = \frac{1}{n}\sum_{i=1}^{n} x_i / cols, \quad y_k = \frac{1}{n}\sum_{i=1}^{n} y_i / rows    (11)

where n is the number of pixels of region r_k, (x_i, y_i) are the coordinates of the pixels of region r_k, cols is the number of columns of the whole image, and rows is the number of rows.
Preferably, in step (6):
The saliency value of region r_i is calculated with formula (12):

S(r_i) = \sum_{r_i \neq r_j} U_{ij}\, w(r_j)\, D_{ij}(r_i, r_j)    (12)

where w(r_j) is the weight of region r_j, representing the strength of the influence of region r_j on the saliency of region r_i, and U_{ij} is the spatial-distance weight of the two regions, representing how strongly near and far regions affect region r_i; it is calculated by formula (13):

U_{ij} = \exp(-C_{ij}(r_i, r_j) / \rho)    (13)

where ρ controls the strength of the influence of spatial distance on region saliency and is set to 0.16;
C_{ij}(r_i, r_j) is the centroid distance between regions r_i and r_j, calculated according to formula (14):

C_{ij}(r_i, r_j) = ||x_i - x_j|| + ||y_i - y_j||    (14)

where (x_i, y_i) and (x_j, y_j) are the centroid coordinates of regions r_i and r_j;
D_{ij}(r_i, r_j) is the distance of the two regions in color space, calculated according to formula (15):

D(r_i, r_j) = ||I_i - I_j||    (15)

where I_i and I_j are the feature vectors of regions r_i and r_j.
A specific embodiment is given below.
The method proposed by the present invention comprises the following steps:
1. Gaussian filtering
During imaging, digitization, and transmission an image is disturbed by various kinds of noise, which degrades the quality of the image finally obtained: features are submerged, the image becomes blurred or distorted, and the noisy image adversely affects the later image analysis. Filtering is a mathematical model through which the image data are transformed in terms of energy, and the filtering result discards the low-energy components; noise is a low-energy signal, so filtering can remove it effectively. Digital image filtering examines every pixel of the original image: the filter result at a pixel is obtained by multiplying a filter template with the pixel's neighborhood (including the pixel itself). The present invention pre-processes the original image with Gaussian filtering, a linear smoothing filter suitable for removing white Gaussian noise. The rule of Gaussian filtering is a weighted average over the gray value of the pixel itself and the gray values of the other pixels in its neighborhood, the weighted-average coefficients being obtained by sampling a two-dimensional discrete Gaussian function and normalizing. The concrete Gaussian filter function is formula (1):

h(x, y) = \frac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2}{2\sigma^2}}    (1)

In the implementation the present invention adapts formula (1) appropriately and uses formula (2) to compute each element of the Gaussian convolution kernel:

h(i, j) = \frac{1}{2\pi\sigma^2} e^{-\frac{(i-k-1)^2 + (j-k-1)^2}{2\sigma^2}}    (2)

where i and j run from 1 to 2k+1 and the parameter σ is the variance: too large a σ deepens the filtering and blurs the image edges excessively, which harms the subsequent graph-based region segmentation, while too small a σ weakens the filtering and removes noise poorly. The parameter k determines the dimension of the kernel matrix: the Gaussian convolution kernel is of size (2k+1) × (2k+1), i.e., the size of the filter window. Considering that in Gaussian filtering a σ of 0.8 causes no visually obvious change to the original image while still helping to remove image artifacts, this experiment uses σ = 0.8 and k = 1, i.e., a 3 × 3 filter window.
After the original image is Gaussian-filtered in this way, the quality of the salient regions obtained later improves significantly; the reason is that after Gaussian filtering the segmented regions produced by the graph-based image segmentation method are more accurate, which improves the quality of the salient regions obtained later.
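As an illustration of this pre-processing step, a minimal Python sketch follows (assuming NumPy and SciPy are available; the names gaussian_kernel and gaussian_prefilter are illustrative, not from the patent). It builds the kernel of formula (2) and normalizes it to unit sum, matching the normalized weighted average described above:

import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(k=1, sigma=0.8):
    """Gaussian convolution kernel of formula (2), of size (2k+1) x (2k+1)."""
    size = 2 * k + 1
    # i and j run from 1 to 2k+1, so (i - k - 1) is the offset from the centre.
    i, j = np.mgrid[1:size + 1, 1:size + 1]
    g = np.exp(-((i - k - 1) ** 2 + (j - k - 1) ** 2) / (2.0 * sigma ** 2))
    g /= 2.0 * np.pi * sigma ** 2
    return g / g.sum()  # normalize so the weights sum to 1

def gaussian_prefilter(image, k=1, sigma=0.8):
    """Pre-processing of step (2): smooth every colour channel with the kernel."""
    kern = gaussian_kernel(k, sigma)
    channels = [convolve(image[..., c].astype(float), kern, mode='nearest')
                for c in range(image.shape[2])]
    return np.dstack(channels)

With the preferred parameters (sigma = 0.8, k = 1) this produces the 3 × 3 filter window described above.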
2. RGB color space conversion to LAB color space
The LAB color space is closer to the human visual system: it strives for perceptual uniformity, is a device-independent color system, and is a color system based on physiological characteristics. This means that it describes human visual response numerically and therefore agrees better with the human perceptual system. On this theoretical basis the present invention converts the original image from the RGB color space to the LAB color space and then performs the region segmentation and the saliency calculation.
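A one-line color conversion suffices here; the sketch below assumes OpenCV (cv2) and that images are loaded in OpenCV's BGR channel order ('input.png' is a hypothetical file name):

import cv2
import numpy as np

def to_lab(image_bgr):
    # Convert to float in [0, 1] first so the resulting channels use their
    # natural CIELAB ranges (L in [0, 100], a and b roughly in [-127, 127]).
    img = image_bgr.astype(np.float32) / 255.0
    return cv2.cvtColor(img, cv2.COLOR_BGR2LAB)

lab = to_lab(cv2.imread('input.png'))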
3. Graph-based image segmentation
Image segmentation refers to dividing an image into several mutually non-overlapping regions according to features such as gray level, color, texture, and shape, so that these features are similar within each region and clearly different between regions. Saliency detection is a special application scenario: it requires high segmentation speed and accurate segmentation of the large regions, while the small regions arising during segmentation can be merged directly into the large region they most resemble, which reduces the time complexity of the later computation. Considering that graph-based image segmentation in the Lab color space agrees better with human perception, the image segmentation algorithm proposed by Pedro F. Felzenszwalb et al. is improved here: the edge-weight computation of the original algorithm is converted from the RGB color space to the LAB color space, turning RGB-based graph segmentation into LAB-based graph segmentation.
First, the meaning of the symbols used in the algorithm of the present invention:
The predicate D (formula 3) indicates whether two components belong to the same component; a component is a region.

D(C_1, C_2) = \begin{cases} \text{true} & \text{if } Dif(C_1, C_2) \le Mint(C_1, C_2) \\ \text{false} & \text{otherwise} \end{cases}    (3)

where C_1 and C_2 are subsets of V, V is the set of all pixels of the image, and each pixel is viewed as a vertex of the graph. Dif(C_1, C_2) is defined by formula (4):

Dif(C_1, C_2) = \min_{v_i \in C_1,\ v_j \in C_2,\ (v_i, v_j) \in E} w(edge(v_i, v_j))    (4)

i.e., the minimum weight among all edges connecting the two components is taken as the difference value of the two components; if no edge connects them, the difference value is considered infinite. When the difference value is less than or equal to Mint(C_1, C_2), the two components belong to the same component and must be merged; otherwise they are two different components and no processing is needed. Mint(C_1, C_2) is defined by formula (5):

Mint(C_1, C_2) = min(Int(C_1) + τ(C_1), Int(C_2) + τ(C_2))    (5)

Int(C_k) is the internal difference of component C_k: the maximum edge weight in the minimum spanning tree of the component, C_k being a subset of V. Int(C_k) is defined by formula (6):

Int(C_k) = \max_{e \in MST(C_k, E_k)} w(e)    (6)

where E_k is the set of all edges of component C_k. τ(C_k) is a threshold function, the minimum threshold used when judging whether two components are different components, defined by formula (7):

τ(C_k) = l / |C_k|    (7)

where l is a constant controlling the minimum threshold; generally, the higher the image resolution, the larger l should be. In this experiment l is set to 50, and |C_k| is the number of vertices contained in component C_k.
The input of the present invention is a graph G = <V, E> with n vertices and m edges, and the output is the set of components S = (C_1, C_2, C_3, ..., C_r) obtained by partitioning the vertex set V; a component is a region. The concrete segmentation steps are as follows:
Step 1: first map the image onto a weighted undirected graph G = <V, E>; each vertex v_i ∈ V corresponds to a pixel of the image, and each edge edge(v_i, v_j) ∈ E connects a pair of neighboring pixels v_i and v_j (pixels in the 8-neighborhood are defined as adjacent). The weight w(edge(v_i, v_j)) is a non-negative dissimilarity between the neighboring pixels in gray level, color, or texture, computed here with formula (8):

w(edge(v_i, v_j)) = ||l_i - l_j|| + ||a_i - a_j|| + ||b_i - b_j||    (8)

where l_i, a_i, b_i are the values of the LAB channels of pixel v_i after conversion to the LAB color space. The edge set is then sorted by non-decreasing weight.
Step 2: initially, take S^0 as the initial segmentation, in which the number of components equals the number of vertices n, i.e., each vertex is an independent component, and each component C_k is initialized with the threshold T_k = τ(C_k); initially |C_k| = 1.
Step 3: repeat Step 4 m times, for q = 1, 2, ..., m, i.e., traverse all edges once.
Step 4: starting from the segmentation result S^{q-1}, obtain the segmentation result S^q by the following rule. Let Q_q = (v_i, v_j) denote the q-th edge of the traversal; first find the components C_1 and C_2 containing v_i and v_j. If C_1 and C_2 are equal, the two vertices are already inside the same component, so no processing is done and S^q = S^{q-1}. If C_1 and C_2 are not equal, decide according to formula (3) whether they belong to the same component, i.e., test whether the weight w(edge(v_i, v_j)) is smaller than Mint(C_1, C_2); if it is, the two vertices should lie inside one component, so merge component C_2 into C_1: if C_1' is the component C_1 before the merge, then C_1 = C_1' ∪ C_2, and after merging update the threshold of C_1 as τ(C_1) = τ(C_1') + τ(C_2).
Step 5: since a salient region itself is never a very small region, filter the components C_1 and C_2 produced by Step 4 once more, merging the regions whose vertex count is less than the threshold MIN_NUM. To stay adaptive in the experiments (the higher the resolution, the larger the threshold; the lower the resolution, the smaller), MIN_NUM is taken as 0.4% of the total number of pixels of the original image. Repeat Step 4 m times, for q = 1, 2, ..., m, i.e., traverse all edges once more, but with the rule deciding whether two components belong together changed to formula (9):

D(C_1, C_2) = \begin{cases} \text{true} & \text{if } \min(size(C_1), size(C_2)) < MIN\_NUM \\ \text{false} & \text{otherwise} \end{cases}    (9)

After Step 5 ends, the original image has been adaptively divided into several irregular regions.
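For concreteness, the following Python sketch implements Steps 1 through 5 (assuming NumPy; a union-find structure replaces the explicit sequence S^0, ..., S^m, and names such as segment_lab, l_const, and min_num_frac are illustrative, not from the patent). The threshold update after a merge follows the rule stated in Step 4, τ(C_1) = τ(C_1') + τ(C_2), rather than Felzenszwalb's original τ = l/|C|:

import numpy as np

def segment_lab(lab, l_const=50.0, min_num_frac=0.004):
    """Graph-based segmentation of a float Lab image (H x W x 3) following
    formulas (3)-(9); returns an H x W array of component labels."""
    h, w = lab.shape[:2]
    n = h * w
    idx = np.arange(n).reshape(h, w)
    ys, xs = np.mgrid[0:h, 0:w]

    # Step 1: edges over the 8-neighbourhood (each pair once), weights per
    # formula (8): w = |l_i - l_j| + |a_i - a_j| + |b_i - b_j|.
    parts = []
    for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)]:
        yy, xx = ys + dy, xs + dx
        ok = (yy >= 0) & (yy < h) & (xx >= 0) & (xx < w)
        a, b = idx[ys[ok], xs[ok]], idx[yy[ok], xx[ok]]
        wgt = np.abs(lab[ys[ok], xs[ok]] - lab[yy[ok], xx[ok]]).sum(axis=1)
        parts.append(np.stack([wgt, a, b], axis=1))
    edges = np.concatenate(parts)
    edges = edges[np.argsort(edges[:, 0])]          # sort by non-decreasing weight

    # Step 2: every vertex starts as its own component with tau = l / 1.
    parent = np.arange(n)
    size = np.ones(n, dtype=int)
    tau = np.full(n, l_const)
    int_max = np.zeros(n)                           # Int(C): max MST edge weight

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(ra, rb, wgt):
        parent[rb] = ra
        size[ra] += size[rb]
        int_max[ra] = wgt       # edges arrive sorted, so wgt is the new maximum

    # Steps 3-4: merge when the joining edge is within Mint(C1, C2), formula (5).
    for wgt, a, b in edges:
        ra, rb = find(int(a)), find(int(b))
        if ra != rb and wgt <= min(int_max[ra] + tau[ra], int_max[rb] + tau[rb]):
            tau[ra] = tau[ra] + tau[rb]             # Step 4's threshold update
            union(ra, rb, wgt)

    # Step 5: absorb components smaller than MIN_NUM, formula (9).
    min_num = min_num_frac * n
    for wgt, a, b in edges:
        ra, rb = find(int(a)), find(int(b))
        if ra != rb and min(size[ra], size[rb]) < min_num:
            union(ra, rb, wgt)

    return np.array([find(i) for i in range(n)]).reshape(h, w)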
4. Calculating feature vectors and centroid coordinates
Next calculate the feature vector of each region and the centroid coordinates of each region. The feature vector of region r_k is I_k = [L_m, a_m, b_m]^T, where L_m, a_m, b_m are calculated as in formula (10):

L_m = \frac{1}{n}\sum_{i=1}^{n} L_i, \quad a_m = \frac{1}{n}\sum_{i=1}^{n} a_i, \quad b_m = \frac{1}{n}\sum_{i=1}^{n} b_i    (10)

where n is the number of pixels of region r_k and L_i, a_i, b_i are the Lab values of its i-th pixel; the feature vector of each region is thus the mean Lab color of the pixels the region contains. The centroid coordinates of each region r_k are c_k = (x_k, y_k), where x_k, y_k are calculated as in formula (11):

x_k = \frac{1}{n}\sum_{i=1}^{n} x_i / cols, \quad y_k = \frac{1}{n}\sum_{i=1}^{n} y_i / rows    (11)

where n is the number of pixels of region r_k, (x_i, y_i) are the coordinates of the pixels of region r_k, cols is the number of columns of the whole image, and rows is the number of rows of the whole image.
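A direct NumPy sketch of formulas (10) and (11) follows (region_features is an illustrative name; the returned pixel counts are kept for use as the region weights w(r_k) of formula (12) below):

import numpy as np

def region_features(lab, labels):
    """Mean Lab colour (formula 10), normalised centroid (formula 11) and
    pixel count of each segmented region."""
    h, w = labels.shape
    feats, cents, sizes = {}, {}, {}
    for r in np.unique(labels):
        ys, xs = np.nonzero(labels == r)
        feats[r] = lab[ys, xs].mean(axis=0)         # I_k = [L_m, a_m, b_m]^T
        cents[r] = (xs.mean() / w, ys.mean() / h)   # (x_k / cols, y_k / rows)
        sizes[r] = len(ys)
    return feats, cents, sizes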
5. Salient region calculation
The saliency value of region r_i is calculated with formula (12):

S(r_i) = \sum_{r_i \neq r_j} U_{ij}\, w(r_j)\, D_{ij}(r_i, r_j)    (12)

where w(r_j) is the weight of region r_j, representing the strength of the influence of region r_j on the saliency of region r_i. Observation shows that large regions form stronger contrast effects than small regions, so the pixel count of each region is used here as the weight of the corresponding region: the larger the weight, the larger the region's contribution. U_{ij} in formula (12) is the spatial-distance weight of the two regions; observation shows that high contrast with an adjacent region attracts visual attention more readily than high contrast with a distant region, so the spatial distance U_{ij} is introduced to represent how strongly near and far regions affect region r_i. The spatial-distance weight U_{ij} is calculated with formula (13):

U_{ij} = \exp(-C_{ij}(r_i, r_j) / \rho)    (13)

where ρ controls the strength of the influence of spatial distance on region saliency; the formula shows that the larger ρ is, the smaller the influence of the spatial-distance weight on saliency. In the experiments ρ is 0.16. C_{ij}(r_i, r_j) is the centroid distance between regions r_i and r_j, calculated as in formula (14):

C_{ij}(r_i, r_j) = ||x_i - x_j|| + ||y_i - y_j||    (14)

where (x_i, y_i) and (x_j, y_j) are the centroid coordinates of regions r_i and r_j.
D_{ij}(r_i, r_j) is the distance of the two regions in color space, analogous to the contrast between two pixels in other algorithms, calculated as in formula (15):

D(r_i, r_j) = ||I_i - I_j||    (15)

where I_i and I_j are the feature vectors of regions r_i and r_j; the Euclidean distance is used to compute the distance of the two regions in color space.
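The saliency computation of formulas (12) through (15) then reduces to a double loop over regions; below is a minimal sketch (assuming NumPy and the outputs of the region_features sketch above; region_saliency is an illustrative name):

import numpy as np

def region_saliency(feats, cents, sizes, rho=0.16):
    """S(r_i) = sum over j of U_ij * w(r_j) * D_ij(r_i, r_j), formula (12)."""
    sal = {}
    for i in feats:
        s = 0.0
        for j in feats:
            if i == j:
                continue
            # Centroid distance of formula (14).
            c_ij = abs(cents[i][0] - cents[j][0]) + abs(cents[i][1] - cents[j][1])
            u_ij = np.exp(-c_ij / rho)                  # spatial weight, formula (13)
            d_ij = np.linalg.norm(feats[i] - feats[j])  # colour distance, formula (15)
            s += u_ij * sizes[j] * d_ij                 # w(r_j) = pixel count
        sal[i] = s
    return sal

# Hypothetical end-to-end pipeline built from the sketches above:
#   smoothed = gaussian_prefilter(cv2.imread('input.png'))
#   lab      = to_lab(smoothed.astype(np.uint8))
#   labels   = segment_lab(lab)
#   sal      = region_saliency(*region_features(lab, labels))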
The above is only a preferred embodiment of the present invention and does not limit the present invention in any form; any simple modification, equivalent variation, or alteration of the above embodiment made according to the technical essence of the present invention still falls within the protection scope of the technical solution of the present invention.

Claims (5)

1. A method for detecting salient regions, characterized in that it comprises the following steps:
(1) start;
(2) pre-process the original image with Gaussian filtering, computing each element of the Gaussian convolution kernel according to formula (2):

h(i, j) = \frac{1}{2\pi\sigma^2} e^{-\frac{(i-k-1)^2 + (j-k-1)^2}{2\sigma^2}}    (2)

where i and j run from 1 to 2k+1, the parameter σ is the variance, and the parameter k determines the dimension of the kernel matrix: the Gaussian convolution kernel is of size (2k+1) × (2k+1);
(3) convert the image pre-processed in step (2) from the RGB color space to the LAB color space;
(4) perform graph-based image segmentation;
(5) calculate the feature vector and centroid coordinates of each segmented region;
(6) calculate the saliency value of each segmented region in the LAB color space.
2. The method for detecting salient regions according to claim 1, characterized in that, in step (2), σ is 0.8 and k is 1, and the size of the filter window is 3 × 3.
3. The method for detecting salient regions according to claim 2, characterized in that step (4) comprises the following sub-steps:
(4.1) map the image onto a weighted undirected graph G = <V, E>; each vertex v_i ∈ V corresponds to a pixel of the image, and each edge edge(v_i, v_j) ∈ E connects a pair of neighboring pixels v_i and v_j; the weight w(edge(v_i, v_j)) is a non-negative dissimilarity between the neighboring pixels in gray level, color, or texture, computed according to formula (8):

w(edge(v_i, v_j)) = ||l_i - l_j|| + ||a_i - a_j|| + ||b_i - b_j||    (8)

where l_i, a_i, b_i are the values of the Lab channels of pixel v_i after conversion to the Lab color space; the edge set is then sorted by non-decreasing weight;
(4.2) initially, take S^0 as the initial segmentation, in which the number of components equals the number of vertices n: each vertex is an independent component, and each component C_k is initialized with the threshold T_k = τ(C_k); initially |C_k| = 1;
(4.3) repeat step (4.4) m times, for q = 1, 2, ..., m;
(4.4) let Q_q = (v_i, v_j) denote the q-th edge of the traversal; first find the components C_1 and C_2 containing v_i and v_j; if C_1 and C_2 are equal, the two vertices are already inside the same component, so S^q = S^{q-1}; if C_1 and C_2 are not equal, test whether the weight w(edge(v_i, v_j)) is smaller than Mint(C_1, C_2); if it is, the two vertices should lie inside one component, so merge component C_2 into C_1: if C_1' is the component C_1 before the merge, then C_1 = C_1' ∪ C_2, and after merging update the threshold of C_1 as τ(C_1) = τ(C_1') + τ(C_2);
(4.5) filter the components C_1 and C_2 produced by step (4.4) once more, merging regions whose vertex count is less than the threshold MIN_NUM, taken as 0.4% of the total number of pixels of the original image; repeat step (4.4) m times, for q = 1, 2, ..., m, deciding whether two components belong together according to formula (9):

D(C_1, C_2) = \begin{cases} \text{true} & \text{if } \min(size(C_1), size(C_2)) < MIN\_NUM \\ \text{false} & \text{otherwise} \end{cases}    (9).
4. The method for detecting salient regions according to claim 3, characterized in that, in step (5):
Calculate the feature vector of each region and the centroid coordinates of each region; the feature vector of region r_k is I_k = [L_m, a_m, b_m]^T, where L_m, a_m, b_m are calculated according to formula (10):

L_m = \frac{1}{n}\sum_{i=1}^{n} L_i, \quad a_m = \frac{1}{n}\sum_{i=1}^{n} a_i, \quad b_m = \frac{1}{n}\sum_{i=1}^{n} b_i    (10)

where n is the number of pixels of region r_k and L_i, a_i, b_i are the Lab values of its i-th pixel; the feature vector of each region is the mean Lab color of the pixels it contains;
The centroid coordinates of each region r_k are c_k = (x_k, y_k), where x_k, y_k are calculated according to formula (11):

x_k = \frac{1}{n}\sum_{i=1}^{n} x_i / cols, \quad y_k = \frac{1}{n}\sum_{i=1}^{n} y_i / rows    (11)

where n is the number of pixels of region r_k, (x_i, y_i) are the coordinates of the pixels of region r_k, cols is the number of columns of the whole image, and rows is the number of rows.
5. The method for detecting salient regions according to claim 4, characterized in that, in step (6):
The saliency value of region r_i is calculated with formula (12):

S(r_i) = \sum_{r_i \neq r_j} U_{ij}\, w(r_j)\, D_{ij}(r_i, r_j)    (12)

where w(r_j) is the weight of region r_j, representing the strength of the influence of region r_j on the saliency of region r_i, and U_{ij} is the spatial-distance weight of the two regions, representing how strongly near and far regions affect region r_i; it is calculated by formula (13):

U_{ij} = \exp(-C_{ij}(r_i, r_j) / \rho)    (13)

where ρ controls the strength of the influence of spatial distance on region saliency and is set to 0.16;
C_{ij}(r_i, r_j) is the centroid distance between regions r_i and r_j, calculated according to formula (14):

C_{ij}(r_i, r_j) = ||x_i - x_j|| + ||y_i - y_j||    (14)

where (x_i, y_i) and (x_j, y_j) are the centroid coordinates of regions r_i and r_j;
D_{ij}(r_i, r_j) is the distance of the two regions in color space, calculated according to formula (15):

D(r_i, r_j) = ||I_i - I_j||    (15)

where I_i and I_j are the feature vectors of regions r_i and r_j.
CN201510297326.XA 2015-06-03 2015-06-03 Method for detecting salient regions Active CN104966285B (en)

Priority Applications (1)

Application Number: CN201510297326.XA; Priority Date: 2015-06-03; Filing Date: 2015-06-03; Title: Method for detecting salient regions (CN104966285B)

Publications (2)

Publication Number: CN104966285A (en), Publication Date: 2015-10-07
Publication Number: CN104966285B (en), Publication Date: 2018-01-19

Family

ID: 54220316

Family Applications (1)

Application Number: CN201510297326.XA (Active); Title: Method for detecting salient regions; Priority Date: 2015-06-03; Filing Date: 2015-06-03

Country Status (1)

Country: CN; Document: CN104966285B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020196848A1 (en) * 2001-05-10 2002-12-26 Roman Kendyl A. Separate plane compression
CN102129694A (en) * 2010-01-18 2011-07-20 中国科学院研究生院 Method for detecting salient region of image
CN101984464A (en) * 2010-10-22 2011-03-09 北京工业大学 Method for detecting degree of visual saliency of image in different regions
CN102509099A (en) * 2011-10-21 2012-06-20 清华大学深圳研究生院 Detection method for image salient region
CN104103082A (en) * 2014-06-06 2014-10-15 华南理工大学 Image saliency detection method based on region description and priori knowledge

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张会: "基于视觉显著性的目标识别" (Object recognition based on visual saliency), China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105528765A (en) * 2015-12-02 2016-04-27 小米科技有限责任公司 Method and device for processing image
CN105825238A (en) * 2016-03-30 2016-08-03 江苏大学 Visual saliency object detection method
CN105825238B (en) * 2016-03-30 2019-04-30 江苏大学 Visual saliency object detection method
CN110111295A (en) * 2018-02-01 2019-08-09 北京中科奥森数据科技有限公司 A kind of image collaboration conspicuousness detection method and device
CN110111295B (en) * 2018-02-01 2021-06-11 北京中科奥森数据科技有限公司 Image collaborative saliency detection method and device
CN108921820B (en) * 2018-05-30 2021-10-29 咸阳师范学院 Saliency target detection method based on color features and clustering algorithm
CN108921820A (en) * 2018-05-30 2018-11-30 咸阳师范学院 A kind of saliency object detection method based on feature clustering and color contrast
CN109872300A (en) * 2018-12-17 2019-06-11 南京工大数控科技有限公司 A kind of vision significance detection method of friction plate open defect
CN109872300B (en) * 2018-12-17 2021-02-19 南京工大数控科技有限公司 Visual saliency detection method for appearance defects of friction plate
CN110009712A (en) * 2019-03-01 2019-07-12 华为技术有限公司 A kind of picture and text composition method and its relevant apparatus
US11790584B2 (en) 2019-03-01 2023-10-17 Huawei Technologies Co., Ltd. Image and text typesetting method and related apparatus thereof
CN110136110A (en) * 2019-05-13 2019-08-16 京东方科技集团股份有限公司 The detection method and device of photovoltaic module defect
CN110288597A (en) * 2019-07-01 2019-09-27 哈尔滨工业大学 Wireless capsule endoscope saliency detection method based on attention mechanism
CN110866460A (en) * 2019-10-28 2020-03-06 衢州学院 Method and device for detecting specific target area in complex scene video
CN113436091A (en) * 2021-06-16 2021-09-24 中国电子科技集团公司第五十四研究所 Object-oriented remote sensing image multi-feature classification method
CN113436091B (en) * 2021-06-16 2023-03-31 中国电子科技集团公司第五十四研究所 Object-oriented remote sensing image multi-feature classification method

Also Published As

Publication Number: CN104966285B (en), Publication Date: 2018-01-19

Similar Documents

Publication Publication Date Title
CN104966285A (en) Method for detecting saliency regions
CN104966085B (en) A kind of remote sensing images region of interest area detecting method based on the fusion of more notable features
CN104240244B (en) A kind of conspicuousness object detecting method based on communication mode and manifold ranking
CN104050471B (en) Natural scene character detection method and system
Chen et al. A novel color edge detection algorithm in RGB color space
CN103186904B (en) Picture contour extraction method and device
Chen et al. An improved license plate location method based on edge detection
CN103729842B (en) Based on the fabric defect detection method of partial statistics characteristic and overall significance analysis
CN103745468B (en) Significant object detecting method based on graph structure and boundary apriority
CN103048329B (en) A kind of road surface crack detection method based on active contour model
CN102006425A (en) Method for splicing video in real time based on multiple cameras
CN102496023B (en) Region of interest extraction method of pixel level
CN102902956B (en) A kind of ground visible cloud image identifying processing method
CN103020993B (en) Visual saliency detection method by fusing dual-channel color contrasts
CN104834912A (en) Weather identification method and apparatus based on image information detection
CN104077577A (en) Trademark detection method based on convolutional neural network
CN104463870A (en) Image salient region detection method
CN104299232B (en) SAR image segmentation method based on self-adaptive window directionlet domain and improved FCM
CN104036284A (en) Adaboost algorithm based multi-scale pedestrian detection method
CN107256547A (en) A kind of face crack recognition methods detected based on conspicuousness
CN104715244A (en) Multi-viewing-angle face detection method based on skin color segmentation and machine learning
CN103198479A (en) SAR image segmentation method based on semantic information classification
CN104574328A (en) Color image enhancement method based on histogram segmentation
CN104966054A (en) Weak and small object detection method in visible image of unmanned plane
CN104182983B (en) Highway monitoring video definition detection method based on corner features

Legal Events

Code         Description
C06 / PB01   Publication
C10 / SE01   Entry into force of request for substantive examination
GR01         Patent grant