CN104463870A - Image salient region detection method - Google Patents


Publication number
CN104463870A
Authority
CN
China
Prior art keywords
pixel, superpixel, background, image
Prior art date
Legal status
Pending
Application number
CN201410742968.1A
Other languages
Chinese (zh)
Inventors
Laiyun Qing (卿来云), Jun Miao (苗军), Jiamei Shuai (帅佳玫), Qingming Huang (黄庆明)
Current Assignee
University of Chinese Academy of Sciences
Original Assignee
University of Chinese Academy of Sciences
Priority date
Filing date
Publication date
Application filed by University of Chinese Academy of Sciences filed Critical University of Chinese Academy of Sciences
Priority to CN201410742968.1A priority Critical patent/CN104463870A/en
Publication of CN104463870A publication Critical patent/CN104463870A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation


Abstract

The invention provides an image salient region detection method. The method includes the following steps: background estimation is performed on an image that has been partitioned into superpixels; the contrast between each superpixel and the background is obtained from the color difference between each superpixel in the image and the superpixels in the background estimate; and a superpixel saliency map is obtained from the contrast between each superpixel and the background. The method is robust to background noise and the like in the image, and its computation is simple and fast.

Description

Image salient region detection method
Technical field
The present invention relates to the technical field of image processing, and in particular to an image salient region detection method.
Background technology
Of all human senses, vision supplies at least 70% of the external information we receive. Biological vision systems, including the human visual system, can automatically select and attend to the few "relevant" locations in a scene. For a given input image such as the one in Fig. 1(a), Fig. 1(b) shows the annotation of its salient region. When observing it, the human eye pays more attention to the red flower in the foreground and glances past the green leaves and other background areas. This ability of biological vision systems to rapidly focus attention on a few significant visual objects in a natural scene is called visual attention selection. It allows organisms to concentrate their limited perceptual and cognitive resources on the most relevant part of the data, so that they can process large volumes of signals quickly and effectively and survive in complex, changing environments.
If this mechanism could be introduced into image analysis, allocating computational resources preferentially to the salient regions most likely to attract an observer's attention, the efficiency of conventional image analysis methods would improve greatly. Image salient region detection was proposed and has developed on the basis of exactly this idea.
A salient region in an image is usually defined as a region that differs significantly from its neighborhood. The most common realization of this definition is the center-surround mechanism: a region whose center differs strongly from its surround is salient. The difference can be one of color, orientation, texture, and so on. The best-known salient region detection model, proposed by Itti, Koch, et al., first applies multi-scale, multi-orientation Gabor convolutions to the image to extract color, brightness, and orientation features, and then approximates the center-surround difference with a difference of Gaussians. In more recent research, Yichen Wei et al. proposed a method based on a background-distribution prior, estimating the saliency of an image patch from its geodesic distance to the background at the image border. Methods based on the background prior achieve good results on some natural images, as shown in Fig. 2(a) (original image) and Fig. 2(b) (saliency map). However, although the geodesic distance measure used in these methods has its rationale, it becomes inaccurate on images whose background varies widely or is richly textured, because contrast accumulates over textured regions, as shown in Fig. 2(c) (original image) and Fig. 2(d) (saliency map).
Summary of the invention
To address the problems in the prior art, the invention provides an image salient region detection method, comprising:
Step 1) performing background estimation on an image that has been partitioned into superpixels;
Step 2) obtaining the contrast between each superpixel and the background from the color difference between each superpixel in the image and the superpixels in said background estimate;
Step 3) obtaining a superpixel saliency map based on the contrast between each superpixel and the background.
In step 1) of the above method, the set of superpixels of the image containing pixels within n pixels of the image border is taken as the background estimate, where n is a positive integer.
In step 2) of the above method, the contrast between each superpixel in the image and the background is computed as follows:
Step 21) obtaining the set of La*b* color-space distances between this superpixel and all superpixels in the background estimate according to:
$$D_i = \{\, \|c_i - c_j\|_2 \mid \forall S_j \in \hat{B} \,\}$$
where $D_i$ denotes the set of La*b* color-space distances between superpixel $S_i$ and each superpixel in the background estimate, $c_i$ denotes the La*b* color of $S_i$, and $\hat{B}$ denotes the background estimate;
Step 22) sorting the La*b* color-space distances in this set in ascending order;
Step 23) taking the sum of the k smallest La*b* color-space distances in this set as the contrast between this superpixel and the background, where k is a positive integer.
In the above method, step 3) further comprises:
obtaining the background connectivity of each superpixel from the geodesic distance between each superpixel in the image and the superpixels in the background estimate;
for each superpixel in the image, linearly combining its contrast with the background and its background connectivity;
obtaining the superpixel saliency map from the result of the linear combination.
In the above method, for each superpixel in the image, the minimum geodesic distance between this superpixel and its color k-nearest neighbors in the background estimate is taken as the background connectivity of this superpixel, where k is a positive integer. The background connectivity of each superpixel in the image is computed as follows:
an undirected weighted graph is built for the image, the nodes of which comprise the superpixels in the image and a virtual background node B, and the edge set E of which comprises internal edges connecting adjacent superpixels and external edges connecting the color k-nearest neighbors of a superpixel in the background estimate to the virtual background node B;
the background connectivity of each superpixel is computed according to:
$$\mathrm{Connectivity}(S_i) = \min_{S_1 = S_i,\, S_2,\, \ldots,\, S_n = B} \sum_{j=1}^{n-1} \mathrm{weight}(S_j, S_{j+1}), \quad (S_j, S_{j+1}) \in E$$
where the weight of two adjacent superpixels $S_j$, $S_{j+1}$ is their La*b* color-space distance:
$$\mathrm{weight}(S_j, S_{j+1}) = \|c_j - c_{j+1}\|_2$$
where $c_j$ denotes the La*b* color of superpixel $S_j$, and the weight between the virtual background node B and any superpixel connected to it is 0.
In the above method, for each superpixel in the image, its contrast with the background and its background connectivity are linearly combined according to:
$$\mathrm{Saliency}(S_i) = \mathrm{Contrast}(S_i) + \alpha \cdot \mathrm{Connectivity}(S_i)$$
where $\mathrm{Saliency}(S_i)$ denotes the saliency of superpixel $S_i$, $\mathrm{Contrast}(S_i)$ denotes its contrast with the background, $\mathrm{Connectivity}(S_i)$ denotes its background connectivity, and α is the weight of the linear combination, a number greater than 0 and less than 1.
The above method may further comprise:
Step 4) processing the superpixel saliency map to obtain a pixel saliency map. The saliency of each pixel in the image is computed according to the following formula, yielding the pixel saliency map:
$$\mathrm{Saliency}(I_p) = \sum_{j=1}^{N} w_{pj}\, \mathrm{Saliency}(S_j)$$
where $\mathrm{Saliency}(I_p)$ denotes the saliency of pixel $I_p$, $\mathrm{Saliency}(S_j)$ denotes the saliency of superpixel $S_j$, and the weight $w_{pj}$ is given by:
$$w_{pj} = \frac{1}{Z_i} \exp\!\left(-\frac{1}{2}\left(\alpha \|\tilde{c}_p - c_j\|_2 + \beta \|\tilde{L}_p - L_j\|_2\right)\right)$$
where $\tilde{c}_p$ and $c_j$ denote the La*b* colors of pixel $I_p$ and superpixel $S_j$ respectively, $\tilde{L}_p$ and $L_j$ denote their coordinates in the image, and α and β are weights.
In the above method, before step 1) the method may further comprise:
Step 0) performing texture blurring on the image.
The above method may further comprise:
Step 5) binarizing the obtained saliency map.
The present invention achieves the following beneficial effects:
1. Detection accuracy. Unlike previous methods that locate salient objects using global or local contrast alone, the present invention fully exploits the prior information on background distribution and detects the foreground through both its contrast with the background and its connectivity to it, greatly improving detection accuracy.
2. Robustness. Suitably increasing k when computing the background contrast makes the method more robust to noise in the background estimate, and the background connectivity resolves the problem caused by some foreground colors appearing extensively in the background.
Brief description of the drawings
Figs. 1(a) and 1(b) show an example image and its saliency map;
Figs. 2(a) and 2(b) show another example image and its saliency map;
Figs. 2(c) and 2(d) show a further example image and its saliency map;
Fig. 3 is a flowchart of the image salient region detection method according to an embodiment of the invention;
Fig. 4(a) shows an example original image;
Fig. 4(b) is the texture-blurred image obtained by texture-blurring the original image of Fig. 4(a);
Fig. 4(c) shows the result of superpixel segmentation of the texture-blurred image of Fig. 4(b);
Fig. 4(d) shows the result of background estimation on the image of Fig. 4(c);
Fig. 4(e) shows the background contrast of each superpixel in Fig. 4(c);
Fig. 4(f) shows the background connectivity of each superpixel in Fig. 4(c);
Fig. 4(g) is the superpixel saliency map obtained by linearly combining the background contrast of Fig. 4(e) with the background connectivity of Fig. 4(f);
Fig. 4(h) is the final saliency map obtained after smooth up-sampling of the superpixel saliency map of Fig. 4(g);
Fig. 5(a) shows the PR curves of the method of the invention and existing methods on the ASD database;
Fig. 5(b) shows the evaluation results under adaptive-threshold segmentation for the method of the invention and existing methods on the ASD database;
Fig. 6(a) shows the PR curves of the method of the invention and existing methods on the SED2 database;
Fig. 6(b) shows the evaluation results under adaptive-threshold segmentation for the method of the invention and existing methods on the SED2 database.
Detailed description of the embodiments
The invention is described below with reference to the drawings and specific embodiments. It should be understood that the specific embodiments described here serve only to explain the invention and are not intended to limit it.
According to one embodiment of the invention, an image salient region detection method is provided.
In general, the method comprises: performing background estimation on an image that has been partitioned into superpixels; obtaining the background contrast of each superpixel from the color difference between each superpixel in the image and the superpixels in the background estimate; and obtaining a superpixel saliency map from the background contrast of each superpixel.
Each step of the method is described in detail below with reference to Fig. 3.
Step 1: texture blurring of the input image
An input image I (of width W and height H, say) may contain high-frequency texture variation in its background — small variations to which the human eye is insensitive but which can accumulate and distort the subsequent computation. The input image can therefore first be texture-blurred: for example, a structure-extraction method is applied to suppress textured regions, yielding a texture-suppressed (texture-blurred) image. Smoothing the texture not only better matches human perception, it also makes the subsequent superpixel segmentation more uniform and natural.
Fig. 4(a) shows an original input image; Fig. 4(b) shows the result after texture blurring with a structure-extraction method.
Note that this step is optional: it can be skipped for images with little or no high-frequency texture variation, or for input images that have already been texture-blurred.
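As an illustration of this optional step, any texture-suppressing smoother can stand in for the structure-extraction method. The sketch below is a minimal NumPy stand-in — repeated 3×3 box filtering with wrap-around edges, an assumption made for simplicity, not the structure-extraction method the text refers to:

```python
import numpy as np

def texture_blur(img, iters=3):
    """Suppress high-frequency texture by repeated 3x3 box filtering.
    Stand-in for the structure-extraction step; wrap-around edges
    keep the sketch dependency-free."""
    out = np.asarray(img, dtype=float)
    for _ in range(iters):
        # average of the 9 shifted copies = one 3x3 box-filter pass
        out = sum(np.roll(np.roll(out, dy, axis=0), dx, axis=1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return out
```

Running it on a checkerboard-like texture flattens the oscillation while preserving the mean level, which is the intended effect before segmentation.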
Step 2: superpixel segmentation of the texture-blurred image
After the texture-suppressed image is obtained in step 1, an existing superpixel segmentation algorithm is used to partition it into visually homogeneous superpixels (superpixels of uniform color), which reduces the computational complexity of the following steps.
For example, Fig. 4(c) shows the superpixel segmentation of the texture-blurred image of Fig. 4(b).
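The text leaves the choice of superpixel algorithm open (SLIC is a common choice). A minimal, dependency-free sketch of the underlying idea — k-means in joint (color, weighted position) space seeded on a regular grid, without real SLIC's local search window — might look like:

```python
import numpy as np

def simple_superpixels(img, grid=4, iters=5, m=1.0):
    """SLIC-flavored superpixels: k-means on (color, m-weighted position)
    features with cluster centers seeded on a regular grid. Unlike real
    SLIC there is no local search window, so this is only a sketch."""
    h, w = img.shape[:2]
    ys = np.linspace(0, h - 1, grid).astype(int)
    xs = np.linspace(0, w - 1, grid).astype(int)
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pos = np.stack([yy, xx], axis=-1).reshape(h * w, 2) / max(h, w)
    feats = np.concatenate([img.reshape(h * w, -1).astype(float), m * pos], axis=1)
    centers = feats[(ys[:, None] * w + xs[None, :]).ravel()]
    for _ in range(iters):
        # assign every pixel to the nearest center in feature space
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        for c in range(len(centers)):  # recompute centers
            members = feats[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return labels.reshape(h, w)
```

The parameter `m` balances color homogeneity against spatial compactness, analogous to SLIC's compactness parameter.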
Step 3: background estimation from the superpixel segmentation using the background prior
In general, the background of the image is estimated from the background-distribution prior, yielding a background estimate represented as a set of superpixels.
According to the background prior for salient images, the border of the image is very likely to be background. The background estimate is therefore defined here as the set of superpixels containing pixels within n pixels of the image border:
$$\hat{B} = \{\, S_i \mid \min\{x,\, y,\, |W - x|,\, |H - y|\} \le n,\ \exists I_p \in S_i \,\} \qquad (1)$$
where $\hat{B}$ denotes the background estimate, $S_i$ the i-th superpixel, $(x, y)$ the coordinates of pixel $I_p$ in the image, and W and H the width and height of the input image. n is a positive integer; preferably n = 10.
For the embodiment of Fig. 4(c), the background estimate is shown in Fig. 4(d), where the darkest part represents the background estimate.
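Equation (1) can be sketched directly, assuming the segmentation is given as an (H, W) integer label map:

```python
import numpy as np

def estimate_background(labels, n=10):
    """Eq. (1): the background estimate B-hat is the set of superpixels
    containing at least one pixel within n pixels of the image border.
    `labels` is the (H, W) label map from superpixel segmentation;
    n = 10 is the preferred value given in the text."""
    border = np.zeros(labels.shape, dtype=bool)
    border[:n, :] = True
    border[-n:, :] = True
    border[:, :n] = True
    border[:, -n:] = True
    return set(int(s) for s in np.unique(labels[border]))
```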
Step 4: computing the background contrast of each superpixel
As is known to those skilled in the art, the foreground of an image usually differs considerably from the background, while background regions differ relatively little from the background estimate of the previous step. A measure of the difference between a superpixel and the image background — the background contrast of the superpixel — is designed accordingly.
In one embodiment, to compute the background contrast of a superpixel (its contrast with the background), the La*b* color-space distances between this superpixel and each superpixel in the background estimate are first computed according to:
$$D_i = \{\, \|c_i - c_j\|_2 \mid \forall S_j \in \hat{B} \,\} \qquad (2)$$
where $D_i$ denotes the set of La*b* color-space distances between superpixel $S_i$ and each superpixel in the background estimate, and $c_i$ denotes the La*b* color of $S_i$.
The distances in $D_i$ are then sorted in ascending order:
$$\tilde{D}_i = \langle d_{i1}, d_{i2}, \ldots, d_{iM} \rangle, \quad d_{i1} \le d_{i2} \le \cdots \le d_{iM} \qquad (3)$$
where $\tilde{D}_i$ denotes the sorted distances between $S_i$ and the superpixels in the background estimate; $d_{im}$, m = 1, …, M, is the m-th smallest distance, and M is the number of superpixels in the background estimate.
For a superpixel belonging to the background, a superpixel of very similar color is easily found in the background estimate, so the first k terms of $\tilde{D}_i$ approach zero; for a superpixel belonging to the foreground, it is precisely its color difference from the background that makes it a salient object, so the first k terms of $\tilde{D}_i$ are relatively large. Based on this color difference, in one embodiment the background contrast of a superpixel is computed according to:
$$\mathrm{Contrast}(S_i) = \sum_{j=1}^{k} d_{ij} \qquad (4)$$
where $\mathrm{Contrast}(S_i)$ denotes the background contrast of superpixel $S_i$: the sum of the La*b* color-space distances to the k superpixels in the background estimate closest to it in color (its color k-nearest neighbors). k is a positive integer; preferably k = 5.
The contrast with the background estimate is computed in this way for every superpixel in the image. The background contrasts are then linearly stretched, yielding the corresponding superpixel saliency map. For example, Fig. 4(e) shows the background contrast of each superpixel of Fig. 4(d) as a superpixel saliency map; the larger a superpixel's background contrast value, the lower its brightness in Fig. 4(e).
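Equations (2)-(4) can be sketched directly, assuming each superpixel is summarized by its mean La*b* color:

```python
import numpy as np

def background_contrast(colors, bg_ids, k=5):
    """Eqs. (2)-(4): for every superpixel, sort its La*b* distances to the
    background-estimate superpixels in ascending order and sum the k
    smallest. `colors` is an (N, 3) array of mean La*b* colors; `bg_ids`
    lists the indices in the background estimate; k = 5 is the preferred
    value from the text."""
    bg = colors[np.asarray(sorted(bg_ids))]                          # (M, 3)
    d = np.linalg.norm(colors[:, None, :] - bg[None, :, :], axis=2)  # Eq. (2)
    d.sort(axis=1)                                                   # Eq. (3)
    return d[:, : min(k, d.shape[1])].sum(axis=1)                    # Eq. (4)
```

As the text observes, background superpixels get near-zero sums (a close color match exists in the estimate) while foreground superpixels get large ones.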
The above detection method is effective for the salient regions in most images, and by suitably increasing the value of k it is also fairly robust to a small amount of noise in the background estimate. However, when foreground colors appear extensively in the background estimate, that part of the foreground may be mis-detected as background, reducing detection accuracy.
To address this problem, according to one embodiment of the invention, the image salient region detection method further comprises the following step:
Step 5: computing the background connectivity of each superpixel
Observation of salient foreground objects shows that transitions within the background of a natural image are generally smooth, whereas the transition from foreground to background changes relatively sharply. This change can be understood as the boundary enclosing the object, and the geodesic distance from a superpixel to the background estimate can be used to measure it.
In one embodiment, the minimum geodesic distance between a superpixel and its k color-nearest superpixels in the background estimate is taken as the superpixel's background connectivity, computed as follows:
For the input image, an undirected weighted graph G = {V, E} is built. The nodes of G are the superpixels $\{S_i\}$ of the input image plus a virtual background node B, i.e. $V = \{S_i\} \cup \{B\}$. G has two kinds of edges: internal edges connecting adjacent superpixels, and external edges connecting the color k-nearest neighbors of a superpixel in the background estimate to the virtual background node B, i.e. $E = \{(P_i, P_j) \mid P_i \text{ adjacent to } P_j\} \cup \{(P_i, B) \mid P_i \text{ is a color k-nearest neighbor of the current superpixel in the background estimate}\}$. Preferably k = 5.
The geodesic distance of a superpixel $S_i$ is defined as the edge weight accumulated along the shortest path on G from $S_i$ to the background node B; the background connectivity $\mathrm{Connectivity}(S_i)$ is then:
$$\mathrm{Connectivity}(S_i) = \min_{S_1 = S_i,\, S_2,\, \ldots,\, S_n = B} \sum_{j=1}^{n-1} \mathrm{weight}(S_j, S_{j+1}), \quad (S_j, S_{j+1}) \in E \qquad (5)$$
where the weight of two adjacent superpixels $S_j$, $S_{j+1}$ is their La*b* color-space distance, $\mathrm{weight}(S_j, S_{j+1}) = \|c_j - c_{j+1}\|_2$, with $c_j$ the La*b* color of $S_j$; the weight between the virtual background node B and any superpixel connected to it is 0.
After the background connectivity of each superpixel is obtained, its values can be linearly stretched for display.
For example, for the embodiment of Fig. 4(d), Fig. 4(f) shows the background connectivity of each superpixel; a brighter color indicates a larger connectivity value.
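Equation (5) is a shortest-path problem. The sketch below simplifies the external edges — it connects every background-estimate superpixel to the virtual node B with weight 0, rather than building the per-superpixel color k-nearest-neighbor edges of the text — which turns the computation into a multi-source Dijkstra seeded at the background superpixels:

```python
import heapq
import numpy as np

def background_connectivity(colors, adjacency, bg_ids):
    """Eq. (5), simplified: geodesic (shortest-path) distance from each
    superpixel to a virtual background node B. Internal edges join
    adjacent superpixels with weight equal to their La*b* distance; every
    background superpixel gets a zero-weight edge to B (an assumption in
    place of the per-superpixel k-NN external edges), so the computation
    is a multi-source Dijkstra seeded at the background ids."""
    n = len(colors)
    graph = {i: [] for i in range(n)}
    for i, j in adjacency:
        w = float(np.linalg.norm(np.asarray(colors[i], float) -
                                 np.asarray(colors[j], float)))
        graph[i].append((j, w))
        graph[j].append((i, w))
    dist = [float("inf")] * n
    heap = []
    for b in bg_ids:                      # zero-weight external edges to B
        dist[b] = 0.0
        heapq.heappush(heap, (0.0, b))
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale heap entry
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist
```

A foreground superpixel must cross its sharp boundary to reach the background, so it accumulates a large weight, while background superpixels reach B almost for free.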
Step 6: obtaining the superpixel saliency map
After the background contrast and background connectivity of each superpixel are computed, the saliency of each superpixel is obtained by linearly combining the two measures, and the superpixel saliency map follows from these saliency values:
$$\mathrm{Saliency}(S_i) = \mathrm{Contrast}(S_i) + \alpha \cdot \mathrm{Connectivity}(S_i) \qquad (6)$$
where $\mathrm{Saliency}(S_i)$ is the saliency of superpixel $S_i$, $\mathrm{Contrast}(S_i)$ is its background contrast from Eq. (4), and $\mathrm{Connectivity}(S_i)$ is its background connectivity from Eq. (5); α is the weight of the linear combination, a number greater than 0 and less than 1, preferably α ∈ [0.3, 0.6].
For the background contrast and background connectivity shown in Figs. 4(e) and 4(f), the result of linearly combining the two measures is shown in Fig. 4(g).
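Equation (6) can be sketched as follows; whether each cue is normalized before combination is not stated in the text, so stretching both to [0, 1] is an assumption here, as is the midpoint choice α = 0.5:

```python
import numpy as np

def stretch(x):
    """Linear stretch to [0, 1], as used when visualizing each cue."""
    x = np.asarray(x, dtype=float)
    rng = x.max() - x.min()
    return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

def superpixel_saliency(contrast, connectivity, alpha=0.5):
    """Eq. (6): Saliency = Contrast + alpha * Connectivity, with both cues
    stretched to [0, 1] first (an assumption); alpha = 0.5 is the midpoint
    of the preferred range [0.3, 0.6]."""
    return stretch(contrast) + alpha * stretch(connectivity)
```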
Step 7: processing the superpixel saliency map into the final pixel saliency map
Because the saliency map obtained after the combination above still has the superpixel as its basic unit, a more accurate saliency map represented per pixel can be obtained by post-processing the superpixel saliency map using the color and position of each pixel in the original image, for example with a smoothing operation akin to up-sampling that yields each pixel's saliency:
$$\mathrm{Saliency}(I_p) = \sum_{j=1}^{N} w_{pj}\, \mathrm{Saliency}(S_j) \qquad (7)$$
where $\mathrm{Saliency}(I_p)$ is the saliency of pixel $I_p$, $\mathrm{Saliency}(S_j)$ is the saliency of superpixel $S_j$, and the smoothing weight $w_{pj}$ is:
$$w_{pj} = \frac{1}{Z_i} \exp\!\left(-\frac{1}{2}\left(\alpha \|\tilde{c}_p - c_j\|_2 + \beta \|\tilde{L}_p - L_j\|_2\right)\right) \qquad (8)$$
where $\tilde{c}_p$ and $c_j$ are the La*b* color vectors of pixel $I_p$ and superpixel $S_j$ respectively, $\tilde{L}_p$ and $L_j$ are their coordinate vectors in the image, and α and β weight color and position respectively; preferably α = 1/30 and β = 1/30.
For the superpixel saliency map of Fig. 4(g), Fig. 4(h) shows the final pixel saliency map obtained after this post-processing.
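Equations (7)-(8) can be sketched as a dense weighted average; reading the norms in Eq. (8) as squared distances is an assumption (a common Gaussian-weight formulation):

```python
import numpy as np

def pixel_saliency(img, sp_colors, sp_centers, sp_sal, alpha=1/30, beta=1/30):
    """Eqs. (7)-(8): smooth up-sampling from superpixel to pixel saliency.
    Each pixel averages all superpixel saliencies with Gaussian weights on
    color and position differences (squared distances assumed);
    alpha = beta = 1/30 as preferred in the text."""
    h, w = img.shape[:2]
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pos = np.stack([yy, xx], axis=-1).reshape(-1, 2).astype(float)   # (P, 2)
    col = img.reshape(-1, img.shape[-1]).astype(float)               # (P, 3)
    dc = ((col[:, None, :] - sp_colors[None, :, :]) ** 2).sum(-1)    # color term
    dl = ((pos[:, None, :] - sp_centers[None, :, :]) ** 2).sum(-1)   # position term
    wgt = np.exp(-0.5 * (alpha * dc + beta * dl))
    wgt /= wgt.sum(axis=1, keepdims=True)                            # 1/Z normalization
    return (wgt @ np.asarray(sp_sal, float)).reshape(h, w)
```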
In a further embodiment, the obtained saliency map can be processed further according to the specific application, for example binarized, to segment the salient region of the image.
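A sketch of this optional binarization; the text does not fix a threshold, so twice the mean saliency — a common adaptive choice in the saliency literature — is assumed here:

```python
import numpy as np

def binarize(sal):
    """Step 5 sketch: adaptive-threshold binarization of the saliency map.
    The threshold of twice the mean saliency is an assumption, not a value
    stated in the text."""
    sal = np.asarray(sal, dtype=float)
    return (sal >= 2.0 * sal.mean()).astype(np.uint8)
```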
The above method builds on the fact that the human visual system is insensitive to small variations in an image (such as background texture and other high-frequency information), uses the prior on background distribution in the image, and combines background contrast with background connectivity according to the visual characteristics of salient objects to detect the salient regions of the image; it effectively detects salient objects and reduces the accumulation phenomenon.
To verify the effectiveness of the salient region detection method provided by the invention, the inventors ran the evaluation procedure customary for the salient region detection problem on the ASD and SED2 databases. The experiments use two evaluation metrics: the precision-recall (PR) curve, and a comparison of the precision, recall, and F-measure obtained by adaptive-threshold segmentation of the saliency maps.
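The adaptive-threshold evaluation can be sketched as follows; the value β² = 0.3 is the convention in the saliency-detection literature and is an assumption here, since the text does not state the value it uses:

```python
import numpy as np

def precision_recall_f(pred, gt, beta2=0.3):
    """Precision, recall, and F-measure of a binarized saliency map against
    a ground-truth mask. beta^2 = 0.3 (weighting precision more heavily) is
    the usual convention in the saliency literature; the text does not state
    its value, so it is an assumption."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    tp = np.logical_and(pred, gt).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    denom = beta2 * precision + recall
    f = (1 + beta2) * precision * recall / denom if denom > 0 else 0.0
    return float(precision), float(recall), float(f)
```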
Experiments and evaluation on the ASD database:
The ASD database is a subset of the MSRA database containing 1000 test images, each with a manual pixel-level saliency annotation. On the ASD database, the salient region detection method proposed by the invention, which fuses background contrast and background connectivity, was compared with six state-of-the-art salient region detection methods: the FT, RC, GS, SF, GC, and MC methods; the method provided by the invention is abbreviated TB. The evaluation of the saliency maps produced by the various methods on the ASD database is shown in Figs. 5(a) and 5(b). As the figures show, the detection method provided by the invention achieves results comparable with current methods.
Experiments and evaluation on the SED2 database:
The SED2 database contains 100 images, each of which includes two salient objects, together with accurately hand-annotated foreground pixels. On the SED2 database, the detection method provided by the invention was compared with five state-of-the-art salient region detection methods: the FT, RC, SF, GC, and MC methods; the method provided by the invention is abbreviated TB. The evaluation of the saliency maps produced by the various methods on the SED2 database is shown in Figs. 6(a) and 6(b). As the figures show, the method provided by the invention achieves the best experimental results on this database; its F-measure after adaptive-threshold segmentation exceeds the best score among existing methods by 3.6 percentage points.
It should be noted and understood that various modifications and improvements can be made to the invention described in detail above without departing from the spirit and scope of the invention as defined by the appended claims. The scope of the claimed technical solution is therefore not limited by any particular exemplary teaching given here.

Claims (11)

1. An image salient region detection method, comprising:
Step 1) performing background estimation on an image that has been partitioned into superpixels;
Step 2) obtaining the contrast between each superpixel and the background from the color difference between each superpixel in the image and the superpixels in said background estimate;
Step 3) obtaining a superpixel saliency map based on the contrast between each superpixel and the background.
2. The method according to claim 1, wherein in step 1) the set of superpixels of said image containing pixels within n pixels of the image border is taken as the background estimate, where n is a positive integer.
3. The method according to claim 1 or 2, wherein in step 2) the contrast between each superpixel in the image and the background is computed as follows:
Step 21) obtaining the set of La*b* color-space distances between this superpixel and all superpixels in said background estimate according to:
$$D_i = \{\, \|c_i - c_j\|_2 \mid \forall S_j \in \hat{B} \,\}$$
where $D_i$ denotes the set of La*b* color-space distances between superpixel $S_i$ and each superpixel in the background estimate, $c_i$ denotes the La*b* color of $S_i$, and $\hat{B}$ denotes the background estimate;
Step 22) sorting the La*b* color-space distances in this set in ascending order;
Step 23) taking the sum of the k smallest La*b* color-space distances in this set as the contrast between this superpixel and the background, where k is a positive integer.
4. The method according to claim 1 or 2, wherein step 3) further comprises:
obtaining the background connectivity of each superpixel from the geodesic distance between each superpixel in said image and the superpixels in said background estimate;
for each superpixel in said image, linearly combining its contrast with the background and its background connectivity;
obtaining the superpixel saliency map from the result of the linear combination.
5. The method according to claim 4, wherein for each superpixel in said image, the minimum geodesic distance between this superpixel and its color k-nearest neighbors in said background estimate is taken as the background connectivity of this superpixel, where k is a positive integer.
6. The method according to claim 5, wherein the background connectivity of each superpixel in said image is computed as follows:
an undirected weighted graph is built for said image, the nodes of which comprise the superpixels in said image and a virtual background node B, and the edge set E of which comprises internal edges connecting adjacent superpixels and external edges connecting the color k-nearest neighbors of a superpixel in said background estimate to the virtual background node B;
the background connectivity of each superpixel is computed according to:
$$\mathrm{Connectivity}(S_i) = \min_{S_1 = S_i,\, S_2,\, \ldots,\, S_n = B} \sum_{j=1}^{n-1} \mathrm{weight}(S_j, S_{j+1}), \quad (S_j, S_{j+1}) \in E$$
where the weight of two adjacent superpixels $S_j$, $S_{j+1}$ is their La*b* color-space distance:
$$\mathrm{weight}(S_j, S_{j+1}) = \|c_j - c_{j+1}\|_2$$
where $c_j$ denotes the La*b* color of superpixel $S_j$, and the weight between the virtual background node B and any superpixel connected to it is 0.
7. The method according to claim 4, wherein for each superpixel in said image, its contrast with the background and its background connectivity are linearly combined according to:
$$\mathrm{Saliency}(S_i) = \mathrm{Contrast}(S_i) + \alpha \cdot \mathrm{Connectivity}(S_i)$$
where $\mathrm{Saliency}(S_i)$ denotes the saliency of superpixel $S_i$, $\mathrm{Contrast}(S_i)$ denotes its contrast with the background, $\mathrm{Connectivity}(S_i)$ denotes its background connectivity, and α is the weight of the linear combination, a number greater than 0 and less than 1.
8. The method according to claim 1 or 2, further comprising:
Step 4), processing the super-pixel saliency map to obtain a pixel-level saliency map.
9. The method according to claim 8, wherein the saliency of each pixel in the image is computed according to the following formula, yielding the pixel-level saliency map:
$$\mathrm{Saliency}(I_p) = \sum_{j=1}^{N} w_{pj}\, \mathrm{Saliency}(S_j)$$
wherein Saliency(I_p) denotes the saliency of pixel I_p, Saliency(S_j) denotes the saliency of super-pixel S_j, N is the number of super-pixels, and the weight w_pj is expressed as follows:
$$w_{pj} = \frac{1}{Z_p} \exp\left(-\frac{1}{2}\left(\alpha \,\|\tilde{c}_p - c_j\|^2 + \beta \,\|\tilde{L}_p - L_j\|^2\right)\right)$$
wherein c̃_p and c_j denote the L*a*b* colors of pixel I_p and of super-pixel S_j respectively, L̃_p and L_j denote the coordinates of pixel I_p and of super-pixel S_j in the image, Z_p is a normalization constant, and α and β are weights.
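Claim 9 upsamples the super-pixel saliency to pixel resolution with a Gaussian weighting over color and spatial distance. A minimal sketch for a single pixel, assuming super-pixels are summarized by mean color and centroid; the α and β values, like all names here, are illustrative placeholders, not values fixed by the patent:

```python
import math

def pixel_saliency(pixel_color, pixel_pos, sp_colors, sp_positions,
                   sp_saliency, alpha=1 / 30.0, beta=1 / 30.0):
    """Gaussian-weighted average of super-pixel saliencies (claim 9).

    pixel_color, pixel_pos: L*a*b* color and (x, y) position of pixel I_p.
    sp_colors, sp_positions, sp_saliency: per-super-pixel mean colors,
        centroids, and saliency values.
    """
    weights = []
    for c, L in zip(sp_colors, sp_positions):
        dc = sum((a - b) ** 2 for a, b in zip(pixel_color, c))  # color term
        dl = sum((a - b) ** 2 for a, b in zip(pixel_pos, L))    # spatial term
        weights.append(math.exp(-0.5 * (alpha * dc + beta * dl)))
    Z = sum(weights)  # normalization constant Z_p of the claim
    return sum(w * s for w, s in zip(weights, sp_saliency)) / Z
```

A pixel that coincides in color and position with one super-pixel and is far from all others inherits, to close approximation, that super-pixel's saliency, which is the intended smoothing behavior.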
10. The method according to claim 1 or 2, further comprising, before step 1):
Step 0), applying texture blurring to the image.
11. The method according to claim 8, further comprising:
Step 5), binarizing the obtained saliency map.
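Claim 11 does not fix a thresholding rule. A common adaptive choice, shown here purely as an assumption, is to threshold at twice the mean saliency; the function name and the default rule are illustrative:

```python
def binarize(saliency_map, threshold=None):
    """Binarize a pixel-level saliency map (claim 5 of the dependent chain).

    saliency_map: 2-D list of saliency values.
    threshold: cut-off value; if None, twice the mean saliency is used
        (an assumption -- the claim does not specify the rule).
    """
    flat = [v for row in saliency_map for v in row]
    if threshold is None:
        threshold = 2.0 * sum(flat) / len(flat)
    return [[1 if v >= threshold else 0 for v in row] for row in saliency_map]
```

Other standard choices, such as Otsu's method on the saliency histogram, would fit the claim equally well.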
CN201410742968.1A 2014-12-05 2014-12-05 Image salient region detection method Pending CN104463870A (en)


Publications (1)

Publication Number Publication Date
CN104463870A true CN104463870A (en) 2015-03-25

Family

ID=52909852


Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104809729A (en) * 2015-04-29 2015-07-29 山东大学 Robust automatic image salient region segmenting method
CN105654475A (en) * 2015-12-25 2016-06-08 中国人民解放军理工大学 Image saliency detection method and image saliency detection device based on distinguishable boundaries and weight contrast
CN105913427A (en) * 2016-04-12 2016-08-31 福州大学 Machine learning-based noise image saliency detecting method
CN105931241A (en) * 2016-04-22 2016-09-07 南京师范大学 Automatic marking method for natural scene image
CN106127197A (en) * 2016-04-09 2016-11-16 北京交通大学 A kind of saliency object detection method based on notable tag sorting
CN106780505A (en) * 2016-06-20 2017-05-31 大连民族大学 Super-pixel well-marked target detection algorithm based on region energy
CN106919950A (en) * 2017-01-22 2017-07-04 山东大学 Probability density weights the brain MR image segmentation of geodesic distance
CN107146258A (en) * 2017-04-26 2017-09-08 清华大学深圳研究生院 A kind of detection method for image salient region
CN107194886A (en) * 2017-05-03 2017-09-22 深圳大学 A kind of dust detection method and device for camera sensor
CN107527350A (en) * 2017-07-11 2017-12-29 浙江工业大学 A kind of solid waste object segmentation methods towards visual signature degraded image
CN107730515A (en) * 2017-10-12 2018-02-23 北京大学深圳研究生院 Panoramic picture conspicuousness detection method with eye movement model is increased based on region
CN108573506A (en) * 2017-03-13 2018-09-25 北京贝塔科技股份有限公司 Image processing method and system
CN109448015A (en) * 2018-10-30 2019-03-08 河北工业大学 Image based on notable figure fusion cooperates with dividing method
CN110084782A (en) * 2019-03-27 2019-08-02 西安电子科技大学 Full reference image quality appraisement method based on saliency detection
CN111091129A (en) * 2019-12-24 2020-05-01 沈阳建筑大学 Image salient region extraction method based on multi-color characteristic manifold sorting
CN112037109A (en) * 2020-07-15 2020-12-04 北京神鹰城讯科技股份有限公司 Improved image watermarking method and system based on saliency target detection

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103136766A (en) * 2012-12-28 2013-06-05 上海交通大学 Object significance detecting method based on color contrast and color distribution
CN103208115A (en) * 2013-03-01 2013-07-17 上海交通大学 Detection method for salient regions of images based on geodesic line distance
CN104103082A (en) * 2014-06-06 2014-10-15 华南理工大学 Image saliency detection method based on region description and priori knowledge


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU MINGMEI: "Region-based super-pixel saliency detection", China Master's Theses Full-text Database, Information Science and Technology *
WANG FEI: "Visual saliency detection based on context and background", China Master's Theses Full-text Database, Information Science and Technology *



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20150325