A region-based image saliency map extraction method
Technical field
The present invention relates to an image signal processing method, and in particular to a region-based image saliency map extraction method.
Background art
In human visual information reception and processing, because brain resources are limited and pieces of external information differ in importance, the human brain does not treat environmental information uniformly but processes it selectively. When viewing an image or a video clip, people do not distribute attention evenly over every region of the image; instead they attend more strongly to certain salient regions. How to detect and extract the highly salient regions of images and video is therefore an important research topic in computer vision and content-based video retrieval.
Existing saliency-map models are selective-attention models that simulate the visual attention mechanism of living organisms. They compute, for each pixel, the contrast with the surrounding background in terms of color, brightness and orientation, and assemble the saliency values of all pixels into a saliency map. Such methods, however, cannot extract image saliency information well, because pixel-based salient features do not reflect the semantic salience perceived by the human eye, whereas region-based salient features can effectively improve the stability and accuracy of extraction. Consequently, how to segment an image into regions, how to extract the features of each region, how to describe each region's salient features, and how to measure the saliency of a region itself and the saliency between regions are all problems that region-based saliency map extraction needs to study and solve.
Summary of the invention
The technical problem to be solved by the present invention is to provide a region-based image saliency map extraction method that conforms to salient semantic features and has high extraction stability and accuracy.
The technical scheme adopted by the present invention to solve the above technical problem is a region-based image saliency map extraction method, characterized by comprising the following steps:
1. Denote the source image to be processed as {I_i(x,y)}, where i = 1, 2, 3, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W is the width of {I_i(x,y)}, H is its height, and I_i(x,y) is the color value of the i-th component of the pixel at coordinate (x,y) in {I_i(x,y)}; the 1st component is the R component, the 2nd component the G component and the 3rd component the B component;
2. First obtain the quantized image of {I_i(x,y)} and the global color histogram of the quantized image; then obtain from the quantized image the color category of each pixel in {I_i(x,y)}; finally, from the global color histogram and the per-pixel color categories, obtain the saliency map of {I_i(x,y)} based on the global color histogram, denoted {HS(x,y)}, where HS(x,y) is the pixel value at coordinate (x,y) in {HS(x,y)} and also the global-color-histogram-based saliency value of the pixel at coordinate (x,y) in {I_i(x,y)};
3. Use a superpixel segmentation technique to divide {I_i(x,y)} into M non-overlapping regions, and re-express {I_i(x,y)} as the set of these M regions, denoted {SP_h}; then compute the similarity between every pair of regions in {SP_h}, denoting the similarity between the p-th and q-th regions as Sim(SP_p, SP_q), where M ≥ 1, SP_h is the h-th region in {SP_h}, 1 ≤ h ≤ M, 1 ≤ p ≤ M, 1 ≤ q ≤ M, p ≠ q, and SP_p and SP_q are the p-th and q-th regions in {SP_h};
4. From the pairwise region similarities in {SP_h}, obtain the saliency map of {I_i(x,y)} based on region color contrast, denoted {NGC(x,y)}, where NGC(x,y) is the pixel value at coordinate (x,y) in {NGC(x,y)};
5. From the pairwise region similarities in {SP_h}, obtain the saliency map of {I_i(x,y)} based on region spatial sparsity, denoted {NSS(x,y)}, where NSS(x,y) is the pixel value at coordinate (x,y) in {NSS(x,y)};
6. Fuse the global-color-histogram-based saliency map {HS(x,y)}, the region-color-contrast-based saliency map {NGC(x,y)} and the region-spatial-sparsity-based saliency map {NSS(x,y)} of {I_i(x,y)} to obtain the final saliency map of {I_i(x,y)}, denoted {Sal(x,y)}; the pixel value at coordinate (x,y) in {Sal(x,y)} is Sal(x,y) = HS(x,y) × NGC(x,y) × NSS(x,y).
The detailed process of step 2 is:
2-1. Quantize the color value of each component of each pixel in {I_i(x,y)}, obtaining the quantized image of {I_i(x,y)}, denoted {P_i(x,y)}; the color value of the i-th component of the pixel at coordinate (x,y) in {P_i(x,y)} is P_i(x,y) = ⌊I_i(x,y)/16⌋, where ⌊ ⌋ denotes rounding down;
2-2. Compute the global color histogram of {P_i(x,y)}, denoted {H(k) | 0 ≤ k ≤ 4095}, where H(k) is the number of pixels in {P_i(x,y)} belonging to the k-th color;
2-3. From the component color values in {P_i(x,y)}, compute the color category of each pixel in {I_i(x,y)}: the color category of the pixel at coordinate (x,y) is k_xy = P_3(x,y) × 256 + P_2(x,y) × 16 + P_1(x,y), where P_3(x,y), P_2(x,y) and P_1(x,y) are the color values of the 3rd, 2nd and 1st components of the pixel at coordinate (x,y) in {P_i(x,y)};
2-4. Compute the global-color-histogram-based saliency value of each pixel in {I_i(x,y)}: the saliency value of the pixel at coordinate (x,y) is HS(x,y) = Σ_{k=0}^{4095} H(k) × D(k_xy, k), where D(k_xy, k) is the Euclidean distance between the k_xy-th and k-th colors, D(k_xy, k) = √((p_{k_xy,1} − p_{k,1})² + (p_{k_xy,2} − p_{k,2})² + (p_{k_xy,3} − p_{k,3})²), in which p_{k,1} = mod(k, 16), p_{k,2} = mod(⌊k/16⌋, 16) and p_{k,3} = ⌊k/256⌋ are the color values of the 1st, 2nd and 3rd components corresponding to the k-th color (and likewise p_{k_xy,1}, p_{k_xy,2} and p_{k_xy,3} for the k_xy-th color), and mod() is the remainder function;
2-5. From the per-pixel saliency values, obtain the saliency map of {I_i(x,y)} based on the global color histogram, denoted {HS(x,y)}.
The acquisition process of the similarity Sim(SP_p, SP_q) between the p-th and q-th regions of {SP_h} in step 3 is:
3-1. Quantize the color value of each component of each pixel in each region of {SP_h}, obtaining the quantization region of each region; the quantization region of the h-th region is denoted {P_{h,i}(x_h, y_h)}, and the color value of the i-th component of its pixel at coordinate (x_h, y_h) is denoted P_{h,i}(x_h, y_h); assuming that pixel lies at coordinate (x,y) in {I_i(x,y)}, then P_{h,i}(x_h, y_h) = ⌊I_i(x,y)/16⌋, where 1 ≤ x_h ≤ W_h, 1 ≤ y_h ≤ H_h, W_h and H_h are the width and height of the h-th region, and ⌊ ⌋ denotes rounding down;
3-2. Compute the color histogram of the quantization region of each region; the color histogram of {P_{h,i}(x_h, y_h)} is denoted {H_h(k) | 0 ≤ k ≤ 4095}, where H_h(k) is the number of pixels in {P_{h,i}(x_h, y_h)} belonging to the k-th color;
3-3. Normalize the color histogram of the quantization region of each region; the normalized color histogram obtained from {H_h(k) | 0 ≤ k ≤ 4095} is denoted {Ĥ_h(k) | 0 ≤ k ≤ 4095}, Ĥ_h(k) = H_h(k) / Σ_{k'=0}^{4095} H_h(k'), so that Ĥ_h(k) is the probability of occurrence of the k-th color in the quantization region of the h-th region;
3-4. Compute the similarity between the p-th and q-th regions, Sim(SP_p, SP_q) = Sim_c(SP_p, SP_q) × Sim_d(SP_p, SP_q), where the color similarity is the histogram intersection Sim_c(SP_p, SP_q) = Σ_{k=0}^{4095} min(Ĥ_p(k), Ĥ_q(k)) and the spatial similarity is Sim_d(SP_p, SP_q) = 1 − ‖c_p − c_q‖ / √(W² + H²), in which min() takes the minimum of two values, c_p and c_q are the coordinate positions of the central pixels of the p-th and q-th regions, and ‖ ‖ denotes the Euclidean distance.
The detailed process of step 4 is:
4-1. Compute the color contrast of each region in {SP_h}; the color contrast of the h-th region is GC(SP_h) = Σ_{q=1, q≠h}^{M} Sim_d(SP_h, SP_q) × N_q × ‖c̄_h − c̄_q‖, where N_q is the total number of pixels in the q-th region, Sim_d(SP_h, SP_q) is the spatial similarity between the h-th and q-th regions, computed as in step 3-4 from the coordinates of their central pixels, c̄_h and c̄_q are the color mean vectors of the h-th and q-th regions, and ‖ ‖ denotes the Euclidean distance;
4-2. Normalize the color contrast of each region: the normalized color contrast of the h-th region is NGC(SP_h) = (GC(SP_h) − NGC_min) / (NGC_max − NGC_min), where NGC_min and NGC_max are the minimum and maximum color contrasts among the M regions of {SP_h};
4-3. Compute the color-contrast-based saliency value of each region: the saliency value of the h-th region is S_GC(SP_h) = Σ_{q=1}^{M} Sim(SP_h, SP_q) × NGC(SP_q) / Σ_{q=1}^{M} Sim(SP_h, SP_q), where Sim(SP_h, SP_q) is the similarity between the h-th and q-th regions;
4-4. Take each region's color-contrast-based saliency value as the saliency value of all pixels in that region, thereby obtaining the saliency map of {I_i(x,y)} based on region color contrast, denoted {NGC(x,y)}, where NGC(x,y) is the pixel value at coordinate (x,y) in {NGC(x,y)}.
The detailed process of step 5 is:
5-1. Compute the spatial sparsity of each region in {SP_h}; the spatial sparsity of the h-th region is SS(SP_h) = Σ_{q=1}^{M} Sim(SP_h, SP_q) × D_q / Σ_{q=1}^{M} Sim(SP_h, SP_q), where Sim(SP_h, SP_q) is the similarity between the h-th and q-th regions and D_q is the Euclidean distance between the central pixel of the q-th region and the central pixel of {I_i(x,y)};
5-2. Normalize the spatial sparsity of each region: the normalized spatial sparsity of the h-th region is NSS(SP_h) = (SS(SP_h) − NSS_min) / (NSS_max − NSS_min), where NSS_min and NSS_max are the minimum and maximum spatial sparsities among the M regions of {SP_h};
5-3. Compute the spatial-sparsity-based saliency value of each region: the saliency value of the h-th region is taken as S_SS(SP_h) = 1 − NSS(SP_h), so that regions whose similar regions are concentrated near the image center receive high saliency;
5-4. Take each region's spatial-sparsity-based saliency value as the saliency value of all pixels in that region, thereby obtaining the saliency map of {I_i(x,y)} based on region spatial sparsity, denoted {NSS(x,y)}, where NSS(x,y) is the pixel value at coordinate (x,y) in {NSS(x,y)}.
Compared with the prior art, the advantages of the present invention are:
1) The method separately computes a saliency map based on the global color histogram, a saliency map based on region color contrast and a saliency map based on region spatial sparsity, and fuses them into the final saliency map; the resulting map reflects the salient variations of both the global and local regions of the image, with high stability and accuracy.
2) The method segments the image with a superpixel segmentation technique, uses histogram features to compute the color contrast and spatial sparsity of each region, and finally weights them by the inter-region similarities to obtain the final region-based saliency map, so the extracted information conforms to salient semantic characteristics.
Brief description of the drawings
Fig. 1 is the overall block diagram of the method of the invention;
Fig. 2a is the original image of "Image1";
Fig. 2b is the ground-truth saliency map of the "Image1" image;
Fig. 2c is the saliency map of the "Image1" image based on the global color histogram;
Fig. 2d is the saliency map of the "Image1" image based on region color contrast;
Fig. 2e is the saliency map of the "Image1" image based on region spatial sparsity;
Fig. 2f is the final saliency map of the "Image1" image;
Fig. 3a is the original image of "Image2";
Fig. 3b is the ground-truth saliency map of the "Image2" image;
Fig. 3c is the saliency map of the "Image2" image based on the global color histogram;
Fig. 3d is the saliency map of the "Image2" image based on region color contrast;
Fig. 3e is the saliency map of the "Image2" image based on region spatial sparsity;
Fig. 3f is the final saliency map of the "Image2" image;
Fig. 4a is the original image of "Image3";
Fig. 4b is the ground-truth saliency map of the "Image3" image;
Fig. 4c is the saliency map of the "Image3" image based on the global color histogram;
Fig. 4d is the saliency map of the "Image3" image based on region color contrast;
Fig. 4e is the saliency map of the "Image3" image based on region spatial sparsity;
Fig. 4f is the final saliency map of the "Image3" image;
Fig. 5a is the original image of "Image4";
Fig. 5b is the ground-truth saliency map of the "Image4" image;
Fig. 5c is the saliency map of the "Image4" image based on the global color histogram;
Fig. 5d is the saliency map of the "Image4" image based on region color contrast;
Fig. 5e is the saliency map of the "Image4" image based on region spatial sparsity;
Fig. 5f is the final saliency map of the "Image4" image;
Fig. 6a is the original image of "Image5";
Fig. 6b is the ground-truth saliency map of the "Image5" image;
Fig. 6c is the saliency map of the "Image5" image based on the global color histogram;
Fig. 6d is the saliency map of the "Image5" image based on region color contrast;
Fig. 6e is the saliency map of the "Image5" image based on region spatial sparsity;
Fig. 6f is the final saliency map of the "Image5" image.
Detailed description of the invention
The present invention is described in further detail below with reference to the embodiments and the accompanying drawings.
The region-based image saliency map extraction method proposed by the present invention has the overall block diagram shown in Fig. 1 and comprises the following steps:
1. Denote the source image to be processed as {I_i(x,y)}, where i = 1, 2, 3, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W is the width of {I_i(x,y)}, H is its height, and I_i(x,y) is the color value of the i-th component of the pixel at coordinate (x,y); the 1st component is the R component, the 2nd component the G component and the 3rd component the B component.
2. If only local saliency were considered, strongly varying edge regions and complex background regions would receive high saliency while the interior of smooth target regions would receive low saliency; global saliency, i.e. the salience of each pixel relative to the whole image, must therefore also be considered. Accordingly, the present invention first obtains the quantized image of {I_i(x,y)} and the global color histogram of the quantized image; then, from the quantized image, the color category of each pixel in {I_i(x,y)}; and finally, from the global color histogram and the per-pixel color categories, the saliency map of {I_i(x,y)} based on the global color histogram, denoted {HS(x,y)}, where HS(x,y) is the pixel value at coordinate (x,y) in {HS(x,y)} and also the global-color-histogram-based saliency value of the pixel at coordinate (x,y) in {I_i(x,y)}.
In this particular embodiment, the detailed process of step 2 is:
2-1. Quantize the color value of each component of each pixel in {I_i(x,y)}, obtaining the quantized image of {I_i(x,y)}, denoted {P_i(x,y)}; the color value of the i-th component of the pixel at coordinate (x,y) in {P_i(x,y)} is P_i(x,y) = ⌊I_i(x,y)/16⌋, where ⌊ ⌋ denotes rounding down.
2-2. Compute the global color histogram of {P_i(x,y)}, denoted {H(k) | 0 ≤ k ≤ 4095}, where H(k) is the number of pixels in {P_i(x,y)} belonging to the k-th color.
2-3. From the component color values in {P_i(x,y)}, compute the color category of each pixel in {I_i(x,y)}: the color category of the pixel at coordinate (x,y) is k_xy = P_3(x,y) × 256 + P_2(x,y) × 16 + P_1(x,y), where P_3(x,y), P_2(x,y) and P_1(x,y) are the color values of the 3rd, 2nd and 1st components of the pixel at coordinate (x,y) in {P_i(x,y)}.
2-4. Compute the global-color-histogram-based saliency value of each pixel in {I_i(x,y)}: the saliency value of the pixel at coordinate (x,y) is HS(x,y) = Σ_{k=0}^{4095} H(k) × D(k_xy, k), where D(k_xy, k) is the Euclidean distance between the k_xy-th and k-th colors, D(k_xy, k) = √((p_{k_xy,1} − p_{k,1})² + (p_{k_xy,2} − p_{k,2})² + (p_{k_xy,3} − p_{k,3})²), in which p_{k,1} = mod(k, 16), p_{k,2} = mod(⌊k/16⌋, 16) and p_{k,3} = ⌊k/256⌋ are the color values of the 1st, 2nd and 3rd components corresponding to the k-th color (and likewise p_{k_xy,1}, p_{k_xy,2} and p_{k_xy,3} for the k_xy-th color), and mod() is the remainder function.
2-5. From the per-pixel saliency values, obtain the saliency map of {I_i(x,y)} based on the global color histogram, denoted {HS(x,y)}.
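As an illustration only, steps 2-1 to 2-5 can be sketched in Python with NumPy; the function name and the toy image below are illustrative, not part of the invention:

```python
import numpy as np

def global_histogram_saliency(img):
    """Global-color-histogram saliency (steps 2-1 to 2-5), as a sketch.

    img is an H x W x 3 uint8 RGB array. Each channel is quantized to
    16 levels (floor(I/16)), giving 16^3 = 4096 color bins; a pixel's
    saliency is the histogram-weighted sum of Euclidean distances from
    its quantized color to every other quantized color.
    """
    q = img.astype(np.int64) // 16                      # P_i(x,y) = floor(I_i(x,y)/16)
    # color category k_xy = P3*256 + P2*16 + P1 (R is component 1, B is component 3)
    k_map = q[..., 2] * 256 + q[..., 1] * 16 + q[..., 0]
    hist = np.bincount(k_map.ravel(), minlength=4096)   # H(k)

    # decompose every bin index k into its quantized components p_{k,1..3}
    k = np.arange(4096)
    comps = np.stack([k % 16, (k // 16) % 16, k // 256], axis=1)

    # saliency of each color that actually occurs: sum_k H(k) * D(k_xy, k)
    uniq = np.flatnonzero(hist)
    dist = np.linalg.norm(comps[uniq][:, None, :] - comps[None, :, :], axis=2)
    color_sal = np.zeros(4096)
    color_sal[uniq] = dist @ hist

    sal = color_sal[k_map]                              # per-pixel lookup
    return sal / max(sal.max(), 1e-12)                  # scale to [0, 1]

# toy image: a bright square on a larger dark background -> the rarer
# bright color accumulates more histogram-weighted distance
demo = np.zeros((8, 8, 3), dtype=np.uint8)
demo[2:6, 2:6] = 200
sal = global_histogram_saliency(demo)
```

The final scaling to [0, 1] is an added convenience for display; the patent text itself leaves HS(x,y) unnormalized.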
3. Use a superpixel (Superpixel) segmentation technique to divide {I_i(x,y)} into M non-overlapping regions, and re-express {I_i(x,y)} as the set of these M regions, denoted {SP_h}. Local saliency is then taken into account: similar regions of an image generally have low saliency relative to one another, so the present invention computes the similarity between every pair of regions in {SP_h}, denoting the similarity between the p-th and q-th regions as Sim(SP_p, SP_q), where M ≥ 1, SP_h is the h-th region in {SP_h}, 1 ≤ h ≤ M, 1 ≤ p ≤ M, 1 ≤ q ≤ M, p ≠ q, and SP_p and SP_q are the p-th and q-th regions. In the present embodiment, M = 200.
In this particular embodiment, the acquisition process of the similarity Sim(SP_p, SP_q) between the p-th and q-th regions of {SP_h} in step 3 is:
3-1. Quantize the color value of each component of each pixel in each region of {SP_h}, obtaining the quantization region of each region; the quantization region of the h-th region is denoted {P_{h,i}(x_h, y_h)}, and the color value of the i-th component of its pixel at coordinate (x_h, y_h) is denoted P_{h,i}(x_h, y_h); assuming that pixel lies at coordinate (x,y) in {I_i(x,y)}, then P_{h,i}(x_h, y_h) = ⌊I_i(x,y)/16⌋, where 1 ≤ x_h ≤ W_h, 1 ≤ y_h ≤ H_h, W_h and H_h are the width and height of the h-th region, and ⌊ ⌋ denotes rounding down.
3-2. Compute the color histogram of the quantization region of each region; the color histogram of {P_{h,i}(x_h, y_h)} is denoted {H_h(k) | 0 ≤ k ≤ 4095}, where H_h(k) is the number of pixels in {P_{h,i}(x_h, y_h)} belonging to the k-th color.
3-3. Normalize the color histogram of the quantization region of each region; the normalized color histogram obtained from {H_h(k) | 0 ≤ k ≤ 4095} is denoted {Ĥ_h(k) | 0 ≤ k ≤ 4095}, Ĥ_h(k) = H_h(k) / Σ_{k'=0}^{4095} H_h(k'), so that Ĥ_h(k) is the probability of occurrence of the k-th color in the quantization region of the h-th region.
3-4. Compute the similarity between the p-th and q-th regions, Sim(SP_p, SP_q) = Sim_c(SP_p, SP_q) × Sim_d(SP_p, SP_q), where the color similarity is the histogram intersection Sim_c(SP_p, SP_q) = Σ_{k=0}^{4095} min(Ĥ_p(k), Ĥ_q(k)) and the spatial similarity is Sim_d(SP_p, SP_q) = 1 − ‖c_p − c_q‖ / √(W² + H²), in which min() takes the minimum of two values, c_p and c_q are the coordinate positions of the central pixels of the p-th and q-th regions, and ‖ ‖ denotes the Euclidean distance.
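Steps 3-1 to 3-4 can be sketched as follows; the label map may come from any superpixel method (e.g. SLIC), and the exact form of the spatial term, 1 minus the center distance normalized by the image diagonal, is an assumption since the source text names it only "spatial similarity":

```python
import numpy as np

def region_similarity(img, labels):
    """Pairwise region similarity (steps 3-1 to 3-4), as a sketch.

    img: H x W x 3 uint8 RGB array; labels: H x W int map of region
    ids 0..M-1. Color similarity is the histogram intersection of the
    per-region-normalized 4096-bin quantized-color histograms.
    """
    H, W = labels.shape
    M = int(labels.max()) + 1
    q = img.astype(np.int64) // 16
    k_map = q[..., 2] * 256 + q[..., 1] * 16 + q[..., 0]

    hists = np.zeros((M, 4096))
    centers = np.zeros((M, 2))
    for h in range(M):
        mask = labels == h
        hists[h] = np.bincount(k_map[mask], minlength=4096) / mask.sum()
        ys, xs = np.nonzero(mask)
        centers[h] = ys.mean(), xs.mean()

    # histogram intersection Sim_c (broadcasting is fine for small M;
    # loop over region pairs instead when M is large, e.g. M = 200)
    sim_c = np.minimum(hists[:, None, :], hists[None, :, :]).sum(axis=2)
    d = np.linalg.norm(centers[:, None] - centers[None, :], axis=2)
    sim_d = 1.0 - d / np.hypot(H, W)
    return sim_c * sim_d

# toy two-region image: left half dark, right half bright
img = np.zeros((4, 8, 3), dtype=np.uint8)
img[:, 4:] = 200
labels = np.zeros((4, 8), dtype=int)
labels[:, 4:] = 1
S = region_similarity(img, labels)
```

In the toy example each region is perfectly similar to itself (S[h, h] = 1) and the two regions share no colors, so their cross similarity is zero.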
4. From the pairwise region similarities in {SP_h}, obtain the saliency map of {I_i(x,y)} based on region color contrast, denoted {NGC(x,y)}, where NGC(x,y) is the pixel value at coordinate (x,y) in {NGC(x,y)}.
In this particular embodiment, the detailed process of step 4 is:
4-1. Compute the color contrast of each region in {SP_h}; the color contrast of the h-th region is GC(SP_h) = Σ_{q=1, q≠h}^{M} Sim_d(SP_h, SP_q) × N_q × ‖c̄_h − c̄_q‖, where N_q is the total number of pixels in the q-th region, Sim_d(SP_h, SP_q) is the spatial similarity between the h-th and q-th regions, computed as in step 3-4 from the coordinates of their central pixels, c̄_h and c̄_q are the color mean vectors of the h-th and q-th regions, c̄_h being obtained by averaging the color vectors of all pixels in the h-th region, and ‖ ‖ denotes the Euclidean distance.
4-2. Normalize the color contrast of each region: the normalized color contrast of the h-th region is NGC(SP_h) = (GC(SP_h) − NGC_min) / (NGC_max − NGC_min), where NGC_min and NGC_max are the minimum and maximum color contrasts among the M regions of {SP_h}.
4-3. Compute the color-contrast-based saliency value of each region: the saliency value of the h-th region is S_GC(SP_h) = Σ_{q=1}^{M} Sim(SP_h, SP_q) × NGC(SP_q) / Σ_{q=1}^{M} Sim(SP_h, SP_q), where Sim(SP_h, SP_q) is the similarity between the h-th and q-th regions.
4-4. Take each region's color-contrast-based saliency value as the saliency value of all pixels in that region, i.e., for the h-th region, use its color-contrast-based saliency value as the saliency value of every pixel in that region, thereby obtaining the saliency map of {I_i(x,y)} based on region color contrast, denoted {NGC(x,y)}, where NGC(x,y) is the pixel value at coordinate (x,y) in {NGC(x,y)}.
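Steps 4-1 to 4-4 can be sketched as below. The similarity matrices `sim` and `sim_d` stand in for the step-3 quantities, and the exact contrast weighting is a reconstruction from the (partly illegible) source formula:

```python
import numpy as np

def color_contrast_saliency(img, labels, sim, sim_d):
    """Region color-contrast saliency (steps 4-1 to 4-4), as a sketch.

    The raw contrast of region h weights the distance between color
    mean vectors by the other region's pixel count and the spatial
    similarity; contrasts are min-max normalized, then smoothed by a
    similarity-weighted average and spread back to every pixel.
    """
    M = int(labels.max()) + 1
    means = np.zeros((M, 3))
    counts = np.zeros(M)
    for h in range(M):
        mask = labels == h
        means[h] = img[mask].mean(axis=0)     # color mean vector of region h
        counts[h] = mask.sum()

    diff = np.linalg.norm(means[:, None] - means[None, :], axis=2)
    gc = (sim_d * counts[None, :] * diff).sum(axis=1)            # raw contrast GC
    ngc = (gc - gc.min()) / max(gc.max() - gc.min(), 1e-12)      # min-max normalize
    sal = (sim * ngc[None, :]).sum(axis=1) / sim.sum(axis=1)     # similarity smoothing
    return sal[labels]                                           # per-pixel map

# toy image: a small bright strip (high contrast) next to a large dark area
img = np.zeros((4, 8, 3), dtype=np.uint8)
img[:, 6:] = 200
labels = np.zeros((4, 8), dtype=int)
labels[:, 6:] = 1
sim_d = np.array([[1.0, 0.55], [0.55, 1.0]])   # toy spatial similarities
sim = np.eye(2)                                 # disjoint colors: no cross similarity
sal = color_contrast_saliency(img, labels, sim, sim_d)
```

Because the self-difference ‖c̄_h − c̄_h‖ is zero, the q ≠ h exclusion is handled implicitly by the zero diagonal of `diff`.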
5. From the pairwise region similarities in {SP_h}, obtain the saliency map of {I_i(x,y)} based on region spatial sparsity, denoted {NSS(x,y)}, where NSS(x,y) is the pixel value at coordinate (x,y) in {NSS(x,y)}.
In this particular embodiment, the detailed process of step 5 is:
5-1. Compute the spatial sparsity of each region in {SP_h}; the spatial sparsity of the h-th region is SS(SP_h) = Σ_{q=1}^{M} Sim(SP_h, SP_q) × D_q / Σ_{q=1}^{M} Sim(SP_h, SP_q), where Sim(SP_h, SP_q) is the similarity between the h-th and q-th regions and D_q is the Euclidean distance between the central pixel of the q-th region and the central pixel of {I_i(x,y)}.
5-2. Normalize the spatial sparsity of each region: the normalized spatial sparsity of the h-th region is NSS(SP_h) = (SS(SP_h) − NSS_min) / (NSS_max − NSS_min), where NSS_min and NSS_max are the minimum and maximum spatial sparsities among the M regions of {SP_h}.
5-3. Compute the spatial-sparsity-based saliency value of each region: the saliency value of the h-th region is taken as S_SS(SP_h) = 1 − NSS(SP_h), so that regions whose similar regions are concentrated near the image center receive high saliency.
5-4. Take each region's spatial-sparsity-based saliency value as the saliency value of all pixels in that region, i.e., for the h-th region, use its spatial-sparsity-based saliency value as the saliency value of every pixel in that region, thereby obtaining the saliency map of {I_i(x,y)} based on region spatial sparsity, denoted {NSS(x,y)}, where NSS(x,y) is the pixel value at coordinate (x,y) in {NSS(x,y)}.
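Steps 5-1 to 5-4 can be sketched as below. The inversion 1 − NSS as the saliency value is an assumption, since the saliency formula in step 5-3 of the source text is not fully legible:

```python
import numpy as np

def spatial_sparsity_saliency(labels, sim):
    """Region spatial-sparsity saliency (steps 5-1 to 5-4), as a sketch.

    A region's sparsity is the similarity-weighted mean distance from
    region centers to the image center: a region whose similar regions
    are scattered far from the center is "sparse". After min-max
    normalization the saliency is taken as 1 - sparsity, so that
    center-concentrated regions come out salient.
    """
    H, W = labels.shape
    M = int(labels.max()) + 1
    centers = np.zeros((M, 2))
    for h in range(M):
        ys, xs = np.nonzero(labels == h)
        centers[h] = ys.mean(), xs.mean()
    img_center = np.array([(H - 1) / 2.0, (W - 1) / 2.0])
    d_center = np.linalg.norm(centers - img_center, axis=1)   # D_q

    ss = (sim * d_center[None, :]).sum(axis=1) / sim.sum(axis=1)
    nss = (ss - ss.min()) / max(ss.max() - ss.min(), 1e-12)   # min-max normalize
    return (1.0 - nss)[labels]                                # per-pixel map

# toy labeling: a small corner region (1) inside a large surrounding region (0)
labels = np.zeros((6, 6), dtype=int)
labels[0:2, 0:2] = 1
sim = np.array([[1.0, 0.1], [0.1, 1.0]])   # toy similarity matrix
sal = spatial_sparsity_saliency(labels, sim)
```

In the toy case the corner region's center is far from the image center, so it receives low saliency relative to the surrounding region.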
6. Fuse the global-color-histogram-based saliency map {HS(x,y)}, the region-color-contrast-based saliency map {NGC(x,y)} and the region-spatial-sparsity-based saliency map {NSS(x,y)} of {I_i(x,y)} to obtain the final saliency map of {I_i(x,y)}, denoted {Sal(x,y)}; the pixel value at coordinate (x,y) in {Sal(x,y)} is Sal(x,y) = HS(x,y) × NGC(x,y) × NSS(x,y).
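The fusion of step 6 is a per-pixel product, so a pixel is only strongly salient when all three cues agree; the 2x2 maps below are arbitrary toy values for illustration:

```python
import numpy as np

# toy saliency maps standing in for the three cues of steps 2, 4 and 5
HS  = np.array([[0.2, 0.8], [0.4, 1.0]])   # global-histogram saliency
NGC = np.array([[0.5, 1.0], [0.5, 1.0]])   # region color contrast
NSS = np.array([[1.0, 0.9], [0.2, 1.0]])   # region spatial sparsity
Sal = HS * NGC * NSS                        # Sal(x,y) = HS * NGC * NSS
```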
In the following, the method of the invention is used to extract the saliency maps of five images, Image1, Image2, Image3, Image4 and Image5, from the MSRA salient-object image database provided by Microsoft Research Asia. Figs. 2a to 2f show, for "Image1", the original image, the ground-truth saliency map, the saliency map based on the global color histogram, the saliency map based on region color contrast, the saliency map based on region spatial sparsity, and the final saliency map; Figs. 3a to 3f, 4a to 4f, 5a to 5f and 6a to 6f show the same six results for "Image2", "Image3", "Image4" and "Image5" respectively. As can be seen from Figs. 2a to 6f, the saliency maps obtained with the method of the invention take into account the salient variations of both global and local regions and therefore conform well to salient semantic features.