Disclosure of Invention
The invention aims to provide a region-based image saliency map extraction method that conforms to salient semantic features and offers higher extraction stability and accuracy.
The technical scheme adopted by the invention to solve the above technical problem is as follows: a region-based image saliency map extraction method, characterized by comprising the following steps:
① Record the source image to be processed as {I_i(x, y)}, where i = 1, 2, 3, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W denotes the width of {I_i(x, y)}, H denotes the height of {I_i(x, y)}, and I_i(x, y) denotes the color value of the i-th component of the pixel point whose coordinate position is (x, y) in {I_i(x, y)}; the 1st component is the R component, the 2nd component the G component, and the 3rd component the B component;
② First obtain the quantized image of {I_i(x, y)} and the global color histogram of the quantized image; then obtain the color type of each pixel point in {I_i(x, y)} according to the quantized image of {I_i(x, y)}; then, according to the global color histogram of the quantized image of {I_i(x, y)} and the color type of each pixel point in {I_i(x, y)}, obtain the image saliency map of {I_i(x, y)} based on the global color histogram, denoted {HS(x, y)}, where HS(x, y) denotes the pixel value of the pixel point whose coordinate position is (x, y) in {HS(x, y)} and also denotes the global-color-histogram-based saliency value of the pixel point whose coordinate position is (x, y) in {I_i(x, y)};
③ Use a superpixel segmentation technique to divide {I_i(x, y)} into M non-overlapping regions, then re-represent {I_i(x, y)} as a set of M regions, denoted {SP_h}, and calculate the similarity between the respective regions in {SP_h}; denote the similarity between the p-th region and the q-th region in {SP_h} as Sim(SP_p, SP_q), where M ≥ 1, SP_h denotes the h-th region in {SP_h}, 1 ≤ h ≤ M, 1 ≤ p ≤ M, 1 ≤ q ≤ M, p ≠ q, SP_p denotes the p-th region in {SP_h}, and SP_q denotes the q-th region in {SP_h};
④ According to the similarities between the respective regions in {SP_h}, obtain the image saliency map of {I_i(x, y)} based on region color contrast, denoted {NGC(x, y)}, where NGC(x, y) denotes the pixel value of the pixel point whose coordinate position is (x, y) in {NGC(x, y)};
⑤ According to the similarities between the respective regions in {SP_h}, obtain the image saliency map of {I_i(x, y)} based on region spatial sparsity, denoted {NSS(x, y)}, where NSS(x, y) denotes the pixel value of the pixel point whose coordinate position is (x, y) in {NSS(x, y)};
⑥ Fuse the image saliency map {HS(x, y)} of {I_i(x, y)} based on the global color histogram, the image saliency map {NGC(x, y)} of {I_i(x, y)} based on region color contrast, and the image saliency map {NSS(x, y)} of {I_i(x, y)} based on region spatial sparsity, to obtain the final image saliency map of {I_i(x, y)}, denoted {Sal(x, y)}; record the pixel value of the pixel point whose coordinate position is (x, y) in {Sal(x, y)} as Sal(x, y), Sal(x, y) = HS(x, y) × NGC(x, y) × NSS(x, y).
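The fusion in step ⑥ is a pixel-wise product of the three maps. A minimal NumPy sketch follows (the array names mirror the text; the random arrays are only stand-ins for the three saliency maps, which in practice come from steps ②, ④ and ⑤):

```python
import numpy as np

# Stand-in saliency maps with values in [0, 1]; real inputs would be the
# outputs of steps (2), (4) and (5) of the method.
rng = np.random.default_rng(0)
H, W = 4, 5
HS = rng.random((H, W))   # global-color-histogram saliency
NGC = rng.random((H, W))  # region color-contrast saliency
NSS = rng.random((H, W))  # region spatial-sparsity saliency

# Step (6): Sal(x, y) = HS(x, y) * NGC(x, y) * NSS(x, y), pixel-wise.
Sal = HS * NGC * NSS
```

Because all three factors lie in [0, 1], the product suppresses any pixel that is not salient under all three cues simultaneously.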
The specific process of step ② is as follows:
②-1. Quantize the color value of each component of each pixel point in {I_i(x, y)} to obtain the quantized image of {I_i(x, y)}, denoted {P_i(x, y)}; record the color value of the i-th component of the pixel point whose coordinate position is (x, y) in {P_i(x, y)} as P_i(x, y), P_i(x, y) = ⌊I_i(x, y)/16⌋, where ⌊ ⌋ is the round-down (floor) symbol;
②-2. Calculate the global color histogram of {P_i(x, y)}, denoted {H(k) | 0 ≤ k ≤ 4095}, where H(k) denotes the number of all pixel points in {P_i(x, y)} belonging to the k-th color;
②-3. According to the color value of each component of each pixel point in {P_i(x, y)}, calculate the color type of the corresponding pixel point in {I_i(x, y)}; record the color type of the pixel point whose coordinate position is (x, y) in {I_i(x, y)} as k_xy, k_xy = P_3(x, y) × 256 + P_2(x, y) × 16 + P_1(x, y), where P_3(x, y), P_2(x, y) and P_1(x, y) denote the color values of the 3rd, 2nd and 1st components, respectively, of the pixel point whose coordinate position is (x, y) in {P_i(x, y)};
②-4. Calculate the global-color-histogram-based saliency value of each pixel point in {I_i(x, y)}; record the global-color-histogram-based saliency value of the pixel point whose coordinate position is (x, y) in {I_i(x, y)} as HS(x, y),

HS(x, y) = Σ_{k=0}^{4095} (H(k) × D(k_xy, k)),

where D(k_xy, k) denotes the Euclidean distance between the k_xy-th color and the k-th color in {H(k) | 0 ≤ k ≤ 4095},

D(k_xy, k) = sqrt((p_{k_xy,1} − p_{k,1})² + (p_{k_xy,2} − p_{k,2})² + (p_{k_xy,3} − p_{k,3})²),

p_{k_xy,1}, p_{k_xy,2} and p_{k_xy,3} denote the color values of the 1st, 2nd and 3rd components corresponding to the k_xy-th color in {H(k) | 0 ≤ k ≤ 4095}, p_{k,1} denotes the color value of the 1st component corresponding to the k-th color in {H(k) | 0 ≤ k ≤ 4095}, p_{k,2} denotes the color value of the 2nd component corresponding to the k-th color, and p_{k,3} denotes the color value of the 3rd component corresponding to the k-th color, with p_{k,1} = mod(k, 16), p_{k,2} = mod(⌊k/16⌋, 16), p_{k,3} = ⌊k/256⌋, where mod() is the remainder-taking operation function;
②-5. According to the global-color-histogram-based saliency value of each pixel point in {I_i(x, y)}, obtain the image saliency map of {I_i(x, y)} based on the global color histogram, denoted {HS(x, y)}.
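Steps ②-1 through ②-5 can be sketched in Python as follows. This is a hedged illustration, not the patented implementation: the function names `quantize` and `global_histogram_saliency` are mine, 8-bit RGB input is assumed, and distances are computed only between colors that actually occur (equivalent to the full sum, since absent colors have H(k) = 0):

```python
import numpy as np

def quantize(img):
    """Step 2-1: reduce each RGB component to 16 levels, floor(v / 16)."""
    return img // 16

def global_histogram_saliency(img):
    """Steps 2-2 to 2-4: HS(x, y) = sum_k H(k) * D(k_xy, k) over 4096 colors."""
    P = quantize(img.astype(np.int64))
    # Step 2-3: color index k = P3*256 + P2*16 + P1 (components 1..3 = R, G, B).
    k_map = P[..., 2] * 256 + P[..., 1] * 16 + P[..., 0]
    hist = np.bincount(k_map.ravel(), minlength=4096)  # H(k)
    present = np.flatnonzero(hist)                     # colors that occur
    # Quantized components (p_k1, p_k2, p_k3) of each present color index.
    comp = np.stack([present % 16, (present // 16) % 16, present // 256], 1)
    # Euclidean distances D between all pairs of present colors.
    D = np.sqrt(((comp[:, None, :] - comp[None, :, :]) ** 2).sum(-1))
    # Saliency per color, then a lookup table maps it back to every pixel.
    sal_per_color = (D * hist[present][None, :]).sum(1)
    lut = np.zeros(4096)
    lut[present] = sal_per_color
    return lut[k_map]
```

Rare colors sit far, in histogram-weighted terms, from the dominant colors and therefore receive large HS values, which is the intended global-saliency behavior.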
In step ③, the acquisition process of the similarity Sim(SP_p, SP_q) between the p-th region and the q-th region in {SP_h} is:
③-1. Quantize the color value of each component of each pixel point in each region of {SP_h} to obtain the quantization region of each region in {SP_h}; denote the quantization region of the h-th region in {SP_h} as {P_{h,i}(x_h, y_h)}, and record the color value of the i-th component of the pixel point whose coordinate position is (x_h, y_h) in {P_{h,i}(x_h, y_h)} as P_{h,i}(x_h, y_h); supposing the pixel point whose coordinate position is (x_h, y_h) in {P_{h,i}(x_h, y_h)} has coordinate position (x, y) in {I_i(x, y)}, then P_{h,i}(x_h, y_h) = ⌊I_i(x, y)/16⌋, where 1 ≤ x_h ≤ W_h, 1 ≤ y_h ≤ H_h, W_h denotes the width of the h-th region in {SP_h}, H_h denotes the height of the h-th region in {SP_h}, and ⌊ ⌋ is the round-down (floor) symbol;
③-2. Calculate the color histogram of the quantization region of each region in {SP_h}; denote the color histogram of {P_{h,i}(x_h, y_h)} as {H_{SP_h}(k) | 0 ≤ k ≤ 4095}, where H_{SP_h}(k) denotes the number of all pixel points in {P_{h,i}(x_h, y_h)} belonging to the k-th color;
③-3. Normalize the color histogram of the quantization region of each region in {SP_h} to obtain the corresponding normalized color histogram; denote the normalized color histogram obtained after normalizing {H_{SP_h}(k) | 0 ≤ k ≤ 4095} as {H′_{SP_h}(k) | 0 ≤ k ≤ 4095},

H′_{SP_h}(k) = H_{SP_h}(k) / Σ_{h′=1}^{M} H_{SP_{h′}}(k),

where H′_{SP_h}(k) denotes the occurrence probability of pixel points belonging to the k-th color in the quantization region {P_{h,i}(x_h, y_h)} of the h-th region in {SP_h}, H_{SP_{h′}}(k) denotes the number of all pixel points belonging to the k-th color in the quantization region {P_{h′,i}(x_{h′}, y_{h′})} of the h′-th region in {SP_h}, 1 ≤ x_{h′} ≤ W_{h′}, 1 ≤ y_{h′} ≤ H_{h′}, W_{h′} denotes the width of the h′-th region in {SP_h}, H_{h′} denotes the height of the h′-th region in {SP_h}, and P_{h′,i}(x_{h′}, y_{h′}) denotes the color value of the i-th component of the pixel point whose coordinate position is (x_{h′}, y_{h′}) in {P_{h′,i}(x_{h′}, y_{h′})};
③-4. Calculate the similarity between the p-th region and the q-th region in {SP_h}, denoted Sim(SP_p, SP_q), Sim(SP_p, SP_q) = Sim_c(SP_p, SP_q) × Sim_d(SP_p, SP_q). Sim_c(SP_p, SP_q) denotes the color similarity between the p-th region and the q-th region in {SP_h},

Sim_c(SP_p, SP_q) = Σ_{k=0}^{4095} min(H′_{SP_p}(k), H′_{SP_q}(k)),

and Sim_d(SP_p, SP_q) denotes the spatial similarity between the p-th region and the q-th region in {SP_h}, which is determined by the Euclidean distance between the coordinate positions of the center pixel points of the p-th and q-th regions. Here SP_p denotes the p-th region in {SP_h}, SP_q denotes the q-th region in {SP_h}, H′_{SP_p}(k) denotes the occurrence probability of pixel points belonging to the k-th color in the quantization region {P_{p,i}(x_p, y_p)} of the p-th region, H′_{SP_q}(k) denotes the occurrence probability of pixel points belonging to the k-th color in the quantization region {P_{q,i}(x_q, y_q)} of the q-th region, 1 ≤ x_p ≤ W_p, 1 ≤ y_p ≤ H_p, 1 ≤ x_q ≤ W_q, 1 ≤ y_q ≤ H_q, W_p and H_p denote the width and height of the p-th region in {SP_h}, W_q and H_q denote the width and height of the q-th region in {SP_h}, P_{p,i}(x_p, y_p) denotes the color value of the i-th component of the pixel point whose coordinate position is (x_p, y_p) in {P_{p,i}(x_p, y_p)}, P_{q,i}(x_q, y_q) denotes the color value of the i-th component of the pixel point whose coordinate position is (x_q, y_q) in {P_{q,i}(x_q, y_q)}, min() is the minimum-value function, and the symbol "|| ||" is the Euclidean distance symbol.
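Steps ③-1 through ③-4 can be sketched as follows. The function name and the exact form of the spatial term are assumptions: the text defines Sim_d only through the Euclidean distance between region centers, so `exp(-dist / sigma)` over the diagonal-normalized center distance, with a made-up parameter `sigma`, is used here as a plausible stand-in:

```python
import numpy as np

def region_similarities(k_map, labels, sigma=0.5):
    """Sim(p, q) = Sim_c(p, q) * Sim_d(p, q) for all region pairs.
    k_map:  per-pixel color index in 0..4095 (step 3-1 quantization).
    labels: per-pixel region label in 0..M-1 (superpixel segmentation)."""
    M = labels.max() + 1
    # Step 3-2: one 4096-bin color histogram H_SPh(k) per region.
    hists = np.zeros((M, 4096))
    for h in range(M):
        hists[h] = np.bincount(k_map[labels == h], minlength=4096)
    # Step 3-3: H'_SPh(k) = H_SPh(k) / sum_h' H_SPh'(k), normalized per color.
    col_sum = hists.sum(0)
    Hn = np.divide(hists, col_sum, out=np.zeros_like(hists), where=col_sum > 0)
    # Step 3-4: Sim_c = histogram intersection, sum_k min(H'_p(k), H'_q(k)).
    Sim_c = np.minimum(Hn[:, None, :], Hn[None, :, :]).sum(-1)
    # Spatial term from pairwise center distances (assumed exponential form).
    centers = np.array([np.argwhere(labels == h).mean(0) for h in range(M)])
    d = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    d /= np.hypot(*labels.shape)        # normalize by the image diagonal
    Sim_d = np.exp(-d / sigma)
    return Sim_c * Sim_d
```

Because H′ is normalized per color across all regions, two regions only score high on Sim_c when they share colors that are rare elsewhere in the image, and Sim_d then discounts pairs that are far apart.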
The specific process of step ④ is as follows:
④-1. Calculate the color contrast of each region in {SP_h}; denote the color contrast of the h-th region in {SP_h} as NGC_{SP_h},

NGC_{SP_h} = Σ_{q=1}^{M} (W(SP_h, SP_q) × ||m_{SP_h} − m_{SP_q}||),

where SP_h denotes the h-th region in {SP_h}, SP_q denotes the q-th region in {SP_h}, W(SP_h, SP_q) is a weight determined by the total number of pixel points contained in a region and by the spatial similarity Sim_d(SP_h, SP_q) between the h-th region and the q-th region in {SP_h}, which in turn depends on the Euclidean distance between the coordinate positions of the center pixel points of the h-th and q-th regions, the symbol "|| ||" is the Euclidean distance symbol, m_{SP_h} denotes the color mean vector of the h-th region in {SP_h}, and m_{SP_q} denotes the color mean vector of the q-th region in {SP_h};
④-2. Normalize the color contrast of each region in {SP_h} to obtain the corresponding normalized color contrast; record the normalized color contrast obtained after normalizing the color contrast NGC_{SP_h} of the h-th region in {SP_h} as NGC′_{SP_h}, NGC′_{SP_h} = (NGC_{SP_h} − NGC_min)/(NGC_max − NGC_min), where NGC_min denotes the minimum color contrast among the M regions in {SP_h} and NGC_max denotes the maximum color contrast among the M regions in {SP_h};
④-3. Calculate the color-contrast-based saliency value of each region in {SP_h}; record the color-contrast-based saliency value of the h-th region in {SP_h} as NGC″_{SP_h},

NGC″_{SP_h} = (Σ_{q=1}^{M} (Sim(SP_h, SP_q) × NGC′_{SP_q})) / (Σ_{q=1}^{M} Sim(SP_h, SP_q)),

where Sim(SP_h, SP_q) denotes the similarity between the h-th region and the q-th region in {SP_h};
④-4. Take the color-contrast-based saliency value of each region in {SP_h} as the saliency value of all pixel points in the corresponding region, thereby obtaining the image saliency map of {I_i(x, y)} based on region color contrast, denoted {NGC(x, y)}, where NGC(x, y) denotes the pixel value of the pixel point whose coordinate position is (x, y) in {NGC(x, y)}.
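Steps ④-1 through ④-4 can be sketched as follows. This is a hedged reading, not the patented formula: the exact expression for the weight W(SP_h, SP_q) is only described in outline in the text, so `counts[q] * Sim_d[h, q]` is an assumed interpretation, and the function name is mine:

```python
import numpy as np

def region_color_contrast(img, labels, Sim, Sim_d, counts):
    """Step 4 sketch: NGC_h = sum_q W(h, q) * ||m_h - m_q||, min-max
    normalized, then smoothed by similarity weighting (step 4-3).
    Sim, Sim_d: M x M similarity matrices; counts: pixels per region."""
    M = labels.max() + 1
    # m_SPh: color mean vector of each region.
    means = np.array([img[labels == h].mean(0) for h in range(M)])
    dist = np.linalg.norm(means[:, None] - means[None, :], axis=-1)
    # Assumed weight: region size times spatial similarity (see lead-in).
    W = counts[None, :] * Sim_d
    NGC = (W * dist).sum(1)
    # Step 4-2: min-max normalization over the M regions.
    NGCn = (NGC - NGC.min()) / (NGC.max() - NGC.min() + 1e-12)
    # Step 4-3: NGC'' = sum_q Sim(h, q) * NGC'_q / sum_q Sim(h, q).
    NGC2 = (Sim * NGCn[None, :]).sum(1) / Sim.sum(1)
    # Step 4-4: broadcast each region's value to all of its pixels.
    return NGC2[labels]
```

A small region whose mean color differs strongly from large neighboring regions receives high contrast, and the final similarity weighting keeps saliency consistent across regions that look alike.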
The specific process of step ⑤ is as follows:
⑤-1. Calculate the spatial sparsity of each region in {SP_h}; denote the spatial sparsity of the h-th region in {SP_h} as NSS_{SP_h},

NSS_{SP_h} = (Σ_{q=1}^{M} (Sim(SP_h, SP_q) × D_{SP_q})) / (Σ_{q=1}^{M} Sim(SP_h, SP_q)),

where Sim(SP_h, SP_q) denotes the similarity between the h-th region and the q-th region in {SP_h}, and D_{SP_q} denotes the Euclidean distance between the center pixel point of the q-th region in {SP_h} and the center pixel point of {I_i(x, y)};
⑤-2. Normalize the spatial sparsity of each region in {SP_h} to obtain the corresponding normalized spatial sparsity; record the normalized spatial sparsity obtained after normalizing the spatial sparsity NSS_{SP_h} of the h-th region in {SP_h} as NSS′_{SP_h}, NSS′_{SP_h} = (NSS_{SP_h} − NSS_min)/(NSS_max − NSS_min), where NSS_min denotes the minimum spatial sparsity among the M regions in {SP_h} and NSS_max denotes the maximum spatial sparsity among the M regions in {SP_h};
⑤-3. Calculate the spatial-sparsity-based saliency value of each region in {SP_h}; record the spatial-sparsity-based saliency value of the h-th region in {SP_h} as NSS″_{SP_h},

NSS″_{SP_h} = (Σ_{q=1}^{M} (Sim(SP_h, SP_q) × NSS′_{SP_q})) / (Σ_{q=1}^{M} Sim(SP_h, SP_q));
⑤-4. Take the spatial-sparsity-based saliency value of each region in {SP_h} as the saliency value of all pixel points in the corresponding region, thereby obtaining the image saliency map of {I_i(x, y)} based on region spatial sparsity, denoted {NSS(x, y)}, where NSS(x, y) denotes the pixel value of the pixel point whose coordinate position is (x, y) in {NSS(x, y)}.
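Steps ⑤-1 through ⑤-4 can be sketched as follows. The function name is mine, and the reading of D inside the sum as the q-indexed distance from each region's center to the image center is an interpretation of the (garbled) subscripts in the source:

```python
import numpy as np

def region_spatial_sparsity(labels, Sim):
    """Step 5 sketch: NSS_h = sum_q Sim(h, q) * D_q / sum_q Sim(h, q),
    with D_q the distance from region q's center to the image center,
    then min-max normalized and similarity-weighted (step 5-3)."""
    M = labels.max() + 1
    H, W = labels.shape
    centers = np.array([np.argwhere(labels == h).mean(0) for h in range(M)])
    # D_SPq: Euclidean distance from each region center to the image center.
    img_center = np.array([(H - 1) / 2, (W - 1) / 2])
    D = np.linalg.norm(centers - img_center, axis=1)
    # Step 5-1: similarity-weighted average distance.
    NSS = (Sim * D[None, :]).sum(1) / Sim.sum(1)
    # Step 5-2: min-max normalization over the M regions.
    NSSn = (NSS - NSS.min()) / (NSS.max() - NSS.min() + 1e-12)
    # Step 5-3: similarity-weighted saliency.
    NSS2 = (Sim * NSSn[None, :]).sum(1) / Sim.sum(1)
    # Step 5-4: broadcast to all pixels of each region.
    return NSS2[labels]
```

Regions whose similar-looking peers scatter far from the image center score high, which penalizes widely dispersed background colors relative to a compact, centered object.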
Compared with the prior art, the invention has the advantages that:
1) The method separately computes an image saliency map based on the global color histogram, an image saliency map based on region color contrast, and an image saliency map based on region spatial sparsity, and finally fuses them into the final image saliency map; the resulting map better reflects the saliency variations of both the global and the local regions of the image, with high stability and accuracy.
2) The method segments the image with a superpixel segmentation technique, uses histogram features to compute the color contrast and the spatial sparsity of each region, and finally weights them by the similarities between regions to obtain the final region-based image saliency map, so that feature information conforming to saliency semantics can be extracted.
Drawings
FIG. 1 is a block diagram of an overall implementation of the method of the present invention;
FIG. 2a is an original Image of "Image 1";
FIG. 2b is the ground-truth (Ground Truth) saliency map of the "Image 1" Image;
FIG. 2c is a global color histogram based Image saliency map of an "Image 1" Image;
FIG. 2d is a region color contrast based Image saliency map of an "Image 1" Image;
FIG. 2e is an Image saliency map based on region-space sparsity of an "Image 1" Image;
FIG. 2f is the final Image saliency map of the "Image 1" Image;
FIG. 3a is an original Image of "Image 2";
FIG. 3b is the ground-truth (Ground Truth) saliency map of the "Image 2" Image;
FIG. 3c is a global color histogram based Image saliency map of an "Image 2" Image;
FIG. 3d is a region color contrast based Image saliency map of an "Image 2" Image;
FIG. 3e is an Image saliency map based on region-space sparsity for an "Image 2" Image;
FIG. 3f is the final Image saliency map of the "Image 2" Image;
FIG. 4a is an original Image of "Image 3";
FIG. 4b is the ground-truth (Ground Truth) saliency map of the "Image 3" Image;
FIG. 4c is a global color histogram based Image saliency map for an "Image 3" Image;
FIG. 4d is a region color contrast based Image saliency map of an "Image 3" Image;
FIG. 4e is an Image saliency map based on region space sparsity for an "Image 3" Image;
FIG. 4f is the final Image saliency map of the "Image 3" Image;
FIG. 5a is an original Image of "Image 4";
FIG. 5b is the ground-truth (Ground Truth) saliency map of the "Image 4" Image;
FIG. 5c is a global color histogram based Image saliency map for an "Image 4" Image;
FIG. 5d is a region color contrast based Image saliency map of an "Image 4" Image;
FIG. 5e is an Image saliency map based on region-space sparsity for an "Image 4" Image;
FIG. 5f is the final Image saliency map of the "Image 4" Image;
FIG. 6a is an original Image of "Image 5";
FIG. 6b is the ground-truth (Ground Truth) saliency map of the "Image 5" Image;
FIG. 6c is a global color histogram based Image saliency map for an "Image 5" Image;
FIG. 6d is a region color contrast based Image saliency map of an "Image 5" Image;
FIG. 6e is an Image saliency map based on region space sparsity for an "Image 5" Image;
fig. 6f is a final Image saliency map of the "Image 5" Image.
Detailed Description
The invention is described in further detail below with reference to the embodiments and the accompanying drawings.
The invention provides a region-based image saliency map extraction method, the overall implementation block diagram of which is shown in FIG. 1, and the method comprises the following steps:
① Record the source image to be processed as {I_i(x, y)}, where i = 1, 2, 3, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W denotes the width of {I_i(x, y)}, H denotes the height of {I_i(x, y)}, and I_i(x, y) denotes the color value of the i-th component of the pixel point whose coordinate position is (x, y) in {I_i(x, y)}; the 1st component is the R component, the 2nd component the G component, and the 3rd component the B component.
② If only local saliency were considered, strongly changing edges or complicated background areas in the image would receive high saliency while the interior of a smooth target region would receive low saliency; global saliency, that is, the saliency of each pixel point relative to the whole image, must therefore also be considered. The method thus first obtains the quantized image of {I_i(x, y)} and the global color histogram of the quantized image, then obtains the color type of each pixel point in {I_i(x, y)} according to the quantized image, and then, according to the global color histogram of the quantized image of {I_i(x, y)} and the color type of each pixel point in {I_i(x, y)}, obtains the image saliency map of {I_i(x, y)} based on the global color histogram, denoted {HS(x, y)}, where HS(x, y) denotes the pixel value of the pixel point whose coordinate position is (x, y) in {HS(x, y)} and also denotes the global-color-histogram-based saliency value of the pixel point whose coordinate position is (x, y) in {I_i(x, y)}.
In this embodiment, the specific process of step two is:
②-1. Quantize the color value of each component of each pixel point in {I_i(x, y)} to obtain the quantized image of {I_i(x, y)}, denoted {P_i(x, y)}; record the color value of the i-th component of the pixel point whose coordinate position is (x, y) in {P_i(x, y)} as P_i(x, y), P_i(x, y) = ⌊I_i(x, y)/16⌋, where ⌊ ⌋ is the round-down (floor) symbol.
②-2. Calculate the global color histogram of {P_i(x, y)}, denoted {H(k) | 0 ≤ k ≤ 4095}, where H(k) denotes the number of all pixel points in {P_i(x, y)} belonging to the k-th color.
②-3. According to the color value of each component of each pixel point in {P_i(x, y)}, calculate the color type of the corresponding pixel point in {I_i(x, y)}; record the color type of the pixel point whose coordinate position is (x, y) in {I_i(x, y)} as k_xy, k_xy = P_3(x, y) × 256 + P_2(x, y) × 16 + P_1(x, y), where P_3(x, y), P_2(x, y) and P_1(x, y) denote the color values of the 3rd, 2nd and 1st components, respectively, of the pixel point whose coordinate position is (x, y) in {P_i(x, y)}.
②-4. Calculate the global-color-histogram-based saliency value of each pixel point in {I_i(x, y)}; record the global-color-histogram-based saliency value of the pixel point whose coordinate position is (x, y) in {I_i(x, y)} as HS(x, y),

HS(x, y) = Σ_{k=0}^{4095} (H(k) × D(k_xy, k)),

where D(k_xy, k) denotes the Euclidean distance between the k_xy-th color and the k-th color in {H(k) | 0 ≤ k ≤ 4095},

D(k_xy, k) = sqrt((p_{k_xy,1} − p_{k,1})² + (p_{k_xy,2} − p_{k,2})² + (p_{k_xy,3} − p_{k,3})²),

p_{k_xy,1}, p_{k_xy,2} and p_{k_xy,3} denote the color values of the 1st, 2nd and 3rd components corresponding to the k_xy-th color in {H(k) | 0 ≤ k ≤ 4095}, p_{k,1} denotes the color value of the 1st component corresponding to the k-th color in {H(k) | 0 ≤ k ≤ 4095}, p_{k,2} denotes the color value of the 2nd component corresponding to the k-th color, and p_{k,3} denotes the color value of the 3rd component corresponding to the k-th color, with p_{k,1} = mod(k, 16), p_{k,2} = mod(⌊k/16⌋, 16), p_{k,3} = ⌊k/256⌋, where mod() is the remainder-taking operation function.
②-5. According to the global-color-histogram-based saliency value of each pixel point in {I_i(x, y)}, obtain the image saliency map of {I_i(x, y)} based on the global color histogram, denoted {HS(x, y)}.
③ Adopt a superpixel (Superpixel) segmentation technique to divide {I_i(x, y)} into M non-overlapping regions, and then re-represent {I_i(x, y)} as a set of M regions, denoted {SP_h}. Considering local saliency, similar areas in an image generally have lower saliency, so the invention calculates the similarity between the respective regions in {SP_h}; denote the similarity between the p-th region and the q-th region in {SP_h} as Sim(SP_p, SP_q), where M ≥ 1, SP_h denotes the h-th region in {SP_h}, 1 ≤ h ≤ M, 1 ≤ p ≤ M, 1 ≤ q ≤ M, p ≠ q, SP_p denotes the p-th region in {SP_h}, and SP_q denotes the q-th region in {SP_h}. In the present embodiment, M = 200 is taken.
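The later steps only need a per-pixel region label map with M regions. A real implementation would use a superpixel algorithm such as SLIC (for example `skimage.segmentation.slic(img, n_segments=200)`, matching M = 200 in this embodiment); for a self-contained illustration, a regular grid partition can stand in for the segmenter (the function name `grid_regions` is mine):

```python
import numpy as np

def grid_regions(H, W, rows, cols):
    """Toy stand-in for a superpixel segmenter: partition an H x W image
    into rows*cols rectangular regions and return the integer label map.
    Not SLIC; it merely produces labels of the shape the method expects."""
    r = np.minimum(np.arange(H) * rows // H, rows - 1)
    c = np.minimum(np.arange(W) * cols // W, cols - 1)
    return r[:, None] * cols + c[None, :]
```

Swapping this grid for a true superpixel segmentation changes only the quality of the region boundaries, not the downstream computations.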
In this embodiment, the acquisition process of the similarity Sim(SP_p, SP_q) between the p-th region and the q-th region in {SP_h} in step ③ is:
③-1. Quantize the color value of each component of each pixel point in each region of {SP_h} to obtain the quantization region of each region in {SP_h}; denote the quantization region of the h-th region in {SP_h} as {P_{h,i}(x_h, y_h)}, and record the color value of the i-th component of the pixel point whose coordinate position is (x_h, y_h) in {P_{h,i}(x_h, y_h)} as P_{h,i}(x_h, y_h); supposing the pixel point whose coordinate position is (x_h, y_h) in {P_{h,i}(x_h, y_h)} has coordinate position (x, y) in {I_i(x, y)}, then P_{h,i}(x_h, y_h) = ⌊I_i(x, y)/16⌋, where 1 ≤ x_h ≤ W_h, 1 ≤ y_h ≤ H_h, W_h denotes the width of the h-th region in {SP_h}, H_h denotes the height of the h-th region in {SP_h}, and ⌊ ⌋ is the round-down (floor) symbol.
③-2. Calculate the color histogram of the quantization region of each region in {SP_h}; denote the color histogram of {P_{h,i}(x_h, y_h)} as {H_{SP_h}(k) | 0 ≤ k ≤ 4095}, where H_{SP_h}(k) denotes the number of all pixel points in {P_{h,i}(x_h, y_h)} belonging to the k-th color.
③-3. Normalize the color histogram of the quantization region of each region in {SP_h} to obtain the corresponding normalized color histogram; denote the normalized color histogram obtained after normalizing {H_{SP_h}(k) | 0 ≤ k ≤ 4095} as {H′_{SP_h}(k) | 0 ≤ k ≤ 4095},

H′_{SP_h}(k) = H_{SP_h}(k) / Σ_{h′=1}^{M} H_{SP_{h′}}(k),

where H′_{SP_h}(k) denotes the occurrence probability of pixel points belonging to the k-th color in the quantization region {P_{h,i}(x_h, y_h)} of the h-th region in {SP_h}, H_{SP_{h′}}(k) denotes the number of all pixel points belonging to the k-th color in the quantization region {P_{h′,i}(x_{h′}, y_{h′})} of the h′-th region in {SP_h}, 1 ≤ x_{h′} ≤ W_{h′}, 1 ≤ y_{h′} ≤ H_{h′}, W_{h′} denotes the width of the h′-th region in {SP_h}, H_{h′} denotes the height of the h′-th region in {SP_h}, and P_{h′,i}(x_{h′}, y_{h′}) denotes the color value of the i-th component of the pixel point whose coordinate position is (x_{h′}, y_{h′}) in {P_{h′,i}(x_{h′}, y_{h′})}.
③-4. Calculate the similarity between every two regions in {SP_h}. Denote the similarity between the p-th region and the q-th region in {SP_h} as Sim(SP_p, SP_q), where Sim(SP_p, SP_q) = Sim_c(SP_p, SP_q) × Sim_d(SP_p, SP_q). Sim_c(SP_p, SP_q) represents the color similarity between the p-th region and the q-th region in {SP_h},

Sim_c(SP_p, SP_q) = Σ_{k=0}^{4095} min(H'_{SP_p}(k), H'_{SP_q}(k)),

and Sim_d(SP_p, SP_q) represents the spatial similarity between the p-th region and the q-th region in {SP_h}, obtained from the Euclidean distance between the coordinate positions of the center pixel points of the two regions. Here, SP_p represents the p-th region in {SP_h} and SP_q represents the q-th region in {SP_h}; H'_{SP_p}(k) represents the probability that a pixel point belonging to the k-th color appears in the quantized region {P_{p,i}(x_p, y_p)} of the p-th region in {SP_h}; H'_{SP_q}(k) represents the probability that a pixel point belonging to the k-th color appears in the quantized region {P_{q,i}(x_q, y_q)} of the q-th region in {SP_h}; 1 ≤ x_p ≤ W_p, 1 ≤ y_p ≤ H_p, where W_p represents the width of the p-th region in {SP_h}, H_p represents the height of the p-th region in {SP_h}, and P_{p,i}(x_p, y_p) represents the color value of the i-th component of the pixel point whose coordinate position is (x_p, y_p) in {P_{p,i}(x_p, y_p)}; 1 ≤ x_q ≤ W_q, 1 ≤ y_q ≤ H_q, where W_q represents the width of the q-th region in {SP_h}, H_q represents the height of the q-th region in {SP_h}, and P_{q,i}(x_q, y_q) represents the color value of the i-th component of the pixel point whose coordinate position is (x_q, y_q) in {P_{q,i}(x_q, y_q)}; min() is the minimum-value function; the symbol "|| ||" is the Euclidean distance symbol, applied here to the coordinate positions of the center pixel points of the p-th region and the q-th region in {SP_h}.
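The region similarity above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: Sim_c is the histogram intersection given by the formula, while the exact form of Sim_d is not stated in the text, so the exponential falloff and the `sigma` value are assumptions (center coordinates are assumed normalized to [0, 1]). All function names are hypothetical.

```python
import numpy as np

def color_similarity(hist_p, hist_q):
    # Sim_c(SP_p, SP_q): histogram intersection over the 4096 quantized
    # colors, i.e. the sum of min(H'_SP_p(k), H'_SP_q(k)) for k = 0..4095.
    return float(np.minimum(hist_p, hist_q).sum())

def spatial_similarity(center_p, center_q, sigma=0.25):
    # Sim_d(SP_p, SP_q): derived from the Euclidean distance between the
    # two regions' center pixel points. The exponential falloff and the
    # sigma value are assumptions; the text only states that Sim_d is
    # obtained from this distance.
    d = np.linalg.norm(np.asarray(center_p, float) - np.asarray(center_q, float))
    return float(np.exp(-d / sigma))

def region_similarity(hist_p, center_p, hist_q, center_q):
    # Sim(SP_p, SP_q) = Sim_c x Sim_d.
    return color_similarity(hist_p, hist_q) * spatial_similarity(center_p, center_q)
```

For two regions with identical quantized-color histograms and coincident centers the similarity is 1, as expected of a product of an intersection of probability histograms and a distance-based term.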
④ According to the similarity between the regions in {SP_h}, obtain the image saliency map of {I_i(x, y)} based on region color contrast, denoted as {NGC(x, y)}, where NGC(x, y) represents the pixel value of the pixel point whose coordinate position is (x, y) in {NGC(x, y)}.
In this embodiment, the specific process of step ④ is:
④-1. Calculate the color contrast of each region in {SP_h}. Denote the color contrast of the h-th region in {SP_h} as NGC_{SP_h},

NGC_{SP_h} = Σ_{q=1}^{M} W(SP_h, SP_q) × ||m_{SP_h} - m_{SP_q}||,

where SP_h represents the h-th region in {SP_h} and SP_q represents the q-th region in {SP_h}; W(SP_h, SP_q) is a weight determined by the total number of pixel points contained in the region and by the spatial similarity Sim_d(SP_h, SP_q) between the h-th region and the q-th region in {SP_h}, Sim_d(SP_h, SP_q) being obtained from the Euclidean distance between the coordinate positions of the center pixel points of the h-th region and the q-th region (the symbol "|| ||" is the Euclidean distance symbol); m_{SP_h} represents the color mean vector of the h-th region in {SP_h}, i.e., the vector obtained by averaging the color vectors of all pixel points in the h-th region of {SP_h}; m_{SP_q} represents the color mean vector of the q-th region in {SP_h}.
④-2. Normalize the color contrast of each region in {SP_h} to obtain the corresponding normalized color contrast. Denote the normalized color contrast obtained by normalizing the color contrast NGC_{SP_h} of the h-th region in {SP_h} as NGC'_{SP_h},

NGC'_{SP_h} = (NGC_{SP_h} - NGC_min) / (NGC_max - NGC_min),

where NGC_min represents the minimum color contrast among the M regions in {SP_h} and NGC_max represents the maximum color contrast among the M regions in {SP_h}.
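The min-max normalization above is straightforward; a minimal sketch (the zero-range guard is an addition of the sketch, since the patent does not cover the case where all regions have equal contrast):

```python
import numpy as np

def normalize_region_values(values):
    # Min-max normalization of per-region color contrasts:
    # NGC'_SP_h = (NGC_SP_h - NGC_min) / (NGC_max - NGC_min).
    values = np.asarray(values, float)
    lo, hi = values.min(), values.max()
    if hi == lo:
        # All M regions have equal contrast; return zeros to avoid a
        # division by zero.
        return np.zeros_like(values)
    return (values - lo) / (hi - lo)
```

The same routine serves step ⑤-2, which normalizes the spatial sparsities with the identical min-max rule.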
④-3. Calculate the color-contrast-based saliency value of each region in {SP_h}. Denote the color-contrast-based saliency value of the h-th region in {SP_h} as NGC''_{SP_h},

NGC''_{SP_h} = (Σ_{q=1}^{M} (Sim(SP_h, SP_q) × NGC'_{SP_q})) / (Σ_{q=1}^{M} Sim(SP_h, SP_q)),

where Sim(SP_h, SP_q) represents the similarity between the h-th region and the q-th region in {SP_h}, and NGC'_{SP_q} represents the normalized color contrast of the q-th region in {SP_h}.
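The similarity-weighted average used here can be written as one matrix operation. A minimal sketch (function name hypothetical); the same operation serves step ⑤-3, which applies the identical weighting to the normalized spatial sparsities:

```python
import numpy as np

def refine_saliency(sim, values):
    # value''_h = sum_q(sim[h, q] * values[q]) / sum_q(sim[h, q]),
    # a similarity-weighted average of the normalized per-region values.
    # `sim` is the M x M matrix of Sim(SP_h, SP_q) from step (3)-4.
    sim = np.asarray(sim, float)
    values = np.asarray(values, float)
    return sim.dot(values) / sim.sum(axis=1)
```

With uniform similarities every region receives the mean of all normalized values, which shows the smoothing effect of the weighting.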
④-4. Take the color-contrast-based saliency value of each region in {SP_h} as the saliency value of all pixel points in the corresponding region, i.e., for the h-th region in {SP_h}, take the color-contrast-based saliency value NGC''_{SP_h} of the h-th region as the saliency value of all pixel points in that region, so as to obtain the image saliency map of {I_i(x, y)} based on region color contrast, denoted as {NGC(x, y)}, where NGC(x, y) represents the pixel value of the pixel point whose coordinate position is (x, y) in {NGC(x, y)}.
⑤ According to the similarity between the regions in {SP_h}, obtain the image saliency map of {I_i(x, y)} based on region spatial sparsity, denoted as {NSS(x, y)}, where NSS(x, y) represents the pixel value of the pixel point whose coordinate position is (x, y) in {NSS(x, y)}.
In this embodiment, the specific process of step ⑤ is:
⑤-1. Calculate the spatial sparsity of each region in {SP_h}. Denote the spatial sparsity of the h-th region in {SP_h} as NSS_{SP_h},

NSS_{SP_h} = (Σ_{q=1}^{M} (Sim(SP_h, SP_q) × D_{SP_q})) / (Σ_{q=1}^{M} Sim(SP_h, SP_q)),

where Sim(SP_h, SP_q) represents the similarity between the h-th region and the q-th region in {SP_h}, and D_{SP_q} represents the Euclidean distance between the center pixel point of the q-th region in {SP_h} and the center pixel point of {I_i(x, y)}.
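The spatial sparsity can be sketched as follows. This is an illustrative reading, not the patent's implementation: it takes the similarity-weighted average, over all regions q, of the distance between region q's center and the image center; the function name is hypothetical.

```python
import numpy as np

def spatial_sparsity(h, sim, centers, image_center):
    # NSS_SP_h: similarity-weighted average over all regions q of the
    # Euclidean distance D_SP_q between region q's center pixel point and
    # the center pixel point of {I_i(x, y)}.
    d = np.linalg.norm(np.asarray(centers, float) - np.asarray(image_center, float), axis=1)
    w = np.asarray(sim, float)[h]  # row h of the M x M similarity matrix
    return float((w * d).sum() / w.sum())
```

Intuitively, a region whose similar regions all sit far from the image center gets a large sparsity value, so (after normalization) background regions scattered toward the image border are suppressed.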
⑤-2. Normalize the spatial sparsity of each region in {SP_h} to obtain the corresponding normalized spatial sparsity. Denote the normalized spatial sparsity obtained by normalizing the spatial sparsity NSS_{SP_h} of the h-th region in {SP_h} as NSS'_{SP_h},

NSS'_{SP_h} = (NSS_{SP_h} - NSS_min) / (NSS_max - NSS_min),

where NSS_min represents the minimum spatial sparsity among the M regions in {SP_h} and NSS_max represents the maximum spatial sparsity among the M regions in {SP_h}.
⑤-3. Calculate the spatial-sparsity-based saliency value of each region in {SP_h}. Denote the spatial-sparsity-based saliency value of the h-th region in {SP_h} as NSS''_{SP_h},

NSS''_{SP_h} = (Σ_{q=1}^{M} (Sim(SP_h, SP_q) × NSS'_{SP_q})) / (Σ_{q=1}^{M} Sim(SP_h, SP_q)),

where Sim(SP_h, SP_q) represents the similarity between the h-th region and the q-th region in {SP_h}, and NSS'_{SP_q} represents the normalized spatial sparsity of the q-th region in {SP_h}.
⑤-4. Take the spatial-sparsity-based saliency value of each region in {SP_h} as the saliency value of all pixel points in the corresponding region, i.e., for the h-th region in {SP_h}, take the spatial-sparsity-based saliency value NSS''_{SP_h} of the h-th region as the saliency value of all pixel points in that region, so as to obtain the image saliency map of {I_i(x, y)} based on region spatial sparsity, denoted as {NSS(x, y)}, where NSS(x, y) represents the pixel value of the pixel point whose coordinate position is (x, y) in {NSS(x, y)}.
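Steps ④-4 and ⑤-4 both spread one value per region over every pixel of that region. A minimal sketch, assuming the superpixel segmentation of step ③ is available as a label image with region indices 0..M-1 (a common representation, though the patent does not prescribe it):

```python
import numpy as np

def region_values_to_map(labels, values):
    # Each pixel receives the saliency value of the region it belongs to.
    # `labels` is the H x W label image of region indices 0..M-1 (assumed
    # output format of the superpixel segmentation); `values` holds one
    # saliency value per region.
    return np.asarray(values, float)[np.asarray(labels)]
```

NumPy fancy indexing performs the per-pixel lookup in one vectorized operation, so no explicit loop over pixels is needed.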
⑥ Fuse the image saliency map {HS(x, y)} of {I_i(x, y)} based on the global color histogram, the image saliency map {NGC(x, y)} of {I_i(x, y)} based on region color contrast, and the image saliency map {NSS(x, y)} of {I_i(x, y)} based on region spatial sparsity, so as to obtain the final image saliency map of {I_i(x, y)}, denoted as {Sal(x, y)}. Denote the pixel value of the pixel point whose coordinate position is (x, y) in {Sal(x, y)} as Sal(x, y), where Sal(x, y) = HS(x, y) × NGC(x, y) × NSS(x, y).
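The fusion of step ⑥ is a pixel-wise product of the three maps; a minimal sketch (function name hypothetical, maps assumed to be arrays of equal shape):

```python
import numpy as np

def fuse_saliency(hs, ngc, nss):
    # Sal(x, y) = HS(x, y) * NGC(x, y) * NSS(x, y): pixel-wise product of
    # the three saliency maps.
    return np.asarray(hs, float) * np.asarray(ngc, float) * np.asarray(nss, float)
```

Because the fusion is multiplicative, a pixel is salient in {Sal(x, y)} only when all three cues agree; any single map near zero suppresses the pixel.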
The method of the invention was used to extract saliency maps for five images, "Image1", "Image2", "Image3", "Image4" and "Image5", from the salient-object image library MSRA provided by Microsoft Research Asia. Figs. 2a, 3a, 4a, 5a and 6a show the original images of "Image1" to "Image5", respectively; figs. 2b, 3b, 4b, 5b and 6b show the corresponding real (Ground Truth) saliency maps; figs. 2c, 3c, 4c, 5c and 6c show the image saliency maps based on the global color histogram; figs. 2d, 3d, 4d, 5d and 6d show the image saliency maps based on region color contrast; figs. 2e, 3e, 4e, 5e and 6e show the image saliency maps based on region spatial sparsity; and figs. 2f, 3f, 4f, 5f and 6f show the final image saliency maps. As can be seen from fig. 2a to fig. 6f, the image saliency maps obtained by the method of the invention conform well to salient semantic features, because the method takes into account the saliency variation of both global and local regions.