CN103632153B - Region-based image saliency map extracting method - Google Patents

Region-based image saliency map extracting method

Info

Publication number
CN103632153B
CN103632153B
Authority
CN
China
Prior art keywords
region
color
represent
pixel
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310651864.5A
Other languages
Chinese (zh)
Other versions
CN103632153A (en)
Inventor
邵枫
姜求平
蒋刚毅
郁梅
李福翠
彭宗举
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZHEJIANG DUYAN INFORMATION TECHNOLOGY Co.,Ltd.
Original Assignee
Ningbo University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo University
Priority to CN201310651864.5A
Publication of CN103632153A
Application granted
Publication of CN103632153B
Legal status: Active
Anticipated expiration

Abstract

The invention discloses a region-based image saliency map extracting method. The method includes: first, calculating the global color histogram of an image to obtain an image saliency map based on the global color histogram; second, segmenting the image with a superpixel segmentation technique, calculating the color contrast and spatial sparsity of each region, and weighting by the similarity between regions to obtain an image saliency map based on region color contrast and an image saliency map based on region spatial sparsity; finally, fusing the image saliency map based on the global color histogram, the image saliency map based on region color contrast and the image saliency map based on region spatial sparsity to obtain the final image saliency map. The method has the advantage that the obtained image saliency map well reflects the saliency variations of both global and local regions and conforms to the salient semantic features of the image.

Description

Region-based image saliency map extracting method
Technical field
The present invention relates to an image signal processing method, and in particular to a region-based image saliency map extracting method.
Background technology
In human visual reception and information processing, brain resources are limited and external environmental information differs in importance, so the human brain does not treat all external information equally but processes it selectively. When viewing an image or video clip, people do not distribute their attention evenly across every region of the image; instead, they pay more attention to certain salient regions. How to detect and extract the highly salient regions of images and video is therefore an important research topic in computer vision and content-based video retrieval.
Existing saliency models are selective-attention models that simulate the visual attention mechanism of living organisms. They compute, for each pixel, its contrast with the surrounding background in terms of color, brightness and orientation, and assemble the saliency values of all pixels into a saliency map. However, such methods cannot extract image saliency information well, because pixel-based salient features do not reflect the salient semantics perceived by the human eye, whereas region-based salient features can effectively improve the stability and accuracy of extraction. Therefore, how to segment an image into regions, how to extract the features of each region, how to describe the salient features of each region, and how to measure the saliency of a region itself and the saliency between regions are all problems that need to be studied and solved for region-based saliency map extraction.
Summary of the invention
The technical problem to be solved by the present invention is to provide a region-based image saliency map extracting method that conforms to salient semantic features and has high extraction stability and accuracy.
The technical scheme adopted by the present invention to solve the above technical problem is a region-based image saliency map extracting method, characterized by comprising the following steps:
1. Denote the source image to be processed as {I_i(x,y)}, where i = 1, 2, 3, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W represents the width of {I_i(x,y)}, H represents the height of {I_i(x,y)}, and I_i(x,y) represents the color value of the i-th component of the pixel with coordinate position (x,y) in {I_i(x,y)}; the 1st component is the R component, the 2nd component is the G component, and the 3rd component is the B component;
2. First obtain the quantized image of {I_i(x,y)} and the global color histogram of the quantized image; then, according to the quantized image of {I_i(x,y)}, obtain the color category of each pixel in {I_i(x,y)}; then, according to the global color histogram of the quantized image and the color category of each pixel, obtain the image saliency map of {I_i(x,y)} based on the global color histogram, denoted {HS(x,y)}, where HS(x,y) represents the pixel value of the pixel with coordinate position (x,y) in {HS(x,y)}, and also represents the saliency value, based on the global color histogram, of the pixel with coordinate position (x,y) in {I_i(x,y)};
3. Use a superpixel segmentation technique to divide {I_i(x,y)} into M non-overlapping regions, and re-express {I_i(x,y)} as the set of M regions, denoted {SP_h}; then calculate the similarity between each pair of regions in {SP_h}, denoting the similarity between the p-th region and the q-th region as Sim(SP_p, SP_q), where M ≥ 1, SP_h represents the h-th region in {SP_h}, 1 ≤ h ≤ M, 1 ≤ p ≤ M, 1 ≤ q ≤ M, p ≠ q, SP_p represents the p-th region in {SP_h}, and SP_q represents the q-th region in {SP_h};
4. According to the similarities between the regions in {SP_h}, obtain the image saliency map of {I_i(x,y)} based on region color contrast, denoted {NGC(x,y)}, where NGC(x,y) represents the pixel value of the pixel with coordinate position (x,y) in {NGC(x,y)};
5. According to the similarities between the regions in {SP_h}, obtain the image saliency map of {I_i(x,y)} based on region spatial sparsity, denoted {NSS(x,y)}, where NSS(x,y) represents the pixel value of the pixel with coordinate position (x,y) in {NSS(x,y)};
6. Fuse the image saliency map {HS(x,y)} of {I_i(x,y)} based on the global color histogram, the image saliency map {NGC(x,y)} based on region color contrast, and the image saliency map {NSS(x,y)} based on region spatial sparsity to obtain the final image saliency map of {I_i(x,y)}, denoted {Sal(x,y)}; denote the pixel value of the pixel with coordinate position (x,y) in {Sal(x,y)} as Sal(x,y), Sal(x,y) = HS(x,y) × NGC(x,y) × NSS(x,y).
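The fusion in step 6 is a pixel-wise product of the three maps. A minimal sketch in Python (the names `fuse`, `hs`, `ngc`, `nss` and the list-of-rows layout are illustrative assumptions, not part of the patent):

```python
def fuse(hs, ngc, nss):
    # Sal(x, y) = HS(x, y) * NGC(x, y) * NSS(x, y), pixel by pixel.
    # Each argument is a list of rows of floats with identical dimensions.
    return [[a * b * c for a, b, c in zip(r1, r2, r3)]
            for r1, r2, r3 in zip(hs, ngc, nss)]
```

Because the fusion is multiplicative, a pixel must score highly in all three maps to remain salient in {Sal(x,y)}.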
The detailed process of step 2 is:
2-1. Quantize the color value of each component of each pixel in {I_i(x,y)} to obtain the quantized image of {I_i(x,y)}, denoted {P_i(x,y)}; denote the color value of the i-th component of the pixel with coordinate position (x,y) in {P_i(x,y)} as P_i(x,y), P_i(x,y) = ⌊I_i(x,y)/16⌋, where ⌊ ⌋ is the round-down (floor) operator;
2-2. Calculate the global color histogram of {P_i(x,y)}, denoted {H(k) | 0 ≤ k ≤ 4095}, where H(k) represents the number of all pixels in {P_i(x,y)} belonging to the k-th color;
2-3. According to the color values of the components of each pixel in {P_i(x,y)}, calculate the color category of the corresponding pixel in {I_i(x,y)}; denote the color category of the pixel with coordinate position (x,y) in {I_i(x,y)} as k_xy, k_xy = P_3(x,y)×256 + P_2(x,y)×16 + P_1(x,y), where P_3(x,y), P_2(x,y) and P_1(x,y) represent the color values of the 3rd, 2nd and 1st components of the pixel with coordinate position (x,y) in {P_i(x,y)};
2-4. Calculate the saliency value based on the global color histogram of each pixel in {I_i(x,y)}; denote the value of the pixel with coordinate position (x,y) as HS(x,y):
HS(x,y) = Σ_{k=0}^{4095} ( H(k) × D(k_xy, k) ),
D(k_xy, k) = √( (p_{k_xy,1} − p_{k,1})² + (p_{k_xy,2} − p_{k,2})² + (p_{k_xy,3} − p_{k,3})² ),
where D(k_xy, k) represents the Euclidean distance between the k_xy-th color and the k-th color in {H(k) | 0 ≤ k ≤ 4095}; p_{k,1} = mod(k, 16), p_{k,2} = mod(⌊k/16⌋, 16) and p_{k,3} = ⌊k/256⌋ represent the color values of the 1st, 2nd and 3rd components corresponding to the k-th color (and p_{k_xy,1}, p_{k_xy,2}, p_{k_xy,3} likewise for the k_xy-th color), and mod() is the remainder function;
2-5. According to the saliency value based on the global color histogram of each pixel in {I_i(x,y)}, obtain the image saliency map {HS(x,y)} of {I_i(x,y)} based on the global color histogram.
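Steps 2-1 through 2-5 can be sketched as follows. This is a hedged illustration assuming an 8-bit RGB input given as a list of rows of (R, G, B) tuples; the function names (`quantize`, `color_index`, `global_histogram_saliency`) are invented for the sketch:

```python
import math

def quantize(c):
    # Step 2-1: quantize an 8-bit channel value into 16 levels.
    return c // 16

def color_index(r, g, b):
    # Step 2-3: k = P3*256 + P2*16 + P1 (component 1 = R, 2 = G, 3 = B).
    return quantize(b) * 256 + quantize(g) * 16 + quantize(r)

def components(k):
    # Recover the three quantized components from a color index k.
    return (k % 16, (k // 16) % 16, k // 256)

def global_histogram_saliency(pixels):
    # Steps 2-2 and 2-4: histogram over the 4096 quantized colors, then
    # HS(x, y) = sum_k H(k) * D(k_xy, k) for every pixel.
    idx = [[color_index(*p) for p in row] for row in pixels]
    hist = {}
    for row in idx:
        for k in row:
            hist[k] = hist.get(k, 0) + 1
    def dist(ka, kb):
        a, b = components(ka), components(kb)
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    return [[sum(n * dist(kxy, k) for k, n in hist.items()) for kxy in row]
            for row in idx]
```

Pixels whose colors lie far from the image's dominant colors accumulate large distance-weighted sums and thus come out more salient.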
The acquisition process of the similarity Sim(SP_p, SP_q) between the p-th region and the q-th region of {SP_h} in step 3 is:
3-1. Quantize the color value of each component of each pixel in each region of {SP_h} to obtain the quantized region of each region; denote the quantized region of the h-th region of {SP_h} as {P_{h,i}(x_h, y_h)}, and denote the color value of the i-th component of the pixel with coordinate position (x_h, y_h) in {P_{h,i}(x_h, y_h)} as P_{h,i}(x_h, y_h). Assuming the pixel with coordinate position (x_h, y_h) in {P_{h,i}(x_h, y_h)} has coordinate position (x, y) in {I_i(x,y)}, then P_{h,i}(x_h, y_h) = ⌊I_i(x,y)/16⌋, where 1 ≤ x_h ≤ W_h, 1 ≤ y_h ≤ H_h, W_h and H_h represent the width and height of the h-th region in {SP_h}, and ⌊ ⌋ is the round-down (floor) operator;
3-2. Calculate the color histogram of the quantized region of each region in {SP_h}; denote the color histogram of {P_{h,i}(x_h, y_h)} as {H_{SP_h}(k) | 0 ≤ k ≤ 4095}, where H_{SP_h}(k) represents the number of all pixels in {P_{h,i}(x_h, y_h)} belonging to the k-th color;
3-3. Normalize the color histogram of the quantized region of each region in {SP_h} to obtain the corresponding normalized color histogram; denote the normalized color histogram obtained from {H_{SP_h}(k) | 0 ≤ k ≤ 4095} as {H'_{SP_h}(k) | 0 ≤ k ≤ 4095}, H'_{SP_h}(k) = H_{SP_h}(k) / Σ_{h'=1}^{M} H_{SP_{h'}}(k), where H'_{SP_h}(k) represents the occurrence probability of pixels belonging to the k-th color in the quantized region {P_{h,i}(x_h, y_h)} of the h-th region, and H_{SP_{h'}}(k) represents the number of all pixels belonging to the k-th color in the quantized region {P_{h',i}(x_{h'}, y_{h'})} of the h'-th region, 1 ≤ x_{h'} ≤ W_{h'}, 1 ≤ y_{h'} ≤ H_{h'}, with W_{h'} and H_{h'} the width and height of the h'-th region in {SP_h};
3-4. Calculate the similarity between the p-th region and the q-th region in {SP_h}, denoted Sim(SP_p, SP_q), Sim(SP_p, SP_q) = Sim_c(SP_p, SP_q) × Sim_d(SP_p, SP_q), where Sim_c(SP_p, SP_q) represents the color similarity between the p-th region and the q-th region, Sim_c(SP_p, SP_q) = Σ_{k=0}^{4095} min( H'_{SP_p}(k), H'_{SP_q}(k) ), and Sim_d(SP_p, SP_q) represents the spatial similarity between the p-th region and the q-th region, computed from ‖c_p − c_q‖, the Euclidean distance between the coordinate positions c_p and c_q of the central pixel points of the two regions. Here H'_{SP_p}(k) and H'_{SP_q}(k) are the occurrence probabilities of pixels belonging to the k-th color in the quantized regions of the p-th and q-th regions respectively, min() is the minimum-value function, and ‖ ‖ is the Euclidean distance operator.
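The region similarity of step 3-4 can be sketched as below. The histogram intersection for Sim_c follows the step above; for the spatial term Sim_d, whose exact expression is not reproduced above, a Gaussian of the distance between region centers is used purely as an illustrative assumption (`sigma` is an invented parameter). For simplicity, the sketch also normalizes each region's histogram by its own pixel count rather than with the cross-region normalization of step 3-3:

```python
import math

def normalized_hist(indices):
    # Per-region histogram of quantized color indices, normalized to sum to 1.
    h = {}
    for k in indices:
        h[k] = h.get(k, 0) + 1
    n = len(indices)
    return {k: v / n for k, v in h.items()}

def color_similarity(h1, h2):
    # Sim_c: histogram intersection, the sum of bin-wise minima.
    return sum(min(h1.get(k, 0.0), h2.get(k, 0.0)) for k in set(h1) | set(h2))

def spatial_similarity(c1, c2, sigma=0.5):
    # Sim_d: illustrative Gaussian of the distance between region centers
    # (the patent's exact expression for Sim_d is not reproduced here).
    d = math.dist(c1, c2)
    return math.exp(-d * d / (2.0 * sigma * sigma))

def region_similarity(h1, c1, h2, c2):
    # Sim(SP_p, SP_q) = Sim_c(SP_p, SP_q) * Sim_d(SP_p, SP_q).
    return color_similarity(h1, h2) * spatial_similarity(c1, c2)
```

Two regions with identical color content and coincident centers score 1, and the score decays as either the histograms diverge or the centers move apart.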
The detailed process of step 4 is:
4-1. Calculate the color contrast of each region in {SP_h}; denote the color contrast of the h-th region as NGC_{SP_h}, NGC_{SP_h} = Σ_{q=1}^{M} W(SP_h, SP_q) × ‖m_{SP_h} − m_{SP_q}‖, where W(SP_h, SP_q) is a weight determined by the total number of pixels contained in the region and by the spatial similarity Sim_d(SP_h, SP_q) between the h-th and q-th regions (itself computed from the Euclidean distance between the coordinate positions of the central pixel points of the two regions), ‖ ‖ is the Euclidean distance operator, m_{SP_h} represents the color mean vector of the h-th region, and m_{SP_q} represents the color mean vector of the q-th region;
4-2. Normalize the color contrast of each region in {SP_h}; denote the normalized color contrast obtained from NGC_{SP_h} as NGC'_{SP_h}, NGC'_{SP_h} = (NGC_{SP_h} − NGC_min) / (NGC_max − NGC_min), where NGC_min and NGC_max represent the minimum and maximum color contrast among the M regions of {SP_h};
4-3. Calculate the saliency value based on color contrast of each region in {SP_h}; denote the value of the h-th region as NGC''_{SP_h}, NGC''_{SP_h} = [ Σ_{q=1}^{M} ( Sim(SP_h, SP_q) × NGC'_{SP_q} ) ] / [ Σ_{q=1}^{M} Sim(SP_h, SP_q) ], where Sim(SP_h, SP_q) represents the similarity between the h-th region and the q-th region;
4-4. Take the saliency value based on color contrast of each region in {SP_h} as the saliency value of all pixels in the corresponding region, thereby obtaining the image saliency map {NGC(x,y)} of {I_i(x,y)} based on region color contrast, where NGC(x,y) represents the pixel value of the pixel with coordinate position (x,y) in {NGC(x,y)}.
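A sketch of steps 4-1 through 4-3, assuming the weights W(SP_h, SP_q) and similarities Sim(SP_h, SP_q) are supplied as precomputed matrices (their construction is described above); all names are illustrative:

```python
def color_contrast(means, weights):
    # Step 4-1: NGC_h = sum_q W(h, q) * ||m_h - m_q||,
    # then step 4-2: min-max normalization over all regions.
    M = len(means)
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    ngc = [sum(weights[h][q] * dist(means[h], means[q]) for q in range(M))
           for h in range(M)]
    lo, hi = min(ngc), max(ngc)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in ngc]

def refine(values, sim):
    # Step 4-3: similarity-weighted average of the values over all regions.
    M = len(values)
    return [sum(sim[h][q] * values[q] for q in range(M)) /
            sum(sim[h][q] for q in range(M)) for h in range(M)]
```

`refine` also serves step 5-3, since the spatial-sparsity values are smoothed with the same similarity weighting.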
The detailed process of step 5 is:
5-1. Calculate the spatial sparsity of each region in {SP_h}; denote the spatial sparsity of the h-th region as NSS_{SP_h}, NSS_{SP_h} = [ Σ_{q=1}^{M} ( Sim(SP_h, SP_q) × D_{SP_q} ) ] / [ Σ_{q=1}^{M} Sim(SP_h, SP_q) ], where Sim(SP_h, SP_q) represents the similarity between the h-th region and the q-th region, and D_{SP_q} represents the Euclidean distance between the central pixel point of the q-th region and the central pixel point of {I_i(x,y)};
5-2. Normalize the spatial sparsity of each region in {SP_h}; denote the normalized spatial sparsity obtained from NSS_{SP_h} as NSS'_{SP_h}, NSS'_{SP_h} = (NSS_{SP_h} − NSS_min) / (NSS_max − NSS_min), where NSS_min and NSS_max represent the minimum and maximum spatial sparsity among the M regions of {SP_h};
5-3. Calculate the saliency value based on spatial sparsity of each region in {SP_h}; denote the value of the h-th region as NSS''_{SP_h}, NSS''_{SP_h} = [ Σ_{q=1}^{M} ( Sim(SP_h, SP_q) × NSS'_{SP_q} ) ] / [ Σ_{q=1}^{M} Sim(SP_h, SP_q) ];
5-4. Take the saliency value based on spatial sparsity of each region in {SP_h} as the saliency value of all pixels in the corresponding region, thereby obtaining the image saliency map {NSS(x,y)} of {I_i(x,y)} based on region spatial sparsity, where NSS(x,y) represents the pixel value of the pixel with coordinate position (x,y) in {NSS(x,y)}.
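Steps 5-1 and 5-2 can be sketched as follows, again with a precomputed similarity matrix and with all names illustrative; `image_center` is the coordinate position of the central pixel point of {I_i(x,y)}:

```python
def spatial_sparsity(centers, image_center, sim):
    # Step 5-1: D_q = distance from each region's center to the image
    # center, averaged with similarity weights; step 5-2: min-max normalize.
    M = len(centers)
    d = [((cx - image_center[0]) ** 2 + (cy - image_center[1]) ** 2) ** 0.5
         for cx, cy in centers]
    nss = [sum(sim[h][q] * d[q] for q in range(M)) /
           sum(sim[h][q] for q in range(M)) for h in range(M)]
    lo, hi = min(nss), max(nss)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in nss]
```

The min-max normalization maps the weighted distances into [0, 1], so the map can be fused multiplicatively with the other two in step 6.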
Compared with the prior art, the advantages of the present invention are:
1) The inventive method computes an image saliency map based on the global color histogram, an image saliency map based on region color contrast and an image saliency map based on region spatial sparsity, and fuses them to obtain the final image saliency map. The obtained map well reflects the saliency variations of both the global and local regions of the image, with high stability and accuracy.
2) The inventive method uses superpixel segmentation to segment the image, uses histogram features to compute the color contrast and spatial sparsity of each region, and finally weights by the similarity between regions to obtain the final region-based image saliency map, so the extracted information conforms to salient semantic characteristics.
Brief description of the drawings
Fig. 1 is the overall block diagram of the inventive method;
Fig. 2a is the original image "Image1";
Fig. 2b is the ground-truth saliency map of the "Image1" image;
Fig. 2c is the image saliency map of the "Image1" image based on the global color histogram;
Fig. 2d is the image saliency map of the "Image1" image based on region color contrast;
Fig. 2e is the image saliency map of the "Image1" image based on region spatial sparsity;
Fig. 2f is the final image saliency map of the "Image1" image;
Fig. 3a is the original image "Image2";
Fig. 3b is the ground-truth saliency map of the "Image2" image;
Fig. 3c is the image saliency map of the "Image2" image based on the global color histogram;
Fig. 3d is the image saliency map of the "Image2" image based on region color contrast;
Fig. 3e is the image saliency map of the "Image2" image based on region spatial sparsity;
Fig. 3f is the final image saliency map of the "Image2" image;
Fig. 4a is the original image "Image3";
Fig. 4b is the ground-truth saliency map of the "Image3" image;
Fig. 4c is the image saliency map of the "Image3" image based on the global color histogram;
Fig. 4d is the image saliency map of the "Image3" image based on region color contrast;
Fig. 4e is the image saliency map of the "Image3" image based on region spatial sparsity;
Fig. 4f is the final image saliency map of the "Image3" image;
Fig. 5a is the original image "Image4";
Fig. 5b is the ground-truth saliency map of the "Image4" image;
Fig. 5c is the image saliency map of the "Image4" image based on the global color histogram;
Fig. 5d is the image saliency map of the "Image4" image based on region color contrast;
Fig. 5e is the image saliency map of the "Image4" image based on region spatial sparsity;
Fig. 5f is the final image saliency map of the "Image4" image;
Fig. 6a is the original image "Image5";
Fig. 6b is the ground-truth saliency map of the "Image5" image;
Fig. 6c is the image saliency map of the "Image5" image based on the global color histogram;
Fig. 6d is the image saliency map of the "Image5" image based on region color contrast;
Fig. 6e is the image saliency map of the "Image5" image based on region spatial sparsity;
Fig. 6f is the final image saliency map of the "Image5" image.
Detailed description of the invention
The present invention is described in further detail below with reference to the embodiments and the accompanying drawings.
The present invention proposes a region-based image saliency map extracting method, whose overall block diagram is shown in Fig. 1. It comprises the following steps:
1. Denote the source image to be processed as {I_i(x,y)}, where i = 1, 2, 3, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W represents the width of {I_i(x,y)}, H represents the height of {I_i(x,y)}, and I_i(x,y) represents the color value of the i-th component of the pixel with coordinate position (x,y) in {I_i(x,y)}; the 1st component is the R component, the 2nd component is the G component, and the 3rd component is the B component.
2. If only local saliency were considered, edges where the image changes sharply and complex background areas would receive high saliency while the interior of smooth target regions would receive low saliency, so global saliency must also be considered; global saliency refers to the degree of salience of each pixel relative to the whole image. Therefore the present invention first obtains the quantized image of {I_i(x,y)} and the global color histogram of the quantized image; then, according to the quantized image, obtains the color category of each pixel in {I_i(x,y)}; and then, according to the global color histogram of the quantized image and the color category of each pixel, obtains the image saliency map of {I_i(x,y)} based on the global color histogram, denoted {HS(x,y)}, where HS(x,y) represents the pixel value of the pixel with coordinate position (x,y) in {HS(x,y)}, and also represents the saliency value, based on the global color histogram, of the pixel with coordinate position (x,y) in {I_i(x,y)}.
In this particular embodiment, the detailed process of step 2 is:
2-1. Quantize the color value of each component of each pixel in {I_i(x,y)} to obtain the quantized image of {I_i(x,y)}, denoted {P_i(x,y)}; denote the color value of the i-th component of the pixel with coordinate position (x,y) in {P_i(x,y)} as P_i(x,y), P_i(x,y) = ⌊I_i(x,y)/16⌋, where ⌊ ⌋ is the round-down (floor) operator.
2-2. Calculate the global color histogram of {P_i(x,y)}, denoted {H(k) | 0 ≤ k ≤ 4095}, where H(k) represents the number of all pixels in {P_i(x,y)} belonging to the k-th color.
2-3. According to the color values of the components of each pixel in {P_i(x,y)}, calculate the color category of the corresponding pixel in {I_i(x,y)}; denote the color category of the pixel with coordinate position (x,y) in {I_i(x,y)} as k_xy, k_xy = P_3(x,y)×256 + P_2(x,y)×16 + P_1(x,y), where P_3(x,y), P_2(x,y) and P_1(x,y) represent the color values of the 3rd, 2nd and 1st components of the pixel with coordinate position (x,y) in {P_i(x,y)}.
2-4. Calculate the saliency value based on the global color histogram of each pixel in {I_i(x,y)}; denote the value of the pixel with coordinate position (x,y) as HS(x,y): HS(x,y) = Σ_{k=0}^{4095} ( H(k) × D(k_xy, k) ), D(k_xy, k) = √( (p_{k_xy,1} − p_{k,1})² + (p_{k_xy,2} − p_{k,2})² + (p_{k_xy,3} − p_{k,3})² ), where D(k_xy, k) represents the Euclidean distance between the k_xy-th color and the k-th color in {H(k) | 0 ≤ k ≤ 4095}; p_{k,1} = mod(k, 16), p_{k,2} = mod(⌊k/16⌋, 16) and p_{k,3} = ⌊k/256⌋ represent the color values of the 1st, 2nd and 3rd components corresponding to the k-th color (and p_{k_xy,1}, p_{k_xy,2}, p_{k_xy,3} likewise for the k_xy-th color), and mod() is the remainder function.
2-5. According to the saliency value based on the global color histogram of each pixel in {I_i(x,y)}, obtain the image saliency map {HS(x,y)} of {I_i(x,y)} based on the global color histogram.
3. Use a superpixel (Superpixel) segmentation technique to divide {I_i(x,y)} into M non-overlapping regions, and re-express {I_i(x,y)} as the set of M regions, denoted {SP_h}. Local saliency is then considered further: similar regions in an image generally have relatively low saliency with respect to each other, so the present invention calculates the similarity between each pair of regions in {SP_h}; denote the similarity between the p-th region and the q-th region as Sim(SP_p, SP_q), where M ≥ 1, SP_h represents the h-th region in {SP_h}, 1 ≤ h ≤ M, 1 ≤ p ≤ M, 1 ≤ q ≤ M, p ≠ q, SP_p represents the p-th region in {SP_h}, and SP_q represents the q-th region in {SP_h}. In the present embodiment, M = 200 is taken.
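The patent segments with a superpixel algorithm and takes M = 200. As a stand-in for illustration only (a real implementation would use an actual superpixel method such as SLIC), a regular grid also produces M non-overlapping regions with the interface the later steps need:

```python
def grid_regions(width, height, m_side):
    # Partition the image into m_side x m_side non-overlapping regions,
    # each represented as a list of (x, y) pixel coordinates.
    regions = []
    for gy in range(m_side):
        for gx in range(m_side):
            regions.append([(x, y)
                            for y in range(gy * height // m_side,
                                           (gy + 1) * height // m_side)
                            for x in range(gx * width // m_side,
                                           (gx + 1) * width // m_side)])
    return regions
```

With scikit-image available, `skimage.segmentation.slic(image, n_segments=200)` would yield actual superpixels that follow image boundaries instead of a fixed grid.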
In this particular embodiment, the acquisition process of the similarity Sim(SP_p, SP_q) between the p-th region and the q-th region of {SP_h} in step 3 is:
3-1. Quantize the color value of each component of each pixel in each region of {SP_h} to obtain the quantized region of each region; denote the quantized region of the h-th region of {SP_h} as {P_{h,i}(x_h, y_h)}, and denote the color value of the i-th component of the pixel with coordinate position (x_h, y_h) in {P_{h,i}(x_h, y_h)} as P_{h,i}(x_h, y_h). Assuming the pixel with coordinate position (x_h, y_h) in {P_{h,i}(x_h, y_h)} has coordinate position (x, y) in {I_i(x,y)}, then P_{h,i}(x_h, y_h) = ⌊I_i(x,y)/16⌋, where 1 ≤ x_h ≤ W_h, 1 ≤ y_h ≤ H_h, W_h and H_h represent the width and height of the h-th region in {SP_h}, and ⌊ ⌋ is the round-down (floor) operator.
3-2. Calculate the color histogram of the quantized region of each region in {SP_h}; denote the color histogram of {P_{h,i}(x_h, y_h)} as {H_{SP_h}(k) | 0 ≤ k ≤ 4095}, where H_{SP_h}(k) represents the number of all pixels in {P_{h,i}(x_h, y_h)} belonging to the k-th color.
3-3. Normalize the color histogram of the quantized region of each region in {SP_h} to obtain the corresponding normalized color histogram; denote the normalized color histogram obtained from {H_{SP_h}(k) | 0 ≤ k ≤ 4095} as {H'_{SP_h}(k) | 0 ≤ k ≤ 4095}, H'_{SP_h}(k) = H_{SP_h}(k) / Σ_{h'=1}^{M} H_{SP_{h'}}(k), where H'_{SP_h}(k) represents the occurrence probability of pixels belonging to the k-th color in the quantized region {P_{h,i}(x_h, y_h)} of the h-th region, and H_{SP_{h'}}(k) represents the number of all pixels belonging to the k-th color in the quantized region {P_{h',i}(x_{h'}, y_{h'})} of the h'-th region, 1 ≤ x_{h'} ≤ W_{h'}, 1 ≤ y_{h'} ≤ H_{h'}, with W_{h'} and H_{h'} the width and height of the h'-th region in {SP_h}.
3-4. Calculate the similarity between the p-th region and the q-th region in {SP_h}, denoted Sim(SP_p, SP_q), Sim(SP_p, SP_q) = Sim_c(SP_p, SP_q) × Sim_d(SP_p, SP_q), where Sim_c(SP_p, SP_q) represents the color similarity between the p-th region and the q-th region, Sim_c(SP_p, SP_q) = Σ_{k=0}^{4095} min( H'_{SP_p}(k), H'_{SP_q}(k) ), and Sim_d(SP_p, SP_q) represents the spatial similarity between the p-th region and the q-th region, computed from ‖c_p − c_q‖, the Euclidean distance between the coordinate positions c_p and c_q of the central pixel points of the two regions. Here H'_{SP_p}(k) and H'_{SP_q}(k) are the occurrence probabilities of pixels belonging to the k-th color in the quantized regions of the p-th and q-th regions respectively, min() is the minimum-value function, and ‖ ‖ is the Euclidean distance operator.
4. According to the similarities between the regions in {SP_h}, obtain the image saliency map of {I_i(x,y)} based on region color contrast, denoted {NGC(x,y)}, where NGC(x,y) represents the pixel value of the pixel with coordinate position (x,y) in {NGC(x,y)}.
In this particular embodiment, the detailed process of step 4 is:
4-1. Calculate the color contrast of each region in {SP_h}; denote the color contrast of the h-th region as NGC_{SP_h}, NGC_{SP_h} = Σ_{q=1}^{M} W(SP_h, SP_q) × ‖m_{SP_h} − m_{SP_q}‖, where W(SP_h, SP_q) is a weight determined by the total number of pixels contained in the region and by the spatial similarity Sim_d(SP_h, SP_q) between the h-th and q-th regions (itself computed from the Euclidean distance between the coordinate positions of the central pixel points of the two regions), ‖ ‖ is the Euclidean distance operator, m_{SP_h} represents the color mean vector of the h-th region, obtained by averaging the color vectors of all pixels in the h-th region of {SP_h}, and m_{SP_q} represents the color mean vector of the q-th region.
4-2. Normalize the color contrast of each region in {SP_h}; denote the normalized color contrast obtained from NGC_{SP_h} as NGC'_{SP_h}, NGC'_{SP_h} = (NGC_{SP_h} − NGC_min) / (NGC_max − NGC_min), where NGC_min and NGC_max represent the minimum and maximum color contrast among the M regions of {SP_h}.
4-3. Calculate the saliency value based on color contrast of each region in {SP_h}; denote the value of the h-th region as NGC''_{SP_h}, NGC''_{SP_h} = [ Σ_{q=1}^{M} ( Sim(SP_h, SP_q) × NGC'_{SP_q} ) ] / [ Σ_{q=1}^{M} Sim(SP_h, SP_q) ], where Sim(SP_h, SP_q) represents the similarity between the h-th region and the q-th region.
4-4. Take the saliency value based on color contrast of each region in {SP_h} as the saliency value of all pixels in the corresponding region, i.e., for the h-th region of {SP_h}, take its saliency value based on color contrast as the saliency value of all pixels in that region, thereby obtaining the image saliency map {NGC(x,y)} of {I_i(x,y)} based on region color contrast, where NGC(x,y) represents the pixel value of the pixel with coordinate position (x,y) in {NGC(x,y)}.
⑤ From the similarity between the regions in {SP_h}, obtain the image saliency map of {I_i(x,y)} based on region spatial sparsity, denoted {NSS(x,y)}, where NSS(x,y) denotes the pixel value of the pixel at coordinate position (x,y) in {NSS(x,y)}.
In this particular embodiment, the detailed process of step ⑤ is:
⑤-1. Calculate the spatial sparsity of each region in {SP_h}; the spatial sparsity of the h-th region is denoted $NSS_{SP_h}=\frac{\sum_{q=1}^{M} Sim(SP_h,SP_q)\times D_{SP_q}}{\sum_{q=1}^{M} Sim(SP_h,SP_q)}$, where Sim(SP_h,SP_q) denotes the similarity between the h-th region and the q-th region in {SP_h}, and D_{SP_q} denotes the Euclidean distance between the central pixel point of the q-th region and the central pixel point of {I_i(x,y)}.
⑤-2. Normalize the spatial sparsity of each region in {SP_h} to obtain the corresponding normalized spatial sparsity; the normalized spatial sparsity obtained from the spatial sparsity NSS_{SP_h} of the h-th region is denoted $NSS'_{SP_h}=\frac{NSS_{SP_h}-NSS_{\min}}{NSS_{\max}-NSS_{\min}}$, where NSS_min denotes the minimum spatial sparsity among the M regions in {SP_h} and NSS_max denotes the maximum spatial sparsity among the M regions in {SP_h}.
⑤-3. Calculate the saliency value, based on spatial sparsity, of each region in {SP_h}; the spatial-sparsity-based saliency value of the h-th region is denoted $NSS''_{SP_h}=\frac{\sum_{q=1}^{M} Sim(SP_h,SP_q)\times NSS'_{SP_q}}{\sum_{q=1}^{M} Sim(SP_h,SP_q)}$.
⑤-4. Take the saliency value, based on spatial sparsity, of each region in {SP_h} as the saliency value of all pixels in the corresponding region; i.e. for the h-th region in {SP_h}, assign its spatial-sparsity-based saliency value to all pixels in that region. This yields the image saliency map of {I_i(x,y)} based on region spatial sparsity, denoted {NSS(x,y)}, where NSS(x,y) denotes the pixel value of the pixel at coordinate position (x,y) in {NSS(x,y)}.
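Steps 5-1 to 5-3 above can be sketched in the same style. The direction of the "sparsity" score (distance to the image center, rather than its complement) is taken at face value from the excerpt, which only defines D as the center-to-center Euclidean distance.

```python
import numpy as np

def spatial_sparsity_saliency(centers, image_center, sim):
    """Sketch of steps 5-1..5-3: per-region distance to the image
    center, averaged with inter-region similarity weights, min-max
    normalized, then smoothed again with the same weights.

    centers      -- (M, 2) region center coordinates
    image_center -- (2,)   coordinate of the image's central pixel
    sim          -- (M, M) inter-region similarity matrix"""
    # D_{SP_q}: distance from each region center to the image center
    d = np.linalg.norm(centers - np.asarray(image_center)[None, :], axis=1)
    # step 5-1: similarity-weighted average of the distances
    nss = (sim * d[None, :]).sum(axis=1) / sim.sum(axis=1)
    # step 5-2: min-max normalization over the M regions
    nss_n = (nss - nss.min()) / (nss.max() - nss.min() + 1e-12)
    # step 5-3: similarity-weighted average of the normalized values
    return (sim * nss_n[None, :]).sum(axis=1) / sim.sum(axis=1)
```

As in step 4-4, the per-region values are then broadcast to all pixels of each region.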
⑥ Fuse the image saliency map {HS(x,y)} of {I_i(x,y)} based on the global color histogram, the image saliency map {NGC(x,y)} based on region color contrast, and the image saliency map {NSS(x,y)} based on region spatial sparsity to obtain the final image saliency map of {I_i(x,y)}, denoted {Sal(x,y)}; the pixel value of the pixel at coordinate position (x,y) in {Sal(x,y)} is denoted Sal(x,y), Sal(x,y) = HS(x,y) × NGC(x,y) × NSS(x,y).
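The fusion step above is a plain pixel-wise product of the three maps; a sketch follows. The final rescale to [0, 1] is an added assumption for display purposes and is not part of the stated formula.

```python
import numpy as np

def fuse_saliency(hs, ngc, nss):
    """Step 6 as stated: Sal(x, y) = HS(x, y) * NGC(x, y) * NSS(x, y).
    hs, ngc, nss -- three per-pixel saliency maps of identical shape.
    The [0, 1] rescale of the product is an assumption, not part of
    the claimed formula."""
    sal = hs * ngc * nss                 # pixel-wise product fusion
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / rng if rng > 0 else np.zeros_like(sal)
```

Because the fusion is multiplicative, a pixel is retained in the final map only if all three cues agree that it is salient.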
The method of the invention is illustrated below on five images, Image1, Image2, Image3, Image4 and Image5, from the MSRA salient-object image database provided by Microsoft Research Asia, whose saliency maps are extracted. Figs. 2a to 2f correspond to "Image1", Figs. 3a to 3f to "Image2", Figs. 4a to 4f to "Image3", Figs. 5a to 5f to "Image4", and Figs. 6a to 6f to "Image5"; in each group, panel (a) gives the original image, panel (b) the true (ground-truth) saliency map, panel (c) the image saliency map based on the global color histogram, panel (d) the image saliency map based on region color contrast, panel (e) the image saliency map based on region spatial sparsity, and panel (f) the final image saliency map. It can be seen from Fig. 2a to Fig. 6f that the image saliency maps obtained with the method of the invention take the saliency variation of both global and local regions into account and therefore conform well to the salient semantic features of the images.

Claims (4)

1. A region-based image saliency map extracting method, characterized in that it comprises the following steps:
① Denote the source image to be processed as {I_i(x,y)}, where i = 1, 2, 3, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W denotes the width of {I_i(x,y)}, H denotes the height of {I_i(x,y)}, and I_i(x,y) denotes the color value of the i-th component of the pixel whose coordinate position is (x,y) in {I_i(x,y)}; the 1st component is the R component, the 2nd component is the G component, and the 3rd component is the B component;
② First obtain the quantized image of {I_i(x,y)} and the global color histogram of the quantized image; then obtain, from the quantized image, the color category of each pixel in {I_i(x,y)}; then, from the global color histogram and the color category of each pixel, obtain the image saliency map of {I_i(x,y)} based on the global color histogram, denoted {HS(x,y)}, where HS(x,y) denotes the pixel value of the pixel at coordinate position (x,y) in {HS(x,y)} and also denotes the saliency value, based on the global color histogram, of the pixel at (x,y) in {I_i(x,y)};
The detailed process of step ② is:
②-1. Quantize the color value of each component of each pixel in {I_i(x,y)} separately to obtain the quantized image of {I_i(x,y)}, denoted {P_i(x,y)}; the color value of the i-th component of the pixel at coordinate position (x,y) in {P_i(x,y)} is denoted P_i(x,y), $P_i(x,y)=\lfloor I_i(x,y)/16\rfloor$, where the symbol ⌊ ⌋ is the round-down (floor) operator;
②-2. Compute the global color histogram of {P_i(x,y)}, denoted {H(k) | 0 ≤ k ≤ 4095}, where H(k) denotes the number of all pixels in {P_i(x,y)} belonging to the k-th color;
②-3. From the color values of the components of each pixel in {P_i(x,y)}, compute the color category of the corresponding pixel in {I_i(x,y)}; the color category of the pixel at coordinate position (x,y) in {I_i(x,y)} is denoted k_xy, k_xy = P_3(x,y)×256 + P_2(x,y)×16 + P_1(x,y), where P_3(x,y), P_2(x,y) and P_1(x,y) denote the color values of the 3rd, 2nd and 1st components of the pixel at (x,y) in {P_i(x,y)};
②-4. Compute the saliency value, based on the global color histogram, of each pixel in {I_i(x,y)}; the saliency value of the pixel at coordinate position (x,y) is denoted HS(x,y), $HS(x,y)=\sum_{k=0}^{4095} H(k)\times D(k_{xy},k)$, where D(k_xy,k) denotes the Euclidean distance between the k_xy-th color and the k-th color in {H(k) | 0 ≤ k ≤ 4095}, computed from the component color values; p_{k_xy,1}, p_{k_xy,2} and p_{k_xy,3} denote the color values of the 1st, 2nd and 3rd components corresponding to the k_xy-th color, p_{k,1}, p_{k,2} and p_{k,3} denote the color values of the 1st, 2nd and 3rd components corresponding to the k-th color, p_{k,1} = mod(k,16), p_{k,2} = mod(⌊k/16⌋,16), p_{k,3} = ⌊k/256⌋, and mod() is the remainder-taking function;
②-5. From the saliency value, based on the global color histogram, of each pixel in {I_i(x,y)}, obtain the image saliency map of {I_i(x,y)} based on the global color histogram, denoted {HS(x,y)};
③ Use a superpixel segmentation technique to divide {I_i(x,y)} into M non-overlapping regions, then re-express {I_i(x,y)} as the set of M regions, denoted {SP_h}; then compute the similarity between each pair of regions in {SP_h}; the similarity between the p-th region and the q-th region in {SP_h} is denoted Sim(SP_p,SP_q), where M ≥ 1, SP_h denotes the h-th region in {SP_h}, 1 ≤ h ≤ M, 1 ≤ p ≤ M, 1 ≤ q ≤ M, p ≠ q, SP_p denotes the p-th region in {SP_h}, and SP_q denotes the q-th region in {SP_h};
④ From the similarity between the regions in {SP_h}, obtain the image saliency map of {I_i(x,y)} based on region color contrast, denoted {NGC(x,y)}, where NGC(x,y) denotes the pixel value of the pixel at coordinate position (x,y) in {NGC(x,y)};
⑤ From the similarity between the regions in {SP_h}, obtain the image saliency map of {I_i(x,y)} based on region spatial sparsity, denoted {NSS(x,y)}, where NSS(x,y) denotes the pixel value of the pixel at coordinate position (x,y) in {NSS(x,y)};
⑥ Fuse the image saliency map {HS(x,y)} of {I_i(x,y)} based on the global color histogram, the image saliency map {NGC(x,y)} based on region color contrast, and the image saliency map {NSS(x,y)} based on region spatial sparsity to obtain the final image saliency map of {I_i(x,y)}, denoted {Sal(x,y)}; the pixel value of the pixel at coordinate position (x,y) in {Sal(x,y)} is denoted Sal(x,y), Sal(x,y) = HS(x,y) × NGC(x,y) × NSS(x,y).
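The global-color-histogram stage above (quantization, histogram, and per-pixel scoring) can be sketched as follows. This is an illustrative reading under stated assumptions: the per-pixel score is taken to be the histogram-weighted Euclidean distance between the pixel's quantized color and every other quantized color, consistent with the definitions of H(k) and D(k_xy, k) in the claim.

```python
import numpy as np

def global_histogram_saliency(img):
    """Sketch of steps 2-1..2-4: quantize each RGB channel to 16
    levels, build the 4096-bin global color histogram, and score each
    pixel by sum_k H(k) * D(k_xy, k).  The distance table is computed
    only for colors that actually occur, then broadcast back to the
    pixels via a lookup table."""
    img = np.asarray(img, dtype=np.int64)                 # H x W x 3, 0..255
    p = img // 16                                         # step 2-1: quantization
    k_map = p[..., 2] * 256 + p[..., 1] * 16 + p[..., 0]  # step 2-3: color category
    hist = np.bincount(k_map.ravel(), minlength=4096)     # step 2-2: H(k)
    # component values p_{k,1..3} for every color index k = 0..4095
    k = np.arange(4096)
    comps = np.stack([k % 16, (k // 16) % 16, k // 256], axis=1)
    # step 2-4: HS for each occurring color = sum_k H(k) * D(k_xy, k)
    occurring = np.nonzero(hist)[0]
    d = np.linalg.norm(comps[occurring][:, None, :] - comps[None, :, :], axis=2)
    hs_per_color = (d * hist[None, :]).sum(axis=1)
    lut = np.zeros(4096)
    lut[occurring] = hs_per_color
    return lut[k_map]                                     # per-pixel saliency map
```

Since the score depends only on a pixel's color category, it is computed once per occurring color rather than once per pixel.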
2. The region-based image saliency map extracting method according to claim 1, characterized in that the process of obtaining, in step ③, the similarity Sim(SP_p,SP_q) between the p-th region and the q-th region in {SP_h} is:
③-1. Quantize the color value of each component of each pixel in each region of {SP_h} separately to obtain the quantized region of each region; the quantized region of the h-th region is denoted {P_{h,i}(x_h,y_h)}, and the color value of the i-th component of the pixel at coordinate position (x_h,y_h) in {P_{h,i}(x_h,y_h)} is denoted P_{h,i}(x_h,y_h); assuming the pixel at (x_h,y_h) in {P_{h,i}(x_h,y_h)} is located at coordinate position (x,y) in {I_i(x,y)}, then $P_{h,i}(x_h,y_h)=\lfloor I_i(x,y)/16\rfloor$, where 1 ≤ x_h ≤ W_h, 1 ≤ y_h ≤ H_h, W_h denotes the width of the h-th region, H_h denotes the height of the h-th region, and the symbol ⌊ ⌋ is the round-down (floor) operator;
③-2. Compute the color histogram of the quantized region of each region in {SP_h}; the color histogram of {P_{h,i}(x_h,y_h)} is denoted {H_h(k) | 0 ≤ k ≤ 4095}, where H_h(k) denotes the number of all pixels in {P_{h,i}(x_h,y_h)} belonging to the k-th color;
③-3. Normalize the color histogram of the quantized region of each region in {SP_h} to obtain the corresponding normalized color histogram, whose k-th bin gives the occurrence probability of pixels of the k-th color in the quantized region of that region;
③-4. Compute the similarity between the p-th region and the q-th region in {SP_h}, denoted Sim(SP_p,SP_q), Sim(SP_p,SP_q) = Sim_c(SP_p,SP_q) × Sim_d(SP_p,SP_q), where Sim_c(SP_p,SP_q) denotes the color similarity between the p-th region and the q-th region, obtained as the histogram intersection of their normalized color histograms, i.e. the sum over all 4096 colors of the minimum of the two regions' occurrence probabilities for each color (min() being the minimum-value function), and Sim_d(SP_p,SP_q) denotes the spatial similarity between the p-th region and the q-th region, computed from the Euclidean distance between the coordinate positions $(\bar x_{SP_p},\bar y_{SP_p})$ and $(\bar x_{SP_q},\bar y_{SP_q})$ of the central pixel points of the two regions, the symbol "‖ ‖" being the Euclidean distance symbol.
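For a single region pair, steps 3-1 to 3-4 can be sketched as follows. This is a hedged sketch: the histogram-intersection color term follows the min() definition above, while the exponential form of Sim_d and the constant `sigma_d` are assumptions, since the excerpt only states that Sim_d is computed from the center-to-center Euclidean distance.

```python
import numpy as np

def region_similarity(hist_p, hist_q, center_p, center_q, sigma_d=0.4):
    """Sketch of Sim(SP_p, SP_q) = Sim_c * Sim_d for one region pair.

    hist_p, hist_q     -- 4096-bin color histograms of the two regions
    center_p, center_q -- coordinates of the regions' central pixels
    sigma_d            -- hypothetical spatial-decay constant (assumed)"""
    hp = hist_p / hist_p.sum()          # step 3-3: normalize to probabilities
    hq = hist_q / hist_q.sum()
    sim_c = np.minimum(hp, hq).sum()    # step 3-4: histogram intersection
    dist = np.linalg.norm(np.asarray(center_p) - np.asarray(center_q))
    sim_d = np.exp(-dist / sigma_d)     # assumed exponential spatial decay
    return sim_c * sim_d
```

Two regions with identical color statistics and coincident centers give similarity 1; any color or spatial difference shrinks the product.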
3. The region-based image saliency map extracting method according to claim 2, characterized in that the detailed process of step ④ is:
④-1. Calculate the color contrast of each region in {SP_h}; the color contrast of the h-th region is denoted $NGC_{SP_h}=\sum_{q=1}^{M} W(SP_h,SP_q)\times\|m_{SP_h}-m_{SP_q}\|$, where SP_h denotes the h-th region in {SP_h}, SP_q denotes the q-th region in {SP_h}, W(SP_h,SP_q) is a weight determined by the total number of pixels contained in the region and by the spatial similarity Sim_d(SP_h,SP_q) between the h-th region and the q-th region, Sim_d(SP_h,SP_q) being computed from the Euclidean distance between the coordinate positions of the central pixel points of the two regions, the symbol "‖ ‖" is the Euclidean distance symbol, m_{SP_h} denotes the color mean vector of the h-th region, and m_{SP_q} denotes the color mean vector of the q-th region;
④-2. Normalize the color contrast of each region in {SP_h} to obtain the corresponding normalized color contrast; the normalized color contrast obtained from the color contrast NGC_{SP_h} of the h-th region is denoted $NGC'_{SP_h}=\frac{NGC_{SP_h}-NGC_{\min}}{NGC_{\max}-NGC_{\min}}$, where NGC_min denotes the minimum color contrast among the M regions in {SP_h} and NGC_max denotes the maximum color contrast among the M regions in {SP_h};
④-3. Calculate the saliency value, based on color contrast, of each region in {SP_h}; the color-contrast-based saliency value of the h-th region is denoted $NGC''_{SP_h}=\frac{\sum_{q=1}^{M} Sim(SP_h,SP_q)\times NGC'_{SP_q}}{\sum_{q=1}^{M} Sim(SP_h,SP_q)}$, where Sim(SP_h,SP_q) denotes the similarity between the h-th region and the q-th region in {SP_h};
④-4. Take the saliency value, based on color contrast, of each region in {SP_h} as the saliency value of all pixels in the corresponding region, thereby obtaining the image saliency map of {I_i(x,y)} based on region color contrast, denoted {NGC(x,y)}, where NGC(x,y) denotes the pixel value of the pixel at coordinate position (x,y) in {NGC(x,y)}.
4. The region-based image saliency map extracting method according to claim 3, characterized in that the detailed process of step ⑤ is:
⑤-1. Calculate the spatial sparsity of each region in {SP_h}; the spatial sparsity of the h-th region is denoted $NSS_{SP_h}=\frac{\sum_{q=1}^{M} Sim(SP_h,SP_q)\times D_{SP_q}}{\sum_{q=1}^{M} Sim(SP_h,SP_q)}$, where Sim(SP_h,SP_q) denotes the similarity between the h-th region and the q-th region, and D_{SP_q} denotes the Euclidean distance between the central pixel point of the q-th region and the central pixel point of {I_i(x,y)};
⑤-2. Normalize the spatial sparsity of each region in {SP_h} to obtain the corresponding normalized spatial sparsity; the normalized spatial sparsity obtained from the spatial sparsity NSS_{SP_h} of the h-th region is denoted $NSS'_{SP_h}=\frac{NSS_{SP_h}-NSS_{\min}}{NSS_{\max}-NSS_{\min}}$, where NSS_min denotes the minimum spatial sparsity among the M regions in {SP_h} and NSS_max denotes the maximum spatial sparsity among the M regions in {SP_h};
⑤-3. Calculate the saliency value, based on spatial sparsity, of each region in {SP_h}; the spatial-sparsity-based saliency value of the h-th region is denoted $NSS''_{SP_h}=\frac{\sum_{q=1}^{M} Sim(SP_h,SP_q)\times NSS'_{SP_q}}{\sum_{q=1}^{M} Sim(SP_h,SP_q)}$;
⑤-4. Take the saliency value, based on spatial sparsity, of each region in {SP_h} as the saliency value of all pixels in the corresponding region, thereby obtaining the image saliency map of {I_i(x,y)} based on region spatial sparsity, denoted {NSS(x,y)}, where NSS(x,y) denotes the pixel value of the pixel at coordinate position (x,y) in {NSS(x,y)}.
CN201310651864.5A 2013-12-05 2013-12-05 Region-based image saliency map extracting method Active CN103632153B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310651864.5A CN103632153B (en) 2013-12-05 2013-12-05 Region-based image saliency map extracting method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310651864.5A CN103632153B (en) 2013-12-05 2013-12-05 Region-based image saliency map extracting method

Publications (2)

Publication Number Publication Date
CN103632153A CN103632153A (en) 2014-03-12
CN103632153B true CN103632153B (en) 2017-01-11

Family

ID=50213181

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310651864.5A Active CN103632153B (en) 2013-12-05 2013-12-05 Region-based image saliency map extracting method

Country Status (1)

Country Link
CN (1) CN103632153B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104050674B (en) * 2014-06-27 2017-01-25 中国科学院自动化研究所 Salient region detection method and device
CN104133956B (en) * 2014-07-25 2017-09-12 小米科技有限责任公司 Handle the method and device of picture
CN104134217B (en) * 2014-07-29 2017-02-15 中国科学院自动化研究所 Video salient object segmentation method based on super voxel graph cut
CN104392233B (en) * 2014-11-21 2017-06-06 宁波大学 A kind of image saliency map extracting method based on region
CN106611427B (en) * 2015-10-21 2019-11-15 中国人民解放军理工大学 Saliency detection method based on candidate region fusion
CN105512663A (en) * 2015-12-02 2016-04-20 南京邮电大学 Significance detection method based on global and local contrast
CN106611178A (en) * 2016-03-10 2017-05-03 四川用联信息技术有限公司 Salient object identification method
CN106709512B (en) * 2016-12-09 2020-03-17 河海大学 Infrared target detection method based on local sparse representation and contrast

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102867313A (en) * 2012-08-29 2013-01-09 杭州电子科技大学 Visual saliency detection method with fusion of region color and HoG (histogram of oriented gradient) features
CN103218832A (en) * 2012-10-15 2013-07-24 上海大学 Visual saliency algorithm based on overall color contrast ratio and space distribution in image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102867313A (en) * 2012-08-29 2013-01-09 杭州电子科技大学 Visual saliency detection method with fusion of region color and HoG (histogram of oriented gradient) features
CN103218832A (en) * 2012-10-15 2013-07-24 上海大学 Visual saliency algorithm based on overall color contrast ratio and space distribution in image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Context-Aware Saliency Detection; Stas Goferman et al.; IEEE Transactions on Pattern Analysis and Machine Intelligence; 2012-10-31; Vol. 34, No. 10; pp. 1915-1926 *
Stereoscopic image quality assessment method based on perceptual importance; Duan Fenfang et al.; Opto-Electronic Engineering; 2013-10-31; Vol. 40, No. 10; pp. 70-76 *

Also Published As

Publication number Publication date
CN103632153A (en) 2014-03-12

Similar Documents

Publication Publication Date Title
CN103632153B (en) Region-based image saliency map extracting method
US10803554B2 (en) Image processing method and device
WO2018023734A1 (en) Significance testing method for 3d image
CN107292318B (en) Image significance object detection method based on center dark channel prior information
CN103020992B (en) A kind of video image conspicuousness detection method based on motion color-associations
CN104392233B (en) A kind of image saliency map extracting method based on region
JP2019514123A (en) Remote determination of the quantity stored in containers in geographical areas
CN105493078B (en) Colored sketches picture search
CN103955718A (en) Image subject recognition method
CN110796143A (en) Scene text recognition method based on man-machine cooperation
CN110827312B (en) Learning method based on cooperative visual attention neural network
CN103745104A (en) Examination paper marking method based on augmented reality technology
CN102147867A (en) Method for identifying traditional Chinese painting images and calligraphy images based on subject
CN104299241A (en) Remote sensing image significance target detection method and system based on Hadoop
CN104050674B (en) Salient region detection method and device
CN103632372A (en) Video saliency image extraction method
Zhou et al. PADENet: An efficient and robust panoramic monocular depth estimation network for outdoor scenes
CN113628180B (en) Remote sensing building detection method and system based on semantic segmentation network
CN113902753A (en) Image semantic segmentation method and system based on dual-channel and self-attention mechanism
CN105631849B (en) The change detecting method and device of target polygon
CN113901931A (en) Knowledge distillation model-based behavior recognition method for infrared and visible light videos
CN104008374B (en) Miner's detection method based on condition random field in a kind of mine image
CN102831621A (en) Video significance processing method based on spectral analysis
CN112132880A (en) Real-time dense depth estimation method based on sparse measurement and monocular RGB (red, green and blue) image
CN105338335B (en) A kind of stereo-picture notable figure extracting method

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20191219

Address after: Room 1,020, Nanxun Science and Technology Pioneering Park, No. 666 Chaoyang Road, Nanxun District, Huzhou City, Zhejiang Province, 313000

Patentee after: Huzhou You Yan Intellectual Property Service Co., Ltd.

Address before: 315211 Zhejiang Province, Ningbo Jiangbei District Fenghua Road No. 818

Patentee before: Ningbo University

TR01 Transfer of patent right

Effective date of registration: 20200702

Address after: 313000 Room 121,221, Building 3, 1366 Hongfeng Road, Wuxing District, Huzhou City, Zhejiang Province

Patentee after: ZHEJIANG DUYAN INFORMATION TECHNOLOGY Co.,Ltd.

Address before: Room 1,020, Nanxun Science and Technology Pioneering Park, No. 666 Chaoyang Road, Nanxun District, Huzhou City, Zhejiang Province, 313000

Patentee before: Huzhou You Yan Intellectual Property Service Co.,Ltd.