CN106327507A - Color image significance detection method based on background and foreground information - Google Patents
- Publication number
- CN106327507A (application CN201610654316.1A)
- Authority
- CN
- China
- Prior art keywords
- background
- foreground
- significance
- saliency
- superpixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
Abstract
The invention discloses a color image saliency detection method based on background and foreground information, comprising the following steps: over-segmenting an input color image to obtain a series of superpixel blocks; selecting background seeds and obtaining a rough saliency by contrasting the features of each superpixel block with the background seeds; defining a background weight for each superpixel block from the feature distribution of the background seeds, and refining the rough saliency with the background weight to obtain the background-based saliency; segmenting the background-based saliency map formed in the previous step, selecting a compact foreground region from all segmentation results, extracting the features of the foreground region, and obtaining the foreground-based saliency through feature contrast; integrating the background- and foreground-based saliencies obtained in the previous two steps and smoothing the result to obtain the optimized saliency of all superpixel blocks. The method highlights the foreground target in the image more uniformly and effectively suppresses background noise in the image.
Description
Technical Field
The invention belongs to the technical field of saliency detection for image scenes, and particularly relates to a color image saliency detection method based on background and foreground information.
Background
Visual saliency is an important research topic in visual cognition and scene understanding, involving multiple disciplines such as cognitive psychology, cognitive neuroscience, and computer vision. Owing to the feature differences between a foreground object and its background, the human visual system can quickly locate the foreground region of a scene and preferentially process its information. To emulate this highly efficient information-processing mechanism of the human eye, saliency detection has recently attracted attention from scholars in related fields.
The study of saliency detection can be traced back to the feature integration theory proposed by Treisman et al. (Anne Treisman and Garry Gelade (1980). "A feature-integration theory of attention." Cognitive Psychology, Vol. 12, No. 1, pp. 97-136). On this basis, Itti, Koch, et al. proposed the earliest computational model for saliency detection (L. Itti, C. Koch, E. Niebur, "A model of saliency-based visual attention for rapid scene analysis", IEEE Trans. Pattern Anal. Mach. Intell. 20(11) (1998) 1254-1259). Early saliency detection algorithms focused on predicting the gaze focus; they could not uniformly highlight foreground target regions, and the resulting saliency maps contained significant amounts of background noise, which greatly limited the application of saliency detection algorithms.
With the continuous development of computer vision, especially over the last decade, scholars have proposed a large number of saliency detection algorithms whose main idea is to highlight foreground objects by means of feature contrast. Saliency detection algorithms based on local feature contrast adopt a center-surround contrast strategy: the target object is highlighted by contrasting the features of a central region against its surrounding neighborhood. This approach tends to highlight the edge regions of the foreground object and cannot uniformly highlight the entire object. Saliency detection algorithms based on global feature contrast often select appropriate background information, for example the features of the image border region, and highlight the foreground object through contrast. Such methods consider only the features of the selected background area, treating them directly as background features for contrast, and neglect their spatial distribution; the extracted background features may therefore contain partial foreground information, which negatively affects the subsequent saliency detection. In summary, methods based on feature contrast can detect the salient target in a simple scene, but their detection performance remains unsatisfactory for complex scenes where multiple features coexist in the foreground or background.
In recent years, researchers have come to recognize the heuristic role that cognitive sciences such as cognitive neuroscience and cognitive psychology play in saliency detection. For example, Wei et al. (Geodesic saliency using background priors) found that when observing an image, the human eye often treats the border portion of the image as background by default, thereby introducing a background prior and forming a global saliency detection through feature contrast. That method uses only the features of the background components to highlight the foreground area and does not consider their spatial distribution. To effectively suppress background noise in the saliency map, Lu Huchuan's team at Dalian University of Technology (J. Wang, H. Lu, X. Li, N. Tong, W. Liu, Saliency detection via background and foreground seed selection, Neurocomputing 152 (2015) 359-368) introduced foreground information into the saliency detection process, successively taking as foreground regions the convex hull formed by corner points in the image and the regions generated by adaptively segmenting the background saliency map. However, the convex hull formed by corner points does not consider the contour information of the target object, and the adaptive segmentation does not consider the compactness of the target object, so the foreground information introduced in this way can contain a large amount of background noise, leading to poor subsequent noise suppression.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a color image saliency detection method based on background and foreground information, solving the technical problems that the prior art cannot uniformly highlight foreground objects and cannot effectively suppress background noise in the saliency map.
To solve this technical problem, the invention provides a color image saliency detection method based on background and foreground information, characterized by comprising the following steps:
step one, image preprocessing: performing over-segmentation on an input color image to obtain a series of superpixel blocks, which serve as the minimum processing units;
step two, saliency detection based on background information: selecting background seeds, and obtaining a rough saliency by contrasting the features of each superpixel block with the background seeds; defining the background weight of each superpixel block based on the feature distribution of the background seeds, and refining the rough saliency with the background weight to obtain the background-based saliency;
step three, saliency detection based on foreground information: segmenting the background-based saliency map formed in the previous step, selecting a compact foreground region from all segmentation results, extracting foreground region features, and obtaining the foreground-based saliency through feature contrast;
step four, saliency integration: integrating the background- and foreground-based saliencies obtained in the previous two steps to obtain the integrated saliency, and smoothing the integrated saliency to obtain the optimized saliency of all superpixel blocks.
Further, in step one, the over-segmentation adopts the SLIC superpixel segmentation method.
Further, in step two, the specific process of obtaining the background-based saliency is as follows:
11) selecting the superpixel blocks along the image border as background seeds, and contrasting the features of each superpixel block in the image with the background seeds to obtain the rough saliency of each superpixel block;
12) performing K-means clustering on the selected background seeds, and determining the probability that each cluster belongs to the background from its spatial distribution; the background weight of the background seeds in the k-th cluster is defined as:

P_k = 1 - exp(-α(L_s + L_o)), k = 1, 2, …, K

where L_s is the length of the shortest superpixel chain containing all superpixel blocks of the k-th cluster, L_o is the number of superpixel blocks in that chain belonging to other clusters, the parameter α ranges from 0.01 to 0.08, and K is the number of selected cluster centers;
13) for each of the other superpixel blocks in the image, first calculating the geodesic distances between the superpixel block and all background seeds, and finding the background seed with the smallest geodesic distance to the block:

s* = argmin_{s_j ∈ BG} d_geo(s_i, s_j)

where BG is the set of background seeds and d_geo(s_i, s_j) is the geodesic distance between two superpixel blocks; from step 12), the background probability of each background seed is known; denoting the background probability of the nearest seed s* as P* and the geodesic distance between the superpixel block and s* as d*, the background weight of this superpixel block is then computed from P* and d*;
then sequentially calculating the background weight of each superpixel block;
14) the background-information-based saliency of a superpixel block is defined from its rough saliency and its background weight, where S_i^bg is the background-information-based saliency of superpixel block s_i and S_i^r is the rough saliency of s_i calculated in step 11).
Further, the parameter α is 0.05.
Further, in step three, the specific process of obtaining the foreground-based saliency is as follows:
21) segmenting the background-based saliency map obtained in the previous step using the parametric maxflow method to obtain a series of compact foreground regions; the maxflow energy trades off an area-weighted data term derived from the background-based saliency against a pairwise smoothness term between adjacent blocks, where N is the number of superpixels in the image, A_i is the area of superpixel block s_i, x_i ∈ {1, 0} denotes whether superpixel block s_i belongs to the foreground region, e_ij is the similarity between adjacent superpixel blocks, and x^f is the resulting foreground segmentation;
22) selecting the segmentation result with the optimal value as the foreground region, according to its consistency with the background-based saliency map and its spatial compactness, where X^f is the set of obtained segmentation results, N is the number of superpixel blocks, S_i^bg is the background-based saliency of superpixel block s_i, and V(x^f) is the variance of the spatial coordinates of segmentation result x^f;
23) extracting the features of the selected foreground region, and determining the foreground-based saliency of each superpixel block in the image through feature contrast, where FG is the set of acquired foreground superpixel blocks, and d_c(s_i, s_j) and d_l(s_i, s_j) are respectively the color distance and the spatial distance between superpixel blocks.
Further, in the fourth step, the specific process is as follows:
31) integrating the obtained background- and foreground-based saliencies, where S_i^int is the integrated saliency of superpixel block s_i, S_i^bg is its background-based saliency, S_i^fg is its foreground-based saliency, and the parameter β ranges from 2.5 to 8;
32) the integrated saliency is further optimized; the optimization function is a global quadratic function comprising a background term weighted by the background weight of each block, a foreground term defined by the foreground label and the integrated saliency of each block, and a smoothness term weighted by the color similarity w_c(s_i, s_j) of adjacent superpixel blocks; here S_i and S_j denote the saliencies of two adjacent superpixel blocks to be optimized, w_i^bg and f_i are respectively the background weight and the foreground label of superpixel block s_i, S_i^int is the integrated saliency of s_i, and N is the number of superpixels in the image; solving this global optimization function yields the optimized saliency of all superpixel blocks.
Further, the value of the parameter β is 4.
Compared with the prior art, the invention has the following beneficial effects: the background weight based on the feature distribution of the image border region improves the saliency detection based on background feature contrast; the foreground region extracted by the maxflow method considers both the edge information of the foreground target and the compactness of the target object, accurately describing the foreground target in the scene; and the two saliency maps, which reflect the different roles of background and foreground information in saliency detection, are integrated and the integrated saliency map is further optimized, so that the optimized saliency map is smoother in both the background and foreground regions. The method highlights the foreground target in the image more uniformly and effectively suppresses background noise in the image.
Drawings
Fig. 1 is a schematic flow chart of color image saliency detection according to an embodiment of the present invention.
Fig. 2 illustrates the effect of the background weight on improving saliency detection according to an embodiment of the present invention, where (a) is the input image, (b) is the ground truth of the salient target, (c) is the rough saliency map, (d) is the background weight, and (e) is the background-based saliency map.
FIG. 3 is a schematic diagram of a background weight calculation process according to an embodiment of the present invention, where (a) is an input image, (b) is a superpixel segmentation result, (c) is a background seed clustering result, (d) is a background weight of a selected seed, and (e) is a background weight of all superpixel blocks.
Fig. 4 is a diagram of foreground region extraction and noise suppression effects thereof according to an embodiment of the present invention, where (a) is an input image, (b) is a background-based saliency map, (c) is an extracted foreground region, (d) is a foreground-based saliency map, (e) is an integrated saliency map, and (f) is an optimized saliency map.
Fig. 5 is a comparison graph of the saliency detection result of the embodiment of the present invention and the detection result of the prior art, where (a) is the input image, (b) is the true value of the saliency target, (c) is the detection result of the present invention, (d) is the detection result of the IT model, (e) is the detection result of the XIE model, and (f) is the detection result of the BFS model.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
Fig. 1 is a schematic flow chart of color image saliency detection according to an embodiment of the present invention. As shown in fig. 1, the color image saliency detection method based on background and foreground information comprises the following steps:
Step one, image preprocessing: the input color image is over-segmented to obtain a series of superpixel blocks, which are used as the minimum processing units.
The input color image is over-segmented into a number of superpixel blocks using the SLIC superpixel segmentation method, and these blocks serve as the minimum processing units for subsequent operations, as sketched below.
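As a concrete illustration of this step, below is a minimal sketch using scikit-image's SLIC; the input path, the superpixel count, and the choice of mean CIELab color plus normalized centroid as per-block features are assumptions for illustration, not prescribed by the invention. The `features` and `coords` arrays are reused by the sketches of the later steps.

```python
import numpy as np
from skimage.io import imread
from skimage.color import rgb2lab
from skimage.segmentation import slic

image = imread("input.jpg")  # hypothetical input path

# Over-segment into roughly 300 superpixel blocks; `labels` assigns each
# pixel the index of its block, the minimum processing unit of later steps.
labels = slic(image, n_segments=300, compactness=10, start_label=0)
n_sp = labels.max() + 1

# Assumed per-block features: mean CIELab color and normalized centroid.
lab = rgb2lab(image)
features = np.array([lab[labels == k].mean(axis=0) for k in range(n_sp)])
coords = np.array([np.argwhere(labels == k).mean(axis=0) for k in range(n_sp)])
coords /= np.array(image.shape[:2], dtype=float)  # scale (row, col) to [0, 1]
```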
Step two, saliency detection based on background information:
When observing an image, the human eye tends to attend to the center of the image (the target is usually assumed to appear near the image center) while ignoring the border region (the image border is taken as background). Therefore, the superpixel blocks along the image border are selected as background seeds, and the rough saliency of all superpixel blocks is obtained by contrasting the features of each superpixel block with the background seeds, forming a rough saliency map, as shown in fig. 2(c). To eliminate interference from foreground information, the feature distribution of the selected background seeds is further considered: feature clustering is performed on the background seeds, the probability that the seeds belong to the background is determined from the spatial distribution of each cluster, and the background weight of each superpixel block is defined accordingly, as shown in fig. 2(d). The rough saliency is refined by the background weight to obtain the background-based saliency, forming the background-based saliency map, as shown in fig. 2(e).
The specific process for obtaining the saliency map based on the background information is as follows:
11) selecting the superpixel blocks along the image border as background seeds, and contrasting the features of each superpixel block in the image with the background seeds to obtain the rough saliency of each superpixel block, forming a rough saliency map; this process follows the prior art;
12) performing K-means clustering on the selected background seeds, as shown in fig. 3(c), and determining the probability that each cluster belongs to the background from its spatial distribution; the background weight of the background seeds in the k-th cluster is defined as:

P_k = 1 - exp(-α(L_s + L_o)), k = 1, 2, …, K

where L_s is the length of the shortest superpixel chain containing all superpixel blocks of the k-th cluster, L_o is the number of superpixel blocks in that chain belonging to other clusters, and K is the number of selected cluster centers. The parameter α is a constant that can be set between 0.01 and 0.08; experiments show that α = 0.05 yields the best detection result. The larger the background weight of a superpixel block, the higher the probability that it belongs to the background; conversely, the smaller the value, the higher the probability that it belongs to the foreground. A sketch of this step is given below.
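As an illustration, the following is a minimal sketch of this clustering step. It assumes the border seeds are listed in order along the image frame and treats the border chain as linear rather than circular, a simplification of the chain definition above; the cluster count K and α follow the values stated in the text.

```python
import numpy as np
from sklearn.cluster import KMeans

def seed_background_weights(seed_features, K=3, alpha=0.05):
    """Cluster border seeds and weight each cluster by its spatial spread."""
    clusters = KMeans(n_clusters=K, n_init=10).fit_predict(seed_features)
    P = np.empty(K)
    for k in range(K):
        members = np.where(clusters == k)[0]
        L_s = members[-1] - members[0] + 1  # shortest chain covering the cluster
        L_o = L_s - len(members)            # blocks of other clusters inside it
        P[k] = 1.0 - np.exp(-alpha * (L_s + L_o))
    return P[clusters]                      # background weight of every seed
```

Intuitively, a cluster whose members are spread around the whole frame yields a long chain and a weight near 1 (background), while a compact cluster, typical of a foreground object touching the border, receives a small weight.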
13) For the other superpixel blocks in the image, the background weight is calculated from their connectivity with the selected background seeds. First, the geodesic distances between the superpixel block and all background seeds are calculated, and the background seed with the smallest geodesic distance to the block is found:

s* = argmin_{s_j ∈ BG} d_geo(s_i, s_j)

where BG is the set of background seeds and d_geo(s_i, s_j) is the geodesic distance between two superpixel blocks. From step 12), the background probability of each background seed is known; denote the background probability of the nearest seed s* as P* and the geodesic distance between the superpixel block and s* as d*. The background weight of this superpixel block is then computed from P* and d*; the calculated result is shown in fig. 3(e), and a sketch of this computation is given below;
14) the background-information-based saliency of a superpixel block is defined from its rough saliency and its background weight, where S_i^bg is the background-information-based saliency of superpixel block s_i and S_i^r is the rough saliency of s_i calculated in step 11). The saliency map based on background information is assembled from the background-based saliency of each superpixel block, as shown in fig. 2(e).
Step three, saliency detection based on foreground information: segmenting the background-based saliency map obtained in the previous step, selecting a compact foreground region from all segmentation results, extracting foreground target features, and obtaining the foreground-based saliency through feature contrast, forming the foreground-information-based saliency map.
The specific process for obtaining the saliency map based on the foreground information is as follows:
21) segmenting the background-based saliency map obtained in the previous step using the parametric maxflow method to obtain a series of compact foreground regions, where N is the number of superpixels in the image, A_i is the area of superpixel block s_i, x_i ∈ {1, 0} denotes whether superpixel block s_i belongs to the foreground region, e_ij is the similarity between adjacent superpixel blocks, and x^f is the resulting foreground segmentation; the maxflow energy trades off an area-weighted data term derived from the background-based saliency against a pairwise smoothness term between adjacent blocks. Compared with the OTSU method, the foreground region segmented in this way conforms to the compactness of the salient object and therefore describes the salient object in the image better, as shown in fig. 4(c);
22) according to the degree of consistency between each segmentation result and the background-based saliency map, and the spatial compactness of the salient target, the most appropriate segmentation result is selected as the foreground region, where X^f is the set of obtained segmentation results, N is the number of superpixel blocks, S_i^bg is the background-based saliency of superpixel block s_i, and V(x^f) is the variance of the spatial coordinates of segmentation result x^f. As shown in fig. 4(c), this foreground extraction considers both the edge information of the foreground object and the compactness of the target object, so the extracted foreground region accurately reflects the characteristics of the foreground target; a sketch of the selection rule is given below;
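The selection rule of this step can be sketched as follows. The score, mean background-based saliency of the candidate foreground minus a normalized spatial-coordinate variance V(x^f), is an assumed concrete form of the consistency/compactness trade-off described above, not the patent's exact formula.

```python
import numpy as np

def select_foreground(candidates, s_bg, coords):
    """Pick the mask that best trades saliency agreement against spatial
    spread (step 22 sketch, assumed scoring form)."""
    total_var = coords.var(axis=0).sum()  # spread of all blocks, for scaling
    best_mask, best_score = None, -np.inf
    for x in candidates:                  # x: boolean mask over superpixel blocks
        if not x.any():
            continue
        agreement = s_bg[x].mean()                   # consistency with S^bg
        V = coords[x].var(axis=0).sum() / total_var  # normalized V(x^f)
        score = agreement - V
        if score > best_score:
            best_mask, best_score = x, score
    return best_mask

# Hypothetical candidate generation by a threshold sweep, usable in lieu of
# a parametric maxflow solver (it ignores the pairwise smoothness term):
# candidates = [s_bg >= t for t in np.linspace(0.1, 0.9, 9)]
```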
23) extracting the foreground region features, and determining the foreground-based saliency of each superpixel block in the image through feature contrast, where FG is the set of acquired foreground superpixel blocks, and d_c(s_i, s_j) and d_l(s_i, s_j) are respectively the color distance and the spatial distance between superpixel blocks. The foreground-based saliency map is assembled from the foreground-based saliency of each superpixel block, as shown in fig. 4(d); because the background region and the foreground region usually differ in both color and position, the foreground-based saliency map computed in this way effectively suppresses background noise. A sketch of this contrast is given below.
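The feature contrast of this step is sketched below; the exponential kernel combining d_c and d_l is an assumed form standing in for the formula above, with `sigma_c` and `sigma_l` hypothetical bandwidths for CIELab features and [0, 1]-normalized coordinates.

```python
import numpy as np

def foreground_saliency(features, coords, fg_mask, sigma_c=20.0, sigma_l=0.25):
    """Foreground-based saliency by contrast with the set FG
    (step 23 sketch, assumed kernel form)."""
    fg_feats, fg_xy = features[fg_mask], coords[fg_mask]
    # d_c and d_l between every block and every foreground block.
    d_c = np.linalg.norm(features[:, None] - fg_feats[None], axis=2)
    d_l = np.linalg.norm(coords[:, None] - fg_xy[None], axis=2)
    # Blocks similar in color and close in position to FG score high,
    # which suppresses background noise far from the foreground.
    sal = np.exp(-d_c / sigma_c - d_l / sigma_l).sum(axis=1)
    return sal / sal.max()
```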
Step four, integrating and optimizing the saliency map:
since the background information and the foreground information have different roles in saliency detection, namely the background information is used for highlighting a foreground target, and the foreground information is used for suppressing background noise, the saliency maps based on the background and the foreground information obtained in the first two steps are integrated. The significance integration formula based on background and foreground is as follows:
wherein,for the integrated saliency of the ith superpixel block,representing the background-based saliency of the ith superpixel block,the significance degree of the ith superpixel block based on the foreground is represented, the value of the parameter β is a constant, the parameter range can be 2.5-8, the optimal detection effect can be obtained when β is set to be 4 through actual tests, and the significance map obtained through integration can highlight the foreground target and effectively inhibit background noise.
To obtain a visually smoother saliency map, the integrated saliency map is further optimized. The optimization function is a global quadratic function comprising a background term weighted by the background weight of each block, a foreground term defined by the foreground label and the integrated saliency of each block, and a smoothness term weighted by the color similarity w_c(s_i, s_j) of adjacent superpixel blocks. Here S_i and S_j denote the saliencies of two adjacent superpixel blocks to be optimized, w_i^bg and f_i are respectively the background weight and the foreground label of superpixel block s_i, S_i^int is the integrated saliency of s_i, and N is the number of superpixels in the image. The optimization function is solved (the solving process follows the prior art) in a single pass to obtain the optimized saliency of all superpixel blocks, forming the final saliency map based on background and foreground information; a sketch is given below.
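Below is a sketch of this optimization under an assumed quadratic energy consistent with the three terms named above, Σ_i w_i^bg S_i² + Σ_i f_i (S_i − S_i^int)² + Σ_{i,j} w_c(s_i, s_j)(S_i − S_j)²; this exact energy is an assumption, not a verbatim reproduction of the patent formula. Being quadratic, it is minimized by a single sparse linear solve, matching the single-pass solution described above.

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import spsolve

def optimize_saliency(s_int, w_bg, f_label, edges, features, sigma_c=10.0):
    """Single-pass global refinement of the integrated saliency
    (step 32 sketch, assumed quadratic energy)."""
    n = len(s_int)
    rows, cols, vals = [], [], []
    for i, j in edges:  # adjacent superpixel pairs
        w_c = np.exp(-np.linalg.norm(features[i] - features[j]) ** 2
                     / (2 * sigma_c ** 2))  # color similarity w_c(s_i, s_j)
        rows += [i, j]; cols += [j, i]; vals += [w_c, w_c]
    W = csr_matrix((vals, (rows, cols)), shape=(n, n))
    L = diags(np.asarray(W.sum(axis=1)).ravel()) - W  # Laplacian: smoothness term

    # Setting the energy gradient to zero gives the sparse linear system
    # (diag(w_bg) + diag(f) + L) S = f * S^int, solved once for all blocks.
    A = diags(w_bg) + diags(f_label.astype(float)) + L
    return spsolve(A.tocsr(), f_label * s_int)
```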
Fig. 5 compares the saliency detection result of the embodiment of the present invention with detection results of the prior art. FIG. 5(c) is the detection result of the present invention; FIG. 5(d) is the detection result of the IT model (L. Itti, C. Koch, E. Niebur, A model of saliency-based visual attention for rapid scene analysis, IEEE Trans. Pattern Anal. Mach. Intell. 20(11) (1998) 1254-1259); FIG. 5(e) is the detection result of the XIE model (Y. Xie, H. Lu, M.-H. Yang, Bayesian saliency via low and mid level cues, IEEE Trans. Image Processing 22(5) (2013) 1689-1698); and FIG. 5(f) is the detection result of the BFS model (J. Wang, H. Lu, X. Li, N. Tong, W. Liu, Saliency detection via background and foreground seed selection, Neurocomputing 152 (2015) 359-368). The IT model is a gaze-focus prediction model that cannot uniformly highlight an entire salient object. The XIE model and the BFS model introduce foreground information through the corner convex hull and through regions generated by adaptive segmentation, respectively; the generated regions often contain background portions and cannot accurately reflect the characteristics of the foreground target.
The above description is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make several modifications and variations without departing from the technical principle of the present invention, and such modifications and variations should also be regarded as falling within the protection scope of the present invention.
Claims (7)
1. A color image saliency detection method based on background and foreground information, characterized by comprising the following steps:
step one, image preprocessing: performing over-segmentation on an input color image to obtain a series of superpixel blocks, which serve as the minimum processing units;
step two, saliency detection based on background information: selecting background seeds, and obtaining a rough saliency by contrasting the features of each superpixel block with the background seeds; defining the background weight of each superpixel block based on the feature distribution of the background seeds, and refining the rough saliency with the background weight to obtain the background-based saliency;
step three, saliency detection based on foreground information: segmenting the background-based saliency map formed in the previous step, selecting a compact foreground region from all segmentation results, extracting foreground region features, and obtaining the foreground-based saliency through feature contrast;
step four, saliency integration: integrating the background- and foreground-based saliencies obtained in the previous two steps to obtain the integrated saliency, and smoothing the integrated saliency to obtain the optimized saliency of all superpixel blocks.
2. The method of claim 1, wherein in step one, the over-segmentation adopts the SLIC superpixel segmentation method.
3. The color image saliency detection method based on background and foreground information of claim 1, wherein in step two, the specific process of obtaining the background-based saliency is:
11) selecting the superpixel blocks along the image border as background seeds, and contrasting the features of each superpixel block in the image with the background seeds to obtain the rough saliency of each superpixel block;
12) performing K-means clustering on the selected background seeds, and determining the probability that each cluster belongs to the background from its spatial distribution; the background weight of the background seeds in the k-th cluster is defined as:

P_k = 1 - exp(-α(L_s + L_o)), k = 1, 2, …, K

where L_s is the length of the shortest superpixel chain containing all superpixel blocks of the k-th cluster, L_o is the number of superpixel blocks in that chain belonging to other clusters, the parameter α ranges from 0.01 to 0.08, and K is the number of selected cluster centers;
13) for each of the other superpixel blocks in the image, first calculating the geodesic distances between the superpixel block and all background seeds, and finding the background seed with the smallest geodesic distance to the block:

s* = argmin_{s_j ∈ BG} d_geo(s_i, s_j)

where BG is the set of background seeds and d_geo(s_i, s_j) is the geodesic distance between two superpixel blocks; from step 12), the background probability of each background seed is known; denoting the background probability of the nearest seed s* as P* and the geodesic distance between the superpixel block and s* as d*, the background weight of this superpixel block is then computed from P* and d*;
then sequentially calculating the background weight of each superpixel block;
14) the background-information-based saliency of a superpixel block is defined from its rough saliency and its background weight, where S_i^bg is the background-information-based saliency of superpixel block s_i and S_i^r is the rough saliency of s_i calculated in step 11).
4. The method of claim 3 wherein the parameter α is 0.05.
5. The method according to claim 3, wherein in step three, the process of obtaining the foreground-based saliency comprises:
21) segmenting the background-based saliency map obtained in the previous step using the parametric maxflow method to obtain a series of compact foreground regions; the maxflow energy trades off an area-weighted data term derived from the background-based saliency against a pairwise smoothness term between adjacent blocks, where N is the number of superpixels in the image, A_i is the area of superpixel block s_i, x_i ∈ {1, 0} denotes whether superpixel block s_i belongs to the foreground region, e_ij is the similarity between adjacent superpixel blocks, and x^f is the resulting foreground segmentation;
22) selecting the segmentation result with the optimal value as the foreground region, according to its consistency with the background-based saliency map and its spatial compactness, where X^f is the set of obtained segmentation results, N is the number of superpixel blocks, S_i^bg is the background-based saliency of superpixel block s_i, and V(x^f) is the variance of the spatial coordinates of segmentation result x^f;
23) extracting the features of the selected foreground region, and determining the foreground-based saliency of each superpixel block in the image through feature contrast, where FG is the set of acquired foreground superpixel blocks, and d_c(s_i, s_j) and d_l(s_i, s_j) are respectively the color distance and the spatial distance between superpixel blocks.
6. The method for detecting the saliency of color images based on background and foreground information of claim 5, wherein in said step four, the specific process is as follows:
31) integrating the obtained background- and foreground-based saliencies, where S_i^int is the integrated saliency of superpixel block s_i, S_i^bg is its background-based saliency, S_i^fg is its foreground-based saliency, and the parameter β ranges from 2.5 to 8;
32) the integrated saliency is further optimized; the optimization function is a global quadratic function comprising a background term weighted by the background weight of each block, a foreground term defined by the foreground label and the integrated saliency of each block, and a smoothness term weighted by the color similarity w_c(s_i, s_j) of adjacent superpixel blocks; here S_i and S_j denote the saliencies of two adjacent superpixel blocks to be optimized, w_i^bg and f_i are respectively the background weight and the foreground label of superpixel block s_i, S_i^int is the integrated saliency of s_i, and N is the number of superpixels in the image; solving this global optimization function yields the optimized saliency of all superpixel blocks.
7. The method of claim 6 wherein the parameter β is 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610654316.1A CN106327507B (en) | 2016-08-10 | 2016-08-10 | A kind of color image conspicuousness detection method based on background and foreground information |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610654316.1A CN106327507B (en) | 2016-08-10 | 2016-08-10 | A kind of color image conspicuousness detection method based on background and foreground information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106327507A true CN106327507A (en) | 2017-01-11 |
CN106327507B CN106327507B (en) | 2019-02-22 |
Family
ID=57740141
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610654316.1A Expired - Fee Related CN106327507B (en) | 2016-08-10 | 2016-08-10 | A kind of color image conspicuousness detection method based on background and foreground information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106327507B (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106709474A (en) * | 2017-01-23 | 2017-05-24 | 无锡职业技术学院 | Handwritten telephone number identification, verification and information sending system |
CN107016682A (en) * | 2017-04-11 | 2017-08-04 | 四川大学 | A kind of notable object self-adapting division method of natural image |
CN107194870A (en) * | 2017-05-24 | 2017-09-22 | 北京大学深圳研究生院 | A kind of image scene reconstructing method based on conspicuousness object detection |
CN107452013A (en) * | 2017-05-27 | 2017-12-08 | 深圳市美好幸福生活安全系统有限公司 | Conspicuousness detection method based on Harris Corner Detections and Sugeno fuzzy integrals |
CN108198172A (en) * | 2017-12-28 | 2018-06-22 | 北京大学深圳研究生院 | Image significance detection method and device |
CN108965739A (en) * | 2018-06-22 | 2018-12-07 | 北京华捷艾米科技有限公司 | video keying method and machine readable storage medium |
CN109166106A (en) * | 2018-08-02 | 2019-01-08 | 山东大学 | A kind of target detection aligning method and apparatus based on sliding window |
CN109472259A (en) * | 2018-10-30 | 2019-03-15 | 河北工业大学 | Conspicuousness detection method is cooperateed with based on energy-optimised image |
CN110310263A (en) * | 2019-06-24 | 2019-10-08 | 北京师范大学 | A kind of SAR image residential block detection method based on significance analysis and background priori |
CN110866896A (en) * | 2019-10-29 | 2020-03-06 | 中国地质大学(武汉) | Image saliency target detection method based on k-means and level set super-pixel segmentation |
CN112183556A (en) * | 2020-09-27 | 2021-01-05 | 长光卫星技术有限公司 | Port ore heap contour extraction method based on spatial clustering and watershed transformation |
CN112861858A (en) * | 2021-02-19 | 2021-05-28 | 首都师范大学 | Significance truth diagram generation method and significance detection model training method |
CN113378873A (en) * | 2021-01-13 | 2021-09-10 | 杭州小创科技有限公司 | Algorithm for determining attribution or classification of target object |
CN117745563A (en) * | 2024-02-21 | 2024-03-22 | 深圳市格瑞邦科技有限公司 | Dual-camera combined tablet personal computer enhanced display method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150235374A1 (en) * | 2014-02-20 | 2015-08-20 | Nokia Corporation | Method, apparatus and computer program product for image segmentation |
CN105513070A (en) * | 2015-12-07 | 2016-04-20 | 天津大学 | RGB-D salient object detection method based on foreground and background optimization |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150235374A1 (en) * | 2014-02-20 | 2015-08-20 | Nokia Corporation | Method, apparatus and computer program product for image segmentation |
CN105513070A (en) * | 2015-12-07 | 2016-04-20 | 天津大学 | RGB-D salient object detection method based on foreground and background optimization |
Non-Patent Citations (3)
Title |
---|
JIANPENG WANG ET AL.: "Saliency detection via background and foreground seed selection", Neurocomputing *
LAURENT ITTI ET AL.: "A Model of Saliency-Based Visual Attention for Rapid Scene Analysis", IEEE Transactions on Pattern Analysis and Machine Intelligence *
HAN SHOUDONG ET AL.: "Fast Graph Cuts image segmentation method based on Gaussian superpixels", Acta Automatica Sinica *
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106709474A (en) * | 2017-01-23 | 2017-05-24 | 无锡职业技术学院 | Handwritten telephone number identification, verification and information sending system |
CN107016682A (en) * | 2017-04-11 | 2017-08-04 | 四川大学 | A kind of notable object self-adapting division method of natural image |
CN107194870B (en) * | 2017-05-24 | 2020-07-28 | 北京大学深圳研究生院 | Image scene reconstruction method based on salient object detection |
CN107194870A (en) * | 2017-05-24 | 2017-09-22 | 北京大学深圳研究生院 | A kind of image scene reconstructing method based on conspicuousness object detection |
CN107452013A (en) * | 2017-05-27 | 2017-12-08 | 深圳市美好幸福生活安全系统有限公司 | Conspicuousness detection method based on Harris Corner Detections and Sugeno fuzzy integrals |
CN108198172A (en) * | 2017-12-28 | 2018-06-22 | 北京大学深圳研究生院 | Image significance detection method and device |
CN108965739A (en) * | 2018-06-22 | 2018-12-07 | 北京华捷艾米科技有限公司 | video keying method and machine readable storage medium |
CN109166106A (en) * | 2018-08-02 | 2019-01-08 | 山东大学 | A kind of target detection aligning method and apparatus based on sliding window |
CN109472259B (en) * | 2018-10-30 | 2021-03-26 | 河北工业大学 | Image collaborative saliency detection method based on energy optimization |
CN109472259A (en) * | 2018-10-30 | 2019-03-15 | 河北工业大学 | Conspicuousness detection method is cooperateed with based on energy-optimised image |
CN110310263A (en) * | 2019-06-24 | 2019-10-08 | 北京师范大学 | A kind of SAR image residential block detection method based on significance analysis and background priori |
CN110310263B (en) * | 2019-06-24 | 2020-12-01 | 北京师范大学 | SAR image residential area detection method based on significance analysis and background prior |
CN110866896A (en) * | 2019-10-29 | 2020-03-06 | 中国地质大学(武汉) | Image saliency target detection method based on k-means and level set super-pixel segmentation |
CN112183556A (en) * | 2020-09-27 | 2021-01-05 | 长光卫星技术有限公司 | Port ore heap contour extraction method based on spatial clustering and watershed transformation |
CN112183556B (en) * | 2020-09-27 | 2022-08-30 | 长光卫星技术股份有限公司 | Port ore heap contour extraction method based on spatial clustering and watershed transformation |
CN113378873A (en) * | 2021-01-13 | 2021-09-10 | 杭州小创科技有限公司 | Algorithm for determining attribution or classification of target object |
CN112861858A (en) * | 2021-02-19 | 2021-05-28 | 首都师范大学 | Significance truth diagram generation method and significance detection model training method |
CN112861858B (en) * | 2021-02-19 | 2024-06-07 | 北京龙翼风科技有限公司 | Method for generating saliency truth value diagram and method for training saliency detection model |
CN117745563A (en) * | 2024-02-21 | 2024-03-22 | 深圳市格瑞邦科技有限公司 | Dual-camera combined tablet personal computer enhanced display method |
CN117745563B (en) * | 2024-02-21 | 2024-05-14 | 深圳市格瑞邦科技有限公司 | Dual-camera combined tablet personal computer enhanced display method |
Also Published As
Publication number | Publication date |
---|---|
CN106327507B (en) | 2019-02-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106327507B (en) | A kind of color image conspicuousness detection method based on background and foreground information | |
Yan et al. | Unsupervised image saliency detection with Gestalt-laws guided optimization and visual attention based refinement | |
CN109522908B (en) | Image significance detection method based on region label fusion | |
JP7026826B2 (en) | Image processing methods, electronic devices and storage media | |
EP2523165B1 (en) | Image processing method and image processing device | |
CN105869173B (en) | A kind of stereoscopic vision conspicuousness detection method | |
CN108537239B (en) | Method for detecting image saliency target | |
CN108629783B (en) | Image segmentation method, system and medium based on image feature density peak search | |
CN109389129A (en) | A kind of image processing method, electronic equipment and storage medium | |
US20140314299A1 (en) | System and Method for Multiplexed Biomarker Quantitation Using Single Cell Segmentation on Sequentially Stained Tissue | |
CN105913456A (en) | Video significance detecting method based on area segmentation | |
WO2019071976A1 (en) | Panoramic image saliency detection method based on regional growth and eye movement model | |
EP4250228A2 (en) | Deformity edge detection | |
Xu et al. | A novel edge-oriented framework for saliency detection enhancement | |
CN113706564A (en) | Meibomian gland segmentation network training method and device based on multiple supervision modes | |
CN108364300A (en) | Vegetables leaf portion disease geo-radar image dividing method, system and computer readable storage medium | |
CN106295639A (en) | A kind of virtual reality terminal and the extracting method of target image and device | |
CN111091129A (en) | Image salient region extraction method based on multi-color characteristic manifold sorting | |
CN106529441A (en) | Fuzzy boundary fragmentation-based depth motion map human body action recognition method | |
Katkar et al. | A novel approach for medical image segmentation using PCA and K-means clustering | |
CN108154513A (en) | Cell based on two photon imaging data detects automatically and dividing method | |
CN112419335B (en) | Shape loss calculation method of cell nucleus segmentation network | |
CN102509308A (en) | Motion segmentation method based on mixtures-of-dynamic-textures-based spatiotemporal saliency detection | |
CN117292217A (en) | Skin typing data augmentation method and system based on countermeasure generation network | |
CN108765384B (en) | Significance detection method for joint manifold sequencing and improved convex hull |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication | 
 | C10 | Entry into substantive examination | 
 | SE01 | Entry into force of request for substantive examination | 
 | GR01 | Patent grant | 
 | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20190222; Termination date: 20210810