US11120556B2 - Iterative method for salient foreground detection and multi-object segmentation - Google Patents
Iterative method for salient foreground detection and multi-object segmentation
- Publication number
- US11120556B2 US11120556B2 US16/880,505 US202016880505A US11120556B2 US 11120556 B2 US11120556 B2 US 11120556B2 US 202016880505 A US202016880505 A US 202016880505A US 11120556 B2 US11120556 B2 US 11120556B2
- Authority
- US
- United States
- Prior art keywords
- image
- saliency
- score
- graph
- superpixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G06K9/4671—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/162—Segmentation; Edge detection involving graph-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20072—Graph-based image processing
Definitions
- a system and method overcomes the deficiencies in prior art systems and methods by employing an iterative approach so that a wider array of images, including images with multiple subjects, can be analyzed for salient foreground objects.
- the present invention is directed to a system and method for iterative foreground detection and multi-object segmentation.
- a new “background prior” is introduced to improve the foreground segmentation results.
- three complementary embodiments are presented and demonstrated to detect and segment foregrounds containing multiple objects.
- the first embodiment performs an iterative segmentation of the image to “pull out” the various salient objects in the image.
- a higher dimensional embedding of the image graph is used to estimate the saliency score and extract multiple salient objects.
- a newly proposed metric is used to automatically pick the number of eigenvectors to consider in an alternative method to iteratively compute the image saliency map.
- Experimental results show that the proposed methods succeed in extracting multiple foreground objects from an image with a much better accuracy than previous methods.
- FIG. 1 shows a comparison of the saliency maps after an application of the method of the present invention, specifically showing an original image and corresponding saliency maps according to prior art methods and an improved saliency map according to the present invention
- FIG. 2 shows an original image having a plurality of separate objects and a plurality of saliency maps computed from non-zero eigenvectors according to the present invention
- FIG. 3 shows an original image having a single object and a plurality of saliency maps computed from non-zero eigenvectors according to the present invention
- FIG. 4 shows a flowchart corresponding to a first embodiment of the invention
- FIG. 5 shows an example of the progression of the method of the present invention, specifically showing an original image with a plurality of separate objects, and a selection of an optimum saliency map associated with a number of iterations of the method of the present invention
- FIG. 6 shows an example of the progression of the method of the present invention, specifically showing an original image with a single object, and a selection of an optimum saliency map associated with a number of iterations of the method of the present invention
- FIG. 7 shows an example of the progression of the method of the present invention, specifically showing an original image with four separate objects, and a selection of an optimum saliency map associated with a number of iterations of the method of the present invention
- FIG. 8 shows a flowchart corresponding to a second embodiment of the invention.
- FIG. 9 shows another example of the progression of the method of the present invention, wherein the total number of eigenvectors is three, and the best saliency map for an image with a single object corresponds to one iteration, and the best saliency map for an image with four objects corresponds to three iterations;
- FIG. 10 shows a flowchart corresponding to a third embodiment of the invention.
- FIG. 11 shows another example of the progression of the method of the present invention for an image with multiple salient subjects and corresponding saliency maps for a total of six iterations of the method of the present invention
- FIG. 12 shows another example of the progression of the method of the present invention for an image with a single salient subject wherein only one iteration is performed;
- FIG. 13 shows examples of saliency maps obtained according to the third method of the present invention.
- FIG. 14 shows examples of improved performance in saliency maps using a higher dimensional node embedding according to the method of the present invention.
- the present invention is directed to a system and method for automatically detecting salient foreground objects and iteratively segmenting these objects in the scene of an image.
- RAG: Image Region Adjacency Graph
- each vertex v ∈ V represents a superpixel from SLIC and is assigned the mean Lab color of the superpixel.
- the edge set E consists of the edges connecting vertices i and j if their corresponding superpixels share a border in the segmented image. Each edge is assigned a weight that is inversely proportional to the Lab color difference between the neighboring superpixels:
w_{i,j} = 1 / (‖c_i − c_j‖_2 + ε)   (1)
- the graph G can be augmented with a background node b, which is assigned the mean color of the boundary, and a set of edges that connects the background node and the superpixels on the edge of the image with weights computed by equation (1).
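As an illustration only, the following Python sketch outlines such a graph construction from SLIC superpixels with mean Lab colors and edge weights of the form of equation (1); the function name build_rag, the segment count, and the neighbor-finding details are assumptions, not part of the patent.

```python
import numpy as np
from skimage import color
from skimage.segmentation import slic

EPS = 1e-4  # the small constant epsilon ensuring numerical stability

def build_rag(image_rgb, n_segments=300):
    """Vertices: SLIC superpixels with mean Lab colors. Edges: superpixel
    pairs sharing a border, weighted by w_ij = 1/(||c_i - c_j||_2 + eps)."""
    labels = slic(image_rgb, n_segments=n_segments, start_label=0)
    lab = color.rgb2lab(image_rgb)
    n = labels.max() + 1
    # Mean Lab color per superpixel.
    colors = np.array([lab[labels == i].mean(axis=0) for i in range(n)])
    # Adjacent superpixels: compare horizontally/vertically neighboring labels.
    edges = set()
    for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
        mask = a != b
        edges |= set(map(tuple, np.sort(np.stack([a[mask], b[mask]], axis=1), axis=1)))
    weights = {(i, j): 1.0 / (np.linalg.norm(colors[i] - colors[j]) + EPS)
               for i, j in edges}
    return labels, colors, weights
```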
- Embodiments of the present invention are directed to augmenting the “background prior,” which will be described in more detail.
- the background is often very cluttered, and computing the edge weights from the average background color will fail to capture the background prior effectively: the average background color will be sufficiently different from each of the border superpixels, resulting in very small weights and an unsatisfying saliency map.
- a set of colors representing the background is assigned to the background node.
- a K-Means clustering of the border colors is performed, and then the K-Means cluster centers, {c_1^b, …, c_k^b}, are used to represent the background prior in the node.
- the maximum of the weights computed between region i and each of the k cluster-center colors is taken:
- w_{i,b} = max_{j ∈ {1, …, k}} 1 / (‖c_i − c_j^b‖_2 + ε)   (4)
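A minimal sketch of this augmented background prior, assuming the labels/colors arrays from the construction sketched earlier; the value of k and the function name are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def background_weights(labels, colors, k=3, eps=1e-4):
    """Equation (4): weight each border superpixel against the closest of the
    k K-Means cluster centers of the border superpixel colors."""
    border_ids = np.unique(np.concatenate(
        [labels[0], labels[-1], labels[:, 0], labels[:, -1]]))
    centers = KMeans(n_clusters=k, n_init=10).fit(colors[border_ids]).cluster_centers_
    # max_j 1/(||c_i - c_j^b|| + eps) is attained at the nearest cluster center.
    return {int(i): 1.0 / (np.linalg.norm(centers - colors[i], axis=1).min() + eps)
            for i in border_ids}
```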
- FIG. 1 shows a comparison of the saliency maps after such enforcement of the background prior. Specifically, FIG. 1 shows the original image 101 and its saliency map ground truth 104, as well as the saliency map 102 produced according to the present invention, which is much better than the saliency map 103 produced by Perazzi et al.
- Embodiments of the present invention are also directed to detecting multiple objects, which will be described in more detail.
- the foreground segmentation method allows for detecting multiple salient subjects in the image by using the following schemes: (1) an iterative foreground segmentation scheme, and (2) two alternative multi-object foreground segmentation schemes, which use the eigenvectors of the image Region Adjacency Graph ("RAG") as an embedding for the nodes and analyze the presence of additional objects. This embedding is then used to calculate an alternative saliency score.
- Both schemes use a metric to determine the ideal foreground segmentation.
- the metric used for picking the best saliency map, and the Silhouette score, which is its main component, are described below.
- the Silhouette score is now described in further detail.
- K-Means clustering is used to cluster the saliency scores into two (foreground/background) clusters, and then a metric known as the "Silhouette score" is computed, first introduced by Rousseeuw (P. Rousseeuw, "Silhouettes: A graphical aid to the interpretation and validation of cluster analysis," Journal of Computational and Applied Mathematics, 20:53-65, 1987).
- the Silhouette score is one of the possible metrics used in the interpretation and validation of cluster analysis.
- s(i) = (b(i) − a(i)) / max{a(i), b(i)}   (5)
which is then combined into a final score f_sil for the image by taking the average of s(i) over all of the superpixels.
- Stopping criterion/metric is now described. Both of the multi-object segmentation schemes detailed in the next section rely on a stopping criterion or metric, which determines either the ideal number of iterations or the number of eigenvectors to consider when computing the saliency map for images with multiple objects. In order to determine this number, a metric that combines the Silhouette score, f_sil, and the mean image saliency of the image is used (equation (6)).
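The two ingredients of this metric can be sketched as follows. The exact combination is the one defined by equation (6) of the patent, which is not reproduced in this extract, so it is left here as a pluggable combine function; the default sum is only a placeholder assumption.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def score_image(saliency_scores, combine=lambda f_sil, mean_sal: f_sil + mean_sal):
    """Silhouette score of a 2-cluster (foreground/background) K-Means on the
    saliency scores, plus the mean saliency; `saliency_scores` is the
    per-superpixel score vector."""
    s = np.asarray(saliency_scores, dtype=float).reshape(-1, 1)
    fg_bg = KMeans(n_clusters=2, n_init=10).fit_predict(s)
    f_sil = silhouette_score(s, fg_bg)   # average of s(i) over all superpixels
    return combine(f_sil, float(s.mean()))
```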
- FIG. 2 shows a first original image 201a, including a plurality of objects, a saliency map 202a from a first non-zero eigenvector, a saliency map 203a from a second non-zero eigenvector, a saliency map 204a from a third non-zero eigenvector, and a final saliency map 205a.
- FIG. 2 also shows a second original image 201b, including a plurality of objects, a saliency map 202b from a first non-zero eigenvector, a saliency map 203b from a second non-zero eigenvector, a saliency map 204b from a third non-zero eigenvector, and a final saliency map 205b.
- the same cannot be said of many of the images that contain only a single salient object, as can be seen in FIG. 3.
- the Fiedler vector picks out the most salient object in the image, and the subsequent eigenvector (at times, several eigenvectors) contains redundant information regarding that object.
- FIG. 3 shows a first original image including a single salient object 301a, a corresponding saliency map from a first non-zero eigenvector 302a, and a corresponding saliency map from a second non-zero eigenvector 303a. Also shown in FIG. 3 is a second original image including a single salient object 301b, a corresponding saliency map from a first non-zero eigenvector 302b, and a corresponding saliency map from a second non-zero eigenvector 303b.
- Stopping criterion based on the eigenvalue difference is now described.
- a different stopping criterion is based on the percentage eigenvalue difference between subsequent dimensions:
- Δ_i = (λ_{i+1} − λ_i) / λ_{i+1}   (7)
- where λ_i is the i-th eigenvalue.
- Multi-object segmentation schemes according to embodiments of the present invention are now described.
- a first iterative foreground segmentation scheme is described below:
- FIG. 4 shows a flowchart that corresponds to the first scheme described above.
- In step 402, the number of iterations, n, to consider in choosing the best saliency map is decided.
- In step 406, the score_image for iteration i is computed.
- In step 410, the set ℰ of nodes (superpixels) for which the saliency score is greater than a threshold S_th is found.
- In step 412, the nodes that belong to the set ℰ are cut out of the RAG, and saliency scores are computed for the reduced graph as previously described.
- In step 414, the saliency scores of the smaller region are combined with the stored scores for the nodes from the set ℰ.
- In step 416, the saliency map is computed using the new saliency scores, and the method returns to step 406.
- FIG. 5 shows an example of the progression of the method of the present invention, wherein the best saliency map is chosen after either three or four iterations.
- Saliency map 506a is chosen.
- FIG. 6 shows the original image 601 of a scene with one salient object and the corresponding saliency maps as the number of eigenvectors used for the superpixel embedding is varied: one eigenvector 602, two eigenvectors 603, and three eigenvectors 604.
- The saliency map 602 with one eigenvector was selected as the best according to the score.
- FIG. 7 shows the original image 701 of a scene with multiple salient objects and the corresponding saliency maps as the number of eigenvectors used for the superpixel embedding is varied: one eigenvector 702, two eigenvectors 703, and three eigenvectors 704.
- The saliency map 704 with three eigenvectors was selected as the best according to the score.
- FIG. 8 shows the flowchart that corresponds to the second method of the present invention.
- the total number of iterations, n, to consider in choosing the best Saliency map is decided.
- the RAG of the image as described in Perazzi et al. is constructed and augmented with the improved background node.
- the Laplacian matrix of the image RAG is constructed, its eigendecomposition is computed, and k is set equal to 1.
- the k eigenvectors corresponding to the k smallest nonzero eigenvalues are considered and used as a k-dimensional embedding of the graph nodes.
- the k-dimensional embedding is a numerical representation of each of the nodes of the image RAG.
- the embedding consists of k numerical descriptors obtained from the k eigenvectors in consideration (i.e., the component of each eigenvector that corresponds to a particular node is used; e.g., if the node called i is represented by the m-th component of an eigenvector, the k-dimensional embedding of node i consists of the m-th components of each of the k eigenvectors).
- the distance between the k-dimensional embedding of the background node and node i is calculated.
- In step 812, all of the distances are rescaled to lie in the range [0, 1], which gives the relevant saliency scores S.
- In step 814, the saliency map is computed from the new saliency scores.
- In step 816, the score_image for iteration i is computed.
- FIG. 9 shows an example of the progression of an embodiment of the present invention, in which the total number of eigenvectors to consider is three.
- The number of iterations, n, is selected.
- Here, n is set equal to three.
- FIG. 9 shows an original image 901a, which includes a single object.
- FIG. 9 also shows a second original image 901b, which includes four objects.
- An alternative embodiment of the present invention is directed to a method comprising extracting multiple salient objects.
- the method first computes the desired number of eigenvectors to consider and subsequently constructs the saliency map.
- an adaptive way is used to calculate a threshold.
- the adaptive threshold was proposed in “Frequency-tuned Salient Region Detection,” by R. Achanta, S. Hemami, F. Estrada and S. Süsstrunk, IEEE International Conference on Computer Vision and Pattern Recognition, pp. 1597-1604 (2009).
- the adaptive threshold is defined as twice the mean image saliency: T_a = (2 / (W × H)) Σ_{x=1}^{W} Σ_{y=1}^{H} S(x, y), where W and H are the image width and height.
- FIG. 10 shows a flowchart that corresponds to this method.
- the number of iterations, n, to consider in choosing the best Saliency map is decided.
- the RAG of the image as described in Perazzi et al. is constructed, and augmented with an improved background node.
- the image threshold T_a^k for dimension k is computed.
- the new vector of saliency scores S^k for each superpixel i is computed as set forth above.
- Step 1014 asks if k is equal to n. If yes, then the method terminates at step 1016 . If no, then k is set to k+1, and the method continues at step 1008 .
- FIG. 11 shows an example of the progression of the method illustrated in FIG. 10 and described above.
- FIG. 11 includes an original image 1100 , which includes multiple salient objects.
- the best dimension (which is six in this case) is chosen according to equation (8), as is shown in graph 1107 .
- FIG. 12 shows an example of the progression of the method illustrated in FIG. 10 and described above.
- FIG. 12 includes an original image 1200 , which includes a single salient object.
- the best dimension (which is one in this case) is chosen according to equation (8) as is shown in graph 1202 .
- FIG. 13 shows an example of the saliency maps as obtained by the method illustrated in FIG. 10 and described above. Specifically, FIG. 13 shows example plots (1305, 1306): one of the eigenvalue function difference as defined by equation (7) for a multi-subject image (plot 1305), and one of the eigenvalue function difference for a single-subject image (plot 1306).
- Plot 1305 corresponds to the original image 1301 , which contains multiple salient objects.
- the final saliency map for original image 1301 is shown as 1302 .
- Plot 1306 corresponds to the original image 1303, which contains just a single salient object.
- the final saliency map for original image 1303 is shown as 1304 .
- Segmentation results of the present invention are described below.
- By assigning the background node a set of the most frequent colors, in the case where the image has a "complicated" background or multiple colors, the resulting graph will have higher weights on the edges connecting the border superpixels to the background node, which often produces good foreground detection results.
- an embodiment of the present invention iteratively detects the most salient objects in the foreground. As can be seen from the example output depicted in FIG. 14 , improved results in detecting multiple salient subjects as compared to the prior art methods are obtained.
- FIG. 14 shows three original images (1401a, 1401b, and 1401c), their corresponding saliency maps as obtained pursuant to the prior art method described by Perazzi et al. (1402a, 1402b, and 1402c), and the corresponding saliency maps obtained pursuant to the present invention described herein (1403a, 1403b, and 1403c).
- the method of the present invention provides:
- w_{i,b} = max_{j ∈ {1, …, k}} 1 / (‖c_i − c_j^b‖_2 + ε)   (4)
Abstract
Description
where c_i is the mean Lab color of the i-th superpixel and ε is a small constant that ensures the numerical stability of the algorithm (e.g., ε = 10^−4). In order to represent the assumption that most of the border pixels belong to the background, the graph G can be augmented with a background node b, which is assigned the mean color of the boundary, and a set of edges connecting the background node and the superpixels on the edge of the image, with weights computed by equation (1).
S = −sign(f_b) · f   (2)
S is then scaled to the range [0, 1], where f_b represents the entry of the Fiedler vector corresponding to the background node.
S(i) = ‖f_i − f_b‖   (3)
where S(i) is the i-th component of the vector S and the saliency score for the i-th superpixel, and f_i and f_b are the embeddings of the i-th and background superpixels.
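A minimal sketch of equations (2)-(3), assuming a connected graph whose weighted adjacency includes the background node; the function and variable names are illustrative, not from the patent.

```python
import numpy as np

def fiedler_saliency(n_nodes, weights, bg_index):
    """Saliency from the Fiedler vector of the graph Laplacian, sign-flipped
    so the background node scores low (equation (2)), rescaled to [0, 1]."""
    W = np.zeros((n_nodes, n_nodes))
    for (i, j), w in weights.items():
        W[i, j] = W[j, i] = w
    L = np.diag(W.sum(axis=1)) - W            # graph Laplacian
    _, vecs = np.linalg.eigh(L)
    f = vecs[:, 1]                            # Fiedler vector (2nd smallest eigenvalue)
    S = -np.sign(f[bg_index]) * f             # equation (2)
    S = (S - S.min()) / (S.max() - S.min() + 1e-12)
    return np.delete(S, bg_index)             # scores for the image superpixels only
```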
-
- a(i): average distance to the points in the same cluster as i (label that cluster A)
- D(i, C): average distance to the points in cluster C
- b(i) = min_{C≠A} D(i, C): by choosing the minimum of D(i, C), we compute the distance to the next-best cluster assignment for i.
The final Silhouette score for point i is computed as:
s(i) = (b(i) − a(i)) / max{a(i), b(i)}   (5)
which is then combined into a final score f_sil for the image by taking the average of s(i) over all of the superpixels.
where S(x, y) is the image saliency score at location (x, y) and A(I) represents the area of the image; the mean image saliency is the summation of the image saliency score at each location (x, y) divided by the area A(I). Then, in order to pick the final saliency map, the map with the highest overall image saliency score defined in equation (6) is chosen.
where λi is the ith eigenvalue.
Then, in order to get the ideal dimension n, the dimension that produces the largest difference is chosen:
n = argmax_{1 ≤ i < k} {Δ_i}   (8)
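A short sketch of equations (7)-(8), assuming the smallest nonzero Laplacian eigenvalues are supplied in ascending order; the function name is illustrative.

```python
import numpy as np

def best_dimension(eigenvalues):
    lam = np.asarray(eigenvalues, dtype=float)
    deltas = (lam[1:] - lam[:-1]) / lam[1:]   # Delta_i of equation (7)
    return int(np.argmax(deltas)) + 1         # n = argmax_{1<=i<k} Delta_i, eq. (8)
```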
-
- Perform an initial foreground segmentation as described in Perazzi et al. with the improved background model introduced earlier, and compute the score_image for this map.
- Now, iteratively perform the following steps:
- 1. Find the set ℰ of nodes (superpixels) for which the saliency score S_i is greater than a threshold S_th.
- 2. Modify the image RAG by cutting out the nodes that belong to the set ℰ (store the saliency scores of these nodes for later processing).
- 3. Find new saliency scores for the region remaining in the RAG by computing the Fiedler vector of the new graph and computing and modifying it in the same way as described in Perazzi et al.
- 4. Combine the saliency scores of the smaller region with the scores for the nodes from the set ℰ to obtain the new saliency image, and compute its score_image.
- 5. Repeat for a predetermined number of iterations.
- 6. Choose the segmentation map with the highest score_image, as sketched below.
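A high-level sketch of this loop. The saliency solver, the score_image metric of equation (6), and the graph-reduction helper are passed in as callables; they are assumptions standing in for the Perazzi-style computation, not patent code.

```python
import numpy as np

def iterative_segmentation(rag, compute_saliency, score_image, cut_nodes,
                           n_iters=4, s_th=0.5):
    S = compute_saliency(rag)                  # initial foreground segmentation
    best_S, best_score = S, score_image(S)
    for _ in range(n_iters):
        salient = np.flatnonzero(S > s_th)     # the set of salient nodes
        if salient.size == 0 or salient.size == S.size:
            break
        # cut_nodes returns the reduced RAG and the indices of surviving nodes.
        reduced, kept = cut_nodes(rag, salient)
        S = S.copy()
        S[kept] = compute_saliency(reduced)    # re-solve on the reduced graph;
        score = score_image(S)                 # cut nodes keep their stored scores
        if score > best_score:
            best_S, best_score = S, score
    return best_S                              # map with the highest score_image
```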
-
- Construct the RAG of the image as described in Perazzi et al. and augment it with the improved background node.
- Construct the Laplacian matrix of the Image RAG.
- Consider the k eigenvectors corresponding to the k smallest nonzero eigenvalues and use them as a k-dimensional embedding of the graph nodes.
- Calculate the new saliency score by:
- 1. Calculate the distance between the k-dimensional embedding of the background node and node i.
- 2. Rescale all the distances to lie in the range between [0, 1], which will give us the relevant saliency scores S.
- Compute a metric (such as the one described above) for maps created by considering projections with a varying number of eigenvectors (up to four eigenvectors are considered for the embedding of the graph) and choose the map with the highest score achieved by the metric (i.e., the highest score_image if using the above metric); a sketch of the embedding step follows.
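A sketch of the embedding step referenced above, assuming a connected, background-augmented weighted adjacency matrix W; taking eigenvector columns 1..k as the k smallest nonzero-eigenvalue eigenvectors is an assumption that holds for a connected graph.

```python
import numpy as np

def embedding_saliency(W, bg_index, k):
    """Distance of each node's k-dimensional spectral embedding to the
    background node's embedding, rescaled to [0, 1]."""
    L = np.diag(W.sum(axis=1)) - W
    _, vecs = np.linalg.eigh(L)
    emb = vecs[:, 1:k + 1]                 # k smallest nonzero-eigenvalue vectors
    d = np.linalg.norm(emb - emb[bg_index], axis=1)
    d = np.delete(d, bg_index)             # drop the background node itself
    return (d - d.min()) / (d.max() - d.min() + 1e-12)
```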
This embodiment involves a method comprising the following steps:
-
- First, pre-compute the number, n, of eigenvectors to consider.
- Compute the vector of Saliency scores, S, for the superpixels using the improved background prior.
- If n = 1, then the method is complete. Otherwise, repeat the following procedure for n ≥ 2. Assume the saliency scores for the first k dimensions (k < n), which we will call S^k, have been computed. To incorporate the (k+1)-th dimension in the computation of the final saliency scores S, proceed as follows:
- Compute the saliency scores for the (k+1)-th dimension, S^{k+1}, by computing the distance of each superpixel to the background node and rescaling the scores to [0, 1].
- Compute the threshold T_a^{k+1} based on S^{k+1} and extract the set N of superpixels i for which S_i^{k+1} ≥ T_a^{k+1}.
- For i ∈ N, let S_i^{k+1} := max{S_i^{k+1}, S_i^k}; otherwise, S_i^{k+1} := S_i^k.
- If k+1 < n, repeat the procedure; otherwise, construct the image saliency map, as sketched below.
-
- 1. Modification of the image prior: instead of assigning to the image "background" node the average border background color (the average color of the border superpixels), the method first performs a K-Means clustering of the border colors. The method then attaches to the background node a set of colors that represent the cluster centers. To compute the edge weight between the background node and the border regions, the maximum of the weights computed between region i and each of the k cluster-center colors is taken:
- w_{i,b} = max_{j ∈ {1, …, k}} 1 / (‖c_i − c_j^b‖_2 + ε)   (4)
- 2. An iterative segmentation scheme, which extends the foreground segmentation to allow for the presence of multiple salient subjects in the image.
- 3. Alternative multi-object foreground segmentation, which uses the eigenvectors of the image RAG as an embedding for the nodes. This embedding is then used to calculate an alternative saliency score.
- 4. A new stopping criterion and metric for multi-object segmentation is used.
Claims (11)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/880,505 US11120556B2 (en) | 2016-12-20 | 2020-05-21 | Iterative method for salient foreground detection and multi-object segmentation |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662436803P | 2016-12-20 | 2016-12-20 | |
US15/847,050 US10706549B2 (en) | 2016-12-20 | 2017-12-19 | Iterative method for salient foreground detection and multi-object segmentation |
US16/880,505 US11120556B2 (en) | 2016-12-20 | 2020-05-21 | Iterative method for salient foreground detection and multi-object segmentation |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/847,050 Division US10706549B2 (en) | 2016-12-20 | 2017-12-19 | Iterative method for salient foreground detection and multi-object segmentation |
Publications (2)
Publication Number | Publication Date |
---|---|
US20200286239A1 US20200286239A1 (en) | 2020-09-10 |
US11120556B2 true US11120556B2 (en) | 2021-09-14 |
Family
ID=60991573
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/847,050 Active 2038-08-27 US10706549B2 (en) | 2016-12-20 | 2017-12-19 | Iterative method for salient foreground detection and multi-object segmentation |
US16/880,505 Active US11120556B2 (en) | 2016-12-20 | 2020-05-21 | Iterative method for salient foreground detection and multi-object segmentation |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/847,050 Active 2038-08-27 US10706549B2 (en) | 2016-12-20 | 2017-12-19 | Iterative method for salient foreground detection and multi-object segmentation |
Country Status (4)
Country | Link |
---|---|
US (2) | US10706549B2 (en) |
EP (1) | EP3559906B1 (en) |
CN (1) | CN110088805B (en) |
WO (1) | WO2018118914A2 (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018142496A1 (en) * | 2017-02-01 | 2018-08-09 | 株式会社日立製作所 | Three-dimensional measuring device |
CN109359654B (en) * | 2018-09-18 | 2021-02-12 | 北京工商大学 | Image segmentation method and system based on frequency tuning global saliency and deep learning |
CN110111338B (en) * | 2019-04-24 | 2023-03-31 | 广东技术师范大学 | Visual tracking method based on superpixel space-time saliency segmentation |
JP7475959B2 (en) * | 2020-05-20 | 2024-04-30 | キヤノン株式会社 | Image processing device, image processing method, and program |
CN111815582B (en) * | 2020-06-28 | 2024-01-26 | 江苏科技大学 | Two-dimensional code region detection method for improving background priori and foreground priori |
CN112200826B (en) * | 2020-10-15 | 2023-11-28 | 北京科技大学 | Industrial weak defect segmentation method |
CN112163589B (en) * | 2020-11-10 | 2022-05-27 | 中国科学院长春光学精密机械与物理研究所 | Image processing method, device, equipment and storage medium |
CN112418218B (en) * | 2020-11-24 | 2023-02-28 | 中国地质大学(武汉) | Target area detection method, device, equipment and storage medium |
CN112991361B (en) * | 2021-03-11 | 2023-06-13 | 温州大学激光与光电智能制造研究院 | Image segmentation method based on local graph structure similarity |
CN113160251B (en) * | 2021-05-24 | 2023-06-09 | 北京邮电大学 | Automatic image segmentation method based on saliency priori |
CN113705579B (en) * | 2021-08-27 | 2024-03-15 | 河海大学 | Automatic image labeling method driven by visual saliency |
CN114998320B (en) * | 2022-07-18 | 2022-12-16 | 银江技术股份有限公司 | Method, system, electronic device and storage medium for visual saliency detection |
CN115631208B (en) * | 2022-10-13 | 2023-06-16 | 中国矿业大学 | Unmanned aerial vehicle image mining area ground crack extraction method based on improved active contour model |
CN116703939B (en) * | 2023-05-16 | 2024-08-20 | 绿萌科技股份有限公司 | Color fruit image segmentation method based on color difference condition of super pixel region |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090254236A1 (en) * | 2005-10-11 | 2009-10-08 | Peters Ii Richard A | System and method for image mapping and visual attention |
US20100226564A1 (en) * | 2009-03-09 | 2010-09-09 | Xerox Corporation | Framework for image thumbnailing based on visual similarity |
US20110295515A1 (en) * | 2010-05-18 | 2011-12-01 | Siemens Corporation | Methods and systems for fast automatic brain matching via spectral correspondence |
US20120275701A1 (en) * | 2011-04-26 | 2012-11-01 | Minwoo Park | Identifying high saliency regions in digital images |
US20160239981A1 (en) * | 2013-08-28 | 2016-08-18 | Aselsan Elektronik Sanayi Ve Ticaret Anonim Sirketi | A semi automatic target initialization method based on visual saliency |
US20160379055A1 (en) * | 2015-06-25 | 2016-12-29 | Kodak Alaris Inc. | Graph-based framework for video object segmentation and extraction in feature space |
US20170337711A1 (en) * | 2011-03-29 | 2017-11-23 | Lyrical Labs Video Compression Technology, LLC | Video processing and encoding |
US20180295375A1 (en) * | 2017-04-05 | 2018-10-11 | Lyrical Labs Video Compression Technology, LLC | Video processing and encoding |
US10198629B2 (en) * | 2015-06-22 | 2019-02-05 | Photomyne Ltd. | System and method for detecting objects in an image |
US20190139282A1 (en) * | 2017-11-09 | 2019-05-09 | Adobe Inc. | Saliency-Based Collage Generation using Digital Images |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8929636B2 (en) * | 2012-02-02 | 2015-01-06 | Peter Yim | Method and system for image segmentation |
US9147255B1 (en) * | 2013-03-14 | 2015-09-29 | Hrl Laboratories, Llc | Rapid object detection by combining structural information from image segmentation with bio-inspired attentional mechanisms |
EP3028256A4 (en) * | 2013-07-31 | 2016-10-19 | Microsoft Technology Licensing Llc | Geodesic saliency using background priors |
US9330334B2 (en) * | 2013-10-24 | 2016-05-03 | Adobe Systems Incorporated | Iterative saliency map estimation |
CN104809729B (en) * | 2015-04-29 | 2018-08-28 | 山东大学 | A kind of saliency region automatic division method of robust |
CN105760886B (en) * | 2016-02-23 | 2019-04-12 | 北京联合大学 | A kind of more object segmentation methods of image scene based on target identification and conspicuousness detection |
-
2017
- 2017-12-19 CN CN201780078605.4A patent/CN110088805B/en active Active
- 2017-12-19 WO PCT/US2017/067309 patent/WO2018118914A2/en unknown
- 2017-12-19 EP EP17829821.2A patent/EP3559906B1/en active Active
- 2017-12-19 US US15/847,050 patent/US10706549B2/en active Active
-
2020
- 2020-05-21 US US16/880,505 patent/US11120556B2/en active Active
Patent Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7835820B2 (en) * | 2005-10-11 | 2010-11-16 | Vanderbilt University | System and method for image mapping and visual attention |
US20110082871A1 (en) * | 2005-10-11 | 2011-04-07 | Vanderbilt University | System and method for image mapping and visual attention |
US8060272B2 (en) * | 2005-10-11 | 2011-11-15 | Vanderbilt University | System and method for image mapping and visual attention |
US20090254236A1 (en) * | 2005-10-11 | 2009-10-08 | Peters Ii Richard A | System and method for image mapping and visual attention |
US20100226564A1 (en) * | 2009-03-09 | 2010-09-09 | Xerox Corporation | Framework for image thumbnailing based on visual similarity |
US20110295515A1 (en) * | 2010-05-18 | 2011-12-01 | Siemens Corporation | Methods and systems for fast automatic brain matching via spectral correspondence |
US20170337711A1 (en) * | 2011-03-29 | 2017-11-23 | Lyrical Labs Video Compression Technology, LLC | Video processing and encoding |
US20120275701A1 (en) * | 2011-04-26 | 2012-11-01 | Minwoo Park | Identifying high saliency regions in digital images |
US8401292B2 (en) * | 2011-04-26 | 2013-03-19 | Eastman Kodak Company | Identifying high saliency regions in digital images |
US20160239981A1 (en) * | 2013-08-28 | 2016-08-18 | Aselsan Elektronik Sanayi Ve Ticaret Anonim Sirketi | A semi automatic target initialization method based on visual saliency |
US9595114B2 (en) * | 2013-08-28 | 2017-03-14 | Aselsan Elektronik Sanayi Ve Ticaret Anonim Sirketi | Semi automatic target initialization method based on visual saliency |
US10198629B2 (en) * | 2015-06-22 | 2019-02-05 | Photomyne Ltd. | System and method for detecting objects in an image |
US20160379055A1 (en) * | 2015-06-25 | 2016-12-29 | Kodak Alaris Inc. | Graph-based framework for video object segmentation and extraction in feature space |
US10192117B2 (en) * | 2015-06-25 | 2019-01-29 | Kodak Alaris Inc. | Graph-based framework for video object segmentation and extraction in feature space |
US20180295375A1 (en) * | 2017-04-05 | 2018-10-11 | Lyrical Labs Video Compression Technology, LLC | Video processing and encoding |
US20190139282A1 (en) * | 2017-11-09 | 2019-05-09 | Adobe Inc. | Saliency-Based Collage Generation using Digital Images |
Also Published As
Publication number | Publication date |
---|---|
WO2018118914A2 (en) | 2018-06-28 |
EP3559906B1 (en) | 2024-02-21 |
US20200286239A1 (en) | 2020-09-10 |
US20180174301A1 (en) | 2018-06-21 |
CN110088805A (en) | 2019-08-02 |
EP3559906A2 (en) | 2019-10-30 |
CN110088805B (en) | 2023-06-06 |
US10706549B2 (en) | 2020-07-07 |
WO2018118914A3 (en) | 2018-07-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11120556B2 (en) | Iterative method for salient foreground detection and multi-object segmentation | |
JP4979033B2 (en) | Saliency estimation of object-based visual attention model | |
US7711146B2 (en) | Method and system for performing image re-identification | |
US12002259B2 (en) | Image processing apparatus, training apparatus, image processing method, training method, and storage medium | |
JP4699298B2 (en) | Human body region extraction method, apparatus, and program | |
US9501837B2 (en) | Method and system for unsupervised image segmentation using a trained quality metric | |
US20060039587A1 (en) | Person tracking method and apparatus using robot | |
JP4098021B2 (en) | Scene identification method, apparatus, and program | |
US9349194B2 (en) | Method for superpixel life cycle management | |
EP3073443B1 (en) | 3d saliency map | |
JP5939056B2 (en) | Method and apparatus for positioning a text region in an image | |
CN112837344A (en) | Target tracking method for generating twin network based on conditional confrontation | |
Chi | Self‐organizing map‐based color image segmentation with k‐means clustering and saliency map | |
Palou et al. | Occlusion-based depth ordering on monocular images with binary partition tree | |
Porikli et al. | Automatic video object segmentation using volume growing and hierarchical clustering | |
CN109785367B (en) | Method and device for filtering foreign points in three-dimensional model tracking | |
Jia et al. | Dense interpolation of 3d points based on surface and color | |
Sima et al. | An extension of the Felzenszwalb-Huttenlocher segmentation to 3D point clouds | |
Liang et al. | KmsGC: An Unsupervised Color Image Segmentation Algorithm Based on K‐Means Clustering and Graph Cut | |
Thinh et al. | Depth-aware salient object segmentation | |
Haindl et al. | Unsupervised hierarchical weighted multi-segmenter | |
Khelifi et al. | A new multi-criteria fusion model for color textured image segmentation | |
Kucer et al. | Augmenting salient foreground detection using fiedler vector for multi-object segmentation | |
CN113538256A (en) | Visual saliency model establishment method based on multiple regional characteristics | |
Hirzer et al. | An automatic hybrid segmentation approach for aligned face portrait images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: KODAK ALARIS INC., NEW YORK Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LOUI, ALEXANDER;KLOOSTERMAN, DAVID;KUCER, MICHAL;AND OTHERS;SIGNING DATES FROM 20170413 TO 20170418;REEL/FRAME:052728/0582 |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED |
|
AS | Assignment |
Owner name: KPP (NO. 2) TRUSTEES LIMITED, NORTHERN IRELAND Free format text: SECURITY INTEREST;ASSIGNOR:KODAK ALARIS INC.;REEL/FRAME:053993/0454 Effective date: 20200930 |
|
FEPP | Fee payment procedure |
Free format text: PETITION RELATED TO MAINTENANCE FEES GRANTED (ORIGINAL EVENT CODE: PTGR); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: THE BOARD OF THE PENSION PROTECTION FUND, UNITED KINGDOM Free format text: ASSIGNMENT OF SECURITY INTEREST;ASSIGNOR:KPP (NO. 2) TRUSTEES LIMITED;REEL/FRAME:058175/0651 Effective date: 20211031 |
|
AS | Assignment |
Owner name: THE BOARD OF THE PENSION PROTECTION FUND, UNITED KINGDOM Free format text: IP SECURITY AGREEMENT SUPPLEMENT (FISCAL YEAR 2022);ASSIGNOR:KODAK ALARIS INC.;REEL/FRAME:061504/0900 Effective date: 20220906 |
|
AS | Assignment |
Owner name: FGI WORLDWIDE LLC, NEW YORK Free format text: SECURITY AGREEMENT;ASSIGNOR:KODAK ALARIS INC.;REEL/FRAME:068325/0938 Effective date: 20240801 |
|
AS | Assignment |
Owner name: KODAK ALARIS INC., NEW YORK Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:THE BOARD OF THE PENSION PROTECTION FUND;REEL/FRAME:068481/0300 Effective date: 20240801 |