CN104504692A - Method for extracting a salient object from an image based on region contrast - Google Patents

Method for extracting a salient object from an image based on region contrast

Info

Publication number
CN104504692A
CN104504692A
Authority
CN
China
Prior art keywords: threshold, saliency map, pixel, value, designated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410781285.7A
Other languages
Chinese (zh)
Other versions
CN104504692B (en)
Inventor
刘志
叶林伟
李君浩
李利娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CN201410781285.7A
Publication of CN104504692A
Application granted
Publication of CN104504692B
Status: Expired - Fee Related
Anticipated expiration


Abstract

The invention discloses a method for extracting a salient object from an image based on region contrast. The method comprises the following concrete steps: (1) input an original image, and denote its saliency map by S and its object probability map by O; (2) compute the fusion coefficient of the saliency map and the object probability map; (3) according to the fusion coefficient, compute the region-contrast fused map and extract the salient object from the image. Because the method computes the region-contrast fused map from both the saliency map and the object probability map, it extracts the salient object from the image more accurately and more completely than methods that use the saliency map or the object probability map alone.

Description

Method for extracting a salient object from an image based on region contrast
Technical field
The present invention relates to the technical fields of computer information and image processing, and specifically to a method for extracting a salient object from an image.
Background technology
According to research in psychology and on human vision, when a person observes an image, attention is not paid equally to every region of the image; a saliency map corresponding to the degree of attention is thus produced. In most cases, when a person observes an image, attention is not distributed evenly over the entire image but is concentrated on a certain object in it; such an object is called the salient object. Extracting the salient object automatically would be of great help to applications such as image scaling, image recognition and image retrieval. Salient object extraction methods arose against this background; their aim is to extract exactly the salient object with the background removed. For example, Rother et al. published "GrabCut: interactive foreground extraction using iterated graph cuts" in ACM Transactions on Graphics in 2004. In that work a rectangular window is drawn by hand to specify the candidate salient object region, and the salient object is then extracted by a graph-cut method. Because the candidate region must be specified manually and can only be delimited by a rectangular window, the wide application of the method is limited. Cheng et al. published "Global contrast based salient region detection" at the 2011 IEEE Conference on Computer Vision and Pattern Recognition. That work obtains a saliency map from global color contrast and spatial region contrast and then segments the image by iterating the GrabCut graph-cut method according to the saliency map, extracting the salient object from the image. The concrete steps of the method are as follows:
(1) The saliency value of a pixel is defined by its color contrast with the other pixels in the image, and pixels with the same color are assigned the same saliency value.
(2) Colors with a low frequency of occurrence, measured by histogram statistics, are discarded, and the saliency value of each color is replaced by the weighted average of the saliency values of similar colors.
(3) The image is segmented into regions with a graph-based segmentation method, and the spatial region contrast is computed from the Euclidean distances between region centroids, yielding the saliency map.
(4) The saliency map is binarized with a fixed threshold, and the image is segmented by the GrabCut graph-cut method.
(5) The segmentation result is dilated and eroded to obtain a new map to be segmented, and the image is segmented again by the GrabCut graph-cut method.
(6) Step (5) is repeated until convergence, giving the final result map, i.e. the extracted salient object. A sketch of this iterative loop is given below.
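Steps (4)-(6) amount to seeding GrabCut with a thresholded saliency map and re-seeding it from a morphologically adjusted copy of each segmentation until the result stops changing. The following Python sketch illustrates the loop with OpenCV; the fixed threshold, the kernel size and the round limit are illustrative choices, not values taken from Cheng et al.

```python
import numpy as np
import cv2

def iterative_grabcut(img_bgr, saliency, thresh=0.5, max_rounds=4):
    # Step (4): seed GrabCut from a fixed-threshold binarization of the saliency map.
    mask = np.where(saliency >= thresh, cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    prev = None
    for _ in range(max_rounds):
        cv2.grabCut(img_bgr, mask, None, bgd, fgd, 1, cv2.GC_INIT_WITH_MASK)
        seg = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
        if prev is not None and np.array_equal(seg, prev):
            break  # step (6): stop once the segmentation no longer changes
        prev = seg
        # Step (5): dilate and erode the result to build the next map to segment.
        sure_fg = cv2.erode(seg, kernel)    # confidently inside the object
        maybe_fg = cv2.dilate(seg, kernel)  # possibly part of the object
        mask = np.full(seg.shape, cv2.GC_PR_BGD, np.uint8)
        mask[maybe_fg == 1] = cv2.GC_PR_FGD
        mask[sure_fg == 1] = cv2.GC_FGD
    return prev  # binary mask of the extracted salient object
```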
Liu et al. published "Saliency tree: a novel saliency detection framework" in the IEEE Transactions on Image Processing in 2014. In that work the nodes of a tree structure represent the small regions of the image; the original small regions are merged by measuring global contrast, spatial sparsity and an object prior, generating a saliency map, and finally the saliency map is binarized with the maximum between-class variance (Otsu) value to extract the salient object from the image. The method improves the accuracy of the saliency map, but its Otsu thresholding still cannot completely extract multiple salient objects from an image. Alexe et al. published "Measuring the objectness of image windows" in the IEEE Transactions on Pattern Analysis and Machine Intelligence in 2012. That work proposes the concept of detecting objects through image windows, i.e. rectangular windows, together with its computation: the probability that each of a large number of rectangular windows contains an object is calculated, and a Bayesian formula combining multiple cues gives the location probability of the salient object region, yielding the object probability map. The concrete steps of the method are as follows:
(1) Obtain multi-scale saliency cues with the spectral residual method and generate a large number of rectangular windows.
(2) Compute the color contrast cue between each rectangular window and its surround from the chi-squared distance between their color-space histograms (see the sketch after this list).
(3) Detect boundaries with the Canny operator to obtain the edge density cue.
(4) Segment the image into regions with a graph-based segmentation method and obtain the region-straddling cue from the minimal area difference between the regions inside and outside each rectangular window.
(5) Estimate the location and size cues of the rectangular windows with Gaussian distributions.
(6) Integrate the cues obtained in steps (1)-(5) with a Bayesian formula to compute the location probability of the salient object region, giving the object probability map.
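For step (2), the chi-squared histogram distance is a standard measure; a minimal sketch, assuming the two histograms are nonnegative arrays of equal length, is:

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-12):
    # 0.5 * sum_i (h1_i - h2_i)^2 / (h1_i + h2_i); eps guards empty bins.
    h1 = h1 / (h1.sum() + eps)  # normalize both histograms to sum to 1
    h2 = h2 / (h2.sum() + eps)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```

The window sampling and the surrounding-ring histograms that this distance compares are omitted here.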
The deficiency of this last method is that it indicates only the location probability of the salient object with rectangular windows and does not capture the precise contour of the salient object, so it cannot extract the salient object from the image exactly.
In summary, the existing methods for extracting the salient object in an image cannot do so accurately and completely, which limits the wide application of salient object extraction.
Summary of the invention
The object of the present invention is to address the defects of the prior art by proposing a method for extracting a salient object from an image based on region contrast, which extracts the salient object from the image more accurately and more completely.
To achieve the above object, the technical solution adopted by the present invention is as follows:
A method for extracting a salient object from an image based on region contrast, whose concrete steps are as follows:
(1) Input the original image; its saliency map is denoted S and its object probability map is denoted O.
(2) Compute the fusion coefficient of the saliency map and the object probability map.
(3) According to the fusion coefficient, compute the region-contrast fused map and extract the salient object from the image.
Computing the fusion coefficient of the saliency map and the object probability map in step (2) comprises the following concrete steps:
(2-1) Construct the first threshold map B1 of the saliency map, the second threshold map B2 of the saliency map, the third threshold map B3 of the saliency map, and the threshold map BO of the object probability map, as follows:
Normalize the saliency values of the pixels in the saliency map to [0, 1]. Set three different thresholds: the first threshold, denoted T1, with T1 = 0.75; the second threshold, denoted T2, the maximum between-class variance (Otsu) value of the saliency map; and the third threshold, denoted T3, the average saliency value of the saliency map. For each threshold, the pixels of the saliency map whose saliency value is greater than or equal to that threshold are assigned the value 1 and the remaining pixels the value 0, giving the threshold map of the saliency map corresponding to that threshold.
The threshold map corresponding to the first threshold T1 is the first threshold map of the saliency map, denoted B1; the threshold map corresponding to the second threshold T2 is the second threshold map of the saliency map, denoted B2; the threshold map corresponding to the third threshold T3 is the third threshold map of the saliency map, denoted B3. In the object probability map O, the pixels whose object probability value is greater than or equal to the first threshold T1 are assigned the value 1 and the remaining pixels the value 0, giving the threshold map of the object probability map, denoted BO.
(2-2) Compute the sum of the overlap ratio of the pixels between the first threshold map and the second threshold map of the saliency map and the overlap ratio of the pixels between the first threshold map and the third threshold map of the saliency map, denoted R; its calculation formula is:
R = [Σp b1(p)·b2(p)] / [Σp b2(p)] + [Σp b1(p)·b3(p)] / [Σp b3(p)]    (1)
where p denotes a pixel, b1(p), b2(p) and b3(p) are the values of pixel p in the first, second and third threshold maps B1, B2 and B3 of the saliency map respectively, and each sum runs over all pixels of the map;
(2-3) Compute the overlapping area of the first threshold map of the saliency map and the threshold map of the object probability map, denoted A; its calculation formula is:
A = |B1 ∩ BO| / |B1 ∪ BO|    (2)
where ∩ denotes intersection, ∪ denotes union, |·| denotes the number of pixels with value 1, and A is the overlapping area of the first threshold map of the saliency map and the threshold map of the object probability map;
(2-4) Let the diagonal length of the original image be denoted D. Compute the normalized centroid distance between the centroid of the first threshold map of the saliency map and the centroid of the threshold map of the object probability map, denoted d; its calculation formula is:
d = ||c1 − cO|| / D    (3)
where c1 is the centroid of the first threshold map B1 of the saliency map, cO is the centroid of the threshold map BO of the object probability map, and ||c1 − cO|| is the Euclidean distance between them;
(2-5) From the overlap ratio sum R of the threshold maps, the overlapping area A of the first threshold map of the saliency map and the threshold map of the object probability map, and the normalized centroid distance d between the centroids of those two maps, compute the fusion coefficient of the saliency map and the object probability map, denoted λ; its calculation formula is:
(4)
In step (3), computing the region-contrast fused map according to the fusion coefficient and extracting the salient object from the image comprises the following concrete steps:
(3-1) According to the fusion coefficient λ of the saliency map and the object probability map, fuse the saliency map and the object probability map and compute the region-contrast fused map, denoted F; its calculation formula is:
(5)
where S is the saliency map, one term is clamped against 1, exp denotes the exponential function, and the fused value of each pixel is defined piecewise according to whether its object probability value lies above an upper bound, below a lower bound, or between the two bounds; F denotes the region-contrast fused map;
(3-2) Set the fourth threshold to 0.2. The pixels of the region-contrast fused map F whose value is greater than or equal to the fourth threshold are assigned the value 1 and the remaining pixels the value 0, giving the threshold map corresponding to the region-contrast fused map; the image is then segmented by the GrabCut graph-cut method to extract the salient object from the image.
Compared with the prior art, the method of the present invention for extracting a salient object from an image based on region contrast has the following advantage: it computes the region-contrast fused map from both the saliency map and the object probability map and extracts the salient object from that fused map, so relative to methods that extract the salient object using the saliency map or the object probability map alone, it extracts the salient object from the image more accurately and more completely.
Description of the drawings
Fig. 1 is the flow chart of the method of the present invention for extracting a salient object from an image based on region contrast;
Fig. 2(a) is the input original image;
Fig. 2(b) is the saliency map of the original image;
Fig. 2(c) is the object probability map of the original image;
Fig. 3(a) is the region-contrast fused map;
Fig. 3(b) is the salient object extracted after GrabCut segmentation of the threshold map of the region-contrast fused map.
Embodiment
Embodiments of the invention are described in further detail below in conjunction with the accompanying drawings.
The simulation experiments of the present invention were implemented in software on a PC test platform with a 3.5 GHz CPU and 16 GB of memory.
As shown in Fig. 1, the concrete steps of the method of the present invention for extracting a salient object from an image based on region contrast are as follows:
(1) Input the original image and obtain its saliency map and object probability map; the concrete steps are as follows:
Input the original image, shown in Fig. 2(a). Its saliency map is denoted S, shown in Fig. 2(b), and its object probability map is denoted O, shown in Fig. 2(c).
(2) Compute the fusion coefficient of the saliency map and the object probability map; the concrete steps are as follows:
(2-1) Construct the first threshold map B1 of the saliency map, the second threshold map B2 of the saliency map, the third threshold map B3 of the saliency map, and the threshold map BO of the object probability map, as follows:
Normalize the saliency values of the pixels in the saliency map to [0, 1]. Set three different thresholds: the first threshold, denoted T1, with T1 = 0.75; the second threshold, denoted T2, the maximum between-class variance (Otsu) value of the saliency map; and the third threshold, denoted T3, the average saliency value of the saliency map. For each threshold, the pixels of the saliency map whose saliency value is greater than or equal to that threshold are assigned the value 1 and the remaining pixels the value 0, giving the threshold map of the saliency map corresponding to that threshold.
The threshold map corresponding to the first threshold T1 is the first threshold map of the saliency map, denoted B1; the threshold map corresponding to the second threshold T2 is the second threshold map of the saliency map, denoted B2; the threshold map corresponding to the third threshold T3 is the third threshold map of the saliency map, denoted B3.
In the object probability map O, the pixels whose object probability value is greater than or equal to the first threshold T1 are assigned the value 1 and the remaining pixels the value 0, giving the threshold map of the object probability map, denoted BO.
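A minimal Python sketch of step (2-1), assuming S and O are single-channel float arrays on [0, 1]; the function and variable names follow the notation introduced above and are not from the patent itself:

```python
import numpy as np
import cv2

def threshold_maps(S, O, T1=0.75):
    # Normalize the saliency values to [0, 1].
    S = (S - S.min()) / (S.max() - S.min() + 1e-12)
    # Second threshold T2: Otsu's maximum between-class variance, on an 8-bit copy.
    t2, _ = cv2.threshold((S * 255).astype(np.uint8), 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    T2 = t2 / 255.0
    T3 = S.mean()  # third threshold: the average saliency value
    B1 = (S >= T1).astype(np.uint8)  # first threshold map of the saliency map
    B2 = (S >= T2).astype(np.uint8)  # second threshold map
    B3 = (S >= T3).astype(np.uint8)  # third threshold map
    BO = (O >= T1).astype(np.uint8)  # threshold map of the object probability map
    return B1, B2, B3, BO
```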
(2-2) Compute the sum of the overlap ratio of the pixels between the first threshold map and the second threshold map of the saliency map and the overlap ratio of the pixels between the first threshold map and the third threshold map of the saliency map, denoted R; its calculation formula is:
R = [Σp b1(p)·b2(p)] / [Σp b2(p)] + [Σp b1(p)·b3(p)] / [Σp b3(p)]    (1)
where p denotes a pixel, b1(p), b2(p) and b3(p) are the values of pixel p in the first, second and third threshold maps B1, B2 and B3 of the saliency map respectively, and each sum runs over all pixels of the map;
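Formula (1) did not survive the translation intact, so the following sketch implements one plausible reading: the fraction of B2, and of B3, covered by the strictest map B1. The exact normalization in the original may differ.

```python
import numpy as np

def overlap_ratio_sum(B1, B2, B3, eps=1e-12):
    # One plausible reading of formula (1): how much of B2 and of B3 the
    # strictest threshold map B1 covers; large when the three maps agree.
    r12 = np.logical_and(B1, B2).sum() / (B2.sum() + eps)
    r13 = np.logical_and(B1, B3).sum() / (B3.sum() + eps)
    return r12 + r13
```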
(2-3) Compute the overlapping area of the first threshold map of the saliency map and the threshold map of the object probability map, denoted A; its calculation formula is:
A = |B1 ∩ BO| / |B1 ∪ BO|    (2)
where ∩ denotes intersection, ∪ denotes union, |·| denotes the number of pixels with value 1, and A is the overlapping area of the first threshold map of the saliency map and the threshold map of the object probability map;
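Formula (2) is an intersection-over-union of the two binary maps; a direct sketch:

```python
import numpy as np

def overlap_area(B1, BO):
    # |B1 ∩ BO| / |B1 ∪ BO|: the Jaccard overlap of the two threshold maps.
    inter = np.logical_and(B1, BO).sum()
    union = np.logical_or(B1, BO).sum()
    return inter / union if union else 0.0
```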
(2-4) Let the diagonal length of the original image be denoted D. Compute the normalized centroid distance between the centroid of the first threshold map of the saliency map and the centroid of the threshold map of the object probability map, denoted d; its calculation formula is:
d = ||c1 − cO|| / D    (3)
where c1 is the centroid of the first threshold map B1 of the saliency map, cO is the centroid of the threshold map BO of the object probability map, and ||c1 − cO|| is the Euclidean distance between them;
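Formula (3) can be sketched as follows, assuming both threshold maps contain at least one nonzero pixel:

```python
import numpy as np

def normalized_centroid_distance(B1, BO):
    # Euclidean distance between the two centroids, divided by the image
    # diagonal D so the result lies in [0, 1].
    D = np.hypot(*B1.shape)
    c1 = np.argwhere(B1).mean(axis=0)  # centroid of B1 as (row, col)
    cO = np.argwhere(BO).mean(axis=0)  # centroid of BO
    return np.linalg.norm(c1 - cO) / D
```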
(2-5) From the overlap ratio sum R of the threshold maps, the overlapping area A of the first threshold map of the saliency map and the threshold map of the object probability map, and the normalized centroid distance d between the centroids of those two maps, compute the fusion coefficient of the saliency map and the object probability map, denoted λ; its calculation formula is:
(4)
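The exact expression of formula (4) is not recoverable from this translation. Any combination that grows with R and A and shrinks with d is consistent with the text; the product below is a stand-in of that shape, not the patented formula.

```python
def fusion_coefficient(R, A, d):
    # Stand-in for formula (4): large when the saliency map's threshold maps
    # agree (R), when B1 and BO overlap well (A), and when their centroids
    # are close (small d).
    return R * A * (1.0 - d)
```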
(3) According to the fusion coefficient, compute the region-contrast fused map and extract the salient object from the image; the concrete steps are as follows:
(3-1) According to the fusion coefficient λ of the saliency map and the object probability map, fuse the saliency map and the object probability map and compute the region-contrast fused map, denoted F; its calculation formula is:
(5)
where S is the saliency map, one term is clamped against 1, exp denotes the exponential function, and the fused value of each pixel is defined piecewise according to whether its object probability value lies above an upper bound, below a lower bound, or between the two bounds; F denotes the region-contrast fused map, shown in Fig. 3(a). Relative to the saliency map in Fig. 2(b), the region-contrast fused map in Fig. 3(a) highlights the whole body of the salient object more completely; relative to the object probability map in Fig. 2(c), it delineates the contour of the whole salient object more accurately;
(3-2) Set the fourth threshold to 0.2. The pixels of the region-contrast fused map F whose value is greater than or equal to the fourth threshold are assigned the value 1 and the remaining pixels the value 0, giving the threshold map corresponding to the region-contrast fused map; the image is then segmented by the GrabCut graph-cut method to extract the salient object from the image, as shown in Fig. 3(b).
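Putting step (3) together: formula (5) is likewise lost in translation, so the sketch below substitutes a simple convex combination weighted by the fusion coefficient before the fourth-threshold binarization and the GrabCut refinement; the piecewise, exponential form of the original would replace the marked line.

```python
import numpy as np
import cv2

def extract_salient_object(img_bgr, S, O, lam, T4=0.2):
    lam = min(max(lam, 0.0), 1.0)
    # Stand-in for formula (5): weight S against O by the fusion coefficient.
    F = lam * S + (1.0 - lam) * O
    # Step (3-2): binarize the fused map at the fourth threshold and refine
    # the resulting threshold map with GrabCut.
    mask = np.where(F >= T4, cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(img_bgr, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    fg = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
    return img_bgr * fg[..., None]  # salient object on a black background
```

Chaining the sketches: compute B1, B2, B3, BO with threshold_maps(S, O), feed overlap_ratio_sum, overlap_area and normalized_centroid_distance into fusion_coefficient, and pass the result to extract_salient_object.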
As the above simulation results show, the method of the present invention uses the saliency map and the object probability map to obtain the region-contrast fused map, and extracting the salient object from the region-contrast fused map is more accurate and more complete.

Claims (3)

1. A method for extracting a salient object from an image based on region contrast, characterized in that its concrete steps are as follows:
(1) input the original image; its saliency map is denoted S and its object probability map is denoted O;
(2) compute the fusion coefficient of the saliency map and the object probability map;
(3) according to the fusion coefficient, compute the region-contrast fused map and extract the salient object from the image.
2. The method for extracting a salient object from an image based on region contrast according to claim 1, characterized in that computing the fusion coefficient of the saliency map and the object probability map in step (2) comprises the following concrete steps:
(2-1) construct the first threshold map B1 of the saliency map, the second threshold map B2 of the saliency map, the third threshold map B3 of the saliency map, and the threshold map BO of the object probability map, as follows:
normalize the saliency values of the pixels in the saliency map to [0, 1]; set three different thresholds: the first threshold, denoted T1, with T1 = 0.75; the second threshold, denoted T2, the maximum between-class variance (Otsu) value of the saliency map; and the third threshold, denoted T3, the average saliency value of the saliency map; for each threshold, the pixels of the saliency map whose saliency value is greater than or equal to that threshold are assigned the value 1 and the remaining pixels the value 0, giving the threshold map of the saliency map corresponding to that threshold;
the threshold map corresponding to the first threshold T1 is the first threshold map of the saliency map, denoted B1; the threshold map corresponding to the second threshold T2 is the second threshold map of the saliency map, denoted B2; the threshold map corresponding to the third threshold T3 is the third threshold map of the saliency map, denoted B3; in the object probability map O, the pixels whose object probability value is greater than or equal to the first threshold T1 are assigned the value 1 and the remaining pixels the value 0, giving the threshold map of the object probability map, denoted BO;
(2-2) compute the sum of the overlap ratio of the pixels between the first threshold map and the second threshold map of the saliency map and the overlap ratio of the pixels between the first threshold map and the third threshold map of the saliency map, denoted R; its calculation formula is:
R = [Σp b1(p)·b2(p)] / [Σp b2(p)] + [Σp b1(p)·b3(p)] / [Σp b3(p)]    (1)
where p denotes a pixel, b1(p), b2(p) and b3(p) are the values of pixel p in the first, second and third threshold maps B1, B2 and B3 of the saliency map respectively, and each sum runs over all pixels of the map;
(2-3) compute the overlapping area of the first threshold map of the saliency map and the threshold map of the object probability map, denoted A; its calculation formula is:
A = |B1 ∩ BO| / |B1 ∪ BO|    (2)
where ∩ denotes intersection, ∪ denotes union, |·| denotes the number of pixels with value 1, and A is the overlapping area of the first threshold map of the saliency map and the threshold map of the object probability map;
(2-4) let the diagonal length of the original image be denoted D; compute the normalized centroid distance between the centroid of the first threshold map of the saliency map and the centroid of the threshold map of the object probability map, denoted d; its calculation formula is:
d = ||c1 − cO|| / D    (3)
where c1 is the centroid of the first threshold map B1 of the saliency map, cO is the centroid of the threshold map BO of the object probability map, and ||c1 − cO|| is the Euclidean distance between them;
(2-5) from the overlap ratio sum R of the threshold maps, the overlapping area A of the first threshold map of the saliency map and the threshold map of the object probability map, and the normalized centroid distance d between the centroids of those two maps, compute the fusion coefficient of the saliency map and the object probability map, denoted λ; its calculation formula is:
(4).
3. The method for extracting a salient object from an image based on region contrast according to claim 1, characterized in that, in step (3), computing the region-contrast fused map according to the fusion coefficient and extracting the salient object from the image comprises the following concrete steps:
(3-1) according to the fusion coefficient λ of the saliency map and the object probability map, fuse the saliency map and the object probability map and compute the region-contrast fused map, denoted F; its calculation formula is: (5)
where S is the saliency map, one term is clamped against 1, exp denotes the exponential function, and the fused value of each pixel is defined piecewise according to whether its object probability value lies above an upper bound, below a lower bound, or between the two bounds; F denotes the region-contrast fused map;
(3-2) set the fourth threshold to 0.2; the pixels of the region-contrast fused map F whose value is greater than or equal to the fourth threshold are assigned the value 1 and the remaining pixels the value 0, giving the threshold map corresponding to the region-contrast fused map; the image is then segmented by the GrabCut graph-cut method to extract the salient object from the image.
CN201410781285.7A 2014-12-17 2014-12-17 Method for extracting a salient object from an image based on region contrast Expired - Fee Related CN104504692B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410781285.7A CN104504692B (en) 2014-12-17 2014-12-17 Method for extracting a salient object from an image based on region contrast

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410781285.7A CN104504692B (en) 2014-12-17 2014-12-17 Method for extracting a salient object from an image based on region contrast

Publications (2)

Publication Number Publication Date
CN104504692A true CN104504692A (en) 2015-04-08
CN104504692B CN104504692B (en) 2017-06-23

Family

ID=52946086

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410781285.7A Expired - Fee Related CN104504692B (en) 2014-12-17 2014-12-17 Method for extracting a salient object from an image based on region contrast

Country Status (1)

Country Link
CN (1) CN104504692B (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103208125A (en) * 2013-03-14 2013-07-17 上海大学 Visual salience algorithm of color and motion overall contrast in video frame image

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
BOGDAN ALEXE et al.: "Measuring the objectness of image windows", IEEE Transactions on Pattern Analysis and Machine Intelligence *
KAI-YUEH CHANG et al.: "Fusing generic objectness and visual saliency for salient object detection", 2011 IEEE International Conference on Computer Vision *
PENG JIANG et al.: "Salient region detection by UFO: Uniqueness, Focusness and Objectness", ICCV 2013 *
TIE LIU et al.: "Learning to detect a salient object", IEEE Transactions on Pattern Analysis and Machine Intelligence *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106407978A (en) * 2016-09-24 2017-02-15 上海大学 Unconstrained in-video salient object detection method combined with objectness degree
CN106407978B * 2016-09-24 2020-10-30 上海大学 Method for detecting salient objects in unconstrained video combined with objectness degree
CN106886995A * 2017-01-13 2017-06-23 北京航空航天大学 Method for salient object segmentation of image by aggregating multi-linear exemplar regressors
US10387748B2 (en) 2017-01-13 2019-08-20 Beihang University Method for salient object segmentation of image by aggregating multi-linear exemplar regressors
CN106886995B * 2017-01-13 2019-09-20 北京航空航天大学 Method for salient object segmentation of image by aggregating multi-linear exemplar regressors
CN107730564A * 2017-09-26 2018-02-23 上海大学 Saliency-based image editing method
CN108428240A * 2018-03-08 2018-08-21 南京大学 Salient object segmentation method adaptive to input information
CN108428240B (en) * 2018-03-08 2022-03-25 南京大学 Salient object segmentation method adaptive to input information

Also Published As

Publication number Publication date
CN104504692B (en) 2017-06-23

Similar Documents

Publication Publication Date Title
Yang et al. Learning object bounding boxes for 3d instance segmentation on point clouds
Bazazian et al. Fast and robust edge extraction in unorganized point clouds
CN105160317B Pedestrian gender recognition method based on region division
CN102708370B Method and device for extracting foreground objects from multi-view images
CN105809651B Image saliency detection method based on edge dissimilarity comparison
CN106709568A RGB-D image object detection and semantic segmentation method based on a deep convolutional network
CN104966286A 3D video saliency detection method
CN103886589A Goal-oriented automatic high-precision edge extraction method
CN105160686B Low-altitude multi-view remote sensing image matching method based on an improved SIFT operator
CN107369158A Indoor scene layout estimation and target region extraction method based on RGB-D images
CN103093470A Fast multi-modal image cooperative segmentation method with scale-independent features
EP3073443A1 3D saliency map
CN103761515A Human face feature extraction method and device based on LBP
CN102799646B Semantic object segmentation method for multi-view video
CN104504692A Method for extracting a salient object from an image based on region contrast
Fu et al. Learning confidence measures by multi-modal convolutional neural networks
CN105118051A Saliency detection method applied to human segmentation in static images
CN103871089B Image superpixel meshing method based on fusion
CN108364300A Vegetable leaf disease image segmentation method, system and computer-readable storage medium
CN114782714A Image matching method and device based on context information fusion
CN106991676A Superpixel fusion method based on local correlation
CN104050674A Salient region detection method and device
CN108399630B Method for quickly measuring the distance of a target in a region of interest in a complex scene
CN105809683A Collaborative segmentation method for shopping images
Dimiccoli et al. Exploiting t-junctions for depth segregation in single images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170623

Termination date: 20211217