CN108416347A - Salient object detection algorithm based on boundary prior and iterative optimization - Google Patents

Salient object detection algorithm based on boundary prior and iterative optimization

Info

Publication number
CN108416347A
CN108416347A
Authority
CN
China
Prior art keywords
boundary
model
superpixel
background
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810008543.6A
Other languages
Chinese (zh)
Inventor
周圆
李绰
霍树伟
张业达
毛爱玲
杨晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201810008543.6A priority Critical patent/CN108416347A/en
Publication of CN108416347A publication Critical patent/CN108416347A/en
Pending legal-status Critical Current

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/255: Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a salient object detection algorithm based on boundary prior and iterative optimization. Step 1: extract characteristic image information and express it in the form of a feature matrix. Step 2: establish a regional background likelihood estimation model based on the boundary prior, through which the position and contour of the salient object can be accurately detected. Step 3: generate a saliency map enhancement model based on iterative optimization, i.e., iteratively perform the two processes of foreground/background seed selection and global optimization of saliency values. Compared with the prior art, the salient object detection algorithm based on boundary prior and iterative optimization fuses multiple saliency features and cues, and can significantly improve the quality of a saliency map of arbitrary initial accuracy.

Description

Salient object detection algorithm based on boundary prior and iterative optimization
Technical Field
The invention relates to the field of artificial intelligence and computer vision, in particular to an image saliency target detection algorithm.
Background
Salient object detection is one of the important subjects in the field of computer vision. Its main task is to simulate the human visual attention mechanism and quickly segment from an image the object or region that most easily attracts attention. At present, salient object detection, as an important image-information preprocessing technology, has been applied in a variety of fields including image retrieval, object tracking, and object recognition. Visual saliency analysis can effectively guide the suppression of image redundancy, which is of great significance for image processing in the big-data era. However, because images contain a wide variety of objects and complex, diverse scenes, designing a saliency analysis algorithm applicable to various scenes remains a very challenging subject.
Salient object detection can quickly and accurately extract the salient object region. One ultimate goal of saliency detection is to reduce the amount of data processed subsequently, so as to meet the current challenge of massive image data; if the computational time complexity of the saliency detection algorithm itself is high, it will increase the burden of subsequent processing. In addition, even for images that are simple in many cases, existing salient object detection algorithms cannot adequately highlight the target and suppress the background, so the accuracy requirement cannot be met.
Disclosure of Invention
The invention aims to provide a salient object detection algorithm based on boundary prior and iterative optimization in combination with scene depth information. Salient object detection is realized in two stages: establishing a regional background likelihood estimation model based on the boundary prior, and establishing a saliency map enhancement model based on iterative optimization.
The invention relates to a salient object detection algorithm based on boundary prior and iterative optimization, which comprises the following steps:
step one, extracting characteristic image information, and representing the image information in a characteristic matrix form; the method specifically comprises the following steps:
first, image segmentation and region simplification are performed: performing region segmentation on an input image by adopting a simple linear iterative clustering algorithm, wherein each region is called a super pixel, and obtaining a characteristic matrix formed by distributing the serial number of the super pixel to each pixel;
secondly, according to the feature matrix, establishing an undirected graph model G = (V, E), wherein V represents the node set of the graph model and E represents the undirected edge set.
Step two, establishing a boundary-prior-based regional background likelihood estimation model, through which the position and contour of the salient object are accurately detected, wherein this step specifically comprises:
firstly, establishing the boundary-prior-based regional background likelihood estimation model; the specific processing comprises:
finding the homogeneous region H(r_i) of the superpixel r_i under study, where H(r_i) denotes the set of superpixels homogeneous with r_i;
extracting a boundary super-pixel set B of the image;
computing the proportion of the overlap between the homogeneous region of r_i and the boundary region relative to the boundary region; that is, the background likelihood of superpixel r_i is defined as:

b_i = |H(r_i) ∩ B| / |B|

In the above formula, b_i represents the background likelihood of superpixel r_i, and |·| represents the total number of pixels in a superpixel or superpixel set;
secondly, estimating the homogeneity probability p_ij; the estimation formula is as follows:

p_ij = M_Cs(r_i, r_j) × M_Con(r_i, r_j) × M_Sp(r_i)

wherein i is the index of the superpixel to be estimated, j is the index of a boundary superpixel, M_Cs(r_i, r_j) is the color similarity of the superpixel pair (r_i, r_j), M_Con(r_i, r_j) is the negative exponential of the geodesic distance, defining the smoothness of the connection between the superpixel pair, and M_Sp(r_i) is a brand-new central-prior enhancement model. For convenience of later calculation, the p_ij are assembled into a matrix P of size N × N_B, where N_B is the total number of image-boundary superpixels;
furthermore, background map estimation and initial saliency map generation are realized; that is, the background map is generated according to the above regional background likelihood estimation model and converted into an initial saliency map vector, as shown in the following formula:

bg = P δ

wherein δ is the vector whose elements δ_j are the normalized areas of all boundary superpixels, δ_j representing the normalized area of the j-th boundary superpixel.
The vector bg presents the image as a background likelihood probability map with the same resolution as the original image, wherein parts with higher gray values represent the background region and parts with lower gray values represent the salient object region; the background map is inverted into an initial saliency map using the concept of Shannon self-information; the self-information calculation formula is as follows:

s_i^(0) = -log(b_i)

i.e., a superpixel with a lower background likelihood will generally also contain more saliency information. s_i^(0) represents the initial saliency of each superpixel i; i is the index of the superpixel to be estimated, j is the index of a boundary superpixel, and N is the number of superpixels.
Step three, generating a saliency map enhancement model based on iterative optimization, i.e., iteratively executing the two processes of foreground/background seed selection and global optimization of saliency values; this specifically comprises: in each iteration, a seed selection method based on Bayesian theory is first used to extract a few easily identified salient/background regions to form the foreground and background seed sets, which are given corresponding class labels to guide the subsequent optimization process; then a least-squares optimization model is used to fuse the three cues of class labels, prior estimation, and smoothness prior, so that the output result has higher accuracy and integrity than the input of the previous iteration. The model consists of an objective function and several constraint conditions; its expression is as follows:

In the t-th iteration, the saliency value of superpixel r_i is expressed as s_i^(t); the superscript (·)^(t) indicates that the variable belongs to the t-th iteration. The objective function is a weighted sum of three least-squares terms, namely a prior term, a classification term, and a smoothing term, whose adaptive weights in the t-th iteration (δ_i among them) balance the three terms. Among the constraints, the label value guiding the classification is 1 for a foreground seed and 0 for a background seed, while the remaining superpixels may take any value.
Compared with the prior art, the salient object detection algorithm based on boundary prior and iterative optimization fuses multiple saliency features and cues, and can greatly improve the quality of a saliency map of arbitrary initial accuracy; the optimization model of the invention also has strong universality and error-correction capability.
Drawings
Fig. 1 is an example of a background map and an initial saliency map. (a) Original image, (b) background map, (c) initial saliency map, (d) the processing result of the previously published MAP algorithm, (e) ground-truth map;
FIG. 2 is a schematic diagram of relationships among variables in a saliency map enhancement model based on iterative optimization;
FIG. 3 is a schematic diagram of the optimization process of the saliency map enhancement model based on iterative optimization; (a) initial saliency map, (b) to (d) saliency maps and corresponding mean absolute error (MAE) maps after 1, 3 and 5 rounds of iterative optimization, and (e) ground-truth map.
FIG. 4 is a schematic diagram of the saliency map optimization process of the iterative-optimization-based saliency map enhancement model; (a) original images and ground-truth maps; (b) randomly generated saliency maps (first and third rows) and the results after optimization with the model of this patent (second and fourth rows); (c) saliency maps estimated with a Gaussian central prior model (first and third rows) and the optimization results (second and fourth rows); (d) saliency maps generated with the CA algorithm (first and third rows) and the optimization results (second and fourth rows); (e) saliency maps generated with the FT algorithm (first and third rows) and the optimization results (second and fourth rows); (f) saliency maps generated with the SVO algorithm (first and third rows) and the optimization results (second and fourth rows);
FIG. 5 is a schematic overall flow chart of the salient object detection algorithm based on boundary prior and iterative optimization.
Detailed Description
Embodiments of the present invention will be described in further detail below with reference to the accompanying drawings.
Fig. 5 is the overall flow chart of the salient object detection algorithm based on boundary prior and iterative optimization according to the present invention. The process specifically comprises the following steps:
step 1, in order to accurately identify the position and the outline of a salient target in an image, extracting some characteristic image information which is helpful for the significance analysis, and expressing the image information in the form of a characteristic matrix. The method specifically comprises the following steps:
first, image segmentation and region simplification are performed: a Simple Linear Iterative Clustering (SLIC) algorithm is adopted to perform region segmentation on an input image, and each region is called a super-pixel. The algorithm flow is as follows:
Input: number of superpixels K, shape regularity coefficient m
1-1, initializing each clustering center, and sampling pixels by step length S;
1-2, within a small local range (a 3 × 3 pixel block), adjusting each cluster center to the position of lowest gradient, where the gradient at a pixel is measured by the difference between that pixel and its 8 neighboring pixels;
1-3, for each pixel i within the 2S × 2S range of each cluster center, extracting the LAB-space color feature [l_i, a_i, b_i] and the position feature (x_i, y_i), and calculating the distance of the pixel pair (i, j) in the feature space:

d_ij = sqrt( d_lab² + (m/S)² · d_xy² )

where d_lab is the LAB color distance, d_xy the spatial distance, S the sampling step, and m the shape regularity coefficient;
1-4, performing K-means clustering of the pixels according to the distance d_ij to obtain new cluster centers;
1-5, calculating the L_1-norm distance E between the new and old cluster centers;
1-6, stopping iteration if E is less than a set threshold, otherwise, repeating the steps 1-3 and 1-4;
Output: the feature matrix, i.e. the matrix formed by assigning to each pixel the serial number of its superpixel.
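Steps 1-1 to 1-6 can be sketched in Python/NumPy as follows. This is an illustrative, simplified stand-in for SLIC, not the patent's own implementation: RGB is used in place of LAB, the search is not restricted to 2S × 2S windows, and a fixed iteration count replaces the convergence test on E.

```python
import numpy as np

def slic_like(image, K=16, m=10.0, n_iter=5):
    """Simplified SLIC-style clustering: pixels are clustered on joint
    colour+position features; the returned label matrix is the feature
    matrix assigning every pixel the serial number of its superpixel."""
    H, W, _ = image.shape
    S = int(np.sqrt(H * W / K))                    # sampling step (step 1-1)
    feats = image.reshape(-1, 3).astype(float)     # colour features
    ys, xs = np.mgrid[0:H, 0:W]
    pos = np.stack([ys.ravel(), xs.ravel()], 1).astype(float)
    # step 1-1: initialise cluster centres on a regular grid with step S
    centres = np.array([[y, x] for y in range(S // 2, H, S)
                               for x in range(S // 2, W, S)], dtype=float)
    c_col = np.array([image[int(y), int(x)] for y, x in centres], dtype=float)
    for _ in range(n_iter):
        # step 1-3: distance d_ij = sqrt(d_col^2 + (m/S)^2 * d_xy^2)
        d_col = ((feats[:, None, :] - c_col[None, :, :]) ** 2).sum(-1)
        d_xy = ((pos[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        labels = np.argmin(d_col + (m / S) ** 2 * d_xy, axis=1)
        for k in range(len(centres)):              # step 1-4: k-means update
            mask = labels == k
            if mask.any():
                centres[k] = pos[mask].mean(0)
                c_col[k] = feats[mask].mean(0)
    return labels.reshape(H, W)                    # output: feature matrix
```

In practice a library implementation such as scikit-image's `slic` (with `n_segments` and `compactness` playing the roles of K and m) would be used instead of this sketch.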
Secondly, establishing the graph model: the constructed feature matrix describes the basic features of each image region independently. To further describe the interrelation among the superpixels, the invention also establishes an undirected graph model G = (V, E), where V represents the node set of the graph model and E represents the undirected edge set. Each superpixel is regarded as a node of the undirected graph (again denoted r_i for convenience), and node pairs (r_i, r_j) satisfying any of the following conditions are connected:
(1) r_i and r_j are adjacent;
(2) r_i and r_j are not adjacent, but both are adjacent to a node r_k;
(3) r_i and r_j are both at the image boundary (contain image-boundary pixels).
As can be seen from the definition of the node connections, the undirected graph adopted by the invention is sparse. The matrix W = [w_ij]_{N×N} represents the similarity relation of an arbitrary superpixel pair (r_i, r_j); the vast majority of elements of W are therefore 0. In this patent, the similarity matrix is defined as follows:

w_ij = Neig(r_i, r_j) · exp( −d_c(r_i, r_j) / λ )

wherein d_c(r_i, r_j) represents the difference in average color between the i-th and j-th superpixels; Neig(r_i, r_j) determines whether the node pair (r_i, r_j) is connected: Neig(r_i, r_j) = 1 when r_i and r_j are connected, and Neig(r_i, r_j) = 0 otherwise; λ is a constant used to balance the magnitude of the node connection weights.
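The three connection rules and the similarity weights above can be sketched as follows. This is an illustration, not the patent's code: the exponential weight form and the parameter `lam` standing in for λ are assumptions.

```python
import numpy as np

def similarity_matrix(labels, image, lam=0.1):
    """Build the similarity matrix W = [w_ij] of the undirected graph
    G = (V, E): nodes are superpixels, edges follow rules (1)-(3)."""
    N = labels.max() + 1
    # mean colour of every superpixel
    mean_col = np.zeros((N, 3))
    for k in range(N):
        mean_col[k] = image[labels == k].mean(axis=0)
    adj = np.zeros((N, N), dtype=bool)
    # rule (1): spatially adjacent superpixels are connected
    h = labels[:, :-1] != labels[:, 1:]
    v = labels[:-1, :] != labels[1:, :]
    for a, b in zip(labels[:, :-1][h], labels[:, 1:][h]):
        adj[a, b] = adj[b, a] = True
    for a, b in zip(labels[:-1, :][v], labels[1:, :][v]):
        adj[a, b] = adj[b, a] = True
    # rule (2): superpixels sharing a common neighbour are connected
    adj = adj | (adj.astype(int) @ adj.astype(int) > 0)
    # rule (3): all boundary superpixels are mutually connected
    border = np.unique(np.concatenate(
        [labels[0], labels[-1], labels[:, 0], labels[:, -1]]))
    adj[np.ix_(border, border)] = True
    np.fill_diagonal(adj, False)
    # w_ij = Neig(i, j) * exp(-||c_i - c_j|| / lambda)  (assumed form)
    dcol = np.linalg.norm(mean_col[:, None] - mean_col[None, :], axis=-1)
    return np.where(adj, np.exp(-dcol / lam), 0.0)
```

Because only neighbouring, two-hop, and boundary pairs are connected, most entries of the returned W are 0, matching the sparsity noted in the text.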
Step 2, establishing the boundary-prior-based regional background likelihood estimation model, through which the position and contour of the salient object are accurately detected:
firstly, establishing a boundary prior-based region background likelihood estimation model, wherein the specific establishment process mainly comprises the following three parts:
2-1, finding the homogeneous region H(r_i) of the superpixel r_i under study, where H(r_i) denotes the set of superpixels homogeneous with r_i;
2-2, extracting a boundary superpixel set B of the image;
2-3, calculating the proportion of the overlap between the homogeneous region of r_i and the boundary region relative to the boundary region; that is, the background likelihood of superpixel r_i is defined as:

b_i = |H(r_i) ∩ B| / |B|    (3)

In the above formula, b_i represents the background likelihood of superpixel r_i, and |·| represents the total number of pixels in a superpixel or superpixel set.
Secondly, estimating the homogeneity probability p_ij: the homogeneity probability p_ij measures the probability that superpixel r_i and boundary superpixel r_j belong to homogeneous regions, and is a key parameter in background detection. Comprehensively considering the three factors of color similarity, connection smoothness, and spatial proximity, the estimation formula of p_ij is as follows:

p_ij = M_Cs(r_i, r_j) × M_Con(r_i, r_j) × M_Sp(r_i)    (4)

wherein i is the index of the superpixel to be estimated, j is the index of a boundary superpixel, M_Cs(r_i, r_j) is the color similarity of the superpixel pair (r_i, r_j), M_Con(r_i, r_j) is the negative exponential of the geodesic distance, defining the smoothness of the connection between the superpixel (node) pair, and M_Sp(r_i) is a brand-new central-prior enhancement model. For convenience of later calculation, the p_ij are assembled into a matrix P of size N × N_B, where N_B is the total number of image-boundary superpixels. Inspired by the boundary prior, the method creates a region-level background likelihood estimation model, thereby indirectly obtaining an accurate prediction of the saliency of all image regions.
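The factors of equation (4) can be sketched as follows. The exact numerical forms of M_Cs, M_Con and M_Sp are not given in the text, so Gaussian/exponential similarities and a simple centre-distance term are assumed here purely for illustration; `sigma_c` and `sigma_s` are hypothetical bandwidth parameters.

```python
import numpy as np

def homogeneity_probability(mean_col, centroids, adj, border_idx,
                            sigma_c=0.2, sigma_s=0.5):
    """Sketch of p_ij = M_Cs(r_i,r_j) * M_Con(r_i,r_j) * M_Sp(r_i),
    returning the N x N_B matrix P (one column per boundary superpixel)."""
    N = len(mean_col)
    dcol = np.linalg.norm(mean_col[:, None] - mean_col[None, :], axis=-1)
    # M_Cs: colour similarity of the superpixel pair (assumed Gaussian form)
    M_cs = np.exp(-dcol ** 2 / (2 * sigma_c ** 2))
    # M_Con: negative exponential of the geodesic distance, with per-edge
    # cost equal to the colour difference between connected neighbours
    cost = np.where(adj, dcol, np.inf)
    np.fill_diagonal(cost, 0.0)
    geo = cost.copy()
    for k in range(N):                     # Floyd-Warshall shortest paths
        geo = np.minimum(geo, geo[:, k:k + 1] + geo[k:k + 1, :])
    M_con = np.exp(-geo)
    # M_Sp: centre-prior term (assumed form; the patent only calls this
    # "a brand-new central-prior enhancement model")
    d_centre = np.linalg.norm(centroids - centroids.mean(0), axis=1)
    M_sp = 1.0 - np.exp(-d_centre ** 2 / (2 * sigma_s ** 2))
    return (M_cs * M_con * M_sp[:, None])[:, border_idx]
```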
Furthermore, background map estimation and initial saliency map generation are realized; that is, the background map is generated according to the regional background likelihood estimation model and converted into the initial saliency map. The homogeneity probability matrix P gives the probability that any superpixel r_i is homogeneous with a certain boundary superpixel. Writing the background likelihoods b_i of all N superpixels (equation (3)) into a vector bg gives the vector form:

bg = P δ    (5)

wherein δ is the vector whose elements δ_j are the normalized areas of all boundary superpixels, δ_j representing the normalized area of the j-th boundary superpixel.
The vector bg may be presented as a background likelihood probability map of equal resolution to the original image, as shown in fig. 1. Parts with higher gray values in the map represent the background region, and parts with lower gray values represent the salient object region. The meaning of the background map is exactly opposite to that of the saliency map, so the background map is inverted into the initial saliency map using the concept of Shannon self-information. Self-information is a good measure of saliency: a superpixel with a lower background likelihood will generally also contain more saliency information. The initial saliency of each superpixel is therefore its self-information,

s_i^(0) = -log(b_i).
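The background-map estimation and self-information inversion described above can be put together as a short sketch; the final rescaling to [0, 1] for display is an assumption, not stated in the text.

```python
import numpy as np

def initial_saliency(P, border_areas, eps=1e-6):
    """Background-map estimation and self-information inversion (sketch):
    bg_i = sum_j p_ij * delta_j, with delta the normalised areas of the
    boundary superpixels; the initial saliency is the Shannon
    self-information -log(bg_i), rescaled to [0, 1]."""
    delta = border_areas / border_areas.sum()   # normalised boundary areas
    bg = P @ delta                              # background likelihood vector
    bg = np.clip(bg, eps, 1.0)                  # guard against log(0)
    s = -np.log(bg)                             # Shannon self-information
    return (s - s.min()) / (s.max() - s.min() + eps)
```

A superpixel whose column of P overlaps the boundary strongly gets a high bg and thus a low initial saliency, which matches the inversion the text describes.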
as can be seen from fig. 1, the method of the present invention is very effective in delineating the location and contours of salient objects, although this patent only uses them as initial estimates, and the effect is comparable to the current advanced technology.
Step 3, generating the saliency map enhancement model based on iterative optimization; the optimization framework iteratively executes the two steps of foreground/background seed selection and saliency global optimization:
in each iteration, firstly, a seed selection method based on Bayesian theory is used to extract a few obvious/background regions which are easy to identify to form a seed setAndand endowing corresponding class labels to guide the subsequent optimization process; then, a least square optimization model is used for fusing three clues of class labels, prior estimation and smooth prior, so that the output result has higher accuracy and integrity than the last iteration input. The model consists of an objective function and a plurality of constraint conditions, and the expression of the model is as follows:
in the t-th iteration, the super pixel riIs expressed asUpper middle label (·)(t)Indicating that the variable is the variable in the t-th iteration. The objective function is a weighted sum of three least squares terms, i.e. a priori term, a classification term and a smoothing term,and deltaiIs the adaptive weight in the t-th iteration for balancing the above three terms. In the constraint condition, the number of the optical fiber,the label value of the guided classification is 1 for the foreground seed, 0 for the background seed, and the values of the rest super pixels can be arbitrarily selected. The quality of the saliency map is qualitatively improved, and the accuracy and the integrity of an initial estimation result are greatly improved. As shown in fig. 4, the saliency maps generated by random generation, gaussian model and three classical algorithms (CA, FT, SVO) are selected, and although these maps do not well represent the position and contour of a saliency target, the output result can still obtain high accuracy after the model optimization of the present invention. The method also has strong error correction capability, and when the input result has extremely low precision and even has strong misleading property, the method can also optimize the high-quality saliency map.

Claims (1)

1. A salient object detection algorithm based on boundary prior and iterative optimization, characterized by comprising the following steps:
step one, extracting characteristic image information, and representing the image information in a characteristic matrix form; the method specifically comprises the following steps:
first, image segmentation and region simplification are performed: performing region segmentation on an input image by adopting a simple linear iterative clustering algorithm, wherein each region is called a super pixel, and obtaining a characteristic matrix formed by distributing the serial number of the super pixel to each pixel;
secondly, establishing an undirected graph model G = (V, E) according to the feature matrix, wherein V represents the node set of the graph model and E represents the undirected edge set;
step two, establishing a boundary-prior-based regional background likelihood estimation model, through which the position and contour of the salient object are accurately detected, wherein this step specifically comprises:
firstly, establishing the boundary-prior-based regional background likelihood estimation model; the specific processing comprises:
finding the homogeneous region H(r_i) of the superpixel r_i under study, where H(r_i) denotes the set of superpixels homogeneous with r_i;
extracting a boundary super-pixel set B of the image;
computing the proportion of the overlap between the homogeneous region of r_i and the boundary region relative to the boundary region; that is, the background likelihood of superpixel r_i is defined as:

b_i = |H(r_i) ∩ B| / |B|

In the above formula, b_i represents the background likelihood of superpixel r_i, and |·| represents the total number of pixels in a superpixel or superpixel set;
secondly, estimating the homogeneity probability p_ij; the estimation formula is as follows:

p_ij = M_Cs(r_i, r_j) × M_Con(r_i, r_j) × M_Sp(r_i)

wherein i is the index of the superpixel to be estimated, j is the index of a boundary superpixel, M_Cs(r_i, r_j) is the color similarity of the superpixel pair (r_i, r_j), M_Con(r_i, r_j) is the negative exponential of the geodesic distance, defining the smoothness of the connection between the superpixel pair, and M_Sp(r_i) is a brand-new central-prior enhancement model; the p_ij are assembled into a matrix P of size N × N_B, where N_B is the total number of image-boundary superpixels;
furthermore, background map estimation and initial saliency map generation are realized; that is, the background map is generated according to the above regional background likelihood estimation model and converted into an initial saliency map vector, as shown in the following formula:

bg = P δ

wherein δ is the vector whose elements δ_j are the normalized areas of all boundary superpixels, δ_j representing the normalized area of the j-th boundary superpixel;
the vector bg presents the image as a background likelihood probability map with the same resolution as the original image, wherein parts with higher gray values represent the background region and parts with lower gray values represent the salient object region; the background map is inverted into an initial saliency map using the concept of Shannon self-information; the self-information calculation formula is as follows:

s_i^(0) = -log(b_i)

i.e., a superpixel with a lower background likelihood will generally also contain more saliency information; s_i^(0) represents the initial saliency of each superpixel i; i is the index of the superpixel to be estimated, j is the index of a boundary superpixel, and N is the number of superpixels;
step three, generating a saliency map enhancement model based on iterative optimization, i.e., iteratively executing the two processes of foreground/background seed selection and saliency global optimization, specifically comprising: in each iteration, a seed selection method based on Bayesian theory is first used to extract a few easily identified salient/background regions to form the foreground and background seed sets, which are given corresponding class labels to guide the subsequent optimization process; then a least-squares optimization model is used to fuse the three cues of class labels, prior estimation, and smoothness prior, so that the output result has higher accuracy and integrity than the input of the previous iteration; the model consists of an objective function and several constraint conditions, subject to

s_i^(t+1) ∈ [0, 1],  s_i^(t) ∈ [0, 1]

In the t-th iteration, the saliency value of superpixel r_i is expressed as s_i^(t); the superscript (·)^(t) indicates that the variable belongs to the t-th iteration; the objective function is a weighted sum of three least-squares terms, namely a prior term, a classification term, and a smoothing term, whose adaptive weights in the t-th iteration (δ_i among them) balance the three terms; among the constraints, the label value guiding the classification is 1 for a foreground seed and 0 for a background seed, while the remaining superpixels may take any value.
CN201810008543.6A 2018-01-04 2018-01-04 Salient object detection algorithm based on boundary prior and iterative optimization Pending CN108416347A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810008543.6A CN108416347A (en) 2018-01-04 2018-01-04 Salient object detection algorithm based on boundary prior and iterative optimization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810008543.6A CN108416347A (en) 2018-01-04 2018-01-04 Salient object detection algorithm based on boundary prior and iterative optimization

Publications (1)

Publication Number Publication Date
CN108416347A true CN108416347A (en) 2018-08-17

Family

ID=63125737

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810008543.6A Pending CN108416347A (en) Salient object detection algorithm based on boundary prior and iterative optimization

Country Status (1)

Country Link
CN (1) CN108416347A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109784216A (en) * 2018-12-28 2019-05-21 华南理工大学 Vehicle-mounted thermal imaging pedestrian detection RoIs extracting method based on probability graph
CN110211115A (en) * 2019-06-03 2019-09-06 大连理工大学 A kind of light field conspicuousness detection implementation method based on depth guidance cellular automata
CN110826472A (en) * 2019-11-01 2020-02-21 新疆大学 Image detection method and device
CN111274964A (en) * 2020-01-20 2020-06-12 中国地质大学(武汉) Detection method for analyzing water surface pollutants based on visual saliency of unmanned aerial vehicle
CN111696099A (en) * 2020-06-16 2020-09-22 北京大学 General outlier likelihood estimation method based on image edge consistency
CN111815610A (en) * 2020-07-13 2020-10-23 广东工业大学 Lesion focus detection method and device of lesion image
CN112418218A (en) * 2020-11-24 2021-02-26 中国地质大学(武汉) Target area detection method, device, equipment and storage medium
CN112886571A (en) * 2021-01-18 2021-06-01 清华大学 Decomposition, coordination and optimization operation method and device of electric heating comprehensive energy system based on boundary variable feasible region
US20210248181A1 (en) * 2020-02-11 2021-08-12 Samsung Electronics Co., Ltd. Electronic device and control method thereof
CN113379785A (en) * 2021-06-22 2021-09-10 辽宁工程技术大学 Salient object detection method fusing boundary prior and frequency domain information

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130223740A1 (en) * 2012-02-23 2013-08-29 Microsoft Corporation Salient Object Segmentation
CN103914834A (en) * 2014-03-17 2014-07-09 上海交通大学 Significant object detection method based on foreground priori and background priori
US20160104054A1 (en) * 2014-10-08 2016-04-14 Adobe Systems Incorporated Saliency Map Computation
CN105513070A (en) * 2015-12-07 2016-04-20 天津大学 RGB-D salient object detection method based on foreground and background optimization
CN106203430A (en) * 2016-07-07 2016-12-07 北京航空航天大学 A kind of significance object detecting method based on foreground focused degree and background priori
CN106373131A (en) * 2016-08-25 2017-02-01 上海交通大学 Edge-based image significant region detection method
CN106951830A (en) * 2017-02-23 2017-07-14 北京联合大学 A kind of many object marking methods of image scene constrained based on priori conditions
CN106960434A (en) * 2017-03-03 2017-07-18 大连理工大学 A kind of image significance detection method based on surroundedness and Bayesian model
CN106991678A (en) * 2017-04-07 2017-07-28 无锡职业技术学院 A kind of method of target conspicuousness detection
CN107330861A (en) * 2017-07-03 2017-11-07 清华大学 Image significance object detection method based on diffusion length high confidence level information

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
SHUWEI HUO et al.: "Semi-supervised Saliency Classifier Based on a Linear Feedback Control System Model", 《2017 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN)》 *
WANGJIANG ZHU et al.: "Saliency Optimization from Robust Background Detection", 《2014 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 *
YICHEN WEI et al.: "Geodesic Saliency Using Background Priors", 《ECCV 2012: COMPUTER VISION》 *
ZHOU QIANGQIANG et al.: "Image Saliency Analysis Combining Grey Relational Degree and Priors", 《JOURNAL OF SYSTEM SIMULATION》 *
FAN QING et al.: "Boundary-Prior-Based Image Saliency Detection in Natural Scenes", 《COMPUTER ENGINEERING》 *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109784216A (en) * 2018-12-28 2019-05-21 华南理工大学 Vehicle-mounted thermal imaging pedestrian detection RoIs extraction method based on probability map
CN109784216B (en) * 2018-12-28 2023-06-20 华南理工大学 Vehicle-mounted thermal imaging pedestrian detection RoIs extraction method based on probability map
CN110211115A (en) * 2019-06-03 2019-09-06 大连理工大学 Light field saliency detection implementation method based on depth-guided cellular automata
CN110211115B (en) * 2019-06-03 2023-04-07 大连理工大学 Light field saliency detection implementation method based on depth-guided cellular automata
CN110826472A (en) * 2019-11-01 2020-02-21 新疆大学 Image detection method and device
CN110826472B (en) * 2019-11-01 2023-06-27 新疆大学 Image detection method and device
CN111274964B (en) * 2020-01-20 2023-04-07 中国地质大学(武汉) Detection method for analyzing water surface pollutants based on visual saliency of unmanned aerial vehicle
CN111274964A (en) * 2020-01-20 2020-06-12 中国地质大学(武汉) Detection method for analyzing water surface pollutants based on visual saliency of unmanned aerial vehicle
US11816149B2 (en) * 2020-02-11 2023-11-14 Samsung Electronics Co., Ltd. Electronic device and control method thereof
US20210248181A1 (en) * 2020-02-11 2021-08-12 Samsung Electronics Co., Ltd. Electronic device and control method thereof
CN111696099A (en) * 2020-06-16 2020-09-22 北京大学 General outlier likelihood estimation method based on image edge consistency
CN111696099B (en) * 2020-06-16 2022-09-27 北京大学 General outlier likelihood estimation method based on image edge consistency
CN111815610A (en) * 2020-07-13 2020-10-23 广东工业大学 Lesion detection method and device for lesion images
CN111815610B (en) * 2020-07-13 2023-09-12 广东工业大学 Lesion detection method and device for lesion images
CN112418218B (en) * 2020-11-24 2023-02-28 中国地质大学(武汉) Target area detection method, device, equipment and storage medium
CN112418218A (en) * 2020-11-24 2021-02-26 中国地质大学(武汉) Target area detection method, device, equipment and storage medium
CN112886571B (en) * 2021-01-18 2022-07-08 清华大学 Decomposition, coordination and optimization operation method and device of electric heating comprehensive energy system based on boundary variable feasible region
CN112886571A (en) * 2021-01-18 2021-06-01 清华大学 Decomposition, coordination and optimization operation method and device of electric heating comprehensive energy system based on boundary variable feasible region
CN113379785A (en) * 2021-06-22 2021-09-10 辽宁工程技术大学 Salient object detection method fusing boundary prior and frequency domain information
CN113379785B (en) * 2021-06-22 2024-03-15 辽宁工程技术大学 Salient object detection method fusing boundary prior and frequency domain information

Similar Documents

Publication Publication Date Title
CN108416347A (en) Salient object detection algorithm based on boundary prior and iterative optimization
CN108776975B (en) Visual tracking method based on semi-supervised feature and filter joint learning
WO2020077858A1 (en) Video description generation method based on neural network, and medium, terminal and apparatus
Zhang et al. Multi-objective evolutionary fuzzy clustering for image segmentation with MOEA/D
CN106157330B (en) Visual tracking method based on target joint appearance model
CN110458192B (en) Hyperspectral remote sensing image classification method and system based on visual saliency
CN102982539B (en) Characteristic self-adaption image common segmentation method based on image complexity
CN109859238A (en) Online multi-object tracking method based on optimal multi-feature association
KR101618996B1 (en) Sampling method and image processing apparatus for estimating homography
CN105335965B (en) Multi-scale self-adaptive decision fusion segmentation method for high-resolution remote sensing image
CN109685806B (en) Image significance detection method and device
CN109509191A (en) Salient object detection method and system
Abdelsamea et al. A SOM-based Chan–Vese model for unsupervised image segmentation
CN117557579A (en) Method and system for assisting unsupervised superpixel segmentation using an atrous pyramid collaborative attention mechanism
CN113221065A (en) Data density estimation and regression method, corresponding device, electronic device, and medium
Khan et al. A modified adaptive differential evolution algorithm for color image segmentation
Bourouis et al. Color object segmentation and tracking using flexible statistical model and level-set
CN113298129A (en) Polarized SAR image classification method based on superpixel and graph convolution network
CN115393631A (en) Hyperspectral image classification method based on Bayesian layer graph convolution neural network
CN112509017A (en) Remote sensing image change detection method based on learnable difference algorithm
JP2023038144A (en) Method and system for detecting abnormality in image by using a plurality of machine learning programs
Blasiak A comparison of image segmentation methods
CN108280845B (en) Scale self-adaptive target tracking method for complex background
CN118115868A (en) Remote sensing image target detection method, remote sensing image target detection device, computer equipment and storage medium
Firouznia et al. Adaptive chaotic sampling particle filter to handle occlusion and fast motion in visual object tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 2018-08-17