CN105809672A - Super pixels and structure constraint based image's multiple targets synchronous segmentation method - Google Patents
- Publication number
- CN105809672A CN105809672A CN201610120407.7A CN201610120407A CN105809672A CN 105809672 A CN105809672 A CN 105809672A CN 201610120407 A CN201610120407 A CN 201610120407A CN 105809672 A CN105809672 A CN 105809672A
- Authority
- CN
- China
- Prior art keywords
- super
- pixel
- segmentation
- segmentation result
- subtree
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
Abstract
The present invention discloses a multi-target synchronous segmentation method for images based on superpixels and structure constraints. The method is applied to a set of images with shared targets, where each image may contain more than one shared target, and the shared targets can be precisely segmented. First, a pre-segmentation operation is performed on the input image set to obtain over-segmented images. Then, based on a target-detection mechanism, all superpixels are classified as background or foreground, and foreground classifiers are trained from this split. Based on the classifier results, models are established for the foreground targets, and precise segmentation is achieved by an optimization algorithm that uses a forest-model hypothesis and beam constraints. Compared with existing algorithms, this optimization strategy increases segmentation accuracy and makes the method capable of dealing with various complex scenes.
Description
Technical field
The present invention relates to an image multi-target co-segmentation method based on superpixels and structured constraints, applicable to fields such as multi-target co-segmentation of pictures, object segmentation in sports pictures, and picture classification and recognition.
Background technology
In computer vision, image segmentation is a fundamental and classic hard problem, and its solution benefits many other image-processing tasks such as target recognition and object classification. In practical applications, fields such as intelligent surveillance, medical diagnosis, robotics and intelligent machinery, industrial automation, and even military guidance are all closely connected with image segmentation. Via the Internet, people can very easily obtain a large number of pictures containing the same object or objects of the same category, and how to automatically distinguish and segment the common objects of interest from such pictures has become the main purpose of our research. Targets of interest can be segmented using bottom-up image information (color, texture, etc.), but relying only on low-level image data cannot produce the desired segmentation; the implicit information shared across pictures can help determine which objects are the common ones to be identified. This line of research, which uses multiple pictures containing the same object or objects of the same category to segment the common objects of interest, is called co-segmentation. Co-segmentation has become a popular research topic in recent years, and a fair amount of work on it already exists. However, surveying the research and applications in this field, the following technical difficulties remain:
1) Existing methods mainly use low-level features such as color and shape, ignoring learnable high-level superpixel-based features and the structural constraints among objects in multi-target scenes;
2) Current mainstream algorithms are mostly designed for single-target segmentation; their results on multiple targets are often unsatisfactory, and no targeted optimization exists;
3) The scalability of most methods is poor, so they cannot handle large databases.
These difficulties hamper the wide application of co-segmentation techniques in the multimedia field, so developing a co-segmentation method suitable for multiple targets has high practical value.
Summary of the invention
To solve the problems in the prior art, the invention discloses an image multi-target co-segmentation method based on superpixels and structured constraints, applicable to segmenting common objects when multiple targets are present. A foreground/background classifier is obtained by combining a target-detection mechanism with learning, which gives the algorithm better scalability. The proposed forest model and the iterative segmentation algorithm based on tree-structured constraints effectively solve the combinatorial-optimization energy model, making multi-target segmentation more accurate and greatly improving computational efficiency.
The present invention adopts the following technical scheme: an image multi-target co-segmentation method based on superpixels and structured constraints, comprising the following steps:
(1) Image pre-segmentation: for each image Ii, i = 1, 2, …, N, in the image data set I = {I1, …, IN} containing common target objects, perform over-segmentation to obtain its superpixel set;
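The patent does not fix a particular over-segmentation algorithm (SLIC and similar methods are typical choices). As a minimal, self-contained stand-in, the sketch below partitions an image into regular square cells and returns an integer superpixel label map:

```python
import numpy as np

def grid_superpixels(h, w, cell=4):
    """Toy stand-in for over-segmentation: partition an h x w image
    into regular square cells and return an integer label map.
    (A real implementation would use SLIC or a similar algorithm.)"""
    rows = np.arange(h) // cell
    cols = np.arange(w) // cell
    ncols = (w + cell - 1) // cell
    return rows[:, None] * ncols + cols[None, :]

labels = grid_superpixels(8, 8, cell=4)   # 4 superpixels of 16 pixels each
```

A real superpixel algorithm would follow image boundaries instead of a grid, but the label-map representation is the same.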
(2) Automatic target discovery: based on the superpixel set of each image, compute for every superpixel Rim its saliency value and its repeatability value wim, and from these compute its evaluation score scoreim. Superpixels whose score is below 0.6 × max(scorei) are set as background, and superpixels whose score is greater than or equal to 0.6 × max(scorei) are set as foreground, where max(scorei) is the largest evaluation score among the superpixels of image Ii;
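As an illustration of the thresholding rule, the sketch below assumes (hypothetically; the exact combination is not given in this passage) that scoreim is the product of the saliency and repeatability values:

```python
import numpy as np

# Hypothetical per-superpixel values; the product as the combined score
# is an assumption made for illustration only.
saliency = np.array([0.9, 0.2, 0.7, 0.1])
repeat   = np.array([0.8, 0.9, 0.7, 0.5])
score = saliency * repeat

thresh = 0.6 * score.max()          # 0.6 x max(score_i)
foreground = score >= thresh        # kept as foreground seeds
background = ~foreground            # everything else is background
```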
(3) Classifier learning: the superpixels in the training set, divided into foreground and background by automatic target discovery, are each described by the following 2004-dimensional feature vector: (a) an 800-dimensional vector-quantized HSV color representation (obtained by k-means clustering); (b) a SIFT bag-of-words model obtained by multi-scale dense sampling (1200 dimensions; image patches with sides of 16, 24, and 32 pixels, sampled at an interval of 3 pixels); (c) 4 binarized features describing the contact between the superpixel and the four borders of the image. Based on these features, a foreground/background classifier is obtained with a standard support vector machine.
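A sketch of the 2004-dimension bookkeeping (800 + 1200 + 4); the feature values here are random placeholders, only the layout follows the text:

```python
import numpy as np

# 800-D quantized HSV colour + 1200-D multi-scale SIFT bag-of-words
# + 4 binary image-border contact flags = 2004-D descriptor.
rng = np.random.default_rng(0)
hsv_vq   = rng.random(800)                  # placeholder HSV code histogram
sift_bow = rng.random(1200)                 # placeholder SIFT bag-of-words
border   = np.array([1.0, 0.0, 0.0, 1.0])  # e.g. touches top and left border
feature = np.concatenate([hsv_vq, sift_bow, border])
```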
(4) Target modeling: based on the information classified in step (2), establish for the common target objects an object model Ψf and a background model Ψb in HSV color space. The Hellinger distance metric is used to compute the similarity between a superpixel (or superpixel combination) and the object model, and the similarity between a superpixel (or superpixel combination) and the background model.
The object model Ψf is built as follows: convert the original image to HSV color space; uniformly quantize the four color components H, S, V, and "G" of the HSV image; accumulate the distribution of the target object on each color component to obtain a histogram distribution, i.e. the object model Ψf. In the same way, accumulate the distribution of the background image on each color component to obtain the background model Ψb. The "G" component holds the color quantization value of pixels whose saturation is below 5%;
The two similarities are computed from the Hellinger distance between the normalized color histograms. In its standard form, for two normalized histograms p and q over C bins, the Hellinger distance is d_H(p, q) = sqrt(1 − Σ_{c=1}^{C} sqrt(p(c) q(c))). Here C is the number of equal quantization intervals; hR and hf are the normalized color histograms of the superpixel (or superpixel combination) R and of the object model, and hR′ and hb are the normalized color histograms of the superpixel (or superpixel combination) R′ and of the background model.
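The standard Hellinger distance between two normalized histograms, as used here to compare a superpixel's histogram with the object or background model, can be computed as:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two normalized histograms:
    sqrt(1 - sum(sqrt(p * q))). 0 for identical histograms, 1 for
    histograms with disjoint support."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    return np.sqrt(max(0.0, 1.0 - np.sum(np.sqrt(p * q))))

uniform = np.full(4, 0.25)
peaked  = np.array([1.0, 0.0, 0.0, 0.0])
d_same = hellinger(uniform, uniform)   # identical histograms -> 0
d_diff = hellinger(uniform, peaked)
```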
(5) Superpixel-based segmentation: using the object model Ψf and background model Ψb, a combinatorial-optimization algorithm reclassifies the superpixels into foreground and background, yielding the final segmentation of the target objects. A forest-model hypothesis is proposed: each superpixel corresponds to a vertex; for single-target segmentation, the final result consists of multiple adjacent superpixels and can be expressed as a subtree of the adjacency graph; for multi-target segmentation, the final result is a forest formed by multiple subtrees of the adjacency graph. In summary, the final segmentation result is assumed to be a forest of subtrees of the adjacency graph, and it is determined by building the adjacency graph and inferring its subtree set. The specific process is as follows:
(5.1) Build the adjacency graph: assume each superpixel in the image corresponds to a vertex, and connect every two adjacent superpixels with an edge, thereby forming the adjacency graph. The final target segmentation result is assumed to be a forest formed by several of its subtrees;
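Building the adjacency graph of step (5.1) from a superpixel label map can be sketched as follows: two superpixels are connected when their regions contain a horizontally or vertically neighbouring pixel pair.

```python
import numpy as np

def adjacency_edges(labels):
    """Return the sorted edge list of the superpixel adjacency graph:
    an edge (a, b) exists when regions a and b share a 4-connected
    pixel boundary in the label map."""
    edges = set()
    lab = np.asarray(labels)
    # Compare each pixel with its right neighbour, then its lower neighbour.
    for a, b in [(lab[:, :-1], lab[:, 1:]), (lab[:-1, :], lab[1:, :])]:
        diff = a != b
        for x, y in zip(a[diff].ravel(), b[diff].ravel()):
            edges.add((min(int(x), int(y)), max(int(x), int(y))))
    return sorted(edges)

labels = np.array([[0, 0, 1],
                   [2, 2, 1]])
edges = adjacency_edges(labels)   # [(0, 1), (0, 2), (1, 2)]
```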
(5.2) Establish and solve a numerical model: a numerical model converts the target-segmentation problem into a combinatorial-optimization problem. When R is a superpixel or superpixel combination in the foreground, its similarity to the object model is used; when R′ is a superpixel or superpixel combination in the background, its similarity to the background model is used. The constraint expresses that any superpixel R can belong to only one of the foreground and background classes. By derivation, solving for the segmentation result can be converted into solving for the optimal subtree set, and finding the optimal subtree set requires first estimating the maximum spanning tree;
(5.3) Derive the maximum spanning tree: obtain all possible candidate subtree sets by beam search, and from the candidate subtree sets obtain the maximum spanning tree by maximum-likelihood estimation. The derivation considers the set of all potential spanning trees and the data likelihood. Here Cq denotes a subtree, δ(·) is an indicator function, δ((x, y) ∈ Cq) indicates whether the edge (x, y) belongs to subtree Cq, the similarity of subtree Cq to the object model weighs whether an edge belongs to the subtree, and P(x, y) denotes the probability of edge (x, y), with a maximum-likelihood estimate of P(x, y). From this, the maximum-likelihood estimate of the maximum spanning tree is obtained.
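The maximum-spanning-tree step can be sketched with Kruskal's algorithm on descending edge weights; the weights below are hypothetical stand-ins for the edge probabilities P(x, y) of the derivation:

```python
def max_spanning_tree(n, weighted_edges):
    """Kruskal's algorithm on weights sorted in descending order yields a
    maximum spanning tree of the n-vertex adjacency graph. Edges are
    (weight, x, y) tuples; returns the list of chosen (x, y) edges."""
    parent = list(range(n))

    def find(x):
        # Union-find with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, x, y in sorted(weighted_edges, reverse=True):
        rx, ry = find(x), find(y)
        if rx != ry:            # edge joins two components: keep it
            parent[rx] = ry
            tree.append((x, y))
    return tree

edges = [(0.9, 0, 1), (0.2, 0, 2), (0.8, 1, 2), (0.5, 2, 3)]
mst = max_spanning_tree(4, edges)   # [(0, 1), (1, 2), (2, 3)]
```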
(5.4) Search for the segmentation subtree set: from the maximum-likelihood estimate of the maximum spanning tree, dynamic programming is used to search within it for the optimal subtree set. The concrete steps are:
(5.4.1) For image Ii, apply the foreground/background classifier to the superpixel set to obtain the set of seed superpixels classified as foreground. This seed set consists of discrete seed superpixels, which are first sorted by their similarity to the object model;
(5.4.2) Choose the superpixel s1 closest to the object model as the start node, infer the maximum spanning tree from it, and derive the corresponding optimal subtree and its segmentation result. Judge the similarity between this segmentation result and the object model: if the similarity satisfies the condition, the segmentation result is considered valid; otherwise the result is set to the empty set, and the wrong seed superpixels contained in the segmented region are fed back to the seed set for deletion and update;
(5.4.3) Traverse the seed set to find whether a seed superpixel sk exists outside the regions covered by the previous optimal subtrees. If so, repeat the above step to obtain another segmentation result, likewise judge its similarity to the object model and perform the subsequent processing, updating the segmentation result and the seed superpixel set;
(5.4.4) After traversing the whole seed superpixel set, the final segmentation result for image Ii and the updated seed superpixel set are obtained. From this information the object model and the seed-superpixel constraint information are updated, so that the model estimate is closer to the changes present in the real scene and erroneous seed superpixels are removed; the next iteration then begins.
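The seed-driven search of (5.4.1)-(5.4.4) can be sketched as follows; `grow` and `region_sim` are hypothetical stand-ins for the subtree inference and the similarity condition, and `min_sim` is an assumed threshold:

```python
def segment_from_seeds(seeds, seed_sim, grow, region_sim, min_sim=0.5):
    """Visit seed superpixels in decreasing similarity to the object model;
    grow a region (subtree) from each seed not yet covered; keep it if its
    similarity to the model is high enough, otherwise discard it."""
    ordered = sorted(seeds, key=lambda s: seed_sim[s], reverse=True)
    results, covered = [], set()
    for s in ordered:
        if s in covered:          # already inside a kept region
            continue
        region = grow(s)
        if region_sim(region) >= min_sim:
            results.append(sorted(region))
            covered |= set(region)
        # else: result invalid; the seed is treated as wrong and skipped
    return results

regions = segment_from_seeds(
    seeds=[0, 3],
    seed_sim={0: 0.9, 3: 0.4},
    grow=lambda s: {s, s + 1},    # hypothetical subtree inference
    region_sim=lambda r: 0.8,     # hypothetical model-similarity check
)
```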
(6) Iterative segmentation: update the object model of step 4 with the segmentation result obtained in step 5, then segment again using the method of step 5;
(7) Repeat step 6 until the segmentation result no longer changes, which gives the final segmentation result.
Further, in step 2, the saliency value of a superpixel is measured as follows: a saliency detection technique applied to the i-th image Ii yields an original saliency map φi; the saliency of each superpixel is then the mean saliency over the pixels it contains:
saliency(Rim) = (1 / area(Rim)) Σ_{j ∈ Rim} φi(j)
where Rim is the m-th superpixel in the i-th image Ii, φi(j) denotes the saliency value of pixel j, and area(Rim) is the number of pixels contained in superpixel Rim.
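The per-superpixel mean-saliency measure (sum of the saliency map over each region divided by its pixel count) can be computed in vectorized form as:

```python
import numpy as np

def superpixel_saliency(sal_map, labels):
    """Mean saliency per superpixel: sum of the pixel saliency map over
    each labelled region, divided by the region's area in pixels."""
    sal_map = np.asarray(sal_map, float)
    labels = np.asarray(labels)
    n = labels.max() + 1
    sums = np.bincount(labels.ravel(), weights=sal_map.ravel(), minlength=n)
    areas = np.bincount(labels.ravel(), minlength=n)
    return sums / areas

sal = np.array([[1.0, 0.0],
                [1.0, 0.0]])
labels = np.array([[0, 1],
                   [0, 1]])
means = superpixel_saliency(sal, labels)   # [1.0, 0.0]
```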
The repeatability value wim of a superpixel is measured as follows: measure the minimum distance between the superpixel and all superpixels of each other image, obtaining N−1 minimum distances {d(Rim, Ik)}_{k≠i}; averaging these N−1 minimum distances gives the average minimum distance. The distance metric d(Rim, Ik) is obtained as a weighted combination of the vector distance based on HSV color and the distance based on the SIFT bag-of-words model, where cim and gim denote the HSV color feature vector and the SIFT bag-of-words feature vector of the m-th superpixel Rim in the i-th image Ii, and ckm′ and gkm′ denote those of the m′-th superpixel Rkm′ in the k-th image Ik;
The superpixel repeatability weight wim is then computed from the average minimum distance by a sigmoid formula, where μ and σ are the parameters controlling the shape of the sigmoid function, with μ = 0.5 and σ = 0.1.
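A sketch of the repeatability weight, assuming a decreasing sigmoid of the average minimum distance (small average distance across images means the superpixel repeats well, so its weight should be near 1; this orientation is an assumption, as the passage fixes only the sigmoid form and μ = 0.5, σ = 0.1):

```python
import math

def repeatability_weight(min_dists, mu=0.5, sigma=0.1):
    """Average the N-1 per-image minimum feature distances and pass the
    mean through a sigmoid with parameters mu and sigma. The decreasing
    orientation (small mean distance -> weight near 1) is assumed."""
    d_bar = sum(min_dists) / len(min_dists)
    return 1.0 / (1.0 + math.exp((d_bar - mu) / sigma))

w_close = repeatability_weight([0.1, 0.2, 0.3])   # repeats well across images
w_far   = repeatability_weight([0.8, 0.9, 1.0])   # poorly repeatable
```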
Further, step 6 proceeds as follows:
(6.1) According to the latest segmentation result, update the previous foreground object model so that it is closer to the target to be segmented;
(6.2) According to the updated object model, regenerate all possible candidate subtree sets and estimate the maximum spanning tree;
(6.3) According to the updated object model and maximum spanning tree, again use dynamic programming to search for the forest formed by the subtree set, obtaining a segmentation result;
(6.4) Judge whether the cut-off condition is met, i.e. whether the segmentation result no longer changes. If it is met, the iteration ends; otherwise, repeat (6.1)-(6.3).
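The outer iteration of steps (6)-(7) can be sketched as a fixed-point loop; `segment` and `update_model` are hypothetical stand-ins for steps (5) and (6.1)-(6.3):

```python
def iterate_segmentation(segment, update_model, model, max_iter=50):
    """Re-segment with the current model, update the model from the
    result, and stop when the segmentation no longer changes
    (cut-off condition (6.4))."""
    prev = None
    for _ in range(max_iter):
        result = segment(model)
        if result == prev:       # segmentation unchanged: converged
            break
        prev = result
        model = update_model(model, result)
    return prev

# Toy convergence: the "model" drifts toward a fixed point at 8,
# so the "segmentation" stabilises there.
final = iterate_segmentation(
    segment=lambda m: m,
    update_model=lambda m, r: min(m + 2, 8),
    model=0,
)
```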
Compared with the prior art, the above technical scheme of the present invention has the following technical effects:
1) By combining the automatic target discovery mechanism with learning, the present invention obtains a general foreground/background classifier, giving better scalability on large image sets.
2) The seed superpixel set obtained from the foreground/background classifier is more accurate than the foreground obtained directly from the automatic target discovery mechanism, which helps improve the subsequent segmentation precision.
3) The proposed forest model and tree-structured constraints greatly improve segmentation accuracy, especially for multi-target segmentation of objects with complex fine structure, and provide a new optimization algorithm for solving the combinatorial-optimization equation.
Description of the drawings
Fig. 1 is the overall flow chart of the present invention;
Fig. 2 is a schematic diagram of the foreground/background classifier learning process;
Fig. 3 is a schematic diagram of superpixel-based segmentation;
Fig. 4 shows multi-target segmentation results under drastic changes of scale and pose.
Detailed description of the invention
The technical scheme is described in further detail below through specific embodiments in conjunction with the accompanying drawings.
The following embodiments are carried out on the premise of the technical scheme of the present invention and give detailed implementations and concrete operating processes, but the protection scope of the present invention is not limited to them.
This embodiment processes several categories of images from the public iCoseg data set. These images exhibit drastic changes in color, illumination, pose, and scale, and some contain multiple common objects, which poses a great challenge to existing segmentation techniques. Fig. 1 is the overall flow chart of the present invention, Fig. 2 is a schematic diagram of the classifier learning process, and Fig. 3 is a schematic diagram of superpixel-based segmentation. This embodiment comprises the following steps:
Steps (1) through (7), together with the saliency and repeatability measures and the detailed iteration of step (6), are identical to steps (1) through (7) described in the Summary of the invention above.
Implementation results:
Following the above steps, target segmentation is performed on pictures chosen from the iCoseg database. Fig. 4 shows multi-target segmentation tests on pictures selected from iCoseg. As can be seen from Fig. 4, even when the targets to be segmented undergo drastic changes of scale, pose, and illumination and the images contain multiple targets, the present invention still obtains accurate object segmentation results.
Claims (2)
1. the collaborative dividing method of image multiple target retrained based on super-pixel and structuring, it is characterised in that comprise the steps of
(1) image pre-segmentation: for comprising the image data set I={I of common objective object1,…,INIn every piece image Ii, i=1,2 ..., N, carry out over-segmentation process, obtain super-pixel collection
(2) automatic target finds: based on the super-pixel collection of each imageAdd up each super-pixelSignificance valueWith repeatability value wim, and calculate super-pixelEvaluation of estimate scoreim, By evaluation of estimate less than 0.6 × max (scorei) super-pixel be set to background, by evaluation of estimate be more than or equal to 0.6 × max (scorei) super-pixel be set to prospect;max(scorei) for super-pixel collectionThe evaluation of estimate of the super-pixel that middle evaluation of estimate is maximum;
(3) grader study.Found the super-pixel collection in training set by automatic targetIt is divided into foreground and background, for each super-pixel, adopts the characteristic vector that following 2004 are tieed up to describe: the hsv color of (a) 800 Dimension Vector Quantization of Linear Prediction represents (k mean cluster obtains);B SIFT word bag model (1200 dimensions, the image block being limit with 16,24,32 pixels respectively is sampled, and the sampling interval is 3 pixels) that () multiple dimensioned intensive sampling obtains;C () 4 binaryzation features, contact situation in order to what describe four borders of super-pixel and image.Based on features above, utilize the support vector machine learning method of standard just can obtain front background class device.
(4) Target modeling: based on the classification information of step (3), a foreground object model Ψ<sub>f</sub> and a background model Ψ<sub>b</sub> are built for the foreground target object in HSV colour space. The Hellinger distance is used to compute, respectively, the similarity between a superpixel (or superpixel combination) and the object model, and the similarity between a superpixel (or superpixel combination) and the background model.
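The Hellinger distance named in step (4) has a standard closed form for normalised histograms, via the Bhattacharyya coefficient. A minimal sketch:

```python
import math

def hellinger(p, q):
    """Hellinger distance between two normalised histograms p and q:
    sqrt(1 - sum_i sqrt(p_i * q_i)). Ranges from 0 (identical) to 1 (disjoint)."""
    bc = sum(math.sqrt(a * b) for a, b in zip(p, q))   # Bhattacharyya coefficient
    return math.sqrt(max(0.0, 1.0 - bc))               # clamp guards rounding error

print(hellinger([0.5, 0.5], [0.5, 0.5]))  # 0.0  (identical histograms)
print(hellinger([1.0, 0.0], [0.0, 1.0]))  # 1.0  (disjoint histograms)
```

In the method above, p would be a superpixel's HSV histogram and q the histogram of Ψ<sub>f</sub> or Ψ<sub>b</sub>.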
(5) Superpixel-based segmentation: using the object model Ψ<sub>f</sub> and background model Ψ<sub>b</sub>, the superpixels are re-classified into foreground and background by a combinatorial-optimization algorithm, yielding the final segmentation of the target objects. A forest-model assumption is proposed: each superpixel corresponds to a vertex; for single-target segmentation, the final result consists of several adjacent superpixels and can be expressed as a subtree of the adjacency graph; for multi-target segmentation, the final result is a forest formed by several subtrees of the adjacency graph. In summary, the final segmentation result is assumed to be a forest of subtrees of the adjacency graph, and it is determined by building the adjacency graph and inferring the subtree set. The process is implemented as follows:
(5.1) Build the adjacency graph: each superpixel in the image is assumed to correspond to a vertex of the graph, and every pair of adjacent superpixels is connected by an edge, forming the adjacency graph; the final target segmentation result is assumed to be a forest formed by several of its subtrees;
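The adjacency-graph construction of step (5.1) can be sketched from a superpixel label map: two superpixels are adjacent when any of their pixels are 4-connected neighbours. The toy label map below is hypothetical:

```python
def adjacency_graph(labels):
    """Build the adjacency graph G of step (5.1): one vertex per superpixel
    id, one edge per pair of 4-connected neighbouring superpixels.
    labels: 2-D list of superpixel ids, one per pixel."""
    edges = set()
    h, w = len(labels), len(labels[0])
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):        # right and down neighbours
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and labels[y][x] != labels[ny][nx]:
                    edges.add(tuple(sorted((labels[y][x], labels[ny][nx]))))
    return edges

labels = [[0, 0, 1],
          [2, 2, 1]]
print(sorted(adjacency_graph(labels)))  # [(0, 1), (0, 2), (1, 2)]
```

The subtree inference of steps (5.2)-(5.4) then operates on these edges.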
(5.2) Establish a numerical model: a numerical model is established that converts the target segmentation problem into a combinatorial optimization problem, in which R denotes a superpixel or superpixel combination in the foreground and R′ denotes a superpixel or superpixel combination in the background, and the constraint requires that any superpixel R belong to exactly one of the foreground and background classes. By derivation, solving for the segmentation result can be converted into solving for the optimal subtree set, and finding the optimal subtree set first requires estimating the maximum spanning tree;
(5.3) Derive the maximum spanning tree: all possible candidate subtree sets are obtained by beam search, and from the candidate subtree set the maximum spanning tree is obtained by maximum-likelihood estimation. In the derivation, the data likelihood is taken over the set of all potential spanning trees; δ(·) is an indicator function, with δ((x, y) ∈ C<sub>q</sub>) indicating whether edge (x, y) belongs to subtree C<sub>q</sub>; each subtree C<sub>q</sub> has a similarity to the object model; P(x, y) denotes the probability of edge (x, y), and its maximum-likelihood estimate is obtained from the candidate subtree set. From this likelihood, the maximum-likelihood estimate of the maximum spanning tree is obtained.
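Once edge probabilities P(x, y) have been estimated, extracting a maximum-weight spanning tree as in step (5.3) can be sketched with a standard routine. Using Kruskal's algorithm with a union-find is an implementation choice of this sketch, not the patent's beam-search derivation; the edge weights below are hypothetical:

```python
def maximum_spanning_tree(n, weighted_edges):
    """Kruskal's algorithm on edges sorted by decreasing weight yields the
    maximum spanning tree; the weights stand in for estimated edge
    probabilities P(x, y). weighted_edges: list of (weight, u, v)."""
    parent = list(range(n))

    def find(v):                          # union-find with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree = []
    for w, a, b in sorted(weighted_edges, reverse=True):
        ra, rb = find(a), find(b)
        if ra != rb:                      # edge joins two components: keep it
            parent[ra] = rb
            tree.append((a, b))
    return tree

edges = [(0.9, 0, 1), (0.2, 0, 2), (0.8, 1, 2), (0.7, 2, 3)]
print(maximum_spanning_tree(4, edges))  # [(0, 1), (1, 2), (2, 3)]
```

The optimal subtree set of step (5.4) is then searched within this tree.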
(5.4) Search for the segmentation subtree set: based on the maximum-likelihood estimate of the maximum spanning tree, dynamic programming is then used to search within it for the optimal subtree set. The specific steps are as follows:
(5.4.1) For image I<sub>i</sub>, the foreground/background classifier is applied to the superpixel set, yielding the set of seed superpixels classified as foreground. This seed set consists of discrete seed superpixels, which are first sorted according to the similarity of each seed superpixel to the object model;
(5.4.2) The superpixel s<sub>1</sub> closest to the object model is chosen as the start node; from it the maximum spanning tree is inferred and the corresponding optimal subtree and its segmentation result are derived. The similarity of this segmentation result to the object model is then judged: if the similarity satisfies the given condition, the segmentation result is considered valid; otherwise the result is set to the empty set, and the erroneous seed superpixels contained in the segmented region are fed back to the seed set for deletion and update;
(5.4.3) The seed set is traversed to find whether there is a seed superpixel s<sub>k</sub> outside the segmented regions corresponding to the optimal subtrees found so far; if so, the above steps are repeated to obtain a further segmentation result, the similarity judgement against the object model and the subsequent processing are performed in the same way, and the segmentation result and seed superpixel set are updated;
(5.4.4) After the whole seed superpixel set has been traversed, the final segmentation result for image I<sub>i</sub> and the updated seed superpixel set are obtained. From this information the object model and the seed-superpixel constraint information are updated, so that the model estimate moves closer to the variations present in the real scene and erroneously estimated seed superpixels are eliminated, after which the next iteration begins.
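The seed-traversal loop of steps (5.4.1)-(5.4.4) can be sketched as a skeleton. All four callables below are hypothetical placeholders for the operations described in the claim (subtree inference, model similarity, validity test), and the toy data is invented for illustration:

```python
def search_segment_subtrees(seeds, similarity, infer_subtree, is_valid):
    """Skeleton of step (5.4): repeatedly grow an optimal subtree from the
    best remaining seed, keep the region if it resembles the object model,
    and drop the seeds the region covers (or the failed seed itself)."""
    seeds = sorted(seeds, key=similarity, reverse=True)   # (5.4.1): sort seeds
    results = []
    while seeds:
        start = seeds[0]                                  # (5.4.2): best seed
        region = infer_subtree(start)                     # its optimal subtree
        if is_valid(region):                              # similarity check
            results.append(region)
        # (5.4.2)/(5.4.3): remove the used seed and any seeds inside the
        # region, then continue with seeds outside all regions found so far
        seeds = [s for s in seeds[1:] if s not in region]
    return results

# toy run: seeds are superpixel ids, a "subtree" is just a fixed region here
regions = {1: {1, 2}, 3: {3}, 5: {5, 6}}
out = search_segment_subtrees(
    seeds=[1, 3, 5],
    similarity=lambda s: -s,          # pretend lower id = closer to the model
    infer_subtree=lambda s: regions[s],
    is_valid=lambda r: len(r) > 1,    # toy stand-in for the similarity condition
)
print(out)  # [{1, 2}, {5, 6}]
```

Each kept region corresponds to one subtree of the forest, i.e. one segmented target.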
(6) Iterative segmentation: the object model of step (4) is updated according to the segmentation result obtained in step (5), and segmentation is performed again by the method described in step (5);
(7) Step (6) is repeated until the segmentation result no longer changes; this is taken as the final segmentation result.
2. The method according to claim 1, characterized in that step (6) specifically comprises:
(6.1) According to the latest segmentation result, update the previous foreground object model so that it more closely matches the target to be segmented;
(6.2) According to the updated object model, regenerate all possible candidate subtree sets and estimate the maximum spanning tree;
(6.3) According to the updated object model and maximum spanning tree, again use dynamic programming to search for the forest formed by the subtree set, obtaining the segmentation result;
(6.4) Judge whether the stopping condition is met, namely whether the segmentation result no longer changes. If it is met, the iteration ends; if not, repeat (6.1)-(6.3).
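The fixed-point iteration of steps (6)-(7) can be sketched as a generic loop. `segment` and `update_model` are hypothetical stand-ins for steps (5) and (6.1)-(6.3), and the `max_iter` safety cap is an addition of this sketch, not part of the claim:

```python
def iterate_until_stable(segment, update_model, model, max_iter=50):
    """Steps (6)-(7) / claim 2: re-segment with an updated object model
    until the segmentation result no longer changes."""
    prev = None
    for _ in range(max_iter):
        result = segment(model)
        if result == prev:                  # (6.4): result stable, stop
            return result
        prev = result
        model = update_model(model, result) # (6.1): refine the model
    return prev                             # safety cap reached

# toy run: the "segmentation" saturates at 3 after a few model updates
out = iterate_until_stable(
    segment=lambda m: min(m, 3),
    update_model=lambda m, r: m + 1 if m < 3 else m,
    model=0,
)
print(out)  # 3
```

In the real method, `result` would be the per-image forest of subtrees and `model` the pair (Ψ<sub>f</sub>, Ψ<sub>b</sub>) plus the seed constraints.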
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610120407.7A CN105809672B (en) | 2016-03-03 | 2016-03-03 | A kind of image multiple target collaboration dividing method constrained based on super-pixel and structuring |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105809672A true CN105809672A (en) | 2016-07-27 |
CN105809672B CN105809672B (en) | 2019-09-13 |
Family
ID=56466027
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610120407.7A Expired - Fee Related CN105809672B (en) | 2016-03-03 | 2016-03-03 | A kind of image multiple target collaboration dividing method constrained based on super-pixel and structuring |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105809672B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101756696A (en) * | 2009-12-31 | 2010-06-30 | 中国人民解放军空军总医院 | Multiphoton skin lens image automatic analytical system and method for diagnosing malignant melanoma by using same system |
JP2014149563A (en) * | 2013-01-31 | 2014-08-21 | Akita Univ | Frame division device and frame division program |
CN105046714A (en) * | 2015-08-18 | 2015-11-11 | 浙江大学 | Unsupervised image segmentation method based on super pixels and target discovering mechanism |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10354392B2 (en) | 2017-01-24 | 2019-07-16 | Beihang University | Image guided video semantic object segmentation method and apparatus |
CN106875406A (en) * | 2017-01-24 | 2017-06-20 | 北京航空航天大学 | The video semanteme object segmentation methods and device of image guiding |
CN106875406B (en) * | 2017-01-24 | 2020-04-14 | 北京航空航天大学 | Image-guided video semantic object segmentation method and device |
CN107103326A (en) * | 2017-04-26 | 2017-08-29 | 苏州大学 | The collaboration conspicuousness detection method clustered based on super-pixel |
CN107103326B (en) * | 2017-04-26 | 2020-06-02 | 苏州大学 | Collaborative significance detection method based on super-pixel clustering |
CN107256412A (en) * | 2017-05-26 | 2017-10-17 | 东南大学 | A kind of figure building method based on many human eye perceptual grouping characteristics |
CN107256412B (en) * | 2017-05-26 | 2019-07-12 | 东南大学 | A kind of figure building method based on more human eye perceptual grouping characteristics |
CN107527348A (en) * | 2017-07-11 | 2017-12-29 | 湖州师范学院 | Conspicuousness detection method based on multi-scale division |
CN107527348B (en) * | 2017-07-11 | 2020-10-30 | 湖州师范学院 | Significance detection method based on multi-scale segmentation |
CN107610133A (en) * | 2017-08-28 | 2018-01-19 | 昆明理工大学 | A kind of multiple target image of clothing cooperates with dividing method |
CN107610133B (en) * | 2017-08-28 | 2020-08-25 | 昆明理工大学 | Multi-target garment image collaborative segmentation method |
CN107909079A (en) * | 2017-10-11 | 2018-04-13 | 天津大学 | One kind collaboration conspicuousness detection method |
CN107909079B (en) * | 2017-10-11 | 2021-06-04 | 天津大学 | Cooperative significance detection method |
CN107909576A (en) * | 2017-11-22 | 2018-04-13 | 南开大学 | Indoor RGB D method for segmenting objects in images based on support semantic relation |
CN107909576B (en) * | 2017-11-22 | 2021-06-25 | 南开大学 | Indoor RGB-D image object segmentation method based on support semantic relation |
CN108305258A (en) * | 2018-01-31 | 2018-07-20 | 成都快眼科技有限公司 | A kind of superpixel segmentation method, system and storage device based on minimum spanning tree |
CN108305258B (en) * | 2018-01-31 | 2022-07-26 | 成都快眼科技有限公司 | Super-pixel segmentation method, system and storage device based on minimum spanning tree |
CN111223118A (en) * | 2018-11-27 | 2020-06-02 | 富士通株式会社 | Image processing apparatus, image processing method, and computer-readable recording medium |
CN109559321A (en) * | 2018-11-28 | 2019-04-02 | 清华大学 | A kind of sonar image dividing method and equipment |
CN109934235A (en) * | 2019-03-20 | 2019-06-25 | 中南大学 | A kind of unsupervised abdominal CT sequence image multiple organ automatic division method simultaneously |
CN110598711A (en) * | 2019-08-31 | 2019-12-20 | 华南理工大学 | Target segmentation method combined with classification task |
CN112164087A (en) * | 2020-10-13 | 2021-01-01 | 北京无线电测量研究所 | Super-pixel segmentation method and device based on edge constraint and segmentation boundary search |
CN112164087B (en) * | 2020-10-13 | 2023-12-08 | 北京无线电测量研究所 | Super-pixel segmentation method and device based on edge constraint and segmentation boundary search |
Also Published As
Publication number | Publication date |
---|---|
CN105809672B (en) | 2019-09-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105809672A (en) | Super pixels and structure constraint based image's multiple targets synchronous segmentation method | |
CN110298404B (en) | Target tracking method based on triple twin Hash network learning | |
CN107229757B (en) | Video retrieval method based on deep learning and Hash coding | |
CN113221905B (en) | Semantic segmentation unsupervised domain adaptation method, device and system based on uniform clustering and storage medium | |
WO2023138300A1 (en) | Target detection method, and moving-target tracking method using same | |
CN110532920B (en) | Face recognition method for small-quantity data set based on FaceNet method | |
CN107633226B (en) | Human body motion tracking feature processing method | |
CN106845430A (en) | Pedestrian detection and tracking based on acceleration region convolutional neural networks | |
US10262214B1 (en) | Learning method, learning device for detecting lane by using CNN and testing method, testing device using the same | |
CN113033520B (en) | Tree nematode disease wood identification method and system based on deep learning | |
Xia et al. | Loop closure detection for visual SLAM using PCANet features | |
CN105046714A (en) | Unsupervised image segmentation method based on super pixels and target discovering mechanism | |
CN105243139A (en) | Deep learning based three-dimensional model retrieval method and retrieval device thereof | |
CN110222718B (en) | Image processing method and device | |
CN105740915A (en) | Cooperation segmentation method fusing perception information | |
CN112528845B (en) | Physical circuit diagram identification method based on deep learning and application thereof | |
CN105975932A (en) | Gait recognition and classification method based on time sequence shapelet | |
CN113592894B (en) | Image segmentation method based on boundary box and co-occurrence feature prediction | |
Sheng et al. | Vehicle detection and classification using convolutional neural networks | |
CN113705596A (en) | Image recognition method and device, computer equipment and storage medium | |
CN115018039A (en) | Neural network distillation method, target detection method and device | |
CN117252904B (en) | Target tracking method and system based on long-range space perception and channel enhancement | |
CN114219936A (en) | Object detection method, electronic device, storage medium, and computer program product | |
CN113553975A (en) | Pedestrian re-identification method, system, equipment and medium based on sample pair relation distillation | |
CN110287970B (en) | Weak supervision object positioning method based on CAM and covering |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20190913 |