CN105809672B - Image multi-target co-segmentation method based on superpixels and structured constraints - Google Patents


Info

Publication number
CN105809672B
CN105809672B (application CN201610120407.7A)
Authority
CN
China
Prior art keywords
superpixel, segmentation, subtree, target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201610120407.7A
Other languages
Chinese (zh)
Other versions
CN105809672A (en)
Inventor
于慧敏
杨白
汪东旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201610120407.7A
Publication of CN105809672A
Application granted
Publication of CN105809672B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411: Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines

Abstract

The invention discloses an unsupervised multi-target co-segmentation method for images, based on superpixels and structured constraints. Given a set of images that share common objects, where each image may contain several such objects, the method segments the common objects accurately. First, the input image set is pre-segmented to obtain over-segmented images. Then, all superpixels are classified into foreground and background by a target-detection mechanism, and a foreground/background classifier is learned. Finally, based on the classifier output, the foreground targets are modeled; a forest-model hypothesis together with a combinatorial optimization algorithm under tree-graph constraints completes the accurate segmentation of the targets. Compared with similar algorithms, the invention improves segmentation precision through the new forest model and its solving procedure and through the tree-constrained combinatorial optimization, and adapts to a variety of complex scenes.

Description

Image multi-target co-segmentation method based on superpixels and structured constraints
Technical field
The present invention relates to an image multi-target co-segmentation method based on superpixels and structured constraints, applicable to fields such as multi-target co-segmentation of images, object segmentation in sports pictures, and picture classification and recognition.
Background technique
In computer vision, image segmentation is a basic and classical problem whose solution assists numerous other image processing tasks such as target recognition and object classification. In practical applications, fields such as intelligent surveillance, medical diagnosis, robotics and intelligent machines, industrial automation, and even military guidance are closely tied to image segmentation. Thanks to the Internet, it is now very easy to obtain large numbers of pictures containing the same object or objects of the same category, and automatically distinguishing and segmenting the common objects of interest from such pictures is the main purpose of this study. Targets of interest can be segmented using an image's bottom-level cues (color, texture, etc.), but bottom-level image information alone cannot yield the desired segmentation result; implicit information shared across pictures helps identify which common objects need to be recognized. The task of segmenting the common objects of interest from multiple pictures containing the same object or same-category objects is called co-segmentation. Co-segmentation has become a popular research theme in recent years, with a growing body of work. Surveying the research and applications in this field, however, several technical problems remain:
1) Existing methods mainly exploit bottom-level features such as color and shape, ignoring learnable high-level features of superpixels and the structured constraints among objects in multi-target scenes;
2) Current mainstream algorithms are mostly designed for single-target segmentation; for multi-target segmentation the results are often unsatisfactory, and no targeted optimization has been made;
3) Most methods scale poorly and cannot handle large databases.
These problems have hampered the wide application of co-segmentation in the multimedia field; developing a method suited to multi-target co-segmentation therefore has high application value.
Summary of the invention
To solve the problems in the prior art, the invention discloses an image multi-target co-segmentation method based on superpixels and structured constraints. The method is suited to segmenting common objects in the multi-target setting. A foreground/background classifier is obtained by combining a target-detection mechanism with learning, giving the algorithm better scalability. The proposed forest model and the iterative segmentation algorithm under tree-graph structured constraints effectively solve the combinatorial optimization energy model, making multi-target segmentation more accurate and greatly improving computational efficiency.
The invention adopts the following technical scheme: an image multi-target co-segmentation method based on superpixels and structured constraints, comprising the following steps:
(1) Image pre-segmentation: for each image I_i, i = 1, 2, ..., N in the image data set I = {I_1, ..., I_N} containing common target objects, perform over-segmentation to obtain the superpixel set.
(2) Automatic target discovery: based on each image's superpixel set, compute for every superpixel its saliency value and repeatability value w_im, and combine them into an evaluation score score_im. Superpixels whose score is less than 0.6 × max(score_i) are set as background, and superpixels whose score is greater than or equal to 0.6 × max(score_i) are set as foreground; max(score_i) is the score of the highest-scoring superpixel in the image's superpixel set.
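The thresholding rule of step (2) can be sketched as follows. The exact formula combining saliency and repeatability into score_im is not given in this text, so a simple product is assumed here, and the function name is illustrative:

```python
def discover_targets(saliency, repeatability, ratio=0.6):
    """Sketch of automatic target discovery (step 2).

    saliency, repeatability: per-superpixel values in [0, 1].
    The product used for the score is an assumption; the 0.6 ratio
    is the threshold stated in the text."""
    scores = [s * w for s, w in zip(saliency, repeatability)]
    thresh = ratio * max(scores)
    fg = [m for m, sc in enumerate(scores) if sc >= thresh]  # foreground
    bg = [m for m, sc in enumerate(scores) if sc < thresh]   # background
    return fg, bg
```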
(3) Classifier learning: using the automatic target discovery, split the superpixel set of the training set into foreground and background. Each superpixel is described by a 2004-dimensional feature vector: (a) an 800-dimensional vector-quantized HSV color representation (codebook obtained by k-means clustering); (b) a 1200-dimensional SIFT bag-of-words from multi-scale dense sampling (image patches of side 16, 24, and 32 pixels, with a sampling interval of 3 pixels); (c) 4 binary features describing whether the superpixel touches each of the four image borders. On these features, a standard support vector machine learning method is used to obtain the foreground/background classifier.
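The 2004-dimensional descriptor of step (3) is the concatenation of the three feature groups. A minimal sketch (the helper name is an assumption; the actual SVM training on these vectors is omitted):

```python
def superpixel_feature(hsv_hist, sift_bow, border_flags):
    """Concatenate the three feature groups of step (3) into one descriptor:
    800-dim quantized HSV histogram + 1200-dim SIFT bag-of-words +
    4 binary image-border contact flags = 2004 dimensions."""
    assert len(hsv_hist) == 800
    assert len(sift_bow) == 1200
    assert len(border_flags) == 4
    return list(hsv_hist) + list(sift_bow) + [float(b) for b in border_flags]
```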
(4) Target modeling: based on the classification from step (2), build a target model Ψ_f and a background model Ψ_b for the common target objects in HSV color space. Using the Hellinger distance metric, compute the similarity between each superpixel (or superpixel combination) and the target model, and likewise its similarity to the background model.
The target model Ψ_f is built as follows: transform the original image into HSV color space; uniformly quantize the four color components H, S, V, and "G" of the HSV-space image; count the distribution of the target object over each color component to obtain a histogram distribution, i.e., the target model Ψ_f. In the same way, count the distribution of the background over each color component to obtain the background model Ψ_b. The "G" component holds the quantized color values of pixels whose saturation is below 5%.
The similarities are computed from normalized histograms. Let C be the number of bins after uniform quantization, h_R and h_f the normalized color histograms of the superpixel (or superpixel combination) R and of the target model, and h_R' and h_b the normalized color histograms of R' and of the background model. In its standard form, the Hellinger distance between two normalized histograms h_1 and h_2 is d_H(h_1, h_2) = sqrt(1 − Σ_{c=1..C} sqrt(h_1(c) · h_2(c))), and the similarity between a region and a model decreases with this distance.
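The standard Hellinger distance on normalized histograms, as used in step (4), can be written directly. Treating similarity as one minus the distance is an assumption about how this text converts distance into similarity:

```python
import math

def hellinger(h1, h2):
    """Standard Hellinger distance between two normalized histograms."""
    bc = sum(math.sqrt(a * b) for a, b in zip(h1, h2))  # Bhattacharyya coefficient
    return math.sqrt(max(0.0, 1.0 - bc))

def model_similarity(h_region, h_model):
    # Assumed conversion: similarity decreases as the Hellinger distance grows.
    return 1.0 - hellinger(h_region, h_model)
```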
(5) Superpixel-based segmentation: using the target model Ψ_f and the background model Ψ_b, reclassify the superpixels into foreground and background with a combinatorial optimization algorithm, obtaining the final segmentation of the target objects. A forest-model hypothesis is proposed: each superpixel is assumed to correspond to a vertex. For single-target segmentation, the final result consists of several adjoining superpixels and can be expressed as a subtree of the adjacency graph; for multi-target segmentation, the final result can be expressed as a forest formed by several subtrees of the adjacency graph. In summary, the final segmentation result is assumed to be a forest formed by subtrees of the adjacency graph, and is determined by building the adjacency graph and inferring its subtree set. The specific implementation process is as follows:
(5.1) Build the adjacency graph: assume each superpixel in the image corresponds to a vertex; two adjacent superpixels are connected by an edge, thus forming the adjacency graph. The final target segmentation result is assumed to be the forest formed by the subtrees of the adjacency graph it contains;
(5.2) Establish and solve the numerical model: build a numerical model that converts the target segmentation problem into the solution of a combinatorial optimization problem, as follows:
When R is a superpixel or superpixel combination in the foreground, and when R' is a superpixel or superpixel combination in the background, the constraint requires that any superpixel R belongs to exactly one of foreground and background. By derivation, solving for the segmentation result reduces to solving for the optimal subtree set, and finding the optimal subtree set requires first estimating the maximum spanning tree;
(5.3) Derive the maximum spanning tree: obtain all possible candidate subtree sets by beam search; based on the candidate subtree set, obtain the maximum spanning tree by maximum likelihood estimation. The derivation is as follows:
Over the set of all potential spanning trees and the data likelihood probability, the final expression runs over the candidate subtree set: for each subtree C_q, δ(·) is the indicator function and δ((x, y) ∈ C_q) indicates whether edge (x, y) belongs to subtree C_q, weighted by the similarity of subtree C_q to the target model; P(x, y) denotes the probability of edge (x, y) and is replaced by its maximum likelihood estimate. From this formula the maximum likelihood estimate of the maximum spanning tree is obtained.
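Once edge probabilities (or weights) have been estimated, the maximum spanning tree itself can be computed with a classical greedy algorithm. The sketch below uses Prim's method on explicit edge weights and stands in for the maximum likelihood estimation described above:

```python
def maximum_spanning_tree(n, edges):
    """Maximum-weight spanning tree over vertices 0..n-1 (Prim's algorithm,
    taking the heaviest edge crossing the cut at each step).
    edges: list of (u, v, weight) tuples."""
    adj = {u: [] for u in range(n)}
    for u, v, w in edges:
        adj[u].append((w, u, v))
        adj[v].append((w, v, u))
    in_tree = {0}
    frontier = sorted(adj[0])  # ascending; pop() takes the heaviest
    tree = []
    while len(in_tree) < n and frontier:
        w, u, v = frontier.pop()
        if v in in_tree:
            continue  # stale edge: both endpoints already in the tree
        in_tree.add(v)
        tree.append((u, v, w))
        frontier.extend(e for e in adj[v] if e[2] not in in_tree)
        frontier.sort()
    return tree
```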
(5.4) Search the segmentation subtree set: from the maximum likelihood estimate of the maximum spanning tree, search within it using dynamic programming to obtain the optimal subtree set. The specific implementation steps are:
(5.4.1) For image I_i, apply the foreground/background classifier to the superpixel set to obtain the set of seed superpixels classified as foreground. The seed superpixel set consists of discrete seed superpixels and is first sorted by each seed's similarity to the target model;
(5.4.2) Choose the superpixel s_1 closest to the target model as the start node; from it, infer the maximum spanning tree and obtain the corresponding optimal subtree and its segmentation result. Judge the similarity of this segmentation result to the target model: if the similarity satisfies the condition, the segmentation result is considered valid; otherwise set it to the empty set and feed the erroneous seed superpixels contained in the segmented region back to the seed set for deletion and update;
(5.4.3) Traverse the seed set: check whether any seed superpixel s_k lies outside the regions of the segmentation results corresponding to the previous optimal subtrees; if so, repeat the steps above to obtain its segmentation result, judge its similarity to the target model in the same way, and update the segmentation results and the seed superpixel set;
(5.4.4) After traversing the entire seed superpixel set, the final segmentation result for image I_i and the updated seed superpixel set are obtained. From this information, the target model and the seed superpixel constraints are updated, so that the model's estimate better tracks the variations present in the real scene and erroneous seed superpixels are removed; the next iteration then begins.
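Steps (5.4.1) through (5.4.4) amount to visiting seeds in decreasing order of similarity to the target model and growing one subtree per seed not yet covered by an earlier segment. The sketch below replaces the dynamic programming search with simple region growing on the spanning tree; the similarity threshold and all names are assumptions:

```python
def segment_from_seeds(tree_adj, seeds, fg_sim, threshold=0.5):
    """Sketch of the seed traversal in step (5.4).

    tree_adj: node -> list of neighbours in the maximum spanning tree.
    seeds: candidate seed superpixels (classified as foreground).
    fg_sim: node -> similarity to the target model."""
    segments, covered = [], set()
    for s in sorted(seeds, key=lambda x: fg_sim[x], reverse=True):
        if s in covered or fg_sim[s] < threshold:
            continue  # seed already explained, or too unlike the target
        region, stack = {s}, [s]
        while stack:  # grow a subtree of sufficiently target-like nodes
            u = stack.pop()
            for v in tree_adj.get(u, []):
                if v not in region and fg_sim.get(v, 0.0) >= threshold:
                    region.add(v)
                    stack.append(v)
        segments.append(region)
        covered |= region
    return segments
```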
(6) Iterative segmentation: update the target model of step (4) with the segmentation result obtained in step (5), then segment again following the method of step (5);
(7) Repeat step (6) until the segmentation result no longer changes, yielding the final segmentation result.
Further, in step (2), the superpixel saliency value is measured as follows: apply a saliency detection technique to the i-th image I_i to obtain the original saliency map φ_i; each superpixel's measurement is then the mean saliency of the pixels it contains:
ρ_im = (1 / area(R_im)) · Σ_{j ∈ R_im} φ_i(j),
where ρ_im denotes the average saliency value of the m-th superpixel R_im in image I_i, φ_i(j) is the saliency value of the j-th pixel, and area(R_im) is the number of pixels R_im contains.
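The per-superpixel saliency mean above can be computed directly from the saliency map; a minimal sketch with assumed names:

```python
def superpixel_saliency(saliency_map, region):
    """Mean saliency of the pixels a superpixel contains: the sum of the
    per-pixel saliency over the region divided by its pixel count."""
    return sum(saliency_map[j] for j in region) / len(region)
```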
The superpixel repeatability value w_im is measured as follows: for each superpixel, measure the minimum distance to the superpixels of each of the other images, obtaining N−1 minimum distances {d(R_im, I_k)}_{k≠i}; average these N−1 minimum distances to obtain the mean minimum distance. The distance metric d(R_im, I_k) is a weighted combination of the distance between the HSV color feature vectors and the distance between the SIFT bag-of-words feature vectors, where c_im and g_im denote the HSV color feature vector and SIFT bag-of-words feature vector of the m-th superpixel R_im in image I_i, and c_km' and g_km' denote those of the m'-th superpixel R_km' in image I_k;
The repeatability metric weight w_im is then computed from the mean minimum distance by a sigmoid formula, where μ and σ are parameters controlling the shape of the sigmoid function, with μ = 0.5 and σ = 0.1.
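The sigmoid weighting with μ = 0.5 and σ = 0.1 can be sketched as follows; the sign convention is an assumption (a small mean minimum distance means the superpixel recurs in the other images, so it should receive a weight near 1):

```python
import math

def repeatability_weight(mean_min_dist, mu=0.5, sigma=0.1):
    """Sigmoid mapping of the mean minimum cross-image distance to a
    repeatability weight in (0, 1); orientation assumed as described above."""
    return 1.0 / (1.0 + math.exp((mean_min_dist - mu) / sigma))
```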
Further, step (6) is specifically:
(6.1) According to the newest segmentation result, update the foreground target model so that it is closer to the target to be segmented;
(6.2) According to the updated target model, regenerate all possible candidate subtree sets and estimate the maximum spanning tree;
(6.3) According to the updated target model and maximum spanning tree, search again with dynamic programming for the forest formed by the subtree set, obtaining the segmentation result;
(6.4) Judge whether the stopping condition is met, i.e., whether the segmentation result no longer changes. If satisfied, the iteration ends; if not, repeat (6.1)-(6.3).
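The outer loop of steps (6) and (7) is a fixed-point iteration. A schematic version, where segment_once stands for one pass of model update plus re-segmentation (an assumed callable, not part of this text):

```python
def iterate_until_stable(segment_once, init_model, max_iters=50):
    """Sketch of steps (6)-(7): re-estimate the target model from the latest
    segmentation and re-segment until the labelling stops changing.
    segment_once(model) -> (segmentation, updated_model)."""
    prev = None
    model = init_model
    for _ in range(max_iters):
        seg, model = segment_once(model)
        if seg == prev:
            break  # stopping condition: result no longer changes
        prev = seg
    return prev, model
```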
Compared with the prior art, adopting the above technical scheme brings the following technical effects:
1) By combining the automatic target discovery mechanism with learning, the invention obtains a general foreground/background classifier with good scalability to large image data sets.
2) The seed superpixel set obtained from the foreground/background classifier is more accurate than the foreground obtained directly from the automatic target discovery mechanism, which helps improve subsequent segmentation precision.
3) The proposed forest model and tree-graph structured constraints greatly improve segmentation accuracy, especially for multi-target segmentation of objects with complicated fine structure, and provide a new optimization algorithm for finding the optimal solution of the combinatorial optimization equation.
Brief description of the drawings
Fig. 1 is the overall flowchart of the invention;
Fig. 2 is a schematic diagram of the foreground/background classifier learning process;
Fig. 3 is a schematic diagram of the superpixel-based segmentation;
Fig. 4 shows multi-target segmentation results under drastic changes of scale and posture.
Specific embodiment
The technical solution of the invention is described in further detail below through specific embodiments with reference to the accompanying drawings.
The following embodiments are implemented on the premise of the technical scheme of the invention, and detailed implementations and concrete operating processes are given, but the protection scope of the invention is not limited to these embodiments.
This embodiment processes multiple classes of images from the public iCoseg data set. Images of these classes undergo drastic changes in color, illumination, posture, and scale, and some contain multiple common objects, posing a huge challenge to existing segmentation techniques. Fig. 1 is the overall flowchart of the invention, Fig. 2 the classifier learning process, and Fig. 3 the superpixel-based segmentation. This embodiment comprises the following steps:
Steps (1) through (7), together with the saliency and repeatability measurements of step (2) and the iterative refinement of step (6), are carried out exactly as described in the summary above.
Implementation results:
Following the above steps, several pictures from the iCoseg database were chosen for target segmentation. Fig. 4 illustrates multi-target segmentation tests on pictures from iCoseg. As Fig. 4 shows, even when the targets to be segmented undergo drastic changes of scale, posture, and illumination and the images contain multiple targets, the invention still obtains accurate segmentation results.

Claims (2)

1. a kind of image multiple target constrained based on super-pixel and structuring cooperates with dividing method, which is characterized in that comprising following Step:
(1) image pre-segmentation: for the image data set I={ I comprising common objective object1..., INIn every piece image Ii, i=1,2......, N carry out over-segmentation processing, obtain super-pixel collection
(2) automatic target is found: the super-pixel collection based on each imageCount each super-pixelSignificance value With repeated value wim, and calculate super-pixelEvaluation of estimate scoreim, Evaluation of estimate is small In 0.6 × max (scorei) super-pixel be set as background, by evaluation of estimate be more than or equal to 0.6 × max (scorei) super-pixel It is set as prospect;max(scorei) it is super-pixel collectionThe evaluation of estimate of the middle maximum super-pixel of evaluation of estimate;
(3) classifier learns: being found by automatic target by the super-pixel collection in training setIt is divided into foreground and background, for every One super-pixel is described using the characteristic vector of 2004 following dimensions: (a) obtaining 800 Dimension Vector Quantization of Linear Prediction by k mean cluster Hsv color indicates;(b) the SIFT bag of words that multiple dimensioned intensive sampling obtains, multiple dimensioned intensive sampling be 1200 dimension, respectively with 16,24,32 pixels are the image block multi-scale sampling on side, and the sampling interval is 3 pixels;(c) 4 binaryzation features, to describe The contact situation of super-pixel and four boundaries of image;Based on features above, the support vector machines learning method of standard is utilized To obtain preceding background class device;
(4) Target modeling: based on the classification information from step (3), build a target model Ψf and a background model Ψb for the foreground target objects in HSV color space. Using the Hellinger distance, compute the similarity between each superpixel (or superpixel combination) and the target model, and likewise its similarity to the background model;
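Step (4)'s similarity measure, the Hellinger distance between normalized histograms, can be written directly:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two normalized histograms,
    H(p, q) = (1/sqrt(2)) * ||sqrt(p) - sqrt(q)||_2; 0 for identical
    distributions, 1 for distributions with disjoint support."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

d_same = hellinger([0.5, 0.5], [0.5, 0.5])      # 0.0
d_disjoint = hellinger([1.0, 0.0], [0.0, 1.0])  # 1.0
```

A small distance to Ψf (and a large distance to Ψb) then indicates a foreground-like superpixel.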
(5) Superpixel-based segmentation: using the target model Ψf and background model Ψb, re-classify the superpixels into foreground and background with a combinatorial optimization algorithm, so as to obtain the final segmentation of the target objects. A forest model is proposed: each superpixel is assumed to correspond to a vertex; for single-target segmentation, the final result is composed of multiple adjoining superpixels and can be expressed as a subtree of the adjacency graph; for multi-target segmentation, the final result is expressed as a forest formed by multiple subtrees of the adjacency graph. In summary, the final segmentation result is assumed to be a forest of subtrees of the adjacency graph, and it is determined by building the adjacency graph and inferring the subtree set. The specific implementation is as follows:
(5.1) Construct the adjacency graph: each superpixel in the image corresponds to a vertex of the graph, and every pair of adjacent superpixels is connected by an edge, which together form the adjacency graph. The final target segmentation result is assumed to be the forest formed by the subtrees of the adjacency graph that it comprises;
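A minimal sketch of step (5.1): building the superpixel adjacency graph from a label map. Adjacency is taken here as 4-connectivity, which is an assumption; the patent does not specify the connectivity used:

```python
import numpy as np

def adjacency_graph(labels):
    """Build the superpixel adjacency graph: one vertex per superpixel
    label, an edge between every pair of labels that meet along a
    horizontal or vertical pixel boundary (4-connectivity)."""
    edges = set()
    # horizontally adjacent pixel pairs with different labels
    a, b = labels[:, :-1], labels[:, 1:]
    for x, y in zip(a[a != b].ravel(), b[a != b].ravel()):
        edges.add((int(min(x, y)), int(max(x, y))))
    # vertically adjacent pixel pairs with different labels
    a, b = labels[:-1, :], labels[1:, :]
    for x, y in zip(a[a != b].ravel(), b[a != b].ravel()):
        edges.add((int(min(x, y)), int(max(x, y))))
    return sorted(edges)

labels = np.array([[0, 0, 1],
                   [2, 2, 1]])
# superpixel 0 borders 1 and 2; 1 also borders 2 -> three edges
```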
(5.2) Establish the numerical model: formulate target segmentation as a combinatorial optimization problem, solved as follows:
one term applies when R is a superpixel or superpixel combination in the foreground, and another when R′ is a superpixel or superpixel combination in the background; the constraint condition expresses that any one superpixel R can belong to only one of foreground and background. By derivation, solving for the segmentation result can be converted into solving for the optimal subtree set, and obtaining the optimal subtree set in turn requires first estimating the maximum spanning tree;
(5.3) Derive the maximum spanning tree: obtain the set of all possible candidate subtrees by beam search; based on this candidate subtree set, obtain the maximum spanning tree by maximum likelihood estimation. The derivation is as follows:
The set of all potential spanning trees and the data likelihood probability are considered, and the final output is the spanning tree that maximizes this likelihood.
For the candidate subtree set, with Cq denoting a subtree: δ(·) is the indicator function, and δ((x, y) ∈ Cq) indicates whether edge (x, y) belongs to subtree Cq; the similarity of subtree Cq to the target model weights its contribution; P(x, y) denotes the probability of edge (x, y), and its maximum likelihood estimate is formed from these terms. The maximum likelihood estimate of the maximum spanning tree is obtained from the above;
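Once edge probabilities have been estimated, the maximum spanning tree of step (5.3) can be computed with Kruskal's algorithm run over edges in descending weight order. A generic sketch (the edge weights here are illustrative numbers, not the patent's likelihood estimates):

```python
def maximum_spanning_tree(n, edges):
    """Kruskal's algorithm on edges sorted by descending weight:
    returns a maximum spanning tree of a connected graph with n
    vertices. edges is a list of (weight, u, v) triples."""
    parent = list(range(n))

    def find(x):  # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges, reverse=True):  # heaviest edges first
        ru, rv = find(u), find(v)
        if ru != rv:  # adding this edge creates no cycle
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

mst = maximum_spanning_tree(3, [(0.9, 0, 1), (0.5, 0, 2), (0.1, 1, 2)])
# keeps the two heaviest edges: (0, 1, 0.9) and (0, 2, 0.5)
```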
(5.4) Search for the segmentation subtree set: based on the maximum likelihood estimate of the maximum spanning tree, search within it by dynamic programming to obtain the optimal subtree set. The specific implementation steps are as follows:
(5.4.1) For image Ii, apply the foreground/background classifier to its superpixel set to obtain the set of seed superpixels classified as foreground. This seed superpixel set consists of discrete seed superpixels and is first sorted by each seed superpixel's similarity to the target model;
(5.4.2) Choose the superpixel S1 closest to the target model as the start node, infer the maximum spanning tree from it, and obtain the corresponding optimal subtree and its segmentation result. Judge the similarity of this segmentation result to the target model: if the similarity satisfies the condition, the segmentation result is considered valid; otherwise, the result is set to the empty set, and the erroneous seed superpixels contained in the segmented region are fed back to the seed set for deletion and update;
(5.4.3) Traverse the seed superpixel set to check whether any seed superpixel sk lies outside the segmented regions corresponding to the optimal subtrees found so far. If such a seed exists, repeat the steps above to obtain a segmentation result, likewise judge its similarity to the target model and perform the subsequent processing, and update the segmentation result and the seed superpixel set;
(5.4.4) After the whole traversal of the seed superpixel set is completed, the final segmentation result for image Ii and the updated seed superpixel set are obtained. Based on this information, the target model update and the seed superpixel constraint update are completed, so that the model estimate better tracks the variations present in the real scene and erroneous seed superpixels are removed; the next iteration then begins;
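The candidate subtrees of step (5.3) are generated by beam search. Below is a generic sketch of beam search over connected subtrees grown from a seed superpixel; the additive edge-similarity score and the size limit are illustrative assumptions, not the patent's exact criteria:

```python
def beam_search_subtrees(adj, weights, seed, beam_width=3, max_size=4):
    """Grow candidate subtrees (connected vertex sets) from a seed
    vertex, keeping only the beam_width highest-scoring partial
    subtrees at each expansion step. adj maps vertex -> neighbor list;
    weights maps edge (u, v) -> similarity."""
    def w(u, v):
        return weights.get((u, v), weights.get((v, u), 0.0))

    beam = [(0.0, frozenset([seed]))]
    candidates = {frozenset([seed])}
    for _ in range(max_size - 1):
        expansions = []
        for score, tree in beam:
            for u in tree:
                for v in adj.get(u, []):
                    if v not in tree:  # extend the subtree by one vertex
                        expansions.append((score + w(u, v), tree | {v}))
        if not expansions:
            break
        expansions.sort(key=lambda t: -t[0])       # best-scoring first
        beam = expansions[:beam_width]             # prune to the beam
        candidates.update(tree for _, tree in beam)
    return candidates

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
weights = {(0, 1): 0.9, (0, 2): 0.5, (1, 2): 0.1}
cands = beam_search_subtrees(adj, weights, seed=0)
# grows {0} -> {0,1} and {0,2} -> {0,1,2}
```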
(6) Iterative segmentation: update the target model of step (4) according to the segmentation result obtained in step (5), then segment again by the method described in step (5);
(7) Repeat step (6) until the segmentation result no longer changes, so as to arrive at the final segmentation result.
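The outer loop of steps (6)-(7), alternating segmentation and model update until the result stops changing, can be sketched abstractly as a fixed-point iteration (the function names and toy data below are illustrative, not from the patent):

```python
def iterate_until_stable(segment, model_update, initial_model, max_iters=20):
    """Alternate segmentation and model update: re-segment with the
    current target model, refit the model from the result, and stop
    once the segmentation no longer changes between iterations."""
    model = initial_model
    prev = None
    for _ in range(max_iters):
        seg = segment(model)
        if seg == prev:  # converged: the result did not change
            break
        prev = seg
        model = model_update(seg)
    return prev

# toy stand-ins: the "model" is a threshold over fixed data
data = [1, 5, 7]
segment_fn = lambda m: tuple(x for x in data if x >= m)
update_fn = lambda seg: min(seg)
result = iterate_until_stable(segment_fn, update_fn, initial_model=6)
# converges to (7,): only the value 7 survives its own threshold
```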
2. The method according to claim 1, characterized in that step (6) specifically comprises:
(6.1) According to the latest segmentation result, update the foreground target model so that it is closer to the targets to be segmented;
(6.2) According to the updated target model, regenerate the set of all possible candidate subtrees and estimate the maximum spanning tree;
(6.3) According to the updated target model and maximum spanning tree, search again with dynamic programming for the forest composed of the subtree set, and obtain the segmentation result;
(6.4) Judge whether the stopping condition is met, i.e., whether the final segmentation result no longer changes; if it is met, the iteration ends; if not, repeat (6.1)-(6.3).
CN201610120407.7A 2016-03-03 2016-03-03 Multi-target image co-segmentation method based on superpixels and structured constraints Expired - Fee Related CN105809672B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610120407.7A CN105809672B (en) 2016-03-03 2016-03-03 Multi-target image co-segmentation method based on superpixels and structured constraints

Publications (2)

Publication Number Publication Date
CN105809672A CN105809672A (en) 2016-07-27
CN105809672B true CN105809672B (en) 2019-09-13

Family

ID=56466027

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101756696A (en) * 2009-12-31 2010-06-30 中国人民解放军空军总医院 Multiphoton skin lens image automatic analytical system and method for diagnosing malignant melanoma by using same system
CN105046714A (en) * 2015-08-18 2015-11-11 浙江大学 Unsupervised image segmentation method based on super pixels and target discovering mechanism

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014149563A (en) * 2013-01-31 2014-08-21 Akita Univ Frame division device and frame division program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190913