CN104915946B - A saliency-based object segmentation method for severely degraded images - Google Patents

A saliency-based object segmentation method for severely degraded images Download PDF

Info

Publication number
CN104915946B
CN104915946B CN201510069617.3A CN201510069617A CN104915946B
Authority
CN
China
Prior art keywords
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510069617.3A
Other languages
Chinese (zh)
Other versions
CN104915946A (en)
Inventor
刘盛
王建峰
张少波
陈胜勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Shishang Technology Co ltd
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201510069617.3A priority Critical patent/CN104915946B/en
Publication of CN104915946A publication Critical patent/CN104915946A/en
Application granted granted Critical
Publication of CN104915946B publication Critical patent/CN104915946B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)

Abstract

A saliency-based object segmentation method for severely degraded images comprises the following steps: (1) generate an initial salient-object seed from a saliency map; (2) generate a skeleton based on local autocorrelation; (3) generate starting points for extension on the edge of the salient object; (4) initialize an extension direction for each starting point; (5) set the termination condition for extension and perform the extension; (6) mark starting points for which no terminating point is found as degraded-region mark points; (7) repair and smooth the result after extension; (8) repair the object segmentation with superpixels according to the degraded-region mark points. By combining local autocorrelation and superpixels, the present invention effectively improves the accuracy and completeness of the segmentation result, avoids the loss of object regions caused by severe image degradation, and improves the robustness of salient-object segmentation against image degradation.

Description

A saliency-based object segmentation method for severely degraded images
Technical field
The present invention relates to the technical fields of computer vision and image processing, and in particular to saliency-based object segmentation methods.
Background art
Saliency-based object segmentation is an active research topic whose goal is to segment the objects of interest in an image. Detecting salient regions in natural images imitates the human visual system, i.e., it automatically finds the regions of interest that attract human gaze. When people observe a natural image or a real scene, they spend more attention on the whole salient object rather than on a single salient region. Saliency-based object segmentation is therefore a worthwhile research topic, and it is widely used in many higher-level applications, such as object recognition, behavior analysis, and segmentation of objects of interest in images. Nevertheless, when an image is severely degraded, in particular when the foreground exhibits local motion blur and the background exhibits homogeneous motion blur, salient-object segmentation methods based on saliency information often fail. Image degradation severely affects most computer vision applications: it usually reduces the precision of algorithm results and can even cause some algorithms to fail. This problem is very common in saliency-based object segmentation of natural images. The salient object in most degraded images may contain many insufficiently salient parts, which cause ambiguity during object segmentation. As a result, saliency-based object segmentation on degraded images is usually incomplete.
Summary of the invention
To overcome the loss of object regions in segmentation results caused by severe image degradation, the present invention proposes a saliency-based object segmentation method for severely degraded images. It effectively improves the accuracy and completeness of the segmentation result, avoids the loss of object regions caused by severe image degradation, and improves the robustness of salient-object segmentation against image degradation.
The technical solution adopted by the present invention to solve the technical problem is as follows:
A saliency-based object segmentation method for severely degraded images, comprising the following steps:
(1) Generate an initial salient-object seed from the saliency map
In the histogram of the saliency map obtained by the soft image abstraction method, a histogram peak is found within the high-saliency range, which is chosen as (127, 255]. The saliency map is thresholded with a threshold T to obtain a binary segmentation result; the connected components are labeled one by one, with a morphological dilation applied before labeling to preserve the finer salient details of the target object, and the largest region among the labeled connected components is extracted as the initial salient-object seed;
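The seed-generation step above can be sketched as follows. This is a minimal NumPy illustration, not the patented implementation: the soft-image-abstraction saliency map, the histogram-peak search, and the pre-labeling dilation are omitted, and the default threshold T = 127 and the toy 8-connected component labeling are assumptions made for the sketch.

```python
import numpy as np
from collections import deque

def largest_component(binary):
    """Label 8-connected components of a binary mask and return the largest one."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    sizes = {}
    cur = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and labels[sy, sx] == 0:
                cur += 1
                labels[sy, sx] = cur
                q, n = deque([(sy, sx)]), 0
                while q:  # breadth-first flood fill of one component
                    y, x = q.popleft()
                    n += 1
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and labels[ny, nx] == 0:
                                labels[ny, nx] = cur
                                q.append((ny, nx))
                sizes[cur] = n
    if not sizes:
        return np.zeros((h, w), dtype=np.uint8)
    best = max(sizes, key=sizes.get)
    return (labels == best).astype(np.uint8)

def initial_seed(saliency, T=127):
    """Threshold the saliency map and keep the largest connected component as the seed."""
    return largest_component(saliency > T)
```

A small salient blob then survives as the seed while isolated bright pixels are discarded.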
(2) Generate a skeleton based on local autocorrelation
The local autocorrelation of a point (x, y) in a local window w centered on that point is:

f(x, y) = Σ_{(x_k, y_k) ∈ w} [I(x_k, y_k) − I(x_k + Δx, y_k + Δy)]²   (2.1)

where I(x_k, y_k) denotes the gradient at point (x_k, y_k) in the 3 × 3 window w, and Δx and Δy denote the displacements in the x and y directions, respectively;

Formula (2.1) is approximated as:

f(x, y) ≅ Σ_{(x_k, y_k) ∈ w} [I_x(x_k, y_k)Δx + I_y(x_k, y_k)Δy]² = [Δx, Δy] M [Δx, Δy]^T   (2.2)

where

M = Σ_{(x_k, y_k) ∈ w} [ I_x²(x_k, y_k)               I_x(x_k, y_k)I_y(x_k, y_k) ;
                         I_x(x_k, y_k)I_y(x_k, y_k)   I_y²(x_k, y_k) ]   (2.3)
Compute the two eigenvalues of the matrix M obtained at each point; the eigenvector of the smaller eigenvalue corresponds to the major axis of an ellipse, and this direction can be taken as the extension direction of the point. The value computed at each point is mapped into the direction space [0, 180], so that each value represents both the positive and the negative direction along a single line; the value at each pixel of the generated motion-direction map corresponds to the direction of that point;
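As an illustration of this eigen-direction computation, the sketch below builds the matrix M of formula (2.3) over a 3 × 3 window with NumPy and returns the direction of the smaller eigenvalue's eigenvector in [0, 180), together with the 4-way normalization of the next paragraph; the use of `np.gradient` as the gradient operator is an assumption of the sketch.

```python
import numpy as np

def extension_direction(I, y, x):
    """Direction (degrees in [0, 180)) of the eigenvector belonging to the smaller
    eigenvalue of the local autocorrelation matrix M over the 3x3 window at (y, x)."""
    gy, gx = np.gradient(I.astype(float))          # Iy, Ix
    win = (slice(y - 1, y + 2), slice(x - 1, x + 2))
    Ix, Iy = gx[win], gy[win]
    M = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    vals, vecs = np.linalg.eigh(M)                 # eigenvalues in ascending order
    vx, vy = vecs[:, 0]                            # eigenvector of the smaller eigenvalue
    return np.degrees(np.arctan2(vy, vx)) % 180.0

def quantize_direction(theta):
    """Normalize a direction into the 4 bins {0, 45, 90, 135} by even partition."""
    return int(np.round(theta / 45.0)) % 4 * 45
```

For a horizontal intensity ramp the gradient is purely horizontal, so the minor-eigenvalue direction (the ellipse's major axis) is vertical, i.e. 90 degrees.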
Normalize the motion-direction map into the 4 directions {0, 45, 90, 135} by even partition; in the normalized motion-direction map, the direction with the largest pixel count is taken as the background direction. The background direction is then removed entirely, and the remaining largest connected region is taken as the supplementary object seed; the background is searched along the two directions adjacent to the background direction to repair the lost parts, and two neighboring relevant regions are connected together when they are spatially close to each other;
If the initial salient-object seed obtained in step (1) consists of several disconnected regions, the result of step (1) cannot represent the whole skeleton of the target object; in this case, the skeleton based on local autocorrelation is used to refine the initial salient-object seed into a better one. Considering that the skeleton obtained from local autocorrelation is somewhat dilated compared with the target object, a morphological erosion is applied to correct this. When the initial object seed consists of several disconnected regions, the skeleton based on local autocorrelation is merged with the initial object seed as the final salient-object seed; otherwise, only the result of step (1) is used as the final salient-object seed;
Holes in the salient-object seed need to be filled, except those whose area exceeds a percentage threshold of the whole target object;
(3) Generate starting points for extension on the edge of the salient-object seed
Each point on the boundary of the salient-object seed is taken as a starting point of the extension method. A convolution kernel N_s of size 3 × 3 is constructed, and the binary map M_b of the salient-object seed is convolved with N_s; when the value of the central point p of the working window is 1, N_s is used to compute a decision factor d:

n = Σ_{i,j} p_ij · N_sij,   d = 1 if n ≠ 0 and n ≠ 8, otherwise d = 0

where p_ij is the binary value of the salient-object-seed pixel at row i, column j of window w, and N_sij denotes the value at row i, column j of N_s. If n is not equal to 0 or 8, the decision factor d is 1, which means that the points do not all have the same value, and the central point is then taken as a starting point on the edge of the salient-object seed;
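A possible NumPy reading of this starting-point test, assuming N_s is a 3 × 3 kernel of ones with a zero center so that the convolution counts the 8 neighbours of each seed pixel:

```python
import numpy as np

def starting_points(Mb):
    """Boundary starting points of the binary seed map Mb: pixels whose value is 1
    and whose 8 neighbours are neither all 0 nor all 1 (n not in {0, 8})."""
    h, w = Mb.shape
    pad = np.pad(Mb, 1)
    # sum of the 8 neighbours of every pixel (convolution with Ns = ones, zero centre)
    n = sum(pad[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    d = (n != 0) & (n != 8)        # decision factor d
    return (Mb == 1) & d
```

For a solid 3 × 3 seed block, all eight border pixels qualify as starting points while the interior pixel (whose 8 neighbours are all 1) does not.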
(4) Initialize an extension direction for each starting point
Based on the starting-point set {P_s}, the direction of each starting point can be computed. Since each starting point extends along its normal direction, this normal direction needs to be computed; the extension directions are divided into 8 types and labeled with the eight digits 0 to 7. A convolution kernel N_d is constructed to determine the normal direction of each starting point, and the direction label l_d is computed from the quantities n_1 and n_2, where p_ij is the value of the corresponding pixel at row i, column j of the binary map M_b within the convolution window w, p̄_ij denotes the inversion of that pixel value (a 1 becomes 0 and a 0 becomes 1), p_11 is the value of the pixel at row 1, column 1 of window w, n_1 is the number of points, excluding the central point, satisfying {p̄_ij = 1} when p_11 = 1, and n_2 is the number of points, excluding the central point, satisfying {p_ij = 1} when p_11 = 0;
(5) Set the termination condition for extension and perform the extension
The edges of the original image are detected with an adaptive-threshold Canny operator, and an extensible restricted region is defined, obtained by applying three dilations to the salient-object seed;
Within the restricted region, every point on an edge can serve as a terminating point of the extension; according to the extension direction, the termination conditions are divided into two classes, {0, 2, 4, 6} and {1, 3, 5, 7}, each of which contains several cases;
When a starting point finds its terminating point along its normal direction, an extension from the starting point to the corresponding terminating point is carried out, reassigning the points on the extension line between the starting point (x, y) and the terminating point (x_e, y_e). The point set on the extension line is expressed as:

{p(x + d_ix·Δx, y + d_iy·Δy) = 1 | 0 ≤ Δx ≤ |x − x_e|, 0 ≤ Δy ≤ |y − y_e|}   (5.1)

where p(x + d_ix·Δx, y + d_iy·Δy) is the binary value of a point on the extension line, and the reassigned points on the extension line within the restricted region become points of the final target object; the value of d_i is taken from the set {(−1, −1), (0, −1), (1, −1), (1, 0), (1, 1), (0, 1), (−1, 1), (−1, 0)}, corresponding to the 8 direction labels {0, 1, 2, 3, 4, 5, 6, 7} in Fig. 3;
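The extension of formula (5.1) can be sketched as a step-by-step walk along the chosen direction. The sketch below uses (row, column) coordinates, so the patent's (dx, dy) direction table is rewritten in row/column order, and the `max_steps` bound stands in for the restricted region; both are assumptions of the sketch.

```python
import numpy as np

# offsets (dy, dx) for the 8 direction labels {0..7}; the patent's table is given
# as (dx, dy) pairs and is rewritten here in row/column order
DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def extend(seed, edges, start, direction, max_steps=100):
    """Walk from `start` along `direction` until an edge pixel (terminating point)
    is met; set every pixel on the extension line to 1 in a copy of `seed`.
    Returns (new_seed, terminating_point or None)."""
    out = seed.copy()
    dy, dx = DIRS[direction]
    y, x = start
    path = []
    for _ in range(max_steps):
        y, x = y + dy, x + dx
        if not (0 <= y < seed.shape[0] and 0 <= x < seed.shape[1]):
            return out, None               # left the region without a terminating point
        path.append((y, x))
        if edges[y, x]:
            for py, px in path:
                out[py, px] = 1            # reassign the points on the extension line
            return out, (y, x)
    return out, None
```

A starting point that finds no terminating point leaves the seed unchanged, which is exactly the case marked as a degraded-region point in step (6).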
(6) Mark starting points for which no terminating point is found as degraded-region mark points
If no terminating point exists for a starting point within the extension-restricted region, the starting point is labeled as a point in a degraded region, yielding a mark-point set P_l that represents the degraded regions;
(7) Repair and smooth the result after extension
The disconnected holes are sorted by area, and the points in each hole are reassigned as points of the target object, except for holes whose area exceeds the percentage threshold of the whole object; finally, a Gaussian filter is used to smooth the rough boundaries in the result caused by errors.
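The hole-repair part of this step can be illustrated as follows. This pure-NumPy sketch fills only interior background components whose area is at most a fraction of the object area; the default of 20% mirrors the threshold reported in the detailed description, while the 4-connectivity and the flood-fill labeling are assumptions of the sketch (the Gaussian smoothing is omitted).

```python
import numpy as np
from collections import deque

def fill_small_holes(mask, max_frac=0.2):
    """Fill interior holes whose area is at most max_frac of the object area.
    Background pixels reachable from the image border are not holes."""
    h, w = mask.shape
    outside = np.zeros((h, w), dtype=bool)
    q = deque()
    for y in range(h):
        for x in range(w):
            if mask[y, x] == 0 and (y in (0, h - 1) or x in (0, w - 1)):
                outside[y, x] = True
                q.append((y, x))
    while q:  # flood-fill the outside background from the border (4-connectivity)
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] == 0 and not outside[ny, nx]:
                outside[ny, nx] = True
                q.append((ny, nx))
    holes = (mask == 0) & ~outside
    obj_area = int(mask.sum())
    out = mask.copy()
    seen = np.zeros((h, w), dtype=bool)
    for sy in range(h):
        for sx in range(w):
            if holes[sy, sx] and not seen[sy, sx]:
                comp, q2 = [(sy, sx)], deque([(sy, sx)])
                seen[sy, sx] = True
                while q2:  # collect one hole component
                    y, x = q2.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and holes[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            comp.append((ny, nx))
                            q2.append((ny, nx))
                if len(comp) <= max_frac * obj_area:  # keep large holes, per step (7)
                    for y, x in comp:
                        out[y, x] = 1
    return out
```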
(8) Repair the object segmentation with superpixels according to the degraded-region mark points
The superpixels obtained by the simple linear iterative clustering (SLIC) superpixel segmentation method are used to repair the lost degraded regions; when the skeleton based on local autocorrelation has been attached to the salient-object seed, this superpixel-based object fusion is not performed;

In the fusion rule, Obj_f denotes the final target object and Obj_e denotes the result after extension; the parameter a = 0 indicates that the skeleton based on local autocorrelation was not applied to the salient-object seed, and n_li denotes the number of mark points in superpixel S_i, these mark points being obtained from step (6) and belonging to the set P_l.
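Since the fusion formula itself is not reproduced in this text, the sketch below shows one plausible reading of the rule using the symbols defined above: when a = 0, every superpixel whose mark-point density n_li / |S_i| exceeds a threshold is added to the extended result. The density threshold `tau` and the exact fusion condition are assumptions; the SLIC segmentation itself is taken as a given label map.

```python
import numpy as np

def fuse_superpixels(obj_e, sp_labels, mark_points, a=0, tau=0.1):
    """Hedged sketch of step (8): when the local-autocorrelation skeleton was not
    used (a == 0), add every superpixel S_i whose degraded-mark-point density
    n_li / |S_i| exceeds tau to the extended result Obj_e."""
    obj_f = obj_e.copy()
    if a != 0:
        return obj_f                    # skeleton already attached: skip the fusion
    for lab in np.unique(sp_labels):
        region = sp_labels == lab
        n_li = sum(region[y, x] for y, x in mark_points)   # mark points inside S_i
        if n_li / region.sum() > tau:
            obj_f[region] = 1
    return obj_f
```

A superpixel crowded with degraded-region mark points is merged into the final object, while superpixels with few or no mark points are left untouched.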
The technical concept of the present invention is as follows: construct a saliency-based object segmentation method that uses an automatic extension mechanism. It can be roughly divided into two steps: generation of the salient-object seed, and segmentation based on the salient-object seed. In the seed-generation process, an initial salient-object seed is generated from the result of the soft image abstraction method and refined using local autocorrelation consistency, so that it serves as the whole skeleton of the target object. In the segmentation process, based on the observation that edges in degraded regions are not salient, we propose a novel method called "normal extension". Starting from the points on the edge of the salient object, we compute the extension direction corresponding to each starting point and list the termination conditions. With this method, the salient-object seed can be extended to the real boundary of the target object, and the points of the salient-object seed for which no corresponding boundary can be found are labeled. To repair the lost degraded regions, superpixels are fused into our final object segmentation according to the density of the labeled points.
The beneficial effects of the present invention are mainly as follows: the normal-extension method extends the salient-object seed to the real boundary of the target object; combining local autocorrelation and superpixels effectively improves the accuracy and completeness of the segmentation result, avoids the loss of object regions caused by severe image degradation, and improves the robustness of salient-object segmentation against image degradation.
Brief description of the drawings
Fig. 1 shows the two eigenvectors of the matrix M obtained from the local autocorrelation, taken as the major and minor axes of an ellipse; the angle between the major axis of the ellipse (the eigenvector of the smaller eigenvalue) and the horizontal line is taken as the extension direction (motion direction) of the pixel.
Fig. 2 shows the generation of extension starting points on the salient-object seed. The left side shows the convolution kernel used; the right side shows the process, and its result, of applying the kernel to one pixel of the salient-object seed.
The left side of Fig. 3 shows the 8 types of extension direction; on the right side, these directions are labeled with the eight digits 0 to 7.
Detailed description of the embodiments
The present invention is further described below with reference to the accompanying drawings.
Referring to Figs. 1 to 3, a saliency-based object segmentation method for severely degraded images comprises the following specific steps:
(1) Generate an initial salient-object seed from the saliency map. To recover the whole object by extension, we need a good skeleton of the target object. In general, a salient region is more likely to be a foreground region. Therefore, in the histogram of the saliency map obtained by the soft image abstraction method, we find a histogram peak within the high-saliency range. In our method, this range is chosen as (127, 255], corresponding to regions of high saliency. Then the saliency map is thresholded with a threshold T to obtain a binary segmentation result. We label the connected components one by one, applying a morphological dilation before labeling to preserve the finer salient details of the target object, and extract the largest region among the labeled connected components as our rough initial salient-object seed.
(2) Generate a skeleton based on local autocorrelation. The formula below gives the local autocorrelation of a point (x, y) in a local window w centered on that point:

f(x, y) = Σ_{(x_k, y_k) ∈ w} [I(x_k, y_k) − I(x_k + Δx, y_k + Δy)]²   (2.1)

where I(x_k, y_k) denotes the gradient at point (x_k, y_k) in the 3 × 3 window w, and Δx and Δy denote the displacements in the x and y directions, respectively.

This formula can be approximated as:

f(x, y) ≅ Σ_{(x_k, y_k) ∈ w} [I_x(x_k, y_k)Δx + I_y(x_k, y_k)Δy]² = [Δx, Δy] M [Δx, Δy]^T   (2.2)

where

M = Σ_{(x_k, y_k) ∈ w} [ I_x²(x_k, y_k)               I_x(x_k, y_k)I_y(x_k, y_k) ;
                         I_x(x_k, y_k)I_y(x_k, y_k)   I_y²(x_k, y_k) ]   (2.3)
We compute the two eigenvalues of the 2 × 2 matrix M obtained at each point. The eigenvector of the smaller eigenvalue corresponds to the major axis of an ellipse (as shown in Fig. 1), and this direction can be taken as the extension direction (motion direction) of the point. We map the value computed at each point into the direction space [0, 180], so that each value represents both the positive and the negative direction along a single line. The value at each pixel of the generated motion-direction map corresponds to the direction of that point.
We normalize the motion-direction map into the 4 directions {0, 45, 90, 135} by even partition. In the normalized motion-direction map, the direction with the largest pixel count is taken as the background direction. The background direction is then removed entirely, and the remaining largest connected region is taken as the supplementary object seed. Of course, the parts of the object seed whose direction coincides with the background direction are lost as well. We repair the lost parts by searching the background along the two directions adjacent to the background direction; when two neighboring relevant regions are spatially close to each other, we connect them together.
Here we assume that if the initial salient-object seed obtained from step (1) consists of several disconnected regions, the result of step (1) cannot represent the whole skeleton of the target object. In this case, the skeleton based on local autocorrelation is used to refine the initial salient-object seed into a better one. Considering that the skeleton obtained from local autocorrelation is somewhat dilated compared with the target object, we apply a morphological erosion to correct this. When the initial object seed consists of several disconnected regions, we merge the skeleton based on local autocorrelation with the initial object seed as our final salient-object seed; otherwise, only the result obtained in step (1) is used as our final salient-object seed. Before the next step, the holes in the salient-object seed need to be filled, except those whose area exceeds 20% of the whole target object (the 20% threshold was obtained through extensive experiments).
(3) Generate starting points for extension on the edge of the salient object. Our initialization method produces the binary map M_b of the salient-object seed (with values 0 and 1 only). We propose a novel method to extend the salient-object seed to the real boundary of the target object; each point on the boundary of the salient-object seed is taken as a starting point of the extension method. The method for building the starting-point set {P_s} is described below.

We construct a convolution kernel N_s of size 3 × 3 (as shown in Fig. 2) and convolve the salient-object-seed map M_b with N_s. When the value of the central point p of the working window is 1, N_s is used to compute a decision factor d:

n = Σ_{i,j} p_ij · N_sij,   d = 1 if n ≠ 0 and n ≠ 8, otherwise d = 0

where p_ij is the binary value of the salient-object-seed pixel at row i, column j of window w, and N_sij denotes the value at row i, column j of N_s. If n is not equal to 0 or 8 (the decision factor d is 1), it means that the points do not all have the same value, and we then take this central point as a starting point on the edge of the salient-object seed.
(4) Initialize an extension direction for each starting point. Based on the starting-point set {P_s}, the direction of each starting point can be computed. Since each starting point extends along its normal direction, we need to compute this normal direction. We roughly divide the extension directions into 8 types and label them with the eight digits 0 to 7, as shown in Fig. 3. A convolution kernel N_d is constructed to determine the normal direction of each starting point, and the direction label l_d is computed from the quantities n_1 and n_2, where p_ij is the value of the corresponding pixel at row i, column j of the binary map M_b within the 3 × 3 convolution window w, p̄_ij denotes the inversion of that pixel value (a 1 becomes 0 and a 0 becomes 1), p_11 is the value of the pixel at row 1, column 1 of window w, n_1 is the number of points, excluding the central point, satisfying {p̄_ij = 1} when p_11 = 1, and n_2 is the number of points, excluding the central point, satisfying {p_ij = 1} when p_11 = 0.
(5) Set the termination condition for extension and perform the extension. Our method extends the salient-object seed to the real boundary of the target object, so a border is needed to terminate the extension. We use an adaptive-threshold Canny operator to detect the edges of the original image, and define an extensible restricted region, obtained by applying three dilations to the salient-object seed.

Within the restricted region, every point on an edge can serve as a terminating point of the extension. According to the extension directions shown in Fig. 3, the termination conditions can be divided into two classes: {0, 2, 4, 6} and {1, 3, 5, 7}, each of which contains several cases.
When a starting point finds its terminating point along its normal direction, an extension from the starting point to the corresponding terminating point is carried out. We reassign the points on the extension line between the starting point (x, y) and the terminating point (x_e, y_e). The point set on the extension line can be expressed as:

{p(x + d_ix·Δx, y + d_iy·Δy) = 1 | 0 ≤ Δx ≤ |x − x_e|, 0 ≤ Δy ≤ |y − y_e|}   (5.1)

where p(x + d_ix·Δx, y + d_iy·Δy) is the binary value of a point on the extension line, and the reassigned points on the extension line within the restricted region become points of our final target object. The value of d_i can be taken from the set {(−1, −1), (0, −1), (1, −1), (1, 0), (1, 1), (0, 1), (−1, 1), (−1, 0)}, corresponding to the 8 direction labels {0, 1, 2, 3, 4, 5, 6, 7} in Fig. 3.
(6) Mark starting points for which no terminating point is found as degraded-region mark points. Based on the assumption that edges in degraded regions must be non-salient, if no terminating point exists for a starting point within the extension-restricted region, we label that starting point as a point in a degraded region. Finally we obtain a mark-point set P_l that represents the degraded regions.
(7) Repair and smooth the result after extension. Similarly to the final stage of the salient-object-seed generation described in step (2), an operation is needed here to repair the holes in the result after extension. We sort the disconnected holes by area and reassign the points in each hole as points of the target object, except for holes whose area exceeds 20% of the whole object (the threshold of 20% here was obtained through extensive experiments). Finally, we use a Gaussian filter to smooth the rough boundaries in the result caused by errors.
It should be noted that the direction-initialization method in our proposed "normal extension" is not suitable for obtaining the normal direction of a straight line; it only applies to computing the normal direction on the boundary of a large region. Occasional miscalculations caused by the generation of starting points, the initialization of directions, or the computation of termination conditions do not significantly affect our system, which is highly robust.
(8) Repair the object segmentation with superpixels according to the degraded-region mark points. Based on the assumption that edges in the degraded regions of an image are certainly non-salient, we use the superpixels obtained by the simple linear iterative clustering (SLIC) superpixel segmentation method to repair the lost degraded regions. Considering that a salient-object seed combined with local autocorrelation is partly larger than the real target object, we do not perform this superpixel-based object fusion when the skeleton based on local autocorrelation has been attached to the salient-object seed.
In the fusion rule, Obj_f denotes the final target object and Obj_e denotes the result after extension. The parameter a = 0 indicates that the skeleton based on local autocorrelation was not applied to the salient-object seed, and n_li denotes the number of mark points in superpixel S_i; these mark points are obtained from step (6) and belong to the set P_l.
In the saliency-based object segmentation method for severely degraded images of this embodiment, a saliency map of the input image is obtained by the soft image abstraction method, and the main connected component is extracted by thresholding as the rough salient-object seed. The motion direction of each pixel is computed using local autocorrelation, as shown in Fig. 1, and the skeleton based on local autocorrelation is obtained after removing the background. Depending on the quality of the rough salient-object seed, the method decides whether to combine the skeleton based on local autocorrelation to form the final salient-object seed. Starting points for extension are found from the salient-object seed, as shown in Fig. 2. For each starting point, its normal direction is computed as the extension direction; the 8 possible extension directions are shown in Fig. 3. A terminating point is then sought for each starting point, and each starting point that finds a terminating point is connected to it with a line. Starting points that find no terminating point are marked as points in degraded regions. The result after extension is repaired and smoothed. Finally, the object segmentation is repaired with superpixels according to the degraded-region mark points.

Claims (1)

1. A saliency-based object segmentation method for severely degraded images, characterized in that the saliency-based object segmentation method comprises the following steps:
(1) Generate an initial salient-object seed from the saliency map

In the histogram of the saliency map obtained by the soft image abstraction method, a histogram peak is found within the high-saliency range, which is chosen as (127, 255]; the saliency map is thresholded with a threshold T to obtain a binary segmentation result. The connected components are labeled one by one, with a morphological dilation applied before labeling to preserve the finer salient details of the target object, and the largest region among the labeled connected components is extracted as the initial salient-object seed;
(2) Generate a skeleton based on local autocorrelation
The local autocorrelation of a point (x, y) in a local window w centered on that point is:

f(x, y) = Σ_{(x_k, y_k) ∈ w} [I(x_k, y_k) − I(x_k + Δx, y_k + Δy)]²   (2.1)

where I(x_k, y_k) denotes the gradient at point (x_k, y_k) in window w, and Δx and Δy denote the displacements in the x and y directions, respectively;

Formula (2.1) is approximated as:

f(x, y) ≅ Σ_{(x_k, y_k) ∈ w} [I_x(x_k, y_k)Δx + I_y(x_k, y_k)Δy]² = [Δx, Δy] M [Δx, Δy]^T   (2.2)

where

M = Σ_{(x_k, y_k) ∈ w} [ I_x²(x_k, y_k)               I_x(x_k, y_k)I_y(x_k, y_k) ;
                         I_x(x_k, y_k)I_y(x_k, y_k)   I_y²(x_k, y_k) ]   (2.3)
Two eigenvalues are calculated from the matrix M obtained at each point. The eigenvector corresponding to the smaller eigenvalue gives the direction of the long axis of the local autocorrelation ellipse, and this direction is taken as the extension direction of the point. The value calculated at each point is mapped into the direction space [0, 180], where each value represents both the positive and negative directions along one line; each pixel value of the generated motion-direction map corresponds to the direction of one point;
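The eigen-direction computation described above can be sketched as follows. The window size and the use of `numpy.gradient` for the derivatives are assumptions of this sketch, not choices fixed by the patent.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def direction_map(img, win=5):
    """Per-pixel extension direction from the local autocorrelation matrix M
    of eq. (2.3): the eigenvector of the smaller eigenvalue of M gives the
    long axis of the local ellipse, folded into [0, 180) degrees.
    The window size `win` is an assumed parameter.
    """
    img = np.asarray(img, dtype=np.float64)
    Iy, Ix = np.gradient(img)                  # derivatives along rows/cols
    # window sums of the structure-tensor entries (eq. 2.3)
    A = uniform_filter(Ix * Ix, win)
    B = uniform_filter(Ix * Iy, win)
    C = uniform_filter(Iy * Iy, win)
    # smaller eigenvalue of [[A, B], [B, C]] in closed form
    lam_min = 0.5 * ((A + C) - np.sqrt((A - C) ** 2 + 4.0 * B ** 2))
    # its eigenvector satisfies (A - lam_min) * vx + B * vy = 0
    vx, vy = -B, A - lam_min
    return np.degrees(np.arctan2(vy, vx)) % 180.0
```

On a horizontal intensity ramp (gradient purely along columns) the direction of least variation is vertical, so every pixel maps to 90 degrees.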
The motion-direction map is quantized into four directions {0, 45, 90, 135} by even partitioning. After quantization, the direction with the largest count in the motion-direction map is taken as the background direction of the image; the background direction is then removed, and the largest remaining connected region is taken as the supplementary object seed. The background is searched along the background direction in its two proximal directions to repair the missing parts; when two neighbouring relevant regions are spatially close to each other, they are connected together;
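The quantization and background-direction selection can be sketched as below; treating "mean allocation" as nearest-direction binning with wrap-around at 180 degrees is our reading of the text.

```python
import numpy as np

BINS = np.array([0.0, 45.0, 90.0, 135.0])

def quantize_directions(dir_map):
    """Quantize a [0, 180) direction map into the four directions
    {0, 45, 90, 135} (nearest direction, with wrap-around at 180), and
    return the quantized map together with the dominant direction, which
    the method treats as the background direction.
    """
    dir_map = np.asarray(dir_map, dtype=np.float64)
    dist = np.abs(dir_map[..., None] - BINS)
    dist = np.minimum(dist, 180.0 - dist)      # directions live mod 180
    quant = BINS[np.argmin(dist, axis=-1)]
    counts = [(quant == b).sum() for b in BINS]
    background = float(BINS[int(np.argmax(counts))])
    return quant, background
```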
If the initial salient object seed obtained from step (1) consists of several disconnected regions, the result of step (1) cannot represent the whole skeleton of the target object. In this case, the skeleton based on local autocorrelation is used to optimize the initial salient object seed into a better one. Since the skeleton obtained by local autocorrelation is somewhat dilated compared with the target object, a morphological erosion operation is applied to correct this. When the initial salient object seed consists of disconnected regions, the skeleton based on local autocorrelation is merged with the initial salient object seed to form the final salient object seed; otherwise, the result obtained in step (1) alone is taken as the final salient object seed;
Holes in the final salient object seed are filled, except those whose area exceeds a threshold of 1/5 of the whole target object;
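The selective hole filling can be sketched with `scipy.ndimage`; the connectivity used for labelling the holes is an assumption of this sketch.

```python
import numpy as np
from scipy import ndimage

def fill_small_holes(seed, max_frac=0.2):
    """Fill the holes in the final salient object seed, skipping any hole
    whose area is above the 1/5-of-the-object threshold stated in the text.
    `seed` is a binary mask.
    """
    seed = np.asarray(seed, dtype=bool)
    limit = max_frac * seed.sum()              # 1/5 of the object area
    filled = ndimage.binary_fill_holes(seed)
    holes, n = ndimage.label(filled & ~seed)   # each hole as one component
    out = seed.copy()
    for i in range(1, n + 1):
        hole = holes == i
        if hole.sum() <= limit:                # only fill small holes
            out |= hole
    return out
```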
(3) Starting points for extension are generated on the edge of the final salient object seed
Each point on the boundary of the final salient object seed is taken as a starting point of our extension method. A 3×3 convolution kernel N_s is constructed and convolved with the binary map M_b of the final salient object seed; when the value of the centre point p of the operation window is 1, N_s is used to calculate a decision factor d:
$$d=\begin{cases}1,&n\neq 0\ \text{and}\ n\neq 8\\ 0,&n=0\ \text{or}\ n=8\end{cases}\qquad(3.1)$$
$$n=\sum_{p_{ij}\in w}p_{ij}\cdot N_{s\,ij}\qquad(3.2)$$
where p_ij is the binary value of the final salient object seed pixel in row i, column j of window w, and N_sij denotes the value in row i, column j of N_s. If n equals neither 0 nor 8, the decision factor d is 1, indicating that the values of the points are not all the same; the centre point of this convolution window is then taken as one of the starting points on the edge of the final salient object seed;
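The starting-point test of eqs. (3.1)-(3.2) can be sketched as below; reading N_s as an all-ones 3×3 kernel that counts the 8 neighbours is our interpretation of the text.

```python
import numpy as np
from scipy import ndimage

def starting_points(Mb):
    """Compute the decision factor d of eqs. (3.1)-(3.2) at every seed
    pixel of the binary map M_b: d = 1 (a starting point) iff the
    8-neighbour sum n is neither 0 nor 8, i.e. the neighbourhood is not
    uniform.
    """
    Mb = np.asarray(Mb, dtype=np.uint8)
    kernel = np.ones((3, 3), dtype=np.uint8)
    kernel[1, 1] = 0                           # exclude the centre point p
    n = ndimage.convolve(Mb, kernel, mode='constant', cval=0)
    return (Mb == 1) & (n != 0) & (n != 8)
```

For a solid 5×5 block the result is exactly its 16 boundary pixels, since interior pixels have n = 8.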
(4) The direction of extension is initialized for each starting point
Based on the starting point set {P_s}, the direction of each starting point can be calculated. Since each starting point extends along its normal direction, this normal direction must be computed. The extension direction is divided into 8 types, labelled with the eight digits 0 to 7. A convolution kernel N_d is constructed to determine the normal direction of each starting point; the formula for calculating the direction label value l_d is as follows:
$$N_d=\begin{bmatrix}0&1&2\\ 7&0&3\\ 6&5&4\end{bmatrix}\qquad(4.2)$$
where p_ij is the value of the pixel in row i, column j of the binary map M_b within the convolution window w, $\bar{p}_{ij}$ denotes the inversion of that pixel value (1 becomes 0 and 0 becomes 1), p_11 is the value of the pixel in row 1, column 1 of window w, n_1 is the number of points other than the centre point satisfying {p_ij = 1} when p_11 = 1, and n_2 is the number of points other than the centre point satisfying {p_ij = 1} when p_11 = 0;
(5) The end condition of the extension is set and the extension operation is completed
The edges of the original image are detected using an adaptive-threshold Canny operator, and an extensible restricted region is defined, obtained by applying three dilation operations to the final salient object seed;
Within the restricted region, every edge point is a candidate terminating point of the extension. The termination conditions are divided into two classes according to the extension direction: {0, 2, 4, 6} and {1, 3, 5, 7}, each of which covers several cases;
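The restricted region and the candidate terminating points can be sketched as below. The Canny edge map is assumed to be computed externally (the adaptive-threshold detector is not reimplemented here), and the structuring element for the "triple expansion" is an assumption of this sketch.

```python
import numpy as np
from scipy import ndimage

def restricted_region(seed, iterations=3):
    """The extensible restricted region: the final salient object seed
    dilated three times (structuring element assumed; the patent only
    states 'triple expansion')."""
    return ndimage.binary_dilation(np.asarray(seed, dtype=bool),
                                   iterations=iterations)

def candidate_terminators(edges, seed):
    """Edge points are candidate terminating points only inside the
    restricted region."""
    return np.asarray(edges, dtype=bool) & restricted_region(seed)
```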
When a starting point finds its terminating point along its normal direction, an extension operation from the starting point to the corresponding terminating point is performed, re-assigning the points on the extension line between the starting point (x, y) and the terminating point (x_e, y_e). The point set on the extension line is expressed as:
$$\{\,p(x+d_{ix}\Delta x,\;y+d_{iy}\Delta y)=1\;\mid\;0\le\Delta x\le|x-x_e|,\;0\le\Delta y\le|y-y_e|\,\}\qquad(5.1)$$
where p(x + d_ix Δx, y + d_iy Δy) is the binary value of a point on the extension line, indicating that the re-assigned points on the extension line within the restricted region become points of our final target object; the value of d_i is taken from the set {(−1, −1), (0, −1), (1, −1), (1, 0), (1, 1), (0, 1), (−1, 1), (−1, 0)}, corresponding to the eight direction label values {0, 1, 2, 3, 4, 5, 6, 7};
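The re-assignment of eq. (5.1) along one of the eight directions can be sketched as below; reading (x, y) as (row, column) indices is an assumption of this sketch.

```python
import numpy as np

# offsets d_i for the eight direction label values 0..7
DIRS = [(-1, -1), (0, -1), (1, -1), (1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0)]

def extend_line(mask, start, end, label):
    """Re-assign to 1 the points of eq. (5.1) on the extension line from a
    starting point (x, y) to its terminating point (x_e, y_e), stepping by
    the offset d_i of the direction `label`.
    """
    out = np.asarray(mask).copy()
    (x, y), (xe, ye) = start, end
    dix, diy = DIRS[label]
    steps = max(abs(x - xe), abs(y - ye))      # axis-aligned or diagonal
    for t in range(steps + 1):
        out[x + dix * t, y + diy * t] = 1
    return out
```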
(6) Starting points that could not find a terminating point are marked as degraded-region mark points
If no terminating point exists for a starting point within the extension-restricted region, the starting point is marked as a point in a degraded region, yielding a mark point set P_l that represents the degraded regions;
(7) The result after extension is repaired and smoothed
The disconnected holes are sorted by area, and the points inside each hole are re-assigned as target object points, except for holes whose area exceeds 20% of the whole object. Finally, a Gaussian filter is used to smooth the coarse borders in the result caused by errors;
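Step (7) can be sketched as below; the Gaussian sigma and the 0.5 re-binarization threshold are assumptions of this sketch, as the patent does not fix them.

```python
import numpy as np
from scipy import ndimage

def repair_and_smooth(result, max_frac=0.2, sigma=1.0):
    """Fill the disconnected holes in area order, skipping any hole above
    20% of the object area, then smooth the coarse borders with a
    Gaussian filter.
    """
    obj = np.asarray(result, dtype=bool)
    limit = max_frac * obj.sum()
    filled = ndimage.binary_fill_holes(obj)
    holes, n = ndimage.label(filled & ~obj)
    areas = ndimage.sum(np.ones(obj.shape), holes, index=range(1, n + 1))
    for i in np.argsort(areas):                # smallest holes first
        if areas[i] <= limit:
            obj = obj | (holes == i + 1)
    smooth = ndimage.gaussian_filter(obj.astype(np.float64), sigma)
    return smooth > 0.5
```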
(8) The object segmentation is repaired with superpixels according to the degraded-region mark points
The lost degraded regions are repaired using the superpixels obtained by the simple linear iterative clustering (SLIC) superpixel segmentation method. When the skeleton based on local autocorrelation has been merged into the final salient object seed, the superpixel-based object fusion process is not performed:
$$Obj_f=\begin{cases}Obj_e+\sum_{S_i\in S,\;n_{li}>5}S_i,&a=0\\ Obj_e,&a=1\end{cases}\qquad(8.1)$$
where Obj_f denotes the final target object and Obj_e denotes the result obtained after the extension; the parameter a = 0 indicates that the skeleton based on local autocorrelation was not applied to the final salient object seed; n_li denotes the number of mark points in superpixel S_i, where these mark points are obtained from step (6) and belong to the set P_l.
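The fusion rule of eq. (8.1) can be sketched as below; the superpixel label image is assumed to come from an external SLIC implementation rather than being computed here.

```python
import numpy as np

def fuse_superpixels(obj_e, sp_labels, mark_points, a, min_marks=5):
    """Eq. (8.1): when a == 0 (the skeleton was not applied to the seed),
    every superpixel S_i containing more than `min_marks` degraded-region
    mark points from P_l is added to the extended result Obj_e; when
    a == 1, Obj_f is Obj_e unchanged. `sp_labels` is an integer superpixel
    label image.
    """
    obj_f = np.asarray(obj_e, dtype=bool).copy()
    if a == 1:
        return obj_f
    marks = np.asarray(mark_points, dtype=bool)
    for lab in np.unique(sp_labels):
        n_li = np.count_nonzero(marks & (sp_labels == lab))
        if n_li > min_marks:                   # n_li > 5 in eq. (8.1)
            obj_f |= (sp_labels == lab)
    return obj_f
```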
CN201510069617.3A 2015-02-10 2015-02-10 A kind of object segmentation methods based on conspicuousness suitable for serious degraded image Active CN104915946B (en)


