CN103413332A - Image segmentation method based on two-channel texture segmentation active contour model - Google Patents


Info

Publication number
CN103413332A
Authority
CN
China
Prior art keywords
phi
image
gray
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013103713364A
Other languages
Chinese (zh)
Other versions
CN103413332B (en)
Inventor
许刚
马爽
史巍
刘坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North China Electric Power University
Original Assignee
North China Electric Power University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North China Electric Power University filed Critical North China Electric Power University
Priority to CN201310371336.4A priority Critical patent/CN103413332B/en
Publication of CN103413332A publication Critical patent/CN103413332A/en
Application granted granted Critical
Publication of CN103413332B publication Critical patent/CN103413332B/en
Expired - Fee Related
Anticipated expiration

Abstract

The invention discloses an image segmentation method based on a two-channel texture segmentation active contour model, in the technical field of digital image processing. The method comprises the steps of: extracting the gray value, horizontal gradient field, and vertical gradient field of each pixel in an image; computing the texture features corresponding to the gray value, horizontal gradient field, and vertical gradient field of each pixel; obtaining a gray feature channel and an edge feature channel from the texture features; building the two-channel texture segmentation active contour model; and minimizing the texture segmentation model through the evolution of a level set function to complete the segmentation. The method improves algorithm efficiency, avoids the incorrect segmentation caused by gray information, and improves segmentation accuracy.

Description

Image segmentation method based on a two-channel texture segmentation active contour model
Technical field
The invention belongs to the field of digital image processing and in particular relates to an image segmentation method based on a two-channel texture segmentation active contour model.
Background art
Image segmentation, and texture image segmentation in particular, has long been both an important topic and a difficult problem in computer vision and digital image processing. Texture segmentation divides a target image into several non-overlapping regions according to the consistency of texture features within each region. The common approach is to first extract feature information from the image and then segment the image in feature space according to some model. Among these methods, active contour methods based on level set theory have attracted researchers' attention because they automatically handle the splitting and merging of the evolving curve, and they are widely used in texture segmentation.
Gabor filtering and the structure tensor method are the most representative techniques for texture feature extraction. Structure tensor feature extraction usually solves nonlinear diffusion equations iteratively with the additive operator splitting (AOS) scheme; the thesis "AOS algorithm for image processing based on the ROF model and the C-V model" (Huang Chengqi, master's thesis, Jilin University, 2008, pp. 12-15) describes the AOS solution procedure in detail. Gabor filtering uses filter banks of different orientations and frequency bands to obtain a texture description with sufficiently distinctive features. Typically, a Gabor filter bank first extracts a multi-dimensional feature vector group from the texture image, and an active contour model, such as the multi-channel C-V (Chan-Vese) model, then segments the image according to the mean values of each feature image inside and outside the curve. In addition, texture edge detection operators based on the Beltrami framework have been incorporated into such models, improving the segmentation accuracy for texture images to some extent.
However, Gabor filtering is computationally cumbersome and produces a large amount of redundant information, making the algorithm overly complex, and the C-V model cannot handle images with pronounced texture structure well. The structure tensor method based on anisotropic diffusion decomposes an image into a gray channel and gradient channels in the horizontal, vertical, and 45° directions, and applies nonlinear diffusion to each channel to smooth texture detail and extract gray and gradient features. At present, Gaussian fitting, the Wasserstein distance metric, local scale measures, and similar techniques are commonly combined with the structure tensor and achieve fairly good texture segmentation results; however, the structure tensor faces the same problem as Gabor filtering: segmentation is slow when high-dimensional features must be processed. Histogram features and some local information have also been used for image segmentation, but because texture images are complex and varied, every algorithmic model applies only to particular types of texture image, and improving the computational efficiency and segmentation performance of these algorithms has remained a difficult problem.
Summary of the invention
The object of the invention is to propose an image segmentation method based on a two-channel texture segmentation active contour model, in order to overcome the deficiencies of the texture image segmentation methods in common use.
To achieve this goal, the technical scheme proposed by the present invention is an image segmentation method based on a two-channel texture segmentation active contour model, characterized in that the method comprises:
Step 1: extract the gray value, horizontal gradient field, and vertical gradient field of each pixel in the image;
Step 2: compute the texture features corresponding to the gray value, horizontal gradient field, and vertical gradient field of each pixel in the image;
Step 3: obtain a gray feature channel and an edge feature channel from the texture features;
Step 4: build the two-channel texture segmentation active contour model;
Step 5: minimize the texture segmentation model through the evolution of a level set function to complete the segmentation.
Extracting the horizontal gradient field of each pixel in the image specifically uses the formula $I_x^2 = [I(i,j) - I(i+1,j)]^2$ to compute the horizontal gradient field $I_x^2$ of the pixel at row i, column j in the image, where I(i,j) is the gray value of the pixel at row i, column j, and I(i+1,j) is the gray value of the pixel at row i+1, column j.
Extracting the vertical gradient field of each pixel in the image specifically uses the formula $I_y^2 = [I(i,j) - I(i,j+1)]^2$ to compute the vertical gradient field $I_y^2$ of the pixel at row i, column j in the image, where I(i,j) is the gray value of the pixel at row i, column j, and I(i,j+1) is the gray value of the pixel at row i, column j+1.
Said step 3 specifically comprises:
Step 301: from the texture features $u_2(x,y)$ and $u_3(x,y)$ corresponding to the horizontal and vertical gradient fields of each pixel in the image, extract the edge feature $u_{edge}$ using the formula
$$u_{edge}(x,y) = \frac{1}{|\nabla I|}\big(u_2(x,y) + u_3(x,y)\big),$$
where I is the gray value of the pixel and $\nabla I$ is the gradient of the gray value;
Step 302: compute the gray feature channel $u'_1(x,y)$ and the edge feature channel $u'_2(x,y)$ according to the formula
$$u'_i(x,y) = 255\cdot\frac{L_i - \min(L_i)}{\max(L_i) - \min(L_i)},$$
where i = 1, 2, $L_1 = u_1(x,y)$, and $L_2 = u_{edge}(x,y)$.
The two-channel texture segmentation active contour model that is built is:
$$F(\bar{c}_+,\bar{c}_-,\phi(x,y)) = \mu\cdot\int_\Omega \delta(\phi(x,y))\,|\nabla\phi(x,y)| + \alpha\cdot F_1(c_+^1,c_-^1,\phi(x,y)) + \beta\cdot F_2(c_+^2,c_-^2,\phi(x,y)),$$
where $\bar{c}_+ = (c_+^1, c_+^2)$ and $\bar{c}_- = (c_-^1, c_-^2)$ are the means inside and outside the curve C in the gray feature channel and the edge feature channel, respectively;
the curve C is $C = \{(x,y): \phi(x,y) = 0\}$, and $\phi(x,y)$ is the level set function;
μ, α, and β are the parameters of the length term, the gray term, and the edge term, respectively;
Ω is the integration domain, i.e., the image region;
δ(·) is the Dirac function;
$\nabla\phi(x,y)$ is the gradient of the level set function $\phi(x,y)$;
$$F_1(c_+^1,c_-^1,\phi(x,y)) = \frac{1}{1+e^{-a\cdot|c_+^1-c_-^1|}}\cdot\Big[\lambda_+^1\int_\Omega |u'_1(x,y)-c_+^1|^2\,H(\phi(x,y))\,dx\,dy + \lambda_-^1\int_\Omega |u'_1(x,y)-c_-^1|^2\,\big(1-H(\phi(x,y))\big)\,dx\,dy\Big];$$
$c_+^1$ and $c_-^1$ are the gray means of the regions inside and outside the curve C in the gray feature channel, respectively;
$\lambda_+^1$ and $\lambda_-^1$ are the parameters of the gray feature channel, with $\lambda_+^1 > 0$ and $\lambda_-^1 > 0$;
$u'_1(x,y)$ is the value of the gray feature channel at pixel (x,y);
H(·) is the Heaviside function;
a is a constant used to adjust the shape of the function, with a > 0;
$$F_2(c_+^2,c_-^2,\phi(x,y)) = \lambda_+^2\int_\Omega |u'_2(x,y)-c_+^2|^2\,H(\phi(x,y))\,dx\,dy + \lambda_-^2\int_\Omega |u'_2(x,y)-c_-^2|^2\,\big(1-H(\phi(x,y))\big)\,dx\,dy;$$
$c_+^2$ and $c_-^2$ are the gray means of the regions inside and outside the curve C in the edge feature channel, respectively;
$\lambda_+^2$ and $\lambda_-^2$ are the parameters of the edge feature channel, with $\lambda_+^2 > 0$ and $\lambda_-^2 > 0$;
$u'_2(x,y)$ is the value of the edge feature channel at pixel (x,y);
H(·) is the Heaviside function.
Said step 5 comprises:
Step 501: arbitrarily specify an initial closed segmentation curve $C_0$ and compute the initial level set function $\phi_0(x,y)$ corresponding to $C_0$;
Step 502: set the model parameters μ, α, β, $\lambda_+^1$, $\lambda_-^1$, $\lambda_+^2$, $\lambda_-^2$;
Step 503: let k = 0 and compute the means inside and outside the initial closed segmentation curve $C_0$;
the mean inside $C_0$ is computed as:
$$c_+^i(\phi_0(x,y)) = \frac{\int_\Omega u'_i(x,y)\,H(\phi_0(x,y))\,dx\,dy}{\int_\Omega H(\phi_0(x,y))\,dx\,dy};$$
the mean outside $C_0$ is computed as:
$$c_-^i(\phi_0) = \frac{\int_\Omega u'_i(x,y)\,\big(1-H(\phi_0(x,y))\big)\,dx\,dy}{\int_\Omega \big(1-H(\phi_0(x,y))\big)\,dx\,dy};$$
in the two formulas above, i = 1, 2; Ω is the integration domain, i.e., the image region; $u'_1(x,y)$ is the value of the gray feature channel at pixel (x,y); $u'_2(x,y)$ is the value of the edge feature channel at pixel (x,y); H(·) is the Heaviside function;
Step 504: iteratively compute $\phi_{k+1}(x,y)$ according to the formula $\phi_{k+1}(x,y) - \phi_k(x,y) = \Delta t\cdot L(\phi_k(x,y))$, where
$$L(\phi_k(x,y)) = \delta_\epsilon(\phi_k(x,y))\Big[\mu\cdot\frac{\phi_{xx}^k(\phi_y^k)^2 - 2\phi_{xy}^k\phi_x^k\phi_y^k + \phi_{yy}^k(\phi_x^k)^2}{\big((\phi_x^k)^2+(\phi_y^k)^2\big)^{3/2}} - \alpha\Big(\lambda_+^1\cdot\frac{1}{1+e^{-a|c_+^1(\phi_k)-c_-^1(\phi_k)|}}\cdot\big(u'_1(x,y)-c_+^1(\phi_k)\big)^2 - \lambda_-^1\cdot\big(u'_1(x,y)-c_-^1(\phi_k)\big)^2\Big) - \beta\Big(\lambda_+^2\cdot\big(u'_2(x,y)-c_+^2(\phi_k)\big)^2 - \lambda_-^2\cdot\big(u'_2(x,y)-c_-^2(\phi_k)\big)^2\Big)\Big],$$
that is, $\big(\phi_{k+1}(x,y) - \phi_k(x,y)\big)/\Delta t = L(\phi_k(x,y))$; Δt is the set time step and $\delta_\epsilon(\cdot)$ is the regularized Dirac function;
Step 505: extract the zero level set from the level set function $\phi_{k+1}(x,y)$; this zero level set is the evolving curve;
Step 506: determine whether the level set function $\phi_{k+1}(x,y)$ is stable: when the difference between the lengths of the evolving curves obtained in two successive iterations is less than a set threshold, $\phi_{k+1}(x,y)$ is stable and step 507 is executed; otherwise let k = k+1 and jump to step 504;
Step 507: extract the evolving curve from the level set function $\phi_{k+1}(x,y)$ as the segmentation curve and segment the image with it, completing the segmentation.
By extracting the image's edge and gray features as the segmentation feature set, the present invention avoids the cumbersome computation of high-dimensional feature groups and improves algorithm efficiency; through the two-channel texture segmentation C-V model that is built, the edge feature can dominate the driving of the curve evolution in regions of flat gray variation, avoiding the mis-segmentation caused by gray information and improving the accuracy of the algorithm.
Brief description of the drawings
Fig. 1 is the flowchart of the image segmentation method based on the two-channel texture segmentation active contour model;
Fig. 2 is the example texture image used in the simulation of the present invention;
Fig. 3a is the edge feature map extracted from Fig. 2;
Fig. 3b is the gray feature map extracted from Fig. 2;
Fig. 4a is the initial segmentation curve for Fig. 2;
Fig. 4b shows the segmentation process and result of the present invention on Fig. 2;
Fig. 4c shows the segmentation process and result of the basic C-V model on Fig. 2;
Fig. 5a shows the edge features extracted from three other texture images;
Fig. 5b shows the gray features extracted from three other texture images;
Fig. 5c shows the final segmentation results for the three other texture images.
Embodiment
The preferred embodiments are described in detail below with reference to the accompanying drawings. It should be emphasized that the following description is merely exemplary and is not intended to limit the scope or application of the invention.
Fig. 1 is the flowchart of the image segmentation method based on the two-channel texture segmentation active contour model. As shown in Fig. 1, the image segmentation method based on the two-channel texture segmentation active contour model provided by the invention comprises:
Step 1: extract the gray value, horizontal gradient field, and vertical gradient field of each pixel in the image.
The prior art provides many methods for extracting the gray value of a pixel, any of which may be selected. For example, for a color image whose pixels have the three RGB channels, setting R = G = B (the three values equal) yields a gray image: R = G = B = 255 is white, R = G = B = 0 is black, and when R = G = B equals some integer less than 255, it is a gray value.
The horizontal gradient field of each pixel in the image is extracted with the formula:
$$I_x^2 = [I(i,j) - I(i+1,j)]^2 \qquad (1)$$
In formula (1), $I_x^2$ is the horizontal gradient field of the pixel at row i, column j in the image; I(i,j) is the gray value of the pixel at row i, column j; and I(i+1,j) is the gray value of the pixel at row i+1, column j.
The vertical gradient field of each pixel in the image is extracted with the formula:
$$I_y^2 = [I(i,j) - I(i,j+1)]^2 \qquad (2)$$
In formula (2), $I_y^2$ is the vertical gradient field of the pixel at row i, column j in the image; I(i,j) is the gray value of the pixel at row i, column j; and I(i,j+1) is the gray value of the pixel at row i, column j+1.
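As an illustration, a minimal NumPy sketch of formulas (1) and (2) might look as follows; zero-padding the last row and column is an assumption, since the patent does not specify boundary handling:

```python
import numpy as np

def gradient_fields(I):
    """Squared gradient fields of a gray image I, following formulas (1)-(2):
    forward differences along the row index i (x) and column index j (y)."""
    I = np.asarray(I, dtype=np.float64)
    Ix2 = np.zeros_like(I)
    Iy2 = np.zeros_like(I)
    # I_x^2(i,j) = [I(i,j) - I(i+1,j)]^2  (difference along rows)
    Ix2[:-1, :] = (I[:-1, :] - I[1:, :]) ** 2
    # I_y^2(i,j) = [I(i,j) - I(i,j+1)]^2  (difference along columns)
    Iy2[:, :-1] = (I[:, :-1] - I[:, 1:]) ** 2
    return Ix2, Iy2
```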
Step 2: compute the texture features corresponding to the gray value, horizontal gradient field, and vertical gradient field of each pixel in the image.
The present invention builds nonlinear diffusion equations and applies nonlinear diffusion filtering to the gray value, horizontal gradient field, and vertical gradient field of each pixel in the image, then extracts the texture features $u_1$, $u_2$, and $u_3$ corresponding to the smoothed gray value, horizontal gradient field, and vertical gradient field:
$$u = (u_1, u_2, u_3) = TV(I, I_x^2, I_y^2) \qquad (3)$$
In formula (3), TV denotes the nonlinear diffusion equations shown in formula (4):
$$\partial_t u_i = \mathrm{div}\Big(g\Big(\sum_{k=1}^3 |\nabla u_k|^2\Big)\,\nabla u_i\Big) \quad \forall i \qquad (4)$$
In formula (4), i = 1, 2, 3; $u_1$, $u_2$, and $u_3$ are the texture features corresponding to the gray value, horizontal gradient field, and vertical gradient field of each pixel; div(·) is the divergence operator; and g(·) is a monotonically decreasing diffusivity function whose regularization constant ξ is a set value, taken in the present invention as $\xi = e^{-10}$. $u_{ix}$ and $u_{iy}$ are the horizontal and vertical gradient fields of the texture feature $u_i$. Formula (4) is solved iteratively with the additive operator splitting (AOS) scheme to obtain the features $u_1$ to $u_3$; the concrete steps are:
Sub-step 101: initialize $u_i$: let the initial values $u_1^0$, $u_2^0$, and $u_3^0$ of the texture features corresponding to the gray value, horizontal gradient field, and vertical gradient field be the gray value, horizontal gradient field, and vertical gradient field of the pixel, respectively, and let k = 0.
Sub-step 102: let $u_i = u_i^k$.
Sub-step 103: apply Gaussian smoothing with the formula $v_i = K_\sigma * u_i$, where $v_i$ is the image obtained from feature $u_i$ after Gaussian smoothing and $K_\sigma$ is a Gaussian kernel with standard deviation σ.
Sub-step 104: compute the diffusivity $g\big(\sum_k |\nabla v_k|^2\big)$.
Sub-step 105: solve the x-direction equation $v_{ix} = (2I - 4\tau A_x)^{-1} u_i$ and the y-direction equation $v_{iy} = (2I - 4\tau A_y)^{-1} u_i$ with the Thomas algorithm, where I is the identity matrix whose order is the number of image pixels, τ is the time step after discretizing the time domain, and $A_x$ and $A_y$ are the two one-dimensional operators that apply the partial-derivative stencil along the x and y directions, respectively.
Sub-step 106: update $u_i$ according to the formula $u_i = v_{ix} + v_{iy}$.
Sub-step 107: let $u_i^{k+1} = u_i$ and judge whether k ≤ K holds; if k ≤ K, let k = k + 1 and return to sub-step 102; otherwise execute sub-step 108. K is a set value, taken as K = 30 in the present invention.
Sub-step 108: take the current $u_i$ as the texture features.
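The following simplified sketch illustrates the diffusion of the three coupled channels. It replaces the semi-implicit AOS solver of sub-steps 101-108 with a plain explicit scheme for brevity, and the concrete diffusivity g(s) = 1/(1 + s) is an assumption standing in for the monotonically decreasing g of the original (whose exact form is not recoverable here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def texture_features(I, Ix2, Iy2, tau=0.2, K=30, sigma=0.5):
    """Explicit-scheme sketch of the coupled nonlinear diffusion (4).
    The patent solves (4) semi-implicitly with AOS (Thomas tridiagonal
    solves); the diffusivity g(s) = 1/(1 + s) is an assumed stand-in."""
    u = [np.asarray(c, dtype=np.float64).copy() for c in (I, Ix2, Iy2)]
    for _ in range(K):
        # sub-step 103: pre-smooth before computing the diffusivity
        v = [gaussian_filter(c, sigma) for c in u]
        s = sum(d0**2 + d1**2 for d0, d1 in (np.gradient(c) for c in v))
        g = 1.0 / (1.0 + s)                       # sub-step 104 (assumed form of g)
        for i in range(3):                        # one diffusion step per channel
            d0, d1 = np.gradient(u[i])
            div = np.gradient(g * d0, axis=0) + np.gradient(g * d1, axis=1)
            u[i] += tau * div                     # explicit update of du/dt = div(g grad u)
    return u  # u1, u2, u3: smoothed gray / gradient texture features
```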
Step 3: obtain the gray feature channel and the edge feature channel from the texture features.
From the definition of the gray and gradient features of a texture image, $u_1$ retains obvious texture structure information and cannot be used alone for segmentation, while $u_2$ and $u_3$ contain only part of the gradient information of the image: the horizontal (vertical) gradient field takes large values at vertical (horizontal) image edges and small values along the horizontal (vertical) edge direction. Therefore, the edge feature $u_{edge}$ is defined as:
$$u_{edge}(x,y) = \frac{1}{|\nabla I|}\big(u_2(x,y) + u_3(x,y)\big) \qquad (5)$$
In formula (5), I is the gray value of the pixel and $\nabla I$ is the gradient of the gray value. To avoid the influence of differing dynamic ranges, formula (6) unifies the value ranges of the two, giving the gray channel $u'_1(x,y)$ and the edge channel $u'_2(x,y)$:
$$u'_i(x,y) = 255\cdot\frac{L_i - \min(L_i)}{\max(L_i) - \min(L_i)} \qquad (6)$$
In formula (6), i = 1, 2; $L_1 = u_1(x,y)$, $L_2 = u_{edge}(x,y)$; x and y are the horizontal and vertical coordinates of the pixel in the image.
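A short sketch of formulas (5) and (6); the small constant eps guarding the divisions is an assumption:

```python
import numpy as np

def feature_channels(I, u1, u2, u3, eps=1e-8):
    """Formulas (5)-(6): edge feature and min-max rescaling of both
    channels to a common [0, 255] range."""
    d0, d1 = np.gradient(np.asarray(I, dtype=np.float64))
    grad_mag = np.hypot(d0, d1)
    u_edge = (u2 + u3) / (grad_mag + eps)       # formula (5)

    def rescale(L):                             # formula (6)
        return 255.0 * (L - L.min()) / (L.max() - L.min() + eps)

    return rescale(u1), rescale(u_edge)         # u'_1 (gray), u'_2 (edge)
```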
Step 4: build the two-channel texture segmentation active contour model.
The basic multi-channel C-V model is a region-based active contour model that can separate target from background even when there is no obvious boundary. Let the N feature channels of the original image be $u_i$ (i = 1, 2, ..., N), let C be the segmentation curve, and let $c_+^i$ and $c_-^i$ be the means inside and outside the curve C in the i-th channel. The multi-channel C-V energy model can be written as:
$$F(\bar{c}_+,\bar{c}_-,C) = \mu\cdot\mathrm{Length}(C) + \int_{inside(C)}\frac{1}{N}\sum_{i=1}^N \lambda_+^i\,|u_i(x,y)-c_+^i|^2\,dx\,dy + \int_{outside(C)}\frac{1}{N}\sum_{i=1}^N \lambda_-^i\,|u_i(x,y)-c_-^i|^2\,dx\,dy \qquad (7)$$
In formula (7), μ is the length term parameter with μ ≥ 0, and $\lambda_+^i$ and $\lambda_-^i$ are the parameters of the i-th feature channel. The first term is the length of the curve C, which keeps the evolving curve smooth.
The C-V model drives the evolution of curve C by the mean of the energies of all channels. In practice, not every feature helps to find the ideal segmentation curve; in particular, when different texture regions have similar gray values, or the gray difference within the same texture region is large, the gray feature channel will drive an erroneous segmentation.
Suppose the means of the gray feature channel inside and outside the curve C are $c_+^1$ and $c_-^1$, respectively. When the gray difference $|c_+^1 - c_-^1|$ is small, the proportion of the gray energy should be as small as possible, reducing mis-segmentation between texture regions of similar gray; as $|c_+^1 - c_-^1|$ increases, the gray energy should grow gradually to drive the curve C toward the object boundary. Following an analysis of common membership functions, the sigmoid function is used as the adjustment coefficient of the gray energy term. Therefore, on the basis of the C-V model, the gray energy $F_1$ is built and expressed with the level set method, as shown in formula (8):
$$F_1(c_+^1,c_-^1,\phi(x,y)) = \frac{1}{1+e^{-a\cdot|c_+^1-c_-^1|}}\cdot\Big[\lambda_+^1\int_\Omega |u'_1(x,y)-c_+^1|^2\,H(\phi(x,y))\,dx\,dy + \lambda_-^1\int_\Omega |u'_1(x,y)-c_-^1|^2\,\big(1-H(\phi(x,y))\big)\,dx\,dy\Big] \qquad (8)$$
In formula (8), $c_+^1$ and $c_-^1$ are the gray means inside and outside the curve C in the gray feature channel; $\lambda_+^1$ and $\lambda_-^1$ are the parameters of the gray feature channel, with $\lambda_+^1 > 0$ and $\lambda_-^1 > 0$; $u'_1(x,y)$ is the value of the gray feature channel at pixel (x,y); H(·) is the Heaviside function; and a > 0 is a constant used to adjust the shape of the function, with a = 3 a suitable choice. The coefficient $\frac{1}{1+e^{-a\cdot|c_+^1-c_-^1|}}$ is a sigmoid function that varies with $|c_+^1 - c_-^1|$.
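For illustration, the adjustment coefficient is a one-line function of the two gray means (a = 3 as suggested above):

```python
import numpy as np

def gray_energy_weight(c_plus, c_minus, a=3.0):
    """Sigmoid coefficient scaling the gray energy F1: close to 1/2 when
    the inside/outside gray means are similar, approaching 1 as they differ."""
    return 1.0 / (1.0 + np.exp(-a * abs(c_plus - c_minus)))
```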
Let the means of C inside and outside in the edge channel be $c_+^2$ and $c_-^2$, and build the edge energy $F_2$ as:
$$F_2(c_+^2,c_-^2,\phi(x,y)) = \lambda_+^2\int_\Omega |u'_2(x,y)-c_+^2|^2\,H(\phi(x,y))\,dx\,dy + \lambda_-^2\int_\Omega |u'_2(x,y)-c_-^2|^2\,\big(1-H(\phi(x,y))\big)\,dx\,dy \qquad (9)$$
In formula (9), $c_+^2$ and $c_-^2$ are the gray means inside and outside the curve C in the edge feature channel; $\lambda_+^2$ and $\lambda_-^2$ are the parameters of the edge feature channel, with $\lambda_+^2 > 0$ and $\lambda_-^2 > 0$; $u'_2(x,y)$ is the value of the edge feature channel at pixel (x,y); and H(·) is the Heaviside function.
Adding a curve length regularization term, the new texture segmentation active contour model based on the edge and gray feature channels is:
$$F(\bar{c}_+,\bar{c}_-,\phi(x,y)) = \mu\cdot\int_\Omega \delta(\phi(x,y))\,|\nabla\phi(x,y)| + \alpha\cdot F_1(c_+^1,c_-^1,\phi(x,y)) + \beta\cdot F_2(c_+^2,c_-^2,\phi(x,y)) \qquad (10)$$
In formula (10), $\bar{c}_+ = (c_+^1, c_+^2)$ and $\bar{c}_- = (c_-^1, c_-^2)$ are the means inside and outside the curve C in the gray feature channel and the edge feature channel; the curve C is $C = \{(x,y): \phi(x,y) = 0\}$, and $\phi(x,y)$ is the level set function. α and β are the parameters of the gray term and the edge term, with α > 0 and β > 0. Ω is the integration domain, i.e., the image region. δ(·) is the Dirac function, and $\nabla\phi(x,y)$ is the gradient of the level set function $\phi(x,y)$.
In addition, in formulas (8) and (9), H(φ(x,y)) represents the integration region inside(C) of formula (7) and 1 − H(φ(x,y)) represents the integration region outside(C). The regularized forms shown in formulas (11) and (12) are adopted:
$$H_\epsilon(z) = \frac{1}{2}\Big(1 + \frac{2}{\pi}\arctan\frac{z}{\epsilon}\Big),\quad \epsilon\to 0 \qquad (11)$$
$$\delta_\epsilon(z) = \frac{1}{\pi}\,\frac{\epsilon}{\epsilon^2+z^2} \qquad (12)$$
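These regularized functions translate directly into code; the default ε = 1 is an assumption (the patent only requires ε → 0):

```python
import numpy as np

def heaviside_eps(z, eps=1.0):
    """Regularized Heaviside function, formula (11)."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(z / eps))

def dirac_eps(z, eps=1.0):
    """Regularized Dirac delta, formula (12): the derivative of H_eps."""
    return (1.0 / np.pi) * eps / (eps**2 + z**2)
```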
Step 5: minimize the texture segmentation model through the evolution of the level set function to complete the segmentation.
First, with the level set function φ fixed in the model of formula (10), differentiating with respect to the feature channel means $c_+^i$ and $c_-^i$ by the variational method gives the means of the two feature channels inside and outside the curve C:
$$c_+^i(\phi) = \frac{\int_\Omega u'_i(x,y)\,H(\phi(x,y))\,dx\,dy}{\int_\Omega H(\phi(x,y))\,dx\,dy} \qquad (13)$$
$$c_-^i(\phi) = \frac{\int_\Omega u'_i(x,y)\,\big(1-H(\phi(x,y))\big)\,dx\,dy}{\int_\Omega \big(1-H(\phi(x,y))\big)\,dx\,dy} \qquad (14)$$
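A sketch of formulas (13) and (14), using the regularized Heaviside from the sketch above; the tiny denominators guarding against an empty region are an assumption:

```python
import numpy as np

def channel_means(u, phi, eps=1.0):
    """Formulas (13)-(14): mean of feature channel u inside (phi > 0)
    and outside the zero level set of phi."""
    H = heaviside_eps(phi, eps)
    c_plus = (u * H).sum() / (H.sum() + 1e-12)                 # inside mean
    c_minus = (u * (1 - H)).sum() / ((1 - H).sum() + 1e-12)    # outside mean
    return c_plus, c_minus
```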
Then, with $c_+^i$ and $c_-^i$ fixed, minimizing F with respect to φ by deriving the Euler-Lagrange equation of φ gives the curve evolution equation of the model:
$$\frac{\partial\phi}{\partial t} = \delta_\epsilon\Big[\mu\cdot\mathrm{div}\Big(\frac{\nabla\phi}{|\nabla\phi|}\Big) - \alpha\Big(\lambda_+^1\cdot\frac{1}{1+e^{-a|c_+^1-c_-^1|}}\cdot\big(u'_1(x,y)-c_+^1\big)^2 - \lambda_-^1\cdot\big(u'_1(x,y)-c_-^1\big)^2\Big) - \beta\Big(\lambda_+^2\cdot\big(u'_2(x,y)-c_+^2\big)^2 - \lambda_-^2\cdot\big(u'_2(x,y)-c_-^2\big)^2\Big)\Big] \qquad (15)$$
Discretizing the curve evolution equation with the finite difference method, the discrete form of formula (15) obtained by forward differences is:
$$\frac{\phi_{i,j}^{n+1}-\phi_{i,j}^n}{\Delta t} = L(\phi_{i,j}^n)$$
where Δt is the time step and $L(\phi_{i,j}^n)$ is the numerical approximation of the right-hand side of formula (15). The curvature term $\mathrm{div}(\nabla\phi/|\nabla\phi|)$ is expressed as:
$$\mathrm{div}\Big(\frac{\nabla\phi}{|\nabla\phi|}\Big) = \frac{\phi_{xx}\phi_y^2 - 2\phi_{xy}\phi_x\phi_y + \phi_{yy}\phi_x^2}{(\phi_x^2+\phi_y^2)^{3/2}}$$
with:
$$\phi_x = \frac{1}{2h}(\phi_{i+1,j}-\phi_{i-1,j}),\qquad \phi_y = \frac{1}{2h}(\phi_{i,j+1}-\phi_{i,j-1})$$
$$\phi_{xx} = \frac{1}{h^2}(\phi_{i+1,j}+\phi_{i-1,j}-2\phi_{i,j}),\qquad \phi_{yy} = \frac{1}{h^2}(\phi_{i,j+1}+\phi_{i,j-1}-2\phi_{i,j})$$
$$\phi_{xy} = \frac{1}{4h^2}(\phi_{i+1,j+1}-\phi_{i-1,j+1}-\phi_{i+1,j-1}+\phi_{i-1,j-1})$$
where h denotes the grid spacing, commonly h = 1. From this difference representation of the level set function, the discrete form of the curve evolution equation (15) is:
$$\frac{\phi_{i,j}^{n+1}-\phi_{i,j}^n}{\Delta t} = \delta_\epsilon(\phi_{i,j}^n)\Big[\mu\cdot\frac{\phi_{xx}\phi_y^2-2\phi_{xy}\phi_x\phi_y+\phi_{yy}\phi_x^2}{(\phi_x^2+\phi_y^2)^{3/2}} - \alpha\Big(\lambda_+^1\cdot\frac{1}{1+e^{-a|c_+^1(\phi^n)-c_-^1(\phi^n)|}}\cdot\big(u'_1(i,j)-c_+^1(\phi^n)\big)^2 - \lambda_-^1\cdot\big(u'_1(i,j)-c_-^1(\phi^n)\big)^2\Big) - \beta\Big(\lambda_+^2\cdot\big(u'_2(i,j)-c_+^2(\phi^n)\big)^2 - \lambda_-^2\cdot\big(u'_2(i,j)-c_-^2(\phi^n)\big)^2\Big)\Big] \qquad (16)$$
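The discretization above can be sketched with central differences as follows; the periodic boundary handling via np.roll and the small eps regularizing the denominator are assumptions made for brevity:

```python
import numpy as np

def curvature(phi, h=1.0, eps=1e-8):
    """Central-difference curvature div(grad(phi)/|grad(phi)|) following the
    stencils above; axis 0 plays the role of x (the row index i)."""
    px = (np.roll(phi, -1, axis=0) - np.roll(phi, 1, axis=0)) / (2 * h)
    py = (np.roll(phi, -1, axis=1) - np.roll(phi, 1, axis=1)) / (2 * h)
    pxx = (np.roll(phi, -1, axis=0) + np.roll(phi, 1, axis=0) - 2 * phi) / h**2
    pyy = (np.roll(phi, -1, axis=1) + np.roll(phi, 1, axis=1) - 2 * phi) / h**2
    pxy = (np.roll(np.roll(phi, -1, axis=0), -1, axis=1)
           - np.roll(np.roll(phi, 1, axis=0), -1, axis=1)
           - np.roll(np.roll(phi, -1, axis=0), 1, axis=1)
           + np.roll(np.roll(phi, 1, axis=0), 1, axis=1)) / (4 * h**2)
    num = pxx * py**2 - 2 * pxy * px * py + pyy * px**2
    return num / (px**2 + py**2 + eps) ** 1.5
```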
Summarizing the model solution equations above, the concrete steps of the level set evolution for the texture image model are:
Step 501: arbitrarily specify an initial closed segmentation curve $C_0$ and compute the initial level set function $\phi_0(x,y)$ corresponding to $C_0$.
Step 502: set the model parameters μ, α, β, $\lambda_+^1$, $\lambda_-^1$, $\lambda_+^2$, $\lambda_-^2$. Generally μ = 0.2 and the other parameters are taken as 1; the parameter values can be adjusted for different texture images.
Step 503: let k = 0 and compute the means inside and outside the initial closed segmentation curve $C_0$.
The mean inside $C_0$ is computed as:
$$c_+^i(\phi_0(x,y)) = \frac{\int_\Omega u'_i(x,y)\,H(\phi_0(x,y))\,dx\,dy}{\int_\Omega H(\phi_0(x,y))\,dx\,dy}.$$
The mean outside $C_0$ is computed as:
$$c_-^i(\phi_0) = \frac{\int_\Omega u'_i(x,y)\,\big(1-H(\phi_0(x,y))\big)\,dx\,dy}{\int_\Omega \big(1-H(\phi_0(x,y))\big)\,dx\,dy}.$$
In the two formulas above, i = 1, 2; Ω is the integration domain, i.e., the image region; $u'_1(x,y)$ is the value of the gray feature channel at pixel (x,y); $u'_2(x,y)$ is the value of the edge feature channel at pixel (x,y); and H(·) is the Heaviside function.
Step 504: iteratively compute $\phi_{k+1}(x,y)$ according to the formula $\phi_{k+1}(x,y) - \phi_k(x,y) = \Delta t\cdot L(\phi_k(x,y))$, with $L(\phi_k(x,y))$ the right-hand side of formula (16).
Step 505: extract the zero level set from the level set function $\phi_{k+1}(x,y)$; this zero level set is the evolving curve.
Step 506: determine whether the level set function $\phi_{k+1}(x,y)$ is stable: when the difference between the lengths of the evolving curves obtained in two successive iterations is less than a set threshold, $\phi_{k+1}(x,y)$ is stable and step 507 is executed; otherwise let k = k + 1 and jump to step 504.
Step 507: extract the evolving curve from the level set function $\phi_{k+1}(x,y)$ as the segmentation curve and segment the image with it, completing the segmentation.
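Tying the pieces together, a sketch of the evolution loop of steps 501-507, reusing channel_means, curvature, and dirac_eps from the sketches above; the circular initial contour and the values of dt, ε, and the stopping threshold are assumptions, as the patent leaves them open:

```python
import numpy as np

def segment(u1p, u2p, mu=0.2, alpha=1.0, beta=1.0,
            lam=(1.0, 1.0, 1.0, 1.0), a=3.0,
            dt=0.1, eps=1.0, tol=1e-2, max_iter=500):
    """Sketch of the level set evolution (steps 501-507) on the gray
    channel u'_1 and the edge channel u'_2."""
    lp1, lm1, lp2, lm2 = lam                                   # step 502
    rows, cols = u1p.shape
    yy, xx = np.mgrid[:rows, :cols]
    phi = min(rows, cols) / 4.0 - np.hypot(xx - cols / 2.0,
                                           yy - rows / 2.0)    # step 501: circle, phi > 0 inside
    prev_len = np.inf
    for k in range(max_iter):
        c1p, c1m = channel_means(u1p, phi, eps)                # step 503
        c2p, c2m = channel_means(u2p, phi, eps)
        w = 1.0 / (1.0 + np.exp(-a * abs(c1p - c1m)))          # sigmoid gray weight
        force = (mu * curvature(phi)
                 - alpha * (lp1 * w * (u1p - c1p)**2 - lm1 * (u1p - c1m)**2)
                 - beta * (lp2 * (u2p - c2p)**2 - lm2 * (u2p - c2m)**2))
        phi = phi + dt * dirac_eps(phi, eps) * force           # step 504, formula (16)
        g0, g1 = np.gradient(phi)
        cur_len = (dirac_eps(phi, eps) * np.hypot(g0, g1)).sum()  # contour length
        if abs(cur_len - prev_len) < tol:                      # step 506 stability test
            break
        prev_len = cur_len
    return phi >= 0                                            # step 507: region mask
```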
The effect of the present invention is further illustrated by the following simulation:
The method of the present invention was applied to segment the zebra texture image shown in Fig. 2; the extracted edge feature and gray feature are shown in Figs. 3a and 3b, respectively. In the segmentation stage, the method disclosed in the present invention and the basic multi-channel C-V model were compared; the initial segmentation curve, the segmentation process, and the final segmentation curve are given in Fig. 4.
As Figs. 3 and 4 show, in the early stage of curve evolution the present invention is driven mainly by the edge feature, and the evolving curve comes to rest accurately in places with clear edges, such as the zebra's back; in places with weak edge features, such as the zebra's head, tail, and limbs, the gray energy dominates and keeps the curve evolving, correctly extracting the target region. Because the basic C-V model treats all gradient and gray features equally, its curve is affected by gray information during evolution and produces a large amount of mis-segmentation, failing to obtain a correct result.
Fig. 5 gives the feature maps and segmentation results obtained by applying the present invention to several typical texture images; the two feature channels contain sufficient information, and the segmentation model that is built has good segmentation performance. Since texture segmentation is carried out in only two feature channels, the algorithm efficiency is significantly higher than with traditional Gabor filters, structure tensors, and the like.
In summary, the present invention avoids the cumbersome computation of many feature channels; the two-channel texture segmentation model overcomes the segmentation difficulty caused by different texture regions of similar gray, and achieves good segmentation results especially on images with fine background texture and obvious target texture structure. In addition, the present invention can be regarded as an unsupervised method with very strong applicability; it is a very effective texture segmentation method.
The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement that can easily be conceived by those skilled in the art within the technical scope disclosed by the present invention should be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention should be determined by the protection scope of the claims.

Claims (6)

1. An image segmentation method based on a two-channel texture segmentation active contour model, characterized in that the method comprises:
Step 1: extracting the gray value, horizontal gradient field, and vertical gradient field of each pixel in an image;
Step 2: computing the texture features corresponding to the gray value, horizontal gradient field, and vertical gradient field of each pixel in the image;
Step 3: obtaining a gray feature channel and an edge feature channel from the texture features;
Step 4: building the two-channel texture segmentation active contour model;
Step 5: minimizing the texture segmentation model through the evolution of a level set function to complete the segmentation.
2. The method according to claim 1, characterized in that extracting the horizontal gradient field of each pixel in the image specifically uses the formula $I_x^2 = [I(i,j) - I(i+1,j)]^2$ to compute the horizontal gradient field $I_x^2$ of the pixel at row i, column j in the image, where I(i,j) is the gray value of the pixel at row i, column j, and I(i+1,j) is the gray value of the pixel at row i+1, column j.
3. The method according to claim 1, characterized in that extracting the vertical gradient field of each pixel in the image specifically uses the formula $I_y^2 = [I(i,j) - I(i,j+1)]^2$ to compute the vertical gradient field $I_y^2$ of the pixel at row i, column j in the image, where I(i,j) is the gray value of the pixel at row i, column j, and I(i,j+1) is the gray value of the pixel at row i, column j+1.
4. The method according to claim 2 or 3, characterized in that said step 3 specifically comprises:
Step 301: from the texture features $u_2(x,y)$ and $u_3(x,y)$ corresponding to the horizontal and vertical gradient fields of each pixel in the image, extracting the edge feature $u_{edge}$ using the formula
$$u_{edge}(x,y) = \frac{1}{|\nabla I|}\big(u_2(x,y) + u_3(x,y)\big),$$
where I is the gray value of the pixel and $\nabla I$ is the gradient of the gray value;
Step 302: computing the gray feature channel $u'_1(x,y)$ and the edge feature channel $u'_2(x,y)$ according to the formula
$$u'_i(x,y) = 255\cdot\frac{L_i - \min(L_i)}{\max(L_i) - \min(L_i)},$$
where i = 1, 2, $L_1 = u_1(x,y)$, and $L_2 = u_{edge}(x,y)$.
5. The method according to claim 4, characterized in that the two-channel texture segmentation active contour model that is built is:
$$F(\bar{c}_+,\bar{c}_-,\phi(x,y)) = \mu\cdot\int_\Omega \delta(\phi(x,y))\,|\nabla\phi(x,y)| + \alpha\cdot F_1(c_+^1,c_-^1,\phi(x,y)) + \beta\cdot F_2(c_+^2,c_-^2,\phi(x,y)),$$
wherein $\bar{c}_+ = (c_+^1, c_+^2)$ and $\bar{c}_- = (c_-^1, c_-^2)$ are the means inside and outside the curve C in the gray feature channel and the edge feature channel, respectively;
the curve C satisfies $C = \{(x,y): \phi(x,y) = 0\}$, and $\phi(x,y)$ is the level set function;
μ, α, and β are the parameters of the length term, the gray term, and the edge term, respectively;
Ω is the integration domain, i.e., the image region;
δ(·) is the Dirac function;
$\nabla\phi(x,y)$ is the gradient of the level set function $\phi(x,y)$;
$$F_1(c_+^1,c_-^1,\phi(x,y)) = \frac{1}{1+e^{-a\cdot|c_+^1-c_-^1|}}\cdot\Big[\lambda_+^1\int_\Omega |u'_1(x,y)-c_+^1|^2\,H(\phi(x,y))\,dx\,dy + \lambda_-^1\int_\Omega |u'_1(x,y)-c_-^1|^2\,\big(1-H(\phi(x,y))\big)\,dx\,dy\Big];$$
$c_+^1$ and $c_-^1$ are the gray means of the regions inside and outside the curve C in the gray feature channel, respectively;
$\lambda_+^1$ and $\lambda_-^1$ are the parameters of the gray feature channel, with $\lambda_+^1 > 0$ and $\lambda_-^1 > 0$;
$u'_1(x,y)$ is the value of the gray feature channel at pixel (x,y);
H(·) is the Heaviside function;
a is a constant used to adjust the shape of the function, with a > 0;
$$F_2(c_+^2,c_-^2,\phi(x,y)) = \lambda_+^2\int_\Omega |u'_2(x,y)-c_+^2|^2\,H(\phi(x,y))\,dx\,dy + \lambda_-^2\int_\Omega |u'_2(x,y)-c_-^2|^2\,\big(1-H(\phi(x,y))\big)\,dx\,dy;$$
$c_+^2$ and $c_-^2$ are the gray means of the regions inside and outside the curve C in the edge feature channel, respectively;
$\lambda_+^2$ and $\lambda_-^2$ are the parameters of the edge feature channel, with $\lambda_+^2 > 0$ and $\lambda_-^2 > 0$;
$u'_2(x,y)$ is the value of the edge feature channel at pixel (x,y);
H(·) is the Heaviside function.
6. The method according to claim 5, characterized in that said step 5 comprises:
Step 501: arbitrarily specifying an initial closed segmentation curve $C_0$ and computing the initial level set function $\phi_0(x,y)$ corresponding to $C_0$;
Step 502: setting the model parameters μ, α, β, $\lambda_+^1$, $\lambda_-^1$, $\lambda_+^2$, $\lambda_-^2$;
Step 503: letting k = 0 and computing the means inside and outside the initial closed segmentation curve $C_0$;
the mean inside $C_0$ being computed as:
$$c_+^i(\phi_0(x,y)) = \frac{\int_\Omega u'_i(x,y)\,H(\phi_0(x,y))\,dx\,dy}{\int_\Omega H(\phi_0(x,y))\,dx\,dy};$$
the mean outside $C_0$ being computed as:
$$c_-^i(\phi_0) = \frac{\int_\Omega u'_i(x,y)\,\big(1-H(\phi_0(x,y))\big)\,dx\,dy}{\int_\Omega \big(1-H(\phi_0(x,y))\big)\,dx\,dy};$$
in the two formulas above, i = 1, 2;
Ω is the integration domain, i.e., the image region;
$u'_1(x,y)$ is the value of the gray feature channel at pixel (x,y);
$u'_2(x,y)$ is the value of the edge feature channel at pixel (x,y);
H(·) is the Heaviside function;
Step 504: iteratively computing $\phi_{k+1}(x,y)$ according to the formula $\phi_{k+1}(x,y) - \phi_k(x,y) = \Delta t\cdot L(\phi_k(x,y))$, where
$$L(\phi_k(x,y)) = \delta_\epsilon(\phi_k(x,y))\Big[\mu\cdot\frac{\phi_{xx}^k(\phi_y^k)^2 - 2\phi_{xy}^k\phi_x^k\phi_y^k + \phi_{yy}^k(\phi_x^k)^2}{\big((\phi_x^k)^2+(\phi_y^k)^2\big)^{3/2}} - \alpha\Big(\lambda_+^1\cdot\frac{1}{1+e^{-a|c_+^1(\phi_k)-c_-^1(\phi_k)|}}\cdot\big(u'_1(x,y)-c_+^1(\phi_k)\big)^2 - \lambda_-^1\cdot\big(u'_1(x,y)-c_-^1(\phi_k)\big)^2\Big) - \beta\Big(\lambda_+^2\cdot\big(u'_2(x,y)-c_+^2(\phi_k)\big)^2 - \lambda_-^2\cdot\big(u'_2(x,y)-c_-^2(\phi_k)\big)^2\Big)\Big],$$
that is, $\big(\phi_{k+1}(x,y) - \phi_k(x,y)\big)/\Delta t = L(\phi_k(x,y))$;
Δt is the set time step and $\delta_\epsilon(\cdot)$ is the regularized Dirac function;
Step 505: extracting the zero level set from the level set function $\phi_{k+1}(x,y)$; this zero level set is the evolving curve;
Step 506: determining whether the level set function $\phi_{k+1}(x,y)$ is stable: when the difference between the lengths of the evolving curves obtained in two successive iterations is less than a set threshold, $\phi_{k+1}(x,y)$ is stable and step 507 is executed; otherwise letting k = k + 1 and jumping to step 504;
Step 507: extracting the evolving curve from the level set function $\phi_{k+1}(x,y)$ as the segmentation curve and segmenting the image with it, completing the segmentation.
CN201310371336.4A 2013-08-23 2013-08-23 Image segmentation method based on two-channel texture segmentation active contour model Expired - Fee Related CN103413332B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310371336.4A CN103413332B (en) 2013-08-23 2013-08-23 Image segmentation method based on two-channel texture segmentation active contour model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310371336.4A CN103413332B (en) 2013-08-23 2013-08-23 Image segmentation method based on two-channel texture segmentation active contour model

Publications (2)

Publication Number Publication Date
CN103413332A true CN103413332A (en) 2013-11-27
CN103413332B CN103413332B (en) 2016-05-18

Family

ID=49606337

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310371336.4A Expired - Fee Related CN103413332B (en) 2013-08-23 2013-08-23 Image segmentation method based on two-channel texture segmentation active contour model

Country Status (1)

Country Link
CN (1) CN103413332B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104123719A (en) * 2014-06-03 2014-10-29 南京理工大学 Method for carrying out infrared image segmentation by virtue of active outline
CN105894496A (en) * 2016-03-18 2016-08-24 常州大学 Semi-local-texture-feature-based two-stage image segmentation method
CN106296649A (en) * 2016-07-21 2017-01-04 北京理工大学 A kind of texture image segmenting method based on Level Set Models
CN109961424A (en) * 2019-02-27 2019-07-02 北京大学 A kind of generation method of hand x-ray image data
CN110490859A (en) * 2019-08-21 2019-11-22 西安工程大学 A kind of texture inhibits the fabric defect detection method in conjunction with Active contour

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101976445A (en) * 2010-11-12 2011-02-16 西安电子科技大学 Level set SAR (Synthetic Aperture Radar) image segmentation method by combining edges and regional probability density difference
CN102426700A (en) * 2011-11-04 2012-04-25 西安电子科技大学 Level set SAR image segmentation method based on local and global area information
CN102426699A (en) * 2011-11-04 2012-04-25 西安电子科技大学 Level set synthetic aperture radar (SAR) image segmentation method based on edge and regional information

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101976445A (en) * 2010-11-12 2011-02-16 西安电子科技大学 Level set SAR (Synthetic Aperture Radar) image segmentation method by combining edges and regional probability density difference
CN102426700A (en) * 2011-11-04 2012-04-25 西安电子科技大学 Level set SAR image segmentation method based on local and global area information
CN102426699A (en) * 2011-11-04 2012-04-25 西安电子科技大学 Level set synthetic aperture radar (SAR) image segmentation method based on edge and regional information

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MIKAEL ROUSSON ET AL.: "Active unsupervised texture segmentation on a diffusion based feature space", 《PROCEEDINGS OF IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 *
TONY F. CHAN ET AL.: "Active contours without edges for vector-valued images", 《JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION》 *
ZHANG YU, TAN DEBAO: "Semi-automatic texture image segmentation using nonlinear diffusion", 《GEOMATICS AND INFORMATION SCIENCE OF WUHAN UNIVERSITY》 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104123719A (en) * 2014-06-03 2014-10-29 南京理工大学 Method for carrying out infrared image segmentation by virtue of active outline
CN104123719B (en) * 2014-06-03 2017-01-25 南京理工大学 Method for carrying out infrared image segmentation by virtue of active outline
CN105894496A (en) * 2016-03-18 2016-08-24 常州大学 Semi-local-texture-feature-based two-stage image segmentation method
CN106296649A (en) * 2016-07-21 2017-01-04 北京理工大学 A kind of texture image segmenting method based on Level Set Models
CN106296649B (en) * 2016-07-21 2018-11-20 北京理工大学 A kind of texture image segmenting method based on Level Set Models
CN109961424A (en) * 2019-02-27 2019-07-02 北京大学 A kind of generation method of hand x-ray image data
CN109961424B (en) * 2019-02-27 2021-04-13 北京大学 Hand X-ray image data generation method
CN110490859A (en) * 2019-08-21 2019-11-22 西安工程大学 A kind of texture inhibits the fabric defect detection method in conjunction with Active contour

Also Published As

Publication number Publication date
CN103413332B (en) 2016-05-18

Similar Documents

Publication Publication Date Title
CN103455991B Multi-focus image fusion method
CN103886589B Object-oriented automated high-precision edge extraction method
CN102135606B KNN (k-nearest neighbor) classification based method for correcting and segmenting grayscale nonuniformity of MR (magnetic resonance) images
CN108053417A Lung segmentation device based on a 3D U-Net network with mixed coarse segmentation features
CN102426700B Level set SAR image segmentation method based on local and global area information
CN104036479B Multi-focus image fusion method based on non-negative matrix factorization
CN103927717A Depth image recovery method based on improved bilateral filters
CN109887021B Cross-scale random walk stereo matching method
CN102903102A Non-local-based triple Markov random field synthetic aperture radar (SAR) image segmentation method
CN102999901A Method and system for online video segmentation based on a depth sensor
CN103413332A Image segmentation method based on two-channel texture segmentation active contour model
CN103871062B Lunar surface rock detection method based on superpixel description
CN102298774B Non-local mean denoising method based on joint similarity
CN110648342A Foam infrared image segmentation method based on NSST saliency detection and image segmentation
CN103364410A Crack detection method for the underwater surface of hydraulic concrete structures based on template search
CN104732545A Texture image segmentation method combining sparse neighbor propagation and fast spectral clustering
CN102663762B Segmentation method for symmetrical organs in medical images
CN106780450A Image saliency detection method based on low-rank multi-scale fusion
CN101231745A Automatic segmentation method for optimizing an image's initial segmentation boundary
CN105989598A Fundus image vessel segmentation method based on a locally enhanced active contour model
CN103761726A Block-adaptive image segmentation method based on FCM
CN103955945A Self-adaptive color image segmentation method based on binocular parallax and active contours
CN105469408A Building group segmentation method for SAR images
CN101765019A Stereo matching method for images with motion blur and illumination change
CN111754538A Threshold segmentation method for USB surface defect detection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160518

Termination date: 20170823