CN111435532B - Method for detecting tree-like structure end point in digital image - Google Patents


Info

Publication number: CN111435532B (application CN201910029830.XA; application published as CN111435532A)
Authority: CN (China)
Prior art keywords: dimensional, point, candidate, rays, tree
Legal status: Active (granted)
Original language: Chinese (zh)
Inventors: 刘敏, 杨博, 陈伟迅
Original and current assignee: Hunan University
Application filed by Hunan University; priority to CN201910029830.XA
Publication of application CN111435532A; application granted; publication of grant CN111435532B

Classifications

    • G06T 7/0012: Image analysis; inspection of images, e.g. flaw detection; biomedical image inspection
    • G06T 7/10: Image analysis; segmentation; edge detection
    • G06T 7/62: Image analysis; analysis of geometric attributes of area, perimeter, diameter or volume
    • G06V 10/25: Image preprocessing; determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/462: Descriptors for shape, contour or point-related descriptors; salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V 10/464: Salient features using a plurality of salient features, e.g. bag-of-words [BoW] representations

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Image Generation (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for detecting tip (end) points of tree-like structures in digital images. Two-dimensional tip point detection comprises: step 1, detecting two-dimensional high-curvature points in a two-dimensional image and taking them as candidate two-dimensional tip points; step 2, estimating the local diameter of the tree-like structure around each candidate two-dimensional tip point; step 3, detecting true two-dimensional tip points among the candidates based on a multi-scale divergent-ray model. Three-dimensional tip point detection comprises: first, extracting a set of two-dimensional slice images from the three-dimensional image, detecting the two-dimensional tip points in each slice, and taking the detected two-dimensional tip points as candidate three-dimensional tip points; then, detecting true three-dimensional tip points among the candidates based on a tip-point visual prior. The invention achieves high detection accuracy.

Description

Method for detecting tree-like structure end point in digital image
Technical Field
The invention belongs to the technical field of image processing, and relates to a method for detecting a terminal point of a tree-shaped structure in a digital image.
Background
In biomedical research, reconstructing the morphology of tree-like structures (such as neurons, retinal vessels and bronchi) plays a crucial role. Many current tree-tracking methods rely on suitable seed points, and among the most desirable seed points are the start and end points of the tree, i.e., its terminal (tip) points. If the positions of the tip points of a tree-like structure can be detected automatically and accurately, many reconstruction tools and methods can reconstruct the two-/three-dimensional shape of the structure.
The prior art discloses an automatic detection method [1] for nerve ending points in a three-dimensional image stack, which comprises the following steps of:
a. Two-dimensional high-curvature points in the three-dimensional image stack (i.e., a three-dimensional image stacked from a number of two-dimensional slices) are detected as candidate tip points: a tip point of a three-dimensional tree-like structure generates two-dimensional tip points on several two-dimensional slices, so the method first detects all two-dimensional tip points and then screens for three-dimensional tip points. Two-dimensional high-curvature points are detected using the curvature information of edge points (a corner detection method based on contour curves) and taken as candidate tip points; the set of all candidate tip points is denoted TP_2.
b. The local diameter of the tree-like structure around each candidate tip point is estimated: a distance transform is applied to the tree-structure (neuron) image stack with the MSFM (Multi-Stencils Fast Marching) algorithm to obtain a distance map; for each candidate tip point P in the set TP_2, a nearby region of interest is cut out of the distance map with P as the point of interest, the point with the largest distance value in the region is taken as the center point, and the Rayburst sampling algorithm centered on that point yields the local diameter Ra of the tree-like structure around the candidate tip point.
c. Two-dimensional tip points are detected based on the adaptive divergent-ray model: for each candidate tip point P, an adaptive divergent-ray model is first generated by casting M rays out from P, the length N of each ray (the number of points on it) being equal to the local diameter Ra of the tree-like structure around P, i.e. N = Ra. Let I(i, j), i = 1, …, M, j = 1, …, N, denote the ray intensity matrix, each row of which holds the pixel intensities at the N points of one ray; intensity values at points on oblique rays are computed by bilinear interpolation. The average intensity of each ray in the adaptive divergent-ray model is then calculated as

A(i) = (1/N) · Σ_{j=1}^{N} I(i, j)

and the maximum average intensity M_I = max_i A(i) is found. If M_I is greater than a given threshold T_0, a threshold T = M_I × r, r ∈ (0, 1), is used to decide whether each ray is a foreground ray: rays with average intensity A(i) greater than T are regarded as foreground rays and collected in the set Q. Next, the maximum included angle M_A between the foreground rays in Q is calculated. Finally, whether the candidate tip point P is a two-dimensional tip point is judged from the number n of foreground rays and the maximum included angle M_A: P is judged to be a two-dimensional tip point when n and M_A satisfy a decision condition with thresholds T_1, T_2 and T_3 [formula omitted], where T_1, T_2 and T_3 are given thresholds that can be adjusted according to actual conditions.
d. Three-dimensional tip points are screened from the two-dimensional tip points with a Gaussian-model-based adjacent-slice verification algorithm: for a true three-dimensional tip point, the distribution of the pixel intensity values at the same position (x_0, y_0) across several adjacent slices fits a Gaussian model; using this property, the pseudo points among the two-dimensional tip points obtained in step c are excluded, leaving the true three-dimensional tip points.
The accuracy of the two-/three-dimensional tree-structure tip points found by this detection method is limited. First, step c is mainly designed to detect "sharp" tip-like responses, so when detecting two-dimensional tip points it may mistake burrs on the surface of the tree structure (such as dendritic spines and axon knots) for two-dimensional tip points. Second, the Gaussian-model-based adjacent-slice verification algorithm used in step d is not sufficiently robust to changes in branch direction and cannot identify three-dimensional tip points of tree structures almost perpendicular to the z direction.
Reference documents:
[1] Automatic detection of nerve ending points in three-dimensional image stacks [J]. Journal of Electronic Measurement and Instrumentation, 2017, 31(8): 1304-1311.
Disclosure of Invention
To address the defects of the prior art described above, the invention provides a method for detecting the tip points of tree-like structures with high detection accuracy.
The technical scheme provided by the invention is as follows:
a detection method of a tree-shaped structure end point in a digital image detects the tree-shaped structure two-dimensional end point in a two-dimensional image through the following steps:
step 1, detecting two-dimensional high curvature points in a two-dimensional image, and taking the two-dimensional high curvature points as candidate two-dimensional tip points;
step 2, detecting the local diameter of the tree-shaped structure around each candidate two-dimensional tip point;
the diameters of different tree branches may vary widely, so a robust detection method should have the ability to adapt to changes in tree diameter. Therefore, after detecting two-dimensional high curvature points, the method first estimates the local diameter of the tree around these points. The estimated local diameter of the tree-like structure will be used to determine the scale of the multi-scale divergent ray model. This makes the method robust to diameter variations.
Step 3, detecting a two-dimensional tip point based on a multi-scale divergent ray model;
the divergent rays model is a circular kernel that can extract and analyze the distribution of gray values of pixels around a point of interest (candidate two-dimensional tip points). It "shoots" a number of rays from a point, each with a number of discrete nodes, as shown in fig. 3.
Taking a candidate two-dimensional tip point p at coordinates (x, y) as an example, the divergent-ray model generated at p is:

x_{i,j} = x + j·cos((i - 1)·θ)
y_{i,j} = y + j·sin((i - 1)·θ)      (1)

where (x_{i,j}, y_{i,j}) are the coordinates of the j-th node on the i-th ray, θ = 2π/M is the included angle between any two adjacent rays, i = 1, 2, …, M is the index of the ray (M being the number of rays), and j = 1, 2, …, N is the index of the node on a ray (N, the length of the ray, being the number of nodes on it).

The divergent-ray model effectively extracts the distribution of the pixel gray values around the candidate two-dimensional tip point p. The gray value of each node on each ray is stored in a matrix

I ∈ R^{M×N}

whose i-th row I_i = [I_{i,1} … I_{i,N}] holds the gray values of the N nodes on the i-th ray, whose j-th column holds the gray values of the j-th node on each of the M rays, and whose element I_{i,j} is the gray value of the j-th node on the i-th ray; gray values of nodes on oblique rays are obtained by bilinear interpolation.
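As a concrete illustration of equation (1) and the intensity matrix I, the ray casting and bilinear sampling can be sketched in Python (a minimal sketch; `bilinear` and `divergent_ray_model` are illustrative names, not from the patent, and the image is a plain list-of-lists gray image):

```python
import math

def bilinear(img, x, y):
    """Sample the image intensity at real-valued (x, y) by bilinear interpolation."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    dx, dy = x - x0, y - y0
    h, w = len(img), len(img[0])
    def px(xx, yy):
        # clamp to the image bounds
        xx = min(max(xx, 0), w - 1)
        yy = min(max(yy, 0), h - 1)
        return img[yy][xx]
    return ((1 - dx) * (1 - dy) * px(x0, y0) + dx * (1 - dy) * px(x0 + 1, y0)
            + (1 - dx) * dy * px(x0, y0 + 1) + dx * dy * px(x0 + 1, y0 + 1))

def divergent_ray_model(img, x, y, M, N):
    """Cast M rays of N nodes from (x, y); return the M x N intensity matrix I.

    Node coordinates follow equation (1):
      x_ij = x + j*cos((i-1)*theta), y_ij = y + j*sin((i-1)*theta), theta = 2*pi/M.
    """
    theta = 2 * math.pi / M
    I = []
    for i in range(1, M + 1):
        row = []
        for j in range(1, N + 1):
            xi = x + j * math.cos((i - 1) * theta)
            yi = y + j * math.sin((i - 1) * theta)
            row.append(bilinear(img, xi, yi))
        I.append(row)
    return I
```

On a uniform image every sampled node returns the same intensity, which makes the sketch easy to sanity-check.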
Next, two quantitative features are computed: the number of foreground rays n and the maximum angle between the foreground rays, MA. First, a weighted average intensity vector A = [A_1 … A_M]^T is computed for each divergent-ray model, where A_i, the weighted average intensity of the i-th ray, is calculated by:

A = I·w / N      (2)

where w = [w_1 … w_N]^T is a weight vector with weight factors w_j = e^{λ(j - N)}; λ determines the magnitude of the weight factors and is fixed to 0.05 in the invention.

As seen in the divergent-ray model in Fig. 3, a background ray may still have many nodes inside the foreground region, and these nodes are all close to the point of interest; that is, background and foreground rays differ mainly in the nodes far away from the point of interest. The weight factors are therefore set to reduce the influence on A_i of the nodes near the point of interest.

Whether each ray is a foreground ray is decided by a given intensity threshold T_0 (adjusted per image; being an empirical value, its default is 135): rays with weighted average intensity greater than T_0 are regarded as foreground rays and stored in a set Q, rays with weighted average intensity less than or equal to T_0 are regarded as background rays, and the number of foreground rays in Q is counted:

Q = { r_i | A_i > T_0 },  n = #Q      (3)

where r_i denotes the i-th ray, #Q denotes the number of elements in the set Q, and n is the number of foreground rays in Q.

The maximum angle between the foreground rays in Q is then calculated:

MA = max_{r_a, r_b ∈ Q} ang(r_a, r_b)      (4)

where ang(r_a, r_b) is the included angle between the two rays r_a and r_b. The quantities n and MA are the features extracted by the divergent-ray model.
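A sketch of how n and MA (equations (2)-(4)) might be computed from one intensity matrix, assuming the rays are equally spaced so that the included angle between two rays follows from their indices (`ray_features` is an illustrative name, not from the patent):

```python
import math

def ray_features(I, T0, lam=0.05):
    """Given the M x N ray intensity matrix I, return (n, MA):
    n  - number of foreground rays (weighted average intensity > T0), eq. (3)
    MA - maximum included angle between foreground rays, eq. (4)
    """
    M, N = len(I), len(I[0])
    theta = 2 * math.pi / M
    # weight vector w_j = exp(lam*(j-N)) down-weights nodes near the centre, eq. (2)
    w = [math.exp(lam * (j - N)) for j in range(1, N + 1)]
    A = [sum(Ii[j] * w[j] for j in range(N)) / N for Ii in I]
    Q = [i for i in range(M) if A[i] > T0]   # indices of foreground rays
    n = len(Q)
    MA = 0.0
    for a in Q:
        for b in Q:
            # included angle between rays a and b (adjacent rays differ by theta)
            d = abs(a - b) * theta
            MA = max(MA, min(d, 2 * math.pi - d))
    return n, MA
```

The exponential weights implement the point made in the text: nodes far from the point of interest dominate the classification of a ray as foreground or background.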
Unlike the prior art, which uses a divergent-ray model of only a single scale, the method generates at p L divergent-ray models of different scales s_1, …, s_L to extract sufficient features and thereby improve detection accuracy. Let the ray length in the k-th divergent-ray model be N = s_k. The scale range is determined automatically by the local diameter d(p) of the tree-like structure, as follows:

s_k = μ·d(p) + (k - 1)·η,  k = 1, 2, …, L      (5)

where μ is a control parameter that prevents the scale from being too small and η is the step size of the scale. The multi-scale divergent-ray model extracts two feature sequences: the sequence of foreground-ray counts F_1 = {n_1, …, n_L} and the sequence of maximum angles between foreground rays F_2 = {MA_1, …, MA_L}, where n_k and MA_k are respectively the number of foreground rays and the maximum angle at scale s_k. Two-dimensional tip points of the tree structure are then detected by analyzing the feature sequences F_1 and F_2. If p is a true two-dimensional tip point, F_1 and F_2 should satisfy the following conditions:
(a) MA_k should be linearly correlated with n_k. Let r_c and r_d denote the two rays with the largest included angle among all foreground rays of the divergent-ray model, and let n_{cd} denote the number of rays between r_c and r_d. Given the divergent-ray model of scale s_k, n_{cd} is calculated from the following formula:

n_{cd} = MA_k / θ + 1      (6)

If the foreground rays are continuously distributed, n_{cd} equals the number n of foreground rays; otherwise n_{cd} is greater than n. At a tip point the foreground rays should be continuously distributed at every scale of the divergent-ray model, while at a non-tip point they will not be, as shown in Fig. 4. In other words, at a tip point n_{cd} should equal n at every scale:

n_{cd} = n_k      (7)

Substituting formula (6) into formula (7) yields:

MA_k = θ(n_k - 1)      (8)

Since θ is a constant, MA_k is linearly correlated with n_k at a tip point. This explains why the trends of MA_k and n_k agree at the tip point, as shown in Fig. 4(d).
(b) The minimum of MA_k should satisfy

min_k(MA_k) < T_1      (9)

where T_1 is an angle threshold (adjusted per image; being an empirical value, its default is 0.84π). For non-tip points and branch points, both the number of foreground rays and the angles between them can be large, whereas the maximum angle between the foreground rays at a tip point should lie within a certain range. As shown in the multi-scale divergent-ray model examples in Fig. 4, tree tip points sit at "peaks", so their MA_k should be small. At the same time, not every MA_k can be expected to be less than T_1, because MA_k may exceed T_1 when the scale is small; this is the key consideration in choosing the angle threshold T_1.
In summary, in the proposed multi-scale divergent-ray model, candidate two-dimensional tip points satisfying both conditions (a) and (b) are regarded as true two-dimensional tip points, and candidates failing either condition are regarded as non-tip points.
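Under these definitions, the final decision for one candidate point can be sketched as follows (a minimal sketch; the tolerance `tol` used for the linearity check of condition (a) is an added assumption, since measured MA_k values are discrete multiples of θ):

```python
import math

def is_tip_point(F1, F2, M, T1, tol=1e-6):
    """Decide whether a candidate is a 2D tip point from the feature sequences
    F1 = [n_1..n_L] and F2 = [MA_1..MA_L], with M rays per model (theta = 2*pi/M).
    """
    theta = 2 * math.pi / M
    # condition (a): MA_k = theta * (n_k - 1) at every scale, eq. (8)
    cond_a = all(abs(MA - theta * (n - 1)) <= tol for n, MA in zip(F1, F2))
    # condition (b): min_k MA_k < T1, eq. (9)
    cond_b = min(F2) < T1
    return cond_a and cond_b
```

A point whose foreground rays stay contiguous across scales satisfies (a) exactly; one discontinuous scale is enough to reject it.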
Further, before step 1 the tree-like structure is enhanced: an image enhancement method based on the Hessian matrix is used to suppress noise and enhance the tree-like structure in the input two-dimensional image. The enhanced image is obtained by computing the eigenvalues of the Hessian matrix at each pixel, after which the tree-like structure is segmented from the background pixels with a background threshold. Existing segmentation methods for three-dimensional tree-structure images can also be used to segment the foreground region of the tree structure. In the segmented image the gray value of every foreground pixel is set to 255. The purpose of this step is to remove the influence of background noise on tip point detection and to improve the detection result.
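For illustration only, the core of the Hessian idea (on a bright ridge-like structure one eigenvalue is strongly negative) can be sketched with central differences at a single pixel; this is not the patent's enhancer, and a practical version would first smooth the image with Gaussian kernels at several scales:

```python
import math

def hessian_eigenvalues(img, x, y):
    """Eigenvalues (smaller, larger) of the 2x2 Hessian at pixel (x, y),
    estimated with central differences on a list-of-lists gray image."""
    I = img
    dxx = I[y][x + 1] - 2 * I[y][x] + I[y][x - 1]
    dyy = I[y + 1][x] - 2 * I[y][x] + I[y - 1][x]
    dxy = (I[y + 1][x + 1] - I[y + 1][x - 1] - I[y - 1][x + 1] + I[y - 1][x - 1]) / 4.0
    # closed-form eigenvalues of [[dxx, dxy], [dxy, dyy]]
    tr, det = dxx + dyy, dxx * dyy - dxy * dxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    return tr / 2 - disc, tr / 2 + disc
```

On a horizontal bright stripe the smaller eigenvalue is strongly negative (curvature across the stripe) while the other is near zero (no variation along it), which is exactly the signature a Hessian-based enhancer amplifies.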
Further, step 2 is implemented as follows. First, a gray-weighted distance transform (GWDT) is applied to the two-dimensional image to obtain a distance-transform map DT, in which the gray value of each pixel inside the tree-like structure is its Euclidean distance to the nearest tree-structure surface. Generating the distance map with GWDT takes much less time than deriving it with the MSFM used in the existing method. In DT, pixels near the center line of a tree branch (the line connecting the points in the middle of the branch) have larger gray values, while pixels near the surface (edge) of the branch have smaller values. Therefore, by searching the distance-transform map DT for the point with the largest distance value in a local region around the candidate two-dimensional tip point p, a point p̃ close to the center line can be found. Then, with p̃ as the seed point, the Rayburst sampling algorithm is performed and the local diameter d(p) of the tree-like structure around the candidate two-dimensional tip point p is estimated. The Rayburst sampling algorithm is a morphological analysis technique for two-/three-dimensional tree-structure images that generates a set of multidirectional rays called a sampling kernel. As shown in Fig. 2, the rays are cast from the seed point until they stop at the edge of the tree structure. After all rays have been cast, the local diameter d(p) can be computed from the length distribution of all rays using the Median Lower Band Diameter (MLBD) algorithm.
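A much-simplified stand-in for this diameter estimate on a binary foreground mask (a hedged sketch: the actual Rayburst algorithm casts sub-pixel rays in a gray image and applies the Median Lower Band Diameter statistic, whereas here opposite rays are paired and the median of their summed lengths is returned; `local_diameter` is an illustrative name):

```python
import math

def local_diameter(mask, x, y, M=16, max_len=50):
    """Cast M rays from seed (x, y) on a binary mask, record how far each ray
    stays in the foreground, and take the median of opposite-ray length sums
    as a rough branch-diameter estimate."""
    h, w = len(mask), len(mask[0])
    theta = 2 * math.pi / M
    lengths = []
    for i in range(M):
        j = 0
        while j < max_len:
            xi = int(round(x + (j + 1) * math.cos(i * theta)))
            yi = int(round(y + (j + 1) * math.sin(i * theta)))
            if not (0 <= xi < w and 0 <= yi < h) or mask[yi][xi] == 0:
                break
            j += 1
        lengths.append(j)
    # opposite rays together span the branch cross-section
    diams = sorted(lengths[i] + lengths[i + M // 2] for i in range(M // 2))
    return diams[len(diams) // 2]
```

For a seed near the center line of a stripe-shaped branch, rays across the stripe stop at its edges while rays along it run long, so the median pairing keeps the cross-sectional estimate.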
Further, detecting tree-structured three-dimensional tip points in the three-dimensional image by:
First, a set of two-dimensional slice (z-slice) images is extracted from the three-dimensional image, and the two-dimensional tip points of the tree structure are detected in each slice through the above steps; the detected two-dimensional tip points are taken as candidate three-dimensional tip points.
Then, true three-dimensional tip points are detected from the candidate three-dimensional tip points based on the tip-point visual prior. The tip-point visual prior rests on the following observation of tree-structure tip points in three-dimensional images: when a three-dimensional tip point is viewed from three orthogonal directions without being occluded by other structures, it can be identified in at least two of the views; in most cases it can be identified in all three. However, when the three-dimensional tip point is almost parallel to one viewing direction, the view along that direction shows a point-like structure. Examples of tip-point and non-tip-point views are given in Fig. 5.

The two-dimensional views are generated by maximum intensity projection (MIP). To avoid interference from unrelated tree structures, before the two-dimensional MIP images are generated a local cube B centered at the candidate three-dimensional tip point is extracted. The side length of B is h = 2·s_L + 1, since structures whose distance from the candidate three-dimensional tip point exceeds s_L are not covered by the multi-scale divergent-ray model. Without loss of generality, the two-dimensional MIP images of cube B on the XY, XZ and YZ planes are generated and denoted I_XY, I_XZ and I_YZ, and their centers p_XY, p_XZ and p_YZ (the projections of the candidate three-dimensional tip point p onto I_XY, I_XZ and I_YZ, respectively). Two-dimensional tip points are detected in each of I_XY, I_XZ and I_YZ; according to the tip-point visual prior, if at least two of p_XY, p_XZ and p_YZ are detected as two-dimensional tip points on their respective planes, the candidate p is considered a true three-dimensional tip point.
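The three MIP views and the at-least-two-of-three vote can be sketched as follows (`mip_views` and `is_3d_tip` are illustrative names; the per-view 2D detection would be the multi-scale divergent-ray model described above):

```python
def mip_views(volume):
    """Maximum-intensity projections of a volume[z][y][x] onto the XY, XZ
    and YZ planes, returned as (I_xy, I_xz, I_yz)."""
    Z, Y, X = len(volume), len(volume[0]), len(volume[0][0])
    I_xy = [[max(volume[z][y][x] for z in range(Z)) for x in range(X)] for y in range(Y)]
    I_xz = [[max(volume[z][y][x] for y in range(Y)) for x in range(X)] for z in range(Z)]
    I_yz = [[max(volume[z][y][x] for x in range(X)) for y in range(Y)] for z in range(Z)]
    return I_xy, I_xz, I_yz

def is_3d_tip(view_results):
    """Tip-point visual prior: a candidate is a true 3D tip point when its
    projection is detected as a 2D tip point in at least two of the three
    views; view_results is a (XY, XZ, YZ) tuple of booleans."""
    return sum(view_results) >= 2
```

A branch running parallel to one viewing axis collapses to a dot in that single view, which is exactly why the vote requires only two of the three planes.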
Advantages: compared with the prior art, the invention has the following advantages. First, a weight factor is applied when the average intensity of each ray is calculated, weakening the influence of the nodes near the point of interest and making the classification of foreground and background rays more accurate. Second, a multi-scale divergent-ray model is used: divergent-ray models of L different scales s_1, …, s_L are generated at each candidate tip point, so that enough features are extracted to improve the accuracy of tip-point judgment. Third, for the misjudgment of tree-structure branches almost parallel to the z direction, the invention provides a three-dimensional tip point detection method based on the tip-point visual prior, converting the 3D detection task into several 2D detection tasks; this avoids misjudging branches almost parallel to the z direction, improves the accuracy of the algorithm, and broadens the applicability of the model.
Drawings
FIG. 1 is a general flow diagram of the present invention;
FIG. 2 is a schematic diagram of local diameter estimation; fig. 2(a) shows the seed-point optimization result, and fig. 2(b) shows Rayburst sampling performed at the new seed point;
FIG. 3 is a diagram of an example divergent ray model; FIGS. 3(a) and 3(c) are a non-tip point and a tip point, respectively, and FIGS. 3(b) and 3(d) are divergent ray models obtained at 3(a) and 3(c), respectively.
FIG. 4 is a diagram of an example of a multi-scale divergent rays model; FIGS. 4(a1) and 4(b1) are a non-tip point and a tip point, respectively, and FIGS. 4(a2) - (a5) and 4(b2) - (b5) are multi-scale ray models generated at the non-tip point and the tip point in FIGS. 4(a1) and 4(b1), respectively; 4(c) and 4(d) show line graphs of the variation of two variables n and MA corresponding to the multi-scale ray model in FIGS. 4(a2) - (a5) and 4(b2) - (b5), respectively;
FIG. 5 is a diagram of an example of different tree structures;
FIG. 6 is a schematic diagram of the Vaa3D software using the tip point detection plug-in;
FIG. 7 is a schematic representation of a tree centerline;
Detailed Description
The algorithm of the invention is implemented as a computer program; the views are obtained with the Vaa3D software. The general flow of the method for detecting three-dimensional tip points of tree structures in a three-dimensional image, programmed in the MATLAB language under the Windows runtime environment, is shown in Fig. 1.
Firstly, extracting a group of two-dimensional slice images from a three-dimensional image, and detecting a tree-structure two-dimensional tip point in each two-dimensional slice image by adopting the following steps; taking the detected two-dimensional tip point as a candidate three-dimensional tip point;
step 1, processing a two-dimensional image by adopting an image enhancement method based on a Hessian matrix, detecting two-dimensional high curvature points in the two-dimensional image after enhancement processing, and taking the two-dimensional high curvature points as candidate two-dimensional tip points;
step 2, detecting the local diameter of the tree-shaped structure around each candidate two-dimensional tip point; for any candidate two-dimensional tip point p, the steps of detecting the local diameter of the tree around it are as follows:
step 2.1, applying the gray-weighted distance transform to the two-dimensional image to obtain a distance-transform map, in which the gray value of each pixel inside the tree-like structure is its Euclidean distance to the nearest tree-structure surface;

step 2.2, on the distance-transform map, searching the local region around p for the pixel with the largest gray value, regarding it as a point close to the center line of the tree-like structure, and denoting it p̃;

step 2.3, with p̃ as the seed point, performing the Rayburst sampling algorithm and estimating the local diameter d(p) of the tree-like structure around the candidate two-dimensional tip point p.
Step 3, detecting true two-dimensional tip points from the candidate two-dimensional tip points based on the multi-scale divergent-ray model; for any candidate two-dimensional tip point p, whether it is a true two-dimensional tip point is detected as follows:
step 3.1, generating at p L divergent-ray models of different scales s_1, …, s_L; letting the ray length in the k-th divergent-ray model be N = s_k, the scale s_k is determined automatically by the local diameter d(p) of the tree-like structure around p, as shown in the following formula:

s_k = μ·d(p) + (k - 1)·η,  k = 1, 2, …, L

where μ is a control parameter that prevents the scale from being too small, and η is the step size of the scale;

step 3.2, computing the features of the L divergent-ray models of different scales to obtain two feature sequences: the sequence of foreground-ray counts F_1 = {n_1, …, n_L} and the sequence of maximum angles between foreground rays F_2 = {MA_1, …, MA_L}, where n_k is the number of foreground rays in the k-th divergent-ray model and MA_k is the maximum angle between the foreground rays in the k-th divergent-ray model. For any divergent-ray model, the number n of foreground rays and the maximum angle MA between the foreground rays are calculated as follows:

1) record the gray values of the N nodes on the i-th ray of the divergent-ray model as I_i = [I_{i,1}, …, I_{i,j}, …, I_{i,N}], where I_{i,j} is the gray value of the j-th node on the i-th ray;

2) compute the weighted average intensity of the i-th ray of the divergent-ray model, A_i = I_i·w / N, where w = [w_1, …, w_j, …, w_N]^T is a weight vector with w_j = e^{λ(j - N)}, and λ determines the magnitude of the weight factors;

3) regard rays whose weighted average intensity is greater than a given intensity threshold T_0 as foreground rays, store the foreground rays in a set Q, and count the number of elements of Q, which is the number n of foreground rays of the divergent-ray model;

4) the maximum angle between the foreground rays of the divergent-ray model is

MA = max_{r_a, r_b ∈ Q} ang(r_a, r_b)

where ang(r_a, r_b) is the included angle between the two rays r_a and r_b.
Step 3.3, detecting whether p is a true two-dimensional tip point by analyzing the feature sequences F_1 and F_2. If F_1 and F_2 simultaneously satisfy the two conditions

(a) MA_k = θ(n_k - 1), where θ is the angle between any two adjacent rays of the divergent-ray model, and

(b) min_k(MA_k) < T_1, where T_1 is an angle threshold,

then p is considered a true two-dimensional tip point.
Then, true three-dimensional tip points are detected from the candidate three-dimensional tip points based on the tip-point visual prior; for any candidate three-dimensional tip point, whether it is a true three-dimensional tip point is detected as follows:

step 4.1, generating a local cube B centered at the candidate three-dimensional tip point, with side length h = 2·s_L + 1;

step 4.2, generating the two-dimensional maximum intensity projection images of cube B on the XY, XZ and YZ planes, denoted I_XY, I_XZ and I_YZ; letting the centers of I_XY, I_XZ and I_YZ be p_XY, p_XZ and p_YZ respectively;

step 4.3, using steps 1 to 3 to detect the two-dimensional tip points in each of I_XY, I_XZ and I_YZ; according to the tip-point visual prior, if at least two of p_XY, p_XZ and p_YZ are detected as two-dimensional tip points on their respective planes, the candidate three-dimensional tip point is considered a true three-dimensional tip point.
Fig. 2 is a schematic diagram of local diameter estimation. The dot in Fig. 2(a) is the initial seed point (a two-dimensional high-curvature point); applying the Rayburst sampling algorithm directly at this point would degrade the estimation accuracy. The triangular point is the new seed point p̃ found by the seed-point optimization algorithm in a local region of interest (gray circle) around the high-curvature point p. Fig. 2(b) shows Rayburst sampling at the new seed point; the bold arrows indicate the estimated local diameter of the tree structure.
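The seed-point optimization of Fig. 2 can be sketched as a local search for the distance-transform maximum (a sketch under assumptions: a binary foreground mask as input and a plain Euclidean distance transform standing in for the gray-weighted transform described in the text; `optimize_seed` and its signature are illustrative):

```python
import numpy as np
from scipy import ndimage

def optimize_seed(image, p, radius):
    """Return the pixel with the largest distance-transform value in a
    circular region of interest around the high-curvature point p;
    this pixel lies close to the centerline of the tree structure."""
    dist = ndimage.distance_transform_edt(image > 0)
    y, x = p
    best, best_val = p, dist[y, x]
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy * dy + dx * dx > radius * radius:
                continue  # stay inside the circular region of interest
            ny, nx = y + dy, x + dx
            if (0 <= ny < dist.shape[0] and 0 <= nx < dist.shape[1]
                    and dist[ny, nx] > best_val):
                best, best_val = (ny, nx), dist[ny, nx]
    return best
```

On a ribbon-shaped foreground, a seed on the boundary is pulled to the medial row, which is exactly the correction Fig. 2(a) illustrates before Rayburst sampling is run.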
FIG. 3 shows examples of the divergent ray model. FIGS. 3(a) and 3(c) show a non-tip point and a tip point, respectively. FIGS. 3(b) and 3(d) are the divergent ray models obtained at the points in 3(a) and 3(c), where dark rays represent foreground rays and light rays represent background rays. In each divergent ray model, r_c and r_d are the two foreground rays with the largest included angle, MA = ang(r_c, r_d) is the angle between r_c and r_d, and n is the number of foreground rays. FIG. 3(d) is the divergent ray model at the tip point: because the foreground rays are continuously distributed, MA/θ + 1 is equal to n. FIG. 3(b) is the divergent ray model at the non-tip point: because the foreground rays are not continuously distributed, MA/θ + 1 is greater than n.
FIG. 4 shows examples of the multi-scale divergent ray model. Figs. 4(a1) and 4(b1) are a non-tip point and a tip point, respectively, and Figs. 4(a2)-(a5) and 4(b2)-(b5) are the multi-scale ray models generated at those two points. Figs. 4(c) and 4(d) are line graphs of the variation of the two variables n and MA for the models in Figs. 4(a2)-(a5) and 4(b2)-(b5), respectively. As seen in Fig. 4(c), for the non-tip point, n drops significantly at the fourth scale while MA changes little; as seen in Fig. 4(d), for the tip point, n and MA vary consistently across scales.
Fig. 5 shows views of different tree structures. Figs. 5(a) and 5(b) are three-dimensional tree structures containing a 3D tip point and a 3D non-tip point, respectively. R1-R3 are lines of sight representing the viewing directions perpendicular to the XY, XZ and YZ planes, respectively. Figs. 5(a1)-5(a3) are the three views of Fig. 5(a) along R1-R3, and Figs. 5(b1)-5(b3) are the three views of Fig. 5(b). Because the tree structure in Fig. 5(a) is almost parallel to R3, the corresponding view 5(a3) shows a dot-like structure; nevertheless, Figs. 5(a1)-5(a3) show that the 3D tip point can be recognized in at least two views. Figs. 5(b1)-5(b3) show that the 3D non-tip point forms a tip-like structure in at most one view.
FIG. 6 illustrates the tip-point detection plug-in in the Vaa3D software. Fig. 6(a) is the main window of Vaa3D. After the source code is successfully compiled in Vaa3D, the user can run the plug-in implementing the proposed method via "Plug-In -> Termination_Detection". Clicking the "See in 3D" button displays the detection result in 3D. Fig. 6(b) shows the three-dimensional visualization of the tip points detected in a tree-structure image.
FIG. 7 is a schematic representation of the centerline of a tree structure. The centerline of the tree is the broken line in the figure, and the dots are the points on the centerline.

Claims (5)

1. A method for detecting tree-structure tip points in a digital image, characterized in that the two-dimensional tip points of a tree structure in a two-dimensional image are detected through the following steps:
step 1, detecting two-dimensional high curvature points in a two-dimensional image, and taking the two-dimensional high curvature points as candidate two-dimensional tip points;
step 2, detecting the local diameter of the tree-shaped structure around each candidate two-dimensional tip point;
step 3, detecting real two-dimensional tip points from the candidate two-dimensional tip points based on the multi-scale divergent ray model; for any candidate two-dimensional tip point p, the steps for detecting whether p is a real two-dimensional tip point are as follows:
step 3.1, generating L divergent ray models of different scales s_1, ..., s_L at p; the ray length in the k-th divergent ray model is its scale s_k, which is determined automatically from the local diameter d(p) of the tree structure around p according to:
s_{k+1} = s_k + η, k = 1, ..., L−1,
where the first scale s_1 is set from d(p) and a control parameter μ that prevents the scale from being too small, and η is the step length of the scale;
step 3.2, calculating the features of the L divergent ray models of different scales to obtain two feature sequences: the foreground-ray-count sequence F_1 = {n_1, ..., n_L} and the maximum-angle sequence F_2 = {MA_1, ..., MA_L}, where n_k represents the number of foreground rays in the k-th divergent ray model and MA_k represents the maximum angle between the foreground rays in the k-th divergent ray model;
step 3.3, detecting whether p is a true two-dimensional tip point by analyzing the feature sequences F_1 and F_2; if the feature sequences F_1 and F_2 simultaneously satisfy the following two conditions:
(a) MA_k = θ(n_k − 1), where θ is the angle between any two adjacent rays of the divergent ray model;
(b) min_k(MA_k) < T_1, where T_1 is an angle threshold;
then p is considered to be a true two-dimensional tip point.
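The scale schedule of step 3.1 can be sketched as below. Note the first-scale formula `d_p / 2 + mu` is an assumption for illustration: the claim only states that s_1 is derived from the local diameter d(p) and that μ prevents the scale from being too small; the subsequent scales follow the stated recurrence s_{k+1} = s_k + η.

```python
def make_scales(d_p, mu, eta, L):
    """Generate the L scales s_1, ..., s_L for the multi-scale
    divergent ray models around a candidate point with local
    diameter d_p."""
    s = [d_p / 2 + mu]          # assumed first-scale formula
    for _ in range(L - 1):
        s.append(s[-1] + eta)   # s_{k+1} = s_k + eta
    return s
```

For instance, with d(p) = 10, μ = 2, η = 3 and L = 4 this yields the scales 7, 10, 13, 16, one per divergent ray model.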
2. The method for detecting the terminal point of the tree-like structure in the digital image according to claim 1, wherein before the step 1, the two-dimensional image is processed by an image enhancement method based on the Hessian matrix.
3. The method according to claim 1, wherein in step 2, for any candidate two-dimensional tip point p, the local diameter of the tree structure around p is detected as follows:
step 2.1, applying a gray-weighted distance transform to the two-dimensional image to obtain a distance transform map, in which the gray value of each pixel inside the tree structure is the Euclidean distance from that pixel to the nearest tree-structure surface;
step 2.2, on the distance transform map, searching the local region around p for the pixel with the maximum gray value, regarding it as a point close to the centerline of the tree structure, and denoting it p̃;
step 2.3, performing the Rayburst sampling algorithm with p̃ as the seed point, and estimating the local diameter d(p) of the tree structure around the candidate two-dimensional tip point p.
4. The method according to claim 1, wherein in step 3.2, for any divergent ray model, the number n of foreground rays and the maximum angle MA between foreground rays are calculated as follows:
1) recording the gray values of the N nodes on the i-th ray of the divergent ray model as I_i = [I_{i,1}, ..., I_{i,j}, ..., I_{i,N}], where I_{i,j} represents the gray value of the j-th node on the i-th ray;
2) calculating the weighted average intensity of the i-th ray as A_i = I_i·w / N, where w = [w_1, ..., w_j, ..., w_N]^T is a weight vector, w_j = e^{λ(j−N)}, and λ determines the size of the weighting factor;
3) regarding the rays whose weighted average intensity is greater than a given intensity threshold T_0 as foreground rays, storing them in a set Q, and counting the rays in Q to obtain the number n of foreground rays in the divergent ray model;
4) computing the maximum angle between foreground rays in the divergent ray model as MA = max_{r_a, r_b ∈ Q} ang(r_a, r_b), where ang(r_a, r_b) is the included angle between rays r_a and r_b.
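The feature computation of claim 4 can be sketched as follows (a sketch under assumptions: rays are evenly spaced by θ degrees so the angle between rays a and b can be computed from their indices; the weighting and thresholding follow the claim):

```python
import math

def ray_features(ray_intensities, theta, t0, lam):
    """Compute (n, MA) for one divergent ray model.
    ray_intensities[i][j] is the gray value of node j on ray i."""
    num_nodes = len(ray_intensities[0])
    # exponential weights w_j = exp(lam * (j - N)) for j = 1..N,
    # emphasizing nodes far from the model centre
    w = [math.exp(lam * (j + 1 - num_nodes)) for j in range(num_nodes)]
    foreground = []
    for i, ray in enumerate(ray_intensities):
        a_i = sum(v * wj for v, wj in zip(ray, w)) / num_nodes
        if a_i > t0:             # weighted average intensity above T_0
            foreground.append(i)
    n = len(foreground)
    # maximum pairwise angle; for rays evenly spaced by theta the angle
    # between rays a and b is min(|a-b|, total-|a-b|) * theta
    total = len(ray_intensities)
    ma = 0.0
    for a in foreground:
        for b in foreground:
            diff = abs(a - b)
            ma = max(ma, min(diff, total - diff) * theta)
    return n, ma
```

For example, with 8 rays spaced 45° apart and three bright adjacent rays, the sketch returns n = 3 and MA = 90°, which satisfies condition (a) of step 3.3 since θ(n − 1) = 90°.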
5. A method for detecting tree-structure tip points in a digital image, characterized in that the three-dimensional tip points of a tree structure in a three-dimensional image are detected through the following steps:
first, extracting a group of two-dimensional slice images from the three-dimensional image, and detecting the two-dimensional tip points of the tree structure in each slice using the method of any one of claims 1 to 4; the detected two-dimensional tip points are taken as candidate three-dimensional tip points;
then, detecting real three-dimensional tip points from the candidate three-dimensional tip points based on the tip-point visual prior; for any candidate three-dimensional tip point, the steps for detecting whether it is a real three-dimensional tip point are as follows:
step 4.1, generating a local cube B centered at the candidate three-dimensional tip point, with side length h = 2·s_L + 1;
step 4.2, generating the two-dimensional maximum intensity projection images of cube B on the XY, XZ and YZ planes, denoted I_XY, I_XZ and I_YZ respectively; letting the centers of I_XY, I_XZ and I_YZ be p_XY, p_XZ and p_YZ, respectively;
step 4.3, using the method of any one of claims 1 to 4 to detect the two-dimensional tip points in I_XY, I_XZ and I_YZ; according to the tip-point visual prior, if two or more of p_XY, p_XZ and p_YZ are detected as two-dimensional tip points on their respective planes, the candidate three-dimensional tip point is considered a real three-dimensional tip point.
CN201910029830.XA 2019-01-14 2019-01-14 Method for detecting tree-like structure end point in digital image Active CN111435532B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910029830.XA CN111435532B (en) 2019-01-14 2019-01-14 Method for detecting tree-like structure end point in digital image


Publications (2)

Publication Number Publication Date
CN111435532A CN111435532A (en) 2020-07-21
CN111435532B true CN111435532B (en) 2021-06-22

Family

ID=71580521

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910029830.XA Active CN111435532B (en) 2019-01-14 2019-01-14 Method for detecting tree-like structure end point in digital image

Country Status (1)

Country Link
CN (1) CN111435532B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101283910A (en) * 2008-06-05 2008-10-15 华北电力大学 Method for obtaining the coronary artery vasomotion information
CN101763652A (en) * 2009-06-03 2010-06-30 中国科学院自动化研究所 Three-dimensional skeleton fast extraction method based on branch features
CN107274399A (en) * 2017-06-19 2017-10-20 太原理工大学 A kind of lung nodule segmentation method based on Hessian matrices and 3D shape index
CN108171703A (en) * 2018-01-18 2018-06-15 东北大学 A kind of method that tracheae tree is automatically extracted from chest CT image


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
3D neuron tip detection in volumetric microscopy images using an adaptive ray-shooting model; Min Liu; Elsevier; Feb. 4, 2017; full text *
3D Neuron Tip Detection in Volumetric Microscopy Images; Min Liu; IEEE; 2011; full text *
An Adaptive Ray-Shooting Model for Terminations Detection: Applications in Neuron and Retinal Blood Vessel Images; Weixun Chen; IEEE; 2018; full text *
Automatic 3D Neuron Tracing Based on Terminations Detection; Chao Wang; IEEE; 2018; full text *
Automatic detection of neuron terminal points in three-dimensional image stacks; Min Liu; Journal of Electronic Measurement and Instrumentation; Aug. 2017; Vol. 31, No. 8; full text *

Also Published As

Publication number Publication date
CN111435532A (en) 2020-07-21

Similar Documents

Publication Publication Date Title
JP6660313B2 (en) Detection of nuclear edges using image analysis
Mack et al. High-precision 3D detection and reconstruction of grapes from laser range data for efficient phenotyping based on supervised learning
CN108154519A (en) Dividing method, device and the storage medium of eye fundus image medium vessels
CN110837768B (en) Online detection and identification method for rare animal protection
US20070223815A1 (en) Feature Weighted Medical Object Contouring Using Distance Coordinates
CN112598713A (en) Offshore submarine fish detection and tracking statistical method based on deep learning
Choromanska et al. Automatic reconstruction of neural morphologies with multi-scale tracking
CN108830842B (en) Medical image processing method based on angular point detection
CN109035300B (en) Target tracking method based on depth feature and average peak correlation energy
CN105741244B (en) The method of shade and halation is removed under a kind of interior crusing robot dim light
Pramunendar et al. A Robust Image Enhancement Techniques for Underwater Fish Classification in Marine Environment.
CN110633727A (en) Deep neural network ship target fine-grained identification method based on selective search
CN112102201A (en) Image shadow reflection eliminating method and device, computer equipment and storage medium
De Automatic data extraction from 2D and 3D pie chart images
CN117576079A (en) Industrial product surface abnormality detection method, device and system
Streekstra et al. Analysis of tubular structures in three-dimensional confocal images
Li et al. Sublingual vein extraction algorithm based on hyperspectral tongue imaging technology
AlAzawee et al. Using morphological operations—Erosion based algorithm for edge detection
Sengar et al. Analysis of 2D-gel images for detection of protein spots using a novel non-separable wavelet based method
CN111435532B (en) Method for detecting tree-like structure end point in digital image
CN111739047A (en) Tongue image segmentation method and system based on bispectrum reconstruction
CN108805186B (en) SAR image circular oil depot detection method based on multi-dimensional significant feature clustering
Nayan et al. Real time multi-class object detection and recognition using vision augmentation algorithm
Yang et al. Cherry recognition based on color channel transform
Khan et al. Segmentation of single and overlapping leaves by extracting appropriate contours

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant