WO2021047684A1 - Method for automatic segmentation of fuzzy boundary image based on active contour and deep learning - Google Patents

Method for automatic segmentation of fuzzy boundary image based on active contour and deep learning Download PDF

Info

Publication number
WO2021047684A1
WO2021047684A1 · PCT/CN2020/125703
Authority
WO
WIPO (PCT)
Prior art keywords
contour
point
area
energy
boundary
Prior art date
Application number
PCT/CN2020/125703
Other languages
English (en)
French (fr)
Inventor
陈俊颖
游海军
Original Assignee
华南理工大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华南理工大学 filed Critical 华南理工大学
Priority to US17/641,445 priority Critical patent/US20220414891A1/en
Publication of WO2021047684A1 publication Critical patent/WO2021047684A1/zh

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G06T7/12 - Edge-based segmentation
    • G06T7/149 - Segmentation; Edge detection involving deformable models, e.g. active contour models
    • G06T7/60 - Analysis of geometric attributes
    • G06T7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10132 - Ultrasound image
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G06T2207/20112 - Image segmentation details
    • G06T2207/20116 - Active contour; Active surface; Snakes
    • G06T2207/20161 - Level set
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing

Definitions

  • The invention belongs to the technical field of fuzzy boundary image processing, and specifically relates to an automatic segmentation method for fuzzy boundary images based on active contour and deep learning.
  • The difficulty of fuzzy image segmentation lies in accurately locating complex boundaries and correctly segmenting tiny isolated targets.
  • Complex boundaries include blurred boundaries, vanishing boundaries, complex boundary interactions, and changeable shapes.
  • The ultrasound image is a common type of blurred image. Its low contrast and heavy noise often blur the target edges or make them disappear entirely, and the actual target boundary is easily affected by artifacts, or even partially covered by a large number of them. Accurate segmentation of blurred boundary images has therefore become a current challenge.
  • In recent years, the deep convolutional neural network model has achieved remarkable results in semantic segmentation, including on low-contrast images. However, fuzzy boundaries carry a degree of ambiguity and easily introduce noise in the training stage, so deep models still lack sufficient fuzzy boundary segmentation capability when applied to fuzzy boundary image segmentation.
  • Among traditional segmentation methods, the active contour model based on level sets is insensitive to noise and can evolve contours over the fuzzy boundary of a target, bringing the contour close to the target boundary.
  • However, the active contour model requires contour initialization and struggles with complex images.
  • The deep convolutional neural network model can already achieve good fuzzy boundary segmentation results. Applying an active contour model to such results effectively alleviates the initial-contour limitation of the active contour model and, through fine adjustment of local regions, further improves the fuzzy boundary segmentation capability and the accuracy of the boundary segmentation results.
  • The automatic fuzzy boundary image segmentation method based on active contour and deep learning proposed in the present invention can accurately segment fuzzy boundary images.
  • The present invention, an automatic segmentation method for fuzzy boundary images based on active contour and deep learning, realizes automatic segmentation of fuzzy boundary images, further improves the accuracy of fuzzy boundary segmentation, and achieves accurate segmentation of targets with fuzzy boundaries in the image.
  • First, the deep learning model segments the fuzzy boundary image to obtain the initial target segmentation result; then the active contour model fine-tunes this result to obtain more accurate normal boundary and fuzzy boundary segmentation results.
  • The deep active contour model proposed by the present invention uses local evolution of the contour points to drive the contour toward the target boundary, while using the initialized contour to restrain excessive evolution of the contour.
  • The automatic segmentation method for fuzzy boundary images based on active contour and deep learning includes the following steps:
  • In step S2.1, the initial level set φ_I(x, y) of the active contour model is constructed from the segmentation result of the deep learning model.
  • The initial level set is defined as φ_I(x, y) = D(x, y) when R(x, y) = 0 and φ_I(x, y) = −D(x, y) when R(x, y) = 1, where R(x, y) ∈ {0, 1} is the deep learning segmentation result (0 for target points, 1 for non-target points).
  • The points at the boundary between the target region and the non-target region constitute the target boundary B.
  • D(x, y) is the shortest distance between each point (x, y) of the image and the target boundary B.
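As an illustration of step S2.1, here is a minimal Python sketch of this initialization, assuming the sign convention implied by the later definitions (φ positive inside the target, negative outside) and using SciPy's Euclidean distance transform; the function name and mask coding are illustrative, not part of the patent:

```python
import numpy as np
from scipy import ndimage

def initial_level_set(mask):
    """Signed distance map phi_I from a binary segmentation result.

    mask: 1 inside the target, 0 outside (note: the patent's R(x, y)
    uses the opposite coding, R = 0 for target points).
    Returns phi_I with |phi_I(x, y)| = distance to the target boundary B,
    positive inside the target and negative outside.
    """
    inside = mask.astype(bool)
    # distance_transform_edt gives, for each nonzero pixel, the distance
    # to the nearest zero pixel
    d_in = ndimage.distance_transform_edt(inside)
    d_out = ndimage.distance_transform_edt(~inside)
    return np.where(inside, d_in, -d_out)
```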
  • The energy function includes three parts: 1) the perimeter and area of the contour; 2) the energy of the local regions around the contour; 3) the contour constraint energy.
  • C represents the current segmentation contour.
  • C_0 represents the initialized segmentation contour.
  • Length(C) represents the perimeter of the contour C.
  • Area(inside(C)) represents the area of the region inside the contour C.
  • μ_0(x, y) refers to the pixel intensity of the source image I at (x, y).
  • c_1 refers to the average pixel intensity inside the contour C.
  • c_2 refers to the average pixel intensity outside the contour C.
  • p refers to a point on the contour C.
  • p ∈ N(C) means that the contour point p is in the target edge region.
  • p ∈ F(C) means that the contour point p is in the foreground (target) region.
  • p ∈ B(C) means that the contour point p is in the background region.
  • ia(p) refers to the points that are around the contour point p and inside the contour C.
  • oa(p) refers to the points that are around the contour point p and outside the contour C.
  • The Heaviside function H and the Dirac function δ_0 are defined as H(z) = 1 for z ≥ 0, H(z) = 0 for z < 0, and δ_0(z) = dH(z)/dz.
  • The level set φ, the function H, and the function δ_0 represent the perimeter and area of the contour: Length{φ=0} = ∫_Ω δ_0(φ(x,y)) |∇φ(x,y)| dx dy and Area{φ>0} = ∫_Ω H(φ(x,y)) dx dy.
  • The contour constraint energy is the difference between the current contour C and the initial contour C_0, expressed with the level set φ, the function H, and φ_I.
  • The contour constraint energy is expressed as the difference between the current level set φ and the initial level set φ_I: (C − C_0)² = ∫_Ω (H(φ(x,y)) − H(φ_I(x,y)))² dx dy.
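A short sketch of these three quantities, using the smooth C¹ regularization of H and δ_0 that is standard in level-set models of this kind (the embodiment lists a parameter ε = 1, consistent with such a regularization, but the exact regularized form is an assumption); the discrete sums stand in for the integrals over Ω:

```python
import numpy as np

def heaviside(z, eps=1.0):
    # Smooth approximation of the step function H (eps = 1 in the embodiment)
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(z / eps))

def dirac(z, eps=1.0):
    # delta_0 = dH/dz for the arctan regularization above
    return eps / (np.pi * (eps ** 2 + z ** 2))

def contour_energies(phi, phi_init, eps=1.0):
    """Discrete Length{phi=0}, Area{phi>0}, and the contour constraint
    energy sum((H(phi) - H(phi_I))^2) over the whole image."""
    gy, gx = np.gradient(phi)
    length = np.sum(dirac(phi, eps) * np.sqrt(gx ** 2 + gy ** 2))
    area = np.sum(heaviside(phi, eps))
    constraint = np.sum((heaviside(phi, eps) - heaviside(phi_init, eps)) ** 2)
    return length, area, constraint
```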
  • The local region energy of the contour is the sum of the interior and exterior energies around all contour points. The energy of the region around the contour is computed locally: for each contour point, the energies inside and outside the contour within its local region are computed separately and then superimposed to obtain the overall energy; the individual terms, expressed with the level set φ and the function H, are given as equation images in the source.
  • For a point p(x_p, y_p) on the contour C, φ(p) = 0.
  • a(p) denotes the points around the contour point p.
  • "Around the contour point p" means within the circle with p as the center and R as the radius.
  • ia(p) represents the points around the contour point p and inside the contour C (φ > 0).
  • oa(p) represents the points around the contour point p and outside the contour C (φ < 0).
  • After each energy term is expressed with the level set method, the energy function F is defined accordingly (the full expression is given as an equation image in the source).
  • c_1 refers to the average pixel intensity inside the contour C.
  • c_2 refers to the average pixel intensity outside the contour C; they satisfy c_1(φ) = average(μ_0) in {φ ≥ 0} and c_2(φ) = average(μ_0) in {φ < 0}.
  • c_ip is the average pixel intensity of the points satisfying ia(p), and c_op is the average pixel intensity of the points satisfying oa(p).
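A sketch of computing c_ip and c_op for one contour point, assuming the image is a NumPy array and using the disk neighbourhood of radius R described above (R = 8 in the embodiment); names and fallback values are illustrative:

```python
import numpy as np

def local_means(img, phi, p, R=8):
    """Mean intensity of ia(p) (inside the contour) and oa(p) (outside),
    restricted to the disk of radius R centred on contour point p."""
    r0, c0 = p  # (row, col) of the contour point
    rows, cols = np.ogrid[:img.shape[0], :img.shape[1]]
    disk = (rows - r0) ** 2 + (cols - c0) ** 2 <= R ** 2
    ia = disk & (phi > 0)   # neighbourhood points inside the contour
    oa = disk & (phi < 0)   # neighbourhood points outside the contour
    c_ip = img[ia].mean() if ia.any() else 0.0
    c_op = img[oa].mean() if oa.any() else 0.0
    return c_ip, c_op
```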
  • Applying the Euler-Lagrange variational method and gradient descent flow to the energy function F yields the partial differential equation of curve evolution.
  • (x, y) ∈ a(p) means that the point (x, y) is around the contour point p, i.e., within the circle with p as the center and R as the radius.
  • In the curve evolution process, the level set of the n-th iteration is φ_n, and the level set of the (n+1)-th iteration is φ_{n+1} = φ_n + Δt·(∂φ/∂t).
  • The finite difference method is used to compute the horizontal and vertical partial derivatives over the two-dimensional image.
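The full right-hand side of the evolution PDE appears only as an equation image in the source; as an example of the finite-difference machinery it requires, here is the curvature term div(∇φ/|∇φ|) that the Length energy contributes in typical level-set models, computed with central differences (the term's presence in this particular PDE is an assumption):

```python
import numpy as np

def curvature(phi, tiny=1e-8):
    """Finite-difference curvature div(grad(phi) / |grad(phi)|)."""
    gy, gx = np.gradient(phi)             # vertical and horizontal derivatives
    norm = np.sqrt(gx ** 2 + gy ** 2) + tiny
    nx, ny = gx / norm, gy / norm         # unit-normal components
    # divergence of the unit normal field
    return np.gradient(ny, axis=0) + np.gradient(nx, axis=1)
```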
  • The contour point p is determined to be in the target edge region or in a non-target edge region based on the difference between the pixel intensities inside and outside the contour.
  • The specific method is as follows: in a blurred boundary image, the mean pixel intensities inside and outside the contour differ greatly in the target edge region, while around the contour in non-target edge regions the difference is small. When the contour point p is in a non-target edge region, the values of c_ip and c_op are similar, i.e., c_ip ≈ c_op with |c_ip − c_op| ≤ c_d, where c_d is the threshold for judging whether c_ip and c_op are close. The difference d_p between c_ip and c_op is computed counter-clockwise for every contour point, a closed-loop queue D is built in that order and smoothed with a Gaussian filter of width R, and D is searched for segments ΔC longer than 2R with d_p ≤ c_d (steps S2.3.1 to S2.3.3).
  • In step S2.3.4, if a segment satisfying step S2.3.3 exists, all contour points in that segment are in non-target edge regions, and the other contour points are in the target edge region. A sketch of this classification follows.
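A sketch of steps S2.3.1 through S2.3.4, assuming the contour points are already ordered counter-clockwise and that a one-dimensional Gaussian filter with wrap-around boundary handling stands in for the "Gaussian filter of width R" on the closed-loop queue D:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def non_edge_points(d, R=8, c_d=8.0):
    """d[k] = |c_ip - c_op| for contour point k, counter-clockwise.
    Returns a boolean array, True where the point is judged to lie in a
    non-target edge region (a segment longer than 2R with d_p <= c_d)."""
    n = len(d)
    d_smooth = gaussian_filter1d(np.asarray(d, float), sigma=R, mode="wrap")
    flat = np.concatenate([d_smooth <= c_d] * 2)   # unwrap the closed loop once
    non_edge = np.zeros(n, dtype=bool)
    i = 0
    while i < n:
        if flat[i]:
            j = i
            while j < 2 * n and flat[j]:           # extend the run of flat points
                j += 1
            if j - i > 2 * R:                      # segment Delta-C longer than 2R
                non_edge[np.arange(i, j) % n] = True
            i = j
        else:
            i += 1
    return non_edge
```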
  • The sum of the energy inside the contour over the local regions of contour points in the target edge region, and the corresponding sum outside the contour, are given as equation images in the source.
  • If the contour point p is in a non-target edge region, it is further determined whether p is in the foreground region or the background region. Since the characteristics of the region around a contour point resemble those of the region it lies in, the fuzzy boundary image is divided into several sub-regions according to image characteristics, and within these sub-regions it is determined whether the contour point p is in the foreground or the background region. The specific method is as follows:
  • The fuzzy boundary image is divided into several sub-regions according to image characteristics, and the sub-region Ο where the contour segment ΔC is located is determined.
  • One sixth of the maximum distance between x_0 and the sub-region boundary is taken as the standard deviation σ_x of the X-axis part of the Gaussian function, and one sixth of the maximum distance between y_0 and the sub-region boundary as the standard deviation σ_y of the Y-axis part. The two-dimensional Gaussian function assigns a weight w_ij to each point in the sub-region; the weights w_ij inside and outside the contour are normalized separately, yielding the normalized inside-contour weights w_ij_in and the normalized outside-contour weights w_ij_out. A sketch follows.
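A sketch of this weighting, assuming the sub-region is given as a boolean mask whose axis-aligned bounding box supplies the "sub-region boundary" distances; the weighted means c_o1 and c_o2 at the end follow the obvious reading of the normalized weights, which is an assumption, since the exact formulas are equation images in the source:

```python
import numpy as np

def subregion_means(img, phi, region, center):
    """Gaussian-weighted means inside (c_o1) and outside (c_o2) the
    contour within one sub-region O.

    region -- boolean mask of the sub-region
    center -- (x0, y0) = (col, row) of the middle point of segment Delta-C
    """
    x0, y0 = center
    rows = np.where(region.any(axis=1))[0]
    cols = np.where(region.any(axis=0))[0]
    # sigma = one sixth of the largest distance to the sub-region boundary
    sigma_x = max(x0 - cols[0], cols[-1] - x0) / 6.0
    sigma_y = max(y0 - rows[0], rows[-1] - y0) / 6.0
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    w = np.exp(-((xs - x0) ** 2 / (2 * sigma_x ** 2)
                 + (ys - y0) ** 2 / (2 * sigma_y ** 2))) * region
    w_in = np.where(phi > 0, w, 0.0)
    w_out = np.where(phi <= 0, w, 0.0)
    w_in /= max(w_in.sum(), 1e-12)    # normalized weights w_ij_in
    w_out /= max(w_out.sum(), 1e-12)  # normalized weights w_ij_out
    c_o1 = np.sum(w_in * img)         # assumed weighted-mean reading of c_o1
    c_o2 = np.sum(w_out * img)
    return c_o1, c_o2
```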
  • If the contour point p is in the foreground region, the evolution direction of the contour point p is toward the outside of the contour.
  • The evolution-direction correction is embodied by increasing the energy outside the contour within the local regions of foreground contour points; the increased energy is defined by an equation image in the source.
  • If the contour point p is in the background region, the evolution direction of the contour point p is toward the inside of the contour.
  • The evolution-direction correction is embodied by increasing the energy inside the contour within the local regions of background contour points.
  • The increased energy is likewise defined by an equation image in the source.
  • In step S2.4, the contour is evolved iteratively until the maximum number of iterations iter is reached or the contour changes only slightly or not at all, where 200 ≤ iter ≤ 10000. The contour-change measure indicates how much the contour moves; if the change is small for several consecutive iterations, the iteration stops. A sketch of the loop follows.
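A sketch of this loop; dphi_dt stands for the evolution PDE of step S2.2 (whose full right-hand side is an equation image in the source), Δt = 0.1 and iter = 1000 follow the embodiment, and the change measure and thresholds are illustrative assumptions:

```python
import numpy as np

def evolve(phi, dphi_dt, dt=0.1, max_iter=1000, tol=1e-3, patience=5):
    """phi <- phi + dt * dphi/dt until max_iter is reached or the contour
    barely changes for several consecutive iterations."""
    calm = 0
    for _ in range(max_iter):
        step = dt * dphi_dt(phi)
        phi = phi + step
        if np.abs(step).mean() < tol:   # proxy for the contour-change measure
            calm += 1
            if calm >= patience:        # small change several times in a row
                break
        else:
            calm = 0
    return phi
```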
  • The present invention applies the active contour model to the field of fuzzy boundary image segmentation, further optimizing the segmentation results of the deep convolutional neural network model.
  • The energy associated with the image pixels is obtained by superimposing locally computed characteristics of the region around each contour point.
  • The judgment of the region each contour point lies in and the correction of its evolution direction are added, giving this method the ability to segment fuzzy boundaries and improving the precision of boundary segmentation.
  • Fig. 1 is the original fuzzy boundary image in the embodiment of the present invention, a thyroid ultrasound image.
  • Fig. 2 is the boundary label image of the embodiment of the present invention; the white line delineates the thyroid region.
  • FIG. 3 is a schematic diagram of the result of segmenting the thyroid region with the U-Net deep convolutional neural network in an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of the result of segmenting the thyroid region with the depth model U-Net combined with the active contour model in an embodiment of the present invention.
  • Fig. 5 is a schematic diagram of the local region of a contour point p in an embodiment of the present invention.
  • Fig. 6 is a schematic diagram of a thyroid ultrasound transverse scan image and its sub-region division in an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of a thyroid ultrasound longitudinal scan image and its sub-region division in an embodiment of the present invention.
  • Fig. 8 is a flowchart of the steps of an embodiment of the present invention.
  • The automatic segmentation method for fuzzy boundary images based on active contour and deep learning includes the following steps:
  • For a fuzzy boundary image, such as the thyroid ultrasound image shown in Figure 1, the trained U-Net convolutional neural network model is used to segment the thyroid region, yielding the U-Net segmentation result image.
  • The points at the boundary between the target region and the non-target region constitute the target boundary B.
  • D(x, y) is the shortest distance between each point (x, y) of the image and the target boundary B.
  • The energy function contains three parts: 1) the perimeter and area of the contour; 2) the energy of the local regions around the contour; 3) the contour constraint energy.
  • C represents the current segmentation contour.
  • C_0 represents the initialized segmentation contour.
  • Length(C) represents the perimeter of the contour C.
  • Area(inside(C)) represents the area of the region inside the contour C.
  • μ_0(x, y) refers to the pixel intensity of the source image I at (x, y).
  • c_1 refers to the average pixel intensity inside the contour C.
  • c_2 refers to the average pixel intensity outside the contour C.
  • p refers to a point on the contour C.
  • p ∈ N(C) means that the contour point p is in the target edge region.
  • p ∈ F(C) means that the contour point p is in the foreground (target) region.
  • p ∈ B(C) means that the contour point p is in the background region.
  • ia(p) refers to the points that are around the contour point p and inside the contour C.
  • oa(p) refers to the points that are around the contour point p and outside the contour C.
  • The Heaviside function H and the Dirac function δ_0 are defined as H(z) = 1 for z ≥ 0, H(z) = 0 for z < 0, and δ_0(z) = dH(z)/dz.
  • The level set φ, the function H, and the function δ_0 represent the perimeter and area of the contour: Length{φ=0} = ∫_Ω δ_0(φ(x,y)) |∇φ(x,y)| dx dy and Area{φ>0} = ∫_Ω H(φ(x,y)) dx dy.
  • The contour constraint energy is the difference between the current contour C and the initial contour C_0, expressed with the level set φ, the function H, and φ_I.
  • The contour constraint energy is expressed as the difference between the current level set φ and the initial level set φ_I: (C − C_0)² = ∫_Ω (H(φ(x,y)) − H(φ_I(x,y)))² dx dy.
  • The local region energy of the contour is the sum of the interior and exterior energies around all contour points. The energy of the region around the contour is computed locally: for each contour point, the energies inside and outside the contour within its local region are computed separately and then superimposed to obtain the overall energy; the individual terms, expressed with the level set φ and the function H, are given as equation images in the source.
  • For a point p(x_p, y_p) on the contour C, φ(p) = 0.
  • a(p) denotes the points around the contour point p.
  • "Around the contour point p" means within the circle with p as the center and R as the radius.
  • ia(p) represents the points around the contour point p and inside the contour C (φ > 0).
  • oa(p) represents the points around the contour point p and outside the contour C (φ < 0).
  • After each energy term is expressed with the level set method, the energy function F is defined accordingly (the full expression is given as an equation image in the source).
  • c_1 refers to the average pixel intensity inside the contour C.
  • c_2 refers to the average pixel intensity outside the contour C; they satisfy c_1(φ) = average(μ_0) in {φ ≥ 0} and c_2(φ) = average(μ_0) in {φ < 0}.
  • c_ip is the average pixel intensity of the points satisfying ia(p), and c_op is the average pixel intensity of the points satisfying oa(p).
  • Applying the Euler-Lagrange variational method and gradient descent flow to the energy function F yields the partial differential equation of curve evolution.
  • (x, y) ∈ a(p) means that the point (x, y) is around the contour point p, i.e., within the circle with p as the center and R as the radius.
  • In the curve evolution process, the level set of the n-th iteration is φ_n, and the level set of the (n+1)-th iteration is φ_{n+1} = φ_n + Δt·(∂φ/∂t).
  • The finite difference method is used to compute the horizontal and vertical partial derivatives over the two-dimensional image.
  • In Fig. 5, the black box represents an image region.
  • The closed black curve is the contour C.
  • The region inside the contour C is denoted Inside(C).
  • The region outside the contour C is denoted Outside(C).
  • The point p is a point on the contour C.
  • ia(p) refers to the region around the contour point p and inside the contour C.
  • oa(p) refers to the region around the contour point p and outside the contour C.
  • The region around the contour point p is the circle with p as the center and R as the radius, drawn as the black dashed circle in the figure.
  • The specific method is as follows: in the blurred boundary image, the mean pixel intensities inside and outside the contour differ greatly in the target edge region.
  • Around the contour in non-target edge regions, the difference between the inner and outer mean pixel intensities is small; when the contour point p is in a non-target edge region, the values of c_ip and c_op are similar, i.e., c_ip ≈ c_op.
  • In step S2.3.4, if a segment satisfying step S2.3.3 exists, all contour points in that segment are in non-target edge regions, and the other contour points are in the target edge region.
  • The sum of the energy inside the contour over the local regions of contour points in the target edge region, and the corresponding sum outside, are given as equation images in the source.
  • If the contour point p is in a non-target edge region, it is further determined whether p is in the foreground region or the background region. Since the characteristics of the region around a contour point resemble those of the region it lies in, the fuzzy boundary image is divided into several sub-regions according to image characteristics, and the determination is made within these sub-regions. In this example, thyroid ultrasound images are used as test images. Thyroid ultrasound images come as transverse scans and longitudinal scans, as shown in Figure 6 and Figure 7. In Figure 6, the left and right dividing lines separate the trachea and carotid artery regions, and the upper and lower dividing lines reduce the influence of acoustic attenuation.
  • In some thyroid ultrasound images, the pixel intensity decreases as the depth increases.
  • The upper part is generally brighter than the lower part, and the dividing lines also separate out the muscle region.
  • The upper and lower dividing lines in Figure 7 likewise reduce the influence of acoustic attenuation and separate out the muscle region.
  • Within the sub-regions A, B, C, and D it is determined whether the contour point p is in the foreground region or the background region. The specific steps are as follows:
  • If the contour point p is in the foreground region, the evolution direction of the contour point p is toward the outside of the contour.
  • The evolution-direction correction is embodied by increasing the energy outside the contour within the local regions of foreground contour points.
  • The increased energy is defined by an equation image in the source.
  • If the contour point p is in the background region, the evolution direction of the contour point p is toward the inside of the contour.
  • The evolution-direction correction is embodied by increasing the energy inside the contour within the local regions of background contour points.
  • The increased energy is likewise defined by an equation image in the source.
  • FIG. 2 is the reference segmentation image, annotated by an experienced doctor.
  • The U-Net segmentation results in Figure 3 exhibit segmentation errors and under-segmentation.
  • After the active contour model is applied, the resulting image removes the incorrectly segmented regions and expands the contour outward in the blurred regions to cover some of the under-segmented areas.
  • The automatic segmentation method for fuzzy boundary images based on active contour and deep learning aims to enable the segmentation model to segment fuzzy boundary regions while fine-tuning the segmentation contour so that it lies as close to the target boundary as possible.
  • The present invention combines the deep convolutional network model with the active contour model so that the model achieves accurate segmentation results.
  • The experimental data of the present invention are thyroid ultrasound images; the data set contains 309 images, of which 150 are used as the training set and the remaining 159 as the test set. The 150 training images are used to train the U-Net model, the trained model segments the 159 test images, and the active contour model then further fine-tunes the U-Net segmentation results.
  • The quantitative indicators of the segmentation results are Accuracy = (TP + TN)/(A_P + A_N), PPV = TP/(TP + FP), and IOU = TP/(TP + FP + FN).
  • TP, TN, FP, FN, A_P, and A_N represent true positives, true negatives, false positives, false negatives, all positives (A_P), and all negatives (A_N), respectively.
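For reference, a sketch computing these indicators from binary masks; the formulas above are the standard ones consistent with the values reported below, since the source gives them only as equation images:

```python
import numpy as np

def quantitative_indicators(pred, truth):
    """Accuracy, PPV, and IOU of a predicted mask against the label mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # true positives
    tn = np.sum(~pred & ~truth)  # true negatives
    fp = np.sum(pred & ~truth)   # false positives
    fn = np.sum(~pred & truth)   # false negatives
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    ppv = tp / (tp + fp)
    iou = tp / (tp + fp + fn)
    return accuracy, ppv, iou
```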
  • The average quantitative indicators obtained after segmenting the 159 images are shown in Table 1.
  • Compared with using U-Net alone, the present invention's combination of U-Net and the active contour model classifies pixels more accurately in fine-grained segmentation, with Accuracy reaching 0.9933; of the region segmented as thyroid, 0.9278 is correct thyroid area, an improvement of 2.78% in precision; and the intersection-over-union of the segmented thyroid region with the true thyroid region is 0.9026, 1.54% higher than with U-Net alone.
  • The improvements in the quantitative indicators Accuracy, PPV, and IOU show that the present invention further improves the accuracy of target segmentation in blurred images and obtains fine, accurate segmentation results at fuzzy boundaries.
  • On the basis of U-Net, the present invention uses the active contour model to obtain better fuzzy boundary image segmentation results.
  • The automatic segmentation method for fuzzy boundary images based on active contour and deep learning is able to segment the fuzzy boundaries in fuzzy boundary images while fine-tuning the segmentation contour to bring it close to the target boundary.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses an automatic segmentation method for fuzzy boundary images based on active contour and deep learning. The method first uses a deep convolutional neural network model to segment the fuzzy boundary image and obtain an initial segmentation result; the contour of the image's interior region segmented by the deep convolutional neural network model is then used as the initialization contour and contour constraint of the active contour model; the active contour model uses the image characteristics of the region around each contour point to drive the contour toward the target edge, producing a precise dividing line between the target region and the other, background regions. On the basis of the deep convolutional neural network model, the present invention introduces the active contour model to further refine the segmentation results of fuzzy boundary images; it has the ability to segment the fuzzy boundaries in an image and further improves the segmentation accuracy of fuzzy boundary images.

Description

Automatic segmentation method for fuzzy boundary images based on active contour and deep learning

Technical field

The present invention belongs to the technical field of fuzzy boundary image processing, and specifically relates to an automatic segmentation method for fuzzy boundary images based on active contour and deep learning.

Background

The difficulty of fuzzy image segmentation lies in accurately locating complex boundaries and correctly segmenting tiny isolated targets. Complex boundaries include blurred boundaries, vanishing boundaries, complex boundary interactions, and changeable shapes. The ultrasound image is a common type of blurred image: its low contrast and heavy noise often blur the target edges or make them disappear entirely, and the actual target boundary is easily affected by artifacts, or even partially covered by a large number of them. Accurate segmentation of fuzzy boundary images has become a current challenge.

In recent years, deep convolutional neural network models have achieved remarkable results in semantic segmentation, including on low-contrast images. However, fuzzy boundaries carry a degree of ambiguity and easily introduce noise in the training stage, so deep models still lack sufficient fuzzy boundary segmentation capability when applied to fuzzy boundary image segmentation. Among traditional ultrasound image segmentation methods, the active contour model based on level sets is insensitive to noise and can evolve contours over the fuzzy boundary of a target, bringing the contour close to the target boundary; however, it requires contour initialization and struggles with complex images. Deep convolutional neural network models can already achieve good fuzzy boundary segmentation results; applying an active contour model to such results effectively alleviates the initialization-contour limitation of the active contour model and, through fine adjustment of local regions, further improves the fuzzy boundary segmentation capability and the accuracy of the boundary segmentation results.

The automatic segmentation method for fuzzy boundary images based on active contour and deep learning proposed by the present invention combines the active contour model with the deep convolutional neural network model and can accurately segment fuzzy boundary images.

Summary of the invention

The present invention, an automatic segmentation method for fuzzy boundary images based on active contour and deep learning, realizes automatic segmentation of fuzzy boundary images while further improving the accuracy of fuzzy boundary segmentation, achieving accurate segmentation of targets with fuzzy boundaries in the image. First, the deep learning model segments the fuzzy boundary image to obtain the initial target segmentation result; then the active contour model fine-tunes the model's segmentation result to obtain more accurate normal boundary and fuzzy boundary segmentation results. The deep active contour model proposed by the present invention uses local evolution of the contour points to drive the contour toward the target boundary, while using the initialized contour to restrain excessive evolution of the contour.

The object of the present invention is achieved by at least one of the following technical solutions.

The automatic segmentation method for fuzzy boundary images based on active contour and deep learning includes the following steps:

S1. Use the deep learning model to segment the fuzzy boundary image and obtain the initial target segmentation result;

S2. Use the active contour model to fine-tune the model's segmentation result and obtain more accurate normal boundary and fuzzy boundary segmentation results, specifically including:

S2.1. Initialize the active contour model with the region boundary from the initial target segmentation result obtained in S1, and construct the initial level set;

S2.2. Express the energy function with the level set, and derive the partial differential equation of curve evolution from the energy function;

S2.3. Determine the region in which each contour point lies;

S2.4. After determining the region of each contour point, compute the value of the partial differential equation and iteratively evolve the contour; segmentation is complete when the maximum number of iterations is reached or the contour changes only slightly or not at all.
Further, in step S2.1, the initial level set φ_I(x,y) of the active contour model is constructed from the segmentation result of the deep learning model; the initial level set is defined as follows:

φ_I(x,y) = D(x,y) if R(x,y) = 0; φ_I(x,y) = −D(x,y) if R(x,y) = 1

where R(x,y) ∈ {0,1} is the deep learning segmentation result: R(x,y) = 0 means the point (x,y) belongs to the target region and R(x,y) = 1 means the point (x,y) belongs to the non-target region. The points at the boundary between the target region and the non-target region constitute the target boundary B, and D(x,y) is the shortest distance between each image point (x,y) and the target boundary B.
Further, in step S2.2, the energy function contains three parts: 1) the perimeter and area of the contour; 2) the local region energy of the contour; 3) the contour constraint energy.

The whole energy function is defined as follows:

[Equation image PCTCN2020125703-appb-000002]

where C denotes the current segmentation contour, C_0 the initialized segmentation contour, Length(C) the perimeter of the contour C, and Area(inside(C)) the area of the region inside the contour C; μ_0(x,y) is the pixel intensity of the source image I at (x,y); c_1 is the mean pixel intensity inside the contour C and c_2 the mean pixel intensity outside the contour C; p is a point on the contour C; p ∈ N(C) means the contour point p is in the target edge region, p ∈ F(C) means it is in the foreground (target) region, and p ∈ B(C) means it is in the background region; ia(p) denotes the points around the contour point p and inside the contour C, and oa(p) the points around the contour point p and outside the contour C; c_ip is the mean pixel intensity of the points satisfying ia(p), and c_op the mean pixel intensity of the points satisfying oa(p); "around the contour point p" means within the circle with p as the center and R as the radius. The first and second terms of the energy function express the perimeter and area of the contour; they keep the contour continuous and smooth and depend only on the size and shape of the contour itself. The third and fourth terms are the local region energy of the contour; they drive the contour toward the boundary of the target and depend on the image data. The fifth term is the contour constraint energy; it restricts the current contour from evolving into regions that deviate greatly from the initialized contour. u, v, λ_1, λ_2, λ_3 are the coefficients of the corresponding energy terms.
Further, in the energy function F, the level set method is used to represent the contour C and the contour's interior and exterior. In the level set method, the contour C in the image domain Ω is represented as the zero level set, φ = 0, defined as:

φ(x,y) > 0 for (x,y) inside C; φ(x,y) = 0 for (x,y) on C; φ(x,y) < 0 for (x,y) outside C

The zero level set φ = 0 represents the contour C.

The Heaviside function H and the Dirac function δ_0 are defined as follows:

H(z) = 1 if z ≥ 0, H(z) = 0 if z < 0; δ_0(z) = dH(z)/dz

H represents the interior and exterior of the contour C:

H(φ(x,y)) = 1 for (x,y) inside C and H(φ(x,y)) = 0 for (x,y) outside C, so the interior is selected by H(φ) and the exterior by 1 − H(φ)

The level set φ, the function H, and the function δ_0 represent the perimeter and area of the contour:

Length{φ=0} = ∫_Ω δ_0(φ(x,y)) |∇φ(x,y)| dx dy

Area{φ>0} = ∫_Ω H(φ(x,y)) dx dy;

The contour constraint energy is the difference between the current contour C and the initialized contour C_0; expressed with the level set φ, the function H, and φ_I, the contour constraint energy is the difference between the current level set φ and the initialized level set φ_I:

(C − C_0)² = ∫_Ω (H(φ(x,y)) − H(φ_I(x,y)))² dx dy;
The local region energy of the contour is the sum of the interior and exterior energies around all contour points. The energy of the region around the contour is computed locally: for each contour point, the energies inside and outside the contour within its local region are computed separately and then superimposed to obtain the overall energy. Expressed with the level set φ and the function H, the terms of the energy of the region around the contour are defined as:

[Equation images PCTCN2020125703-appb-000007 to PCTCN2020125703-appb-000010]

where, for a point p(x_p, y_p) on the contour C, φ(p) = 0; a(p) denotes the points around the contour point p, i.e., within the circle with p as the center and R as the radius; ia(p) denotes the points around the contour point p and inside the contour C: a point a(x_a, y_a) satisfies ia(p) if φ(x_a, y_a) > 0 and

(x_a − x_p)² + (y_a − y_p)² ≤ R²

oa(p) denotes the points around the contour point p and outside the contour C: a point a(x_a, y_a) satisfies oa(p) if φ(x_a, y_a) < 0 and

(x_a − x_p)² + (y_a − y_p)² ≤ R².
Further, after each energy term is expressed with the level set method, the energy function F is defined as:

[Equation image PCTCN2020125703-appb-000013]

where c_1 is the mean pixel intensity inside the contour C and c_2 the mean pixel intensity outside the contour C, satisfying:

c_1(φ) = average(u_0) in {φ ≥ 0}, c_2(φ) = average(u_0) in {φ < 0}; c_1 and c_2 are defined through the level set φ:

c_1(φ) = ∫_Ω μ_0(x,y) H(φ(x,y)) dx dy / ∫_Ω H(φ(x,y)) dx dy

c_2(φ) = ∫_Ω μ_0(x,y) (1 − H(φ(x,y))) dx dy / ∫_Ω (1 − H(φ(x,y))) dx dy

c_ip is the mean pixel intensity of the points satisfying ia(p), and c_op the mean pixel intensity of the points satisfying oa(p); expressed with the level set, they are defined as:

c_ip = Σ_{(x,y)∈a(p)} μ_0(x,y) H(φ(x,y)) / Σ_{(x,y)∈a(p)} H(φ(x,y))

c_op = Σ_{(x,y)∈a(p)} μ_0(x,y) (1 − H(φ(x,y))) / Σ_{(x,y)∈a(p)} (1 − H(φ(x,y)))
The Euler-Lagrange variational method and gradient descent flow applied to the energy function F yield the partial differential equation of curve evolution:

[Equation image PCTCN2020125703-appb-000019]

where

[Equation image PCTCN2020125703-appb-000020]

(x,y) ∈ a(p) means that the point (x,y) is around the contour point p, i.e., within the circle with p as the center and R as the radius. In the curve evolution process, the level set of the n-th iteration is φ_n, and the level set of the (n+1)-th iteration is

φ_{n+1} = φ_n + Δt·(∂φ/∂t)

The finite difference method is used to compute the horizontal and vertical partial derivatives in the two-dimensional image.
Further, in step S2.3, the contour point p is judged to be in the target edge region or in a non-target edge region from the difference between the pixel intensities inside and outside the contour. The specific method is as follows: in the fuzzy boundary image, the mean pixel intensities inside and outside the contour differ greatly in the target edge region, while around the contour in non-target edge regions the difference is small. When the contour point p is in a non-target edge region, the values of c_ip and c_op are similar, i.e., c_ip ≈ c_op with |c_ip − c_op| ≤ c_d, where c_d is the threshold for judging whether c_ip and c_op are close. The judgment proceeds by the following steps:

S2.3.1. Compute, in counter-clockwise order, the difference d_p between c_ip and c_op for each contour point, and build a closed-loop queue D in the order the d_p are obtained;

S2.3.2. Smooth the closed-loop queue D with a Gaussian filter of width R;

S2.3.3. Search the closed-loop queue D for segments ΔC longer than 2R in which d_p ≤ c_d;

S2.3.4. If a segment satisfying step S2.3.3 exists, all contour points in that segment are in non-target edge regions, and the other contour points are in the target edge region.

The sum of the energy inside the contour over the local regions of contour points in the target edge region is:

[Equation image PCTCN2020125703-appb-000022]

The sum of the energy outside the contour over the local regions of contour points in the target edge region is:

[Equation image PCTCN2020125703-appb-000023]
Further, if the contour point p is in a non-target edge region, it is further determined whether the contour point p is in the foreground region or the background region. Since the characteristics of the region around a contour point resemble those of the region it lies in, the fuzzy boundary image is divided into several sub-regions according to image characteristics, and within these sub-regions it is determined whether the contour point p is in the foreground region or the background region. The specific method is as follows:

S2.3.5. First divide the fuzzy boundary image into several sub-regions according to image characteristics, and determine the sub-region Ο in which the contour segment ΔC is located;

S2.3.6. Establish a two-dimensional coordinate system in the image sub-region Ο, taking the coordinates of the contour point in the middle of the contour segment ΔC as the center point center(x_0, y_0) of the two-dimensional Gaussian function f(x,y). Take one sixth of the maximum distance between x_0 and the sub-region boundary as the standard deviation σ_x of the X-axis part of the Gaussian function, and one sixth of the maximum distance between y_0 and the sub-region boundary as the standard deviation σ_y of the Y-axis part. Use the two-dimensional Gaussian function to assign a weight w_ij to each point in the sub-region, and normalize the weights w_ij inside and outside the contour separately, obtaining the normalized inside-contour weights w_ij_in and the normalized outside-contour weights w_ij_out;

S2.3.7. Use the normalized weights w_ij_in, w_ij_out and the pixel intensity μ_0(i,j) to compute the means c_o1 and c_o2 inside and outside the contour in the sub-region Ο. When the point (i,j) is inside the contour in the sub-region Ο,

[Equation image PCTCN2020125703-appb-000024]

where N is the number of points inside the contour in the sub-region Ο; when the point (i,j) is outside the contour in the sub-region Ο,

[Equation image PCTCN2020125703-appb-000025]

where M is the number of points outside the contour in the sub-region Ο;

S2.3.8. Compute the mean pixel intensity m_Δc of the regions around all contour points of the contour segment ΔC, and compare m_Δc with c_o1 and c_o2: if |m_Δc − c_o1| ≤ |m_Δc − c_o2|, the contour points of the contour segment ΔC are in the foreground region; otherwise they are in the background region.
Further, if the contour point p is in the foreground region, the evolution direction of the contour point p is toward the outside of the contour; in the energy function, the evolution-direction correction is embodied by increasing the energy outside the contour within the local regions of foreground contour points, the increased energy being defined as:

[Equation image PCTCN2020125703-appb-000026]

If the contour point p is in the background region, the evolution direction of the contour point p is toward the inside of the contour; in the energy function, the evolution-direction correction is embodied by increasing the energy inside the contour within the local regions of background contour points, the increased energy being defined as:

[Equation image PCTCN2020125703-appb-000027]
Further, in step S2.4, the contour is iteratively evolved through

φ_{n+1} = φ_n + Δt·(∂φ/∂t)

until the maximum number of iterations iter is reached or the contour changes only slightly or not at all, where 200 ≤ iter ≤ 10000. The contour change

[Equation image PCTCN2020125703-appb-000029]

indicates how much the contour has changed; if the change is small for several consecutive iterations, the iteration stops.
Compared with the prior art, the advantages of the present invention are as follows:

The present invention applies the active contour model to the field of fuzzy boundary image segmentation, further optimizing the segmentation results of the deep convolutional neural network model. When constructing the energy function, the energy related to the image pixels is, for the first time, obtained by superimposing locally computed characteristics of the region around each contour point; in addition, the judgment of the region each contour point lies in and the correction of the contour point's evolution direction are introduced, giving this method the ability to segment fuzzy boundaries and improving the precision of boundary segmentation.

Brief description of the drawings

Fig. 1 is the original fuzzy boundary image in the embodiment of the present invention, a thyroid ultrasound image.

Fig. 2 is the boundary label image of the embodiment of the present invention; the white line delineates the thyroid region.

Fig. 3 is a schematic diagram of the result of segmenting the thyroid region with the U-Net deep convolutional neural network in the embodiment of the present invention.

Fig. 4 is a schematic diagram of the result of segmenting the thyroid region with the depth model U-Net combined with the active contour model in the embodiment of the present invention.

Fig. 5 is a schematic diagram of the local region of a contour point p in the embodiment of the present invention.

Fig. 6 is a schematic diagram of a thyroid ultrasound transverse scan image and its sub-region division in the embodiment of the present invention.

Fig. 7 is a schematic diagram of a thyroid ultrasound longitudinal scan image and its sub-region division in the embodiment of the present invention.

Fig. 8 is a flowchart of the steps of the embodiment of the present invention.
Detailed description of the embodiments

The specific implementation of the present invention is further described below with reference to the drawings and an embodiment, but the implementation and protection of the present invention are not limited thereto. It should be noted that anything not specifically detailed below can be implemented by those skilled in the art with reference to the prior art.

Embodiment:

The automatic segmentation method for fuzzy boundary images based on active contour and deep learning, as shown in Fig. 8, includes the following steps:

S1. For a fuzzy boundary image, such as the thyroid ultrasound image shown in Fig. 1, use the trained U-Net convolutional neural network model to segment the thyroid region and obtain the U-Net segmentation result image;

S2. Use the active contour model to fine-tune the model's segmentation result and obtain more accurate normal boundary and fuzzy boundary segmentation results; as shown in Fig. 8, this includes the following steps:

S2.1. Initialize the active contour model with the thyroid region boundary in Fig. 3 and construct the initial level set φ_I(x,y); set the parameters of the active contour model to μ = 1, ν = 0, λ_1 = 1, λ_2 = 1, λ_3 = 1, Δt = 0.1, R = 8, c_d = 8, ε = 1. The initial level set is defined as follows:

φ_I(x,y) = D(x,y) if R(x,y) = 0; φ_I(x,y) = −D(x,y) if R(x,y) = 1

where R(x,y) ∈ {0,1} is the deep learning segmentation result: R(x,y) = 0 means the point (x,y) belongs to the target region and R(x,y) = 1 means the point (x,y) belongs to the non-target region. The points at the boundary between the target region and the non-target region constitute the target boundary B, and D(x,y) is the shortest distance between each image point (x,y) and the target boundary B.
S2.2. Express the energy function with the level set, and derive the partial differential equation of curve evolution from the energy function.

The energy function contains three parts: 1) the perimeter and area of the contour; 2) the local region energy of the contour; 3) the contour constraint energy.

The whole energy function is defined as follows:

[Equation image PCTCN2020125703-appb-000031]

where C denotes the current segmentation contour, C_0 the initialized segmentation contour, Length(C) the perimeter of the contour C, and Area(inside(C)) the area of the region inside the contour C; μ_0(x,y) is the pixel intensity of the source image I at (x,y); c_1 is the mean pixel intensity inside the contour C and c_2 the mean pixel intensity outside the contour C; p is a point on the contour C; p ∈ N(C) means the contour point p is in the target edge region, p ∈ F(C) means it is in the foreground (target) region, and p ∈ B(C) means it is in the background region; ia(p) denotes the points around the contour point p and inside the contour C, and oa(p) the points around the contour point p and outside the contour C; c_ip is the mean pixel intensity of the points satisfying ia(p), and c_op the mean pixel intensity of the points satisfying oa(p); "around the contour point p" means within the circle with p as the center and R as the radius. The first and second terms of the energy function express the perimeter and area of the contour; they keep the contour continuous and smooth and depend only on the size and shape of the contour itself. The third and fourth terms are the local region energy of the contour; they drive the contour toward the boundary of the target and depend on the image data. The fifth term is the contour constraint energy; it restricts the current contour from evolving into regions that deviate greatly from the initialized contour. u, v, λ_1, λ_2, λ_3 are the coefficients of the corresponding energy terms.
Further, in the energy function F, the level set method is used to represent the contour C and the contour's interior and exterior. In the level set method, the contour C in the image domain Ω is represented as the zero level set, φ = 0, defined as:

φ(x,y) > 0 for (x,y) inside C; φ(x,y) = 0 for (x,y) on C; φ(x,y) < 0 for (x,y) outside C

The zero level set φ = 0 represents the contour C.

The Heaviside function H and the Dirac function δ_0 are defined as follows:

H(z) = 1 if z ≥ 0, H(z) = 0 if z < 0; δ_0(z) = dH(z)/dz

H represents the interior and exterior of the contour C:

H(φ(x,y)) = 1 for (x,y) inside C and H(φ(x,y)) = 0 for (x,y) outside C, so the interior is selected by H(φ) and the exterior by 1 − H(φ)

The level set φ, the function H, and the function δ_0 represent the perimeter and area of the contour:

Length{φ=0} = ∫_Ω δ_0(φ(x,y)) |∇φ(x,y)| dx dy

Area{φ>0} = ∫_Ω H(φ(x,y)) dx dy;

The contour constraint energy is the difference between the current contour C and the initialized contour C_0; expressed with the level set φ, the function H, and φ_I, the contour constraint energy is the difference between the current level set φ and the initialized level set φ_I:

(C − C_0)² = ∫_Ω (H(φ(x,y)) − H(φ_I(x,y)))² dx dy;
The local region energy of the contour is the sum of the interior and exterior energies around all contour points. The energy of the region around the contour is computed locally: for each contour point, the energies inside and outside the contour within its local region are computed separately and then superimposed to obtain the overall energy. Expressed with the level set φ and the function H, the terms of the energy of the region around the contour are defined as:

[Equation images PCTCN2020125703-appb-000036 to PCTCN2020125703-appb-000039]

where, for a point p(x_p, y_p) on the contour C, φ(p) = 0; a(p) denotes the points around the contour point p, i.e., within the circle with p as the center and R as the radius; ia(p) denotes the points around the contour point p and inside the contour C: a point a(x_a, y_a) satisfies ia(p) if φ(x_a, y_a) > 0 and

(x_a − x_p)² + (y_a − y_p)² ≤ R²

oa(p) denotes the points around the contour point p and outside the contour C: a point a(x_a, y_a) satisfies oa(p) if φ(x_a, y_a) < 0 and

(x_a − x_p)² + (y_a − y_p)² ≤ R².
Further, after each energy term is expressed with the level set method, the energy function F is defined as:

[Equation image PCTCN2020125703-appb-000042]

where c_1 is the mean pixel intensity inside the contour C and c_2 the mean pixel intensity outside the contour C, satisfying:

c_1(φ) = average(u_0) in {φ ≥ 0}, c_2(φ) = average(u_0) in {φ < 0}. c_1 and c_2 are defined through the level set φ:

c_1(φ) = ∫_Ω μ_0(x,y) H(φ(x,y)) dx dy / ∫_Ω H(φ(x,y)) dx dy

c_2(φ) = ∫_Ω μ_0(x,y) (1 − H(φ(x,y))) dx dy / ∫_Ω (1 − H(φ(x,y))) dx dy

c_ip is the mean pixel intensity of the points satisfying ia(p), and c_op the mean pixel intensity of the points satisfying oa(p); expressed with the level set, they are defined as:

c_ip = Σ_{(x,y)∈a(p)} μ_0(x,y) H(φ(x,y)) / Σ_{(x,y)∈a(p)} H(φ(x,y))

c_op = Σ_{(x,y)∈a(p)} μ_0(x,y) (1 − H(φ(x,y))) / Σ_{(x,y)∈a(p)} (1 − H(φ(x,y)))
The Euler-Lagrange variational method and gradient descent flow applied to the energy function F yield the partial differential equation of curve evolution:

[Equation image PCTCN2020125703-appb-000048]

where

[Equation image PCTCN2020125703-appb-000049]

(x,y) ∈ a(p) means that the point (x,y) is around the contour point p, i.e., within the circle with p as the center and R as the radius. In the curve evolution process, the level set of the n-th iteration is φ_n, and the level set of the (n+1)-th iteration is

φ_{n+1} = φ_n + Δt·(∂φ/∂t)

The finite difference method is used to compute the horizontal and vertical partial derivatives in the two-dimensional image.
S2.3. Determine the region in which each contour point lies. As shown in Fig. 5, the black box represents an image region, the closed black curve is the contour C, the region inside the contour C is denoted Inside(C), and the region outside the contour C is denoted Outside(C); the point p is a point on the contour C; ia(p) is the region around the contour point p and inside the contour C, and oa(p) the region around the contour point p and outside the contour C; "around the contour point p" means within the circle with p as the center and R as the radius, drawn as the black dashed circle in the figure.

The contour point p is judged to be in the target edge region or in a non-target edge region from the difference between the pixel intensities inside and outside the contour. The specific method is as follows: in the fuzzy boundary image, the mean pixel intensities inside and outside the contour differ greatly in the target edge region, while around the contour in non-target edge regions the difference is small. When the contour point p is in a non-target edge region, the values of c_ip and c_op are similar, i.e., c_ip ≈ c_op with |c_ip − c_op| ≤ c_d, where c_d is the threshold for judging whether c_ip and c_op are close. As shown in Fig. 8, the judgment proceeds by the following steps:

S2.3.1. Compute, in counter-clockwise order, the difference d_p between c_ip and c_op for each contour point, and build a closed-loop queue D in the order the d_p are obtained;

S2.3.2. Smooth the closed-loop queue D with a Gaussian filter of width R;

S2.3.3. Search the closed-loop queue D for segments ΔC longer than 2R in which d_p ≤ c_d;

S2.3.4. If a segment satisfying step S2.3.3 exists, all contour points in that segment are in non-target edge regions, and the other contour points are in the target edge region.

The sum of the energy inside the contour over the local regions of contour points in the target edge region is:

[Equation image PCTCN2020125703-appb-000051]

The sum of the energy outside the contour over the local regions of contour points in the target edge region is:

[Equation image PCTCN2020125703-appb-000052]
Further, if the contour point p is in a non-target edge region, it is further determined whether the contour point p is in the foreground region or the background region. Since the characteristics of the region around a contour point resemble those of the region it lies in, the fuzzy boundary image is divided into several sub-regions according to image characteristics, and the determination is made within these sub-regions. In this example, thyroid ultrasound images are used as test images. Thyroid ultrasound images come as transverse scans and longitudinal scans, as shown in Fig. 6 and Fig. 7. In Fig. 6, the left and right dividing lines separate the trachea and carotid artery regions, and the upper and lower dividing lines reduce the influence of acoustic attenuation (in some thyroid ultrasound images the pixel intensity weakens as the depth increases, so the upper part is generally brighter than the lower part) while also separating out the muscle region. The upper and lower dividing lines in Fig. 7 likewise reduce the influence of acoustic attenuation and separate out the muscle region. Within these sub-regions, the sub-regions A, B, C, and D determine whether the contour point p is in the foreground region or the background region. The specific steps are as follows:

S2.3.5. First divide the fuzzy boundary image into several sub-regions according to image characteristics, and determine the sub-region Ο ∈ {A, B, C, D} in which the contour segment ΔC is located;

S2.3.6. Establish a two-dimensional coordinate system in the image sub-region Ο, taking the coordinates of the contour point in the middle of the contour segment ΔC as the center point center(x_0, y_0), giving the two-dimensional Gaussian function

f(x,y) = exp(−((x − x_0)²/(2σ_x²) + (y − y_0)²/(2σ_y²)))

Take one sixth of the maximum distance between x_0 and the sub-region boundary as the standard deviation σ_x of the X-axis part of the Gaussian function, and one sixth of the maximum distance between y_0 and the sub-region boundary as the standard deviation σ_y of the Y-axis part. Use the two-dimensional Gaussian function to assign a weight w_ij to each point in the sub-region, and normalize the weights w_ij inside and outside the contour separately, obtaining the normalized inside-contour weights w_ij_in and the normalized outside-contour weights w_ij_out;

S2.3.7. Use the normalized weights w_ij_in, w_ij_out and the pixel intensity μ_0(i,j) to compute the means c_o1 and c_o2 inside and outside the contour in the sub-region Ο. When the point (i,j) is inside the contour in the sub-region Ο,

[Equation image PCTCN2020125703-appb-000054]

where N is the number of points inside the contour in the sub-region Ο; when the point (i,j) is outside the contour in the sub-region Ο,

[Equation image PCTCN2020125703-appb-000055]

where M is the number of points outside the contour in the sub-region Ο.

S2.3.8. Compute the mean pixel intensity m_Δc of the regions around all contour points of the contour segment ΔC, and compare m_Δc with c_o1 and c_o2: if |m_Δc − c_o1| ≤ |m_Δc − c_o2|, the contour points of the contour segment ΔC are in the foreground region; otherwise the contour points are in the background region.
If the contour point p is in the foreground region, the evolution direction of the contour point p is toward the outside of the contour; in the energy function, the evolution-direction correction is embodied by increasing the energy outside the contour within the local regions of foreground contour points, the increased energy being defined as:

[Equation image PCTCN2020125703-appb-000056]

If the contour point p is in the background region, the evolution direction of the contour point p is toward the inside of the contour; in the energy function, the evolution-direction correction is embodied by increasing the energy inside the contour within the local regions of background contour points, the increased energy being defined as:

[Equation image PCTCN2020125703-appb-000057]
S2.4. After determining the region of each contour point, compute the value of the partial differential equation and, through

φ_{n+1} = φ_n + Δt·(∂φ/∂t)

iteratively evolve the contour until the maximum number of iterations iter = 1000 is reached or the contour changes only slightly or not at all, completing the segmentation. The contour change

[Equation image PCTCN2020125703-appb-000059]

indicates how much the contour has changed; if the change is small for several consecutive iterations, the iteration stops.
In this embodiment, Fig. 2 is the reference segmentation image, annotated by an experienced doctor. The U-Net segmentation result in Fig. 3 exhibits segmentation errors and under-segmentation; after the active contour model is applied, as shown in Fig. 4, the resulting image removes the incorrectly segmented regions and expands the contour outward in the blurred regions to cover some of the under-segmented areas.

The automatic segmentation method for fuzzy boundary images based on active contour and deep learning aims to enable the segmentation model to segment fuzzy boundary regions while fine-tuning the segmentation contour so that it lies as close to the target boundary as possible. The present invention combines the deep convolutional network model with the active contour model so that the model achieves accurate segmentation results. The experimental data of the present invention are thyroid ultrasound images; the data set contains 309 images, of which 150 are used as the training set and the remaining 159 as the test set. The 150 training images are used to train the U-Net model, the trained model segments the 159 test images, and the active contour model is then used to further fine-tune the U-Net segmentation results. The quantitative indicators of the segmentation results are as follows:

Accuracy = (TP + TN) / (A_P + A_N)

PPV = TP / (TP + FP)

IOU = TP / (TP + FP + FN)

where TP, TN, FP, FN, A_P, and A_N denote True Positive, True Negative, False Positive, False Negative, All Positive, and All Negative, respectively. The average quantitative indicators obtained after segmenting the 159 images are shown in Table 1.

Table 1

[Table image PCTCN2020125703-appb-000063]

As the table shows, compared with using U-Net alone, the present invention's combination of U-Net and the active contour model classifies pixels more accurately in fine-grained segmentation, with Accuracy reaching 0.9933; of the region segmented as thyroid, 0.9278 is correct thyroid area, an improvement of 2.78% in precision; and the intersection-over-union of the segmented thyroid region with the true thyroid region is 0.9026, 1.54% higher than with U-Net alone. The improvements of the present invention in the quantitative indicators Accuracy, PPV, and IOU show that it can further improve the accuracy of target segmentation in blurred images and obtain fine, accurate segmentation results at fuzzy boundaries. On the basis of U-Net, the present invention uses the active contour model to obtain better fuzzy boundary image segmentation results. The automatic segmentation method for fuzzy boundary images based on active contour and deep learning can segment the fuzzy boundaries in fuzzy boundary images while fine-tuning the segmentation contour to bring it close to the target boundary.

Claims (9)

  1. An automatic segmentation method for fuzzy boundary images based on active contour and deep learning, characterized by comprising the following steps:
    S1. Use the deep learning model to segment the fuzzy boundary image and obtain the initial target segmentation result;
    S2. Use the active contour model to fine-tune the model's segmentation result and obtain more accurate normal boundary and fuzzy boundary segmentation results, specifically including:
    S2.1. Initialize the active contour model with the region boundary from the initial target segmentation result obtained in S1, and construct the initial level set;
    S2.2. Express the energy function with the level set, and derive the partial differential equation of curve evolution from the energy function;
    S2.3. Determine the region in which each contour point lies;
    S2.4. After determining the region of each contour point, compute the value of the partial differential equation and iteratively evolve the contour; segmentation is complete when the maximum number of iterations is reached or the contour changes only slightly or not at all.
  2. The automatic segmentation method for fuzzy boundary images based on active contour and deep learning according to claim 1, characterized in that, in step S2.1, the initial level set φ_I(x,y) of the active contour model is constructed from the segmentation result of the deep learning model; the initial level set is defined as follows:
    φ_I(x,y) = D(x,y) if R(x,y) = 0; φ_I(x,y) = −D(x,y) if R(x,y) = 1
    where R(x,y) ∈ {0,1} is the deep learning segmentation result: R(x,y) = 0 means the point (x,y) belongs to the target region and R(x,y) = 1 means the point (x,y) belongs to the non-target region; the points at the boundary between the target region and the non-target region constitute the target boundary B, and D(x,y) is the shortest distance between each image point (x,y) and the target boundary B.
  3. The automatic segmentation method for fuzzy boundary images based on active contour and deep learning according to claim 1, characterized in that, in step S2.2, the energy function contains three parts: 1) the perimeter and area of the contour; 2) the local region energy of the contour; 3) the contour constraint energy;
    the whole energy function is defined as follows:
    [Equation image PCTCN2020125703-appb-100002]
    where C denotes the current segmentation contour, C_0 the initialized segmentation contour, Length(C) the perimeter of the contour C, and Area(inside(C)) the area of the region inside the contour C; μ_0(x,y) is the pixel intensity of the source image I at (x,y); c_1 is the mean pixel intensity inside the contour C and c_2 the mean pixel intensity outside the contour C; p is a point on the contour C; p ∈ N(C) means the contour point p is in the target edge region, p ∈ F(C) means it is in the foreground (target) region, and p ∈ B(C) means it is in the background region; ia(p) denotes the points around the contour point p and inside the contour C, and oa(p) the points around the contour point p and outside the contour C; c_ip is the mean pixel intensity of the points satisfying ia(p), and c_op the mean pixel intensity of the points satisfying oa(p); "around the contour point p" means within the circle with p as the center and R as the radius; the first and second terms of the energy function express the perimeter and area of the contour, keep the contour continuous and smooth, and depend only on the size and shape of the contour itself; the third and fourth terms are the local region energy of the contour, drive the contour toward the boundary of the target, and depend on the image data; the fifth term is the contour constraint energy, restricting the current contour from evolving into regions that deviate greatly from the initialized contour; u, v, λ_1, λ_2, λ_3 are the coefficients of the corresponding energy terms.
  4. The automatic segmentation method for fuzzy boundary images based on active contour and deep learning according to claim 3, characterized in that, in the energy function F, the level set method is used to represent the contour C and the contour's interior and exterior; in the level set method, the contour C in the image domain Ω is represented as the zero level set, φ = 0, defined as:
    φ(x,y) > 0 for (x,y) inside C; φ(x,y) = 0 for (x,y) on C; φ(x,y) < 0 for (x,y) outside C
    the zero level set φ = 0 represents the contour C;
    the Heaviside function H and the Dirac function δ_0 are defined as follows:
    H(z) = 1 if z ≥ 0, H(z) = 0 if z < 0; δ_0(z) = dH(z)/dz
    H represents the interior and exterior of the contour C:
    H(φ(x,y)) = 1 for (x,y) inside C and H(φ(x,y)) = 0 for (x,y) outside C, so the interior is selected by H(φ) and the exterior by 1 − H(φ);
    the level set φ, the function H, and the function δ_0 represent the perimeter and area of the contour:
    Length{φ=0} = ∫_Ω δ_0(φ(x,y)) |∇φ(x,y)| dx dy, Area{φ>0} = ∫_Ω H(φ(x,y)) dx dy;
    the contour constraint energy is the difference between the current contour C and the initialized contour C_0; expressed with the level set φ, the function H, and φ_I, it is the difference between the current level set φ and the initialized level set φ_I:
    (C − C_0)² = ∫_Ω (H(φ(x,y)) − H(φ_I(x,y)))² dx dy;
    the local region energy of the contour is the sum of the interior and exterior energies around all contour points; the energy of the region around the contour is computed locally, the energies inside and outside the contour within the local region being computed separately for each contour point and then superimposed to obtain the overall energy; expressed with the level set φ and the function H, the terms of the energy of the region around the contour are defined as:
    [Equation images PCTCN2020125703-appb-100007 to PCTCN2020125703-appb-100010]
    where, for a point p(x_p, y_p) on the contour C, φ(p) = 0; a(p) denotes the points around the contour point p, i.e., within the circle with p as the center and R as the radius; ia(p) denotes the points around the contour point p and inside the contour C: a point a(x_a, y_a) satisfies ia(p) if φ(x_a, y_a) > 0 and
    (x_a − x_p)² + (y_a − y_p)² ≤ R²
    oa(p) denotes the points around the contour point p and outside the contour C: a point a(x_a, y_a) satisfies oa(p) if φ(x_a, y_a) < 0 and
    (x_a − x_p)² + (y_a − y_p)² ≤ R².
  5. The automatic segmentation method for fuzzy boundary images based on active contour and deep learning according to claim 3, characterized in that, after each energy term is expressed with the level set method, the energy function F is defined as:
    [Equation image PCTCN2020125703-appb-100013]
    where c_1 is the mean pixel intensity inside the contour C and c_2 the mean pixel intensity outside the contour C, satisfying c_1(φ) = average(u_0) in {φ ≥ 0} and c_2(φ) = average(u_0) in {φ < 0}; c_1 and c_2 are defined through the level set φ:
    c_1(φ) = ∫_Ω μ_0(x,y) H(φ(x,y)) dx dy / ∫_Ω H(φ(x,y)) dx dy
    c_2(φ) = ∫_Ω μ_0(x,y) (1 − H(φ(x,y))) dx dy / ∫_Ω (1 − H(φ(x,y))) dx dy
    c_ip is the mean pixel intensity of the points satisfying ia(p), and c_op the mean pixel intensity of the points satisfying oa(p); expressed with the level set, they are defined as:
    c_ip = Σ_{(x,y)∈a(p)} μ_0(x,y) H(φ(x,y)) / Σ_{(x,y)∈a(p)} H(φ(x,y))
    c_op = Σ_{(x,y)∈a(p)} μ_0(x,y) (1 − H(φ(x,y))) / Σ_{(x,y)∈a(p)} (1 − H(φ(x,y)))
    the Euler-Lagrange variational method and gradient descent flow applied to the energy function F yield the partial differential equation of curve evolution:
    [Equation image PCTCN2020125703-appb-100019]
    where
    [Equation image PCTCN2020125703-appb-100020]
    (x,y) ∈ a(p) means that the point (x,y) is around the contour point p, i.e., within the circle with p as the center and R as the radius; in the curve evolution process, the level set of the n-th iteration is φ_n, and the level set of the (n+1)-th iteration is
    φ_{n+1} = φ_n + Δt·(∂φ/∂t)
    the finite difference method is used to compute the horizontal and vertical partial derivatives in the two-dimensional image.
  6. The automatic segmentation method for fuzzy boundary images based on active contour and deep learning according to claim 1, characterized in that, in step S2.3, the contour point p is judged to be in the target edge region or in a non-target edge region from the difference between the pixel intensities inside and outside the contour; the specific method is as follows: in the fuzzy boundary image, the mean pixel intensities inside and outside the contour differ greatly in the target edge region, while around the contour in non-target edge regions the difference is small; when the contour point p is in a non-target edge region, the values of c_ip and c_op are similar, i.e., c_ip ≈ c_op with |c_ip − c_op| ≤ c_d, where c_d is the threshold for judging whether c_ip and c_op are close; the judgment proceeds by the following steps:
    S2.3.1. Compute, in counter-clockwise order, the difference d_p between c_ip and c_op for each contour point, and build a closed-loop queue D in the order the d_p are obtained;
    S2.3.2. Smooth the closed-loop queue D with a Gaussian filter of width R;
    S2.3.3. Search the closed-loop queue D for segments ΔC longer than 2R in which d_p ≤ c_d;
    S2.3.4. If a segment satisfying step S2.3.3 exists, all contour points in that segment are in non-target edge regions, and the other contour points are in the target edge region;
    the sum of the energy inside the contour over the local regions of contour points in the target edge region is:
    [Equation image PCTCN2020125703-appb-100022]
    the sum of the energy outside the contour over the local regions of contour points in the target edge region is:
    [Equation image PCTCN2020125703-appb-100023]
  7. The automatic segmentation method for fuzzy boundary images based on active contour and deep learning according to claim 6, characterized in that, if the contour point p is in a non-target edge region, it is further determined whether the contour point p is in the foreground region or the background region; since the characteristics of the region around a contour point resemble those of the region it lies in, the fuzzy boundary image is divided into several sub-regions according to image characteristics, and within these sub-regions it is determined whether the contour point p is in the foreground region or the background region; the specific method is as follows:
    S2.3.5. First divide the fuzzy boundary image into several sub-regions according to image characteristics, and determine the sub-region Ο in which the contour segment ΔC is located;
    S2.3.6. Establish a two-dimensional coordinate system in the image sub-region Ο, taking the coordinates of the contour point in the middle of the contour segment ΔC as the center point center(x_0, y_0) of the two-dimensional Gaussian function f(x,y); take one sixth of the maximum distance between x_0 and the sub-region boundary as the standard deviation σ_x of the X-axis part of the Gaussian function, and one sixth of the maximum distance between y_0 and the sub-region boundary as the standard deviation σ_y of the Y-axis part; use the two-dimensional Gaussian function to assign a weight w_ij to each point in the sub-region, and normalize the weights w_ij inside and outside the contour separately, obtaining the normalized inside-contour weights w_ij_in and the normalized outside-contour weights w_ij_out;
    S2.3.7. Use the normalized weights w_ij_in, w_ij_out and the pixel intensity μ_0(i,j) to compute the means c_o1 and c_o2 inside and outside the contour in the sub-region Ο; when the point (i,j) is inside the contour in the sub-region Ο,
    [Equation image PCTCN2020125703-appb-100024]
    where N is the number of points inside the contour in the sub-region Ο; when the point (i,j) is outside the contour in the sub-region Ο,
    [Equation image PCTCN2020125703-appb-100025]
    where M is the number of points outside the contour in the sub-region Ο;
    S2.3.8. Compute the mean pixel intensity m_Δc of the regions around all contour points of the contour segment ΔC, and compare m_Δc with c_o1 and c_o2: if |m_Δc − c_o1| ≤ |m_Δc − c_o2|, the contour points of the contour segment ΔC are in the foreground region; otherwise they are in the background region.
  8. The automatic segmentation method for fuzzy boundary images based on active contour and deep learning according to claim 7, characterized in that, if the contour point p is in the foreground region, the evolution direction of the contour point p is toward the outside of the contour; in the energy function, the evolution-direction correction is embodied by increasing the energy outside the contour within the local regions of foreground contour points, the increased energy being defined as:
    [Equation image PCTCN2020125703-appb-100026]
    if the contour point p is in the background region, the evolution direction of the contour point p is toward the inside of the contour; in the energy function, the evolution-direction correction is embodied by increasing the energy inside the contour within the local regions of background contour points, the increased energy being defined as:
    [Equation image PCTCN2020125703-appb-100027]
  9. The automatic segmentation method for fuzzy boundary images based on active contour and deep learning according to claim 1, characterized in that, in step S2.4, the contour is iteratively evolved through
    φ_{n+1} = φ_n + Δt·(∂φ/∂t)
    until the maximum number of iterations iter is reached or the contour changes only slightly or not at all, where 200 ≤ iter ≤ 10000; the contour change
    [Equation image PCTCN2020125703-appb-100029]
    indicates how much the contour has changed; if the change is small for several consecutive iterations, the iteration stops.
PCT/CN2020/125703 2019-09-09 2020-10-31 Method for automatic segmentation of fuzzy boundary image based on active contour and deep learning WO2021047684A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/641,445 US20220414891A1 (en) 2019-09-09 2020-10-31 Method for automatic segmentation of fuzzy boundary image based on active contour and deep learning

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910846367.8 2019-09-09
CN201910846367.8A CN110689545B (zh) Method for automatic segmentation of fuzzy boundary image based on active contour and deep learning

Publications (1)

Publication Number Publication Date
WO2021047684A1 true WO2021047684A1 (zh) 2021-03-18

Family

ID=69107917

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/125703 WO2021047684A1 (zh) Method for automatic segmentation of fuzzy boundary image based on active contour and deep learning

Country Status (3)

Country Link
US (1) US20220414891A1 (zh)
CN (1) CN110689545B (zh)
WO (1) WO2021047684A1 (zh)


Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110689545B (zh) Method for automatic segmentation of fuzzy boundary image based on active contour and deep learning
CN115482246B (zh) Image information extraction method and apparatus, electronic device, and readable storage medium
CN113506314B (zh) Method and apparatus for automatically grasping symmetric quadrilateral workpieces against complex backgrounds
CN114387523B (zh) Building extraction method for remote sensing images based on DCNN boundary guidance
CN114708277B (zh) Method and apparatus for automatic retrieval of active regions in ultrasound video images
CN115859485B (zh) Streamline seed point selection method based on ship shape features
CN116703954B (zh) Active contour model method and system driven by global pre-fitting energy
CN117422880B (zh) Segmentation method combining an improved attention mechanism with the CV model, and system thereof
CN117522902B (zh) Adaptive cut method and device based on a geodesic model


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10420523B2 (en) * 2016-03-21 2019-09-24 The Board Of Trustees Of The Leland Stanford Junior University Adaptive local window-based methods for characterizing features of interest in digital images and systems for practicing same
CN107993237A (zh) Geometric active contour model image local segmentation method based on narrow-band constraints
CN110120057B (zh) Fuzzy regional active contour segmentation model based on weighted global and local fitting energy

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886564A (zh) Method and device for segmenting myocardial contour lines in PET cardiac images
CN106056576A (zh) Segmentation method for the aorta in CT images fusing edge and region features
CN106447688A (zh) Effective segmentation method for hyperspectral oil-spill images
US20180330477A1 Systems and methods for analyzing pathologies utilizing quantitative imaging
CN108013904A (zh) Cardiac ultrasound imaging method
CN110689545A (zh) Method for automatic segmentation of fuzzy boundary image based on active contour and deep learning

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113724322A (zh) Cargo pallet positioning method and system for unmanned forklifts
CN113724322B (zh) Cargo pallet positioning method and system for unmanned forklifts
CN115953690A (zh) Lodged crop identification method for travel calibration of unmanned harvesters
CN115953690B (zh) Lodged crop identification method for travel calibration of unmanned harvesters
CN117474927A (zh) Artificial-intelligence-based drive gear production quality detection method
CN117474927B (zh) Artificial-intelligence-based drive gear production quality detection method

Also Published As

Publication number Publication date
CN110689545B (zh) 2023-06-16
CN110689545A (zh) 2020-01-14
US20220414891A1 (en) 2022-12-29

Similar Documents

Publication Publication Date Title
WO2021047684A1 (zh) Method for automatic segmentation of fuzzy boundary image based on active contour and deep learning
WO2019174276A1 (zh) Image processing method, apparatus, device, and medium for locating the center of a target object region
US20150278589A1 Image Processor with Static Hand Pose Recognition Utilizing Contour Triangulation and Flattening
WO2015067084A1 (zh) Human eye positioning method and apparatus
CN115797872B (zh) Machine-vision-based packaging defect identification method, system, device, and medium
JP2017510427A5 (zh)
CN109472792A (zh) Image segmentation method combining a local-entropy local energy functional with a non-convex regularization term
WO2017193414A1 (zh) Image corner detection method based on turning radius
CN107590496A (zh) Association detection method for small infrared targets against complex backgrounds
Wang et al. Active contours driven by multi-feature Gaussian distribution fitting energy with application to vessel segmentation
Gao et al. Automatic optic disc segmentation based on modified local image fitting model with shape prior information
Chen et al. Fast asymmetric fronts propagation for image segmentation
WO2020010620A1 (zh) Wave identification method and apparatus, computer-readable storage medium, and unmanned aerial vehicle
CN107424153B (zh) Face segmentation method based on deep learning and level sets
CN109523559A (zh) Noisy image segmentation method based on an improved energy functional model
CN101430789B (zh) Image edge detection method based on the Fast Slant Stack transform
Qian et al. Medical image segmentation based on FCM and Level Set algorithm
CN105913434B (zh) Leukocyte localization and iterative segmentation method
Yuan et al. Segmentation of color image based on partial differential equations
CN105243661A (zh) Corner detection method based on the SUSAN operator
CN104463889B (zh) Autonomous landing target extraction method for unmanned aerial vehicles based on the CV model
CN113706563B (zh) Lung field segmentation method for chest X-ray films with an automatically initialized Snake model
CN111145142A (zh) Segmentation method for cyst images with uneven gray levels based on a level set algorithm
IT201900007806A1 Computerized method for classifying an organ mass as a cyst
CN112967305B (zh) Image cloud background detection method for complex sky scenes

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20862771

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09.08.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20862771

Country of ref document: EP

Kind code of ref document: A1