CN115359046B - Organ blood vessel segmentation method and device, storage medium and electronic equipment - Google Patents


Info

Publication number: CN115359046B (application CN202211276231.6A)
Authority: CN (China)
Prior art keywords: blood vessel, segmentation result, vessel segmentation, neural network, organ
Legal status: Active (assumed; not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN115359046A
Inventors: 张雨萌, 郭旭, 池琛, 罗富良, 黄乾富
Current and original assignee: Hygea Medical Technology Co Ltd
Application filed by Hygea Medical Technology Co Ltd; priority to CN202211276231.6A; application granted; publication of CN115359046A and CN115359046B

Classifications

    • G06T7/0012 — Biomedical image inspection
    • G06T7/11 — Region-based segmentation
    • G06T7/187 — Segmentation involving region growing, region merging or connected component labelling
    • G06V10/26 — Segmentation of patterns in the image field; clustering-based techniques; detection of occlusion
    • G06V10/764 — Recognition using classification, e.g. of video objects
    • G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/82 — Recognition using neural networks
    • G06T2207/20081 — Training; Learning
    • G06T2207/20084 — Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention provides an organ blood vessel segmentation method and device, a storage medium and electronic equipment. The organ blood vessel segmentation method comprises the following steps: acquiring a medical image to be segmented; extracting a target organ image from the medical image and performing a primary segmentation on it to obtain a first blood vessel segmentation result; extracting the vessel centerlines in the first vessel segmentation result and determining their end points; performing vessel re-segmentation based on those end points to obtain a second vessel segmentation result; and merging the first and second vessel segmentation results to obtain a merged vessel segmentation result. The invention ensures the continuity of organ vessel segmentation and significantly improves its accuracy.

Description

Organ blood vessel segmentation method and device, storage medium and electronic equipment
Technical Field
The invention relates to the field of artificial intelligence, in particular to a method and a device for organ blood vessel segmentation, a storage medium and electronic equipment.
Background
With the rapid development of minimally invasive surgery, the demand for preoperative planning keeps growing. Taking a cobra knife cryoablation apparatus as an example, a doctor needs to plan a needle insertion path before the operation based on the original CT (Computed Tomography) images, and path planning on a two-dimensional cross-section requires rich clinical experience, which poses no small challenge to inexperienced doctors. If preoperative planning for minimally invasive surgery could be carried out in three dimensions, the results would be better. An important prerequisite for preoperative planning in three dimensions is the correct segmentation and classification, in CT images, of organs and of the vessels and lesions inside them. With the development and deployment of artificial intelligence technology, deep-learning-based segmentation of organs and organ vessels is receiving increasing attention. However, the organ-vessel classification schemes in the related art cannot guarantee the continuity of organ vessel segmentation, so their accuracy is low.
Disclosure of Invention
To ensure the continuity of organ blood vessel segmentation, the invention provides an organ blood vessel segmentation method and device, a storage medium and electronic equipment, which significantly improve the accuracy of organ blood vessel segmentation.
In a first aspect, an embodiment of the present invention provides an organ blood vessel segmentation method, including:
acquiring a medical image to be segmented;
extracting a target organ image in the medical image and performing primary segmentation on the target organ image to obtain a first blood vessel segmentation result;
extracting the vessel centerlines in the first vessel segmentation result, and determining the end points of the vessel centerlines;
performing vessel re-segmentation based on the end points of the vessel centerlines to obtain a second vessel segmentation result;
and merging the first blood vessel segmentation result and the second blood vessel segmentation result to obtain a merged blood vessel segmentation result.
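The five steps above can be sketched as a minimal NumPy pipeline. The two segmentation functions here are hypothetical stand-ins (a simple threshold and a fixed neighbourhood mark) for the neural-network models described later, and centerline extraction is elided: the end points are passed in directly.

```python
import numpy as np

def segment_vessels(organ_image):
    # Hypothetical stand-in for the first neural network model:
    # treat bright voxels as vessel (label 1).
    return (organ_image > 0.5).astype(np.uint8)

def resegment_at_endpoints(image, endpoints):
    # Hypothetical stand-in for the second (V-Net) model: mark a small
    # neighbourhood around each centerline end point as vessel.
    out = np.zeros(image.shape, dtype=np.uint8)
    for z, y, x in endpoints:
        out[max(z - 1, 0):z + 2, max(y - 1, 0):y + 2, max(x - 1, 0):x + 2] = 1
    return out

def segment_organ_vessels(medical_image, endpoints):
    first = segment_vessels(medical_image)                     # steps 1-2
    second = resegment_at_endpoints(medical_image, endpoints)  # step 4
    # Step 5: overlay the re-segmentation onto the first result.
    return np.where(second > 0, second, first)

vol = np.zeros((8, 8, 8))
vol[4, 4, :] = 1.0                      # a toy "vessel"
merged = segment_organ_vessels(vol, endpoints=[(4, 4, 0)])
```

The merge keeps the first result everywhere except where the re-segmentation adds voxels, so continuity fixes near end points never erase the primary segmentation.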
In some implementations, extracting a target organ image from the medical image and obtaining a first vessel segmentation result of the target organ based on the target organ image includes:
extracting the target organ image from the medical image by using a pre-trained first neural network model and obtaining a first blood vessel segmentation result of the target organ based on the target organ image;
the first neural network model comprises an EfficientNet-B0 neural network and an Efficient Res-UNet Plus neural network, the latter formed by adding depth separable convolution modules to the down-sampling, skip-connection and up-sampling parts of the U-Net model. The input of the EfficientNet-B0 neural network is the medical image to be segmented and its output is the target organ image in that medical image; the input of the Efficient Res-UNet Plus neural network is the target organ image and its output is the first blood vessel segmentation result.
In some implementations, the method further includes: training the first neural network model, wherein a first loss function adopted for training the EfficientNet-B0 neural network comprises a cross-entropy loss function, and a second loss function adopted for training the Efficient Res-UNet Plus neural network is determined based on the cross-entropy loss function and the Tversky loss function.
In some implementations, the determining the end points of the vessel centerlines includes:
determining as an end point of a vessel centerline any target pixel that is located at the center of a three-dimensional pixel region of the first preset size and has at most one connected pixel within that region.
In some implementations, the performing vessel re-segmentation based on the end points of the vessel centerlines to obtain a second vessel segmentation result includes:
and performing vessel re-segmentation based on the end points of the central lines of the vessels by using a pre-trained second neural network model to obtain a second vessel segmentation result.
In some implementations, the second neural network model includes a V-Net neural network; the method further comprises the following steps:
and training the V-Net neural network by taking the background and different blood vessel classifications as prediction labels and taking a three-dimensional pixel region with a second preset size as input, wherein the three-dimensional pixel region with the second preset size is constructed by taking the end point of the central line of each blood vessel as the center.
In some implementations, the merging the first vessel segmentation result and the second vessel segmentation result includes:
overlaying the second vessel segmentation result with the first vessel segmentation result.
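A minimal NumPy sketch of this overlay (the label values here are illustrative): wherever the second segmentation assigns a label it takes precedence, and every other voxel keeps its label from the first result.

```python
import numpy as np

def merge_segmentations(first, second):
    # Voxels labelled by the re-segmentation override the first result;
    # all other voxels keep their original label.
    return np.where(second > 0, second, first)

first = np.array([0, 1, 1, 0, 2])
second = np.array([0, 0, 2, 2, 0])
merged = merge_segmentations(first, second)  # overlay second onto first
```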
In some implementations, the target organ includes a liver, and the merged vessel segmentation result includes a hepatic vein classification, a portal vein classification, and an inferior vena cava classification; the method further comprises the following steps:
for the hepatic vein classification in the merged blood vessel segmentation result, determining the first connected domains whose pixel count is smaller than a first preset value, checking one by one whether another prediction label exists around each first connected domain, and changing the prediction label of each connected domain surrounded by another prediction label to that label;
determining the maximum connected domain in the portal vein classification of the merged blood vessel segmentation result, checking one by one whether another prediction label exists around each of the other connected domains outside the maximum connected domain, and changing the prediction label of each such connected domain to the surrounding label;
and removing, within each prediction label, the connected domains whose pixel count is below a preset value.
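The last step, removing undersized connected domains, can be sketched as follows; 6-connectivity and the pure-Python BFS labelling are assumptions, since the patent does not specify the connectivity or the implementation.

```python
import numpy as np
from collections import deque

def connected_components(mask):
    # 6-connected 3D component labelling via breadth-first search.
    labels = np.zeros(mask.shape, dtype=np.int32)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        current += 1
        labels[start] = current
        queue = deque([start])
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < mask.shape[i] for i in range(3)) \
                        and mask[n] and not labels[n]:
                    labels[n] = current
                    queue.append(n)
    return labels, current

def remove_small_components(mask, min_voxels):
    # Drop connected domains whose voxel count is below the preset value.
    labels, n = connected_components(mask)
    out = mask.copy()
    for k in range(1, n + 1):
        if (labels == k).sum() < min_voxels:
            out[labels == k] = 0
    return out

vol = np.zeros((5, 5, 5), dtype=np.uint8)
vol[1, 1, 0:4] = 1       # a 4-voxel vessel fragment
vol[4, 4, 4] = 1         # an isolated noise voxel
cleaned = remove_small_components(vol, min_voxels=2)
```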
In a second aspect, an embodiment of the present invention provides an organ blood vessel segmentation apparatus, including:
the acquisition module is used for acquiring a medical image to be segmented;
the first segmentation module is used for extracting a target organ image in the medical image and carrying out primary segmentation on the target organ image to obtain a first blood vessel segmentation result;
the determining module is used for extracting the blood vessel central lines in the first blood vessel segmentation result and determining the end points of the blood vessel central lines;
the second segmentation module is used for performing vessel re-segmentation based on the end points of the central lines of the vessels to obtain a second vessel segmentation result;
and the merging module is used for merging the first blood vessel segmentation result and the second blood vessel segmentation result to obtain a merged blood vessel segmentation result.
In a third aspect, embodiments of the present invention provide a computer storage medium, where a computer program is stored on the computer storage medium, and when the computer program is executed by one or more processors, the computer program implements the method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides an electronic device, including a memory and one or more processors, where the memory stores a computer program, and the computer program, when executed by the one or more processors, implements the method of the first aspect.
Compared with the prior art, one or more embodiments of the invention have at least the following advantages:
according to the scheme, the first blood vessel segmentation result obtained based on organ segmentation and the second blood vessel segmentation result obtained by performing blood vessel re-extraction on the blood vessel breakpoint are combined, so that the accuracy and continuity of the segmentation result are guaranteed, and the classification result is accurate.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are required to be used in the embodiments will be briefly described below, and it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope.
Fig. 1 is a flow chart of a method for organ vessel segmentation according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a first neural network model provided by an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an EfficientNet-B0 neural network provided in an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an Efficient Res-UNet Plus neural network provided in an embodiment of the present invention;
FIG. 5 is a block diagram of a depth separable convolution module according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a V-Net neural network structure provided by an embodiment of the present invention;
fig. 7 is a block diagram of an organ blood vessel segmentation apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Example one
The present embodiment provides an organ blood vessel segmentation method, as shown in fig. 1, including:
and S101, acquiring a medical image to be segmented.
In this embodiment, the target organ may be a liver, and the blood vessels to be segmented include the hepatic vein, the portal vein, and the inferior vena cava. In application, a medical image (e.g., a CT image) containing the target organ (liver) is acquired, and blood vessel segmentation of the target organ is then performed.
Step S102, extracting a target organ image in the medical image and performing primary segmentation on the target organ image to obtain a first blood vessel segmentation result.
In some implementations, the extracting of the target organ image in the medical image and the obtaining of the first blood vessel segmentation result of the target organ based on the target organ image segmentation in step S102 may include:
step S102a, extracting a target organ image in the medical image by using a first neural network model trained in advance, and obtaining a first blood vessel segmentation result of the target organ based on the target organ image segmentation.
As shown in fig. 2, the first neural network model includes an EfficientNet-B0 neural network and an Efficient Res-UNet Plus neural network formed by adding depth separable convolution modules to the down-sampling, skip-connection and up-sampling parts of the U-Net model. The input of the EfficientNet-B0 neural network is the medical image to be segmented and its output is the target organ image in that medical image; the input of the Efficient Res-UNet Plus neural network is the target organ image and its output is the first blood vessel segmentation result.
The inference process of the first neural network model is as follows: first, the original medical image is input into the EfficientNet-B0 neural network for organ segmentation; the organ segmentation result is then multiplied pixel by pixel with the (slice) image corresponding to each channel of the original medical image; finally, the product is input into the Efficient Res-UNet Plus neural network for intra-organ blood vessel segmentation, yielding a preliminary segmentation of the vessels in the organ that includes the segmentation results of the different vessel classifications.
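The masking between the two stages can be sketched in NumPy; the `(C, H, W)` channel layout (adjacent slices as channels) is an assumption about details the text leaves open.

```python
import numpy as np

def mask_organ(image, organ_mask):
    # Multiply the binary organ segmentation pixel by pixel with each
    # channel (slice) of the original image, so the vessel network only
    # sees voxels inside the organ.
    return image * organ_mask[np.newaxis, ...]  # broadcast over channels

image = np.arange(2 * 3 * 3, dtype=float).reshape(2, 3, 3)  # 2 channels
organ_mask = np.zeros((3, 3))
organ_mask[1, 1] = 1                 # toy organ: a single pixel
masked = mask_organ(image, organ_mask)
```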
In this embodiment, organ segmentation and blood vessel segmentation are realized in sequence by a two-stage neural network model, i.e., the first neural network model comprising the EfficientNet-B0 neural network and the Efficient Res-UNet Plus neural network, so as to obtain a preliminary blood vessel segmentation, i.e., the first blood vessel segmentation result.
In some implementations, the method of this embodiment further includes: training a first neural network model, further comprising training an EfficientNet-B0 neural network and an Efficient Res-UNet Plus neural network.
The first loss function used for training the EfficientNet-B0 neural network comprises a cross-entropy loss function, and the second loss function used for training the Efficient Res-UNet Plus neural network is determined based on the cross-entropy loss function and the Tversky loss function.
Because the first neural network model is end-to-end, no staged training is needed, which improves training efficiency. The EfficientNet-B0 neural network is mainly used to extract the organ contour. During training, One-Hot encoding is first applied to the background and the organ in the prediction label; the weights of the loss function are then adjusted according to the prediction labels corresponding to the One-Hot encoding, by increasing the weight of the organ loss term and reducing the weight of the background loss term.
In some implementations, the first loss function is calculated as follows:
$$L_1 = m \cdot L_{ce}^{bg} + n \cdot L_{ce}^{organ}$$

where $L_1$ denotes the first loss function, and $L_{ce}^{bg}$ and $L_{ce}^{organ}$ denote the cross-entropy loss values of the background and of the organ, respectively; $m$ and $n$ are the coefficients of the background and organ cross-entropy loss values, and $n$ is greater than $m$.
The cross-entropy loss function $L_{ce}$ is calculated as follows:

$$L_{ce} = -\frac{1}{N}\sum_{i=1}^{N} y_i \log \hat{y}_i$$

where $N$ denotes the total number of samples, $x_i$ denotes a sample, the set $y$ contains the real labels $y_i$, and the set $\hat{y}$ contains the prediction labels $\hat{y}_i$ obtained by segmentation.
By setting the organ loss weight and the background loss weight of the EfficientNet-B0 neural network to clearly different values, such as m = 0.1 and n = 1.0, instead of the equal weights used in the related art (e.g., m = 0.5, n = 0.5), the organ segmentation effect is effectively improved and the organ contour can be extracted accurately.
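A sketch of this weighted cross-entropy with the weights m = 0.1 (background) and n = 1.0 (organ) mentioned above; the per-term averaging is an assumption about details the text leaves open.

```python
import numpy as np

def weighted_cross_entropy(y_true, y_pred, m=0.1, n=1.0, eps=1e-7):
    # y_true: one-hot organ labels (1 = organ); y_pred: predicted organ
    # probabilities. The background CE term is weighted by m, the organ
    # CE term by n (n > m, emphasising the organ).
    y_pred = np.clip(y_pred, eps, 1 - eps)
    ce_organ = -np.mean(y_true * np.log(y_pred))
    ce_background = -np.mean((1 - y_true) * np.log(1 - y_pred))
    return m * ce_background + n * ce_organ

y_true = np.array([1.0, 1.0, 0.0, 0.0])
good = weighted_cross_entropy(y_true, np.array([0.99, 0.99, 0.01, 0.01]))
bad = weighted_cross_entropy(y_true, np.array([0.3, 0.3, 0.7, 0.7]))
```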
The EfficientNet-B0 neural network structure in this embodiment is shown in fig. 3; the network has 1 convolutional layer, 7 modules and 1 output layer. The 512 × 512 original image is first converted by the convolutional layer into a standard 32 × 512 × 512 input. The feature maps are then successively down-sampled to 320 × 16 × 16 by modules 1 to 7, up-sampled back to the original size by the output layer, and the final organ segmentation result is output, where m = 2 corresponds to the two classes, organ and background.
In a specific implementation, as shown in fig. 4, the Efficient Res-UNet Plus neural network structure 120 includes three parts, namely a down-sampling part 1201, a skip-connection part 1202 and an up-sampling part 1203, all of which incorporate depth separable convolution modules to reduce model complexity; compared with a residual neural network module, a depth separable convolution module has fewer parameters and computes faster. The down-sampling part 1201 has 4 pairs of depth separable convolution modules and pooling layers, the skip-connection part 1202 has 5 depth separable convolution modules, and the up-sampling part 1203 has 5 pairs of depth separable convolution modules and deconvolution layers. The depth separable convolution module in this embodiment may adopt the structure shown in fig. 5.
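To make the depth separable convolution concrete, here is a minimal NumPy sketch: a per-channel spatial (depthwise) convolution followed by a 1 × 1 (pointwise) convolution that mixes channels. It uses valid padding and omits bias and activation; the real module in fig. 5 will differ in such details. The parameter saving comes from replacing the $C_{out} \cdot C \cdot k^2$ weights of a standard convolution with $C \cdot k^2 + C_{out} \cdot C$.

```python
import numpy as np

def depthwise_separable_conv(x, depth_kernels, point_weights):
    # x: (C, H, W); depth_kernels: (C, k, k), one spatial kernel per
    # channel; point_weights: (C_out, C), the 1x1 channel-mixing step.
    C, H, W = x.shape
    k = depth_kernels.shape[1]
    oh, ow = H - k + 1, W - k + 1
    depth_out = np.zeros((C, oh, ow))
    for c in range(C):                  # depthwise: per-channel conv
        for i in range(oh):
            for j in range(ow):
                depth_out[c, i, j] = np.sum(
                    x[c, i:i + k, j:j + k] * depth_kernels[c])
    # pointwise: linear mix of channels at each spatial location
    return np.tensordot(point_weights, depth_out, axes=([1], [0]))

x = np.ones((2, 4, 4))
depth_kernels = np.ones((2, 3, 3))
point_weights = np.ones((1, 2))
out = depthwise_separable_conv(x, depth_kernels, point_weights)
```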
In some implementations, the second loss function is calculated as follows:
$$L_2 = i\left(m\,L_{ce}^{bg} + n_1 L_{ce}^{pv} + n_2 L_{ce}^{hv} + n_3 L_{ce}^{ivc}\right) + j\left(o\,T_{bg} + p_1 T_{pv} + p_2 T_{hv} + p_3 T_{ivc}\right)$$

where $L_2$ denotes the second loss function; $L_{ce}^{bg}$, $L_{ce}^{pv}$, $L_{ce}^{hv}$ and $L_{ce}^{ivc}$ denote the cross-entropy loss values of the background, the portal vein, the hepatic vein and the inferior vena cava, and $T_{bg}$, $T_{pv}$, $T_{hv}$ and $T_{ivc}$ denote the corresponding Tversky loss values. $m$, $n_1$, $n_2$ and $n_3$ are the coefficients of the cross-entropy loss values of the background, portal vein, hepatic vein and inferior vena cava, with $n_1$, $n_2$, $n_3$ greater than $m$; since the portal vein and hepatic vein occupy small proportions and the inferior vena cava a slightly larger one, $m = 0.1$, $n_1 = 2.0$, $n_2 = 2.0$ and $n_3 = 1.0$ are set empirically. $o$, $p_1$, $p_2$ and $p_3$ are the coefficients of the Tversky loss values of the background, portal vein, hepatic vein and inferior vena cava, with $p_1$, $p_2$, $p_3$ greater than $o$; for the same reason, $o = 0.1$, $p_1 = 2.0$, $p_2 = 2.0$ and $p_3 = 1.0$ are set empirically. $i$ and $j$ are the overall coefficients of the cross-entropy and Tversky terms, with $j$ greater than $i$ so that the Tversky loss can play its role; $i = 0.4$ and $j = 0.6$ are set empirically.
The Tversky loss function has two parameters, $\alpha$ and $\beta$. By adjusting $\alpha$ and $\beta$, the balance between false positives (FP) and false negatives (FN) can be controlled, which in turn affects the segmentation accuracy on small and large regions and makes it possible to control the segmentation of small regions such as organ blood vessels. The Tversky loss function $T$ is calculated as follows:

$$T = 1 - \frac{|A \cap B|}{|A \cap B| + \alpha\,|A - B| + \beta\,|B - A|}$$

where $A$ denotes the prediction label and $B$ the real label; $|A - B|$ corresponds to the false positives FP and $|B - A|$ to the false negatives FN.
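A sketch of the Tversky loss on binary masks, following its standard set-based definition; the default α and β values here are illustrative and not taken from the patent.

```python
import numpy as np

def tversky_loss(pred, true, alpha=0.3, beta=0.7, eps=1e-7):
    # pred, true: binary masks (A = prediction, B = real label).
    tp = np.sum(pred * true)           # |A ∩ B|
    fp = np.sum(pred * (1 - true))     # |A - B|, false positives
    fn = np.sum((1 - pred) * true)     # |B - A|, false negatives
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

pred = np.array([1.0, 1.0, 0.0, 0.0])
perfect = tversky_loss(pred, pred)                              # no error
disjoint = tversky_loss(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```

With β > α, false negatives are penalised more heavily than false positives, which favours recall on thin structures such as vessels.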
The first neural network model is trained until the loss value on the validation set falls to a preset threshold, at which point training stops.
And step S103, extracting the vessel center lines in the first vessel segmentation result, and determining the end points of the vessel center lines.
In some implementations, determining the end points of the vessel centerlines includes:
step S103a, determining a target pixel, which is located in the center of the first predetermined size of the three-dimensional pixel region and has at most one connected pixel around the first predetermined size of the three-dimensional pixel region, as an end point of the blood vessel centerline.
The three-dimensional pixel region may be a cubic region. The vessel centerlines of the intra-organ vessels are extracted from the first vessel segmentation result produced by the first neural network model, the end points of all centerlines are found, and the subsequent steps refine the segmentation around those end points, which markedly improves vessel continuity at the end points. The first predetermined size may be 3 × 3 × 3: if a pixel is located at the center of the 3 × 3 × 3 cubic region and has no, or only one, connected pixel around it within that region, the pixel at the center is an end point of the vessel centerline.
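This end-point rule can be sketched directly: a skeleton voxel is an end point if its 3 × 3 × 3 neighbourhood contains at most one other skeleton voxel (26-connectivity is an assumption).

```python
import numpy as np

def centerline_endpoints(skeleton):
    # skeleton: binary 3D array of centerline voxels.
    pad = np.pad(skeleton, 1)          # zero border simplifies indexing
    endpoints = []
    for z, y, x in zip(*np.nonzero(skeleton)):
        # 3x3x3 cube around the voxel (in padded coordinates)
        cube = pad[z:z + 3, y:y + 3, x:x + 3]
        if cube.sum() - 1 <= 1:        # at most one connected neighbour
            endpoints.append((z, y, x))
    return endpoints

skel = np.zeros((5, 5, 5), dtype=np.uint8)
skel[2, 2, 1:4] = 1                    # a short straight centerline
ends = centerline_endpoints(skel)      # only the two tips qualify
```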
In some cases, only the centerlines of the portal vein and the hepatic vein need to be extracted: the inferior vena cava has a thick trunk and a clear structure, is usually easy to segment, and its continuity is rarely a problem, so end-point extraction can be skipped for it. The portal vein and hepatic vein, by contrast, are relatively thin, break points easily occur during their segmentation, and the resulting discontinuous vessel segmentation makes the result inaccurate.
In practical application, the preliminary blood vessel segmentation produced by the EfficientNet-B0 and Efficient Res-UNet Plus neural networks may be discontinuous at the ends of the various vessels; this embodiment therefore performs vessel re-segmentation based on the end points formed by the break points of the vessel centerlines, obtaining a second vessel segmentation result with improved continuity.
And step S104, performing vessel re-segmentation based on the end points of the central lines of the vessels to obtain a second vessel segmentation result.
In some implementations, the vessel re-segmentation based on the end point of each vessel centerline to obtain a second vessel segmentation result includes:
and performing vessel re-segmentation based on the end points of the central lines of the vessels by using a pre-trained second neural network model to obtain a second vessel segmentation result.
In some implementations, the second neural network model includes a V-Net neural network; the method also comprises the following steps:
and training the V-Net neural network by taking the background and different blood vessel classifications (including portal vein, hepatic vein and inferior vena cava) as prediction labels and taking a three-dimensional pixel region with a second preset size as input, wherein the three-dimensional pixel region with the second preset size is constructed by taking an end point of the central line of each blood vessel as the center.
The second preset size may be 64 × 64 × 64; that is, the V-Net neural network is trained using as input a cubic region of 64 × 64 × 64 voxels centered on the end points of each vessel centerline.
As shown in fig. 6, the V-Net neural network comprises three-dimensional convolution operations and residual operations, and its encoder and decoder are connected by skip connections to enrich the effective information available to the decoder, thereby obtaining a more accurate segmentation result. The three-dimensional convolution operation is analogous to its two-dimensional counterpart, the difference being that a three-dimensional convolution kernel is applied to a three-dimensional matrix; it is represented by the symbol ⊗ in fig. 6. The residual operation consists of a three-dimensional convolution operation whose input and output matrices are summed directly; it is represented by the symbol ⊕ in fig. 6 and aims to alleviate the vanishing-gradient problem caused by increasing the depth of the neural network.
Since the proportion of blood vessels (portal vein, hepatic vein and inferior vena cava) in CT images is very small compared with the background and organs, the loss function of the V-Net neural network needs to be improved. First, One-Hot encoding is applied to the background and the different blood vessel classifications in the prediction label; second, the loss-function weights are adjusted according to the prediction labels corresponding to the One-Hot encoding, the adjustment comprising increasing the loss weights of the portal vein, the hepatic vein and the inferior vena cava and reducing the loss weight of the background.
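The label preparation described above (One-Hot encoding plus per-class loss weights) can be sketched in NumPy as follows; the class ordering, the helper names `one_hot` and `weighted_voxel_loss`, and the concrete weight values are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

# assumed class indices: 0=background, 1=portal vein, 2=hepatic vein, 3=inferior vena cava
def one_hot(labels, num_classes=4):
    # One-Hot encode an integer label volume: shape (...,) -> (..., num_classes)
    return np.eye(num_classes, dtype=np.float32)[labels]

# per-class loss weights: vessel classes raised, background lowered (values illustrative)
class_weights = np.array([0.1, 2.0, 2.0, 1.0], dtype=np.float32)

def weighted_voxel_loss(per_voxel_ce, onehot, weights):
    # scale each voxel's cross-entropy by the weight of its true class, then average
    return float((per_voxel_ce * (onehot * weights).sum(axis=-1)).mean())
```

With four voxels covering the four classes, the mean weighted loss of a uniform per-voxel cross-entropy of 1.0 equals the mean of the class weights.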
In some implementations, a third loss function employed to train the V-Net neural network is determined based on the cross-entropy loss function and the Tversky loss function;
the third loss function is calculated as follows:

L_3 = i·(m·CE_bg + n_1·CE_pv + n_2·CE_hv + n_3·CE_ivc) + j·(o·Tv_bg + p_1·Tv_pv + p_2·Tv_hv + p_3·Tv_ivc)

wherein L_3 represents the third loss function; CE_bg, CE_pv, CE_hv and CE_ivc represent the cross-entropy loss values of the background, the portal vein, the hepatic vein and the inferior vena cava, respectively; Tv_bg, Tv_pv, Tv_hv and Tv_ivc represent the corresponding Tversky loss values; m, n_1, n_2 and n_3 are the coefficients of the cross-entropy loss values of the background, the portal vein, the hepatic vein and the inferior vena cava, with n_1, n_2 and n_3 greater than m; since the portal and hepatic veins occupy a small fraction and the inferior vena cava a slightly larger one, m = 0.1, n_1 = 2.0, n_2 = 2.0 and n_3 = 1.0 are set empirically. o, p_1, p_2 and p_3 are the coefficients of the Tversky loss values of the background, the portal vein, the hepatic vein and the inferior vena cava, with p_1, p_2 and p_3 greater than o; for the same reason, o = 0.1, p_1 = 2.0, p_2 = 2.0 and p_3 = 1.0 are set empirically. i and j are the overall coefficients of the cross-entropy and Tversky parts, respectively, with j greater than i so that the Tversky loss plays the leading role; i = 0.4 and j = 0.6 are set empirically.
Since the blood vessel region occupies only a small proportion of the whole volume, the two parameters of the Tversky loss function can still be set to α = 0.3 and β = 0.7, which penalizes false negatives more heavily and yields a clear improvement in blood vessel segmentation.
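The combined loss described above (per-class cross-entropy and Tversky terms, weighted and summed with the overall coefficients i and j) can be sketched in NumPy as follows; the function names, array layout (channel-last softmax outputs versus One-Hot targets), and the flattening strategy are illustrative assumptions rather than the patent's exact implementation:

```python
import numpy as np

def tversky_loss(probs, onehot, alpha=0.3, beta=0.7, eps=1e-7):
    # per-class Tversky loss over a flattened volume; probs/onehot: (..., C)
    p = probs.reshape(-1, probs.shape[-1])
    g = onehot.reshape(-1, onehot.shape[-1])
    tp = (p * g).sum(axis=0)
    fp = (p * (1.0 - g)).sum(axis=0)
    fn = ((1.0 - p) * g).sum(axis=0)
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

def cross_entropy(probs, onehot, eps=1e-7):
    # per-class mean cross-entropy over all voxels
    return -(onehot * np.log(probs + eps)).reshape(-1, probs.shape[-1]).mean(axis=0)

def third_loss(probs, onehot,
               ce_w=(0.1, 2.0, 2.0, 1.0),   # m, n1, n2, n3 from the text
               tv_w=(0.1, 2.0, 2.0, 1.0),   # o, p1, p2, p3 from the text
               i=0.4, j=0.6):
    ce = cross_entropy(probs, onehot)
    tv = tversky_loss(probs, onehot)
    return float(i * np.dot(ce_w, ce) + j * np.dot(tv_w, tv))
```

A perfect prediction drives both parts toward zero, while a uniform (uninformative) prediction yields a clearly larger loss.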
The V-Net neural network is trained until the loss function value on the validation set falls to a preset threshold, at which point training stops.
It should be noted that the process of training the first neural network model and the second neural network model may be performed simultaneously or sequentially, and this embodiment does not limit this order.
Because the V-Net neural network has a good understanding of three-dimensional spatial context and can bridge blood vessel breakpoints well, it is used to re-extract the enhanced CT image at the vessel breakpoints. A 64 × 64 × 64 cubic region is constructed centered on each vessel centerline end point and input into the V-Net neural network; the trained network re-segments the original enhanced CT image within that cubic region to obtain a re-segmentation result of the liver blood vessels.
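Constructing the cubic input region around each end point can be sketched as follows; `crop_cube` is a hypothetical helper, and zero-padding is one simple way (an assumption, not stated in the patent) to keep the crop valid when an end point lies near the volume border:

```python
import numpy as np

def crop_cube(volume, center, size=64):
    # build a size^3 region centered on a centerline end point (z, y, x);
    # padding every axis by size//2 guarantees the slice never leaves the array
    half = size // 2
    padded = np.pad(volume, half, mode="constant")
    z, y, x = (c + half for c in center)
    return padded[z - half:z + half, y - half:y + half, x - half:x + half]
```

Each cube would then be fed to the trained V-Net for re-segmentation.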
And S105, combining the first blood vessel segmentation result and the second blood vessel segmentation result to obtain a combined blood vessel segmentation result.
In some implementations, merging the first vessel segmentation result and the second vessel segmentation result includes: overlaying the second vessel segmentation result with the first vessel segmentation result. The V-Net neural network improves the segmentation of the end-point regions, thereby enhancing their continuity. Overlaying the first blood vessel segmentation result onto the second realizes the merging of the two results: the overall vessel structure of the preliminary segmentation produced by the EfficientNet-B0 and EfficientRes-UNet Plus neural networks is preserved, while the continuity at the end points of the vessel structure is enhanced. The optimized, merged result reconstructs the liver vessels faithfully and quickly, enabling accurate subsequent preoperative planning.
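The overlay-style merge can be sketched as follows, assuming integer label volumes where 0 denotes background; `merge_segmentations` is a hypothetical name. The first (whole-organ) result takes priority wherever it is labeled, and the second (end-point) result only fills in voxels the first left as background:

```python
import numpy as np

def merge_segmentations(first, second):
    # first covers second: keep first's label where present, else take second's
    return np.where(first > 0, first, second)
```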
On the basis of the merged segmentation results, small connected domains are further cleaned, which helps ensure vessel connectivity. In some cases, connected domains whose pixel count is smaller than a set value are removed; the set value may be in the range of 50 to 100.
In some implementations, where the target organ includes a liver, the merged vessel segmentation results include a hepatic vein classification, a portal vein classification, and an inferior vena cava classification; furthermore, the method may further comprise:
and S106, determining first connected domains with the pixel number smaller than a first preset value in hepatic vein classification in the merged blood vessel segmentation result, judging whether other prediction labels exist around each first connected domain one by one, and changing the prediction labels of the connected domains with the other prediction labels around into the other prediction labels.
In one example, the first preset value is 500. Finding out all first connected domains with the pixel number less than 500 in hepatic vein classification, judging whether other prediction labels (corresponding to other blood vessel classifications) exist around each first connected domain one by one, and if so, changing the prediction label of the first connected domain into the other prediction labels to update the blood vessel classification of the connected domain. If a prediction label corresponding to the portal vein classification exists around a certain first connected domain, the prediction label of the first connected domain is changed into the prediction label corresponding to the portal vein classification, so that the first connected domain is classified into the portal vein classification.
And S107, determining the maximum connected domain in portal vein classification in the merged blood vessel segmentation result, judging whether other prediction labels exist around other connected domains outside the maximum connected domain one by one, and changing the prediction labels of the other connected domains around which other prediction labels exist into the other prediction labels.
First, the maximum connected domain in the portal vein classification is found; then, for each of the other connected domains of the portal vein outside the maximum one, it is judged one by one whether prediction labels corresponding to other classifications exist around it, and if so, the prediction label of that connected domain is changed to the corresponding other prediction label. All connected domains other than the largest portal vein connected domain are regarded as small connected domains. In one example, if a prediction label corresponding to the hepatic vein classification exists around a small connected domain, its prediction label is changed from portal vein to hepatic vein, so that the small connected domain is reclassified into the hepatic vein classification.
And step S108, removing connected domains with pixel values lower than a preset value in each prediction label.
On the basis of updating the prediction labels of the connected domains in the hepatic vein classification and the portal vein classification, the connected domains with the pixel values lower than the preset value in the prediction labels of each classification are removed again, and therefore the influence of noise is eliminated. For example, the range of the preset value may be 100 to 200.
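The connected-domain cleanup in steps S106 to S108 can be sketched with a plain BFS labeling; in practice a library routine (e.g., SciPy's `scipy.ndimage.label`) would normally be used, so the pure-NumPy version below is only an illustrative sketch with hypothetical helper names:

```python
import numpy as np
from collections import deque

def label_components(mask):
    # face-connected component labeling (4-neighbourhood in 2-D, 6 in 3-D) via BFS
    labels = np.zeros(mask.shape, dtype=np.int32)
    n = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        n += 1
        labels[seed] = n
        q = deque([seed])
        while q:
            pt = q.popleft()
            for axis in range(mask.ndim):
                for step in (-1, 1):
                    nb = list(pt)
                    nb[axis] += step
                    nb = tuple(nb)
                    if (all(0 <= nb[i] < mask.shape[i] for i in range(mask.ndim))
                            and mask[nb] and not labels[nb]):
                        labels[nb] = n
                        q.append(nb)
    return labels, n

def remove_small_components(mask, min_size):
    # zero out connected domains whose pixel count is below min_size
    labels, n = label_components(mask)
    keep = mask.copy()
    for c in range(1, n + 1):
        region = labels == c
        if region.sum() < min_size:
            keep[region] = False
    return keep
```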
In the method of this embodiment, organ blood vessel segmentation is performed on the basis of organ segmentation by the two-stage first neural network model, which eliminates the interference of other tissue images and improves the accuracy of organ blood vessel segmentation; the Tversky loss function is used in the first neural network model, and adjusting its parameters α and β effectively improves the detection rate of small blood vessels, ensuring segmentation precision and continuity. Compared with single-model blood vessel segmentation methods, the method further uses the V-Net neural network to re-extract vessels at breakpoints, ensuring the continuity of vessel segmentation; the segmentation result obtained by combining the two models is significantly improved in accuracy and continuity, and the classification result is accurate.
To further verify the effect of the method, Res-UNet Plus and V-Net were compared with the proposed method in a liver blood vessel segmentation scenario on an RTX 3090 graphics card. Res-UNet Plus and the conventional V-Net are multi-class models that can directly segment the three liver blood vessels. All three models were optimized for video memory usage; the segmentation results of the three methods are shown in table 1.
TABLE 1 comparison of different liver vessel segmentation methods
(Table 1 is provided as an image in the original publication and is not reproduced here.)
The calculation formula of the Dice index is as follows:

Dice = 2TP / (2TP + FP + FN)

The calculation formula of the intersection-over-union IoU index is as follows:

IoU = TP / (TP + FP + FN)

The calculation formula of the sensitivity TPR index is as follows:

TPR = TP / (TP + FN)

The calculation formula of the accuracy PPV index is as follows:

PPV = TP / (TP + FP)

wherein TP represents true positives, referring to the correctly predicted real blood vessel labeling range; FN represents false negatives, referring to the real blood vessel labeling range that the prediction missed; FP represents false positives, referring to the predicted range lying outside the real annotations; and TP + FN is the range of all real annotations.
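These four standard indexes can be computed from binary masks as in the following sketch; `segmentation_metrics` is a hypothetical helper name:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    # compute Dice, IoU, sensitivity (TPR) and precision (PPV) from binary masks
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    dice = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    tpr = tp / (tp + fn)
    ppv = tp / (tp + fp)
    return dice, iou, tpr, ppv
```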
As can be seen from table 1, most indexes of the proposed method are superior to those of the conventional segmentation methods. Although its computation time is slightly longer than that of the conventional methods, it occupies little video memory, so the overall segmentation efficiency remains relatively high; in addition, every index on the liver blood vessels is superior to the two conventional methods.
Example two
In accordance with an embodiment, the present embodiment provides an organ blood vessel segmentation apparatus, as shown in fig. 7, including:
an obtaining module 201, configured to obtain a medical image to be segmented;
the first segmentation module 202 is configured to extract a target organ image in the medical image and perform preliminary segmentation on the target organ image to obtain a first blood vessel segmentation result;
a determining module 203, configured to extract a blood vessel centerline in the first blood vessel segmentation result, and determine an end point of each blood vessel centerline;
the second segmentation module 204 is configured to perform vessel re-segmentation based on end points of the center lines of the vessels to obtain a second vessel segmentation result;
the merging module 205 is configured to merge the first blood vessel segmentation result and the second blood vessel segmentation result to obtain a merged blood vessel segmentation result.
In this embodiment, the target organ may be the liver, and the blood vessels to be segmented include the hepatic vein, the portal vein and the inferior vena cava. In application, a medical image (e.g., a CT image) containing the target organ (liver) is acquired, and blood vessel segmentation of the target organ is then performed.
In some implementations, extracting a target organ image in the medical image and obtaining a first blood vessel segmentation result of the target organ based on the target organ image segmentation may include: and extracting a target organ image in the medical image by using a first neural network model trained in advance and obtaining a first blood vessel segmentation result of the target organ based on the target organ image segmentation.
As shown in fig. 2, the first neural network model includes an EfficientNet-B0 neural network and an EfficientRes-UNet Plus neural network formed by adding depth separable convolution modules to the down-sampling, skip-connection and up-sampling parts of the U-Net model. The input of the EfficientNet-B0 neural network includes the medical image to be segmented, and its output includes the target organ image in that medical image; the input of the EfficientRes-UNet Plus neural network includes the target organ image in the medical image to be segmented, and its output includes the first blood vessel segmentation result.
The inference process of the first neural network model is as follows: first, the original medical image is input into the EfficientNet-B0 neural network for organ segmentation; the organ segmentation result is then multiplied pixel by pixel with the (slice) image of each channel of the original medical image; finally, the product is input into the EfficientRes-UNet Plus neural network for blood vessel segmentation within the organ, yielding a preliminary segmentation result of the blood vessels in the organ that includes the segmentation results of the different blood vessel classifications.
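The pixel-by-pixel masking step between the two stages can be sketched as follows; the function name and the channel-first array layout are illustrative assumptions:

```python
import numpy as np

def apply_organ_mask(image, organ_mask):
    # image: (C, H, W) slice images per channel; organ_mask: (H, W) binary organ result
    # broadcast the organ segmentation into every channel, multiplying pixel by pixel
    return image * organ_mask[None, ...]
```

The masked result, rather than the raw image, is what the second-stage vessel network would then receive.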
In this embodiment, organ segmentation and blood vessel segmentation are realized successively by a two-stage neural network model, i.e., the first neural network model comprising the EfficientNet-B0 neural network and the EfficientRes-UNet Plus neural network, so as to obtain a preliminary blood vessel segmentation result, i.e., the first blood vessel segmentation result.
In some implementations, the present embodiment further includes:
and the training module is used for training the first neural network model, which further includes training the EfficientNet-B0 neural network and the EfficientRes-UNet Plus neural network.
The first loss function used for training the EfficientNet-B0 neural network comprises a cross-entropy loss function, and the second loss function used for training the EfficientRes-UNet Plus neural network is determined based on the cross-entropy loss function and the Tversky loss function.
Because the first neural network is an end-to-end model, staged training is not needed, and the training efficiency is improved. The EfficientNet-B0 neural network is mainly used for extracting organ outlines, during training, firstly, one-Hot coding processing is carried out on a background and organs in a prediction label, secondly, the weight of a loss function is adjusted according to the prediction label corresponding to the One-Hot coding, and the adjustment mode comprises increasing the weight of the loss function of the organs and reducing the weight of the loss function of the background.
In a specific implementation, as shown in fig. 4, the EfficientRes-UNet Plus neural network structure 120 includes three parts, namely a down-sampling part 1201, a skip-connection part 1202 and an up-sampling part 1203, each of which incorporates depth separable convolution modules to increase the model's capacity; compared with a residual neural network module, the depth separable convolution module has fewer parameters and computes faster. The down-sampling part 1201 has 4 pairs of depth separable convolution modules and pooling layers in total, the skip-connection part 1202 has 5 depth separable convolution modules in total, and the up-sampling part 1203 has 5 pairs of depth separable convolution modules and deconvolution layers in total. The depth separable convolution module in this embodiment may adopt the structure shown in fig. 5.
In some implementations, determining the end points of the vessel centerlines includes: and determining a target pixel which is positioned in the center of the three-dimensional pixel area with the first preset size and has at most one connected pixel around the three-dimensional pixel area with the first preset size as an end point of the center line of the blood vessel.
The three-dimensional pixel region may be a cubic region. Based on the first blood vessel segmentation result obtained by the first neural network model, the centerlines of the vessels in the organ are extracted and the end points of all vessel centerlines are found; the subsequent steps then finely segment the end-point regions, significantly improving vessel continuity at the end points. The first preset size may be 3 × 3 × 3: if a pixel is located at the center of the 3 × 3 × 3 cubic region and has no, or only one, connected pixel within that region, the pixel at the center is an end point of the vessel centerline.
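The end-point test described above can be sketched as follows; `centerline_endpoints` is a hypothetical helper operating on a binary skeleton volume, counting 26-connected neighbours in the 3 × 3 × 3 region around each centerline voxel:

```python
import numpy as np

def centerline_endpoints(skeleton):
    # an end point is a centerline voxel with at most one other centerline
    # voxel in its 3x3x3 neighbourhood (26-connectivity)
    pad = np.pad(skeleton.astype(np.uint8), 1)
    ends = []
    for z, y, x in zip(*np.nonzero(skeleton)):
        neighbourhood = pad[z:z + 3, y:y + 3, x:x + 3]
        if neighbourhood.sum() - 1 <= 1:  # subtract the centre voxel itself
            ends.append((z, y, x))
    return ends
```

For a straight centerline, only its two extreme voxels satisfy the test.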
In some implementations, the vessel re-segmentation based on the end points of the vessel centerlines to obtain a second vessel segmentation result includes: and performing vessel re-segmentation based on the end points of the central lines of the vessels by using a pre-trained second neural network model to obtain a second vessel segmentation result. In some implementations, the second neural network model includes a V-Net neural network.
The training module is further configured to: and training the V-Net neural network by taking the background and different blood vessel classifications (including portal vein, hepatic vein and inferior vena cava) as prediction labels and taking a three-dimensional pixel region with a second preset size as input, wherein the three-dimensional pixel region with the second preset size is constructed by taking an end point of the central line of each blood vessel as the center.
Since the proportion of vessels in CT images is too small compared to background organs (portal, hepatic and inferior vena cava), the loss function of the V-Net neural network needs to be improved. Firstly, carrying out One-Hot coding processing on different blood vessel classifications in the background and the viscera in the prediction label, and secondly, adjusting the weight of the loss function according to the prediction label corresponding to the One-Hot coding, wherein the adjustment mode comprises increasing the weight of the loss function of the portal vein, the hepatic vein and the inferior vena cava and reducing the weight of the background loss function.
The V-Net neural network has a good understanding of three-dimensional spatial context and can bridge blood vessel breakpoints well, so it is used to re-extract the enhanced CT image at the vessel breakpoints. A 64 × 64 × 64 cubic region is constructed centered on each vessel centerline end point and input into the V-Net neural network; the trained network re-segments the original enhanced CT image within that cubic region to obtain a re-segmentation result of the liver blood vessels.
In some implementations, merging the first vessel segmentation result and the second vessel segmentation result includes: overlaying the second vessel segmentation result with the first vessel segmentation result. The V-Net neural network improves the segmentation of the end-point regions, thereby enhancing their continuity. Overlaying the first blood vessel segmentation result onto the second realizes the merging of the two results: the overall vessel structure of the preliminary segmentation produced by the EfficientNet-B0 and EfficientRes-UNet Plus neural networks is preserved, while the continuity at the end points of the vessel structure is enhanced. The optimized, merged result reconstructs the liver vessels faithfully and quickly, enabling accurate subsequent preoperative planning.
On the basis of the merged segmentation results, small connected domains are further cleaned, which helps ensure vessel connectivity. In some cases, connected domains whose pixel count is smaller than a set value are removed; the set value may be in the range of 50 to 100.
In some implementations, where the target organ includes a liver, the merged vessel segmentation results include a hepatic vein classification, a portal vein classification, and an inferior vena cava classification; furthermore, the apparatus may further comprise:
the post-processing module is used for determining first connected domains with the pixel number smaller than a first preset value in hepatic vein classification in the merged blood vessel segmentation result, judging whether other prediction labels exist around each first connected domain one by one, and changing the prediction labels of the connected domains with the other prediction labels around into the other prediction labels; determining the maximum connected domain in portal vein classification in the merged blood vessel segmentation result, judging whether other prediction labels exist around other connected domains outside the maximum connected domain one by one, and changing the prediction labels of other connected domains around which other prediction labels exist into other prediction labels; and removing connected domains with pixel values lower than a preset value in each prediction label. On the basis of updating the prediction labels of the connected domains in the hepatic vein classification and the portal vein classification, the connected domains with the pixel values lower than the preset value in the prediction labels of each classification are removed again, and therefore the influence of noise is eliminated.
In this embodiment, organ blood vessel segmentation is performed on the basis of organ segmentation by the two-stage first neural network model, which eliminates the interference of other tissue images and improves the accuracy of organ blood vessel segmentation; the Tversky loss function is used in the first neural network model, and adjusting its parameters α and β effectively improves the detection rate of small blood vessels, ensuring segmentation precision and continuity. Compared with single-model blood vessel segmentation methods, the apparatus further uses the V-Net neural network to re-extract vessels at breakpoints, ensuring the continuity of vessel segmentation; the segmentation result obtained by combining the two models is significantly improved in accuracy and continuity, and the classification result is accurate.
It should be understood that the apparatus of the present embodiment provides all of the benefits of the method embodiments.
Those skilled in the art should understand that the above modules or steps can be implemented by a general purpose computing device, they can be centralized on a single computing device or distributed on a network composed of a plurality of computing devices, and alternatively, they can be implemented by program codes executable by the computing devices, so that they can be stored in a storage device and executed by the computing devices, or they can be respectively manufactured into various integrated circuit modules, or a plurality of modules or steps in them can be manufactured into a single integrated circuit module. The present invention is not limited to any specific combination of hardware and software.
Example three
The present embodiments provide a computer storage medium having a computer program stored thereon, where the computer program, when executed by one or more processors, implements the method of the first embodiment.
The computer-readable storage medium may be implemented by any type of volatile or nonvolatile Memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically Erasable Programmable Read-Only Memory (EEPROM), erasable Programmable Read-Only Memory (EPROM), programmable Read-Only Memory (PROM), read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk or optical disk.
Example four
The present embodiment provides an electronic device, which includes a memory and one or more processors, where the memory stores a computer program, and the computer program implements the method of the first embodiment when executed by the one or more processors.
The Processor may be an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a Microcontroller (MCU), a microprocessor, or other electronic components, and is configured to perform the methods of the above embodiments.
In the embodiments provided in the present invention, it should be understood that the disclosed system and method can be implemented in other ways. The system and method embodiments described above are merely illustrative.
It should be noted that, in this document, the terms "first", "second", and the like in the description and claims of the present application and in the drawings described above are used for distinguishing similar objects, and are not necessarily used for describing a particular order or sequence. The terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional like elements in the process, method, article, or apparatus that comprises the element.
Although the embodiments of the present invention have been described above, the above descriptions are only for the convenience of understanding the present invention, and are not intended to limit the present invention. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (9)

1. An organ blood vessel segmentation method comprising:
acquiring a medical image to be segmented;
extracting a target organ image in the medical image and performing primary segmentation on the target organ image to obtain a first blood vessel segmentation result;
extracting the vessel center lines in the first vessel segmentation result, and determining the end points of the vessel center lines;
performing vessel re-segmentation based on the end points of the central lines of the vessels to obtain a second vessel segmentation result;
merging the first blood vessel segmentation result and the second blood vessel segmentation result to obtain a merged blood vessel segmentation result;
the merging the first vessel segmentation result and the second vessel segmentation result includes: overlaying the second vessel segmentation result with the first vessel segmentation result;
the target organ comprises a liver, and the merged blood vessel segmentation result comprises hepatic vein classification, portal vein classification and inferior vena cava classification; the method further comprises the following steps:
in hepatic vein classification in the merged blood vessel segmentation result, determining first connected domains with the pixel number smaller than a first preset value, judging whether other prediction labels exist around each first connected domain one by one, and changing the prediction labels of the first connected domains with the other prediction labels around into the other prediction labels;
determining a maximum connected domain in portal vein classification in the merged blood vessel segmentation result, judging whether other prediction labels exist around other connected domains outside the maximum connected domain one by one, and changing the prediction labels of other connected domains around which other prediction labels exist into other prediction labels;
and removing connected domains with pixel values lower than a preset value in each prediction label.
2. The organ blood vessel segmentation method according to claim 1, wherein the extracting of the target organ image in the medical image and the obtaining of the first blood vessel segmentation result of the target organ based on the target organ image segmentation include:
extracting a target organ image in the medical image by using a first neural network model trained in advance and obtaining a first blood vessel segmentation result of the target organ based on the target organ image segmentation;
the first neural network model comprises an EfficientNet-B0 neural network and an EfficientRes-UNet Plus neural network formed by adding a downsampling part, a jump connecting part and an upsampling part of a U-Net model into a depth separable convolution module, the input of the EfficientNet-B0 neural network comprises a medical image to be segmented, the output of the EfficientNet-B0 neural network comprises a target organ image in the medical image to be segmented, the input of the EfficientRes-UNet Plus neural network comprises the target organ image in the medical image to be segmented, and the output of the EfficientRes-UNet Plus neural network comprises a first blood vessel segmentation result.
3. The organ blood vessel segmentation method according to claim 2, further comprising: training the first neural network model, wherein a first loss function used for training the EfficientRes-UNet Plus neural network comprises a cross-entropy loss function, and a second loss function used for training the EfficientRes-UNet Plus neural network is determined based on the cross-entropy loss function and the Tversky loss function.
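A minimal sketch of the Tversky loss and one plausible cross-entropy/Tversky combination follows. The claim does not specify how the two terms are combined, so the equal weighting and the alpha/beta settings below are assumptions (alpha = beta = 0.5 reduces the Tversky index to the Dice coefficient).

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.5, beta=0.5, eps=1e-7):
    """Tversky loss on soft binary masks; alpha weighs false positives
    and beta false negatives."""
    tp = (pred * target).sum()
    fp = (pred * (1 - target)).sum()
    fn = ((1 - pred) * target).sum()
    return 1.0 - tp / (tp + alpha * fp + beta * fn + eps)

def cross_entropy_loss(pred, target, eps=1e-7):
    """Binary cross-entropy with clipping for numerical stability."""
    p = np.clip(pred, eps, 1 - eps)
    return -(target * np.log(p) + (1 - target) * np.log(1 - p)).mean()

def combined_loss(pred, target, weight=0.5):
    """One plausible combination: a weighted sum of the two terms."""
    return weight * cross_entropy_loss(pred, target) + \
        (1 - weight) * tversky_loss(pred, target)
```

Tuning alpha above beta penalizes false positives more heavily, a common choice when thin vessels must not be over-segmented.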
4. The organ blood vessel segmentation method according to claim 1, wherein determining the end points of the respective blood vessel centerlines comprises:
determining, as an end point of a blood vessel centerline, a target pixel that is located at the center of a three-dimensional pixel region of a first preset size and has at most one connected pixel within that region.
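The end-point rule of claim 4 — a centerline voxel whose surrounding region of the first preset size contains at most one other connected voxel — can be sketched as follows. A 3×3×3 window is assumed for the "first preset size"; the patent does not state the actual value.

```python
import numpy as np

def centerline_endpoints(skeleton, size=3):
    """Find end points of a binary 3-D centerline: voxels at the centre of
    a `size`-cubed window that contains at most one other centerline voxel.
    `size` stands in for the patent's unspecified 'first preset size'."""
    pad = size // 2
    padded = np.pad(skeleton.astype(bool), pad)  # zero-pad the border
    endpoints = []
    for z, y, x in zip(*np.nonzero(skeleton)):
        window = padded[z:z + size, y:y + size, x:x + size]
        if window.sum() <= 2:  # the voxel itself plus at most one neighbour
            endpoints.append((z, y, x))
    return endpoints
```

On a one-voxel-thick skeleton this picks out exactly the free ends of each branch, which is where the first-pass segmentation typically truncates thin vessels.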
5. The organ blood vessel segmentation method according to claim 1, wherein performing blood vessel re-segmentation based on the end points of the respective blood vessel centerlines to obtain the second blood vessel segmentation result comprises:
performing blood vessel re-segmentation based on the end points of the respective blood vessel centerlines using a pre-trained second neural network model to obtain the second blood vessel segmentation result.
6. The organ blood vessel segmentation method according to claim 5, wherein the second neural network model comprises a V-Net neural network; the method further comprises:
training the V-Net neural network with the background and the different blood vessel classes as prediction labels and with three-dimensional pixel regions of a second preset size as input, wherein each three-dimensional pixel region of the second preset size is constructed centered on an end point of a blood vessel centerline.
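Constructing the V-Net inputs of claim 6 — regions of a second preset size centred on centerline end points — might look like the sketch below. The patch size of 32 is an assumed stand-in for the unspecified "second preset size", and zero padding at the volume border is an implementation choice of this sketch.

```python
import numpy as np

def extract_patches(volume, endpoints, size=32):
    """Crop a `size`-cubed region around each centerline end point,
    zero-padding at the volume border, to serve as input patches for
    the re-segmentation network."""
    half = size // 2
    padded = np.pad(volume, half)  # constant zero padding on every axis
    patches = []
    for z, y, x in endpoints:
        # After padding, (z, y, x) maps to (z + half, ...), so this slice
        # is centred on the end point.
        patches.append(padded[z:z + size, y:y + size, x:x + size])
    return np.stack(patches)
```

Feeding only these end-point-centred patches to the second network focuses its capacity on exactly the places where the first pass lost thin distal vessels.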
7. An organ blood vessel segmentation apparatus, comprising:
an acquisition module, configured to acquire a medical image to be segmented;
a first segmentation module, configured to extract a target organ image from the medical image and perform initial segmentation on the target organ image to obtain a first blood vessel segmentation result;
a determination module, configured to extract blood vessel centerlines from the first blood vessel segmentation result and determine end points of the respective blood vessel centerlines;
a second segmentation module, configured to perform blood vessel re-segmentation based on the end points of the respective blood vessel centerlines to obtain a second blood vessel segmentation result;
a merging module, configured to merge the first blood vessel segmentation result and the second blood vessel segmentation result to obtain a merged blood vessel segmentation result, wherein merging the first blood vessel segmentation result and the second blood vessel segmentation result comprises overlaying the second blood vessel segmentation result onto the first blood vessel segmentation result;
wherein the target organ comprises a liver, and the merged blood vessel segmentation result comprises a hepatic vein class, a portal vein class, and an inferior vena cava class; the apparatus further comprises:
a post-processing module, configured to: in the hepatic vein class of the merged blood vessel segmentation result, determine first connected domains whose pixel count is smaller than a first preset value, judge one by one whether another prediction label exists around each first connected domain, and change the prediction label of each first connected domain surrounded by another prediction label to that other prediction label; determine a maximum connected domain in the portal vein class of the merged blood vessel segmentation result, judge one by one whether another prediction label exists around each connected domain other than the maximum connected domain, and change the prediction label of each such connected domain to that other prediction label; and remove, in each prediction label, connected domains whose pixel count is lower than a preset value.
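The overlay merge recited in the merging module — the second, patch-refined result overriding the first wherever it predicts a vessel — can be sketched as a label-array overlay. This is a plausible reading of "overlaying"; the patent does not spell out the voxel-level rule.

```python
import numpy as np

def merge_segmentations(first, second):
    """Overlay the second (patch-refined) segmentation onto the first:
    wherever the second pass predicts a vessel label (non-zero), it
    overrides the first pass; elsewhere the first-pass label is kept."""
    merged = first.copy()
    refined = second != 0
    merged[refined] = second[refined]
    return merged
```

Because the second pass only sees end-point-centred patches, this overlay extends the first-pass vessel tree at its truncated tips without disturbing regions the second pass never examined.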
8. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium which, when executed by one or more processors, implements the method of any one of claims 1 to 6.
9. An electronic device, comprising one or more processors and a memory, the memory having stored thereon a computer program which, when executed by the one or more processors, implements the method of any one of claims 1 to 6.
CN202211276231.6A 2022-10-19 2022-10-19 Organ blood vessel segmentation method and device, storage medium and electronic equipment Active CN115359046B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211276231.6A CN115359046B (en) 2022-10-19 2022-10-19 Organ blood vessel segmentation method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN115359046A CN115359046A (en) 2022-11-18
CN115359046B true CN115359046B (en) 2023-03-24

Family

ID=84008772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211276231.6A Active CN115359046B (en) 2022-10-19 2022-10-19 Organ blood vessel segmentation method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN115359046B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111899245A (en) * 2020-07-30 2020-11-06 北京推想科技有限公司 Image segmentation method, image segmentation device, model training method, model training device, electronic equipment and storage medium
CN114820658A (en) * 2022-04-26 2022-07-29 北京深睿博联科技有限责任公司 Hepatic vein and portal vein segmentation method and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6066197B2 (en) * 2013-03-25 2017-01-25 富士フイルム株式会社 Surgery support apparatus, method and program
CN111325759B (en) * 2020-03-13 2024-04-16 上海联影智能医疗科技有限公司 Vessel segmentation method, apparatus, computer device, and readable storage medium
CN112862835A (en) * 2021-01-19 2021-05-28 杭州深睿博联科技有限公司 Coronary vessel segmentation method, device, equipment and computer readable storage medium
CN112927239A (en) * 2021-02-22 2021-06-08 北京安德医智科技有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113379741B (en) * 2021-08-10 2021-11-16 湖南师范大学 Retinal blood vessel segmentation method, device and storage medium based on blood vessel characteristics
CN114565592A (en) * 2021-12-08 2022-05-31 深圳科亚医疗科技有限公司 Method, device and medium for performing blood vessel segmentation on medical image


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant